Publications

LAFD: Local-differentially Private and Asynchronous Federated Learning with Direct Feedback Alignment

Published

IEEE Access

Date

2023.08.14

Abstract

Federated learning is a promising approach for training machine learning models on distributed data from multiple mobile devices. However, privacy concerns arise when sensitive data are used for training. In this paper, we discuss the challenges of applying local differential privacy to federated learning, which are compounded by the limited resources of mobile clients and the asynchronicity of federated learning. To address these challenges, we propose a framework called LAFD, which stands for Local-differentially Private and Asynchronous Federated Learning with Direct Feedback Alignment. LAFD consists of two parts: (a) LFL-DFALS: Local differentially private Federated Learning with Direct Feedback Alignment and Layer Sampling, and (b) AFL-LMTGR: Asynchronous Federated Learning with Local Model Training and Gradient Rebalancing. LFL-DFALS effectively reduces the computation and communication costs of federated learning via direct feedback alignment and layer sampling during training. AFL-LMTGR handles the problem of stragglers via local model training and gradient rebalancing: local model training lets participants keep training asynchronously, while gradient rebalancing mitigates the gap between the local models and the aggregated model. We demonstrate the performance of LFL-DFALS and AFL-LMTGR through experiments on multivariate and image datasets.
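
The abstract names direct feedback alignment (DFA) and local differential privacy as LAFD's building blocks. Below is a minimal, illustrative sketch of how a DFA-based local update with noise added to the transmitted update might look in principle. It is not the paper's LFL-DFALS algorithm; the network sizes, function names, and Gaussian noise scale are assumptions for illustration only.

```python
# Illustrative sketch only: generic DFA update with noisy updates,
# not the LFL-DFALS algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> h -> y_hat (sizes are arbitrary assumptions)
d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (d_hidden, d_in))
W2 = rng.normal(0, 0.1, (d_out, d_hidden))

# Fixed random feedback matrix: DFA projects the output error directly to
# the hidden layer instead of backpropagating through W2's transpose.
B1 = rng.normal(0, 0.1, (d_hidden, d_out))

def relu(z):
    return np.maximum(z, 0.0)

def dfa_local_update(x, y, lr=0.05, noise_std=0.01):
    """One DFA training step; returns noisy weight updates for the server."""
    # Forward pass
    a1 = W1 @ x
    h = relu(a1)
    y_hat = W2 @ h

    # Output error (squared-error loss)
    e = y_hat - y

    # DFA: the hidden-layer error signal uses the fixed random matrix B1,
    # not the true gradient path through W2.
    delta1 = (B1 @ e) * (a1 > 0)

    # Weight updates
    dW2 = np.outer(e, h)
    dW1 = np.outer(delta1, x)

    # Add Gaussian noise before communicating the update
    # (a stand-in for a local differential privacy mechanism).
    dW1 += rng.normal(0, noise_std, dW1.shape)
    dW2 += rng.normal(0, noise_std, dW2.shape)

    return -lr * dW1, -lr * dW2

# Example local step on one synthetic sample
x = rng.normal(size=d_in)
y = rng.normal(size=d_out)
upd1, upd2 = dfa_local_update(x, y)
W1 += upd1
W2 += upd2
```

Because the feedback matrices are fixed and random, the hidden-layer error signals do not depend on the downstream weights, which is what lets DFA sidestep the full backward pass of standard backpropagation.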