NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data

Kavli Affiliate: Jia Liu

| First 5 Authors: Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia Liu

| Summary:

Federated learning (FL) has received a surge of interest in recent years
thanks to its benefits in data privacy protection, efficient communication, and
parallel data processing. Also, with appropriate algorithmic designs, one could
achieve the desirable linear speedup effect for convergence in FL. However,
most existing works on FL are limited to systems with i.i.d. data and
centralized parameter servers, and results on decentralized FL with
heterogeneous datasets remain limited. Moreover, whether or not the linear
speedup for convergence is achievable under fully decentralized FL with data
heterogeneity remains an open question. In this paper, we address these
challenges by proposing a new algorithm, called NET-FLEET, for fully
decentralized FL systems with data heterogeneity. The key idea of our algorithm
is to enhance the local update scheme in FL (originally intended for
communication efficiency) by incorporating a recursive gradient correction
technique to handle heterogeneous datasets. We show that, under appropriate
parameter settings, the proposed NET-FLEET algorithm achieves a linear speedup
for convergence. We further conduct extensive numerical experiments to evaluate
the performance of the proposed NET-FLEET algorithm and verify our theoretical
findings.
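
The recursive gradient correction described above is closely related to gradient-tracking methods for decentralized optimization. The following minimal Python/NumPy sketch illustrates that general idea on a toy decentralized least-squares problem; the local objectives, the ring-topology mixing matrix W, the step size, and the single local step per round are all illustrative assumptions, not the paper's exact NET-FLEET updates (which also involve multiple local steps for communication efficiency).

import numpy as np

# Toy sketch of decentralized optimization with a gradient-tracking
# (recursive gradient correction) step. Illustrative only: the losses,
# mixing matrix, and step size are assumptions, not NET-FLEET itself.

rng = np.random.default_rng(0)
n, d = 5, 10  # number of workers, model dimension

# Heterogeneous local objectives: f_i(x) = 0.5 * ||A_i x - b_i||^2
A = [rng.normal(size=(20, d)) for _ in range(n)]
b = [rng.normal(size=20) for _ in range(n)]

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring topology (an assumption)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

eta = 1e-3
x = np.zeros((n, d))                                    # local models
y = np.array([local_grad(i, x[i]) for i in range(n)])   # gradient trackers
g_prev = y.copy()

for t in range(500):
    # Consensus with neighbors, then descend along the tracked direction
    x = W @ x - eta * y
    g = np.array([local_grad(i, x[i]) for i in range(n)])
    # Recursive correction: each tracker mixes neighbors' trackers and adds
    # the change in its local gradient, so y_i tracks the global average
    # gradient despite data heterogeneity across workers
    y = W @ y + g - g_prev
    g_prev = g

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
avg_g = np.mean([local_grad(i, x.mean(axis=0)) for i in range(n)], axis=0)
print("avg gradient norm:", np.linalg.norm(avg_g))

Without the correction term (i.e., plain decentralized SGD), heterogeneous local gradients bias each worker toward its own data; the tracking variable y is what removes this bias and makes a linear speedup analysis plausible.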

| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=10
