Kavli Affiliate: Jia Liu
| First 5 Authors: Haibo Yang, Peiwen Qiu, Prashant Khanduri, Minghong Fang, Jia Liu
| Summary:
Existing works in federated learning (FL) often assume an ideal system with
either full client or uniformly distributed client participation. However, in
practice, it has been observed that some clients may never participate in FL
training (aka incomplete client participation) due to a myriad of system
heterogeneity factors. A popular approach to mitigating the impacts of incomplete
client participation is the server-assisted federated learning (SA-FL)
framework, where the server is equipped with an auxiliary dataset. However,
although SA-FL has been empirically shown to be effective in addressing the
incomplete client participation problem, a theoretical understanding of SA-FL
is still lacking. Meanwhile, the ramifications of incomplete client
participation in conventional FL are also poorly understood. These theoretical
gaps motivate us to rigorously investigate SA-FL. Toward this end, we first
show that conventional FL is {\em not} PAC-learnable under incomplete client
participation in the worst case. Then, we show that the PAC-learnability of FL
with incomplete client participation can indeed be revived by SA-FL, which
theoretically justifies the use of SA-FL for the first time. Lastly, to provide
practical guidance for SA-FL training under {\em incomplete client
participation}, we propose the $\mathsf{SAFARI}$ (server-assisted federated
averaging) algorithm, which enjoys the same linear convergence speedup
guarantees as classic FL under ideal client participation assumptions, making
it the first SA-FL algorithm with convergence guarantees. Extensive experiments
on different datasets show that $\mathsf{SAFARI}$ significantly improves
performance under incomplete client participation.
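
The abstract only describes SA-FL at a high level (a server holding an auxiliary dataset alongside federated averaging). As a rough illustration of that idea, and not the paper's actual $\mathsf{SAFARI}$ procedure, the following minimal Python sketch mixes a server-side update computed on the auxiliary dataset into the usual average of client updates; the mixing weight server_weight, the local SGD routine, and all names are assumptions made purely for illustration.

# Minimal sketch of the server-assisted federated-averaging idea (NOT the
# paper's exact SAFARI algorithm): participating clients send local updates,
# and the server additionally computes an update on its own auxiliary dataset
# to compensate for clients that never participate. The mixing weight
# `server_weight` and all function names are illustrative assumptions.
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    # A few SGD steps on squared loss for a linear model (illustrative only).
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def server_assisted_round(w, client_data, server_data, server_weight=0.2):
    # One communication round: average client updates (classic FedAvg step),
    # then mix in a server-side update computed on the auxiliary dataset.
    client_models = [local_sgd(w, X, y) for X, y in client_data]
    w_clients = np.mean(client_models, axis=0)
    w_server = local_sgd(w, *server_data)
    return (1 - server_weight) * w_clients + server_weight * w_server

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([1.0, -2.0])

    def make_dataset(n):
        X = rng.normal(size=(n, 2))
        return X, X @ w_true + 0.01 * rng.normal(size=n)

    # Only a small, fixed subset of clients ever participates
    # (incomplete client participation); the server holds auxiliary data.
    clients = [make_dataset(50) for _ in range(3)]
    server = make_dataset(200)

    w = np.zeros(2)
    for _ in range(20):
        w = server_assisted_round(w, clients, server)
    print("estimated weights:", w)

In this toy run the server's auxiliary data simply regularizes the averaged client model toward the global objective; how the weighting and convergence guarantees are actually handled is specified in the paper itself.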
| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=3