Tackling the Unlimited Staleness in Federated Learning with Intertwined Data and Device Heterogeneities

Kavli Affiliate: Wei Gao

| First 5 Authors: Haoming Wang, Wei Gao

| Summary:

The efficiency of Federated Learning (FL) is often affected by both data and
device heterogeneities. Data heterogeneity refers to the differing data
distributions across clients. Device heterogeneity refers to clients' varying
latencies in uploading their local model updates, caused by heterogeneous local
hardware resources, and it leads to the problem of staleness when addressed by
asynchronous FL. Traditional schemes for mitigating the impact of staleness
treat data and device heterogeneities as two separate and independent aspects
of FL, but this assumption is unrealistic in many practical FL scenarios where
the two are intertwined. In such cases, traditional weighted-aggregation
schemes in FL have proven ineffective, and a better approach is to convert a
stale model update into a non-stale one. In this paper, we present a new FL
framework that leverages the gradient inversion technique for this conversion,
thereby efficiently tackling unlimited staleness in clients' model updates. Our
basic idea is to use gradient inversion to estimate clients' local training
data from their uploaded stale model updates, and then use these estimations to
compute non-stale client model updates. This design addresses the possible drop
in data quality caused by gradient inversion while still preserving the
clients' local data privacy. We compared our approach with existing FL
strategies on mainstream datasets and models, and experimental results
demonstrate that, when tackling unlimited staleness, our approach can
significantly improve the trained model accuracy by up to 20% and speed up the
FL training progress by up to 35%.
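
The sketch below is a minimal, hedged illustration of the core idea as described in the summary, not the authors' released implementation: a DLG-style gradient-matching reconstruction estimates client data from a stale update, and the estimated data is then used to compute a fresh update against the current global model. The model architecture, data shapes, optimizer settings, and the use of stale gradients as the uploaded update are all illustrative assumptions.

```python
# Hypothetical sketch: convert a stale client update into a non-stale one via
# gradient inversion (gradient matching). All names and hyperparameters are
# assumptions for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

model_stale = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))    # global model at the stale round
model_current = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # current global model
loss_fn = nn.CrossEntropyLoss()

# Simulate a stale client update: gradients of the stale model on the client's
# (server-unseen) local data.
x_true = torch.randn(8, 1, 28, 28)
y_true = torch.randint(0, 10, (8,))
stale_grads = torch.autograd.grad(loss_fn(model_stale(x_true), y_true),
                                  model_stale.parameters())

# Step 1: gradient inversion -- optimize dummy data and soft labels so that the
# stale model's gradients on them match the uploaded stale update.
x_est = torch.randn(8, 1, 28, 28, requires_grad=True)
y_est = torch.randn(8, 10, requires_grad=True)
opt = torch.optim.Adam([x_est, y_est], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model_stale(x_est), y_est.softmax(dim=1))
    grads = torch.autograd.grad(loss, model_stale.parameters(), create_graph=True)
    match = sum(((g - sg) ** 2).sum() for g, sg in zip(grads, stale_grads))
    match.backward()
    opt.step()

# Step 2: compute a non-stale update by evaluating the *current* global model on
# the estimated data; the server can aggregate this instead of the stale update.
fresh_loss = loss_fn(model_current(x_est.detach()), y_est.detach().softmax(dim=1))
fresh_grads = torch.autograd.grad(fresh_loss, model_current.parameters())
```

Because only the reconstructed estimates (derived from the already-uploaded update) are used on the server side, no additional raw client data leaves the device, which is consistent with the privacy claim in the summary.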

| Search Query: ArXiv Query: search_query=au:"Wei Gao"&id_list=&start=0&max_results=3
