Kavli Affiliate: Xiang Zhang
| First 5 Authors: Xiang Zhang, Xiang Zhang
| Summary:
In decentralized federated learning (FL), multiple clients collaboratively
learn a shared machine learning (ML) model by leveraging their privately held
datasets distributed across the network, through the interactive exchange of
intermediate model updates. To ensure data security, cryptographic techniques
are commonly employed to protect model updates during aggregation. Despite
growing interest in secure aggregation, existing works predominantly focus on
protocol design and computational guarantees, with limited understanding of the
fundamental information-theoretic limits of such systems. Moreover, optimal
bounds on communication and key usage remain unknown in decentralized settings,
where no central aggregator is available. Motivated by these gaps, we study the
problem of decentralized secure aggregation (DSA) from an information-theoretic
perspective. Specifically, we consider a network of $K$ fully connected users,
each holding a private input (an abstraction of its local training data), who
aim to securely compute the sum of all inputs. The security constraint requires
that no user learns anything beyond the input sum, even when colluding with up
to $T$ other users. We characterize the optimal rate region, which specifies
the minimum achievable communication and secret key rates for DSA. In
particular, we show that to securely compute one symbol of the desired input
sum, each user must (i) transmit at least one symbol to the other users and
(ii) hold at least one symbol of secret key, while (iii) all users collectively
must hold no fewer than $K - 1$ independent key symbols. Our results establish the
fundamental performance limits of DSA, providing insights for the design of
provably secure and communication-efficient protocols in distributed learning
systems.
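
Stated as a rate region (with $R$, $R_Z$, and $R_{Z_\Sigma}$ denoting the
per-user communication rate, the per-user secret key rate, and the total key
rate, all normalized by the length of the input sum; this shorthand is ours,
not notation from the paper), the three bounds above read

$$ R \ge 1, \qquad R_Z \ge 1, \qquad R_{Z_\Sigma} \ge K - 1, $$

and since the paper characterizes the optimal rate region, these bounds are
also simultaneously achievable.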
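
For intuition, the following is a minimal Python sketch of one classical way
to realize the aggregation step: pairwise additive masking over a finite
field, where the masks cancel in the sum. All names here (Q, pairwise_keys,
masked_share, dsa_round) are illustrative, and this is not the paper's
protocol:

    import secrets

    Q = 2**31 - 1  # prime modulus; the field size is an illustrative choice

    def pairwise_keys(K):
        """Sample one shared key symbol per unordered user pair.

        In a real deployment these would come from pairwise key agreement;
        note this scheme spends K*(K-1)/2 key symbols in total.
        """
        s = [[0] * K for _ in range(K)]
        for i in range(K):
            for j in range(i + 1, K):
                key = secrets.randbelow(Q)
                s[i][j] = key
                s[j][i] = key
        return s

    def masked_share(i, x_i, s, K):
        """User i's broadcast: its input plus signed pairwise masks.

        Each mask is added by one endpoint of the pair and subtracted by
        the other, so all masks cancel in the sum and only the aggregate
        of the inputs is revealed.
        """
        y = x_i
        for j in range(K):
            if j > i:
                y = (y + s[i][j]) % Q
            elif j < i:
                y = (y - s[i][j]) % Q
        return y

    def dsa_round(inputs):
        """One aggregation round: every user broadcasts a masked share,
        and each user locally sums all shares to recover the input sum."""
        K = len(inputs)
        s = pairwise_keys(K)
        shares = [masked_share(i, x, s, K) for i, x in enumerate(inputs)]
        return sum(shares) % Q

    inputs = [7, 13, 5, 42]
    assert dsa_round(inputs) == sum(inputs) % Q

Against colluders, each honest user's input stays masked by at least one key
shared with another honest user, so pairwise masking tolerates up to
$T = K - 2$ colluding users, though at a higher total key rate than the
$K - 1$ optimum established in the paper.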
| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3