FedDec: Peer-to-peer Aided Federated Learning
Abstract
Federated learning (FL) has enabled training machine learning models that exploit the data of multiple agents without compromising privacy. However, FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server, which are nonetheless distinctive characteristics of this framework. While much of the literature has tackled these weaknesses using different tools, only a few works have considered inter-agent communication to improve FL's performance. In this work, we present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging between the local gradient updates of FL. We analyze the convergence of FedDec and show that inter-agent communication alleviates the negative impact of infrequent communication rounds with the server by reducing the dependence on the number of local updates H from O(H^2) to O(H). Furthermore, our analysis reveals that the improved term in the bound vanishes quickly as the network becomes more connected. We confirm the predictions of our theory in numerical simulations, where we show that FedDec converges faster than FedAvg, and that the gains are greater as either H or the connectivity of the network increases.
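To make the interleaving concrete, below is a minimal sketch of one FedDec-style communication round, assuming N agents holding local models, a doubly-stochastic mixing matrix W over the peer-to-peer graph, and a per-agent stochastic gradient oracle. The function and variable names (feddec_round, grad, targets) are illustrative assumptions, not the authors' reference implementation, and partial device participation is not modeled.

```python
# Hedged sketch of a FedDec-style round (illustrative, not the paper's code).
# Assumptions: x is an (N, d) array of agent models; W is a doubly-stochastic
# mixing matrix over the peer-to-peer graph; grad(i, xi) returns a stochastic
# gradient of agent i's local loss; H local steps run between server averages.
import numpy as np

def feddec_round(x, W, grad, H, lr):
    """H local SGD steps, each interleaved with peer-to-peer parameter
    averaging over the network, followed by an (infrequent) server average."""
    N, _ = x.shape
    for _ in range(H):
        # Local stochastic gradient step on every agent.
        x = x - lr * np.stack([grad(i, x[i]) for i in range(N)])
        # Peer-to-peer parameter averaging with neighbors (x <- W x).
        x = W @ x
    # Server round: average all agents and broadcast the result back.
    x_server = x.mean(axis=0)
    return np.tile(x_server, (N, 1))

# Toy usage: 4 agents on a ring graph with heterogeneous quadratic losses.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, d, H, lr = 4, 5, 10, 0.1
    targets = rng.normal(size=(N, d))        # heterogeneous local optima
    grad = lambda i, xi: xi - targets[i]     # gradient of 0.5*||xi - t_i||^2
    W = np.array([[0.50, 0.25, 0.00, 0.25],  # ring-graph mixing matrix
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    x = np.zeros((N, d))
    for _ in range(50):
        x = feddec_round(x, W, grad, H, lr)
    print("distance to average optimum:",
          np.linalg.norm(x[0] - targets.mean(axis=0)))
```

Dropping the `x = W @ x` line recovers plain FedAvg-style local updates, which is the comparison the paper's simulations draw.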