

Poster

Communication-Efficient Federated Learning with Accelerated Client Gradient

Geeho Kim · Jinkyu Kim · Bohyung Han

Arch 4A-E Poster #267
[ Paper PDF ] [ Poster ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets. This tendency is aggravated when the client participation ratio is low, since the information collected from the clients exhibits large variations. To address this challenge, we propose a simple but effective federated learning framework that improves consistency across clients and facilitates the convergence of the server model. This is achieved by having the server broadcast a global model with an accelerated gradient. This strategy enables the proposed approach to convey the projective global update information to participants effectively, without requiring additional client memory for storing previous models or extra communication costs. We also regularize local updates by aligning each client with the overshot global model to reduce bias and improve the stability of our algorithm. We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in accuracy and communication efficiency compared to state-of-the-art methods, especially with low client participation rates. We plan to release our code to facilitate the reproduction of our work.
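The following is a minimal NumPy sketch of the mechanism as described in the abstract: the server broadcasts a momentum-overshot global model, and clients regularize their local updates toward that broadcast model. This is an illustrative assumption, not the authors' released implementation; the coefficients (lam, beta, lr), the toy linear-regression clients, and all function names are hypothetical.

```python
# Sketch of the abstract's idea (assumed details), not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous clients: each holds a linear-regression problem with a
# slightly shifted optimum, mimicking non-IID data.
def make_client(shift):
    A = rng.normal(size=(32, 10))
    w_star = np.ones(10) + shift
    y = A @ w_star + 0.01 * rng.normal(size=32)
    return A, y

clients = [make_client(shift) for shift in rng.normal(scale=0.5, size=20)]

def local_update(A, y, w_init, w_anchor, beta=0.01, lr=0.01, steps=10):
    """Local SGD on the client loss plus a proximal term that keeps the local
    model aligned with the broadcast (overshot) global model w_anchor."""
    w = w_init.copy()
    for _ in range(steps):
        grad = A.T @ (A @ w - y) / len(y) + beta * (w - w_anchor)
        w -= lr * grad
    return w

# Server loop: keep a global momentum buffer and broadcast the accelerated
# (overshot) model instead of the plain current model.
w_global = np.zeros(10)
momentum = np.zeros(10)
lam = 0.85            # assumed acceleration / momentum coefficient
participation = 0.2   # fraction of clients sampled per round (low participation)

for rnd in range(50):
    w_send = w_global + lam * momentum        # overshot model sent to clients
    sampled = rng.choice(len(clients),
                         size=max(1, int(participation * len(clients))),
                         replace=False)
    deltas = []
    for i in sampled:
        A, y = clients[i]
        w_local = local_update(A, y, w_init=w_send, w_anchor=w_send)
        deltas.append(w_local - w_send)       # client communicates only its update
    avg_delta = np.mean(deltas, axis=0)
    momentum = lam * momentum + avg_delta     # fold aggregated update into momentum
    w_global = w_global + momentum            # server moves along the accelerated direction
```

Because the client starts from, and is regularized toward, the same broadcast model it receives, this sketch adds no per-client state across rounds and no extra communication beyond the usual model exchange, which matches the abstract's claim.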
