Photo: https://unsplash.com/photos/MECKPoKJYjM

Fast Federated Learning by Balancing Communication Trade-Offs

Milad Khademi Nori
2 min read · May 26, 2021


In our recent paper published in IEEE Transactions on Communications (https://arxiv.org/abs/2105.11028, https://ieeexplore.ieee.org/document/9439935), we studied the communication efficiency of Federated Learning (FL), which has recently received a lot of attention for large-scale privacy-preserving machine learning. High communication overhead caused by frequent gradient transmissions slows FL down, and two main techniques have been studied in the literature to address it: (i) local update of weights, which characterizes the trade-off between communication and computation, and (ii) gradient compression, which characterizes the trade-off between communication and precision. To the best of our knowledge, studying and balancing these two trade-offs jointly and dynamically, while accounting for their impact on convergence, had remained an open problem, even though doing so promises significantly faster FL.
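To make the two knobs concrete, here is a minimal NumPy sketch (not code from the paper): local_update runs several local SGD steps before communicating, and top_k_sparsify keeps only the largest-magnitude entries of an update. The function names, the plain SGD rule, and the top-k compression are illustrative assumptions, and the model is treated as a flat 1-D weight vector.

```python
import numpy as np

def local_update(weights, data, grad_fn, tau, lr=0.01):
    # Knob 1: run `tau` local SGD steps before communicating
    # (trades extra computation for fewer communication rounds).
    w = weights.copy()
    for _ in range(tau):
        w -= lr * grad_fn(w, data)
    return w

def top_k_sparsify(update, k):
    # Knob 2: keep only the k largest-magnitude entries of the update
    # (trades precision for a smaller transmitted message).
    sparse = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]
    sparse[idx] = update[idx]
    return sparse

# Illustrative client-side round (hypothetical usage):
# w_local = local_update(w_global, client_data, grad_fn, tau=5)
# delta = top_k_sparsify(w_local - w_global, k=100)  # only `delta` is transmitted
```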

In the paper, we first formulated the problem of minimizing the learning error with respect to two variables: the local update coefficients and the sparsity budgets of gradient compression, which characterize the trade-offs between communication and computation/precision, respectively. We then derived an upper bound on the learning error within a given wall-clock time, taking into account the interdependency between the two variables. Based on this theoretical analysis, we proposed an enhanced FL scheme, namely Fast FL (FFL), that jointly and dynamically adjusts the two variables to minimize the learning error. Finally, we demonstrated that FFL consistently achieves higher accuracy, faster, than comparable schemes in the literature.
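Building on the helpers in the sketch above, the following sketch illustrates what jointly and dynamically adjusting the two variables could look like within a single round. The function ffl_round, the placeholder choose_knobs, and the plain averaging of sparse updates are illustrative assumptions, not the exact FFL algorithm; in the paper, the per-round values of the two variables are chosen by minimizing the derived error bound.

```python
import numpy as np

def ffl_round(w_global, client_datasets, grad_fn, tau_t, k_t, lr=0.01):
    # One illustrative FFL-style round: every client runs tau_t local SGD steps,
    # sends back a top-k_t sparsified update, and the server averages the updates.
    # (local_update and top_k_sparsify are the helpers from the sketch above.)
    deltas = []
    for data in client_datasets:
        w_local = local_update(w_global, data, grad_fn, tau_t, lr)
        deltas.append(top_k_sparsify(w_local - w_global, k_t))
    return w_global + np.mean(deltas, axis=0)

# Hypothetical training loop: in the paper, (tau_t, k_t) are selected each round
# from the learning-error bound; choose_knobs here is only a placeholder name.
# for t in range(num_rounds):
#     tau_t, k_t = choose_knobs(t)
#     w_global = ffl_round(w_global, client_datasets, grad_fn, tau_t, k_t)
```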

For details, please refer to either the arXiv preprint (https://arxiv.org/abs/2105.11028) or the IEEE Xplore version (https://ieeexplore.ieee.org/document/9439935).
