Faster Rates for Compressed Federated Learning with Client-Variance Reduction
Our paper “Faster Rates for Compressed Federated Learning with Client-Variance Reduction” is now out.
- The arXiv link for the paper: https://arxiv.org/abs/2112.13097
- The DeepAI link for the paper: https://deepai.org/publication/faster-rates-for-compressed-federated-learning-with-client-variance-reduction
It was a pleasure to work with my collaborators Haoyu Zhao from Princeton University, and Zhize Li and Prof. Peter Richtarik from King Abdullah University of Science and Technology (KAUST).
We provide rigorous theory and extensive practical experiments to highlight the benefits of our methods. On the practical side, we compare several state-of-the-art Federated Learning (FL) optimization algorithms, including both their theoretical parameters and the tunable parameters that control the optimizers' behavior. The experiments cover distributed binary classification with convex and nonconvex logistic regression, and training a ResNet-18 image classifier on the CIFAR-10 dataset.
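For concreteness, a common way to make logistic regression nonconvex in this line of work is to add a nonconvex regularizer; the sketch below shows this typical formulation (an assumption on my part, since the exact regularizer used in the paper may differ):

```latex
% Regularized logistic regression for binary classification.
% Assumption: this nonconvex regularizer is the one commonly used in
% related work; the paper's exact choice may differ.
f(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} \log\!\left(1 + \exp\!\left(-b_i \, a_i^{\top} x\right)\right)
\;+\; \lambda \sum_{j=1}^{d} \frac{x_j^2}{1 + x_j^2}
```

Here $a_i \in \mathbb{R}^d$ and $b_i \in \{-1, +1\}$ are the features and label of the $i$-th training example, and $\lambda > 0$ controls the regularization strength; the second term is nonconvex, which makes the overall problem nonconvex.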
Our COFIG algorithm achieves excellent results in these fair comparisons.
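To give a flavor of the communication-compression side, here is a minimal sketch of an unbiased rand-k sparsification compressor, the kind of operator that compressed FL methods of this family build on. This is illustrative only, not the paper's exact algorithm:

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased rand-k compressor: keep k random coordinates, scale by d/k.

    E[rand_k(x)] = x, so the compressed message can stand in for the full
    vector in expectation while transmitting only k of the d coordinates.
    """
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)  # coordinates to transmit
    out[idx] = (d / k) * x[idx]                 # rescale to keep unbiasedness
    return out

# Usage: compress a client's local gradient before sending it to the server.
rng = np.random.default_rng(0)
g = rng.standard_normal(10)       # a stand-in for a local gradient
print(rand_k(g, k=3, rng=rng))    # sparse, unbiased estimate of g
```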
The experimental part of the paper was carried out in FL_PyTorch, an advanced research simulator for Federated Learning.