Error Feedback Reloaded at ICLR 2024.


Our paper, titled “Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants”, has been accepted for presentation at the International Conference on Learning Representations (ICLR 2024).

The paper is available on arXiv: https://arxiv.org/abs/2402.10774.

Collaborating with my co-authors, Elnur Gasanov and Peter Richtárik, has been an immensely rewarding experience. This research would not have been possible without the hard work of the entire team.

We look forward to sharing our findings and insights at ICLR 2024!

Abstract

Error Feedback (EF) is a highly popular and immensely effective mechanism for fixing convergence issues that arise in distributed training methods (such as distributed GD or SGD) when they are enhanced with greedy communication compression techniques such as TopK. While EF was proposed almost a decade ago (Seide et al., 2014), and despite concentrated effort by the community to advance the theoretical understanding of this mechanism, there is still a lot to explore.
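For readers new to the mechanism, here is a minimal NumPy sketch of greedy TopK compression and the classic error-feedback loop it necessitates. This is illustrative code, not code from the paper; the function names (`top_k`, `ef14_step`) and the single-step API are our own.

```python
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Greedy TopK compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]  # indices of the k largest |v_j|
    out[idx] = v[idx]
    return out

def ef14_step(x, grads, errors, lr, k):
    """One round of classic error feedback (in the style of Seide et al., 2014):
    each worker compresses its error-corrected step and remembers what the
    compressor discarded, feeding it back in the next round."""
    msgs = []
    for i, g in enumerate(grads):       # grads[i]: worker i's gradient at x
        p = lr * g + errors[i]          # add back previously discarded mass
        c = top_k(p, k)                 # greedy compression of the corrected step
        errors[i] = p - c               # store the new compression error
        msgs.append(c)
    return x - np.mean(msgs, axis=0), errors
```

Without the `errors` buffer, naively compressing the gradients with TopK can stall or diverge; the feedback loop is what restores convergence.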

In this work we study a modern form of error feedback called EF21 (Richtárik et al., 2021) which offers the currently best-known theoretical guarantees, under the weakest assumptions, and also works well in practice. In particular, while the theoretical communication complexity of EF21 depends on the quadratic mean of certain smoothness parameters, we improve this dependence to their arithmetic mean, which is always smaller, and can be substantially smaller, especially in heterogeneous data regimes.
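To make the improvement concrete: for nonnegative smoothness constants the quadratic mean always dominates the arithmetic mean, and the gap grows with heterogeneity. A worked inequality, in notation we adopt here (not necessarily the paper's), where L_i denotes the smoothness constant of worker i's local loss f_i:

```latex
% Quadratic mean vs. arithmetic mean of the smoothness constants L_1, ..., L_n.
\[
L_{\mathrm{QM}} := \left( \frac{1}{n} \sum_{i=1}^{n} L_i^2 \right)^{1/2}
\;\ge\;
L_{\mathrm{AM}} := \frac{1}{n} \sum_{i=1}^{n} L_i ,
\]
% with equality iff L_1 = ... = L_n. Heterogeneous example: if
% L_1 = ... = L_{n-1} = 0 and L_n = L, then L_QM = L / sqrt(n) while
% L_AM = L / n, so the two means differ by a factor of sqrt(n).
```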

We take the reader on a journey of our discovery process. Starting with the idea of applying EF21 to an equivalent reformulation of the underlying problem which (unfortunately) requires (often impractical) machine cloning, we continue to the discovery of a new weighted version of EF21 which can (fortunately) be executed without any cloning, and finally circle back to an improved analysis of the original EF21 method. While this development applies to the simplest form of EF21, our approach naturally extends to more elaborate variants involving stochastic gradients and partial participation. Further, our technique improves the best-known theory of EF21 in the rare features regime (Richtárik et al., 2023). Finally, we validate our theoretical findings with suitable experiments.
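For context, here is a hedged sketch of the EF21 recursion the analysis concerns, as we understand it from Richtárik et al. (2021). It reuses the `top_k` compressor from the sketch above; `grad_fns` (per-worker gradient oracles), `lr`, and `k` are illustrative parameters of our own choosing.

```python
import numpy as np

def ef21_step(x, g_hat, grad_fns, lr, k):
    """One round of EF21: each worker i maintains a gradient estimate g_hat[i]
    and communicates only the compressed *difference* between its fresh
    gradient and that estimate (no error buffer is needed)."""
    x_new = x - lr * np.mean(g_hat, axis=0)        # step with the averaged estimates
    for i, grad in enumerate(grad_fns):
        delta = top_k(grad(x_new) - g_hat[i], k)   # compress the innovation
        g_hat[i] = g_hat[i] + delta                # server applies the same update
    return x_new, g_hat
```

Unlike the classic loop, the compressed message here is a correction to a running gradient estimate rather than a corrected step, which is what enables the stronger guarantees the paper builds on.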

Acknowledgements

The work of all authors was supported by the KAUST Baseline Research Scheme (KAUST BRF). The work of Peter Richtárik and Konstantin Burlachenko was also supported by the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI). We wish to thank Babis Kostopoulos, a VSRP intern at KAUST who spent part of Summer 2023 working on this project, for helping with some parts of it. We offered Babis co-authorship, but he declined.

Written on February 19, 2024