Unlocking FedNL at Rising Stars AI Symposium 2024
“Unlocking FedNL: Self-Contained Compute-Optimized Implementation” will be presented at KAUST Rising Stars AI Symposium 2024.
I am glad to have the opportunity to present our recent work "Unlocking FedNL: Self-Contained Compute-Optimized Implementation" at the Rising Stars AI Symposium 2024.
Abstract. Federated Learning (FL) is an emerging paradigm that enables a possibly huge number of intelligent agents to collaboratively train Machine Learning (ML) models in a distributed manner, eliminating the need for sharing their local data. The recent work (Safaryan et al., 2021) introduces a family of Federated Newton Learn (FedNL) algorithms, marking a significant step towards applying second-order methods to FL and large-scale optimization. However, the reference FedNL prototype exhibits three practical drawbacks:
- It requires \(4.8\) hours to run a single experiment on a server-grade workstation.
- The prototype supports only a single node.
- The prototypes of the FedNL algorithm family were created in the scripting language Python, which makes integrating such an implementation into resource-constrained ML applications challenging.
To bridge the gap between theory and practice, we present a self-contained implementation of FedNL, FedNL-LS, and FedNL-PP for single-node and multi-node scenarios. Our work resolves the aforementioned issues as follows:
A) Our implementation reduces the wall-clock time by a factor of \(1000\) in single-node simulation.
B) We do not depend on any third-party computation or data processing frameworks. We tested our implementation on [x86-64, AArch64] x [macOS, Linux, Windows]. In principle, it can run on any POSIX (IEEE 1003) or Microsoft Windows API compatible operating system with an available ISO/IEC 14882 C++20 compiler. For multi-node settings, we do not require building or using any specific middleware communication library in your OS; your hardware and OS software stack only needs to support TCP/IPv4 or TCP/IPv6 via the standard Berkeley Sockets API (see the first sketch after this list).
C) As part of the project, we propose two practice-oriented compressors for FedNL (see the second sketch after this list):
- An adaptive contractive compressor, TopLEK, built on top of the TopK compressor.
- A cache-aware compressor, RandSeqK, built on top of the RandK randomized sparsification compressor.
D) Finally, FedNL outperforms alternatives employed for training logistic regression:
- In a single-node setting: CVXPY (Diamond & Boyd, 2016);
- In a multi-node setting: Apache Spark (Meng et al., 2016) and Ray/scikit-learn (Moritz et al., 2018).
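For the multi-node requirement in (B), the only communication primitive we assume is a plain TCP connection. Below is a minimal sketch of such a connection via the Berkeley Sockets API in C++ (POSIX flavor; on Windows the same calls are available through Winsock2 after `WSAStartup()`). The host address and port here are hypothetical placeholders, not values from our project:

```cpp
// Minimal TCP/IPv4 client connection via the Berkeley Sockets API (POSIX flavor).
// On Windows the equivalent calls live in Winsock2 and require WSAStartup() first.
#include <arpa/inet.h>   // inet_pton, htons
#include <netinet/in.h>  // sockaddr_in
#include <sys/socket.h>  // socket, connect
#include <unistd.h>      // close
#include <cstdio>        // perror

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   // TCP over IPv4
    if (fd < 0) { perror("socket"); return 1; }

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port   = htons(9090);                       // hypothetical master port
    inet_pton(AF_INET, "192.168.0.1", &server.sin_addr);   // hypothetical master address

    if (connect(fd, reinterpret_cast<sockaddr*>(&server), sizeof(server)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }
    // ... exchange (compressed) updates between master and clients here ...
    close(fd);
    return 0;
}
```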
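To make the compressors in (C) concrete, here is a minimal C++ sketch contrasting the classic RandK sparsifier with a contiguous-block variant in the spirit of cache-aware RandSeqK. The block construction, function names, and scaling shown here are illustrative assumptions rather than the exact code of our project; consult the paper for the precise definitions:

```cpp
// Sketch: RandK vs. a contiguous-block sparsifier in the spirit of RandSeqK.
// ASSUMPTION: RandSeqK is modeled as "take K consecutive coordinates starting
// at a random offset, wrapping around". The d/K scaling is the standard one
// that makes RandK-type sparsifiers unbiased.
#include <algorithm>  // std::shuffle
#include <cstddef>
#include <random>
#include <vector>

// Classic RandK: K uniformly random coordinates, scattered in memory.
std::vector<double> randk(const std::vector<double>& x, std::size_t k, std::mt19937& gen) {
    const std::size_t d = x.size();
    std::vector<double> out(d, 0.0);
    std::vector<std::size_t> idx(d);
    for (std::size_t i = 0; i < d; ++i) idx[i] = i;
    std::shuffle(idx.begin(), idx.end(), gen);                 // random index set
    for (std::size_t j = 0; j < k; ++j)
        out[idx[j]] = x[idx[j]] * static_cast<double>(d) / k;  // unbiased scaling
    return out;
}

// Cache-aware variant: one random offset, then K consecutive reads.
std::vector<double> randseqk(const std::vector<double>& x, std::size_t k, std::mt19937& gen) {
    const std::size_t d = x.size();
    std::vector<double> out(d, 0.0);
    std::uniform_int_distribution<std::size_t> start(0, d - 1);
    const std::size_t s = start(gen);
    for (std::size_t j = 0; j < k; ++j) {
        const std::size_t i = (s + j) % d;               // contiguous, wraps at the end
        out[i] = x[i] * static_cast<double>(d) / k;      // same unbiased scaling
    }
    return out;
}
```

The point of the contiguous variant is memory locality: RandK touches K scattered cache lines, while the block variant reads one contiguous region and needs only a single random draw, which is what makes a RandSeqK-style compressor cache-friendly.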
Scope of our Work. We believe our work will be of interest to the audience working in ML in general (and in FL in particular), because half of the established principles are general enough to be valuable whenever a theoretically compelling ML algorithm requires a strong implementation. I am glad that (almost all of) the authors of the previous work, on top of which we built this practical implementation, plan to participate in the KAUST Rising Stars in AI Symposium 2024:
- M. Safaryan. Postdoctoral MSCA Fellow, IST Austria.
- R. Islamov. PhD student, University of Basel.
- Prof. P. Richtárik. Professor of Computer Science, KAUST.
About the Event.
- Rising Stars in AI Symposium 2024:
- Registration Link
- Date and Time: February 19 - February 21, 2024, UTC+3.
- Location: Building 19, Halls 1,2,3, 4700 King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia.