Unlocking FedNL: Self-Contained Compute-Optimized Implementation (Research from KAUST) in Virtual Radio Studio
I am excited to share my latest experience using NotebookLM, a personalized AI research assistant from Google. I fed it one of my recent research papers, Unlocking FedNL: Self-Contained Compute-Optimized Implementation, written jointly with my peer (and advisor) P. Richtárik, and NotebookLM generated a highly engaging and informative radio podcast. Generated audio:
- Online: https://www.podbean.com/eas/pb-zs34b-16d2942
- Offline: u-fednl-before-rebuttal.mp3
The podcast captures the core message of the work and delivers it in an entertaining format. Currently, the paper is undergoing peer review and is not publicly available.
Abstract
Federated Learning (FL) is an innovative paradigm that allows a large number of intelligent agents to collaboratively train machine learning (ML) models. A recent paper by Safaryan, Islamov, Qian, and Richtárik (2021) introduced the FedNL (Federated Newton Learn) algorithm, marking a significant milestone in applying second-order optimization methods to FL and large-scale optimization (a sketch of its core update follows the list below). The reference FedNL prototype faces three notable challenges:
- It takes approximately 4.8 hours to run a single experiment on a server-grade workstation.
- The prototype supports only single-node execution.
- The FedNL algorithms were implemented in Python, making integration into resource-constrained ML applications difficult.
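For readers unfamiliar with FedNL, here is a sketch of its core iteration, based on my reading of Safaryan et al. (2021); the step size α, compressor C, and positive-definite correction are simplified here and should not be taken as a precise restatement of the algorithm:

```latex
% Sketch of the FedNL iteration (per my reading of Safaryan et al., 2021).
% Each client i updates its local Hessian estimate by sending only a
% compressed correction to the master:
H_i^{k+1} = H_i^k + \alpha \, \mathcal{C}\big( \nabla^2 f_i(x^k) - H_i^k \big)
% The master averages the estimates, applies a positive-definite
% correction [\,\cdot\,]_\mu (details simplified), and takes a Newton-type step:
x^{k+1} = x^k - \Big[ \tfrac{1}{n} \textstyle\sum_{i=1}^n H_i^k \Big]_\mu^{-1} \nabla f(x^k)
```

The key point is that clients never ship full Hessians: only compressed corrections travel over the network, which is what makes a Newton-type method viable in FL.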
Contributions
Our work addresses these challenges as follows:
- We reduced the wall-clock time by a factor of 1000 for single-node simulations, on the same hardware and the same configuration.
- The implementation does not rely on any third-party computation or data-processing frameworks.
- We developed two practical compressors: one problem-adaptive and one CPU cache-aware (see the illustrative sketch after this list).
- Finally, FedNL outperforms existing solutions in both single-node and multi-node settings.
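To make the compressor idea concrete, here is a minimal C++ sketch of a Top-K style compressor over a flattened Hessian correction. Top-K is a standard contractive compressor in the FedNL family; this is purely an illustration, not the problem-adaptive or CPU cache-aware compressor from the paper (those designs are not yet public, and the function name below is mine):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Illustrative Top-K compressor: keep the k largest-magnitude entries of a
// flattened matrix/vector and zero the rest. A standard contractive
// compressor for FedNL-type methods; NOT the paper's problem-adaptive or
// CPU cache-aware compressor, whose details are not publicly available.
std::vector<double> topKCompress(const std::vector<double>& x, std::size_t k) {
    k = std::min(k, x.size());
    std::vector<std::size_t> idx(x.size());
    std::iota(idx.begin(), idx.end(), 0);

    // Partially order indices so the first k refer to the largest |x[i]|.
    std::nth_element(idx.begin(), idx.begin() + k, idx.end(),
                     [&x](std::size_t a, std::size_t b) {
                         return std::fabs(x[a]) > std::fabs(x[b]);
                     });

    std::vector<double> out(x.size(), 0.0);  // dense output; zeros elsewhere
    for (std::size_t i = 0; i < k; ++i) {
        out[idx[i]] = x[idx[i]];
    }
    return out;
}
```

In a real system the clients would transmit only the k (index, value) pairs rather than the dense vector; the dense output above just keeps the sketch short.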
Results
- In single-node scenarios, it outperforms all solvers capable of fitting a logistic regression model that are available through CVXPY (Diamond & Boyd, 2016).
- In multi-node scenarios, it surpasses Apache Spark (Meng et al., 2016).
- Also in multi-node scenarios, it surpasses Ray/Scikit-Learn (Moritz et al., 2018).