Federated Learning is Better with Non-Homomorphic Encryption.



The paper “Federated Learning is Better with Non-Homomorphic Encryption” has been accepted as part of the technical program of the 19th ACM International Conference on Emerging Networking EXperiments and Technologies (ACM CoNEXT 2023).

Our paper will be presented at the 4th International Workshop on Distributed Machine Learning (DistributedML 2023), co-located with ACM CoNEXT 2023, on the 8th of December 2023, and will be available in the ACM CoNEXT proceedings in the ACM Digital Library. The workshop will take place at the Conservatoire National des Arts et Métiers (CNAM), 292 Rue Saint-Martin, 75003 Paris, France.

Information about other workshops within ACM CoNEXT 2023 is available from this link.


Links to our paper and the recorded video of the presentation:


I was glad to work with my co-authors and hope for further collaboration with them:

Traditional AI methodologies necessitate centralized data collection, which becomes impractical in the face of constraints on network communication, data privacy, or storage capacity. Federated Learning (FL) offers a paradigm that enables distributed AI model training without collecting raw data. There are different ways to provide privacy during FL training. One popular methodology employs Homomorphic Encryption (HE), a breakthrough in privacy-preserving computation from Cryptography. However, these methods carry a serious price in the form of extra computation and memory footprint. To resolve these issues, we propose an innovative framework that synergizes permutation-based compressors with Classical Cryptography, even though employing Classical Cryptography in the context of FL was previously assumed to be impossible. Our framework offers a way to replace HE with cheaper Classical Cryptography primitives to secure the training process. It fosters asynchronous communication and provides flexible deployment options in various communication topologies.
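
To give a concrete flavor of the idea, below is a minimal, illustrative Python sketch (not our implementation): a PermK-style permutation compressor assigns each client a disjoint block of coordinates, and the compressed update is protected with a cheap classical AEAD cipher (AES-GCM from the `cryptography` package) instead of HE. The function names, the shared-key setup, and the toy aggregation loop are assumptions made for illustration only; the actual protocol and threat model are described in the paper.

```python
# Illustrative sketch only: a PermK-style permutation compressor combined with
# classical symmetric encryption (AES-GCM) in place of Homomorphic Encryption.
# Names and the protocol below are simplifications, not the paper's code.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def permk_compress(grad, client_id, n_clients, perm):
    """Keep only the disjoint block of coordinates assigned to this client."""
    d = grad.shape[0]
    lo, hi = client_id * d // n_clients, (client_id + 1) * d // n_clients
    block = perm[lo:hi]
    return block, n_clients * grad[block]  # scaling keeps the average unbiased

def encrypt_update(key, indices, values):
    """Encrypt the sparse update with AES-GCM -- a cheap classical primitive."""
    payload = indices.astype(np.int64).tobytes() + values.astype(np.float64).tobytes()
    nonce = os.urandom(12)
    return nonce, AESGCM(key).encrypt(nonce, payload, None)

# Toy round: each client ships its encrypted disjoint block; the holder of the
# shared key decrypts the blocks and assembles the averaged model update.
d, n_clients = 8, 4
rng = np.random.default_rng(0)
perm = rng.permutation(d)                 # permutation shared for this round
key = AESGCM.generate_key(bit_length=128)
grads = [rng.standard_normal(d) for _ in range(n_clients)]

aggregate = np.zeros(d)
for cid, g in enumerate(grads):
    idx, vals = permk_compress(g, cid, n_clients, perm)
    nonce, ct = encrypt_update(key, idx, vals)
    raw = AESGCM(key).decrypt(nonce, ct, None)          # receiver side
    k = len(idx)
    r_idx = np.frombuffer(raw[:8 * k], dtype=np.int64)
    r_vals = np.frombuffer(raw[8 * k:], dtype=np.float64)
    aggregate[r_idx] += r_vals / n_clients
```

Since each client transmits only d/n coordinates and a symmetric cipher costs far less than HE operations, the per-round computation and traffic stay small.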

Our work opens a new possibility for applying Classical Cryptography to FL and challenges existing claims about its limitations made in prior works such as:

As part of the proceedings, we provide an Appendix that includes additional details about our framework:

  • Studies of the effect of the optimization problem's dimension
  • Overlapping communication and computation while training machine learning models (a sketch follows this list)
  • Deployment options for our framework in various network topologies
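
For instance, the overlap of communication and computation can be pictured with the following hypothetical Python sketch: while the current round's local gradient is being computed, the previous round's compressed and encrypted update is transmitted on a background thread. Every function here is a simulated placeholder, not an API of our framework.

```python
# Hypothetical sketch: overlap network transmission of the previous update
# with computation of the current local gradient. All functions are stubs.
import threading
import time
import numpy as np

def send_update(payload):
    """Stand-in for transmitting the compressed, encrypted update."""
    time.sleep(0.1)  # simulated network latency

def compute_local_gradient(round_id, d=8):
    """Stand-in for one local training step."""
    time.sleep(0.1)  # simulated computation
    return np.random.default_rng(round_id).standard_normal(d)

def compress_and_encrypt(grad):
    """Stand-in for the compression + classical encryption step."""
    return grad.tobytes()

pending = None
for t in range(5):
    sender = None
    if pending is not None:
        # Ship last round's update in the background ...
        sender = threading.Thread(target=send_update, args=(pending,))
        sender.start()
    # ... while this round's gradient is computed concurrently.
    grad = compute_local_gradient(t)
    pending = compress_and_encrypt(grad)
    if sender is not None:
        sender.join()  # make sure the previous update has been delivered
```

Because the two simulated phases run concurrently, each round takes roughly the time of the slower phase rather than the sum of both.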

Written on November 2, 2023