Enhancing Privacy and Efficiency in Federated Learning

Recent advancements in federated learning (FL) have focused on enhancing privacy, efficiency, and robustness against adversarial scenarios. Innovations in homomorphic encryption and quantization have been pivotal in securing model updates during training, mitigating the privacy risks posed by inference attacks. Notably, algorithms that combine low-bit quantization with pruning have demonstrated substantial reductions in computational and communication cost while maintaining model accuracy. The integration of adaptive optimizers with gradient-sketching methods has likewise shown promise in reducing communication overhead, with convergence analyses whose communication cost scales logarithmically, rather than linearly, in the number of model parameters, a property that matters for large deep learning models. The field has also made progress on noisy labels through client pruning, which identifies and excludes clients whose data is likely mislabeled, improving overall model performance. Privacy-preserving inference services have advanced as well, using cryptographic techniques to guarantee both data privacy and verifiability of the returned predictions. Together, these developments mark a shift toward more secure, efficient, and reliable FL frameworks that meet the growing demand for decentralized, privacy-conscious machine learning.
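To make the quantization-plus-pruning idea concrete, here is a minimal, illustrative sketch (not the QuanCrypt-FL algorithm itself; all names and parameter choices are hypothetical): a client's model update is magnitude-pruned, then uniformly quantized to low-bit integers, which are cheap to transmit and amenable to integer-based homomorphic encryption schemes.

```python
def prune_and_quantize(update, prune_ratio=0.5, bits=4):
    """Zero the smallest-magnitude entries of a client update,
    then uniformly quantize the rest to signed low-bit integers.
    Illustrative sketch only; real systems operate on tensors."""
    n = len(update)
    k = int(n * prune_ratio)  # how many entries to drop
    # indices sorted by ascending magnitude; the first k get pruned
    order = sorted(range(n), key=lambda i: abs(update[i]))
    pruned = list(update)
    for i in order[:k]:
        pruned[i] = 0.0
    # uniform quantization onto 2**bits - 1 symmetric levels
    levels = 2 ** bits - 1
    max_abs = max(abs(v) for v in pruned) or 1.0
    scale = max_abs / levels  # quantization step size
    quantized = [round(v / scale) for v in pruned]  # small ints
    return quantized, scale

def dequantize(quantized, scale):
    """Server-side reconstruction of the (lossy) update."""
    return [q * scale for q in quantized]
```

In a full pipeline, the integer vector `quantized` (rather than raw floats) would be encrypted before upload, so the server aggregates ciphertexts without ever seeing individual updates; the sketch omits the encryption step entirely.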

Sources

QuanCrypt-FL: Quantized Homomorphic Encryption with Pruning for Secure Federated Learning

Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling

Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis

Federated Learning Client Pruning for Noisy Labels

Privacy-Preserving Verifiable Neural Network Inference Service

Towards efficient compression and communication for prototype-based decentralized learning

The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries
