Enhancing Privacy and Security in Federated Learning

The field of federated learning (FL) is seeing significant advances aimed at improving privacy, security, and performance in decentralized environments. Recent work addresses the challenges posed by non-Independent and Identically Distributed (non-IID) data, label distribution skew, and data drift, all of which are critical for maintaining model accuracy and convergence.

On the performance side, new client selection algorithms, such as bias-aware and entropy-based methods, strategically choose clients that contribute to a more balanced and representative global model. There is also growing emphasis on anomaly detection within FL frameworks, both to guard against malicious client behavior and to improve system efficiency, and clustering approaches are being refined to adapt to data drift so that training remains effective as client data distributions change.

On the robustness side, novel methods for handling label shift, such as aligned distribution mixtures, better exploit the available data and improve model generalization, while formal logic-guided defenses target poisoning attacks. Collectively, these advances aim to make FL more adaptable, secure, and efficient, particularly in sensitive applications like healthcare and IoT.
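To make the anomaly-detection idea concrete, here is a minimal sketch of one common baseline: flagging clients whose model-update magnitude is a robust outlier relative to the cohort. This is an illustrative rule using median absolute deviation, not the method of any paper listed below; the client IDs, update vectors, and threshold `k` are all hypothetical.

```python
import math
import statistics

def l2_norm(update):
    """L2 norm of a client's flattened model update."""
    return math.sqrt(sum(w * w for w in update))

def flag_anomalous_clients(updates, k=3.0):
    """Flag clients whose update norm is a robust outlier.

    `updates` maps client id -> flat list of update values.
    A client is flagged when its norm deviates from the median
    by more than k times the median absolute deviation (MAD).
    Real FL defenses use richer statistics; this is a toy rule.
    """
    norms = {cid: l2_norm(u) for cid, u in updates.items()}
    med = statistics.median(norms.values())
    mad = statistics.median(abs(n - med) for n in norms.values())
    if mad == 0:  # all updates identical in magnitude
        return set()
    return {cid for cid, n in norms.items() if abs(n - med) > k * mad}

updates = {
    "c1": [0.10, -0.20, 0.05],
    "c2": [0.12, -0.18, 0.04],
    "c3": [5.00, -4.00, 6.00],  # outlier: far larger update
    "c4": [0.11, -0.19, 0.06],
}
print(flag_anomalous_clients(updates))  # -> {'c3'}
```

A median/MAD rule is preferred over mean/standard deviation here because a single malicious client can inflate the mean and mask itself; the formal logic-guided defenses mentioned above go well beyond such magnitude checks.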

Noteworthy papers include one proposing a bias-aware client selection algorithm that significantly improves FL performance in non-IID data scenarios, and another introducing an entropy-based client selection method that outperforms state-of-the-art algorithms by up to 6% in classification accuracy.
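As a rough illustration of the entropy-based idea, the sketch below ranks clients by the Shannon entropy of their local label distributions and selects the most balanced ones. This is a simplified proxy, assuming the server can see (or securely estimate) per-client label counts; it is not the exact algorithm of the cited paper, and all client names and data are made up.

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (bits) of a client's label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_clients(client_labels, m):
    """Pick the m clients with the highest-entropy (most balanced)
    local label distributions -- a simple stand-in for selecting
    clients whose data best represents all classes under non-IID skew."""
    ranked = sorted(client_labels,
                    key=lambda cid: label_entropy(client_labels[cid]),
                    reverse=True)
    return ranked[:m]

clients = {
    "a": [0, 0, 0, 0, 1],        # skewed: mostly class 0
    "b": [0, 1, 2, 0, 1, 2],     # uniform across three classes
    "c": [1, 1, 1, 1, 1],        # single class -> zero entropy
    "d": [0, 1, 0, 1, 2],        # fairly balanced
}
print(select_clients(clients, 2))  # -> ['b', 'd']
```

In practice, selection methods must also weigh privacy (clients may only reveal noisy or encrypted label statistics) and fairness across rounds, which is where the bias-aware formulation comes in.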

Sources

BACSA: A Bias-Aware Client Selection Algorithm for Privacy-Preserving Federated Learning in Wireless Healthcare Networks

Optimizing Federated Learning by Entropy-Based Client Selection

Anomalous Client Detection in Federated Learning

Federated Learning Clients Clustering with Adaptation to Data Drifts

Theory-inspired Label Shift Adaptation via Aligned Distribution Mixture

Formal Logic-guided Robust Federated Learning against Poisoning Attacks

Overcoming label shift in targeted federated learning
