Federated Learning

Current Developments in Federated Learning

General Direction of the Field

The field of Federated Learning (FL) is evolving rapidly, with recent work focused on the robustness, efficiency, and privacy of distributed machine learning. A significant trend is the development of increasingly specialized techniques for the field's core challenges: data heterogeneity across clients, communication constraints, and adversarial threats. Innovations in model calibration, dynamic resource allocation, and personalized learning are improving both model accuracy and reliability, while novel defenses against Byzantine attacks and targeted adversarial strategies are strengthening the security of FL systems.

One key area of innovation is the adaptation of generative models for parameter aggregation in personalized FL, which aims to better capture the complex, high-dimensional structure of model parameters across diverse data distributions. This addresses a limitation of traditional linear aggregation, which implicitly assumes that client parameters can be meaningfully combined by element-wise weighted averaging. In parallel, privacy-preserving mechanisms that leverage knowledge distillation and conditional generators are providing new ways to balance performance and privacy in FL.
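For context, the linear aggregation baseline that generative approaches aim to improve on is the familiar FedAvg-style weighted average of client parameters. A minimal sketch (function name and shapes are illustrative, not from any of the cited papers):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Linear (FedAvg-style) aggregation: average client parameter
    vectors, weighted by each client's local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                   # normalize to sum to 1
    stacked = np.stack(client_params)          # (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Two toy clients with unequal amounts of local data.
global_params = fedavg([np.array([1.0, 2.0]),
                        np.array([3.0, 4.0])],
                       client_sizes=[1, 3])    # -> array([2.5, 3.5])
```

Because this combines each coordinate independently and linearly, it can wash out client-specific structure under heterogeneous data, which is the gap generative parameter aggregation targets.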

Another notable development is the exploration of hierarchical coordination frameworks that utilize pre-trained model blocks, enabling more efficient and scalable training processes. These frameworks are particularly beneficial for heterogeneous environments, where low-end devices can contribute to the global model without excessive resource consumption. Additionally, the integration of reinforcement learning for dynamic model selection in multi-model FL systems is showing promise in mitigating model poisoning attacks.

Noteworthy Innovations

  • Model Calibration in FL: A novel framework dynamically adjusts calibration objectives based on local and global model relationships, enhancing both accuracy and reliability in heterogeneous settings.
  • Dynamic Resource Allocation: An adaptive FL framework allocates communication resources based on data heterogeneity, achieving significant performance improvements while optimizing communication costs.
  • Generative Parameter Aggregation: A diffusion-based approach for personalized FL effectively decouples parameter complexity, leading to superior performance across multiple datasets.
  • Hybrid Defense Against Byzantine Attacks: A general-purpose aggregation rule demonstrates resilience against a wide range of attacks, highlighting the ongoing need for robust FL algorithms.
  • Privacy-Preserving Knowledge Distillation: A method using conditional generators ensures high performance and privacy, addressing the conflict between privacy and efficiency in FL.
  • Hierarchical Coordination with Pre-trained Blocks: A framework that stitches pre-trained blocks improves model accuracy and reduces resource consumption, making FL more accessible to low-end devices.
  • Data-Free Adversarial Distillation: A one-shot FL method leverages dual-generator training to explore broader local model spaces, achieving significant accuracy gains.
  • Multi-Model FL for Attack Mitigation: A proactive mechanism using multiple models dynamically changes client model structures to enhance robustness against model poisoning attacks.
  • Prototype-Based FL with Proxy Classes: A method for embedding networks in classification tasks conceals true class prototypes, enhancing privacy while maintaining discriminative learning.
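The hybrid-defense work above builds on classic Byzantine-robust aggregation rules. One standard baseline (shown here as a generic illustration, not the paper's specific rule) is the coordinate-wise median, which bounds the influence of a minority of arbitrarily corrupted updates:

```python
import numpy as np

def coordinate_median(client_updates):
    """Coordinate-wise median: a classic Byzantine-robust aggregation
    rule. The median is taken per parameter, so a minority of
    arbitrarily corrupted updates cannot drag the aggregate far."""
    return np.median(np.stack(client_updates), axis=0)

honest = [np.array([1.0, 1.0]),
          np.array([1.1, 0.9]),
          np.array([0.9, 1.1])]
byzantine = [np.array([100.0, -100.0])]   # one attacker sends garbage
agg = coordinate_median(honest + byzantine)   # stays near [1.0, 1.0]
```

A plain mean over the same four updates would be pulled to roughly [25.75, -24.25]; the median stays close to the honest updates, which is why such rules serve as building blocks for hybrid defenses.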

Sources

Unlocking the Potential of Model Calibration in Federated Learning

DynamicFL: Federated Learning with Dynamic Communication Resource Allocation

pFedGPA: Diffusion-based Generative Parameter Aggregation for Personalized Federated Learning

DV-FSR: A Dual-View Target Attack Framework for Federated Sequential Recommendation

Advancing Hybrid Defense for Byzantine Attacks in Federated Learning

Privacy-Preserving Federated Learning with Consistency via Knowledge Distillation Using Conditional Generator

Heterogeneity-Aware Coordination for Federated Learning via Stitching Pre-trained blocks

Federated $\mathcal{X}$-armed Bandit with Flexible Personalisation

PDC-FRS: Privacy-preserving Data Contribution for Federated Recommender System

DFDG: Data-Free Dual-Generator Adversarial Distillation for One-Shot Federated Learning

Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems

FedHide: Federated Learning by Hiding in the Neighbors