Advances in Decentralized Federated Learning

The field of federated learning is moving toward more decentralized and heterogeneous approaches, enabling greater flexibility and scalability. Researchers are exploring new methods to address challenges such as catastrophic forgetting, biased optimization, and communication errors in decentralized federated learning. Notably, innovative solutions such as dynamic allocation hypernetworks and route-and-aggregate strategies have been proposed to improve the performance and robustness of federated learning models. Noteworthy papers include:

  • FedSKD, which introduces a novel model-heterogeneous federated learning (MHFL) framework that facilitates direct knowledge exchange through round-robin model circulation, eliminating the need for centralized aggregation.
  • FedDAH, which proposes a dynamic allocation hypernetwork with adaptive model recalibration for federated continual learning (FCL), demonstrating superiority over other FCL methods on sites with different task streams.
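The round-robin circulation idea behind FedSKD can be illustrated with a toy sketch. This is not the paper's implementation: real clients would circulate neural-network weights and exchange knowledge via similarity-based distillation, whereas here each "model" is a single float and `local_update` is a hypothetical stand-in that nudges a visiting model toward the host client's local data mean.

```python
# Toy sketch of aggregation-free round-robin model circulation.
# Each client's model visits every peer in ring order; the host
# applies a local update (a stand-in for knowledge distillation),
# so no central server ever aggregates the models.

def local_update(model, local_mean, lr=0.5):
    # Hypothetical distillation step: pull the visiting model
    # toward the hosting client's local data mean.
    return model + lr * (local_mean - model)

def round_robin_circulation(local_means, rounds=3):
    n = len(local_means)
    models = list(local_means)  # each client starts from its own model
    for _ in range(rounds):
        for step in range(n):
            # Model i is currently hosted by client (i + step) % n.
            models = [
                local_update(models[i], local_means[(i + step) % n])
                for i in range(n)
            ]
    return models

models = round_robin_circulation([0.0, 1.0, 2.0])
print(models)  # the models drift toward a consensus of the local means
```

Because every model eventually visits every client, local knowledge spreads through the ring without any centralized aggregation step, which is the structural point of the round-robin design.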

Sources

Decentralized Federated Dataset Dictionary Learning for Multi-Source Domain Adaptation

FedSKD: Aggregation-free Model-heterogeneous Federated Learning using Multi-dimensional Similarity Knowledge Distillation

Unlocking the Value of Decentralized Data: A Federated Dual Learning Approach for Model Aggregation

Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for Federated Continual Learning

Route-and-Aggregate Decentralized Federated Learning Under Communication Errors
