Federated learning is moving toward more decentralized and heterogeneous designs that offer greater flexibility and scalability. Researchers are exploring new methods to address challenges such as catastrophic forgetting, biased optimization, and communication errors in decentralized federated learning. Notably, innovative solutions like dynamic allocation hypernetworks and route-and-aggregate strategies are being proposed to improve the performance and robustness of federated learning models. Noteworthy papers include:
- FedSKD, which introduces a novel model-heterogeneous federated learning (MHFL) framework that facilitates direct knowledge exchange through round-robin model circulation, eliminating the need for centralized aggregation.
- FedDAH, which proposes a dynamic allocation hypernetwork with adaptive model recalibration for federated continual learning (FCL), demonstrating superior performance over other FCL methods across sites with differing task streams.
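The round-robin circulation idea behind FedSKD can be illustrated with a minimal sketch: models travel around a ring of clients, each client training whichever model it currently holds, so every model eventually sees every client's data without a central server. The function and its arguments below are hypothetical illustrations, not FedSKD's actual API.

```python
def round_robin_circulation(models, local_train, rounds):
    """Circulate per-client models around a ring (no central aggregation).

    models: list of per-client model states, one per client
    local_train: hypothetical callback (client_id, model) -> updated model
    rounds: number of circulation steps
    """
    for _ in range(rounds):
        # Each client trains the model it currently holds on its local data.
        models = [local_train(cid, m) for cid, m in enumerate(models)]
        # Shift models one position around the ring:
        # client i receives the model previously held by client i-1.
        models = models[-1:] + models[:-1]
    return models


# Toy usage: represent a "model" as the list of clients that trained it.
trained = round_robin_circulation(
    [[] for _ in range(3)],
    lambda cid, m: m + [cid],
    rounds=3,
)
# After a full circuit, every model has been trained by every client once.
```

With `rounds` equal to the number of clients, each model completes one full circuit of the ring, which is the property that lets knowledge spread without a server.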