Enhancing Federated Learning Through Dynamic Client Management and Semi-Supervised Approaches

Recent advances in federated learning (FL) have focused on three intertwined challenges: dynamic client states, skewed data distributions, and the scarcity of labeled data. One notable trend is trust-aware client scheduling, which improves training efficiency by selectively involving clients based on their estimated contributions and communication states, often using adaptive models of client behavior to guide selection toward better global training outcomes. A second line of work addresses federated unlearning and model recovery under skewed label distributions, using techniques such as oversampling and density-based denoising to restore dataset quality. Semi-supervised paradigms for decentralized FL are also emerging: pseudo-labeling and consensus-based diffusion models let participants exploit unlabeled data, improving performance when labels are scarce. Finally, client stratification and sampling methods are being explored to optimize client participation and reduce communication cost. Together, these developments aim to make FL more robust, efficient, and applicable to a wider range of real-world deployments.
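To make the stratification-and-sampling idea concrete, here is a minimal sketch of stratified client selection for one FL round. It is an illustration under assumptions, not the FedSTaS algorithm: the stratification criterion (local dataset size), the function name, and the proportional-allocation rule are all hypothetical choices for this example.

```python
# Illustrative sketch of stratified client sampling for an FL round.
# Stratifying by local dataset size is an assumption for illustration,
# not the criterion used by FedSTaS.
import random


def stratified_client_sample(clients, num_strata, sample_size, seed=0):
    """Split clients into strata by local data size, then draw a
    proportional sample from each stratum.

    clients: dict mapping client id -> local dataset size.
    Returns a list of selected client ids.
    """
    rng = random.Random(seed)
    ordered = sorted(clients, key=clients.get)  # ascending data size

    # Partition the ordered clients into contiguous, near-equal strata.
    base, rem = divmod(len(ordered), num_strata)
    strata, start = [], 0
    for i in range(num_strata):
        end = start + base + (1 if i < rem else 0)
        strata.append(ordered[start:end])
        start = end

    # Sample from each stratum in proportion to its share of all clients,
    # keeping at least one pick per non-empty stratum.
    selected = []
    for stratum in strata:
        if not stratum:
            continue
        n = max(1, round(sample_size * len(stratum) / len(ordered)))
        selected.extend(rng.sample(stratum, min(n, len(stratum))))
    return selected
```

Compared with uniform sampling, this keeps every data-size regime represented in each round, which is one way stratification can reduce the variance of the aggregated update and improve communication efficiency.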

Sources

TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning

Federated Unlearning Model Recovery in Data with Skewed Label Distributions

SemiDFL: A Semi-Supervised Paradigm for Decentralized Federated Learning

FedSTaS: Client Stratification and Client Level Sampling for Efficient Federated Learning
