Efficient and Personalized Federated Learning Solutions

Recent advances in federated learning (FL) show significant progress on core challenges such as communication overhead, data heterogeneity, and latency. A common theme across the latest research is optimizing FL frameworks for efficiency and performance, often through novel algorithms and architectural innovations. For instance, adaptive quantization-resolution and power-control schemes mitigate the straggler effect and reduce communication overhead, achieving comparable test accuracy with substantial communication savings. Low-latency designs push further by replacing backpropagation with forward-only propagation.

On the heterogeneity side, integrating multi-task learning with Bayesian methods shows promise for handling diverse tasks across local devices, improving both predictive performance and uncertainty quantification. Other notable contributions include decentralized FL methods that rely on knowledge distillation and prototype learning for communication-efficient training, approaches that combine label smoothing with balanced decentralized training to improve domain generalization, and non-convex optimization techniques that pair variance reduction with adaptive learning to improve convergence rates and communication complexity.

Personalized FL frameworks are being developed to address data skew, and federated source-free domain adaptation methods tackle classification when client data are unlabeled. Exploiting pre-trained models, together with covariance estimates obtained for free from shared mean statistics, has been shown to reduce communication costs while improving performance. Applications in healthcare, from predicting the survival of hemodialysis patients to inferring breast cancer HER2 status from whole-slide images, underline FL's relevance in privacy-sensitive domains. Overall, the field is moving toward more efficient, personalized, and robust FL solutions for diverse, heterogeneous data environments.
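To ground the communication-efficiency theme, the sketch below shows the basic building block that adaptive-resolution schemes tune: uniform quantization of a model update at a chosen bit-width. This is a minimal illustration, not the scheme from any cited paper; in particular, the policy of assigning coarser bit-widths to clients with weaker links is an assumption made here for demonstration.

```python
import numpy as np

def quantize_update(update: np.ndarray, num_bits: int) -> np.ndarray:
    """Uniformly quantize a model update to num_bits bits per entry.

    Values are snapped to a uniform grid with 2**num_bits levels over
    [min, max]; fewer bits mean a cheaper but coarser update.
    """
    levels = 2 ** num_bits
    lo, hi = float(update.min()), float(update.max())
    if hi == lo:  # constant update: nothing to quantize
        return update.copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((update - lo) / step) * step

# Hypothetical policy: clients on weaker links send coarser updates,
# trading per-round precision for lower communication cost.
rng = np.random.default_rng(0)
local_update = rng.normal(size=10_000).astype(np.float32)

for bits in (8, 4, 2):
    q = quantize_update(local_update, bits)
    rel_err = np.linalg.norm(q - local_update) / np.linalg.norm(local_update)
    print(f"{bits}-bit update: relative error {rel_err:.3f}, "
          f"payload ~{bits / 32:.0%} of float32")
```

The trade-off that adaptive schemes navigate is visible directly: each halving of the bit-width halves the payload but raises quantization error, which a server-side policy can balance against per-client channel quality and straggler risk.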

Sources

Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks

Task Diversity in Bayesian Federated Learning: Simultaneous Processing of Classification and Regression

Predicting Survival of Hemodialysis Patients using Federated Learning

ProFe: Communication-Efficient Decentralized Federated Learning via Distillation and Prototypes

Federated Domain Generalization with Label Smoothing and Balanced Decentralized Training

Non-Convex Optimization in Federated Learning via Variance Reduction and Adaptive Learning

UA-PDFL: A Personalized Approach for Decentralized Federated Learning

Federated Source-free Domain Adaptation for Classification: Weighted Cluster Aggregation for Unlabeled Data

Covariances for Free: Exploiting Mean Distributions for Federated Learning with Pre-Trained Models

Point Transformer with Federated Learning for Predicting Breast Cancer HER2 Status from Hematoxylin and Eosin-Stained Whole Slide Images

LoLaFL: Low-Latency Federated Learning via Forward-only Propagation
