The field of federated learning (FL) is evolving rapidly to address the challenges of data privacy, communication efficiency, and robustness in distributed machine learning. Recent work focuses on hardening global models against adversarial attacks and data heterogeneity, reducing communication overhead through compression and partial gradient sharing, and improving the efficiency and scalability of FL systems in undependable, resource-constrained environments. There is also growing emphasis on optimizing the energy efficiency and latency of FL in wireless networks, and on reducing the computational workload on edge devices through data selection and model splitting. Together, these advances aim to make FL more practical for real-world deployments, particularly in mobile edge networks and IoT environments.
Noteworthy papers include the following; illustrative code sketches of several of the described techniques appear after the list:
- Federated Hybrid Training and Self-Adversarial Distillation: Introduces FedBAT, a framework that combines hybrid adversarial training and self-adversarial distillation to enhance model robustness and generalization.
- Delayed Random Partial Gradient Averaging for Federated Learning: Proposes DPGA, a method that reduces communication bottlenecks by sharing only a portion of each gradient and overlapping local computation with communication.
- Caesar: A Low-deviation Compression Approach for Efficient Federated Learning: Presents Caesar, a framework that optimizes compression ratios based on model staleness and local data properties to reduce traffic costs without significant accuracy loss.
- A Robust Federated Learning Framework for Undependable Devices at Scale: Introduces FLUDE, which improves model performance and resource efficiency in undependable environments through adaptive device selection and a staleness-aware strategy.
- Accelerating Energy-Efficient Federated Learning in Cell-Free Networks with Adaptive Quantization: Offers an energy-efficient FL framework with adaptive quantization and optimized power allocation for cell-free networks.
- Federated Learning with Workload Reduction through Partial Training of Client Models and Entropy-Based Data Selection: Proposes FedFT-EDS, which reduces client training workload by selecting the most informative samples and fine-tuning only part of the client model, improving client learning efficiency.
- Federated Dropout: Convergence Analysis and Resource Allocation: Provides a theoretical analysis of Federated Dropout, offering insights into optimizing dropout rates and bandwidth allocation for faster convergence.
- Communication-and-Computation Efficient Split Federated Learning: Introduces a split federated learning (SFL) framework that jointly optimizes the model split point and resource allocation to reduce communication overhead and improve convergence.
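
The hybrid adversarial training plus self-adversarial distillation idea behind FedBAT can be pictured as a local update that trains on clean and adversarial examples while distilling the clean predictions into the adversarial branch. The sketch below is one plausible reading, not FedBAT itself: the FGSM attack, the equal weighting of the clean and adversarial losses, and the `distill_weight` parameter are assumptions.

```python
import torch
import torch.nn.functional as F

def local_adversarial_distillation_step(model, x, y, optimizer,
                                        epsilon=0.03, distill_weight=1.0):
    """One local step mixing clean training, adversarial training, and
    self-distillation from clean logits to adversarial logits.
    Illustrative only; loss weights and the FGSM attack are assumptions."""
    model.train()

    # Clean forward pass; its logits also serve as the distillation "teacher".
    clean_logits = model(x)
    clean_loss = F.cross_entropy(clean_logits, y)

    # FGSM-style adversarial example crafted against the current model.
    x_adv = x.clone().detach().requires_grad_(True)
    attack_loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(attack_loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Adversarial loss plus KL distillation toward the clean predictions.
    adv_logits = model(x_adv)
    adv_loss = F.cross_entropy(adv_logits, y)
    distill_loss = F.kl_div(F.log_softmax(adv_logits, dim=1),
                            F.softmax(clean_logits.detach(), dim=1),
                            reduction="batchmean")

    loss = clean_loss + adv_loss + distill_weight * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```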
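
DPGA's partial gradient sharing can be illustrated by having each client upload only a random fraction of its gradient coordinates, with the server averaging each coordinate over the clients that actually reported it. This is a minimal sketch under an assumed uniform random mask and a fixed `share_ratio`; the delayed-averaging schedule that overlaps communication with local computation is not modeled.

```python
import numpy as np

def partial_gradient_round(client_gradients, share_ratio=0.25, rng=None):
    """Average only a randomly selected subset of gradient coordinates from
    each client (illustrative; not the full DPGA schedule)."""
    rng = rng or np.random.default_rng()
    dim = client_gradients[0].size
    agg = np.zeros(dim)
    counts = np.zeros(dim)

    for g in client_gradients:
        # Each client uploads share_ratio of its coordinates, chosen at random.
        idx = rng.choice(dim, size=max(1, int(share_ratio * dim)), replace=False)
        agg[idx] += g.ravel()[idx]
        counts[idx] += 1

    # Per-coordinate average over the clients that reported that coordinate.
    mask = counts > 0
    agg[mask] /= counts[mask]
    return agg.reshape(client_gradients[0].shape)

# Toy usage: three clients, 10-dimensional gradients.
grads = [np.random.randn(10) for _ in range(3)]
print(partial_gradient_round(grads, share_ratio=0.4))
```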
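
Caesar ties each client's compression ratio to the staleness of its local model. A minimal sketch, assuming top-k sparsification and a simple linear schedule (both assumptions; the paper's low-deviation ratio selection also accounts for local data properties, and the exact staleness-to-ratio mapping may differ):

```python
import numpy as np

def staleness_aware_topk(update, staleness, base_ratio=0.05, max_ratio=0.5):
    """Sparsify a model update with a keep-ratio that grows with staleness,
    so clients that have drifted further from the global model send more
    coordinates. The linear schedule is an illustrative assumption."""
    ratio = min(max_ratio, base_ratio * (1 + staleness))
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    # Keep the k largest-magnitude coordinates, zero out the rest.
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(update.shape), ratio

update = np.random.randn(1000)
sparse_update, ratio = staleness_aware_topk(update, staleness=3)
print(f"kept {np.count_nonzero(sparse_update)} of {update.size} values "
      f"(ratio {ratio:.2f})")
```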
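
The adaptive-quantization ingredient of the cell-free framework can be sketched as unbiased stochastic uniform quantization of the model update at a per-round bit-width; how the bit-width and transmit power are chosen from channel and energy conditions is the paper's contribution and is not modeled here.

```python
import numpy as np

def stochastic_quantize(update, num_bits, rng=None):
    """Uniform stochastic quantization of a model update to 2**num_bits values.
    Rounding up or down at random keeps the quantizer unbiased in expectation.
    Choosing num_bits adaptively per round is outside this sketch."""
    rng = rng or np.random.default_rng()
    intervals = 2 ** num_bits - 1
    lo, hi = update.min(), update.max()
    scale = (hi - lo) / intervals if hi > lo else 1.0

    # Map to [0, intervals], then round stochastically.
    normalized = (update - lo) / scale
    floor = np.floor(normalized)
    quantized = floor + (rng.random(update.shape) < (normalized - floor))
    return lo + quantized * scale

update = np.random.randn(8)
for bits in (2, 4, 8):
    dequant = stochastic_quantize(update, bits)
    print(bits, np.abs(dequant - update).max())  # error shrinks with more bits
```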
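
Entropy-based data selection in the spirit of FedFT-EDS can be approximated by scoring each local sample with the prediction entropy of the current global model and keeping only the most uncertain fraction for local training. The keep fraction and batching below are assumptions, and the partial-model fine-tuning component is omitted.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def select_informative_samples(model, dataset, keep_fraction=0.3, batch_size=64):
    """Rank local samples by prediction entropy under the current global model
    and keep the most uncertain fraction for local training. Assumes the
    dataset yields (input, label) pairs; keep_fraction is illustrative."""
    model.eval()
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    entropies = []
    for x, _ in loader:
        probs = F.softmax(model(x), dim=1)
        entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum(dim=1))
    entropies = torch.cat(entropies)

    # Keep the top keep_fraction most-uncertain samples for local training.
    k = max(1, int(keep_fraction * len(dataset)))
    top_idx = torch.topk(entropies, k).indices.tolist()
    return Subset(dataset, top_idx)
```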
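
In Federated Dropout, the server sends each client a randomly thinned sub-model, so only the surviving parameters are downloaded, trained, and uploaded. A minimal sketch of sub-model extraction for a two-layer MLP, with the dropout rate taken as given (the paper's point is precisely how to choose that rate and the bandwidth allocation):

```python
import numpy as np

def extract_submodel(W1, W2, dropout_rate, rng=None):
    """Drop a fraction of hidden units so the client exchanges only the
    surviving rows of W1 and columns of W2. Mapping the trained sub-matrices
    back into the full model is the reverse of this indexing step."""
    rng = rng or np.random.default_rng()
    hidden = W1.shape[0]
    keep = rng.choice(hidden, size=int(round((1 - dropout_rate) * hidden)),
                      replace=False)
    keep.sort()
    return W1[keep, :], W2[:, keep], keep

# Toy usage: hidden layer of 8 units, dropout rate 0.5.
W1 = np.random.randn(8, 4)   # input -> hidden
W2 = np.random.randn(3, 8)   # hidden -> output
sub_W1, sub_W2, keep = extract_submodel(W1, W2, dropout_rate=0.5)
print(sub_W1.shape, sub_W2.shape, keep)   # (4, 4) (3, 4) surviving unit ids
```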
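
Split federated learning places a cut layer in the model: the client runs the early layers and sends the "smashed" activations, and the server completes the forward and backward passes and returns the activation gradients. The sketch below shows that handshake for an assumed toy MLP and cut point; the paper's contribution, jointly choosing the split and allocating communication and computation resources, is not shown.

```python
import torch
import torch.nn as nn

# Illustrative split of a small MLP at an assumed cut layer: the client keeps
# the first block, the server keeps the rest.
client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_part = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

def split_training_step(x, y, client_opt, server_opt):
    """One SFL step: client forwards to the cut, server finishes the pass,
    and gradients flow back across the cut via the activation tensor."""
    smashed = client_part(x)                                 # client-side forward
    smashed_remote = smashed.detach().requires_grad_(True)   # "sent" activations

    logits = server_part(smashed_remote)                     # server-side forward
    loss = nn.functional.cross_entropy(logits, y)

    server_opt.zero_grad()
    loss.backward()                           # server backward up to the cut
    server_opt.step()

    client_opt.zero_grad()
    smashed.backward(smashed_remote.grad)     # client backward from returned grad
    client_opt.step()
    return loss.item()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
client_opt = torch.optim.SGD(client_part.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_part.parameters(), lr=0.1)
print(split_training_step(x, y, client_opt, server_opt))
```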