Enhancing Privacy, Efficiency, and Robustness in Federated Learning

Recent developments in federated learning (FL) show a clear shift toward enhancing privacy, efficiency, and robustness in distributed learning environments. A common theme across the latest research is the integration of advanced optimization techniques and novel algorithms to address the inherent challenges of data heterogeneity, privacy preservation, and resource constraints. In particular, there is growing emphasis on locally differentially private (LDP) algorithms, swarm intelligence-driven client selection, and robust graph learning methods that mitigate the effects of non-IID data and adversarial noise. Energy-efficient and split learning frameworks are being explored for fine-tuning large language models in edge networks, while adaptive and personalized FL approaches are gaining traction for more tailored learning outcomes. The field is also seeing advances in optimization algorithms, such as quasi-Newton methods and fractional-order distributed optimization, which promise faster convergence and better stability. Notably, deep reinforcement learning for resource allocation in mobile networks and conformal symplectic optimization for stable RL training are emerging as promising directions. These innovations collectively aim to push the boundaries of FL in scalability, performance, and applicability across diverse real-world scenarios.
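
To make the LDP theme concrete, the sketch below shows a generic client-side update that clips the local gradient and perturbs it with Gaussian noise before it leaves the device. This is a minimal illustration of the general idea, not the method of any paper listed below; the function name, clipping bound, and noise scale are assumptions chosen for the example.

```python
import numpy as np

def ldp_local_update(gradient, clip_norm=1.0, noise_std=0.5, rng=None):
    """Illustrative LDP-style local update: clip the gradient to a fixed norm,
    then add Gaussian noise before sharing it with the server.
    clip_norm and noise_std are placeholder values, not taken from any cited paper."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=gradient.shape)
```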

Noteworthy papers include 'Locally Differentially Private Online Federated Learning With Correlated Noise,' which introduces a novel LDP algorithm with temporally correlated noise, and 'Swarm Intelligence-Driven Client Selection for Federated Learning in Cybersecurity applications,' which demonstrates the superior adaptability of swarm intelligence algorithms in decentralized FL settings.
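
As a rough illustration of what "temporally correlated noise" can look like, one simple construction is an AR(1) process in which each round's perturbation is correlated with the previous one. The sketch below is a generic example under that assumption; the AR(1) form and its parameters are illustrative and are not the construction used in the cited paper.

```python
import numpy as np

def correlated_noise_stream(shape, rho=0.9, sigma=1.0, steps=100, rng=None):
    """Yield a sequence n_t = rho * n_{t-1} + sqrt(1 - rho**2) * sigma * e_t,
    i.e. temporally correlated Gaussian perturbations across FL rounds.
    The AR(1) recursion and parameter values are illustrative assumptions."""
    if rng is None:
        rng = np.random.default_rng()
    noise = np.zeros(shape)
    for _ in range(steps):
        noise = rho * noise + np.sqrt(1.0 - rho**2) * sigma * rng.standard_normal(shape)
        yield noise
```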

Sources

Locally Differentially Private Online Federated Learning With Correlated Noise

Swarm Intelligence-Driven Client Selection for Federated Learning in Cybersecurity applications

FedRGL: Robust Federated Graph Learning for Label Noise

Controlling Participation in Federated Learning with Feedback

Rethinking the initialization of Momentum in Federated Learning with Heterogeneous Data

Adaptive Coordinate-Wise Step Sizes for Quasi-Newton Methods: A Learning-to-Optimize Approach

Energy-Efficient Split Learning for Fine-Tuning Large Language Models in Edge Networks

EFTViT: Efficient Federated Training of Vision Transformers with Masked Images on Resource-Constrained Edge Devices

Learning Locally, Revising Globally: Global Reviser for Federated Learning with Noisy Labels

Federated Progressive Self-Distillation with Logits Calibration for Personalized IIoT Edge Intelligence

EnFed: An Energy-aware Opportunistic Federated Learning in Resource Constrained Environments for Human Activity Recognition

Revisiting Self-Supervised Heterogeneous Graph Learning from Spectral Clustering Perspective

Incentivizing Truthful Collaboration in Heterogeneous Federated Learning

FedPAW: Federated Learning with Personalized Aggregation Weights for Urban Vehicle Speed Prediction

FedAH: Aggregated Head for Personalized Federated Learning

Review of Mathematical Optimization in Federated Learning

Near-Optimal Resilient Labeling Schemes

Dynamics of Resource Allocation in O-RANs: An In-depth Exploration of On-Policy and Off-Policy Deep Reinforcement Learning for Real-Time Applications

Generalized EXTRA stochastic gradient Langevin dynamics

Learn More by Using Less: Distributed Learning with Energy-Constrained Devices

Conformal Symplectic Optimization for Stable Reinforcement Learning

Fractional Order Distributed Optimization

Defending Against Diverse Attacks in Federated Learning Through Consensus-Based Bi-Level Optimization

BGTplanner: Maximizing Training Accuracy for Differentially Private Federated Recommenders via Strategic Privacy Budget Allocation

Reactive Orchestration for Hierarchical Federated Learning Under a Communication Cost Budget

Adaptive Personalized Over-the-Air Federated Learning with Reflecting Intelligent Surfaces

Beyond Local Sharpness: Communication-Efficient Global Sharpness-aware Minimization for Federated Learning

FedMetaMed: Federated Meta-Learning for Personalized Medication in Distributed Healthcare Systems

GP-FL: Model-Based Hessian Estimation for Second-Order Over-the-Air Federated Learning

BEFL: Balancing Energy Consumption in Federated Learning for Mobile Edge IoT

Traffic-cognitive Slicing for Resource-efficient Offloading with Dual-distillation DRL in Multi-edge Systems

Federated Automated Feature Engineering

Providing Differential Privacy for Federated Learning Over Wireless: A Cross-layer Framework

FedDUAL: A Dual-Strategy with Adaptive Loss and Dynamic Aggregation for Mitigating Data Heterogeneity in Federated Learning
