The field of federated learning and network optimization is moving toward more efficient methods for handling communication errors, data heterogeneity, and network latency. Researchers are exploring decentralized federated learning (D-FL), age-aware partial gradient updates, and client selection methods that accommodate multiple forms of heterogeneity. These innovations have the potential to improve the performance and scalability of federated learning systems. Notable papers in this area include:
- Route-and-Aggregate Decentralized Federated Learning Under Communication Errors, which proposes a route-and-aggregate D-FL strategy that outperforms traditional flooding-based aggregation.
- Age-Aware Partial Gradient Update Strategy for Federated Learning Over the Air, which achieves higher accuracy and more stable convergence than baseline methods.
- Client Selection in Federated Learning with Data Heterogeneity and Network Latencies, which proposes two novel client selection schemes that handle both data and latency heterogeneity.
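To make the selection trade-off concrete, here is a minimal sketch of a client selection heuristic that balances data heterogeneity against round latency. The scoring rule, the `alpha` weight, and the per-client fields (`divergence`, `latency`) are illustrative assumptions for exposition, not the schemes proposed in the paper.

```python
def select_clients(clients, k, alpha=0.5):
    """Pick k clients by a weighted score: reward data-distribution
    divergence (a more heterogeneous client may contribute a more
    informative update) and penalize expected round latency (a slow
    client delays synchronous aggregation).

    NOTE: this is an illustrative heuristic, not the paper's method.
    """
    def score(c):
        # Higher divergence raises the score; higher latency lowers it.
        return alpha * c["divergence"] - (1 - alpha) * c["latency"]

    return sorted(clients, key=score, reverse=True)[:k]


clients = [
    {"id": 0, "divergence": 0.9, "latency": 2.0},  # informative but slow
    {"id": 1, "divergence": 0.4, "latency": 0.5},  # fast but less informative
    {"id": 2, "divergence": 0.8, "latency": 0.6},  # good on both axes
]
selected = select_clients(clients, k=2)
```

Tuning `alpha` toward 1 prioritizes statistical utility, while pushing it toward 0 prioritizes fast rounds; a practical scheme would also need to estimate these per-client quantities online.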