Enhancing Personalization and Efficiency in Federated Learning

The field of federated learning (FL) is shifting toward more personalized, efficient, and robust solutions, driven by the need to address data heterogeneity, computational constraints, and privacy concerns. Recent work develops frameworks that combine parameter-efficient fine-tuning, adaptive pruning, and novel aggregation strategies to improve model performance while cutting communication overhead and computational demands. There is also growing emphasis on continual learning in the federated setting, which aims to mitigate catastrophic forgetting and keep models adaptable as new data arrives. In parallel, Mixture-of-Experts (MoE) architectures and knowledge distillation are emerging as key tools for improving both personalization and generalization. Together, these developments make FL more practical for real-world deployment, particularly in resource-constrained environments such as industrial IoT and edge computing.
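
As a concrete illustration of the communication savings, the sketch below shows parameter-efficient federated averaging in PyTorch: clients fine-tune and transmit only a small residual adapter, so per-round traffic scales with the adapter size rather than the full model. This is a minimal toy, not any single paper's method; all class and function names are assumptions.

```python
# Minimal sketch (illustrative, not from any specific paper) of
# parameter-efficient federated averaging: only adapter weights travel.
import torch
import torch.nn as nn

class AdapterModel(nn.Module):
    def __init__(self, dim=64, bottleneck=8):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)      # frozen shared weights
        self.adapter = nn.Sequential(            # small trainable adapter
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim)
        )
        for p in self.backbone.parameters():
            p.requires_grad = False

    def forward(self, x):
        h = self.backbone(x)
        return h + self.adapter(h)               # residual adapter

def local_update(model, data, target, lr=1e-2, steps=5):
    opt = torch.optim.SGD(model.adapter.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(data), target)
        loss.backward()
        opt.step()
    # only the adapter's state dict leaves the client
    return {k: v.clone() for k, v in model.adapter.state_dict().items()}

def aggregate(adapter_states):
    # plain FedAvg, restricted to adapter parameters
    keys = adapter_states[0].keys()
    return {k: torch.stack([s[k] for s in adapter_states]).mean(0) for k in keys}

# one communication round with three simulated clients
global_model = AdapterModel()
states = []
for _ in range(3):
    client = AdapterModel()
    client.load_state_dict(global_model.state_dict())
    x, y = torch.randn(32, 64), torch.randn(32, 64)
    states.append(local_update(client, x, y))
global_model.adapter.load_state_dict(aggregate(states))
```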

Noteworthy Papers:

  • Client-Customized Adaptation for Parameter-Efficient Federated Learning: Introduces a hypernetwork-based framework that generates client-specific adapters, substantially improving convergence stability in heterogeneous FL scenarios; a toy sketch of the idea follows this list.
  • FedDTPT: Federated Discrete and Transferable Prompt Tuning for Black-Box Large Language Models: Proposes a prompt tuning method that improves privacy and lowers computational cost, achieving high accuracy and robustness on non-IID data; see the prompt-search sketch below.
  • FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation: Pairs an MoE architecture with a domain-aware aggregation strategy to improve personalization and reduce communication burden, showing strong performance in FL settings; see the aggregation sketch below.
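
The sketches below illustrate the three approaches above in simplified form. First, a toy PyTorch sketch in the spirit of C2A's hypernetwork idea: a server-side hypernetwork maps a per-client descriptor (here, the client's label distribution) to the weights of a small residual adapter, yielding a client-customized adapter without storing one per client. Shapes, names, and the choice of descriptor are illustrative assumptions, not the paper's architecture.

```python
# Toy sketch: a hypernetwork emits client-specific adapter weights.
import torch
import torch.nn as nn

class AdapterHyperNet(nn.Module):
    def __init__(self, num_classes=10, dim=64, bottleneck=8):
        super().__init__()
        self.dim, self.bottleneck = dim, bottleneck
        n_params = dim * bottleneck * 2 + bottleneck + dim  # two linear layers
        self.net = nn.Sequential(
            nn.Linear(num_classes, 128), nn.ReLU(), nn.Linear(128, n_params)
        )

    def forward(self, client_stats):
        # flat parameter vector -> per-layer weight/bias tensors
        flat = self.net(client_stats)
        d, b = self.dim, self.bottleneck
        i = 0
        w1 = flat[i:i + d * b].view(b, d); i += d * b
        b1 = flat[i:i + b]; i += b
        w2 = flat[i:i + d * b].view(d, b); i += d * b
        b2 = flat[i:i + d]
        return w1, b1, w2, b2

def adapter_forward(h, params):
    w1, b1, w2, b2 = params
    return h + nn.functional.linear(
        torch.relu(nn.functional.linear(h, w1, b1)), w2, b2)

hyper = AdapterHyperNet()
# e.g. a client whose data is concentrated on two classes
client_stats = torch.tensor([0.5, 0.5] + [0.0] * 8)
params = hyper(client_stats)
out = adapter_forward(torch.randn(4, 64), params)  # client-customized adapter
```

In the actual C2A setting the hypernetwork itself would be trained across rounds; here it is only run forward for illustration.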
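
Second, a standard-library-only sketch of black-box discrete prompt tuning in the spirit of FedDTPT: each client improves a discrete prompt using only scalar feedback from a black-box scorer, and the server aggregates by position-wise majority vote over tokens. The vocabulary, the mock scorers, and the voting rule are assumptions for illustration, not the paper's method.

```python
# Toy sketch: gradient-free discrete prompt search with majority-vote
# aggregation. score_fn is a stand-in for a black-box LLM's feedback.
import random
from collections import Counter

VOCAB = ["helpful", "concise", "expert", "formal", "detailed", "brief"]

def client_search(score_fn, prompt, iters=20):
    prompt = list(prompt)
    for _ in range(iters):
        i = random.randrange(len(prompt))
        cand = prompt.copy()
        cand[i] = random.choice(VOCAB)
        if score_fn(cand) > score_fn(prompt):   # black-box feedback only
            prompt = cand
    return prompt

def server_aggregate(client_prompts):
    # position-wise majority vote over discrete tokens
    return [Counter(toks).most_common(1)[0][0]
            for toks in zip(*client_prompts)]

# mock black-box scorers with client-specific preferences
clients = [lambda p, w=w: sum(tok == w for tok in p)
           for w in ("concise", "expert", "concise")]
init = ["formal"] * 4
agg = server_aggregate([client_search(f, init) for f in clients])
```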
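
Third, a minimal sketch of domain-aware aggregation in the spirit of FedMoE-DA: rather than averaging every client's expert uniformly, each client receives an expert that is a similarity-weighted combination of peer experts. The cosine-similarity affinity and softmax temperature below are illustrative choices, not the paper's exact rule.

```python
# Toy sketch: per-client expert aggregation weighted by domain similarity.
import torch

def domain_aware_aggregate(expert_weights, domain_vecs):
    """expert_weights: (clients, params) flattened expert parameters.
    domain_vecs: (clients, d) per-client domain descriptors, e.g. label
    histograms or feature statistics."""
    sims = torch.nn.functional.cosine_similarity(
        domain_vecs.unsqueeze(1), domain_vecs.unsqueeze(0), dim=-1)
    weights = torch.softmax(sims / 0.1, dim=-1)   # row-normalized affinity
    return weights @ expert_weights                # personalized per client

experts = torch.randn(4, 1000)       # 4 clients, flattened expert params
domains = torch.rand(4, 10)          # e.g. per-client label histograms
personalized = domain_aware_aggregate(experts, domains)  # (4, 1000)
```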

Sources

C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning

Personalized Federated Learning via Feature Distribution Adaptation

FedDTPT: Federated Discrete and Transferable Prompt Tuning for Black-Box Large Language Models

Automatic Structured Pruning for Efficient Architecture in Federated Learning

FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients

FPPL: An Efficient and Non-IID Robust Federated Continual Learning Framework

Masked Autoencoders are Parameter-Efficient Federated Continual Learners

FedMoE-DA: Federated Mixture of Experts via Domain Aware Fine-grained Aggregation

Towards Personalized Federated Learning via Comprehensive Knowledge Distillation

Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis
