Federated Learning Advances

Federated learning research is advancing rapidly, driven by three persistent challenges: communication overhead, privacy, and resource utilization. To address them, researchers are combining federated learning with split learning, asynchronous training, and low-rank adaptation (LoRA), aiming to improve efficiency, scalability, and accuracy in resource-constrained settings. Noteworthy papers include VLLFL, a lightweight vision-language-model-based federated learning framework for smart agriculture, and FedOptima, a resource-optimized federated learning system that minimizes idle time while improving model accuracy. Other notable works extend these ideas to large language models trained via federated split learning over communication networks (FedsLLM) and to split federated learning with parallel training and aggregation (Collaborative Split Federated Learning). Together, these developments point toward more efficient and practical federated learning systems for real-world deployment; a minimal sketch of the aggregation pattern common to several of them appears below.
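To make the recurring themes of this digest concrete, here is a minimal sketch of communication-efficient federated averaging over low-rank (LoRA-style) adapter weights: clients train and transmit only small adapter matrices, and the server performs a dataset-size-weighted average. This is an illustrative assumption-laden toy, not the method of any paper listed below; the shapes, function names, and simulated "local training" step are all hypothetical.

```python
# Minimal sketch: FedAvg over LoRA-style low-rank adapters.
# Illustrative only; shapes, names, and the simulated local update
# are assumptions, not any cited paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_OUT, RANK = 64, 64, 4  # frozen base layer is D_IN x D_OUT


def init_adapter():
    """Clients train only small A (D_IN x RANK) and B (RANK x D_OUT)
    matrices; the weight delta is A @ B, so each round communicates
    RANK * (D_IN + D_OUT) values instead of D_IN * D_OUT."""
    return {
        "A": rng.normal(scale=0.01, size=(D_IN, RANK)),
        "B": np.zeros((RANK, D_OUT)),
    }


def local_update(adapter, steps=5, lr=0.1):
    """Stand-in for local training: random gradient-like updates.
    A real client would compute gradients on its private data here."""
    for _ in range(steps):
        adapter["A"] -= lr * rng.normal(scale=0.01, size=adapter["A"].shape)
        adapter["B"] -= lr * rng.normal(scale=0.01, size=adapter["B"].shape)
    return adapter


def fedavg(adapters, sizes):
    """Server-side aggregation: average each adapter matrix,
    weighted by client dataset size."""
    total = sum(sizes)
    return {
        key: sum(n * a[key] for n, a in zip(sizes, adapters)) / total
        for key in adapters[0]
    }


# One communication round with three simulated clients.
global_adapter = init_adapter()
client_sizes = [100, 300, 600]
client_adapters = [
    local_update({k: v.copy() for k, v in global_adapter.items()})
    for _ in client_sizes
]
global_adapter = fedavg(client_adapters, client_sizes)
print("delta norm after round:", np.linalg.norm(global_adapter["A"] @ global_adapter["B"]))
```

Split federated learning variants change where the forward pass is cut (part of the model runs on the server), and asynchronous designs drop the round barrier, but the weighted-aggregation step above is the shared core.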

Sources

VLLFL: A Vision-Language Model Based Lightweight Federated Learning Framework for Smart Agriculture

SFL-LEO: Asynchronous Split-Federated Learning Design for LEO Satellite-Ground Network Framework

Resource Utilization Optimized Federated Learning

Efficient Federated Split Learning for Large Language Models over Communication Networks

FedFetch: Faster Federated Learning with Adaptive Downstream Prefetching

A LoRA-Based Approach to Fine-Tuning LLMs for Educational Guidance in Resource-Constrained Settings

Collaborative Split Federated Learning with Parallel Training and Aggregation

Towards a Distributed Federated Learning Aggregation Placement using Particle Swarm Intelligence

Federated Learning of Low-Rank One-Shot Image Detection Models in Edge Devices with Scalable Accuracy and Compute Complexity

Cross-region Model Training with Communication-Computation Overlapping and Delay Compensation

Replay to Remember: Retaining Domain Knowledge in Streaming Language Models
