The field of federated learning is advancing rapidly, particularly in addressing communication overhead, privacy concerns, and resource utilization. Researchers are exploring innovative approaches to optimize federated learning frameworks, including the integration of split learning, asynchronous training, and low-rank adaptation techniques. These efforts aim to improve the efficiency, scalability, and accuracy of federated learning models, especially in resource-constrained settings. Noteworthy papers in this area include VLLFL, a lightweight vision-language-model-based federated learning framework for smart agriculture, and FedOptima, a resource-optimized federated learning system designed to minimize idle time and improve model accuracy. Other notable works, such as FedsLLM for large language models and Collaborative-Split Federated Learning for split federated learning, broaden the paradigm's reach. Together, these developments are poised to drive further innovation, enabling more efficient and effective federated learning solutions for real-world applications.
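To illustrate why low-rank adaptation helps with communication overhead, the following sketch shows the core idea in NumPy: clients transmit two small low-rank factors instead of a full weight delta, and the server averages them FedAvg-style. This is a minimal illustration, not the method of any specific paper cited above; the dimensions, rank, and random "training" stand-ins are all hypothetical.

```python
import numpy as np

# Hypothetical sketch: low-rank adaptation (LoRA-style) for
# communication-efficient federated updates. Instead of sending a full
# d_out x d_in weight delta, each client sends factors A (d_out x r)
# and B (r x d_in), with r << min(d_out, d_in).
rng = np.random.default_rng(0)

d_out, d_in, rank = 256, 512, 8
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight

def client_update(rank):
    # Each client would train only the low-rank factors; random
    # small matrices stand in for the trained factors here.
    A = rng.standard_normal((d_out, rank)) * 0.01
    B = rng.standard_normal((rank, d_in)) * 0.01
    return A, B

# Server aggregates the low-rank updates from several clients
# with a simple FedAvg-style mean.
updates = [client_update(rank) for _ in range(4)]
A_avg = np.mean([A for A, _ in updates], axis=0)
B_avg = np.mean([B for _, B in updates], axis=0)
W_adapted = W + A_avg @ B_avg               # effective weight after adaptation

full_params = d_out * d_in                  # 131072 values per round
lora_params = rank * (d_out + d_in)         # 6144 values per round
print(f"communicated per client: {lora_params} vs full {full_params}")
```

The communication saving scales with the chosen rank: here each client uploads roughly 5% of the full-matrix payload, which is the kind of reduction that makes federated fine-tuning practical on bandwidth-constrained edge devices.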