The field of large language models (LLMs) is moving toward more efficient fine-tuning methods, particularly in scenarios with limited data or heterogeneous, resource-constrained devices. Research focuses on adapting LLMs to specific tasks or environments while reducing training cost and improving communication efficiency. Notable directions include data-driven initialization methods, split learning frameworks, and device-cloud collaborative inference architectures, all aiming to improve LLM performance across applications while addressing data privacy, limited computational resources, and latency. Noteworthy papers include $D^2LoRA$, which introduces a data-driven approach to initializing LoRA matrices, improving training efficiency; SplitFrozen, a split learning framework that enables efficient LLM fine-tuning by strategically freezing device-side model layers, achieving significant reductions in device-side computation and total training time; and HierFedLoRA, a hierarchical framework that addresses data heterogeneity and resource constraints in federated fine-tuning, demonstrating improved model accuracy and reduced fine-tuning time.
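All three papers build on LoRA, which freezes the pretrained weights and trains only a low-rank update. The sketch below is a minimal NumPy illustration of that mechanism, not any paper's implementation: it uses the standard LoRA initialization (random $A$, zero $B$), whereas $D^2LoRA$ replaces this with a data-driven initialization. All matrix shapes and the `alpha` value here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 8, 2  # adapter rank r is much smaller than d_out, d_in

# Frozen pretrained weight (stands in for one projection matrix of an LLM).
W = rng.standard_normal((d_out, d_in))

# LoRA adapters: only A and B are trained during fine-tuning.
# Standard init: A small random, B zero, so training starts at the base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 4.0  # scaling hyperparameter (illustrative value)

def lora_forward(x):
    # y = W x + (alpha / r) * B A x : base output plus a low-rank correction.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapted layer reproduces the frozen base layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters shrink from d_out*d_in to r*(d_out + d_in).
print("full:", W.size, "lora:", A.size + B.size)
```

Because only $A$ and $B$ change, federated or split settings such as SplitFrozen and HierFedLoRA need to communicate or update only these small matrices rather than the full weights, which is what makes the efficiency gains above possible.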