Recent advances in large language models (LLMs) have focused on improving efficiency and performance through novel tuning and compression techniques. A notable trend is the use of semantic knowledge and self-supervised learning to optimize prompt tuning, cutting computational cost while matching or improving task performance. Approaches such as Semantic Knowledge Tuning and Selection-p report strong results on text classification and understanding tasks, often with fewer trainable parameters and shorter training times. There is also growing interest in handling long documents, where methods like ChuLo retain key information without sacrificing computational efficiency. The field is likewise shifting toward multi-task learning, with task groupings based on relatedness metrics, such as pointwise V-usable information, being explored to improve performance across diverse domains. Adaptive and composite prompt tuning strategies, exemplified by ACCEPT, aim to make prompt tuning more efficient and effective, particularly in few-shot learning scenarios. Together, these developments point toward more capable and adaptable LLMs that can handle complex tasks with greater efficiency and accuracy.
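To make the parameter-efficiency claim behind prompt tuning concrete, the sketch below shows the core idea shared by methods in this family: the pretrained model (here, just its embedding table) stays frozen, and only a small matrix of "soft prompt" vectors prepended to the input embeddings is trained. All sizes and names are illustrative assumptions, not taken from Semantic Knowledge Tuning, Selection-p, or ACCEPT specifically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes for a frozen model; not from any specific paper.
vocab_size, d_model = 50_000, 768
num_prompt_tokens = 20  # length of the trainable soft prompt

embedding_table = rng.normal(size=(vocab_size, d_model))     # frozen
soft_prompt = rng.normal(size=(num_prompt_tokens, d_model))  # trainable

def build_inputs(token_ids):
    """Prepend the trainable soft prompt to the frozen token embeddings."""
    token_embs = embedding_table[token_ids]  # (seq_len, d_model)
    return np.concatenate([soft_prompt, token_embs], axis=0)

inputs = build_inputs(np.array([17, 4096, 88]))
print(inputs.shape)  # (23, 768): 20 prompt vectors + 3 token embeddings

# Only the soft prompt is updated during tuning, so the trainable
# parameter count is tiny relative to the frozen model.
print(soft_prompt.size)       # 15360 trainable parameters
print(embedding_table.size)   # 38400000 frozen parameters
```

In this toy setup the tunable parameters are roughly 0.04% of even just the embedding table, which is the mechanism by which prompt tuning methods achieve fast training with small memory footprints.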
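The relatedness metric mentioned above, pointwise V-usable information (PVI), has a standard definition: PVI(x → y) = −log₂ p_∅(y) + log₂ p_x(y), where p_∅ is a model's probability of the label given a null input and p_x its probability given the actual input. The sketch below computes PVI from such probability estimates; the probability values are made-up numbers for illustration, and how the two models are trained is left out.

```python
import math

def pvi(p_y_given_x: float, p_y_given_null: float) -> float:
    """Bits of information the input x provides about the label y:
    PVI(x -> y) = -log2 p_null(y) + log2 p_x(y)."""
    return -math.log2(p_y_given_null) + math.log2(p_y_given_x)

# Easy instance: seeing the input makes the correct label more likely.
print(round(pvi(0.9, 0.5), 3))   # 0.848 bits

# Hard (or mislabeled) instance: the input lowers the label's
# probability, so PVI is negative.
print(round(pvi(0.25, 0.5), 3))  # -1.0 bits
```

Instances with low or negative PVI are hard for the model class in question, which is what makes averaging PVI over a dataset (or grouping tasks by it) a usable relatedness and difficulty signal.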