Recent work on large language models (LLMs) and multimodal models has focused on making them more efficient and scalable. Key innovations include adaptive tokenization methods that allocate tokens dynamically according to data complexity, easing computational and memory bottlenecks. For large vision-language models, techniques such as dynamic token sparsification and KV cache compression address both compute and memory constraints. Frameworks for optimizing GPU memory usage have also been proposed, enabling the execution of models that would otherwise exceed available hardware resources.

These advances are paving the way for more capable multimodal models and world models across a range of tasks. Notably, memory-enhanced temporal compression in video understanding models has improved temporal-spatial interaction, yielding better comprehension of longer videos. In addition, proxy models with cost-saving optimizations have made access to large language models more economical, broadening their reach. Together, these innovations make advanced AI models markedly more accessible and efficient, with the potential to accelerate progress across many machine learning applications.
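To make the idea of dynamic token sparsification concrete, here is a minimal, illustrative sketch: given a per-token importance score (for example, the mean attention a token receives), keep only the highest-scoring fraction of tokens and drop the rest before later layers. The function name, the score source, and the keep ratio are all assumptions for illustration, not drawn from any specific paper's method.

```python
def sparsify_tokens(tokens, scores, keep_ratio=0.5):
    """Keep the highest-scoring fraction of tokens, preserving their order.

    tokens: list of token representations (any objects).
    scores: per-token importance scores (e.g. mean attention received) -- an
            illustrative choice; real systems derive importance differently.
    keep_ratio: fraction of tokens to retain (at least one is always kept).
    """
    assert len(tokens) == len(scores)
    k = max(1, int(len(tokens) * keep_ratio))
    # Pick the indices of the k most important tokens, then restore
    # the original sequence order so positional structure survives.
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    return [tokens[i] for i in sorted(top)]

tokens = ["[CLS]", "a", "photo", "of", "a", "cat"]
scores = [0.9, 0.1, 0.7, 0.2, 0.1, 0.8]
print(sparsify_tokens(tokens, scores, keep_ratio=0.5))
# -> ['[CLS]', 'photo', 'cat']
```

Pruning roughly half the tokens this way shrinks both the attention computation and the KV cache for subsequent layers, which is the efficiency lever these methods exploit; production systems typically make the keep ratio adaptive per layer and per input rather than fixed.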