Recent advances in autonomous driving research draw heavily on Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs). These models are being used to strengthen reasoning and decision-making in autonomous vehicles, particularly in complex and dynamic scenarios. Combining knowledge-driven insights with data-driven methods is yielding more robust and adaptive systems, capable of handling high-risk traffic situations and long-tail events with improved safety and efficiency. Notably, using VLMs as teachers during training, without requiring them at inference time, is proving to be a practical route to real-time deployment. In parallel, open-source frameworks are emerging that offer more accessible and efficient solutions for end-to-end autonomous driving, thereby democratizing the development of advanced autonomous systems. Together, these developments push the boundaries of autonomous driving technology, making it more reliable, adaptable, and safer for real-world applications.
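To make the teacher-student idea concrete, the sketch below shows one common way such distillation can be set up: a frozen VLM teacher provides supervision signals during training, while only a lightweight student planner is deployed at inference. This is a minimal illustration, not any specific paper's method; all names (`StudentPlanner`, `training_step`), the feature-matching loss, and the dimensions (e.g., a 768-dim teacher feature) are hypothetical assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentPlanner(nn.Module):
    """Lightweight driving policy that runs at inference time (no VLM needed)."""
    def __init__(self, obs_dim=256, hidden_dim=512, action_dim=3, teacher_dim=768):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.action_head = nn.Linear(hidden_dim, action_dim)
        # Projection so student features can be compared to the (assumed
        # 768-dim) features produced by the frozen VLM teacher.
        self.distill_head = nn.Linear(hidden_dim, teacher_dim)

    def forward(self, obs):
        h = self.encoder(obs)
        return self.action_head(h), self.distill_head(h)

def training_step(student, optimizer, obs, expert_action, teacher_feat, alpha=0.5):
    """One distillation step. `teacher_feat` comes from the frozen VLM teacher
    (computed offline or in a parallel forward pass); only the student is
    updated, and only the student is shipped for real-time deployment."""
    pred_action, student_feat = student(obs)
    # Standard imitation loss against expert/recorded driving actions.
    imitation_loss = F.mse_loss(pred_action, expert_action)
    # Distillation loss: align student representations with the teacher's.
    distill_loss = F.mse_loss(student_feat, teacher_feat)
    loss = imitation_loss + alpha * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with dummy tensors standing in for real driving data:
student = StudentPlanner()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
obs = torch.randn(8, 256)            # batch of encoded sensor observations
expert_action = torch.randn(8, 3)    # e.g., steering / throttle / brake targets
teacher_feat = torch.randn(8, 768)   # precomputed VLM teacher features
training_step(student, optimizer, obs, expert_action, teacher_feat)

# At inference, only the student runs; the VLM teacher is absent:
# action, _ = student(obs)
```

The key design point this illustrates is that the distillation term adds cost only during training; the deployed model's latency and memory footprint are those of the small student alone, which is what makes the approach attractive for real-time driving stacks.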