The field of robotics is undergoing a significant shift toward greater autonomy and flexibility, driven by the integration of large language models (LLMs) and vision-language models (VLMs). These models enable robots to understand and respond to natural language instructions, perceive their environments, and make decisions under uncertain or partial information. Researchers are exploring ways to leverage LLMs and VLMs for planning, control, and perception, improving performance and generalization across tasks and environments. Notable directions include using LLMs for strategy decision and realization, VLMs for uncertainty estimation and belief-space planning, and combining language models with model predictive control (MPC) for manipulation planning and trajectory generation. Together, these advances point toward more efficient, adaptive, and human-like robotic systems.

Noteworthy papers include AuDeRe, which proposes a framework for automated strategy decision and realization in robot planning and control via LLMs, increasing robotic autonomy and reducing the need for manual tuning; and Vision-Language Model Predictive Control, which integrates VLMs with model predictive control for manipulation planning and trajectory generation, achieving strong performance on real-world robotic manipulation tasks.
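To make the VLM-plus-MPC pattern concrete, the sketch below shows a minimal sampling-based receding-horizon loop in which a vision-language model scores candidate rollouts against a language instruction and the best sequence's first action is executed. This is an illustrative sketch only: the names vlm_score, rollout, and vlm_guided_mpc, and parameters such as horizon and num_samples, are assumptions, not the interface of the Vision-Language Model Predictive Control paper.

```python
import numpy as np

def vlm_score(image, instruction, predicted_states):
    """Hypothetical stand-in for a VLM that rates how well a predicted rollout
    matches the language instruction (higher is better). A real system would
    render the rollout and query an actual vision-language model here."""
    return np.random.rand()  # placeholder score

def rollout(state, actions, dynamics):
    """Simulate a candidate action sequence with a (learned or analytic) model."""
    states = []
    for a in actions:
        state = dynamics(state, a)
        states.append(state)
    return states

def vlm_guided_mpc(state, image, instruction, dynamics,
                   horizon=10, num_samples=64, action_dim=7):
    """One receding-horizon step: sample action sequences, score each rollout
    with the VLM, and return the first action of the best-scoring sequence."""
    candidates = np.random.uniform(-1.0, 1.0,
                                   size=(num_samples, horizon, action_dim))
    scores = [vlm_score(image, instruction, rollout(state, seq, dynamics))
              for seq in candidates]
    best = candidates[int(np.argmax(scores))]
    return best[0]  # execute only the first action, then replan

# Toy usage with a trivial linear "dynamics" model.
dynamics = lambda s, a: s + 0.1 * a
action = vlm_guided_mpc(np.zeros(7), image=None,
                        instruction="pick up the red block", dynamics=dynamics)
```

The design point this illustrates is the receding-horizon structure: the language model is consulted only to rank candidate futures, while low-level trajectory generation remains a conventional sampling-based optimization that replans at every step.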