Recent advances in reinforcement learning and autonomous agents show a marked shift toward using large language models (LLMs) to generate and refine reward functions, to plan, and to acquire skills. One notable trend is video-to-reward methods, which translate observed behaviors directly into reward functions, enabling more precise and controllable learning for legged robots. Interest is also growing in zero-shot behavior generation, where agents produce behaviors from language instructions without task-specific supervision, broadening the applicability of reinforcement learning to diverse tasks.

Query-efficient planning with LLMs has likewise emerged as a promising direction, with generative planners adapting better than heuristic-based methods. Integrating foundation models into interactive environments for hypothesis testing and information gathering has opened new avenues for strategic decision-making, and safety and generalizability in LLM-based policy learning are being addressed through training pipelines that improve data efficiency and adaptation to unseen scenarios.

Progress also continues on model-based planning frameworks for general-purpose manipulation tasks driven by multi-modal inputs, underscoring the role of visual and language representations in planning and execution. Finally, AI-assisted skill design is being explored to build adaptable agents, with LLMs used for reward design and for implementing complex behaviors. Together, these developments push the boundaries of what autonomous systems can achieve, highlighting the role of LLMs in learning, planning, and decision-making.
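To make the LLM-as-reward-designer idea concrete, here is a minimal, self-contained sketch of the generate-and-select loop such systems use: candidate reward functions are proposed, each is scored by how well a policy trained against it performs on the true task metric, and the best candidate is kept. The candidates below are hand-written stand-ins for LLM outputs, and the "training" is a trivial greedy rollout on a toy 1-D reaching task; both are illustrative assumptions, not any specific paper's method.

```python
def rollout(reward_fn, steps=50):
    """Greedy hill-climbing on a 1-D state under the candidate reward.

    Returns the final distance to the goal (the true task metric,
    which the reward function is only a proxy for).
    """
    state, goal = 0.0, 10.0
    for _ in range(steps):
        # Greedily pick the neighbor the candidate reward prefers.
        moves = [state - 1.0, state + 1.0]
        state = max(moves, key=lambda s: reward_fn(s, goal))
    return abs(goal - state)


# Stand-ins for LLM-proposed reward candidates. A real system would
# sample these from a language model and sanity-check them before use.
candidates = {
    "negative_distance": lambda s, g: -abs(g - s),
    "inverse_distance": lambda s, g: 1.0 / (1.0 + abs(g - s)),
    "constant": lambda s, g: 0.0,  # a degenerate proposal the loop should reject
}

# Evaluate each candidate by the true task metric and keep the best.
scores = {name: rollout(fn) for name, fn in candidates.items()}
best = min(scores, key=scores.get)
```

The key design point is that candidates are ranked by ground-truth task performance, not by the reward values themselves, so degenerate proposals (like the constant reward) are filtered out automatically; in practice the scores would also be fed back to the LLM as context for the next round of proposals.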