Humanoid robotics research currently centers on locomotion control and stability. One line of work integrates reinforcement learning with stabilizing reward functions, shaping the reward so that robots learn stable postures early and the learning process converges faster. Another develops kinematic actuation models that handle non-linear transmission effects in closed-loop kinematic chains, yielding more accurate and computationally efficient motion strategies. A third thread applies iterative algorithms to derive generalized kinematic models for articulated vehicles and multi-axle systems, producing better control-oriented models.

Noteworthy papers include FLAM, a foundation-model-based method for humanoid locomotion and manipulation, and Quattro, a transformer-accelerated iterative Linear Quadratic Regulator (iLQR) framework for fast trajectory optimization.

Together, these advances should let humanoid robots navigate complex environments and perform a wider range of tasks more reliably and efficiently.
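The stabilizing-reward idea can be made concrete with a minimal sketch of reward shaping for posture stability. This is an illustrative construction, not the reward used by FLAM or any specific paper; the terms, weights, and limits are assumptions chosen for clarity.

```python
import numpy as np

def stabilizing_reward(base_vel_x, torso_tilt, joint_torques,
                       v_target=1.0, tilt_limit=0.4):
    """Hypothetical shaped reward: track a target velocity, stay upright,
    and penalize actuation effort. All weights are illustrative."""
    r_vel = np.exp(-(base_vel_x - v_target) ** 2)        # velocity tracking
    r_upright = np.exp(-(torso_tilt / tilt_limit) ** 2)  # posture stability
    r_energy = -1e-3 * np.sum(np.square(joint_torques))  # effort penalty
    # Weighting the upright term highest encourages the policy to learn
    # a stable posture before optimizing speed.
    return r_vel + 2.0 * r_upright + r_energy
```

Because the upright term dominates, early training is rewarded mainly for not falling, which is the mechanism by which such shaping can accelerate learning.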
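For the trajectory-optimization thread, the core of an iLQR iteration can be sketched on a toy scalar system. This omits Quattro's transformer acceleration entirely; the dynamics, cost weights, and the absence of a line search are simplifying assumptions.

```python
import numpy as np

def f(x, u, dt=0.1):
    # Toy nonlinear scalar dynamics (assumed for illustration).
    return x + dt * (u - 0.1 * x ** 3)

def ilqr(x0, T=30, iters=20, r=0.1, dt=0.1):
    """Minimal iLQR: stage cost x^2 + r*u^2, terminal cost x^2."""
    us = np.zeros(T)
    xs = np.empty(T + 1)
    for _ in range(iters):
        # Forward rollout under the current controls.
        xs[0] = x0
        for t in range(T):
            xs[t + 1] = f(xs[t], us[t], dt)
        # Backward pass: propagate a local quadratic value function.
        Vx, Vxx = 2.0 * xs[T], 2.0
        ks, Ks = np.empty(T), np.empty(T)
        for t in reversed(range(T)):
            A = 1.0 - 0.3 * dt * xs[t] ** 2    # df/dx at (xs[t], us[t])
            B = dt                             # df/du
            Qx = 2.0 * xs[t] + A * Vx
            Qu = 2.0 * r * us[t] + B * Vx
            Qxx = 2.0 + A * Vxx * A
            Quu = 2.0 * r + B * Vxx * B
            Qux = B * Vxx * A
            ks[t] = -Qu / Quu                  # feedforward correction
            Ks[t] = -Qux / Quu                 # feedback gain
            Vx = Qx - Qux * Qu / Quu
            Vxx = Qxx - Qux * Qux / Quu
        # Forward pass: apply the affine policy (no line search, for brevity).
        x = x0
        for t in range(T):
            us[t] = us[t] + ks[t] + Ks[t] * (x - xs[t])
            x = f(x, us[t], dt)
    # Final rollout under the optimized controls.
    xs[0] = x0
    for t in range(T):
        xs[t + 1] = f(xs[t], us[t], dt)
    return xs, us
```

Each iteration alternates a rollout, a backward Riccati-style sweep, and a forward correction; Quattro's contribution is accelerating this loop, not changing its structure.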