Legged Robot Locomotion on Challenging Terrains

Research in legged robot locomotion is moving toward robust, adaptive control methods for navigating complex and uneven terrain. Current work applies deep reinforcement learning and hierarchical control frameworks so that legged robots can traverse challenging environments with improved stability and efficiency. A key focus is sim-to-real transfer methods that effectively bridge the gap between simulated and real-world environments. There is also growing interest in control strategies that adapt to changing robot morphology and environmental conditions. Noteworthy papers in this area include:

  • Robust Humanoid Walking on Compliant and Uneven Terrain with Deep Reinforcement Learning, which demonstrates the effectiveness of a simple training curriculum that exposes RL agents to randomized terrains in simulation (a minimal curriculum sketch follows this list).
  • Post-Convergence Sim-to-Real Policy Transfer, which introduces a principled approach to selecting converged policies for real-world deployment by optimizing worst-case performance transference (a selection sketch also appears below).
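
To make the first item concrete, here is a minimal, hypothetical sketch of a terrain-randomization curriculum for simulation training. The class name, difficulty parameters, and thresholds are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class TerrainCurriculum:
    """Illustrative curriculum: terrain randomization ranges widen as the
    policy's success rate at the current level improves."""

    def __init__(self, levels=10):
        self.level = 0
        self.levels = levels

    def sample_terrain(self, rng):
        # Difficulty fraction grows with the curriculum level.
        frac = self.level / (self.levels - 1)
        return {
            "bump_height_m": rng.uniform(0.0, 0.02 + 0.08 * frac),
            "ground_compliance": rng.uniform(0.0, 0.5 * frac),
            "slope_deg": rng.uniform(-10.0 * frac, 10.0 * frac),
        }

    def update(self, success_rate):
        # Advance only once the policy walks reliably at the current level.
        if success_rate > 0.8 and self.level < self.levels - 1:
            self.level += 1


# Example: sample one terrain configuration at the initial level.
rng = np.random.default_rng(0)
curriculum = TerrainCurriculum()
print(curriculum.sample_terrain(rng))
```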

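For the second item, a simple way to read "optimizing worst-case performance transference" is maximin selection over converged checkpoints: evaluate each candidate across a set of randomized environments and keep the one whose worst per-environment return is highest. The sketch below is an assumption about how such a selection rule could look; the function and argument names are hypothetical.

```python
import numpy as np

def select_worst_case_policy(checkpoints, eval_envs, rollout_return, episodes=5):
    """Return the checkpoint whose worst-case mean return across the
    evaluation environments is highest (maximin selection)."""
    best, best_score = None, -np.inf
    for policy in checkpoints:
        per_env = [
            np.mean([rollout_return(policy, env) for _ in range(episodes)])
            for env in eval_envs
        ]
        worst_case = min(per_env)
        if worst_case > best_score:
            best, best_score = policy, worst_case
    return best, best_score


# Toy usage with stand-in checkpoints, environments, and returns.
returns = {("ckpt_a", "flat"): 10.0, ("ckpt_a", "rubble"): 2.0,
           ("ckpt_b", "flat"): 8.0, ("ckpt_b", "rubble"): 6.0}
best, score = select_worst_case_policy(
    ["ckpt_a", "ckpt_b"], ["flat", "rubble"],
    lambda policy, env: returns[(policy, env)])
print(best, score)  # ckpt_b 6.0 (best worst-case return)
```
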
Sources

Robust Humanoid Walking on Compliant and Uneven Terrain with Deep Reinforcement Learning

Coordinating Spinal and Limb Dynamics for Enhanced Sprawling Robot Mobility

Dynamic Legged Ball Manipulation on Rugged Terrains with Hierarchical Reinforcement Learning

Post-Convergence Sim-to-Real Policy Transfer: A Principled Alternative to Cherry-Picking

Fast and Modular Whole-Body Lagrangian Dynamics of Legged Robots with Changing Morphology
