Advancements in Robot Policy Evaluation and Control

The field of robotics is moving toward more efficient and accurate methods for policy evaluation and control. A key challenge is the sim-to-real gap: policies trained in simulation often fail to generalize to real-world scenarios. To address this, researchers are exploring approaches such as dynamic digital twins, which can bridge simulation and reality by keeping a simulated model synchronized with the physical robot. Another important thread is the development of more robust and adaptable control methods, including reinforcement learning and model-based optimal control. These advances have the potential to significantly improve the performance and reliability of robots in complex tasks such as locomotion and manipulation. Notable papers in this area include:

  • Real-is-Sim, which proposes a novel behavior cloning framework that incorporates a dynamic digital twin throughout the policy development pipeline.
  • PTRL, which introduces a fine-tuning mechanism for transferring policies across different robots, improving training efficiency and model transferability.
  • RAMBO, which integrates model-based reaction force optimization with a feedback policy trained with reinforcement learning, enabling precise manipulation and robust locomotion.
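The RAMBO-style combination of a model-based controller with a learned feedback policy is often realized as a residual architecture: a model-based term provides the nominal command and a trained policy adds a corrective term. The sketch below illustrates that general pattern only; it is not RAMBO's actual formulation (which optimizes reaction forces), and all function names, gains, and the linear "policy" are hypothetical stand-ins.

```python
import numpy as np

def model_based_torque(q, qd, q_ref, kp=40.0, kd=8.0):
    """Nominal command from a simplified model (here a PD law, standing in
    for a model-based optimal controller such as a reaction-force QP)."""
    return kp * (q_ref - q) - kd * qd

def rl_residual_policy(obs, weights):
    """Placeholder learned policy: a bounded linear map standing in for a
    trained RL network that outputs corrective torques."""
    return np.tanh(weights @ obs)

def combined_controller(q, qd, q_ref, weights):
    """RL-augmented model-based control: final command is the model-based
    torque plus a learned residual correction."""
    obs = np.concatenate([q, qd, q_ref - q])
    return model_based_torque(q, qd, q_ref) + rl_residual_policy(obs, weights)

# Toy rollout on a 2-DoF unit-mass double integrator.
rng = np.random.default_rng(0)
weights = 0.1 * rng.standard_normal((2, 6))  # frozen "trained" weights
q, qd = np.zeros(2), np.zeros(2)
q_ref = np.array([0.5, -0.3])
dt = 0.01
for _ in range(500):
    tau = combined_controller(q, qd, q_ref, weights)
    qd += tau * dt   # Euler integration of unit-mass dynamics
    q += qd * dt
print(np.round(q, 2))  # close to q_ref, offset only by the small residual
```

The residual structure keeps the learned component small and bounded, so the model-based term dominates nominal behavior while the policy absorbs model mismatch, which is the usual motivation for this kind of hybrid.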

Sources

Real-is-Sim: Bridging the Sim-to-Real Gap with a Dynamic Digital Twin for Real-World Robot Policy Evaluation

Fine Tuning a Data-Driven Estimator

PTRL: Prior Transfer Deep Reinforcement Learning for Legged Robots Locomotion

Controller Distillation Reduces Fragile Brain-Body Co-Adaptation and Enables Migrations in MAP-Elites

Sim-to-Real of Humanoid Locomotion Policies via Joint Torque Space Perturbation Injection

RAMBO: RL-augmented Model-based Optimal Control for Whole-body Loco-manipulation
