The field of robotics is moving toward more efficient and accurate methods for policy evaluation and control. A key challenge is the sim-to-real gap: policies trained in simulation often fail to generalize to real-world conditions. To address this, researchers are exploring approaches such as dynamic digital twins, which keep a simulation synchronized with the physical system and thereby bridge the gap between simulation and reality. Another important direction is the development of more robust and adaptable control methods, including reinforcement learning and model-based optimal control. These advances could significantly improve the performance and reliability of robots in complex tasks such as locomotion and manipulation. Notable papers in this area include:
- Real-is-Sim, which proposes a behavior cloning framework that incorporates a dynamic digital twin throughout the policy development pipeline.
- PTRL, which introduces a fine-tuning mechanism for transferring policies across different robots, improving training efficiency and model transferability.
- RAMBO, which integrates model-based reaction force optimization with a feedback policy trained with reinforcement learning, enabling precise manipulation and robust locomotion.
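The hybrid structure that RAMBO exemplifies, a model-based controller augmented by a learned feedback term, can be illustrated with a minimal sketch. This is not RAMBO's actual method: the PD controller, the linear "policy", and its gains are all illustrative stand-ins, and the weights are hand-picked rather than trained with reinforcement learning.

```python
import numpy as np

def model_based_control(state, target, kp=2.0, kd=0.5):
    """Nominal PD-style controller derived from a simple dynamics model
    (a stand-in for a model-based force-optimization layer)."""
    pos, vel = state
    return kp * (target - pos) - kd * vel

def learned_feedback(state, weights):
    """Tiny linear 'policy' standing in for an RL-trained feedback network.
    The weights here are illustrative, not trained."""
    return float(np.dot(weights, state))

def hybrid_control(state, target, weights):
    """Sum the model-based term and the learned correction, the basic
    composition used in hybrid model-based + RL control schemes."""
    return model_based_control(state, target) + learned_feedback(state, weights)

# Simulate a 1-D point mass tracking a target position.
dt, target = 0.01, 1.0
state = np.array([0.0, 0.0])        # [position, velocity]
weights = np.array([-0.1, -0.05])   # hypothetical feedback gains
for _ in range(2000):
    u = hybrid_control(state, target, weights)
    state = state + dt * np.array([state[1], u])  # explicit Euler step

print(state[0])  # position settles close to the target
```

Because the learned term here acts as an extra spring-damper, the closed-loop system settles slightly below the target; in an actual learned setup the feedback policy would be trained to correct, not bias, the nominal controller.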