Advances in Reinforcement Learning and Robotic Control

The field of reinforcement learning (RL) and robotic control is advancing rapidly, with a focus on improving sample efficiency, robustness, and generalization. Recent work highlights the value of model-based approaches, which learn a model of the environment's dynamics and use it to plan or optimize policies. There is also growing interest in offline RL, which learns from previously collected data without further interaction with the environment, and in skill- and hierarchy-based methods that improve policy transfer and adaptability on complex, long-horizon tasks.

Noteworthy papers in this area include the Conditional Diffusion Model planner, which models environmental dynamics accurately enough to plan stronger policies; the NeoRL-2 benchmark, which provides a more realistic and challenging evaluation platform for offline RL algorithms; and the Quality-focused Active Adversarial Policy, which improves safety in human-robot interaction by actively mitigating the risk of grasping the human hand. Other notable contributions include discrete diffusion skills for offline RL, model-based offline RL with adversarial data augmentation, and state-aware perturbation optimization for robust deep RL.
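To make the adversarial-data-augmentation idea concrete, here is a minimal, hypothetical sketch on a toy 1-D linear system: an ensemble of dynamics models is fit to an offline dataset, states are perturbed toward higher ensemble disagreement (a common epistemic-uncertainty proxy), and the perturbed transitions are relabeled with the mean model to augment the dataset. The toy system, ensemble size, and step sizes are all illustrative assumptions; the papers above use learned neural dynamics models inside a full offline RL training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy offline dataset: 1-D linear system s' = a*s + b*u + noise ---
# (illustrative setup, not the dynamics used in the cited work)
a_true, b_true = 0.9, 0.5
s = rng.uniform(-1, 1, size=(200, 1))
u = rng.uniform(-1, 1, size=(200, 1))
s_next = a_true * s + b_true * u + 0.01 * rng.standard_normal((200, 1))

def fit_linear(S, U, S_next):
    # Least-squares fit of s' = theta[0]*s + theta[1]*u
    X = np.hstack([S, U])
    theta, *_ = np.linalg.lstsq(X, S_next, rcond=None)
    return theta  # shape (2, 1)

# --- Ensemble of dynamics models on bootstrapped subsets ---
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(s), size=len(s))
    ensemble.append(fit_linear(s[idx], u[idx], s_next[idx]))

def predict(theta, S, U):
    return np.hstack([S, U]) @ theta

def disagreement(S, U):
    # Std. dev. across ensemble predictions: epistemic-uncertainty proxy
    preds = np.stack([predict(th, S, U) for th in ensemble])
    return preds.std(axis=0)

# --- Adversarial augmentation: push states toward higher ensemble
#     disagreement (finite-difference ascent), then relabel next states
#     with the mean model ---
eps, step = 1e-3, 0.05
grad = (disagreement(s + eps, u) - disagreement(s - eps, u)) / (2 * eps)
s_adv = s + step * np.sign(grad)
mean_theta = np.mean(ensemble, axis=0)
s_next_adv = predict(mean_theta, s_adv, u)

# Augmented dataset: original transitions plus adversarial ones
aug_s = np.vstack([s, s_adv])
aug_u = np.vstack([u, u])
aug_s_next = np.vstack([s_next, s_next_adv])
print(aug_s.shape)  # dataset size doubled
```

Training a policy or Q-function on the augmented transitions then exposes it to states where the model is least certain, which is the robustness mechanism these augmentation methods exploit.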

Sources

Unsourced Random Access in MIMO Quasi-Static Rayleigh Fading Channels: Finite Blocklength and Scaling Law Analyses

Conditional Diffusion Model with OOD Mitigation as High-Dimensional Offline Resource Allocation Planner in Clustered Ad Hoc Networks

NeoRL-2: Near Real-World Benchmarks for Offline Reinforcement Learning with Extended Realistic Scenarios

Quality-focused Active Adversarial Policy for Safe Grasping in Human-Robot Interaction

Look Before Leap: Look-Ahead Planning with Uncertainty in Reinforcement Learning

Offline Reinforcement Learning with Discrete Diffusion Skills

Model-Based Offline Reinforcement Learning with Adversarial Data Augmentation

State-Aware Perturbation Optimization for Robust Deep Reinforcement Learning

Robust Deep Reinforcement Learning in Robotics via Adaptive Gradient-Masked Adversarial Attacks

Learning Generalizable Skills from Offline Multi-Task Data for Multi-Agent Cooperation

Pretrained Bayesian Non-parametric Knowledge Prior in Robotic Long-Horizon Reinforcement Learning
