Advancements in Multi-Agent Reinforcement Learning

The field of multi-agent reinforcement learning is moving toward complex challenges in dynamic environments: optimizing policies, coping with non-stationarity, and ensuring robust decision-making. Notable developments include sensitivity-based optimization, trust region methods, and constrained multi-agent reinforcement learning, with target applications in autonomous driving, traffic signal control, and energy management. Noteworthy papers include: Policy Optimization and Multi-agent Reinforcement Learning for Mean-variance Team Stochastic Games, which proposes an algorithm that optimizes a mean-variance objective in team stochastic games; Markov Potential Game Construction and Multi-Agent Reinforcement Learning with Applications to Autonomous Driving, which gives sufficient conditions for constructing Markov potential games; and A Constrained Multi-Agent Reinforcement Learning Approach to Autonomous Traffic Signal Control, which casts traffic signal control as a constrained multi-agent reinforcement learning problem and proposes an algorithm for producing effective policies under those constraints.
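As a rough illustration of the constrained formulation mentioned above, the sketch below shows a generic Lagrangian-relaxation objective of the kind commonly used in constrained (multi-agent) reinforcement learning: each agent maximizes expected return minus a multiplier-weighted constraint violation, while the multiplier is updated by dual ascent. The function names, cost limit, and learning rate are illustrative assumptions and are not taken from the cited papers.

```python
# Minimal sketch of a Lagrangian-relaxation objective for constrained RL.
# All names (cost_limit, lagrange_lr, ...) are illustrative assumptions,
# not details from the papers listed in this digest.
import numpy as np

def lagrangian_objective(returns, costs, lam, cost_limit):
    """Scalar objective an agent ascends: mean return minus the
    multiplier-weighted constraint violation E[cost] - cost_limit."""
    return np.mean(returns) - lam * (np.mean(costs) - cost_limit)

def update_multiplier(lam, costs, cost_limit, lagrange_lr=0.01):
    """Dual ascent: grow the multiplier while average cost exceeds the
    limit, shrink it otherwise, never letting it go below zero."""
    return max(0.0, lam + lagrange_lr * (np.mean(costs) - cost_limit))

# Toy usage with synthetic per-episode statistics for a single agent.
rng = np.random.default_rng(0)
lam = 0.0
for step in range(5):
    returns = rng.normal(loc=10.0, scale=2.0, size=32)  # episode returns
    costs = rng.normal(loc=1.2, scale=0.3, size=32)     # episode constraint costs
    obj = lagrangian_objective(returns, costs, lam, cost_limit=1.0)
    lam = update_multiplier(lam, costs, cost_limit=1.0)
    print(f"step={step} objective={obj:.3f} lambda={lam:.4f}")
```

In a multi-agent setting, each agent would typically maintain its own multiplier (or share one for joint constraints), with the policy update step replaced by the specific algorithm a given paper proposes.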

Sources

Policy Optimization and Multi-agent Reinforcement Learning for Mean-variance Team Stochastic Games

Markov Potential Game Construction and Multi-Agent Reinforcement Learning with Applications to Autonomous Driving

Efficient Twin Migration in Vehicular Metaverses: Multi-Agent Split Deep Reinforcement Learning with Spatio-Temporal Trajectory Generation

Agent-Based Modeling and Deep Neural Networks for Establishing Digital Twins of Secure Facilities under Sensing Restrictions

A Constrained Multi-Agent Reinforcement Learning Approach to Autonomous Traffic Signal Control

CHARMS: Cognitive Hierarchical Agent with Reasoning and Motion Styles

A Set-Theoretic Robust Control Approach for Linear Quadratic Games with Unknown Counterparts
