Distributed and Adaptive Solutions in Multi-Agent Systems and Autonomous Vehicles

Advances in Multi-Agent Systems and Autonomous Vehicles

Recent developments in multi-agent systems and autonomous vehicles show a clear shift toward more distributed, adaptive, and scalable solutions. The focus has been on improving coordination, localization, and decision-making through new algorithms and frameworks. Key advances include the integration of deep reinforcement learning (DRL) with other optimization techniques to handle complex, dynamic environments, as well as control architectures that promote emergent cooperative behaviors.

One notable trend is the generalization of connectivity-maximization strategies to arbitrary user distributions, addressed through multi-agent deep Q-learning algorithms enhanced with convolutional neural networks (CNNs). These algorithms support real-time analysis and decision-making, significantly improving user connectivity in multi-UAV networks.
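To make the idea concrete, the following is a minimal sketch of what a CNN-enhanced Q-network for a single UAV agent could look like; the grid-based user-density input, the five-action movement set, and all layer sizes are illustrative assumptions rather than the architecture of the cited work.

```python
# Minimal sketch of a CNN-enhanced deep Q-network for a single UAV agent.
# The 2-D input is assumed to be a discretized user-density map around the
# agent; grid size, action set, and layer widths are illustrative only.
import torch
import torch.nn as nn

class UAVQNetwork(nn.Module):
    def __init__(self, grid_size: int = 32, n_actions: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, grid_size, grid_size)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(n_flat, 128), nn.ReLU(),
            nn.Linear(128, n_actions),   # one Q-value per movement action
        )

    def forward(self, density_map: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(density_map))

# Epsilon-greedy action selection, as used in standard deep Q-learning.
def select_action(q_net: UAVQNetwork, density_map: torch.Tensor, eps: float = 0.1) -> int:
    if torch.rand(1).item() < eps:
        return torch.randint(0, q_net.head[-1].out_features, (1,)).item()
    with torch.no_grad():
        return q_net(density_map.unsqueeze(0)).argmax(dim=1).item()
```

In a multi-agent setting, each UAV would run its own copy of such a network; experience replay, target networks, and the training loop are omitted here.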

Another area of innovation is decentralized reinforcement learning for multi-agent shepherding. These methods allow cooperative strategies to emerge naturally, enabling efficient task completion in large-scale systems without explicit inter-agent communication or centralized high-level control.
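As a rough illustration of the two-layer idea, the sketch below splits one herder's controller into a high-level target-selection rule and a low-level driving rule; the farthest-from-goal heuristic merely stands in for a learned policy, and all names, gains, and distances are illustrative.

```python
# Sketch of a two-layer shepherding controller for one herder agent.
# High layer: pick which target to collect (here, the one farthest from
# the goal, standing in for a learned RL policy).
# Low layer: steer toward a driving point placed behind the chosen target,
# pushing it toward the goal region.
import numpy as np

def high_level_select(targets: np.ndarray, goal: np.ndarray) -> int:
    """Return the index of the target farthest from the goal."""
    dist_to_goal = np.linalg.norm(targets - goal, axis=1)
    return int(np.argmax(dist_to_goal))

def low_level_drive(herder: np.ndarray, target: np.ndarray, goal: np.ndarray,
                    standoff: float = 1.0, gain: float = 0.5) -> np.ndarray:
    """Velocity command placing the herder behind the target, opposite the goal."""
    away_from_goal = target - goal
    away_from_goal = away_from_goal / (np.linalg.norm(away_from_goal) + 1e-9)
    driving_point = target + standoff * away_from_goal
    return gain * (driving_point - herder)

# One control step for a single herder.
herder = np.array([0.0, 0.0])
targets = np.array([[2.0, 3.0], [5.0, -1.0]])
goal = np.array([0.0, 0.0])
idx = high_level_select(targets, goal)
velocity = low_level_drive(herder, targets[idx], goal)
```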

The field is also seeing advances in relative pose estimation and formation control for nonholonomic robots, using distributed algorithms that operate entirely in local frames and thereby avoid the dependence on a common global frame that limits traditional methods.
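To illustrate what operating in a local frame means, the sketch below propagates the pose of a neighbouring unicycle robot expressed in the observing robot's own body frame, using only body-frame velocities; this is standard relative kinematics, not the concurrent-learning estimator of the cited paper, and the correction step that would fuse UWB range measurements is omitted.

```python
# Propagate the pose of robot j expressed in the body frame of robot i,
# using each robot's body-frame linear/angular velocities. This is the
# prediction step a local-frame estimator would run; fusing UWB range
# measurements to correct it is omitted here.
import numpy as np

def propagate_relative_pose(rel_pose: np.ndarray,
                            v_i: float, w_i: float,
                            v_j: float, w_j: float,
                            dt: float) -> np.ndarray:
    """rel_pose = [x, y, theta]: pose of robot j in robot i's frame."""
    x, y, theta = rel_pose
    x_dot = v_j * np.cos(theta) - v_i + w_i * y
    y_dot = v_j * np.sin(theta) - w_i * x
    theta_dot = w_j - w_i
    return rel_pose + dt * np.array([x_dot, y_dot, theta_dot])

# Example: robot j starts 2 m ahead of robot i; both drive forward while
# robot i turns slightly left, so j drifts in i's local frame.
rel = np.array([2.0, 0.0, 0.0])
for _ in range(100):
    rel = propagate_relative_pose(rel, v_i=0.5, w_i=0.1, v_j=0.5, w_j=0.0, dt=0.01)
```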

Noteworthy papers include:

  • A study on distributed user connectivity maximization in multi-UAV networks, which proposes a novel multi-agent CNN-enhanced deep Q-learning algorithm.
  • Research on decentralized reinforcement learning for multi-agent shepherding, introducing a two-layer control architecture that fosters emergent cooperation.
  • A paper on relative pose estimation for nonholonomic robot formations, which presents a concurrent-learning based estimator and a cooperative localization algorithm.

These developments collectively push the boundaries of what is possible in multi-agent systems and autonomous vehicles, offering more robust, efficient, and adaptable solutions for real-world applications.

Sources

Maximizing User Connectivity in AI-Enabled Multi-UAV Networks: A Distributed Strategy Generalized to Arbitrary User Distributions

Development of an indoor localization and navigation system based on monocular SLAM for mobile robots

Emergent Cooperative Strategies for Multi-Agent Shepherding via Reinforcement Learning

Relative Pose Estimation for Nonholonomic Robot Formation with UWB-IO Measurements

Tangled Program Graphs as an alternative to DRL-based control algorithms for UAVs

Data-Driven Distributed Common Operational Picture from Heterogeneous Platforms using Multi-Agent Reinforcement Learning

Research on reinforcement learning based warehouse robot navigation algorithm in complex warehouse layout

SniffySquad: Patchiness-Aware Gas Source Localization with Multi-Robot Collaboration

Predictability Awareness for Efficient and Robust Multi-Agent Coordination

MA-DV2F: A Multi-Agent Navigation Framework using Dynamic Velocity Vector Field

Optimal Driver Warning Generation in Dynamic Driving Environment

Results of the 2023 CommonRoad Motion Planning Competition for Autonomous Vehicles

DP and QP Based Decision-making and Planning for Autonomous Vehicle

Scaling Long-Horizon Online POMDP Planning via Rapid State Space Sampling

Learning Collective Dynamics of Multi-Agent Systems using Event-based Vision

Distributed Spatial Awareness for Robot Swarms

Dynamic Zoning of Industrial Environments with Autonomous Mobile Robots

Two-Layer Attention Optimization for Bimanual Coordination

Convergence Guarantees for Differentiable Optimization-based Control Policy

A Simple Multi-agent Joint Prediction Method for Autonomous Driving

Multiple Non-cooperative Targets Encirclement by Relative Distance based Positioning and Neural Anti-Synchronization Control

Collision-Free Multi-Agent Coverage Control for Non-Cooperating Swarms: Preliminary Results

DNN Task Assignment in UAV Networks: A Generative AI Enhanced Multi-Agent Reinforcement Learning Approach

Anonymous Distributed Localisation via Spatial Population Protocols

An alignment problem

Experience-based Subproblem Planning for Multi-Robot Motion Planning

Wireless Federated Learning over UAV-enabled Integrated Sensing and Communication

Information-Optimal Multi-Spacecraft Positioning for Interstellar Object Exploration

Enhancing reinforcement learning for population setpoint tracking in co-cultures

A ROS 2-based Navigation and Simulation Stack for the Robotino

Strategic Sacrifice: Self-Organized Robot Swarm Localization for Inspection Productivity
