Complex systems optimization is shifting toward reinforcement learning and graph neural networks. Recent work demonstrates the potential of these approaches for supply chain management, power grid control, and network optimization: iterative multi-agent reinforcement learning and graph-enhanced model-free reinforcement learning have shown promising results for inventory policies and power grid operations, while deep reinforcement learning applied to AllReduce scheduling and robust DNN partitioning has improved performance and energy efficiency. Notably, combining graph attention networks with distributed optimization enables fast mixed-integer convex programming for multi-robot navigation. Open-source benchmarks, such as the production planning benchmark for refinery-petrochemical complexes, are also accelerating research in this area. Noteworthy papers include:
- Iterative Multi-Agent Reinforcement Learning, which demonstrated superior scalability and effectiveness in optimizing inventory policies.
- Energy-Efficient Dynamic Training and Inference for GNN-Based Network Modeling, which proposed a novel framework for energy-efficient network modeling.
- Graph-Enhanced Model-Free Reinforcement Learning Agents for Efficient Power Grid Topological Control, which introduced a masked topological action space for optimizing power network operations.
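The masked topological action space mentioned in the last item is an instance of action masking, a standard trick in discrete-action reinforcement learning: actions that are invalid in the current state (e.g. switching operations that would violate grid constraints) are assigned zero probability before sampling. The sketch below is illustrative only and not taken from the paper; the environment, mask, and logits are hypothetical, and masking is done by setting invalid logits to negative infinity before the softmax.

```python
import numpy as np

def masked_policy(logits: np.ndarray, valid_mask: np.ndarray) -> np.ndarray:
    """Return action probabilities with invalid actions assigned zero mass.

    logits: raw policy scores for each discrete action.
    valid_mask: boolean array, True where the action is currently legal.
    """
    # Invalid actions get -inf so the softmax drives their probability to 0.
    masked = np.where(valid_mask, logits, -np.inf)
    # Numerically stable softmax over the valid actions only.
    z = masked - masked[valid_mask].max()
    exp = np.exp(z, where=valid_mask, out=np.zeros_like(z))
    return exp / exp.sum()

# Hypothetical example: 4 topological actions, two of which are
# currently illegal (say, they would disconnect a substation).
logits = np.array([2.0, 1.0, 0.5, 3.0])
mask = np.array([True, False, True, False])
probs = masked_policy(logits, mask)
```

In practice the mask comes from the environment's feasibility check each step, so the agent never wastes exploration on actions that would be rejected anyway.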