Adaptive Scheduling and Resource Management in Real-Time Systems

Recent advances in scheduling for real-time systems and smart manufacturing show a clear shift toward deep reinforcement learning (DRL) and multi-agent methods. These approaches improve operational efficiency by dynamically adjusting task priorities and resource allocations in response to real-time data and complex system behavior. Notably, Lyapunov-guided DRL for wireless scheduling offers a principled way to minimize delay jitter while enforcing delay bounds, which is critical for system reliability. Likewise, Decision Transformers applied to dynamic dispatching in material handling systems leverage enterprise big data to optimize throughput, showing that learned models can improve on traditional dispatching heuristics. Together, these developments point toward more adaptive, data-driven, and scalable scheduling and resource management across industrial domains.
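To make the idea of learning-based priority adjustment concrete, here is a toy sketch (not taken from any of the cited papers): a tabular Q-learning agent that learns which of two task queues to serve, a minimal stand-in for the DRL-driven priority decisions described above. The state encoding, reward, and hyperparameters are all illustrative assumptions.

```python
import random

# Hypothetical toy example: a tabular Q-learning scheduler that learns to
# prioritize a delay-sensitive queue. Real systems in the cited work use
# deep networks and multi-agent coordination; this only shows the core loop.

random.seed(0)

N_STATES = 2   # 0: urgent queue has pending work, 1: urgent queue empty
N_ACTIONS = 2  # 0: serve urgent queue, 1: serve bulk queue
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: reward +1 for serving urgent work while it exists."""
    if state == 0 and action == 0:
        return 1.0, 1  # urgent task served; urgent queue drains
    return 0.0, 0      # otherwise urgent work (re)arrives, no reward

for _ in range(500):
    state = random.randrange(N_STATES)
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    reward, nxt = step(state, action)
    # standard Q-learning update
    q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])

# The learned policy should prefer serving the urgent queue when it has work.
best = max(range(N_ACTIONS), key=lambda a: q[0][a])
print("preferred action when urgent work is pending:", best)
```

The Lyapunov-guided and Decision Transformer approaches in the sources replace this hand-coded reward and tabular value function with learned, data-driven counterparts, but the underlying pattern of mapping observed system state to a scheduling action is the same.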

Sources

Enhancing Adaptive Mixed-Criticality Scheduling with Deep Reinforcement Learning

Multi-Agent Deep Q-Network with Layer-based Communication Channel for Autonomous Internal Logistics Vehicle Scheduling in Smart Manufacturing

Lyapunov-guided Multi-Agent Reinforcement Learning for Delay-Sensitive Wireless Scheduling

Multi-Agent Decision Transformers for Dynamic Dispatching in Material Handling Systems Leveraging Enterprise Big Data
