Recent advances in scheduling algorithms for real-time systems and smart manufacturing show a clear shift toward integrating deep reinforcement learning (DRL) and multi-agent systems. These approaches aim to raise operational efficiency by adjusting task priorities and resource allocations dynamically, based on real-time data and complex system behavior. Notably, Lyapunov-guided DRL for wireless scheduling offers a principled way to minimize delay jitter and enforce delay bounds, which is critical for system reliability; a small illustrative sketch of the underlying drift-plus-penalty idea follows below. Likewise, applying Decision Transformers to dynamic dispatching in material handling systems leverages enterprise-scale data to optimize throughput, showing how learned models can improve on traditional dispatching heuristics. Together, these developments point toward more adaptive, data-driven, and scalable solutions for scheduling and resource management across industrial domains.
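To make the Lyapunov-guided idea concrete, the sketch below shows how a drift-plus-penalty reward can steer a DRL scheduler toward bounded queue backlogs in a toy multi-flow setting. The flow count `K`, trade-off weight `V`, the max-backlog delay proxy, and the greedy placeholder policy are illustrative assumptions, not the formulation used in the cited work.

```python
import numpy as np

# Toy setup (assumed): K wireless flows, each with a backlog queue q[k].
# Lyapunov function L(q) = 0.5 * sum(q_k^2); the reward penalizes the
# Lyapunov drift plus a delay proxy, which is one common way to encode
# delay bounds into a DRL reward signal.

K = 4                      # number of flows (illustrative)
V = 10.0                   # drift-vs-penalty trade-off weight (illustrative)
rng = np.random.default_rng(0)

def lyapunov(q):
    """Quadratic Lyapunov function of the queue backlogs."""
    return 0.5 * np.sum(q ** 2)

def step(q, action, arrivals, service=3.0):
    """Serve the selected flow, add new arrivals, and return the next
    backlog state plus a drift-plus-penalty reward to be maximised."""
    served = np.zeros(K)
    served[action] = min(q[action], service)
    q_next = q - served + arrivals
    drift = lyapunov(q_next) - lyapunov(q)   # Lyapunov drift
    delay_penalty = np.max(q_next)           # crude proxy for worst-case delay
    reward = -(drift + V * delay_penalty)    # drift-plus-penalty objective
    return q_next, reward

# Greedy placeholder policy; a trained DRL policy would replace this.
q = np.zeros(K)
for t in range(50):
    arrivals = rng.poisson(1.0, size=K).astype(float)
    action = int(np.argmax(q))               # serve the longest queue
    q, reward = step(q, action, arrivals)

print("final backlogs:", q, "last reward:", round(float(reward), 2))
```

In an actual Lyapunov-guided DRL scheduler, this reward would be fed to a learning algorithm (e.g. an actor-critic update) so the learned policy trades throughput against queue stability and delay-bound violations rather than following the greedy rule shown here.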