Integrating Communication, Control, and Machine Learning for Next-Gen Wireless Networks

Recent developments in this research area are significantly advancing the integration of communication, control, and machine learning techniques to address complex challenges in wireless networks. A notable trend is the shift towards goal-oriented communication strategies that optimize for real-time inference under variable delay, which is crucial for applications such as remote sensing and intelligent thermal management. Reinforcement learning (RL) is emerging as a powerful tool for dynamic decision-making in scenarios such as thermal control of base stations and multi-AUV data collection in underwater environments, where traditional methods fall short because of unpredictable environmental factors. The field is also witnessing innovative approaches to resource allocation and optimization in large-scale wireless networked control systems (WNCSs) and in multi-operator networks with reconfigurable intelligent surfaces (RIS). These advances are paving the way for more efficient, scalable, and robust solutions in next-generation wireless networks, particularly in the context of massive ultra-reliable and low-latency communications (mURLLC). Notably, the integration of finite blocklength coding (FBC) with RL to optimize the Age of Information (AoI) under multi-QoS provisioning is a promising direction that addresses the dual challenges of delay and error rate in mURLLC services.
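To make the Age of Information metric mentioned above concrete, the sketch below computes the time-average AoI for a sequence of delivered status updates. This is a generic illustration of the standard AoI definition (age = current time minus the generation time of the freshest delivered update), not the specific optimization formulation used in any of the papers; the function name and the assumption that an initial update is delivered at time zero are both ours.

```python
def average_aoi(updates, horizon):
    """Time-average Age of Information over [0, horizon].

    updates: list of (gen_time, delivery_time) pairs, sorted by delivery_time.
    At any time t, the AoI is t minus the generation time of the freshest
    update delivered so far; it grows linearly between deliveries, so the
    time average is the area under a sawtooth curve divided by the horizon.
    """
    area = 0.0
    last_t = 0.0
    freshest_gen = 0.0  # assume an update generated and delivered at t = 0
    for gen, dly in updates:
        # age rises linearly from (last_t - freshest_gen) to (dly - freshest_gen)
        a0 = last_t - freshest_gen
        a1 = dly - freshest_gen
        area += 0.5 * (a0 + a1) * (dly - last_t)  # trapezoid under the ramp
        if gen > freshest_gen:
            freshest_gen = gen  # this delivery resets the age downwards
        last_t = dly
    # tail segment from the last delivery to the end of the horizon
    a0 = last_t - freshest_gen
    a1 = horizon - freshest_gen
    area += 0.5 * (a0 + a1) * (horizon - last_t)
    return area / horizon
```

For example, with updates generated one second apart and each delivered after a one-second delay, `average_aoi([(0, 1), (1, 2), (2, 3)], 3)` evaluates the sawtooth area 3.5 over a horizon of 3.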

Noteworthy Papers:

  • A goal-oriented communication strategy for remote inference under two-way delay demonstrates significant benefits, especially in highly variable delay scenarios.
  • An RL approach for intelligent thermal management of interference-coupled base stations achieves near-optimal throughput while managing thermal constraints.
  • A multi-AUV data collection framework based on multi-agent offline RL significantly improves data utilization and energy efficiency in dynamic underwater environments.
  • A hierarchical DRL approach for resource optimization in multi-RIS multi-operator networks shows faster convergence and improved performance in large-scale scenarios.
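Several of the papers above apply RL to control problems such as base-station thermal management. The toy sketch below shows the general shape of such an approach with tabular Q-learning: an agent trades throughput (higher transmit power) against an overheating penalty. Every element here (the discretized temperature states, the power actions, the reward and heating dynamics) is invented for illustration and is far simpler than the online-learning formulation in the cited paper.

```python
import random

def thermal_q_learning(episodes=500, seed=0):
    """Toy tabular Q-learning for base-station thermal control.

    States 0..4 are discretized temperature levels (4 = overheating).
    Actions 0..2 are transmit power levels; reward is proportional to
    power, minus a large penalty on reaching the overheating state.
    All dynamics are hypothetical, for illustration only.
    """
    rng = random.Random(seed)
    n_states, n_actions = 5, 3
    q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, eps = 0.2, 0.9, 0.1  # learning rate, discount, exploration
    for _ in range(episodes):
        s = 0  # start each episode cool
        for _ in range(20):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            # higher power heats the station, idling cools it; ambient noise
            drift = (a - 1) + (1 if rng.random() < 0.3 else 0)
            s2 = min(n_states - 1, max(0, s + drift))
            r = float(a) - (5.0 if s2 == n_states - 1 else 0.0)
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

The learned table can then back a greedy policy that throttles power as the temperature state climbs; the cited paper addresses the much harder coupled multi-station case with interference between cells.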

Sources

Goal-Oriented Communications for Real-time Inference with Two-Way Delay

Online Learning for Intelligent Thermal Management of Interference-coupled and Passively Cooled Base Stations

Multi-Objective-Optimization Multi-AUV Assisted Data Collection Framework for IoUT Based on Offline Reinforcement Learning

Communication-Control Codesign for Large-Scale Wireless Networked Control Systems

FBC-Enhanced ε-Effective Capacity Optimization for NOMA

Channel Charting-Based Channel Prediction on Real-World Distributed Massive MIMO CSI

Optimizing Version Innovation Age for Monitoring Markovian Source in Energy-Harvesting Systems

Transmission Scheduling of Millimeter Wave Communication for High-Speed Railway in Space-Air-Ground Integrated Network

A Hierarchical DRL Approach for Resource Optimization in Multi-RIS Multi-Operator Networks

AoI-Aware Resource Allocation for Smart Multi-QoS Provisioning
