Optimizing Resource Allocation and Privacy in Edge AI and TinyML

Current Trends in Edge AI and TinyML

The field of Edge AI and Tiny Machine Learning (TinyML) is seeing significant advances in optimizing resource allocation, enhancing privacy, and improving model efficiency. Resource allocation strategies are growing more sophisticated: methods such as stable matching algorithms and multi-hop Reconfigurable Intelligent Surfaces (RIS) are being used to manage interference and improve communication efficiency in vehicular networks. These approaches enhance Quality of Service (QoS) while reducing computational complexity.
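As a concrete illustration of the stable-matching idea, the sketch below uses Gale-Shapley deferred acceptance to pair vehicle links with resource blocks. The vehicle and resource-block names and their preference orders (which in practice would come from channel quality or QoS metrics) are hypothetical, not taken from the cited work.

```python
def stable_matching(proposer_prefs, acceptor_prefs):
    """Gale-Shapley deferred acceptance.

    proposer_prefs: {proposer: [acceptors, best first]}
    acceptor_prefs: {acceptor: [proposers, best first]}
    Returns a stable assignment {acceptor: proposer}: no proposer and
    acceptor both prefer each other over their assigned partner.
    """
    # Precompute each acceptor's ranking of proposers for O(1) lookups.
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                         # acceptor -> current proposer

    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]  # best not-yet-tried option
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])      # acceptor trades up; old match freed
            engaged[a] = p
        else:
            free.append(p)               # rejected; p tries its next choice

    return engaged

# Hypothetical example: two vehicle links competing for two resource blocks.
vehicles = {"v1": ["rb1", "rb2"], "v2": ["rb1", "rb2"]}
blocks = {"rb1": ["v2", "v1"], "rb2": ["v1", "v2"]}
assignment = stable_matching(vehicles, blocks)
```

The appeal for V2X scheduling is the low complexity: deferred acceptance runs in O(n²) for n links and blocks, far cheaper than searching for a globally optimal assignment.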

Privacy preservation is another focal point, with split learning emerging as a promising technique that balances energy efficiency and privacy in Natural Language Processing (NLP) tasks. Because raw data stays on the device and only intermediate activations are exchanged, this method can reduce processing load and CO2 emissions while maintaining high accuracy, making it well suited to deployment on edge devices.
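A minimal sketch of the split-learning forward pass follows, assuming a toy two-layer model cut after the first layer. The layer sizes and weights are illustrative; the point is that only the intermediate "smashed" activations cross the device/server boundary, never the raw input.

```python
def linear(x, weights, bias):
    """Affine layer: weights is a list of rows, bias a list."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    return [max(0.0, v) for v in x]

class EdgeClient:
    """Runs the lower model split on-device; raw input never leaves."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def forward(self, raw_features):
        # Only these smashed activations are transmitted upstream.
        return relu(linear(raw_features, self.weights, self.bias))

class Server:
    """Runs the upper split; it only ever sees activations."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def forward(self, smashed):
        return linear(smashed, self.weights, self.bias)

# Toy 3 -> 2 -> 2 network with fixed illustrative weights.
client = EdgeClient([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [0.0, 0.0])
server = Server([[1.0, -1.0], [-1.0, 1.0]], [0.1, -0.1])

smashed = client.forward([1.0, 2.0, 3.0])  # crosses the wireless link
logits = server.forward(smashed)           # classification happens here
```

Training works the same way in reverse: the server backpropagates to the cut layer and returns gradients of the smashed activations, so the heavy upper layers never run on the constrained device.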

Model efficiency is being addressed through tensor decomposition and activation map compression, which significantly reduce memory footprint without compromising learning features. These techniques are crucial for enabling backpropagation on resource-constrained devices, thereby advancing the integration of Deep Learning with the Internet of Things (IoT).
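The cited work applies higher-order tensor decompositions to the saved activation maps; as a simplified stand-in, the sketch below compresses a flattened activation matrix to its best rank-1 factorization via power iteration, storing m + n + 1 numbers in place of m * n. All values here are illustrative.

```python
def rank1_approx(A, iters=50):
    """Best rank-1 approximation of matrix A (a list of rows) via
    power iteration: A ~= sigma * outer(u, v), with unit u and v.
    Assumes A is nonzero; a sketch, not a production routine."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    sigma = 0.0
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        norm_u = sum(x * x for x in u) ** 0.5
        u = [x / norm_u for x in u]
        v = [sum(A[i][j] * u[i] for i in range(m)) for j in range(n)]
        sigma = sum(x * x for x in v) ** 0.5   # dominant singular value
        v = [x / sigma for x in v]
    return sigma, u, v

def reconstruct(sigma, u, v):
    """Rebuild the approximate activation map when backprop needs it."""
    return [[sigma * ui * vj for vj in v] for ui in u]

# A rank-1 "activation map": storing (sigma, u, v) needs m + n + 1
# floats instead of m * n, a saving that grows with the map size.
A = [[3.0, 4.0], [6.0, 8.0]]
sigma, u, v = rank1_approx(A)
A_hat = reconstruct(sigma, u, v)
```

Real activation maps are only approximately low-rank, so the paper's contribution lies in showing that training still converges when the stored maps are truncated this way; the sketch shows only the memory mechanics.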

Noteworthy Developments:

  • A novel resource allocation algorithm for 5G-based V2X systems demonstrates competitive performance with lower complexity, effectively meeting QoS demands.
  • Multi-hop RIS-aided learning model sharing improves onboard learning performance for urban air mobility, increasing total reward by 85%.
  • Split learning for TinyML NLP offers a balanced compromise between efficiency and privacy, reducing processing load and CO2 emissions while maintaining high accuracy.
  • Activation map compression through tensor decomposition provides considerable memory savings with theoretical guarantees of convergence, demonstrating Pareto-superiority in terms of generalization and memory footprint.
  • A hierarchical inference framework for predictive maintenance in mining machinery balances accuracy, latency, and energy consumption, advancing PdM frameworks for industrial applications.
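The hierarchical-inference idea in the last bullet can be sketched as a confidence-gated cascade: a tiny on-device model answers the easy cases, and only low-confidence samples are escalated to a larger model, trading a little latency and energy on hard inputs for big savings on easy ones. The threshold, stub models, and fault labels below are hypothetical.

```python
def hierarchical_infer(sample, tiny_model, large_model, threshold=0.8):
    """Confidence-gated cascade: answer on-device when the tiny model
    is confident; escalate only uncertain samples to the larger model."""
    label, confidence = tiny_model(sample)
    if confidence >= threshold:
        return label, "edge"           # cheap, low-latency path
    label, _ = large_model(sample)     # costly escalation path
    return label, "escalated"

# Hypothetical stub models mapping a vibration reading to
# (label, confidence); real models would be trained classifiers.
def tiny_model(vibration):
    return ("healthy", 0.95) if vibration < 0.5 else ("unknown", 0.4)

def large_model(vibration):
    return ("bearing_fault", 0.9) if vibration >= 0.5 else ("healthy", 0.99)
```

Tuning the threshold is how such a framework balances the accuracy/latency/energy triangle: raising it sends more traffic up the hierarchy, lowering it keeps more decisions on-device.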

Sources

A Lightweight QoS-Aware Resource Allocation Method for NR-V2X Networks

Multi-hop RIS-aided Learning Model Sharing for Urban Air Mobility

TinyML NLP Approach for Semantic Wireless Sentiment Classification

Activation Map Compression through Tensor Decomposition for Deep Learning

Model Partition and Resource Allocation for Split Learning in Vehicular Edge Networks

TinyML Security: Exploring Vulnerabilities in Resource-Constrained Machine Learning Systems

Enhancing Predictive Maintenance in Mining Mobile Machinery through a TinyML-enabled Hierarchical Inference Network

Depthwise Separable Convolutions with Deep Residual Convolutions

Towards Vision Mixture of Experts for Wildlife Monitoring on the Edge
