Current Trends in Edge AI and TinyML
The field of Edge AI and Tiny Machine Learning (TinyML) is witnessing significant advancements, particularly in optimizing resource allocation, enhancing privacy, and improving model efficiency. Resource allocation strategies are becoming more sophisticated, with innovative methods like stable matching algorithms and multi-hop Reconfigurable Intelligent Surfaces (RIS) being employed to manage interference and improve communication efficiency in vehicular networks. These approaches not only enhance the Quality of Service (QoS) but also reduce computational complexity.
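The stable-matching idea mentioned above can be sketched with the classic Gale-Shapley deferred-acceptance algorithm, here matching V2X links to spectrum channels. The preference lists are hypothetical placeholders; a real allocator would derive them from measured interference and QoS requirements.

```python
# Sketch: Gale-Shapley stable matching for assigning V2X links to
# channels. Preference data here is illustrative, not from the paper.

def stable_match(link_prefs, channel_prefs):
    """Return a stable link -> channel assignment.

    link_prefs[l]    : channels ordered by link l's preference
    channel_prefs[c] : links ordered by channel c's preference
    """
    # Precompute each channel's ranking of links for O(1) comparisons.
    rank = {c: {l: r for r, l in enumerate(prefs)}
            for c, prefs in channel_prefs.items()}
    free = list(link_prefs)              # links still unassigned
    next_prop = {l: 0 for l in link_prefs}
    match = {}                           # channel -> link
    while free:
        link = free.pop(0)
        channel = link_prefs[link][next_prop[link]]
        next_prop[link] += 1
        if channel not in match:
            match[channel] = link
        elif rank[channel][link] < rank[channel][match[channel]]:
            free.append(match[channel])  # displaced link re-enters the pool
            match[channel] = link
        else:
            free.append(link)            # proposal rejected, try next channel
    return {l: c for c, l in match.items()}

# Toy example: two vehicular links competing for two channels.
link_prefs = {"v1": ["ch1", "ch2"], "v2": ["ch1", "ch2"]}
channel_prefs = {"ch1": ["v2", "v1"], "ch2": ["v1", "v2"]}
print(stable_match(link_prefs, channel_prefs))  # {'v2': 'ch1', 'v1': 'ch2'}
```

Because no link-channel pair would both prefer each other over their assignment, the result is stable, which is what makes this family of algorithms attractive for low-complexity interference management.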
Privacy preservation is another focal point, with split learning emerging as a promising technique that balances energy efficiency and privacy in Natural Language Processing (NLP) tasks. This method shows potential in reducing both processing power and CO2 emissions while maintaining high accuracy, making it suitable for deployment on edge devices.
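The split-learning data flow can be illustrated with a minimal NumPy sketch: the client half of the model runs on the device, and only the intermediate "smashed" activations cross the network, never the raw input. The cut point and layer sizes below are illustrative assumptions, not from any specific system.

```python
# Minimal sketch of split learning: the model is cut in two, and only
# intermediate activations leave the edge device. Shapes are toy-sized.
import numpy as np

rng = np.random.default_rng(0)

W_client = rng.normal(size=(16, 8))   # device-side layer (hypothetical)
W_server = rng.normal(size=(8, 2))    # server-side classifier head

def client_forward(x):
    # Runs on the edge device; its output is all the server ever sees.
    return np.tanh(x @ W_client)

def server_forward(h):
    # Runs on the server, completing the forward pass.
    return h @ W_server

x = rng.normal(size=(1, 16))          # placeholder token features
h = client_forward(x)                 # "smashed" activations sent upstream
logits = server_forward(h)
print(h.shape, logits.shape)          # (1, 8) (1, 2)
```

Training mirrors this split: the server backpropagates to the cut layer and returns the gradient of `h`, so neither side ever holds the full model or the raw data.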
Model efficiency is being addressed through tensor decomposition and activation map compression, which significantly reduce memory footprint without compromising learning features. These techniques are crucial for enabling backpropagation on resource-constrained devices, thereby advancing the integration of Deep Learning with the Internet of Things (IoT).
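A simple instance of this idea is storing a truncated SVD (a rank-based decomposition) of an activation map instead of the full tensor, reconstructing it only when backpropagation needs it. The map size and rank below are illustrative.

```python
# Sketch: low-rank compression of a saved activation map via truncated
# SVD, trading a small reconstruction error for a much smaller
# memory footprint during training. Shapes and rank are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 64))      # activation map saved for backprop
rank = 8                           # retained rank (hyperparameter)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_r, s_r, Vt_r = U[:, :rank], s[:rank], Vt[:rank]

stored = U_r.size + s_r.size + Vt_r.size   # floats actually kept
full = A.size                              # floats in the raw map
A_hat = (U_r * s_r) @ Vt_r                 # rebuilt at backward time

print(f"memory: {stored}/{full} floats")   # memory: 1032/4096 floats
```

Here the compressed form needs roughly a quarter of the memory; on real, highly redundant activation maps the savings at a given accuracy can be much larger, which is what enables backpropagation within tight RAM budgets.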
Noteworthy Developments:
- A novel resource allocation algorithm for 5G-based V2X systems demonstrates competitive performance with lower complexity, effectively meeting QoS demands.
- Multi-hop RIS-aided learning model sharing improves onboard learning performance for urban air mobility, raising total reward by 85%.
- Split learning for TinyML NLP offers a balanced compromise between efficiency and privacy, reducing processing power and CO2 emissions while maintaining high accuracy.
- Activation map compression through tensor decomposition provides considerable memory savings and theoretical guarantees of convergence, demonstrating Pareto-superiority in terms of generalization and memory footprint.
- A hierarchical inference framework for predictive maintenance (PdM) in mining machinery balances accuracy, latency, and energy consumption, advancing PdM frameworks for industrial applications.
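The hierarchical inference pattern in the last item can be sketched as a two-tier loop: a tiny on-device model answers when confident, and uncertain samples are escalated to a larger model. Both models and the confidence threshold below are hypothetical stand-ins.

```python
# Sketch of two-tier hierarchical inference for predictive maintenance:
# answer locally when confident, escalate otherwise. The "models" are
# placeholder scoring rules, not real classifiers.

def tiny_model(x):
    # Cheap on-device scorer: mean-based label plus a confidence value.
    score = sum(x) / len(x)
    label = "fault" if score > 0.5 else "ok"
    conf = abs(score - 0.5) * 2        # 0 = ambiguous, 1 = certain
    return label, conf

def big_model(x):
    # Expensive fallback model, assumed more accurate.
    return ("fault" if max(x) > 0.8 else "ok"), 1.0

def hierarchical_infer(x, threshold=0.6):
    label, conf = tiny_model(x)
    if conf >= threshold:
        return label, "device"         # answered locally: low latency/energy
    return big_model(x)[0], "server"   # escalated: higher accuracy, cost

print(hierarchical_infer([0.9, 0.9, 0.9]))  # ('fault', 'device')
print(hierarchical_infer([0.4, 0.6, 0.5]))  # ('ok', 'server')
```

Tuning the threshold is exactly the accuracy-latency-energy trade-off the framework targets: a higher threshold escalates more samples, improving accuracy at the cost of communication and energy.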