The field of deep learning is moving toward more energy-efficient solutions, with a focus on approximate computing, on-device learning, and heterogeneous processing. Researchers are exploring methodologies that cut energy consumption while preserving accuracy, such as explainable-AI-guided optimization and early-exit mechanisms, which let a network stop inference at an intermediate layer once its prediction is sufficiently confident. Another trend is the development of frameworks for distributed deep learning over heterogeneous clusters, which mitigate the effects of stragglers and stale gradient updates. On-device federated continual learning is also gaining attention, particularly for intelligent nano-drone swarms, and optimizing multi-DNN inference on mobile devices through heterogeneous processor co-execution is becoming increasingly important. Noteworthy papers include:
- XAI-Gen, which achieves up to 7x lower energy consumption with only a 1-2% accuracy loss.
- OmniLearn, which reduces training time by 14-85% through adaptive batch-scaling.
- ADMS, which reduces multi-DNN inference latency by a factor of 4.04 compared to vanilla frameworks.
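The early-exit idea mentioned above can be sketched in a few lines: attach a lightweight classification head after each stage of a network and stop as soon as a head is confident enough, so easy inputs skip the later (more expensive) layers. The sketch below is a toy illustration, not the mechanism of any of the papers listed; the function and parameter names (`early_exit_predict`, `threshold`) are hypothetical, and the "stages" are plain Python callables standing in for real network layers.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def early_exit_predict(x, stages, heads, threshold=0.9):
    """Run stages in order; after each one, query that stage's exit
    head and return early once its top-class confidence reaches
    `threshold`. Returns (predicted class, number of stages used)."""
    h = x
    for depth, (stage, head) in enumerate(zip(stages, heads), start=1):
        h = stage(h)                  # hypothetical layer computation
        probs = softmax(head(h))      # exit head -> class probabilities
        conf = max(probs)
        if conf >= threshold:
            return probs.index(conf), depth   # confident: exit early
    return probs.index(conf), depth           # fell through: full depth

# Toy two-stage "network": hidden state is a list of floats.
stage1 = lambda h: [v * 2.0 for v in h]
stage2 = lambda h: [v + 1.0 for v in h]
head = lambda h: h[:2]  # use the first two features as class logits

# A high-margin input exits after stage 1; a low-margin one runs both.
pred_easy, depth_easy = early_exit_predict([3.0, 0.0], [stage1, stage2], [head, head])
pred_hard, depth_hard = early_exit_predict([0.1, 0.0], [stage1, stage2], [head, head])
```

Since skipped stages are never evaluated, the energy saved scales with the fraction of inputs that exit early, which is why the accuracy cost of such schemes is typically small: only ambiguous inputs pay for the full network.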