In-Memory Computing

Report on Current Developments in In-Memory Computing

General Direction of the Field

The field of in-memory computing (IMC) is advancing rapidly, particularly at its intersections with machine learning (ML), hardware security, and neuromorphic computing. The primary focus is on overcoming the von Neumann bottleneck by processing data where it is stored, inside the memory array itself. This approach improves scalability and on-edge learning capabilities while substantially reducing power consumption and latency.
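The core IMC primitive behind this shift is the analog matrix-vector multiply performed by a memory crossbar: stored conductances act as weights, input voltages drive the rows, and column currents accumulate the dot products via Kirchhoff's current law. The following pure-Python model is an illustrative sketch of that computation, not any specific device; the array shapes and values are arbitrary assumptions.

```python
# Sketch of the analog matrix-vector multiply an IMC crossbar performs.
# G[i][j] is the conductance (weight) at row i, column j; V[i] is the input
# voltage on row i; each column current I[j] sums G[i][j] * V[i] in place,
# which is what lets the array compute without moving data to a CPU.
def crossbar_mvm(G, V):
    """Return column currents I, where I[j] = sum_i G[i][j] * V[i]."""
    cols = len(G[0])
    return [sum(G[i][j] * V[i] for i in range(len(G))) for j in range(cols)]
```

In hardware this sum is free, produced by wiring alone; the software loop only models the result.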

  1. Machine Learning Integration: There is a growing trend towards utilizing memristor-based analog computing for ML applications. Novel architectures, such as the Tsetlin Machine (TM), are being mapped onto memristive devices, offering enhanced scalability and on-edge learning capabilities. These developments are crucial for real-time decision-making in ML applications.
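The building block of a Tsetlin Machine is the Tsetlin automaton: a finite-state machine whose state drifts toward one of two actions under reward and penalty feedback, which is what makes it a natural fit for a multi-level memristive cell. The sketch below is a generic two-action automaton for illustration; the class and parameter names are assumptions, not the Y-Flash mapping from the paper.

```python
# Minimal two-action Tsetlin automaton (illustrative sketch).
# States 1..n select action 0 ("exclude"); states n+1..2n select action 1
# ("include"). Rewards push the state deeper into the current action's half;
# penalties push it toward the opposite action.
class TsetlinAutomaton:
    def __init__(self, n_states_per_action=3):
        self.n = n_states_per_action
        self.state = n_states_per_action  # start at the action-0 boundary

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Reinforce the current action by moving away from the boundary.
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken the current action by moving toward the opposite half.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1
```

In a memristive realization, the state variable corresponds to a programmable conductance level, so reward and penalty become small analog conductance updates performed in memory.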

  2. Hardware Security: The integration of in-memory computing with hardware security protocols, such as AES (Advanced Encryption Standard), is gaining traction. These advancements aim to strengthen cybersecurity for IoT applications, particularly in robotics and autonomous systems. The proposed designs demonstrate significant improvements in power efficiency and throughput, making them suitable for protecting against both accidental faults and deliberate attacks.
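A concrete reason AES maps well onto IMC is that its AddRoundKey step is a plain bytewise XOR, exactly the kind of bitwise operation a memory array can evaluate in place on stored operands. The software model below illustrates only that one AES step (per FIPS 197, a 128-bit block XORed with a round key); it is a sketch for exposition, not the paper's 4-bit-state memristor design.

```python
# AddRoundKey: the AES round step that XORs the 16-byte state with a round
# key. In an IMC design, this XOR can be computed inside the memory array
# instead of shuttling both operands to a CPU.
def add_round_key(state: bytes, round_key: bytes) -> bytes:
    assert len(state) == len(round_key) == 16  # AES works on 128-bit blocks
    return bytes(s ^ k for s, k in zip(state, round_key))
```

Because XOR is an involution, applying the same round key twice recovers the original state, a handy sanity check for any in-memory implementation.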

  3. Neuromorphic Computing: The intersection of in-memory computing with Spiking Neural Networks (SNNs) is being explored to develop low-power neuromorphic solutions. This approach emphasizes the need for comprehensive system-level analyses and co-design techniques to address device limitations and optimize performance. The focus is on achieving synergies between SNNs and IMC architectures for low-power edge computing environments.
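The SNN unit that IMC crossbars typically accelerate is the leaky integrate-and-fire (LIF) neuron: the membrane potential leaks, integrates incoming current, and emits a spike when it crosses a threshold. The discrete-time update below is a minimal sketch; the parameter values are illustrative assumptions, not taken from the cited perspective paper.

```python
# One discrete-time step of a leaky integrate-and-fire (LIF) neuron:
# leak the membrane potential, integrate the input current, fire when the
# threshold is crossed, then hard-reset.
def lif_step(v, i_in, v_th=1.0, leak=0.9):
    v = leak * v + i_in   # leaky integration of the input current
    spike = v >= v_th     # fire when the membrane potential crosses threshold
    if spike:
        v = 0.0           # hard reset after a spike
    return v, spike
```

The sparse, event-driven nature of these spikes is precisely what device-circuit-system co-design tries to exploit when mapping SNNs onto IMC arrays.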

  4. Energy Efficiency in Edge Computing: There is a notable shift towards implementing AI algorithms on event-based embedded devices to enhance real-time processing, minimize latency, and improve power efficiency. Studies on spiking recurrent neural networks (SRNNs) for gesture recognition on embedded GPUs highlight significant improvements in power efficiency, making them suitable for energy-constrained applications.
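Liquid time constant (LTC) neurons, as used in the gesture-recognition SRNN, differ from fixed-leak units in that their effective time constant varies with the input through a nonlinear gate. The forward-Euler update below sketches an LTC-style dynamic in the spirit of Hasani et al.; the sigmoid gate and all parameter values are illustrative assumptions, not the exact neuron model deployed in the study.

```python
import math

# One Euler step of an LTC-style neuron state x driven by input i_in:
#   dx/dt = -x / tau + f(i_in) * (A - x)
# The gate f makes the effective time constant tau / (1 + tau * f) depend on
# the input, which is what "liquid" refers to.
def ltc_step(x, i_in, dt=0.01, tau=1.0, A=1.0, w=1.0, b=0.0):
    f = 1.0 / (1.0 + math.exp(-(w * i_in + b)))  # input-dependent gate
    dx = -x / tau + f * (A - x)
    return x + dt * dx
```

For strong inputs the gate saturates and the state is pulled quickly toward A; for weak inputs the neuron relaxes slowly, giving the adaptive temporal memory useful for temporal-spatial gesture data.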

Noteworthy Papers

  • In-Memory Learning Automata Architecture using Y-Flash Cell: This paper introduces a novel in-memory processing architecture built on floating-gate Y-Flash memristive devices, targeted at Tsetlin machines and demonstrating enhanced scalability and on-edge learning capabilities.

  • In-Memory Computing Architecture for Efficient Hardware Security: The development of a 4-bit-state memristor device tailored to AES, together with a pipelined AES design, delivers significant improvements in power efficiency and throughput, making it a notable contribution to hardware security.

  • Energy-Efficient Spiking Recurrent Neural Network for Gesture Recognition on Embedded GPUs: The deployment of an SRNN with liquid time constant neurons on the NVIDIA Jetson Nano embedded GPU platform showcases a 14-fold increase in power efficiency, validating its robustness for interpreting temporal-spatial data in gesture recognition.

Sources

In-Memory Learning Automata Architecture using Y-Flash Cell

In-Memory Computing Architecture for Efficient Hardware Security

When In-memory Computing Meets Spiking Neural Networks -- A Perspective on Device-Circuit-System-and-Algorithm Co-design

Energy-Efficient Spiking Recurrent Neural Network for Gesture Recognition on Embedded GPUs