Advancements in Memory-Centric Computing and Neuromorphic Engineering

Memory-centric computing and neuromorphic engineering are advancing rapidly, particularly in Processing-in-DRAM (PiDRAM), Compute-in-Memory (CiM) accelerators, and biologically plausible neural circuits. Work in this area concentrates on performing computation directly within memory arrays to cut data movement and access latency, improving both system performance and energy efficiency. Recent developments include novel DRAM modifications for bulk-bitwise computation, workload-adaptive charge-based CiM CNN accelerators, and ReRAM-based analog computing accelerators that eliminate the need for ADCs and DACs. There is also growing interest in biologically plausible learning circuits, neuromorphic systems for real-time odor encoding, and learned optimization for global routing in semiconductor design. Together, these advances push the boundaries of hardware design while opening new avenues for energy-efficient, scalable AI applications.

Noteworthy Papers

  • Memory-Centric Computing: Recent Advances in Processing-in-DRAM: Introduces techniques for enhanced computation and programmability in DRAM, demonstrating bulk-bitwise computational capabilities without chip modifications.
  • IMAGINE: An 8-to-1b 22nm FD-SOI Compute-In-Memory CNN Accelerator: Presents a workload-adaptive CIM-CNN accelerator with a novel charge-based macro, achieving significant energy efficiency improvements.
  • A Fully Hardware Implemented Accelerator Design in ReRAM Analog Computing without ADCs: Proposes a ReRAM-based accelerator that leverages stochastically binarized neurons, eliminating the need for DACs and ADCs.
  • Self-Assembly of a Biologically Plausible Learning Circuit: Offers a biologically plausible circuit for weight updates in neural networks, challenging the traditional backpropagation approach.
  • Neuromorphic circuit for temporal odor encoding in turbulent environments: Designs a neuromorphic electronic nose for efficient odor detection and concentration estimation, inspired by mammalian olfactory systems.
  • Analog Alchemy: Neural Computation with In-Memory Inference, Learning and Routing: Explores memristive devices for neural computation, aiming to combine memory and computation efficiently.
  • Machine Learning Optimal Ordering in Global Routing Problems in Semiconductors: Introduces a machine learning-based method for optimizing net ordering in global routing, outperforming traditional heuristic approaches.
  • A Pseudo-random Number Generator for Multi-Sequence Generation with Programmable Statistics: Develops a hardware PRNG capable of generating multiple uncorrelated sequences with programmable statistics, enhancing solution exploration in optimization algorithms.
  • TReCiM: Lower Power and Temperature-Resilient Multibit 2FeFET-1T Compute-in-Memory Design: Proposes a temperature-resilient CiM design using FeFETs, achieving high accuracy and energy efficiency in neural network computations.
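To make the ADC-free idea from the ReRAM accelerator concrete: a stochastically binarized neuron emits a single bit whose firing probability tracks the analog partial sum, so averaging over samples recovers the analog information without a conventional converter. The sketch below is an illustrative software model under assumed behavior (a sigmoid firing probability); the paper's hardware realizes this with analog circuitry, not a software sigmoid.

```python
import math
import random

def stochastic_binarize(analog_sum: float, rng: random.Random) -> int:
    """Sample a binary output whose probability of being 1 follows a
    sigmoid of the analog input (illustrative model, not the paper's
    exact circuit)."""
    p = 1.0 / (1.0 + math.exp(-analog_sum))
    return 1 if rng.random() < p else 0

# Averaging many stochastic bits recovers the analog value's
# sigmoid image, standing in for an explicit ADC readout.
rng = random.Random(42)
samples = [stochastic_binarize(1.5, rng) for _ in range(10000)]
mean = sum(samples) / len(samples)  # close to sigmoid(1.5) ~ 0.82
```

The design trade-off this models: per-inference precision is exchanged for converter-free readout, with accuracy recovered statistically over many samples.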
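The multi-sequence PRNG entry can likewise be sketched in software: several independent generator cores, each compared against a programmable threshold, yield uncorrelated bit streams with individually programmed statistics. Everything below is an assumption for illustration (a xorshift32 core and the class/parameter names are mine, not the paper's hardware design).

```python
class ProgrammableBernoulliPRNG:
    """Toy model of a hardware PRNG emitting multiple uncorrelated
    bit streams, each with a programmable probability of producing 1.

    Sketch only: a 32-bit xorshift core per stream stands in for
    whatever entropy source the actual design uses.
    """

    def __init__(self, seeds, probabilities):
        assert len(seeds) == len(probabilities)
        self.states = list(seeds)  # one independent state per stream
        # Each stream's threshold encodes its programmed P(bit = 1).
        self.thresholds = [int(p * (1 << 32)) for p in probabilities]

    def _xorshift32(self, i):
        # Classic xorshift32 step; state must be seeded nonzero.
        x = self.states[i]
        x ^= (x << 13) & 0xFFFFFFFF
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        self.states[i] = x
        return x

    def next_bits(self):
        """One biased bit per stream: 1 iff the uniform draw falls
        below that stream's programmed threshold."""
        return [1 if self._xorshift32(i) < t else 0
                for i, t in enumerate(self.thresholds)]

prng = ProgrammableBernoulliPRNG(seeds=[0xDEADBEEF, 0xCAFEBABE],
                                 probabilities=[0.25, 0.75])
bits = [prng.next_bits() for _ in range(20000)]
means = [sum(col) / len(bits) for col in zip(*bits)]  # near [0.25, 0.75]
```

Programmable per-stream bias is what makes such a generator useful for the solution-exploration role the summary mentions, e.g. driving stochastic search with tunable acceptance statistics.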

Sources

Memory-Centric Computing: Recent Advances in Processing-in-DRAM

IMAGINE: An 8-to-1b 22nm FD-SOI Compute-In-Memory CNN Accelerator With an End-to-End Analog Charge-Based 0.15-8POPS/W Macro Featuring Distribution-Aware Data Reshaping

A Fully Hardware Implemented Accelerator Design in ReRAM Analog Computing without ADCs

Self-Assembly of a Biologically Plausible Learning Circuit

From Worms to Mice: Homeostasis Maybe All You Need

Neuromorphic circuit for temporal odor encoding in turbulent environments

Analog Alchemy: Neural Computation with In-Memory Inference, Learning and Routing

Machine Learning Optimal Ordering in Global Routing Problems in Semiconductors

A Pseudo-random Number Generator for Multi-Sequence Generation with Programmable Statistics

TReCiM: Lower Power and Temperature-Resilient Multibit 2FeFET-1T Compute-in-Memory Design
