Current developments in in-memory computing (IMC) are advancing the hardware integration of Bayesian inference and deep neural networks (DNNs), particularly through emerging non-volatile memory (NVM) technologies. Researchers are improving the efficiency and compactness of Bayesian inference engines by leveraging ferroelectric field-effect transistor (FeFET)-based IMC, which processes Bayesian models directly in memory without additional computational circuitry. This approach increases storage density and computing efficiency while also addressing the interpretability and reliability challenges posed by conventional neural network models. In parallel, faster and more accurate memristive crossbar simulation tools are giving researchers better ways to analyze and optimize IMC architectures, accelerating the development of energy-efficient machine learning hardware. Scaling issues in 1T1R memory arrays, notably transistor leakage and IR drop, are also being addressed, yielding guidelines for tuning memristor properties so that performance remains reliable as technology nodes shrink. Finally, reprogramming strategies for memristive crossbars are being developed to extend the endurance of NVM devices used in DNNs, substantially reducing how often cells must be reprogrammed while preserving model accuracy.
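The crossbar operation underlying these accelerators, and the IR-drop issue the paragraph mentions, can be sketched numerically. The following is a minimal illustration, not any tool's actual API: `crossbar_mvm` computes the ideal read (column current = sum of cell conductance times row voltage), and `crossbar_mvm_ir_drop` adds a crude first-order series-resistance correction. The function names, the `r_row`/`r_col` parameters, and the simple accumulated-wire-resistance model are all assumptions for illustration; real simulators solve the full nodal equations of the array.

```python
import numpy as np

def crossbar_mvm(G, V):
    """Ideal crossbar read: column current I_j = sum_i G[i, j] * V[i]."""
    return V @ G

def crossbar_mvm_ir_drop(G, V, r_row=1.0, r_col=1.0):
    """Crude IR-drop estimate (illustrative only): each cell's current path is
    its own resistance 1/G[i, j] in series with the wire segments traversed
    along its row and down its column. Real tools solve the full nodal system."""
    m, n = G.shape
    I = np.zeros(n)
    for i in range(m):
        for j in range(n):
            # Accumulated wire resistance grows with distance from the driver
            # and from the sense amplifier, degrading the effective conductance.
            r_path = 1.0 / G[i, j] + (j + 1) * r_row + (m - i) * r_col
            I[j] += V[i] / r_path
    return I
```

With realistic wire resistances, the corrected currents fall below the ideal ones, which is the accuracy loss that scaling guidelines for 1T1R arrays aim to bound.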
Noteworthy papers include one proposing FeBiM, an efficient Bayesian inference engine built on FeFET-based IMC that demonstrates substantial gains in compactness and efficiency, and another introducing XbarSim, a fast and accurate circuit-level simulator for memristive crossbars that runs significantly faster than traditional tools.
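The core idea behind FeFET-based Bayesian inference engines such as FeBiM can be sketched in software: log-likelihoods are stored as cell states, and classifying an input amounts to activating one word line per observed feature and accumulating along each class's column, so the summed "current" is the class's log-posterior score. The toy model below (a naive-Bayes classifier over binary features, with made-up probability values and the hypothetical name `classify`) shows that accumulation; it is a conceptual sketch, not FeBiM's actual mapping or circuit.

```python
import numpy as np

# Toy per-class probability that each binary feature equals 1 (illustrative values).
p1 = np.array([[0.9, 0.1, 0.9, 0.1],   # class 0
               [0.1, 0.9, 0.1, 0.9],   # class 1
               [0.5, 0.5, 0.5, 0.5]])  # class 2
n_classes, n_features = p1.shape

# "Stored cell" table: logp[i, c, v] = log P(x_i = v | class c).
logp = np.empty((n_features, n_classes, 2))
logp[:, :, 1] = np.log(p1).T
logp[:, :, 0] = np.log(1.0 - p1).T

log_prior = np.log(np.full(n_classes, 1.0 / n_classes))

def classify(observed):
    # Selecting one row per observed feature value and summing down each
    # column mimics the in-memory current accumulation; the argmax over
    # the resulting log-posterior scores picks the winning class.
    scores = log_prior + sum(logp[i, :, v] for i, v in enumerate(observed))
    return int(np.argmax(scores))
```

Because the sum happens where the parameters are stored, no multiply-accumulate circuitry outside the array is needed, which is the compactness advantage the paragraph attributes to this approach.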