Advances in Hardware Design Automation with Large Language Models

The field of hardware design automation is undergoing a significant shift with the integration of large language models (LLMs). Recent research demonstrates that LLMs can validate network protocol parsers, generate Verilog code, and optimize circuit designs. They have shown promise in detecting inconsistencies between protocol implementations and their official standards, and in generating functionally correct RTL code. Multi-agent frameworks and hybrid reasoning strategies have also been proposed to improve the efficiency and accuracy of LLM-based hardware design automation. A recurring theme is coupling LLM generation with simulation or verification feedback; a minimal sketch of such a generate-and-verify loop follows the list below.

Noteworthy papers in this area include:

PARVAL: leverages LLMs to validate parser implementations against protocol standards, achieving a low false positive rate and uncovering unique bugs.

ReasoningV: employs an adaptive hybrid reasoning strategy to generate Verilog code, achieving performance competitive with leading commercial models.

CircuitMind: reaches human-competitive efficiency in circuit generation through multi-agent collaboration and collective intelligence.

HLSTester: efficiently detects behavioral discrepancies in high-level synthesis designs using LLMs and guided testbenches.

VeriCoder: strengthens LLM-based RTL code generation with functional correctness validation, achieving state-of-the-art results on functional-correctness metrics.

Insights from Verification: trains a Verilog generation LLM with reinforcement learning on testbench feedback to improve functional correctness.

FLAG: generates formal specifications (SVA) of on-chip communication protocols from informal documents using a two-stage framework.
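Several of these systems (VeriCoder, HLSTester, and the testbench-feedback training work) close the loop between LLM generation and simulation. The Python sketch below illustrates one way such a generate-and-verify loop could be wired up, assuming an Icarus Verilog toolchain and a generic llm callable; the function names and the "ALL TESTS PASSED" log convention are illustrative assumptions, not code or APIs from the cited papers.

```python
# Illustrative sketch only: a generate-and-verify loop in the spirit of
# LLM-based RTL generation with testbench feedback. All names here
# (run_testbench, generate_until_correct, the llm callable) are assumptions.
import subprocess
import tempfile
from pathlib import Path


def run_testbench(rtl_code: str, testbench: str) -> tuple[bool, str]:
    """Compile the candidate RTL with a testbench using Icarus Verilog,
    simulate it, and return (passed, combined simulator log)."""
    with tempfile.TemporaryDirectory() as tmp:
        dut, tb, sim = Path(tmp, "dut.v"), Path(tmp, "tb.v"), Path(tmp, "sim.vvp")
        dut.write_text(rtl_code)
        tb.write_text(testbench)
        build = subprocess.run(["iverilog", "-o", str(sim), str(tb), str(dut)],
                               capture_output=True, text=True)
        if build.returncode != 0:  # candidate RTL does not even compile
            return False, build.stderr
        run = subprocess.run(["vvp", str(sim)], capture_output=True, text=True)
        # Assumes the testbench prints this marker when every check passes.
        return "ALL TESTS PASSED" in run.stdout, run.stdout + run.stderr


def generate_until_correct(llm, spec: str, testbench: str, max_iters: int = 5):
    """Ask the LLM for RTL, simulate it, and feed failures back as context."""
    feedback = ""
    for _ in range(max_iters):
        rtl = llm(f"Write synthesizable Verilog for this spec:\n{spec}\n{feedback}")
        passed, log = run_testbench(rtl, testbench)
        if passed:
            return rtl
        feedback = f"The previous attempt failed simulation:\n{log}\nPlease fix it."
    return None  # no functionally correct candidate within the budget
```

In a training setting such as the reinforcement-learning approach above, the pass/fail outcome would be converted into a reward signal rather than used as a retry prompt.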

Sources

Large Language Models for Validating Network Protocol Parsers

ReasoningV: Efficient Verilog Code Generation with Adaptive Hybrid Reasoning Model

Towards Optimal Circuit Generation: Multi-Agent Collaboration Meets Collective Intelligence

HLSTester: Efficient Testing of Behavioral Discrepancies with LLMs for High-Level Synthesis

VeriCoder: Enhancing LLM-Based RTL Code Generation through Functional Correctness Validation

Insights from Verification: Training a Verilog Generation LLM with Reinforcement Learning with Testbench Feedback

FLAG: Formal and LLM-assisted SVA Generation for Formal Specifications of On-Chip Communication Protocols
