The field of hardware design automation is witnessing a significant shift with the integration of large language models (LLMs). Recent research has demonstrated the potential of LLMs in validating network protocol parsers, generating Verilog code, and optimizing circuit designs. LLMs have shown promising results in detecting inconsistencies between protocol implementations and their official standards, as well as in generating functionally correct RTL code. Moreover, multi-agent frameworks and hybrid reasoning strategies have been proposed to improve the efficiency and accuracy of LLM-based hardware design automation. Noteworthy papers in this area include:

- PARVAL leverages LLMs to validate parser implementations against protocol standards, achieving a low false positive rate and uncovering unique bugs.
- ReasoningV employs a hybrid reasoning strategy to generate Verilog code, achieving performance competitive with leading commercial models.
- CircuitMind reaches human-competitive efficiency in circuit generation through multi-agent collaboration and collective intelligence.
- HLSTester efficiently detects behavioral discrepancies in high-level synthesis using LLMs and guided testbenches.
- VeriCoder strengthens LLM-based RTL code generation with functional-correctness validation, achieving state-of-the-art results on functional-correctness metrics.
- Insights from Verification integrates verification insights from testbench execution into the training of Verilog-generation LLMs to improve functional correctness.
- FLAG generates formal specifications of on-chip communication protocols from informal documents using a two-stage framework.
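Several of these systems share a generate-then-validate pattern: an LLM proposes RTL, a simulation-based check accepts or rejects the candidate, and failures feed back into the next generation attempt. The sketch below illustrates that loop under stated assumptions; it is not any single paper's method. `llm_generate` is a hypothetical placeholder for an LLM API call (here it returns a fixed candidate so the script runs end to end), the testbench is a made-up example that prints `PASS` on success, and the validation step assumes Icarus Verilog (`iverilog`/`vvp`) is on the PATH.

```python
# Minimal sketch of a generate-then-validate loop for LLM-based RTL
# generation. Assumptions: Icarus Verilog is installed; the testbench
# prints "PASS" when the DUT behaves correctly; `llm_generate` is a
# hypothetical stand-in for a real LLM call.
import subprocess
import tempfile
from pathlib import Path

def llm_generate(prompt: str, feedback: str = "") -> str:
    """Placeholder for an LLM call; returns a fixed candidate here."""
    return (
        "module adder(input [3:0] a, input [3:0] b, output [4:0] y);\n"
        "  assign y = a + b;\n"
        "endmodule\n"
    )

# Hypothetical directed testbench: one stimulus, PASS/FAIL on stdout.
TESTBENCH = """\
module tb;
  reg [3:0] a, b; wire [4:0] y;
  adder dut(.a(a), .b(b), .y(y));
  initial begin
    a = 4'd9; b = 4'd8; #1;
    if (y == 5'd17) $display("PASS"); else $display("FAIL");
    $finish;
  end
endmodule
"""

def validate(rtl: str) -> bool:
    """Compile DUT + testbench and check that simulation prints PASS."""
    with tempfile.TemporaryDirectory() as tmp:
        d = Path(tmp)
        (d / "dut.v").write_text(rtl)
        (d / "tb.v").write_text(TESTBENCH)
        build = subprocess.run(
            ["iverilog", "-o", str(d / "sim"), str(d / "tb.v"), str(d / "dut.v")],
            capture_output=True, text=True)
        if build.returncode != 0:
            return False  # syntax/elaboration error: reject candidate
        run = subprocess.run(["vvp", str(d / "sim")],
                             capture_output=True, text=True)
        return "PASS" in run.stdout

def generate_rtl(prompt: str, max_attempts: int = 3) -> str | None:
    """Retry generation, feeding failure notices back to the LLM."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = llm_generate(prompt, feedback)
        if validate(candidate):
            return candidate
        feedback = "previous candidate failed simulation"
    return None

if __name__ == "__main__":
    rtl = generate_rtl("Write a 4-bit adder with a 5-bit sum output.")
    if rtl:
        print("validated RTL:\n" + rtl)
    else:
        print("no functionally correct candidate found")
```

In a real pipeline the feedback string would carry compiler diagnostics and failing-test details back into the prompt, which is what makes the loop converge faster than blind resampling.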