Advancing Software Engineering with Large Language Models

The integration of Large Language Models (LLMs) into software engineering continues to advance multiple stages of the development lifecycle. Recent research centers on code generation, security, and testing, with a shared emphasis on producing solutions that are robust, secure, and efficient. LLMs are being applied to exception handling, vulnerability detection, and test case generation, among other tasks. Notably, hybrid approaches that combine LLMs with traditional machine learning and dynamic analysis are emerging as effective tools for complex software engineering problems. These developments promise both to streamline development workflows and to improve the quality and safety of the resulting software. At the same time, the field is grappling with the reliability, security, and ethical implications of relying heavily on LLMs for critical tasks. Future research will likely focus on refining these models so they meet the standards required for professional software development, while also addressing the broader consequences of their adoption in industry.
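Several of the sources below close this loop between LLM generation and conventional checks such as test execution. The Python sketch that follows illustrates the general feedback-loop pattern under stated assumptions: call_llm is a hypothetical stand-in for any chat-completion client, and the dynamic feedback is a plain subprocess-based test run. It is an illustration of the idea, not the implementation from any of the cited papers.

    import os
    import subprocess
    import tempfile

    def call_llm(prompt: str) -> str:
        """Placeholder for any chat-completion backend; expected to return candidate Python code."""
        raise NotImplementedError("wire up your preferred LLM client here")

    def run_tests(candidate: str, test_code: str) -> tuple[bool, str]:
        """Run the candidate together with its tests (a simple form of dynamic feedback)."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate + "\n\n" + test_code)
            path = f.name
        try:
            proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
            return proc.returncode == 0, proc.stdout + proc.stderr
        finally:
            os.unlink(path)

    def generate_with_feedback(spec: str, test_code: str, max_rounds: int = 3):
        """Generate code, check it, and feed concrete failures back into the next prompt."""
        prompt = f"Implement the following specification in Python:\n{spec}"
        for _ in range(max_rounds):
            candidate = call_llm(prompt)
            ok, feedback = run_tests(candidate, test_code)
            if ok:
                return candidate
            prompt = (
                "The previous attempt failed these checks:\n"
                f"{feedback}\n"
                "Revise the code to satisfy the specification:\n"
                f"{spec}"
            )
        return None  # give up after max_rounds unsuccessful attempts

The design point is simply that concrete failure output, rather than the original prompt alone, drives each subsequent generation attempt.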

Noteworthy papers include Seeker, which explores the use of LLMs for exception handling in real-world development scenarios and proposes a multi-agent, intermediate-language framework to enhance code reliability. Another investigates whether LLM prompting can serve as a proxy for static analysis in vulnerability detection, demonstrating significant accuracy improvements through carefully designed prompting strategies.
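As an illustration of what such a prompting strategy might look like, the Python sketch below frames the model as a lightweight static analyzer and requests a structured JSON verdict. The prompt wording, CWE hints, and helper names are assumptions made for this example, not the prompts evaluated in the cited work.

    import json

    def build_vuln_prompt(function_source: str, cwe_hint: str = "CWE-79, CWE-89, CWE-787") -> str:
        """Assemble a step-by-step, analysis-style prompt for a single function."""
        return (
            "You are acting as a lightweight static analyzer.\n"
            "Step 1: list the sources of untrusted input in the function.\n"
            "Step 2: trace how each source flows to security-sensitive sinks.\n"
            f"Step 3: decide whether any flow matches these weakness classes: {cwe_hint}.\n"
            'Answer as JSON: {"vulnerable": true|false, "cwe": "...", "reason": "..."}.\n\n'
            f"Function under analysis:\n{function_source}\n"
        )

    def parse_verdict(llm_output: str) -> dict:
        """Be defensive: fall back to an 'unknown' verdict if the model returns invalid JSON."""
        try:
            return json.loads(llm_output)
        except json.JSONDecodeError:
            return {"vulnerable": None, "cwe": None, "reason": "unparseable model output"}

The resulting prompt would be sent through whatever LLM client is in use, and parse_verdict keeps downstream tooling robust to malformed model output.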

Sources

Optimizing AI-Assisted Code Generation

Labeling NIDS Rules with MITRE ATT&CK Techniques: Machine Learning vs. Large Language Models

Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework

Can LLM Prompting Serve as a Proxy for Static Analysis in Vulnerability Detection

A Large Language Model Approach to Identify Flakiness in C++ Projects

An Exploratory Study of ML Sketches and Visual Code Assistants

Design choices made by LLM-based test generators prevent them from finding bugs

Syzygy: Dual Code-Test C to (safe) Rust Translation using LLMs and Dynamic Analysis

Reinforcement Learning from Automatic Feedback for High-Quality Unit Test Generation

LLMSA: A Compositional Neuro-Symbolic Approach to Compilation-free and Customizable Static Analysis

The Current Challenges of Software Engineering in the Era of Large Language Models

Helping LLMs Improve Code Generation Using Feedback from Testing and Static Analysis

Le chameau et le serpent rentrent dans un bar : vérification quasi-automatique de code OCaml en logique de séparation (The camel and the snake walk into a bar: quasi-automatic verification of OCaml code in separation logic)

Large Language Models and Code Security: A Systematic Literature Review

Compiling C to Safe Rust, Formalized
