Software Development and Quality Assurance

Report on Current Developments in Software Development and Quality Assurance

General Direction of the Field

The field of software development and quality assurance is undergoing a significant shift toward leveraging advanced artificial intelligence technologies, particularly Large Language Models (LLMs), across the software development lifecycle. This trend is driven by the need to improve efficiency, reduce costs, and raise the overall quality of software products. The integration of LLMs is being explored across multiple domains, including requirements engineering, unit test generation, Kubernetes manifest synthesis, and fault localization.

In requirements engineering, LLMs are being utilized to assess and improve the quality of software requirements, aligning with industry standards such as ISO 29148. This approach not only aids in identifying and rectifying deficiencies but also enhances stakeholder engagement by providing explainable decision-making processes and proposing improved requirement versions.
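To make the idea concrete, the sketch below shows a lightweight, rule-based pre-check that flags wording ISO 29148 discourages (vague terms, escape clauses) before a requirement is handed to an LLM for deeper assessment. This is an illustrative complement to the LLM-based approach, not the method from the cited work; the term list and function name are assumptions.

```python
# Hypothetical pre-check: flag vague wording that ISO 29148 discourages in
# requirement statements. The term list below is illustrative, not exhaustive.

VAGUE_TERMS = ["user-friendly", "as appropriate", "if possible",
               "approximately", "etc.", "and/or", "fast", "easy"]

def flag_vague_wording(requirement: str) -> list[str]:
    """Return the discouraged terms found in a requirement statement."""
    lowered = requirement.lower()
    return [term for term in VAGUE_TERMS if term in lowered]

req = "The system shall respond quickly and be user-friendly, etc."
print(flag_vague_wording(req))  # ['user-friendly', 'etc.']
```

A check like this can cheaply triage requirements, reserving the LLM for the harder judgments (completeness, verifiability) and for proposing improved versions.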

Unit test generation is another area where LLMs are making significant strides. By decomposing complex methods into manageable slices, LLMs are now capable of generating comprehensive test cases that cover more lines and branches, thereby improving the robustness of software testing. Additionally, efforts are being made to enhance the understandability of generated unit tests, making them more accessible to software engineers and improving their effectiveness in bug-fixing tasks.
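The slicing idea can be illustrated with a toy decomposition: each branching statement of a method, together with the statements leading up to it, becomes its own slice, so a test-generation prompt can target one path at a time. This is a simplified sketch of the general technique, not the implementation from the cited HITS work.

```python
# Illustrative method slicing: one slice per top-level branch, consisting of
# the shared statement prefix plus that branch. Each slice would then seed a
# separate LLM prompt for targeted test generation.

import ast
import textwrap

def slice_method(source: str) -> list[str]:
    """Split a function body into per-branch slices (prefix + branch)."""
    func = ast.parse(textwrap.dedent(source)).body[0]
    prefix, slices = [], []
    for node in func.body:
        if isinstance(node, ast.If):
            slices.append("\n".join(ast.unparse(n) for n in prefix + [node]))
        else:
            prefix.append(node)
    # Fall back to the whole body if the method has no branches.
    return slices or ["\n".join(ast.unparse(n) for n in func.body)]

src = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
for i, s in enumerate(slice_method(src)):
    print(f"--- slice {i} ---\n{s}")
```

Prompting per slice keeps each request small and focused, which is what lets the generated tests reach lines and branches that a single whole-method prompt tends to miss.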

The migration of container workloads to Kubernetes is being facilitated by LLMs, which assist in generating Kubernetes manifests. This approach simplifies the management of containerized applications, making it more accessible to developers unfamiliar with Kubernetes' complexities. However, challenges remain in ensuring the comprehensibility and accuracy of the generated manifests.
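One way to address the accuracy concern is to validate generated manifests before applying them. The sketch below is a hypothetical sanity check over a manifest that has already been parsed into a dict (in practice the generated YAML would be parsed first, e.g. with PyYAML; the dict form is used here to stay dependency-free). The required-field list covers only the basics and is an assumption.

```python
# Hypothetical structural check for an LLM-generated Kubernetes manifest,
# operating on the parsed (dict) form. Only a few universally required
# fields are checked; real validation would go much further.

REQUIRED_TOP_LEVEL = ("apiVersion", "kind", "metadata")

def check_manifest(manifest: dict) -> list[str]:
    """Return human-readable problems found in a manifest dict."""
    problems = [f"missing field: {key}" for key in REQUIRED_TOP_LEVEL
                if key not in manifest]
    if "metadata" in manifest and "name" not in manifest["metadata"]:
        problems.append("metadata.name is required")
    return problems

generated = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"labels": {"app": "web"}},  # note: no name
}
print(check_manifest(generated))  # ['metadata.name is required']
```

Pairing generation with even a shallow check like this catches the most common structural mistakes before a manifest ever reaches a cluster.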

Fault localization is being revolutionized by combining static analysis with LLMs, providing explainable crashing fault localization. This combination helps in identifying and understanding buggy methods by revealing their relationship with the crashing point, thereby improving the debugging process.
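The static-analysis half of such a pipeline can be sketched as follows: given a call graph and the crashing method, collect every method that can transitively reach the crash point; in the combined approach, these candidates (with their source) would then be passed to an LLM to explain which one is actually buggy. The call graph and method names below are purely illustrative.

```python
# Toy sketch of the static-analysis stage of explainable crashing fault
# localization: a backward BFS over a call graph collects the methods that
# transitively call the crashing method. These candidates would then be
# handed to an LLM for explanation and ranking. Graph is illustrative.

from collections import deque

CALL_GRAPH = {                       # caller -> callees
    "main": ["load_config", "run"],
    "run": ["parse_input", "render"],
    "parse_input": ["split_fields"],
    "load_config": [],
    "render": [],
    "split_fields": [],
}

def candidates_for_crash(crashing: str) -> list[str]:
    """Methods that transitively call the crashing method."""
    callers: dict[str, list[str]] = {m: [] for m in CALL_GRAPH}
    for caller, callees in CALL_GRAPH.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)
    seen, queue = set(), deque([crashing])
    while queue:
        for caller in callers.get(queue.popleft(), []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return sorted(seen)

print(candidates_for_crash("split_fields"))  # ['main', 'parse_input', 'run']
```

Narrowing the candidate set statically keeps the LLM's context small and grounds its explanation in the actual relationship between each candidate and the crashing point.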

Noteworthy Developments

  • LLM-based Unit Test Generation via Method Slicing: This approach significantly outperforms current methods in terms of line and branch coverage, demonstrating the potential of LLMs in generating comprehensive test cases.
  • Enhancing Understandability of Generated Unit Tests: UTGen's integration of search-based software testing and LLMs improves the understandability of test cases, leading to better bug-fixing outcomes.
  • Explainable Crashing Fault Localization: The combination of static analysis and LLMs provides a robust approach to fault localization, enhancing the explainability of localization results and improving user satisfaction.

These developments highlight the transformative impact of LLMs in advancing the field of software development and quality assurance, paving the way for more efficient, effective, and user-friendly software products.

Sources

Leveraging LLMs for the Quality Assurance of Software Requirements

LLM4VV: Exploring LLM-as-a-Judge for Validation and Verification Testsuites

HITS: High-coverage LLM-based Unit Test Generation via Method Slicing

Migrating Existing Container Workload to Kubernetes -- LLM Based Approach and Evaluation

Leveraging Large Language Models for Enhancing the Understandability of Generated Unit Tests

Impact of Usability Mechanisms: A Family of Experiments on Efficiency, Effectiveness and User Satisfaction

Better Debugging: Combining Static Analysis and LLMs for Explainable Crashing Fault Localization

Which Combination of Test Metrics Can Predict Success of a Software Project? A Case Study in a Year-Long Project Course

Effect of Requirements Analyst Experience on Elicitation Effectiveness: A Family of Empirical Studies

AutoTest: Evolutionary Code Solution Selection with Test Cases

Scalable Similarity-Aware Test Suite Minimization with Reinforcement Learning