Advancements in AI-Driven Software Development and Security
Software development and security are evolving rapidly with the integration of Artificial Intelligence (AI) and Machine Learning (ML). Recent research emphasizes making AI systems safer and more secure, with particular attention to transparent and accountable AI development. Large Language Models (LLMs) and other AI-powered tools are increasingly applied across software development, in areas such as code review, bug detection, and environmental hazard reporting. These advances also raise new challenges, including the need for standardized evaluation and disclosure of flaws in AI systems and for ethical and regulatory considerations in AI development. Noteworthy papers in this area include Bugdar, which introduces an AI-augmented code review system for secure coding, and AEJIM, which proposes a real-time AI framework for environmental hazard detection and reporting. Overall, the field is moving toward a more proactive and integrated approach to AI-driven software development and security, grounded in transparency, accountability, and collaboration.
Sources
Conversational AI as a Coding Assistant: Understanding Programmers' Interactions with and Expectations from Large Language Models for Coding
In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI
AEJIM: A Real-Time AI Framework for Crowdsourced, Transparent, and Ethical Environmental Hazard Detection and Reporting