Holistic Safety in Autonomous Vehicles and AI

Current Trends in Autonomous Vehicle and AI Safety

Recent advances in autonomous vehicles (AVs) and artificial intelligence (AI) have shifted the focus of safety research toward ensuring both psychological and physical safety. The integration of AI in AVs calls for a dual approach to safety, addressing not only traditional physical risks but also the psychological implications of human interaction with these systems. This shift is evident in the development of frameworks that treat psychological safety alongside physical safety, emphasizing trust and perceived risk as critical factors in user acceptance.

In the realm of AI, the emphasis is on creating systems that are not only reliable but also auditable and resilient. Calls for robust safety management systems (SMS) tailored to the AV industry are gaining traction, drawing lessons from other safety-critical industries such as aviation. These systems aim to harmonize safety practices and advance regulatory frameworks, ensuring a mature approach to managing safety risks.

Security remains a paramount concern, particularly in the context of user non-compliance with software updates. Studies are exploring psychological factors that influence user behavior towards updates, proposing models to assess security risks and enhance compliance. This research underscores the importance of user-centric design in improving system security.
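
To make this concrete, the sketch below shows one hypothetical way such a risk model might combine vulnerability severity with an estimated probability of user non-compliance and an exposure window. The factor names, weights, and default values are assumptions chosen for illustration, not the model proposed in the cited study.

```python
# Illustrative sketch only: a simple residual-risk score for a skipped software update.
# All factors, scales, and defaults here are assumptions, not the cited paper's model.

from dataclasses import dataclass


@dataclass
class UpdateRisk:
    cvss_severity: float      # vulnerability severity on a 0-10 (CVSS-like) scale
    p_noncompliance: float    # estimated probability the user delays or skips the update
    exposure_days: int        # expected days the system stays unpatched if the update is skipped


def residual_risk(r: UpdateRisk, daily_exploit_rate: float = 0.01) -> float:
    """Expected residual risk if the update is not applied promptly.

    Combines severity with the chance of non-compliance and the chance of
    exploitation while unpatched. All parameters are illustrative assumptions.
    """
    p_exploit_while_exposed = 1 - (1 - daily_exploit_rate) ** r.exposure_days
    return r.cvss_severity * r.p_noncompliance * p_exploit_while_exposed


if __name__ == "__main__":
    risk = UpdateRisk(cvss_severity=7.5, p_noncompliance=0.4, exposure_days=30)
    print(f"residual risk score: {residual_risk(risk):.2f}")
```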

The reliability and resilience of AI systems are being redefined by integrating traditional engineering metrics with AI-specific failure modes. Frameworks are emerging that adapt classical reliability and resilience engineering principles to AI, ensuring these systems perform consistently in real-world environments. These frameworks aim to guide policy, regulation, and development toward trustworthy AI technologies.
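
As a rough illustration of what adapting classical metrics can look like, the sketch below computes an MTBF-style "mean requests between failures" for a prediction service and a simple recovery ratio after a disruption. The failure definition and the recovery formula are assumptions chosen for the example, not metrics drawn from the cited frameworks.

```python
# Illustrative sketch only: classical reliability/resilience metrics rephrased for an
# AI component. Definitions here are assumptions for demonstration, not a standard.

from typing import Sequence


def mean_requests_between_failures(outcomes: Sequence[bool]) -> float:
    """MTBF-style analogue: average number of requests between incorrect predictions.

    `outcomes` holds one entry per request, True for a correct prediction and
    False for a failure.
    """
    failures = sum(1 for ok in outcomes if not ok)
    return len(outcomes) / failures if failures else float("inf")


def recovery_ratio(acc_before: float, acc_during: float, acc_after: float) -> float:
    """Simple resilience measure: fraction of the accuracy drop caused by a
    disruption (e.g., a distribution shift) that is regained after mitigation.
    Returns 1.0 when performance fully recovers."""
    drop = acc_before - acc_during
    regained = acc_after - acc_during
    return regained / drop if drop > 0 else 1.0


if __name__ == "__main__":
    outcomes = [True] * 95 + [False] * 5               # 5 failures over 100 requests
    print(mean_requests_between_failures(outcomes))     # -> 20.0
    print(round(recovery_ratio(0.92, 0.70, 0.88), 3))   # -> 0.818
```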

Noteworthy developments include the proposal of a safety case template for frontier AI, which aims to make safety arguments explicit and coherent, and the assessment of the auditability of AI-integrating systems, which is crucial for trustworthiness and future legal requirements.
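
For intuition, a safety case is typically a structured tree of claims, each supported by evidence or by further subclaims. The minimal sketch below (loosely in the spirit of Goal Structuring Notation) shows one way such an argument can be made explicit and mechanically checkable; the node structure and example claims are illustrative assumptions, not the template from the cited paper.

```python
# Illustrative sketch only: a minimal claims/evidence tree for a safety case.
# The structure and example statements are assumptions, not the proposed template.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    statement: str
    evidence: List[str] = field(default_factory=list)     # references to evaluations, audits, analyses
    subclaims: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim counts as supported if it cites direct evidence or if all of
        its subclaims are themselves supported."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)


top_claim = Claim(
    "The deployed system does not meaningfully uplift cyber-offensive capability",
    subclaims=[
        Claim("Dangerous-capability evaluations show no uplift", evidence=["eval-report"]),
        Claim("Deployment safeguards block observed misuse attempts", evidence=["red-team-summary"]),
    ],
)
print(top_claim.is_supported())  # True, since every leaf claim cites evidence
```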

In summary, the field is moving towards a holistic approach to safety, integrating psychological, physical, and security considerations, and leveraging advanced technologies to enhance reliability and resilience in AI and autonomous systems.

Noteworthy Papers

  • Foundations for the psychological safety of human and autonomous vehicles interaction: Emphasizes the importance of trust and perceived risk in user acceptance of autonomous vehicles.
  • Assessing the Auditability of AI-integrating Systems: A Framework and Learning Analytics Case Study: Presents a framework for assessing the auditability of AI-integrating systems, crucial for trustworthiness and future legal requirements.

Sources

Foundations for the psychological safety of human and autonomous vehicles interaction

Developing a Safety Management System for the Autonomous Vehicle Industry

Security Implications of User Non-compliance Behavior to Software Updates: A Risk Assessment Study

Safety case template for frontier AI: A cyber inability argument

Assessing the Auditability of AI-integrating Systems: A Framework and Learning Analytics Case Study

System Reliability Engineering in the Age of Industry 4.0: Challenges and Innovations

Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems

Modular Fault Diagnosis Framework for Complex Autonomous Driving Systems
