Enhancing Robustness and Safety in Machine Learning Models
Recent work in this area shows a marked shift toward improving the robustness, interpretability, and safety of machine learning models, particularly in safety-critical applications such as autonomous driving, fault diagnosis, and risk assessment. A growing body of work integrates uncertainty quantification into model predictions, most visibly through Bayesian neural networks and ensemble learning, so that outputs remain reliable and interpretable when data are noisy or incomplete. Another notable trend incorporates system-level safety requirements into perception models via reinforcement learning, so that predictions not only improve accuracy but also align with broader safety objectives. Tools for probabilistic verification of neural networks are likewise advancing, which matters for assessing and synthesizing safety guarantees in real-world deployments. Overall, the field is moving toward more resilient and trustworthy AI systems that operate effectively in uncertain and dynamic environments.
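As a concrete illustration of the ensemble-based uncertainty quantification mentioned above, the following minimal sketch (hypothetical, not taken from any of the cited papers) trains a small deep ensemble of feedforward regressors in PyTorch and reads the disagreement between ensemble members as an epistemic-uncertainty signal; the data, network sizes, and hyperparameters are placeholders chosen only for illustration.

    # Minimal deep-ensemble sketch for uncertainty-aware regression.
    # Illustrative only: K independently initialized feedforward networks are
    # trained on the same data, and the spread of their predictions is used
    # as a rough epistemic-uncertainty estimate.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Synthetic 1-D regression data: y = sin(x) + noise.
    x_train = torch.linspace(-3.0, 3.0, 200).unsqueeze(1)
    y_train = torch.sin(x_train) + 0.1 * torch.randn_like(x_train)

    def make_net() -> nn.Module:
        return nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def train(net: nn.Module, epochs: int = 500) -> nn.Module:
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(net(x_train), y_train)
            loss.backward()
            opt.step()
        return net

    # Train an ensemble of K networks that differ only in random initialization.
    K = 5
    ensemble = [train(make_net()) for _ in range(K)]

    # At test time, the ensemble mean is the prediction and the standard
    # deviation across members estimates epistemic uncertainty.
    x_test = torch.linspace(-4.0, 4.0, 50).unsqueeze(1)
    with torch.no_grad():
        preds = torch.stack([net(x_test) for net in ensemble])  # (K, 50, 1)
    mean = preds.mean(dim=0)
    std = preds.std(dim=0)

    # Inputs outside the training range (|x| > 3) should show larger std;
    # safety-critical pipelines use this signal to flag low-confidence outputs.
    print(mean.squeeze()[:3], std.squeeze()[:3])

In practice, the ensemble spread (or a calibrated version of it, e.g. via conformal methods) is what downstream components such as planners or controllers consume to decide how conservatively to act.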
Sources
A Multi-Loss Strategy for Vehicle Trajectory Prediction: Combining Off-Road, Diversity, and Directional Consistency Losses
Probabilistic Prediction of Ship Maneuvering Motion using Ensemble Learning with Feedforward Neural Networks
Safe Adaptive Cruise Control Under Perception Uncertainty: A Deep Ensemble and Conformal Tube Model Predictive Control Approach