Enhancing Robustness and Safety in Machine Learning Models

Recent developments in this area show a marked shift towards enhancing the robustness, interpretability, and safety of machine learning models, particularly in critical applications such as autonomous driving, fault diagnosis, and risk assessment. There is a growing emphasis on integrating uncertainty quantification into model predictions, evident in the use of Bayesian neural networks and ensemble learning methods; these approaches aim to provide more reliable and interpretable outputs, especially when data is noisy or incomplete. There is also a notable trend towards incorporating system-level safety requirements into perception models through reinforcement learning, so that predictions not only improve accuracy but also align with broader safety objectives. In parallel, the field is seeing progress on tools for probabilistic verification of neural networks, which are crucial for assessing and synthesizing safety guarantees in real-world applications. Overall, the research is moving towards more resilient and trustworthy AI systems that can operate effectively in uncertain and dynamic environments.
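To make the ensemble-based uncertainty idea concrete, the sketch below trains a handful of independently initialized networks on a toy regression task and uses their disagreement as a simple epistemic-uncertainty estimate. This is a minimal illustration only; the architecture, dataset, and hyperparameters are assumptions for demonstration and are not taken from any of the papers listed under Sources.

```python
# Minimal sketch of deep-ensemble uncertainty quantification (illustrative
# assumptions throughout; not the method of any specific cited paper).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy noisy 1-D regression data: y = sin(x) + noise
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

def make_model():
    # Small MLP; size chosen arbitrarily for the example.
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Train an ensemble of independently initialized networks on the same data.
ensemble = []
for _ in range(5):
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    ensemble.append(model)

# Predictive mean and ensemble disagreement on test inputs. The standard
# deviation across members grows outside the training range, where data
# is sparse, which is the behaviour used as an uncertainty signal.
x_test = torch.linspace(-4, 4, 50).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([m(x_test) for m in ensemble])  # shape (5, 50, 1)
mean = preds.mean(dim=0)
std = preds.std(dim=0)
```

In a safety-oriented pipeline, such a disagreement signal would typically feed a downstream decision rule (for example, deferring or widening a safety margin when uncertainty is high) rather than being reported on its own.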

Sources

On the Unknowable Limits to Prediction

A Multi-Loss Strategy for Vehicle Trajectory Prediction: Combining Off-Road, Diversity, and Directional Consistency Losses

Risk-Averse Certification of Bayesian Neural Networks

Probabilistic Prediction of Ship Maneuvering Motion using Ensemble Learning with Feedforward Neural Networks

Predictive Inference With Fast Feature Conformal Prediction

Calibration through the Lens of Interpretability

Towards Robust Interpretable Surrogates for Optimization

Uncertainty-Aware Artificial Intelligence for Gear Fault Diagnosis in Motor Drives

Learning Ensembles of Vision-based Safety Control Filters

Crash Severity Risk Modeling Strategies under Data Imbalance

Artificial Expert Intelligence through PAC-reasoning

SAVER: A Toolbox for Sampling-Based, Probabilistic Verification of Neural Networks

Incorporating System-level Safety Requirements in Perception Models via Reinforcement Learning

Risk-aware Classification via Uncertainty Quantification

Reinforced Symbolic Learning with Logical Constraints for Predicting Turbine Blade Fatigue Life

Safe Adaptive Cruise Control Under Perception Uncertainty: A Deep Ensemble and Conformal Tube Model Predictive Control Approach

Semi-automated transmission control for motorcycle gearshift: design, data-driven tuning and experimental validation

An In-Depth Examination of Risk Assessment in Multi-Class Classification Algorithms
