The field of safe control for neural network-based systems is advancing rapidly, with a focus on methods that guarantee the safety and reliability of complex systems. Recent work centers on control barrier functions, neural networks, and probabilistic techniques for certifying safety and stability in applications such as autonomous vehicles and robotic systems. Notably, researchers are exploring adaptive conformal prediction and probabilistic neuro-symbolic layers to make safe reinforcement learning and constraint satisfaction more efficient and effective, while advances in cylindrical algebraic decomposition and polyhedral enclosures enable more efficient and scalable verification of nonlinear neural feedback systems. Overall, the field is converging on more robust, efficient, and scalable methods for certifying safety and stability in complex systems.
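To make the control-barrier-function idea concrete, here is a minimal sketch of a CBF safety filter for a one-dimensional single integrator. Everything in this example (the dynamics, the barrier, and all parameter names) is an illustrative assumption, not taken from any of the cited papers; real systems typically solve a quadratic program instead of the closed-form clamp used here.

```python
# Minimal CBF safety-filter sketch (illustrative assumptions throughout):
# dynamics x' = u, safe set {x <= X_MAX}, barrier h(x) = X_MAX - x.

X_MAX = 1.0    # boundary of the assumed safe set
ALPHA = 5.0    # class-K gain in the CBF condition h' + ALPHA*h >= 0
DT = 0.01      # Euler integration step

def h(x):
    """Barrier function: h(x) >= 0 exactly when x is in the safe set."""
    return X_MAX - x

def safe_control(x, u_nominal):
    """Filter the nominal control so the CBF condition holds.

    For x' = u and h = X_MAX - x, the condition h' + ALPHA*h >= 0
    becomes -u + ALPHA*(X_MAX - x) >= 0, i.e. a closed-form clamp
    u <= ALPHA * h(x). A QP-based filter generalizes this to
    higher-dimensional dynamics.
    """
    return min(u_nominal, ALPHA * h(x))

def simulate(x0, u_nominal, steps):
    """Forward-Euler rollout under the filtered control."""
    x = x0
    for _ in range(steps):
        x += DT * safe_control(x, u_nominal)
    return x

if __name__ == "__main__":
    # An aggressive constant command u = 10 would drive x past X_MAX;
    # the filter keeps the state inside the safe set for all time.
    x_final = simulate(0.0, 10.0, 2000)
    print(x_final <= X_MAX)  # True
```

The design choice to clamp rather than solve a QP is only valid because the dynamics are scalar and control-affine; the papers surveyed above handle the general case, where the filter is the solution of an optimization problem.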
Noteworthy papers include: CP-NCBF, which proposes a framework for synthesizing verified neural control barrier functions with probabilistic safety guarantees; A Probabilistic Neuro-symbolic Layer for Algebraic Constraint Satisfaction, which introduces a differentiable probabilistic layer that guarantees satisfaction of non-convex algebraic constraints over continuous variables; and Verifying Nonlinear Neural Feedback Systems using Polyhedral Enclosures, which presents a forward reachability algorithm that exploits the structure of the nonlinear transition functions to compute tight polyhedral enclosures.
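The probabilistic guarantees behind conformal-prediction-based approaches such as CP-NCBF rest on a simple statistical recipe, split conformal prediction. The sketch below illustrates that recipe on a toy regression problem; the stand-in model, the synthetic calibration data, and all names are assumptions for illustration, not the construction used in the paper.

```python
# Minimal split-conformal-prediction sketch (illustrative assumptions).
import math

def conformal_quantile(residuals, alpha):
    """(1 - alpha)-quantile of calibration residuals, with the standard
    finite-sample correction: take the ceil((n+1)(1-alpha))-th smallest."""
    n = len(residuals)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(residuals)[min(k, n) - 1]

def predict_interval(model, x, q):
    """Interval [model(x) - q, model(x) + q]; under exchangeability it
    covers the true value with probability at least 1 - alpha."""
    y = model(x)
    return (y - q, y + q)

if __name__ == "__main__":
    model = lambda x: 2.0 * x  # stand-in predictor (assumption)
    # Synthetic calibration pairs with bounded, alternating-sign noise.
    xs = [i * 0.1 for i in range(50)]
    calib = [(x, model(x) + ((-1) ** i) * 0.05) for i, x in enumerate(xs)]
    residuals = [abs(y - model(x)) for x, y in calib]
    q = conformal_quantile(residuals, alpha=0.1)
    lo, hi = predict_interval(model, 3.0, q)
```

Here every calibration residual is about 0.05, so the conformal quantile is about 0.05 and the interval at x = 3.0 tightly covers the noise-free value 6.0. CP-NCBF applies the same quantile machinery to verifier residuals of a learned barrier function rather than to regression errors.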