Control and Estimation Techniques for Complex Systems

Current Developments in the Research Area

Recent advances in this research area show a strong trend toward enhancing the robustness, efficiency, and applicability of control and estimation methods, particularly for complex and uncertain systems. The following sections outline the key directions and innovations emerging from the latest publications.

Generalization and Precision in Eigenvalue Estimation

A significant focus has been on improving the accuracy and applicability of eigenvalue estimation techniques, particularly for matrices with interval-indefinite or non-stationary elements. Innovations such as the introduction of e-circles enable more precise localization of eigenvalue regions, which is particularly beneficial in the stability analysis of network systems. These advances permit the analysis of larger networks, with a higher number of agents, than traditional computational tools such as CVX, YALMIP, eig, and lyap can handle. Additionally, applying these methods to systems without diagonal dominance has opened new avenues for the design of control laws for non-stationary systems.
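To ground the idea of eigenvalue-region localization, the following sketch computes classical Gershgorin discs, the starting point that e-circle-style refinements tighten. The matrix and the stability check are illustrative, not taken from the cited work.

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs of the Gershgorin discs of a square matrix.

    Every eigenvalue of A lies in the union of these discs, so disc locations
    can certify stability without computing eigenvalues explicitly.
    """
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

# A diagonally dominant example: every disc lies strictly in the left
# half-plane, so the matrix is Hurwitz without calling eig.
A = np.array([[-4.0, 1.0, 0.5],
              [0.5, -3.0, 1.0],
              [1.0, 0.5, -5.0]])
discs = gershgorin_discs(A)
```

For non-diagonally-dominant matrices the discs may be uninformative, which is exactly the regime the generalized results above target.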

Convex Reformulation for State Estimation

The challenge of accommodating outlier measurements in state estimation has been addressed through the development of convex reformulations. These reformulations transform mixed-binary variables into linear constraints, making the optimization problems solvable with standard convex programming toolboxes. This approach significantly enhances computational efficiency and has been shown to outperform traditional baselines, such as Kalman filtering with threshold-based outlier decisions, in risk minimization and in meeting performance specifications.
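As a minimal sketch of the "binary decision replaced by a convex program" idea, the toy problem below estimates a scalar state from measurements containing an outlier. Instead of a combinatorial accept/reject decision per measurement, it solves an equivalent least-absolute-deviation fit as a linear program with a standard solver. The data and formulation are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: five measurements of a scalar state; the last one is an outlier.
y = np.array([1.0, 1.1, 0.9, 1.05, 8.0])
n = len(y)

# LP reformulation of min_x sum_i |y_i - x|:
# decision vector z = [x, t_1, ..., t_n], minimize sum t_i subject to
#   y_i - x <= t_i   ->  -x - t_i <= -y_i
#   x - y_i <= t_i   ->   x - t_i <=  y_i
c = np.concatenate(([0.0], np.ones(n)))
A_ub = np.vstack([
    np.hstack([-np.ones((n, 1)), -np.eye(n)]),
    np.hstack([ np.ones((n, 1)), -np.eye(n)]),
])
b_ub = np.concatenate([-y, y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] + [(0, None)] * n)
x_hat = res.x[0]  # lands on the inlier cluster, unlike the mean (about 2.4)
```

The absolute-value objective effectively "rejects" the outlier without any binary variable, which is the spirit of turning mixed-binary decisions into constraints a convex toolbox can handle.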

Cubature-Based Uncertainty Estimation

The use of cubature formulas based on sparse grids for uncertainty estimation in nonlinear regression models has gained traction. This method allows for the calculation of the variance of regression results with a number of cubature points close to the theoretical minimum required for a given level of exactness. This approach has been applied to estimate the prediction uncertainty of complex models, such as the NRTL model, demonstrating its effectiveness in handling measurement errors and improving the reliability of model-based predictions.
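A one-dimensional Gauss-Hermite cubature, sketched below, illustrates the principle: a handful of deterministic points reproduces Gaussian moments exactly up to a given polynomial degree, which is how sparse-grid cubature approaches the theoretical minimum number of points for a given exactness level. The function and noise model here are illustrative.

```python
import numpy as np

def cubature_mean_var(f, mu, sigma, order=5):
    """Gauss-Hermite cubature estimate of E[f(X)] and Var[f(X)], X ~ N(mu, sigma^2)."""
    t, w = np.polynomial.hermite.hermgauss(order)  # physicists' Hermite nodes/weights
    x = mu + np.sqrt(2.0) * sigma * t              # change of variables to N(mu, sigma^2)
    w = w / np.sqrt(np.pi)                         # normalize to a probability measure
    fx = f(x)
    mean = np.sum(w * fx)
    var = np.sum(w * (fx - mean) ** 2)
    return mean, var

# For f(x) = x^2 with X ~ N(0, 1): E[f] = 1 and Var[f] = 2, and a 5-point
# rule (exact for polynomials up to degree 9) recovers both exactly.
m, v = cubature_mean_var(lambda x: x ** 2, 0.0, 1.0)
```

In higher dimensions, sparse (Smolyak-type) grids combine such one-dimensional rules while keeping the point count far below a full tensor grid.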

Kernel-Based Regularization for Continuous-Time System Identification

The identification of continuous-time systems from discrete-time input and output signals has seen advances through the application of kernel-based regularization methods. These methods, which avoid the model-structure selection difficulties of parametric methods, have been shown to be more robust and accurate, especially when the sample size is small. The development of closed-form estimators for kernel-based regularization under typical intersample behaviors, such as zero-order hold or band-limited signals, has paved the way for broader applications in continuous-time system identification.

Adaptive Control for Discrete-Time Systems with Disturbances

A novel adaptive control method for discrete-time systems with disturbances has been proposed, combining directional forgetting and concurrent learning. The method requires neither the persistent excitation condition nor prior information on disturbances, unknown parameters, or matching conditions, and it guarantees exponential uniform ultimate boundedness. A theoretically derived upper bound on the ultimate bound, expressed in terms of the forgetting factor, is a significant advance in the design of adaptive controllers for practical applications.
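For context, a plain recursive least-squares update with exponential forgetting is sketched below; it is a simplified stand-in, since the cited method uses directional forgetting (discounting only along the excited direction) together with concurrent learning on recorded data to remove the persistent-excitation requirement. The plant and noise levels are illustrative.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with exponential forgetting factor lam."""
    e = y - phi @ theta                   # prediction error
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    theta = theta + k * e                 # parameter update
    P = (P - np.outer(k, phi @ P)) / lam  # covariance update with forgetting
    return theta, P

rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])        # unknown parameters to identify
theta, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(500):
    phi = rng.standard_normal(2)          # persistently exciting regressor
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
```

Under poor excitation, uniform forgetting lets P blow up in unexcited directions ("covariance windup"); directional forgetting and concurrent learning are precisely the mechanisms that avoid this.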

Offline Conditioning for Online Learning in Nonlinear State-Space Models

The challenge of online inference and learning in nonlinear state-space models has been addressed through a procedure that involves offline conditioning of a highly flexible Gaussian Process formulation. This approach restricts online learning to a subspace spanned by expressive basis functions, enabling the use of standard particle filters for Bayesian inference. The method has been shown to enable rapid convergence with significantly fewer particles compared to baseline and state-of-the-art methods, making it a promising approach for real-world applications.
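A minimal version of the pipeline is sketched below: a transition function expressed in a fixed basis whose weights stand in for the offline-conditioned degrees of freedom, tracked online by a standard bootstrap particle filter. The basis, weights, and noise levels are all hypothetical, not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)

centers = np.linspace(-3.0, 3.0, 7)

def basis(x):
    """Radial basis features of a scalar state (vectorized over batches)."""
    return np.exp(-0.5 * (np.asarray(x)[..., None] - centers) ** 2)

w_offline = np.zeros(7)
w_offline[3] = 0.9            # pretend offline conditioning produced these weights

def f(x):
    """Learned transition mean: weighted sum of basis functions."""
    return basis(x) @ w_offline

# Simulate the "true" system and noisy observations.
T, n_p = 50, 200
x_traj, y_obs, x = [], [], 0.5
for _ in range(T):
    x = f(x) + 0.1 * rng.standard_normal()
    x_traj.append(float(x))
    y_obs.append(float(x + 0.2 * rng.standard_normal()))

# Bootstrap particle filter using the basis-function model.
particles = rng.standard_normal(n_p)
est = []
for y in y_obs:
    particles = f(particles) + 0.1 * rng.standard_normal(n_p)  # propagate
    logw = -0.5 * ((y - particles) / 0.2) ** 2                 # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est.append(float(np.sum(w * particles)))                   # posterior mean
    particles = particles[rng.choice(n_p, size=n_p, p=w)]      # resample
```

Restricting the unknown dynamics to a low-dimensional basis is what keeps the particle count modest: the filter only has to track the state, not a full nonparametric model.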

Model-Free Stability-Ensuring Reinforcement Learning

A novel reinforcement learning agent, Critic As Lyapunov Function (CALF), has been introduced, which ensures online environment stabilization without requiring a model of the system. This approach has been demonstrated to greatly improve learning performance in a case study with a mobile robot simulator, outperforming traditional methods such as SARSA and its modified versions. CALF represents a viable approach to fusing classical control with reinforcement learning, particularly in scenarios where competing approaches are either offline or model-based.

Hierarchical Event-Triggered Control

A hierarchical architecture for event-triggered control has been proposed to improve resource efficiency. This architecture introduces the concept of a deadline policy for optimizing long-term discounted inter-event times, which is a significant improvement over traditional greedy strategies. The application of this scheme to the control of an orbiting spacecraft has shown superior performance in terms of actuation frequency reduction while maintaining safety guarantees.
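The baseline such deadline policies improve upon can be sketched in a few lines: a greedy trigger that updates the actuator only when the state has drifted past a fixed threshold. The plant, gain, and threshold below are illustrative.

```python
# Minimal event-triggered control sketch for an unstable scalar plant
# x+ = a*x + b*u with feedback u = -k*x. The actuator holds its last value
# and updates only when the state drifts past a fixed threshold; this greedy
# trigger is the kind of strategy the learned deadline policies outperform
# by optimizing long-term discounted inter-event times instead.
a, b, k, threshold = 1.05, 1.0, 0.6, 0.1
x, u, x_held, events = 1.0, 0.0, float("inf"), 0
for t in range(100):
    if abs(x - x_held) > threshold:   # event: sample the state, update control
        u, x_held = -k * x, x
        events += 1
    x = a * x + b * u                 # plant evolves under the held control
```

Even this crude trigger keeps the state bounded while actuating far less often than a periodic controller, which is the resource-efficiency argument for event-triggered schemes.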

Provably Efficient Reinforcement Learning with Linear Function Approximation

A computationally tractable algorithm for learning infinite-horizon average-reward linear Markov decision processes (MDPs) and linear mixture MDPs has been proposed. This algorithm achieves the best-known regret upper bound and applies novel techniques to control the covering number of the value function class and the span of optimistic estimators of the value function. The algorithm represents a significant advance in reinforcement learning, particularly in the infinite-horizon average-reward setting, where computationally tractable methods with strong regret guarantees have been scarce.

Sources

Generalization of Gershgorin's theorem. Analysis and design of control laws

Convex Reformulation of Information Constrained Linear State Estimation with Mixed-Binary Variables for Outlier Accommodation

Cubature-based uncertainty estimation for nonlinear regression models

Kernel-Based Regularized Continuous-Time System Identification from Sampled Data

Discrete-time Indirect Adaptive Control for Systems with State-Dependent Disturbances via Directional Forgetting: Concurrent Learning Approach

Efficient Online Inference and Learning in Partially Known Nonlinear State-Space Models by Learning Expressive Degrees of Freedom Offline

Critic as Lyapunov function (CALF): a model-free, stability-ensuring agent

Hierarchical Event-Triggered Systems: Safe Learning of Quasi-Optimal Deadline Policies

Provably Efficient Infinite-Horizon Average-Reward Reinforcement Learning with Linear Function Approximation

Kernel-Based Learning of Stable Nonlinear Systems

A Model-Free Optimal Control Method With Fixed Terminal States and Delay

Participation Factors for Nonlinear Autonomous Dynamical Systems in the Koopman Operator Framework

Stochastic Data-Driven Predictive Control: Chance-Constraint Satisfaction with Identified Multi-step Predictors

Direct Data-Driven Discounted Infinite Horizon Linear Quadratic Regulator with Robustness Guarantees

Uniform Ergodicity and Ergodic-Risk Constrained Policy Optimization

Robust Reinforcement Learning with Dynamic Distortion Risk Measures

Uncertainty Analysis of Limit Cycle Oscillations in Nonlinear Dynamical Systems with the Fourier Generalized Polynomial Chaos Expansion

Sample Complexity Bounds for Linear System Identification from a Finite Set

Data-conforming data-driven control: avoiding premature generalizations beyond data

Distributed Deep Koopman Learning for Nonlinear Dynamics

3DIOC: Direct Data-Driven Inverse Optimal Control for LTI Systems

Learning Unstable Continuous-Time Stochastic Linear Control Systems

Almost Sure Convergence of Linear Temporal Difference Learning with Arbitrary Features

Data-Efficient Quadratic Q-Learning Using LMIs
