Advances in Krylov Subspace Methods and Matrix Function Approximations

Research in numerical linear algebra continues to advance efficient and accurate methods for solving large-scale linear systems and approximating matrix functions. Recent work improves Krylov subspace methods, such as GMRES and the Lanczos algorithm, through new techniques for error estimation, adaptive truncation, and deflation. Researchers are also applying randomized and block Krylov subspace methods to approximate the truncated tensor SVD and to solve linear systems whose matrices have dense spectra. Notable papers in this area include "Optimal Krylov On Average", which proposes an adaptive randomized truncation estimator for Krylov subspace methods, and "A Krylov projection algorithm for large symmetric matrices with dense spectra", which introduces an adaptive Krein-Nudelman extension to the block-Lanczos method, allowing further acceleration at negligible cost.
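To make the Krylov-based matrix function approximations discussed above concrete, the following is a minimal sketch (not taken from any of the cited papers) of the standard Lanczos approach to computing f(A)b for a symmetric matrix A: build an orthonormal Krylov basis Q and a small tridiagonal matrix T, then approximate f(A)b by ||b|| Q f(T) e1. The function name and the choice of full reorthogonalization are illustrative assumptions.

```python
import numpy as np

def lanczos_fA_b(A, b, f, k):
    """Approximate f(A) @ b for symmetric A using k Lanczos steps.

    Builds an orthonormal Krylov basis Q for span{b, Ab, ..., A^(k-1)b}
    and the tridiagonal projection T = Q^T A Q, then returns
    ||b|| * Q @ f(T) @ e_1.
    """
    n = b.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)       # diagonal of T
    beta = np.zeros(k - 1)    # off-diagonal of T
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization: simple but costly safeguard against
        # the well-known loss of orthogonality in plain Lanczos.
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:  # invariant subspace found; truncate early
                Q, alpha, beta = Q[:, : j + 1], alpha[: j + 1], beta[:j]
                break
            Q[:, j + 1] = w / beta[j]
    # Apply f to the small tridiagonal T via its eigendecomposition.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])  # f(T) @ e_1
    return np.linalg.norm(b) * (Q @ fT_e1)
```

In practice the number of steps k is where the adaptive truncation and error-estimation techniques surveyed above come into play: one stops the iteration once an error estimate for the current approximation falls below a tolerance, rather than fixing k in advance.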

Sources

Adaptive Finite State Projection with Quantile-Based Pruning for Solving the Chemical Master Equation

Optimal Krylov On Average

A Note on the Stability of the Sherman-Morrison-Woodbury Formula

Randomized block Krylov method for approximation of truncated tensor SVD

Error formulas for block rational Krylov approximations of matrix functions

Improved Polynomial Bounds and Acceleration of GMRES by Solving a min-max Problem on Rectangles, and by Deflating

A Krylov projection algorithm for large symmetric matrices with dense spectra
