The field of numerical linear algebra is moving toward more efficient and accurate methods for solving large-scale linear systems and approximating matrix functions. Recent work has focused on improving the performance of Krylov subspace methods such as GMRES and the Lanczos algorithm by introducing new techniques for error estimation, adaptive truncation, and deflation (a minimal Lanczos sketch is given after the list below). Researchers are also exploring randomized methods and block Krylov subspace methods for approximating truncated tensor SVDs and for solving linear systems whose matrices have dense spectra. Notable papers in this area include:

- Optimal Krylov On Average, which proposes an adaptive randomized truncation estimator for Krylov subspace methods.
- A Krylov projection algorithm for large symmetric matrices with dense spectra, which introduces an adaptive Krein-Nudelman extension to the block-Lanczos method, allowing further acceleration at negligible cost.
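
As background for the methods surveyed above, the following is a minimal sketch of the classic symmetric Lanczos iteration that these papers build on and extend. It is not the algorithm of any cited paper: the function name `lanczos`, the step count, and the random test matrix are illustrative assumptions, and real implementations typically add reorthogonalization, blocking, or deflation.

```python
import numpy as np


def lanczos(A, v0, m):
    """Run up to m steps of the symmetric Lanczos iteration.

    Returns an orthonormal Krylov basis V (n x k, k <= m) and the
    tridiagonal projection T = V^T A V (k x k).
    """
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)       # diagonal of T
    beta = np.zeros(m - 1)    # off-diagonal of T

    v = v0 / np.linalg.norm(v0)
    V[:, 0] = v
    w = A @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v

    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        if beta[j - 1] < 1e-12:
            # Lucky breakdown: the Krylov subspace is exhausted.
            V = V[:, :j]
            alpha, beta = alpha[:j], beta[:j - 1]
            break
        v_new = w / beta[j - 1]
        V[:, j] = v_new
        # Three-term recurrence: w = A v_j - beta_{j-1} v_{j-1} - alpha_j v_j
        w = A @ v_new - beta[j - 1] * v
        alpha[j] = v_new @ w
        w = w - alpha[j] * v_new
        v = v_new

    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T


# Illustrative usage: Ritz values (eigenvalues of T) approximate the
# extremal eigenvalues of a symmetric test matrix A.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = (B + B.T) / 2
V, T = lanczos(A, rng.standard_normal(200), 30)
print(np.sort(np.linalg.eigvalsh(T))[-3:])  # largest Ritz values
print(np.sort(np.linalg.eigvalsh(A))[-3:])  # largest true eigenvalues
```

The tridiagonal matrix T is the small projected problem on which error estimates, adaptive truncation rules, and extensions such as the block-Lanczos variants mentioned above operate.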