Advancements in Tensor Decomposition and Numerical Linear Algebra

Recent developments in tensor decomposition and numerical linear algebra point to a strong push toward greater computational efficiency and accuracy in high-dimensional problems. Progress is particularly visible in tensor CP decomposition, where new models and algorithms aim to estimate the CP rank accurately and to integrate rank estimation with the decomposition itself. In robust tensor principal component analysis (RTPCA), scalable scaled gradient descent methods within the t-SVD framework promise linear convergence and low computational cost. Numerical homogenization also advances with the Super-Localized Orthogonal Decomposition (SLOD) method for linear elasticity problems, which offers improved sparsity and computational efficiency. In addition, tensor-train (TT) approximations of the inverses of large-scale structured matrices open new avenues for solving PDEs with massive numbers of degrees of freedom. Collectively, these developments underscore a trend toward more efficient, accurate, and scalable computational methods in tensor analysis and numerical linear algebra.
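To make the rank-estimation idea concrete, the following is a minimal sketch of CP alternating least squares combined with a column-wise group-sparse proximal step: the penalty drives redundant rank-one components to zero, and the surviving columns give a rank estimate. This is an illustrative simplification under assumed parameter choices (e.g. the penalty weight `lam`), not the algorithm of the cited paper.

```python
# Minimal sketch (not the paper's algorithm): CP-ALS on a 3-way tensor with a
# group-sparse (column-wise) proximal step on one factor, so that redundant
# rank-one components shrink to zero and the surviving columns estimate the CP rank.
import numpy as np

rng = np.random.default_rng(0)

def cp_reconstruct(A, B, C):
    # X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def group_soft_threshold(A, tau):
    # Proximal operator of tau * sum_r ||A[:, r]||_2: shrink whole columns of A.
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    return A * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

# Synthetic 3-way tensor of true CP rank 3, fitted with an overestimated rank R = 8.
I, J, K, true_rank, R = 20, 18, 16, 3, 8
X = cp_reconstruct(rng.standard_normal((I, true_rank)),
                   rng.standard_normal((J, true_rank)),
                   rng.standard_normal((K, true_rank)))

A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
lam = 0.5   # group-sparsity strength (hypothetical value, tune per problem)

for it in range(200):
    # Exact ALS updates for B and C (MTTKRP plus Hadamard product of Gram matrices).
    G = (A.T @ A) * (C.T @ C)
    B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv(G)
    G = (A.T @ A) * (B.T @ B)
    C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv(G)
    # Proximal gradient step on A with a column-wise group-lasso penalty.
    G = (B.T @ B) * (C.T @ C)
    M = np.einsum('ijk,jr,kr->ir', X, B, C)     # MTTKRP for the first mode
    eta = 1.0 / (np.linalg.norm(G, 2) + 1e-12)  # step size from the Lipschitz constant
    A = group_soft_threshold(A - eta * (A @ G - M), eta * lam)

active = np.linalg.norm(A, axis=0) > 1e-8
print('estimated CP rank:', int(active.sum()))
print('relative fit error:',
      np.linalg.norm(X - cp_reconstruct(A, B, C)) / np.linalg.norm(X))
```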

Noteworthy papers include:

  • A novel CP decomposition model with group sparse regularization, offering a robust solution for rank estimation and tensor decomposition.
  • The RTPCA-SGD method, which introduces a scalable scaled gradient descent approach for robust tensor PCA, achieving linear convergence and computational efficiency (a simplified matrix-case sketch follows this list).
  • The SLOD method for numerical homogenization, significantly improving computational efficiency in solving linear elasticity problems.
  • A study on the low-rank property of the inverse of large-scale structured matrices in the TT format, providing a computationally verifiable condition for low-rank TT representation.
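As referenced in the RTPCA-SGD item above, here is a minimal matrix-case sketch of the scaled gradient descent idea for robust PCA. The cited method operates on tensors within the t-SVD framework and adds learnable components, so this is only an assumed simplification for illustration; the sparsification rule and step size are hypothetical choices.

```python
# Minimal matrix-case sketch of scaled gradient descent for robust PCA
# (assumed simplification: separate a low-rank matrix from sparse corruptions;
# the cited RTPCA-SGD method works with tensors under the t-SVD framework).
import numpy as np

rng = np.random.default_rng(1)

def hard_threshold(R, frac):
    # Keep only the largest-magnitude entries (a fraction `frac` of all entries),
    # a common sparsity estimator in robust PCA.
    S = np.zeros_like(R)
    k = int(frac * R.size)
    if k > 0:
        idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-k:], R.shape)
        S[idx] = R[idx]
    return S

# Synthetic observation: low-rank plus sparse corruptions.
n, m, r, corrupt_frac = 100, 80, 5, 0.05
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
S_true = np.zeros((n, m))
mask = rng.random((n, m)) < corrupt_frac
S_true[mask] = 10.0 * rng.standard_normal(mask.sum())
M = L_true + S_true

# Spectral initialization of the factors U, V after one sparsity removal step.
S = hard_threshold(M, corrupt_frac)
U0, s0, V0t = np.linalg.svd(M - S, full_matrices=False)
U = U0[:, :r] * np.sqrt(s0[:r])
V = V0t[:r, :].T * np.sqrt(s0[:r])

eta = 0.5
for it in range(100):
    L = U @ V.T
    S = hard_threshold(M - L, corrupt_frac)      # re-estimate the sparse component
    R = L + S - M                                # residual of the current fit
    # Scaled gradient steps: preconditioning by (V^T V)^{-1} and (U^T U)^{-1}
    # removes the dependence on the condition number of the low-rank factor.
    U_new = U - eta * (R @ V) @ np.linalg.inv(V.T @ V)
    V_new = V - eta * (R.T @ U) @ np.linalg.inv(U.T @ U)
    U, V = U_new, V_new

print('relative error on L:',
      np.linalg.norm(U @ V.T - L_true) / np.linalg.norm(L_true))
```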

Sources

Group Sparse-based Tensor CP Decomposition: Model, Algorithms, and Applications in Chemometrics

LU Decomposition and Generalized Autonne-Takagi Decomposition of Dual Matrices and their Applications

Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA

Super-Localized Orthogonal Decomposition Method for Heterogeneous Linear Elasticity

A study on the 1-$\Gamma$ inverse of tensors via the M-Product

Implementation Pitfalls for Carbonate Mineral Dissolution -- a Technical Note

Local particle refinement in terramechanical simulations

Counting Equilibria of the Electrostatic Potential

Complexity of Tensor Product Functions in Representing Antisymmetry

Some results on core EP Drazin matrices and partial isometries

A Nonlocal size modified Poisson-Boltzmann Model and Its Finite Element Solver for Protein in Multi-Species Ionic Solution

TUCKET: A Tensor Time Series Data Structure for Efficient and Accurate Factor Analysis over Time Ranges

Provable Low-Rank Tensor-Train Approximations in the Inverse of Large-Scale Structured Matrices

When is the Resolvent Like a Rank One Matrix?

A Low-Rank QTT-based Finite Element Method for Elasticity Problems

Effective algorithms for tensor train decomposition via the UTV framework

Infinity norm bounds for the inverse of Nekrasov matrices using scaling matrices

Tensor-based Dinkelbach method for computing generalized tensor eigenvalues and its applications
