Advances in Efficient Neural Representations and Tensor Decomposition

The field of neural representations and tensor decomposition is advancing rapidly, driven by the need for efficient and scalable models. Recent work focuses on improving performance while reducing computational overhead, enabling deployment in real-world scenarios such as video understanding, autonomous driving, and robotics. Notable trends include implicit neural representations, sparse tensor decomposition, and generative models for data imputation, with applications ranging from computer vision and scientific machine learning to intelligent tutoring systems. Researchers are also exploring novel architectures, such as superexpressive networks and functional tensor decomposition, to further extend the capabilities of these models. Overall, the field is moving towards more efficient, scalable, and flexible solutions for complex data modeling and analysis. Noteworthy papers include:

  • Temporal Action Detection Model Compression by Progressive Block Drop, which achieves a 25% reduction in computational overhead on two TAD benchmarks.
  • F-INR: Functional Tensor Decomposition for Implicit Neural Representations, which trains 100x faster than existing approaches on video tasks while achieving higher fidelity.
  • SINR: Sparsity Driven Compressed Implicit Neural Representations, which substantially reduces storage requirements for INRs across various configurations.
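To give a flavor of the functional-decomposition idea behind work like F-INR, the sketch below is a deliberately simplified illustration (not the paper's actual method): a multivariate signal f(x, y) is approximated as a sum of products of one-dimensional factor functions, f(x, y) ≈ Σ_k u_k(x)·v_k(y). On a sampled grid this separable structure can be recovered with a truncated SVD; the test signal and rank here are arbitrary choices for the demo.

```python
import numpy as np

# Illustrative sketch only: approximate a 2-D signal by a rank-R sum of
# outer products of 1-D factors, f(x, y) ~ sum_k u_k(x) * v_k(y).
# F-INR itself learns such factors with neural networks; here we use SVD
# on a sampled grid to show the underlying low-rank structure.

x = np.linspace(0.0, 1.0, 128)
y = np.linspace(0.0, 1.0, 128)

# Toy signal built from two separable terms (so it is exactly rank 2).
F = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * y)[None, :] \
    + 0.5 * np.outer(x, y)

# Truncated SVD gives the best rank-R approximation in Frobenius norm.
U, s, Vt = np.linalg.svd(F, full_matrices=False)
R = 2  # truncation rank (a free parameter in this demo)
F_approx = (U[:, :R] * s[:R]) @ Vt[:R, :]

err = np.linalg.norm(F - F_approx) / np.linalg.norm(F)
print(f"rank-{R} relative error: {err:.2e}")
```

Because the toy signal is itself a sum of two separable terms, the rank-2 reconstruction is essentially exact; for real images or video the rank controls a fidelity-versus-storage trade-off, which is the regime the papers above target.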

Sources

Temporal Action Detection Model Compression by Progressive Block Drop

HyperNVD: Accelerating Neural Video Decomposition via Hypernetworks

Samplets: Wavelet concepts for scattered data

End-to-End Implicit Neural Representations for Classification

Accelerating Sparse MTTKRP for Small Tensor Decomposition on GPU

Predictive Performance of Photonic SRAM-based In-Memory Computing for Tensor Decomposition

Generative Data Imputation for Sparse Learner Performance Data Using Generative Adversarial Imputation Networks

SINR: Sparsity Driven Compressed Implicit Neural Representations

Global and Local Structure Learning for Sparse Tensor Completion

Unveiling the Potential of Superexpressive Networks in Implicit Neural Representations

F-INR: Functional Tensor Decomposition for Implicit Neural Representations
