Implicit Neural Representations

Report on Current Developments in Implicit Neural Representations

General Direction of the Field

The field of Implicit Neural Representations (INRs) is seeing a surge of innovation, particularly in model architecture, feature transferability, and computational efficiency. Recent developments focus on enhancing the ability of INRs to capture high-frequency details, improving generalization across tasks, and reducing computational overhead. Novel network architectures and training paradigms are driving advances in both the theoretical underpinnings and the practical applications of INRs.

One key trend is the integration of learnable activation functions and Fourier-based methods into INR models. These approaches aim to learn and control task-specific frequency components, improving the model's ability to represent complex signals. In addition, there is growing emphasis on learning transferable features that can be shared across different INRs, enabling faster and more accurate fitting of new signals drawn from a given distribution.
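
As a concrete illustration of the first trend, the sketch below implements a coordinate MLP whose activation functions are learnable truncated Fourier series, so the coefficients of each harmonic are trained alongside the weights. The class names, depth, number of harmonics, and base frequency are illustrative assumptions, not the architecture proposed in the cited FKAN paper.

```python
import torch
import torch.nn as nn

class FourierActivation(nn.Module):
    """Learnable activation phi(x) = sum_k a_k*sin(k*w0*x) + b_k*cos(k*w0*x).

    The number of harmonics and the base frequency w0 are illustrative choices;
    the coefficients a_k, b_k are learned, letting each layer emphasise
    task-specific frequency components.
    """
    def __init__(self, num_harmonics: int = 8, w0: float = 1.0):
        super().__init__()
        self.register_buffer("k", torch.arange(1, num_harmonics + 1).float() * w0)
        self.a = nn.Parameter(torch.randn(num_harmonics) / num_harmonics)
        self.b = nn.Parameter(torch.randn(num_harmonics) / num_harmonics)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., features); broadcast against the harmonic index k
        kx = x.unsqueeze(-1) * self.k  # (..., features, K)
        return (torch.sin(kx) * self.a + torch.cos(kx) * self.b).sum(-1)

class FourierINR(nn.Module):
    """Coordinate MLP with a learnable Fourier-series activation (illustrative)."""
    def __init__(self, in_dim: int = 2, hidden: int = 256, out_dim: int = 3, depth: int = 4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), FourierActivation()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, in_dim) sample coordinates; returns (N, out_dim) signal values
        return self.net(coords)
```

Because the sine and cosine coefficients are learned per layer, the network can emphasise or suppress particular frequency bands for a given task, which is the intuition behind controlling task-specific frequency components.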

Another notable direction is the exploration of continuous kernel representations and their efficient scaling through sparse learning in the Fourier domain. This approach addresses computational cost, parameter efficiency, and spectral bias, making continuous kernels more practical for real-world applications.
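
The following is a minimal sketch of that idea, assuming a 1-D continuous kernel parameterised by a small, learnable set of Fourier components that can be sampled at any resolution before convolution. The component count, frequency range, and class names are illustrative choices, not the method of the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseFourierKernel(nn.Module):
    """Continuous 1-D kernel k(t) = sum_j c_j * cos(2*pi*f_j*t + p_j),
    parameterised by a sparse set of learnable frequencies f_j (illustrative)."""
    def __init__(self, num_freqs: int = 16, max_freq: float = 8.0):
        super().__init__()
        self.freqs = nn.Parameter(torch.rand(num_freqs) * max_freq)   # sparse frequency set
        self.coeffs = nn.Parameter(torch.randn(num_freqs) / num_freqs)
        self.phases = nn.Parameter(torch.zeros(num_freqs))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (N,) kernel sample positions in [-1, 1]; returns kernel values (N,)
        arg = 2 * torch.pi * t.unsqueeze(-1) * self.freqs + self.phases
        return (torch.cos(arg) * self.coeffs).sum(-1)

# Sample the continuous kernel at an arbitrary resolution and convolve.
kernel_fn = SparseFourierKernel()
positions = torch.linspace(-1.0, 1.0, steps=33)       # any resolution works
weights = kernel_fn(positions).view(1, 1, -1)
signal = torch.randn(1, 1, 256)
out = F.conv1d(signal, weights, padding=16)
```

In this sketch the parameter count is bounded by the number of Fourier components rather than by the sampled kernel size, which is the sense in which the representation is sparse.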

Furthermore, the field is seeing advances in the scale generalisation properties of neural networks, particularly in the context of Gaussian derivative networks. These networks are being extended to handle spatial scaling variations and to become more explainable, which is crucial for applications in computer vision and image processing.
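
The sketch below shows the basic building block of such networks, assuming first-order derivative-of-Gaussian filters applied over a small set of scales, with scale-normalised responses mixed by learned 1x1 weights. The scale values, truncation radius, and layer names are illustrative assumptions, not the extended architecture studied in the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_derivative_1d(sigma: float, order: int, radius: int) -> torch.Tensor:
    """Sampled 1-D Gaussian (order 0) or its first derivative (order 1)."""
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return g if order == 0 else -x / sigma ** 2 * g

class ScaleChannelGaussDeriv(nn.Module):
    """Applies x- and y-derivative-of-Gaussian filters at several scales and
    mixes the responses with learned 1x1 weights (illustrative sketch)."""
    def __init__(self, scales=(1.0, 2.0, 4.0), out_channels: int = 8):
        super().__init__()
        self.scales = scales
        self.mix = nn.Conv2d(2 * len(scales), out_channels, kernel_size=1)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, 1, H, W) grayscale input
        responses = []
        for s in self.scales:
            r = int(3 * s)                       # truncate the Gaussian at ~3 sigma
            g = gaussian_derivative_1d(s, 0, r)
            dg = gaussian_derivative_1d(s, 1, r)
            kx = torch.outer(g, dg).view(1, 1, 2 * r + 1, 2 * r + 1)  # d/dx filter
            ky = torch.outer(dg, g).view(1, 1, 2 * r + 1, 2 * r + 1)  # d/dy filter
            # scale-normalised first derivatives (multiply by sigma) keep
            # responses comparable across scales
            responses.append(s * F.conv2d(img, kx, padding=r))
            responses.append(s * F.conv2d(img, ky, padding=r))
        return self.mix(torch.cat(responses, dim=1))
```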

Noteworthy Innovations

  1. Fourier Kolmogorov-Arnold Networks (FKAN): Introduces learnable activation functions modeled as Fourier series to control task-specific frequency components, significantly improving performance in high-resolution and high-dimensional data tasks.

  2. STRAINER: A new INR training framework that learns transferable features, providing a powerful initialization for fitting images from the same domain, with a substantial gain in signal quality (see the sketch after this list).

  3. Sparse Fourier Domain Learning: Proposes a novel approach to scale continuous kernels efficiently, reducing computational and memory demands while mitigating spectral bias.

  4. SL$^{2}$A-INR: A hybrid network with a single-layer learnable activation function, setting new benchmarks in accuracy, quality, and convergence rates across diverse INR tasks.
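
To make the transferable-feature idea in item 2 concrete, the sketch below pre-fits several images from one domain while sharing the early INR layers across them, then reuses those shared layers as the initialization when fitting a new image. The layer split, network sizes, optimizer settings, and function names are assumptions for illustration, not the exact STRAINER recipe.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Early INR layers shared across images from the same domain (illustrative)."""
    def __init__(self, in_dim: int = 2, hidden: int = 256, depth: int = 4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

def pretrain_shared(encoder, images, coords, steps=1000, lr=1e-4):
    """Jointly fit several training images, each with its own output head,
    so the shared encoder learns domain-level features (sketch).
    coords: (N, 2) pixel coordinates; images: list of (N, 3) RGB targets."""
    heads = nn.ModuleList(nn.Linear(256, 3) for _ in images)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=lr)
    for _ in range(steps):
        loss = sum(((head(encoder(coords)) - img) ** 2).mean()
                   for head, img in zip(heads, images))
        opt.zero_grad()
        loss.backward()
        opt.step()

def fit_new_image(encoder, target, coords, steps=200, lr=1e-4):
    """Fit a new image from the same domain, starting from the learned encoder."""
    head = nn.Linear(256, 3)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(steps):
        loss = ((head(encoder(coords)) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return head
```

Only the shared encoder is carried over to the new signal; each image keeps its own lightweight output head, which is what lets the shared features stay domain-generic while the per-image fit converges quickly.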

These innovations are pushing the boundaries of what INRs can achieve, making them more versatile and efficient for a wide range of applications in computer vision, signal processing, and beyond.

Sources

Implicit Neural Representations with Fourier Kolmogorov-Arnold Networks

Learning Transferable Features for Implicit Neural Representations

Scaling Continuous Kernels with Sparse Fourier Domain Learning

Scale generalisation properties of extended scale-covariant and scale-invariant Gaussian derivative networks on image datasets with spatial scaling variations

Single-Layer Learnable Activation for Implicit Neural Representation (SL$^{2}$A-INR)

Relative Representations: Topological and Geometric Perspectives

Extended Deep Submodular Functions

Tight and Efficient Upper Bound on Spectral Norm of Convolutional Layers
