Advancing Neural Networks and Optimization for Computational Imaging

Recent developments in machine learning and computational imaging show a significant shift toward advanced neural network architectures and novel training methodologies for complex inverse problems. A notable trend is the integration of hierarchical and mixture-of-experts (MoE) models, employed to improve the generalizability and adaptability of neural networks across domains such as high-level synthesis for FPGA design and implicit neural representations for image reconstruction. These models handle domain shifts and improve performance on unseen data by learning locally piecewise-continuous functions and aggregating information at multiple granularities.

There is also growing interest in noise-robust training for diagnostic decision support systems, where informed deep abstaining classifiers are proposed to improve robustness against label noise introduced by automated annotation. Relatedly, self-relaxed joint training for severity estimation with ordinal noisy labels demonstrates a novel approach to handling noisy data in medical imaging, exploiting the ordinal structure of severity levels to improve classification accuracy.

Another emerging direction is learned optimization, exemplified by the Learned Alternating Minimization Algorithm (LAMA), which combines data-driven components with classical optimization techniques to solve inverse problems more efficiently and accurately. Finally, the exploration of super-resolution in disordered media and the prediction of encoding errors in implicit neural representations are advancing the field by enabling higher-fidelity reconstructions and more efficient signal compression.
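To make the mixture-of-experts idea concrete, here is a minimal sketch of a dense MoE layer: a gating network produces per-input mixture weights, and the output is the gate-weighted sum of the expert outputs. This is an illustrative toy (linear experts, numpy, no training loop), not the architecture of any of the cited papers; all names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MixtureOfExperts:
    """Minimal dense MoE: each expert is a linear map, and a gating
    network assigns each input a soft distribution over experts."""

    def __init__(self, n_experts, d_in, d_out):
        self.experts = [rng.standard_normal((d_in, d_out)) * 0.1
                        for _ in range(n_experts)]
        self.gate = rng.standard_normal((d_in, n_experts)) * 0.1

    def __call__(self, x):
        # Gate weights: one distribution over experts per input row.
        w = softmax(x @ self.gate)                     # (batch, n_experts)
        # Evaluate every expert, then take the gate-weighted sum.
        outs = np.stack([x @ E for E in self.experts], axis=1)  # (batch, n_experts, d_out)
        return np.einsum("be,bed->bd", w, outs)        # (batch, d_out)

moe = MixtureOfExperts(n_experts=4, d_in=8, d_out=3)
y = moe(rng.standard_normal((5, 8)))
print(y.shape)  # (5, 3)
```

In practice the experts are nonlinear sub-networks and the gate is often made sparse (top-k routing) so each input only activates a few experts; the soft-gated version above keeps the sketch differentiable and short.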
Overall, these innovations are pushing the boundaries of what is possible in computational imaging and machine learning, offering new tools and insights for researchers and practitioners alike.
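As a concrete anchor for the implicit-neural-representation work mentioned above, a SIREN models a signal as an MLP with sine activations applied to input coordinates. The sketch below shows one hidden layer with the standard sine activation and the uniform initialization scaled by the frequency factor `w0`; the class name and parameters are our own illustrative choices, not code from the cited papers.

```python
import numpy as np

class SirenLayer:
    """One SIREN hidden layer: y = sin(w0 * (x @ W + b)).

    The weight bound sqrt(6 / d_in) / w0 follows the commonly used
    SIREN initialization for hidden layers, which keeps pre-activation
    variance stable despite the frequency scaling by w0.
    """

    def __init__(self, d_in, d_out, w0=30.0, seed=0):
        rng = np.random.default_rng(seed)
        bound = np.sqrt(6.0 / d_in) / w0
        self.W = rng.uniform(-bound, bound, (d_in, d_out))
        self.b = np.zeros(d_out)
        self.w0 = w0

    def __call__(self, x):
        # Sine activation gives the network high-frequency capacity,
        # which is why SIRENs fit fine image detail well.
        return np.sin(self.w0 * (x @ self.W + self.b))

# Usage: map 1-D coordinates in [-1, 1] to a 16-dim feature vector.
coords = np.linspace(-1.0, 1.0, 4).reshape(-1, 1)
features = SirenLayer(d_in=1, d_out=16)(coords)
print(features.shape)  # (4, 16)
```

Stacking such layers and regressing pixel values against coordinates yields the image representations whose encoding error the cited SIREN-prediction work studies.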

Sources

Hierarchical Mixture of Experts: Generalizable Learning for High-Level Synthesis

Resolution Enhancement of Under-sampled Photoacoustic Microscopy Images using Implicit Neural Representations

GUMBEL-NERF: Representing Unseen Objects as Part-Compositional Neural Radiance Fields

Informed Deep Abstaining Classifier: Investigating noise-robust training for diagnostic decision support systems

LAMA: Stable Dual-Domain Deep Reconstruction For Sparse-View CT

Super-resolution in disordered media using neural networks

Neural Experts: Mixture of Experts for Implicit Neural Representations

Predicting the Encoding Error of SIRENs

Self-Relaxed Joint Training: Sample Selection for Severity Estimation with Ordinal Noisy Labels

On Regularisation of Coherent Imagery with Proximal Methods

Regularization of Discrete Ill-Conditioned Problems Done Right -- I

Learned RESESOP for solving inverse problems with inexact forward operator

Chasing Better Deep Image Priors between Over- and Under-parameterization
