Low-Light and Underwater Imaging

Report on Current Developments in Low-Light and Underwater Imaging

General Direction of the Field

Recent advances in low-light and underwater imaging center on novel sensor technologies and advanced computational methods that address the challenges inherent to these environments. The field is moving toward integrated, multi-modal approaches in which different types of data (e.g., RGB frames, events, light fields) are combined to improve the robustness and accuracy of image-processing tasks. This trend is particularly evident in frameworks that not only improve image quality but also extend their applicability to downstream tasks such as semantic segmentation and depth estimation.

A key direction is the use of event cameras, whose high temporal resolution and high dynamic range make them well suited to low-light conditions and motion deblurring. Integrating event data with traditional RGB frames is being explored to build systems that remain reliable in complex scenarios such as autonomous driving at night or underwater imaging. In parallel, large-scale, real-world datasets that include event data are becoming crucial for training and evaluating these models, ensuring their effectiveness in practical applications.
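Event-plus-RGB fusion typically begins by binning the asynchronous event stream into a spatio-temporal voxel grid that a network can consume alongside the frame. The sketch below is purely illustrative: the function names, the (t, x, y, polarity) event layout, the bin count, and channel-wise concatenation as the fusion step are assumptions for exposition, not details taken from the cited works.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate polarity-signed events into a spatio-temporal voxel grid.

    events: array of shape (N, 4) with columns (t, x, y, polarity in {-1, +1}).
    Returns an array of shape (num_bins, height, width).
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    # Normalize timestamps to [0, num_bins - 1] and assign each event to a bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    bins = t_norm.astype(int)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3].astype(np.float32)
    # Unbuffered accumulation handles repeated (bin, y, x) indices correctly.
    np.add.at(voxel, (bins, y, x), p)
    return voxel

def fuse_rgb_events(rgb, voxel):
    """Channel-wise concatenation: one simple way to form a multi-modal input."""
    return np.concatenate([rgb.astype(np.float32) / 255.0, voxel], axis=0)
```

Methods such as EvLight++ use learned fusion rather than plain concatenation, but a voxel-grid event representation of this kind is a common starting point.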

Another significant trend is the adoption of 4-D light fields for underwater imaging. This approach exploits the multi-perspective nature of light fields to capture rich geometric information, which is essential for correcting the distortions caused by light absorption and scattering in underwater environments. Combining the explicit and implicit depth cues in light fields is enabling robust methods for underwater image enhancement and depth estimation.
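The explicit depth cues in a 4-D light field come from parallax between its sub-aperture views; a classical way to expose them is shift-and-sum refocusing, where the shift slope that best aligns the views at a pixel encodes its depth. The following is a minimal sketch under simplifying assumptions (integer-pixel shifts via np.roll, a gradient-based focus measure, grayscale views); it illustrates the principle only and is not the method of the cited paper.

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-sum refocusing of a 4-D light field.

    lf: array (U, V, H, W) of grayscale sub-aperture views.
    slope: per-view disparity (in pixels per unit of angular coordinate).
    """
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    acc = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view according to its angular offset from the center.
            dy, dx = int((u - cu) * slope), int((v - cv) * slope)
            acc += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return acc / (U * V)

def depth_from_focus(lf, slopes):
    """Per pixel, pick the slope whose refocused image is locally sharpest."""
    measures = []
    for s in slopes:
        img = refocus(lf, s)
        gy, gx = np.gradient(img)
        # Squared gradient magnitude as a simple focus measure.
        measures.append(gy**2 + gx**2)
    return np.argmax(np.stack(measures), axis=0)
```

Underwater enhancement methods build on cues of this kind, jointly refining the image and the depth estimate so that wavelength-dependent absorption and scattering can be compensated.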

Noteworthy Innovations

  1. NightFormer: Introduces a novel end-to-end approach for night-time semantic segmentation, addressing the unique challenges of low-light conditions with pixel-level texture enhancement and object-level reliable matching.

  2. EvLight++: Proposes a comprehensive event-guided low-light video enhancement method, supported by a large-scale real-world dataset, and demonstrates significant improvements in image quality and downstream task performance.

  3. 4-D Light Field Underwater Imaging: Pioneers the use of 4-D light fields for underwater image enhancement, offering a progressive framework that iteratively optimizes both image quality and depth information, supported by a new dataset.

These innovations represent significant strides in the field, pushing the boundaries of what is possible with current sensor technologies and computational methods.

Sources

Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation

Towards Real-world Event-guided Low-light Video Enhancement and Deblurring

CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring

On the Benefits of Visual Stabilization for Frame- and Event-based Perception

ES-PTAM: Event-based Stereo Parallel Tracking and Mapping

LMT-GP: Combined Latent Mean-Teacher and Gaussian Process for Semi-supervised Low-light Image Enhancement

EvLight++: Low-Light Video Enhancement with an Event Camera: A Large-Scale Real-World Dataset, Novel Method, and More

Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method