Event-based Vision: Advances in Neuromorphic Sensors and Processing

The field of event-based vision is advancing rapidly, driven by neuromorphic sensors and processing techniques inspired by the human retina. Researchers are exploring new ways to improve the efficiency and accuracy of event-based vision systems, including task-specific spatio-temporal retinal kernels and novel representations such as Event2Vec. These innovations are extending event-based vision to a wide range of applications, from high-speed object detection and tracking to 3D reconstruction and depth estimation.

Notable papers in this area include Neural Ganglion Sensors, which demonstrated improved performance on video interpolation and optical flow tasks, and DERD-Net, which achieved state-of-the-art results on event-based depth estimation benchmarks. SaENeRF showed strong results in suppressing artifacts in event-based Neural Radiance Fields, while EHGCN introduced a novel approach that perceives event streams in both Euclidean and hyperbolic spaces.
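For readers unfamiliar with event cameras: instead of frames, they emit a sparse stream of (x, y, timestamp, polarity) events wherever per-pixel brightness changes. Many of the methods above start by converting this stream into a dense representation. The sketch below is a minimal, hypothetical illustration of one of the simplest such representations, a signed per-pixel event-count frame; it is not the representation used by any specific paper listed here.

```python
import numpy as np

def events_to_count_frame(events, height, width):
    """Accumulate a stream of (x, y, t, polarity) events into a signed
    per-pixel count image -- one of the simplest event representations.
    ON events (polarity > 0) add +1, OFF events add -1."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, _t, polarity in events:
        frame[y, x] += 1 if polarity > 0 else -1
    return frame

# Hypothetical toy stream: (x, y, timestamp_seconds, polarity)
events = [(1, 0, 0.001, 1), (1, 0, 0.002, 1), (2, 1, 0.003, -1)]
frame = events_to_count_frame(events, height=2, width=3)
# frame[0, 1] == 2 (two ON events); frame[1, 2] == -1 (one OFF event)
```

Richer representations (time surfaces, voxel grids, or learned embeddings such as Event2Vec) keep more of the temporal structure that this simple count image discards.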

Sources

Neural Ganglion Sensors: Learning Task-specific Event Cameras Inspired by the Neural Circuit of the Human Retina

Zebrafish Counting Using Event Stream Data

Event2Vec: Processing neuromorphic events directly by representations in vector space

DERD-Net: Learning Depth from Event-based Ray Densities

SaENeRF: Suppressing Artifacts in Event-based Neural Radiance Fields

EHGCN: Hierarchical Euclidean-Hyperbolic Fusion via Motion-Aware GCN for Hybrid Event Stream Perception

Dual-Camera All-in-Focus Neural Radiance Fields
