Advances in Event-Based Vision

The field of event-based vision is rapidly advancing, with a growing focus on novel methods for processing and analyzing event-based data. Recent research has explored the intersection of event cameras and deep learning, particularly spiking neural networks, to improve the accuracy and efficiency of event-based vision systems. New frameworks and datasets have also been proposed to address the challenges of handling long-term temporal information and adapting to real-world environmental fluctuations. Noteworthy papers include Temporal-Guided Spiking Neural Networks, which introduces two frameworks for handling long-term temporal information in event-based human action recognition, and FUSE, which proposes a label-free image-event joint monocular depth estimation method that achieves state-of-the-art performance and shows strong zero-shot adaptability to challenging scenarios.
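To make the data these papers operate on concrete: an event camera emits a sparse stream of (x, y, t, polarity) tuples rather than frames, and a common preprocessing step before feeding a deep network is to accumulate the stream into a time-binned voxel grid. The sketch below is an illustrative, simplified version of that standard step (the function name, array layout, and synthetic events are assumptions for the example, not taken from any of the listed papers):

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a time-binned voxel grid.

    `events` is an (N, 4) array of (x, y, t, polarity) rows, a common
    raw format for event-camera data; polarity is +1 or -1.
    This is a simplified sketch: real pipelines often interpolate
    events bilinearly across neighboring temporal bins.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    # Normalize timestamps to [0, num_bins) so each event lands in one bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1e-6)
    bins = t_norm.astype(int)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3]
    # Unbuffered scatter-add: repeated (bin, y, x) indices accumulate.
    np.add.at(grid, (bins, y, x), p)
    return grid

# Example: five synthetic events on a 4x4 sensor, two temporal bins.
ev = np.array([
    [0, 0, 0.00,  1],
    [1, 0, 0.01, -1],
    [2, 1, 0.05,  1],
    [3, 3, 0.09,  1],
    [3, 3, 0.10,  1],
])
vox = events_to_voxel_grid(ev, num_bins=2, height=4, width=4)
print(vox.shape)  # (2, 4, 4)
```

The resulting dense (bins, H, W) tensor is what frame-based architectures consume; spiking networks, by contrast, can process the events' timing directly, which is part of their appeal for this data.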

Sources

Temporal-Guided Spiking Neural Networks for Event-Based Human Action Recognition

Unsupervised Joint Learning of Optical Flow and Intensity with Event Cameras

Event-Based Crossing Dataset (EBCD)

PS-EIP: Robust Photometric Stereo Based on Event Interval Profile

FUSE: Label-Free Image-Event Joint Monocular Depth Estimation via Frequency-Decoupled Alignment and Degradation-Robust Fusion

A Survey on Event-driven 3D Reconstruction: Development under Different Categories

EventFly: Event Camera Perception from Ground to the Sky
