Video semantic segmentation (VSS) and event-based vision have both advanced rapidly in recent years, particularly in challenging environments such as low-light scenes. Researchers increasingly exploit event cameras, whose high dynamic range and fine-grained temporal resolution make them robust to poor illumination and well suited to capturing motion dynamics, to enhance VSS performance. Innovations include lightweight frameworks that fuse event data with conventional image features into illumination-invariant representations (sketched below), improving both segmentation accuracy and temporal consistency. In parallel, there is growing interest in autonomously tuning event camera settings (e.g., contrast-sensitivity thresholds) to adapt to varying lighting conditions, which improves the reliability of event-based pipelines in real-world deployments. Furthermore, memory-efficient video object segmentation techniques now enable processing of higher-resolution and longer videos than memory and compute budgets previously allowed. Together, these developments push the boundaries of what is achievable in VSS and event-based vision, making both more practical and effective for diverse applications.
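
To make the fusion idea concrete, the following is a minimal PyTorch sketch of one plausible way to combine image features with event voxel-grid features into a representation that degrades gracefully in low light. The module name `EventImageFusion`, the voxel-grid input format, the channel sizes, and the gating scheme are illustrative assumptions, not the implementation of any particular method surveyed here.

```python
# A minimal sketch (not any specific published method) of fusing event-camera
# voxel-grid features with RGB image features into a representation that is
# less sensitive to illumination. Module names, channel sizes, and the gating
# scheme are illustrative assumptions.
import torch
import torch.nn as nn


class EventImageFusion(nn.Module):
    """Fuse image features with event features via a learned per-pixel gate.

    Events are assumed to arrive as a voxel grid of shape (B, bins, H, W).
    Because event cameras respond to log-intensity *changes* rather than
    absolute brightness, their features are comparatively stable under
    global illumination shifts, so the gate can lean on them when image
    features degrade (e.g., in low light).
    """

    def __init__(self, event_bins: int = 5, channels: int = 64):
        super().__init__()
        # Encode the event voxel grid to the same width as the image features.
        self.event_encoder = nn.Sequential(
            nn.Conv2d(event_bins, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Predict a per-pixel, per-channel gate from the concatenated features:
        # gate near 1 -> trust image features, gate near 0 -> trust event features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, image_feat: torch.Tensor, event_voxels: torch.Tensor) -> torch.Tensor:
        event_feat = self.event_encoder(event_voxels)
        g = self.gate(torch.cat([image_feat, event_feat], dim=1))
        # Convex combination: falls back to event features where the gate is low.
        return g * image_feat + (1.0 - g) * event_feat


if __name__ == "__main__":
    # Dummy inputs: 64-channel backbone features from an RGB frame and a
    # 5-bin event voxel grid accumulated over the same time window.
    fusion = EventImageFusion(event_bins=5, channels=64)
    img = torch.randn(2, 64, 128, 160)
    evt = torch.randn(2, 5, 128, 160)
    fused = fusion(img, evt)
    print(fused.shape)  # torch.Size([2, 64, 128, 160])
```

A learned gate of this kind captures the intuition behind the illumination-invariant representations described above: weight shifts toward event features exactly where image features become unreliable, while well-lit regions continue to benefit from the richer appearance cues of the RGB stream.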