Recent developments in video-based AI research are pushing the boundaries of multimodal understanding and interaction. One notable shift is toward richer, more detailed responses in visual question answering (VQA): rather than returning simplistic answers, systems are increasingly expected to provide detailed explanations, which is crucial for educational and instructional content. Grounded video captioning is also gaining traction, where captions are not only generated but also linked to the specific objects in the video they describe, improving the interpretability and accuracy of the resulting descriptions. The field is likewise seeing progress in evaluating video data quality, with new methods introduced to assess the reliability and efficiency of video QA datasets. Using natural language to guide the selection of the most informative viewpoints in multi-view video is emerging as another promising direction. Finally, benchmarks that test advanced cognitive abilities in video-based tasks, particularly those involving symbolic and abstract concepts, are highlighting the current limitations of large video-language models and setting the stage for future innovations in this domain.
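To make the notion of grounded video captioning more concrete, the sketch below shows one plausible shape such an annotation could take: each phrase in a caption is linked to the spatio-temporal object tracks it refers to. This is a minimal, hypothetical schema written for illustration only; the class names, fields, and example values are assumptions and do not correspond to any specific dataset or paper discussed above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical schema for a grounded video caption (illustrative only).

@dataclass
class ObjectTrack:
    """A single object's spatio-temporal location in the video."""
    object_id: int
    # (frame_index, (x1, y1, x2, y2)) bounding boxes, one per frame the object appears in
    boxes: List[Tuple[int, Tuple[float, float, float, float]]] = field(default_factory=list)

@dataclass
class GroundedPhrase:
    """A caption span linked to the object track(s) it refers to."""
    text: str                   # e.g. "the red mug"
    char_span: Tuple[int, int]  # character offsets of the phrase within the caption
    object_ids: List[int]       # which ObjectTracks the phrase is grounded to

@dataclass
class GroundedCaption:
    caption: str
    objects: List[ObjectTrack]
    phrases: List[GroundedPhrase]

# Minimal usage example with made-up boxes and frame indices.
example = GroundedCaption(
    caption="A person picks up the red mug from the table.",
    objects=[
        ObjectTrack(object_id=0, boxes=[(12, (100.0, 80.0, 180.0, 260.0))]),   # person
        ObjectTrack(object_id=1, boxes=[(12, (210.0, 150.0, 250.0, 200.0))]),  # mug
    ],
    phrases=[
        GroundedPhrase(text="A person", char_span=(0, 8), object_ids=[0]),
        GroundedPhrase(text="the red mug", char_span=(18, 29), object_ids=[1]),
    ],
)
```

A structure like this makes a caption checkable against the video: every referring phrase can be traced back to concrete boxes in specific frames rather than standing as free-floating text.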