The field of multimodal models is moving toward open-world understanding, in which models classify and interpret images and text without being restricted to a fixed set of predefined categories. Recent research evaluates how large multimodal models (LMMs) perform in such open-world settings, highlighting persistent difficulties with label granularity and fine-grained recognition. Two related directions are also being explored: classifying images directly through natural language rather than by scoring a closed label set, and integrating spatial and temporal information into multimodal reasoning. Noteworthy papers in this area include On Large Multimodal Models as Open-World Image Classifiers, which thoroughly evaluates LMM classification performance in an open-world setting; STI-Bench, which introduces a benchmark for spatial-temporal understanding; and XLRS-Bench, which provides a comprehensive benchmark for evaluating the perception and reasoning capabilities of LMMs in ultra-high-resolution remote sensing scenarios.
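
As a rough illustration of natural-language (open-vocabulary) classification, the sketch below prompts an off-the-shelf LMM to name the main object in an image as free-form text instead of choosing from a predefined label list. The model ID, prompt wording, and image path are illustrative assumptions, not the evaluation protocol of the papers cited above.

```python
# Sketch: open-vocabulary image classification by prompting an LMM (assumed setup).
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed model; any chat-capable LMM would do
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Free-form classification prompt: no candidate label set is provided.
prompt = (
    "USER: <image>\n"
    "What is the main object in this photo? Answer with one or two words. ASSISTANT:"
)
image = Image.open("example.jpg")  # placeholder path

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=10)
decoded = processor.decode(output_ids[0], skip_special_tokens=True)

# Keep only the model's answer, i.e. the text after the assistant turn.
predicted_label = decoded.split("ASSISTANT:")[-1].strip()
print(predicted_label)
```

Because the prediction is free-form text, scoring it against a ground-truth class requires deciding how strictly to match (exact string, synonym, or a coarser category), which is precisely where the granularity and fine-grained challenges discussed above arise.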