The field of multimodal processing in vision-language models is advancing rapidly, with a focus on improving efficiency and reducing computational cost. Recent work shows that state-of-the-art performance can be maintained while the number of visual tokens is cut substantially, using methods such as adaptive token reduction, the incorporation of part knowledge, and token pruning. These approaches retain only the most informative visual tokens, yielding faster inference and lower memory requirements. Proposed frameworks for efficient multimodal processing range from continuous VAEs used for discrete tokenization to dynamic pyramid networks for hierarchical visual feature compression. Overall, the field is moving toward more efficient and scalable multimodal processing solutions. Noteworthy papers include: CODA, which achieves 100% codebook utilization together with a notable reconstruction FID, and InternVL-X, which outperforms the InternVL baseline in both performance and efficiency by combining three visual token compression methods.
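
As a rough illustration of the token-pruning idea shared by these approaches, the sketch below keeps only the top-k visual tokens ranked by an importance score (here, an attention-style score against a single summary vector such as a CLS token). The function name, scoring heuristic, and keep ratio are illustrative assumptions, not the method of any specific paper mentioned above.

```python
import torch

def prune_visual_tokens(visual_tokens: torch.Tensor,
                        query: torch.Tensor,
                        keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep only the most informative visual tokens (illustrative sketch).

    Scores each token by its attention weight w.r.t. a single query
    vector (e.g. a CLS token) and keeps the top `keep_ratio` fraction.
    Real methods (adaptive reduction, pyramid compression, etc.) use
    more elaborate criteria.

    visual_tokens: (B, N, D) patch embeddings from the vision encoder
    query:         (B, D)    summary vector used to score tokens
    """
    B, N, D = visual_tokens.shape
    # Attention-style importance scores, one per token: (B, N)
    scores = torch.einsum("bnd,bd->bn", visual_tokens, query) / D ** 0.5
    scores = scores.softmax(dim=-1)

    k = max(1, int(N * keep_ratio))
    top_idx = scores.topk(k, dim=-1).indices  # (B, k)
    # Gather the selected tokens, preserving the feature dimension
    kept = torch.gather(visual_tokens, 1,
                        top_idx.unsqueeze(-1).expand(-1, -1, D))
    return kept  # (B, k, D), passed to the language model instead of all N tokens

if __name__ == "__main__":
    tokens = torch.randn(2, 576, 1024)  # e.g. 24x24 patches from a ViT
    cls = torch.randn(2, 1024)
    pruned = prune_visual_tokens(tokens, cls, keep_ratio=0.25)
    print(pruned.shape)  # torch.Size([2, 144, 1024])
```

Feeding only the kept tokens to the language model is what shortens the multimodal sequence and thus reduces inference time and memory; the published methods differ mainly in how the importance scores and keep budget are determined.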