Research at the intersection of computer vision and architectural design is advancing rapidly with the integration of vision-language models (VLMs). Researchers are leveraging VLMs to build video retrieval systems that refine queries adaptively, improving retrieval accuracy. VLMs are also being applied to architectural design, where they support efficient search and recommendation of design case studies. Noteworthy papers in this area include:
- Enhancing Subsequent Video Retrieval via Vision-Language Models, which introduces a novel framework for adaptive video retrieval.
- ArchSeek, an innovative case study search system that enables text and image queries with fine-grained control.
- ArchCAD-400K, a large-scale CAD dataset with greater drawing diversity and broader categories, offering line-grained annotations.
- ViSketch-GPT, a novel algorithm that captures intricate details at multiple scales for sketch recognition and generation.
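To make the idea of adaptive query refinement concrete, here is a minimal, hypothetical sketch (not drawn from any of the papers above): a retrieval loop that re-ranks video captions and feeds terms from the top hit back into the query. A real system would use VLM embeddings of queries and video frames; toy bag-of-words vectors stand in here so the control flow is runnable.

```python
def embed(text):
    """Toy stand-in for a VLM text/video encoder: bag-of-words counts."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def similarity(a, b):
    """Cosine similarity over sparse count dicts."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, captions, rounds=2):
    """Rank captions by similarity to the query; after each round,
    refine the query with the top hit (pseudo-relevance feedback)."""
    ranked = captions
    for _ in range(rounds):
        ranked = sorted(captions,
                        key=lambda c: similarity(embed(query), embed(c)),
                        reverse=True)
        query = query + " " + ranked[0]  # adaptive refinement step
    return ranked

captions = [
    "a dog catching a frisbee in the park",
    "timelapse of a building facade at sunset",
    "architect sketching a facade design on paper",
]
print(retrieve("facade design", captions)[0])
# → architect sketching a facade design on paper
```

The refinement step is the adaptive part: each round the query absorbs vocabulary from its best match, pulling subsequent rankings toward the user's evolving intent.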