Recent advances in vision-language models (VLMs) have significantly improved the interpretability and controllability of text embeddings, particularly for zero-shot learning and out-of-distribution detection. Innovations such as semantic token reweighting and conjugated semantic pools refine the text encoding process, allowing finer control over which concepts a prompt emphasizes and yielding better performance on tasks such as few-shot image classification and image retrieval. In addition, integrating large language models (LLMs) with visual question answering systems has enabled more sophisticated zero-shot visual concept learning, narrowing the gap between human-like reasoning and machine cognition. These developments improve the accuracy and reliability of VLMs and pave the way for more explainable AI systems, which are crucial for real-world applications. Notably, multi-granularity semantic-visual adaptation networks address the challenges of attribute diversity and instance diversity in generalized zero-shot learning, further advancing the field. Overall, the research trajectory points toward more controllable and interpretable models that can better relate visual and textual data.
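
To make the emphasis-control idea concrete, here is a minimal sketch of per-token reweighting on a CLIP-style text encoder, assuming the Hugging Face `transformers` implementation of CLIP. The weighted-mean pooling, the helper name `reweighted_text_embedding`, and the 2.0 emphasis weight are illustrative assumptions for this sketch, not the specific reweighting scheme proposed in the works summarized above.

```python
# Illustrative sketch: controlling semantic emphasis by reweighting the
# per-token hidden states of a CLIP text encoder before pooling.
# Assumptions: Hugging Face transformers CLIP checkpoint, weighted-mean
# pooling in place of CLIP's default EOS pooling, and a hand-chosen weight.
import torch
from transformers import CLIPTokenizer, CLIPTextModel

MODEL_ID = "openai/clip-vit-base-patch32"  # assumed checkpoint for the example
tokenizer = CLIPTokenizer.from_pretrained(MODEL_ID)
text_encoder = CLIPTextModel.from_pretrained(MODEL_ID).eval()


def reweighted_text_embedding(prompt: str, emphasis: dict[str, float]) -> torch.Tensor:
    """Encode `prompt`, scaling the contribution of tokens whose surface
    form starts with a key in `emphasis`, then pool to a single embedding."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = text_encoder(**inputs).last_hidden_state[0]  # (seq_len, dim)

    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # Per-token weights: default 1.0, boosted for emphasized words.
    weights = torch.tensor(
        [next((w for k, w in emphasis.items() if t.startswith(k)), 1.0) for t in tokens]
    )
    # Weighted mean pooling so emphasized tokens contribute more to the
    # final sentence embedding.
    pooled = (hidden * weights.unsqueeze(-1)).sum(dim=0) / weights.sum()
    return pooled / pooled.norm()  # unit-normalize for cosine-similarity retrieval


# Example: emphasize the color attribute when building a retrieval query.
emb = reweighted_text_embedding("a photo of a red vintage car", {"red": 2.0})
print(emb.shape)  # torch.Size([512])
```

In this toy setup, raising the weight on "red" shifts the pooled embedding toward the color attribute, so nearest-neighbor image retrieval with the resulting vector favors red cars over vintage cars in general; the same mechanism can be read as a simple form of interpretable, user-controllable emphasis in the text embedding.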