Recent advancements in large language models (LLMs) and vision-language models (VLMs) have increasingly focused on transparency, trustworthiness, and accuracy. A notable trend is the emphasis on reporting train-test overlap so that model evaluations can be trusted: publishing clear metrics on how much evaluation data appears in training corpora gives the community a concrete basis for interpreting reported results (a simplified illustration of such an overlap check is sketched at the end of this section). Relatedly, there is growing interest in frameworks that audit and improve model trustworthiness, particularly in coding and development contexts, by aligning how training and evaluation data are handled rather than treating them as separate concerns.

Another significant development is the automation of test case generation for multimodal models, which helps identify and mitigate visual hallucinations. Methods such as VHExpansion both expand existing test cases and introduce unbiased evaluation metrics. The study of sycophancy in VLMs has also gained traction, with new benchmarks and mitigation strategies proposed to address the tendency of models to defer to a user's stated beliefs.

The integration of LLMs into educational technologies is advancing as well, with models fine-tuned to simulate student cognition, including misconceptions, a capability that is important for personalized learning systems. Finally, the field is shifting toward more programmatic and transparent evaluation methods for VLMs, aiming to ensure that responses are both helpful and truthful. Together, these developments push the boundaries of model reliability and applicability across domains.
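To make the train-test overlap idea concrete, the following is a minimal sketch of a contamination check based on shared word-level n-grams. It is an illustrative assumption, not the methodology of any particular paper cited here; real audits typically operate on much larger corpora, use hashing or suffix structures for efficiency, and report overlap per benchmark.

```python
from typing import Iterable, Set, Tuple


def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Return the set of word-level n-grams in a text (lowercased)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def train_test_overlap(train_docs: Iterable[str],
                       test_docs: Iterable[str],
                       n: int = 8) -> float:
    """Fraction of test documents that share at least one n-gram with the training corpus.

    This is a simplified, illustrative metric for train-test overlap reporting.
    """
    train_grams: Set[Tuple[str, ...]] = set()
    for doc in train_docs:
        train_grams |= ngrams(doc, n)

    test_list = list(test_docs)
    if not test_list:
        return 0.0
    contaminated = sum(1 for doc in test_list if ngrams(doc, n) & train_grams)
    return contaminated / len(test_list)


if __name__ == "__main__":
    # Hypothetical toy corpora for demonstration only.
    train = ["the quick brown fox jumps over the lazy dog near the river bank today"]
    test = [
        "the quick brown fox jumps over the lazy dog near the river bank today",
        "completely unrelated evaluation question about protein folding dynamics",
    ]
    print(f"Estimated train-test overlap: {train_test_overlap(train, test):.2%}")
```

Reporting a number like this alongside benchmark scores is what allows readers to judge whether strong evaluation results might partly reflect memorized test items rather than genuine capability.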