Recent advances in multimodal large language models (MLLMs) and large vision-language models (LVLMs) have significantly improved how these systems integrate and reason over visual and linguistic inputs. One notable trend is the development of more sophisticated influence functions and contrastive learning techniques to address misalignment and hallucination in MLLMs: by quantifying the impact of individual training examples on model behavior and alignment, these methods aim to make models more transparent and interpretable. The Extended Influence Function for Contrastive Loss (ECIF) stands out here for adapting influence analysis to contrastive objectives (the classical formulation it extends is sketched below), while the Visual Inference Chain (VIC) framework takes an innovative approach to enhancing multimodal reasoning accuracy and reducing hallucinations.

There is also a growing focus on accessibility. Frameworks such as DexAssist leverage a dual-LLM system for more reliable web navigation, yielding a significant increase in navigation accuracy for individuals with motor impairments.

A third critical line of work targets hallucination and language bias in LVLMs. CATCH introduces a novel decoding strategy and LACING a dual-attention mechanism, both designed to strengthen visual comprehension and reduce over-reliance on the language prior (a generic sketch of the contrastive-decoding idea follows below). Beyond raw performance, these advances broaden the applicability of MLLMs and LVLMs in critical domains such as healthcare and autonomous systems.
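For context, the classical influence function that ECIF extends estimates how up-weighting a single training point z changes the loss at a test point z_test. The formulation below is the standard one popularized by Koh and Liang (2017); it is not ECIF's contrastive-loss extension, and the notation is ours.

```latex
% Influence of up-weighting training point z on the loss at z_test
% (classical formulation; ECIF's contrastive-loss extension is not shown).
\mathcal{I}(z, z_{\mathrm{test}})
  = -\nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top}
     H_{\hat{\theta}}^{-1}\,
     \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta})
```

Here the trained parameters and the empirical Hessian of the training loss determine the score; large-magnitude influence values flag the training examples whose presence most changes a given prediction, which is exactly the kind of data-impact assessment described above.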
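On the decoding side, hallucination-mitigation strategies in this family often share a contrastive core: compare next-token logits computed with the real image against logits computed with a degraded image, and boost tokens whose likelihood genuinely depends on the visual evidence. The sketch below illustrates only that general idea; the model call signature, the helper name, and the hyperparameters alpha and beta are illustrative assumptions, not the published CATCH or LACING algorithms.

```python
import torch

def contrastive_next_token_logits(model, input_ids, image, degraded_image,
                                  alpha=1.0, beta=0.1):
    """Generic visual contrastive decoding step (illustrative sketch).

    Assumes `model(input_ids=..., image=...)` is the LVLM forward pass and
    returns an object with `.logits` of shape (batch, seq_len, vocab).
    """
    with torch.no_grad():
        logits_vis = model(input_ids=input_ids, image=image).logits[:, -1, :]
        logits_deg = model(input_ids=input_ids, image=degraded_image).logits[:, -1, :]

    # Amplify evidence that depends on the true image; tokens driven purely
    # by the language prior are pushed down.
    contrastive = (1 + alpha) * logits_vis - alpha * logits_deg

    # Plausibility constraint: keep only tokens the full-image model already
    # rates as reasonably likely, so the contrast cannot produce gibberish.
    probs_vis = logits_vis.softmax(dim=-1)
    cutoff = beta * probs_vis.max(dim=-1, keepdim=True).values
    return contrastive.masked_fill(probs_vis < cutoff, float("-inf"))
```

Greedy or sampled decoding then proceeds from the returned logits as usual; the degraded image (e.g., heavily blurred or noised) serves as a stand-in for what the model would generate from its language prior alone.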