Recent advances in multimodal large language models (MLLMs) have focused on enhancing reliability, robustness, and accountability. Innovations in prompting, such as Multi-expert Prompting, have improved the truthfulness and usefulness of model responses while reducing toxicity and hurtfulness. There is also growing emphasis on evaluating and mitigating response uncertainty under misleading scenarios, with benchmarks like MUB developed to assess and strengthen model robustness. Data contamination in MLLMs has likewise gained attention, with frameworks such as MM-Detect introduced to detect and mitigate contamination across different training phases. The field is also expanding into multilingual settings, with datasets like ML-Promise created to verify corporate promises, particularly in ESG reports, fostering accountability and transparency. Together, these developments make MLLMs more reliable, robust, and applicable in diverse, real-world scenarios.
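To make the prompting idea concrete, the sketch below shows one way a multi-expert prompting loop could be structured: the model is asked to propose expert roles, each role answers the question independently, and a final call aggregates the answers. This is a minimal illustrative sketch, not the exact procedure from the Multi-expert Prompting work; the `complete` helper, the persona-selection prompt, and the aggregation wording are all assumptions.

```python
# Minimal sketch of a multi-expert prompting loop.
# Assumptions (not from the source text): a `complete(prompt: str) -> str`
# helper that wraps whatever LLM API is in use, and illustrative prompt wording.

from typing import Callable, List


def multi_expert_answer(question: str,
                        complete: Callable[[str], str],
                        n_experts: int = 3) -> str:
    """Simulate several experts with one model, then aggregate their answers."""
    # 1. Ask the model to propose expert roles suited to the question.
    roles_prompt = (
        f"List {n_experts} distinct expert roles best suited to answer the "
        f"following question, one per line, with no extra text.\n"
        f"Question: {question}"
    )
    roles: List[str] = [
        r.strip() for r in complete(roles_prompt).splitlines() if r.strip()
    ][:n_experts]

    # 2. Collect one independent answer per simulated expert.
    expert_answers = []
    for role in roles:
        answer = complete(
            f"You are {role}. Answer concisely and truthfully.\n"
            f"Question: {question}"
        )
        expert_answers.append(f"{role}: {answer}")

    # 3. Aggregate the expert answers into a single response, flagging
    #    any points on which the experts disagree rather than guessing.
    aggregation_prompt = (
        "Combine the expert answers below into a single, truthful response. "
        "If the experts disagree, note the disagreement explicitly.\n\n"
        + "\n\n".join(expert_answers)
        + f"\n\nQuestion: {question}\nFinal answer:"
    )
    return complete(aggregation_prompt)
```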