Recent developments in large language models (LLMs) and multimodal large language models (MLLMs) show a marked shift toward enhancing reasoning and decision-making through novel prompting strategies. A notable trend is the integration of visual and textual information during reasoning, exemplified by image-incorporated multimodal Chain-of-Thought (CoT) methods. These methods aim to strengthen the fine-grained associations between visual inputs and textual outputs, thereby improving both the interpretability and the accuracy of model responses.

In parallel, prompt engineering is increasingly evaluated on cost as well as accuracy, with novel metrics such as the Economical Prompting Index (EPI) proposed to quantify the cost-effectiveness of a prompting strategy.

The field is also advancing zero-shot and few-shot learning, designing models that perform unfamiliar tasks without prior task-specific experience by mimicking human thought processes. Furthermore, cognitive processes such as association and counterfactual thinking are being leveraged to construct thought trees that facilitate complex problem-solving. Collectively, these developments point toward more sophisticated, context-aware models that can handle real-world scenarios more effectively.
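To make the cost-versus-accuracy trade-off concrete, the sketch below compares prompting strategies with a simple cost-discounted accuracy score. This is a hypothetical stand-in, not the published EPI definition: the formula, the `alpha` parameter, and the accuracy/token figures are all illustrative assumptions, chosen only to show how a metric of this kind would rank strategies.

```python
def cost_adjusted_score(accuracy: float, prompt_tokens: int,
                        alpha: float = 1e-4) -> float:
    """Toy cost-effectiveness score: accuracy discounted by token cost.

    NOT the published EPI formula -- a hypothetical illustration. `alpha`
    controls how strongly extra tokens penalize the score.
    """
    return accuracy / (1.0 + alpha * prompt_tokens)

# Illustrative (made-up) accuracy / average-token figures per strategy.
strategies = {
    "zero-shot": (0.72, 50),
    "chain-of-thought": (0.81, 600),
    "tree-of-thought": (0.83, 4000),
}

for name, (acc, tokens) in strategies.items():
    print(f"{name}: {cost_adjusted_score(acc, tokens):.3f}")
```

Under these invented numbers, the mid-cost chain-of-thought strategy scores highest: its accuracy gain over zero-shot outweighs its token cost, while the far more expensive tree-of-thought search does not. That inversion of a raw-accuracy ranking is exactly the kind of effect a cost-aware metric is meant to expose.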