Research on text summarization and review workflows is advancing rapidly, driven largely by the integration of Large Language Models (LLMs). One notable trend is the use of LLMs to assist academic peer review, where AI-generated annotations aim to improve efficiency. Platforms such as AnnotateGPT apply LLMs to specific review tasks, both streamlining the process and helping reviewers stay focused on the substance of a manuscript.

Another emerging area is summarization of long texts, such as Chinese novels, where traditional methods struggle with segmentation and readability. Newer approaches combine unsupervised frameworks with LLMs to generate coherent, accurate outlines, addressing the limitations of existing deep-learning models.

There is also growing attention to low-resource languages. For Thai, models such as CHIMA incorporate headline-guided extractive summarization, using the headline as a relevance signal to improve the quality of summaries. Comparative literature summarization is advancing as well: methods such as ChatCite use reflective incremental mechanisms to provide deeper comparative insights, strengthening the resulting literature reviews.

Together, these developments mark a shift toward more sophisticated, context-aware summarization techniques, driven by the capabilities of LLMs and by approaches tailored to specific languages and text types.