The field of natural language processing is seeing significant advances in the detection of machine-generated text and in the design of more sophisticated decoding strategies for large language models. Researchers are exploring new detection methods, including approaches that target defects in decoding strategies and approaches that leverage human feedback to improve the quality and detectability of generated text. A key challenge is that large language models are increasingly able to mimic human writing, making human and machine-generated text harder to distinguish. In response, innovative detection methods are being proposed, including some that are agnostic to the generating language model and some that analyze the distortion introduced by local normalization in decoding strategies. These developments have important implications both for the future design of decoding algorithms and for the detection of machine-generated text.

Noteworthy papers in this area include "Understanding the Effects of RLHF on the Quality and Detectability of LLM-Generated Texts", which investigates the impact of reinforcement learning from human feedback on the quality and detectability of generated texts, and "TempTest: Local Normalization Distortion and the Detection of Machine-generated Text", which proposes a detection method targeting the local normalization distortion introduced by decoding strategies.
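To make the local-normalization idea concrete, the sketch below shows how nucleus (top-p) sampling truncates a next-token distribution and renormalizes the surviving mass. This is a minimal illustration of the general phenomenon, not the TempTest method itself; the toy five-token distribution and the `nucleus_renormalize` helper are illustrative assumptions.

```python
import math

# Hypothetical toy next-token distribution over a 5-token vocabulary
# (an assumption for illustration; real LLM vocabularies are far larger).
probs = {"the": 0.40, "a": 0.25, "cat": 0.20, "dog": 0.10, "xyz": 0.05}

def nucleus_renormalize(probs, top_p=0.8):
    """Keep the smallest set of highest-probability tokens whose mass
    reaches top_p, then renormalize -- the 'local normalization' step
    of nucleus (top-p) sampling."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    return {tok: p / mass for tok, p in kept}

truncated = nucleus_renormalize(probs, top_p=0.8)

# The distortion: excluded tokens drop to probability zero, while every
# kept token's probability is inflated by the same factor 1/mass. That
# factor depends on the context, so the mismatch between the full-model
# probability and the locally renormalized one leaves a statistical
# signature that a detector can score.
for tok, q in truncated.items():
    log_ratio = math.log(q) - math.log(probs[tok])
    print(f"{tok}: full={probs[tok]:.2f} truncated={q:.4f} log-ratio={log_ratio:.4f}")
```

Running the sketch, the tokens "dog" and "xyz" fall outside the nucleus and receive zero probability, while the three kept tokens are uniformly rescaled; it is this context-dependent rescaling that detection methods of the kind described above aim to exploit.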