The field of medical image analysis is moving toward more sophisticated and robust techniques for image registration, segmentation, and denoising. Researchers are exploring novel approaches to the challenges of multi-organ registration, semi-supervised learning, and low-dose CT denoising. Deep learning-based methods, notably transformer-based architectures and generative models, are becoming increasingly popular; they have shown promising results in handling complex deformations, improving segmentation accuracy, and preserving anatomical detail. Overall, the field is advancing toward more accurate, efficient, and clinically relevant image analysis techniques.

Noteworthy papers include:

- MO-CTranS: a unified multi-organ segmentation model that learns from multiple heterogeneously labelled datasets.
- IMPACT: a generic semantic loss for multimodal medical image registration that can be integrated seamlessly into diverse registration frameworks.
- SelfMedHPM: a self-pretraining framework with hard patches mining masked autoencoders for medical image segmentation tasks.
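To give a rough intuition for the hard-patches-mining idea behind SelfMedHPM — preferentially masking the image patches the model currently reconstructs worst, so pretraining focuses on the hardest regions — here is a minimal sketch of the patch-selection step. The function name, the mask ratio, and the use of plain per-patch error arrays are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def select_hard_patches(patch_errors, mask_ratio=0.5):
    """Pick the patches with the highest reconstruction error to mask next.

    patch_errors: 1-D array of per-patch reconstruction losses
                  (illustrative stand-in for the model's loss predictions).
    Returns the sorted indices of the hardest `mask_ratio` fraction of patches.
    """
    n_mask = max(1, int(len(patch_errors) * mask_ratio))
    # Sort descending so the largest-error ("hardest") patches come first.
    hard_idx = np.argsort(patch_errors)[::-1][:n_mask]
    return np.sort(hard_idx)

# Toy example: 6 patches with made-up reconstruction errors.
errors = np.array([0.1, 0.9, 0.4, 0.7, 0.2, 0.05])
print(select_hard_patches(errors, mask_ratio=0.5))  # masks patches 1, 2, 3
```

In an actual pretraining loop, the masked patches would be hidden from the encoder and the autoencoder trained to reconstruct them, with the error estimates refreshed each iteration.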