Medical Image Segmentation

Report on Current Developments in Medical Image Segmentation

General Direction of the Field

The field of medical image segmentation is shifting towards advanced foundation models and novel techniques that improve segmentation accuracy, data efficiency, and generalizability. Recent developments are characterized by the integration of promptable vision foundation models, such as the Segment Anything Model (SAM), and vision-language models, such as CLIP, with specialized adaptations and novel frameworks that address the unique challenges of medical imaging data. These advancements focus in particular on improving segmentation performance in weakly-supervised settings, reducing reliance on extensive manual annotations, and making models more robust to varying image quality and lesion characteristics.

One key trend is the adaptation of foundation models such as SAM for medical imaging, where the models are fine-tuned or augmented to better handle the complexities of medical data. This includes global-local adaptors that refine SAM's representations at both the whole-image and fine-grained levels, as well as the integration of multi-modal information to produce more accurate segmentation masks. There is also growing interest in evidence-guided consistency and hybrid CNN-Mamba frameworks for improving robustness, especially in scenarios with sparse annotations such as scribbles.
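
To make the adapter idea concrete, the sketch below shows one common parameter-efficient pattern: a small trainable bottleneck module wrapped around a frozen pretrained block, so only the adapter weights are updated during medical fine-tuning. This is a minimal PyTorch illustration of the general technique; the module names, dimensions, and placement are assumptions, not the architecture of the cited global-local adaptor.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module added to a frozen transformer block.

    Only the adapter's parameters are updated during fine-tuning, so
    medical-domain adaptation is cheap relative to full fine-tuning.
    """
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # project to low rank
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)     # project back up
        nn.init.zeros_(self.up.weight)           # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual update


class AdaptedBlock(nn.Module):
    """Wraps a frozen pretrained block with a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False              # freeze pretrained weights
        self.adapter = BottleneckAdapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))


# Toy usage: adapt a stand-in "pretrained" block on tokens of width 256.
frozen = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
adapted = AdaptedBlock(frozen, dim=256)
tokens = torch.randn(2, 196, 256)                # (batch, patches, dim)
out = adapted(tokens)
print(out.shape)                                 # torch.Size([2, 196, 256])
```

Because the up-projection is zero-initialized, the adapted block initially behaves exactly like the frozen pretrained block, which keeps early training stable.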

Another notable direction is the automation of prompt learning for foundation models, which aims to reduce the need for manual user interaction and make these models more adaptable to specific medical tasks with minimal supervision. This is achieved by learning prompt embeddings directly from image data, thereby enabling automatic segmentation without the need for handcrafted prompts.
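
A minimal sketch of this prompt-learning pattern is shown below: a small trainable head predicts prompt embeddings from frozen encoder features, standing in for manual points or boxes. The class name, shapes, and pooling strategy are illustrative assumptions rather than MedSAM's actual interface.

```python
import torch
import torch.nn as nn

class PromptPredictor(nn.Module):
    """Predicts a fixed number of prompt embeddings from image features,
    replacing manual point/box prompts (illustrative, not MedSAM's API)."""
    def __init__(self, feat_dim: int = 256, n_prompts: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # global image context
        self.head = nn.Linear(feat_dim, n_prompts * feat_dim)
        self.n_prompts, self.feat_dim = n_prompts, feat_dim

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) from a frozen image encoder
        ctx = self.pool(feats).flatten(1)                 # (B, C)
        prompts = self.head(ctx)                          # (B, n_prompts * C)
        return prompts.view(-1, self.n_prompts, self.feat_dim)

# Toy usage: only the predictor is trained; encoder and decoder stay frozen.
feats = torch.randn(2, 256, 64, 64)   # stand-in encoder output
predictor = PromptPredictor()
prompt_embeddings = predictor(feats)  # would be fed to a frozen mask decoder
print(prompt_embeddings.shape)        # torch.Size([2, 4, 256])
```

Training only this small head against weak labels is what makes the approach feasible with few annotated examples.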

Overall, the field is progressing towards more interactive, data-efficient, and robust segmentation methods that can be applied across diverse medical imaging modalities and tasks, ultimately aiding in clinical diagnosis, disease research, and treatment planning.

Noteworthy Papers

  • Global-Local Medical SAM Adaptor Based on Full Adaption: Introduces a novel global-local adaptor that significantly improves segmentation performance on challenging datasets, outperforming state-of-the-art methods.

  • MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation: Proposes a framework that integrates CLIP and SAM for high-accuracy segmentation using text prompts, demonstrating strong performance across diverse medical imaging modalities.

  • MambaEviScrib: Mamba and Evidence-Guided Consistency Make CNN Work Robustly for Scribble-Based Weakly Supervised Ultrasound Image Segmentation: Combines a CNN with Mamba and evidence-guided consistency to achieve competitive segmentation results on ultrasound datasets with sparse scribble annotations (a generic scribble-supervision loss is sketched after this list).

  • Automating MedSAM by Learning Prompts with Weak Few-Shot Supervision: Develops a method to automate prompt learning for SAM, enabling automatic segmentation with minimal supervision, validated on multiple medical datasets.
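
For the scribble setting mentioned above, a generic weakly-supervised objective combines partial cross-entropy on the annotated pixels with a consistency term between two prediction branches. The sketch below illustrates that general recipe under assumed shapes; it is not the exact loss used in MambaEviScrib.

```python
import torch
import torch.nn.functional as F

def scribble_loss(logits_a, logits_b, scribbles, ignore_index=255):
    """Generic scribble-supervised objective (a sketch, not the paper's
    exact formulation): partial cross-entropy on annotated pixels plus a
    consistency term aligning two branches (e.g., CNN and Mamba outputs).

    logits_a, logits_b: (B, C, H, W) predictions from the two branches.
    scribbles: (B, H, W) labels; unannotated pixels hold ignore_index.
    """
    # Supervised term: cross-entropy only where scribble labels exist.
    pce = F.cross_entropy(logits_a, scribbles, ignore_index=ignore_index) \
        + F.cross_entropy(logits_b, scribbles, ignore_index=ignore_index)

    # Consistency term: encourage the two branches to agree everywhere,
    # propagating supervision beyond the sparse scribbled pixels.
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    consistency = F.mse_loss(p_a, p_b)

    return pce + consistency

# Toy usage with 2 classes; most pixels are unannotated (255).
logits_a = torch.randn(1, 2, 32, 32)
logits_b = torch.randn(1, 2, 32, 32)
scribbles = torch.full((1, 32, 32), 255, dtype=torch.long)
scribbles[0, 10:12, 5:20] = 1      # a thin foreground scribble
loss = scribble_loss(logits_a, logits_b, scribbles)
print(loss.item())
```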

Sources

Global-Local Medical SAM Adaptor Based on Full Adaption

MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation

MambaEviScrib: Mamba and Evidence-Guided Consistency Make CNN Work Robustly for Scribble-Based Weakly Supervised Ultrasound Image Segmentation

Medical Image Segmentation with SAM-generated Annotations

Automating MedSAM by Learning Prompts with Weak Few-Shot Supervision

AI generated annotations for Breast, Brain, Liver, Lungs and Prostate cancer collections in National Cancer Institute Imaging Data Commons
