Data-Efficient Learning for Segmentation and Positioning

Report on Current Developments in Data-Efficient Learning for Segmentation and Positioning

General Direction

The field of data-efficient learning for segmentation and positioning is witnessing a significant shift towards more sophisticated and annotation-efficient methods. Researchers are increasingly focusing on reducing the reliance on extensive manual labeling by leveraging weak supervision, active learning, and innovative data selection techniques. This trend is driven by the need to minimize the cost and time associated with data annotation, particularly in domains like medical imaging and cellular network positioning.

Key Innovations

  1. Active Learning for Data Selection: There is growing interest in active learning strategies that intelligently select the most informative samples for labeling. These methods aim to shrink the dataset required for training high-quality models, reducing both communication overhead and annotation effort. Techniques such as variational adversarial active learning and ranking-based loss prediction are being refined to better exploit unlabeled data and target-task information (a minimal uncertainty-sampling sketch follows this list).

  2. Weak Supervision with Scribble Labels: Scribble-based segmentation methods are gaining traction because they achieve high-quality results with minimal annotation effort. Researchers are developing algorithms to generate and exploit scribble labels across diverse datasets, enabling more robust and scalable weakly supervised segmentation. These methods are particularly promising in medical image segmentation, where annotation costs are high (see the partial cross-entropy sketch after this list).

  3. Annotation-Efficient Strategies: Novel approaches such as Entity-Superpixel Annotation (ESA) enhance annotation efficiency by focusing on key entities within images. These methods leverage strong pre-trained models and structural cues to select a small subset of informative samples, substantially reducing the number of required annotations (a generic superpixel-querying sketch appears below).

  4. Size-Aware and Cross-Shape Scribble Supervision: In medical image segmentation, there is increasing emphasis on methods that handle targets of widely varying scale while keeping annotations consistent. Techniques such as cross-shape scribble annotation and size-aware multi-branch designs aim to improve segmentation performance without sacrificing annotation efficiency (a generic multi-branch sketch closes the examples below).
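
The first point can be made concrete with a minimal sketch of pool-based active learning using entropy-based uncertainty sampling. The model, feature dimension, and query budget below are toy assumptions for illustration, not the acquisition strategy of any paper listed here.

    # Score unlabeled samples by predictive entropy and query the top-k for labeling.
    import torch
    import torch.nn as nn

    def entropy_scores(model, pool, batch_size=256):
        # Per-sample predictive entropy over an unlabeled pool of feature vectors.
        model.eval()
        scores = []
        with torch.no_grad():
            for i in range(0, len(pool), batch_size):
                probs = torch.softmax(model(pool[i:i + batch_size]), dim=-1)
                scores.append(-(probs * probs.clamp_min(1e-8).log()).sum(dim=-1))
        return torch.cat(scores)

    # Toy setup: 1000 unlabeled radio fingerprints, 64 features, 10 location classes.
    pool = torch.randn(1000, 64)
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    # Request labels (e.g. ground-truth positions) for the 50 most uncertain samples.
    query_idx = entropy_scores(model, pool).topk(50).indices

Loss-prediction variants swap the entropy score for a small auxiliary module trained to rank expected losses, but the query loop itself is unchanged.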
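
Scribble supervision (points 2 and 4) typically starts from a partial cross-entropy loss that back-propagates only through annotated pixels. The shapes, the ignore value of 255, and the placement of the strokes below are assumed for the toy example and do not reproduce the cited papers' setups.

    import torch
    import torch.nn as nn

    num_classes, ignore = 4, 255
    criterion = nn.CrossEntropyLoss(ignore_index=ignore)

    # Toy batch: logits for two 64x64 predictions.
    logits = torch.randn(2, num_classes, 64, 64, requires_grad=True)

    # Scribble target: mostly unlabeled (255), plus two annotated strokes forming a cross.
    target = torch.full((2, 64, 64), ignore, dtype=torch.long)
    target[:, 30:34, 10:54] = 1   # horizontal stroke, class 1
    target[:, 10:54, 30:34] = 2   # vertical stroke, class 2

    loss = criterion(logits, target)   # gradients flow only from scribbled pixels
    loss.backward()

Published methods then add regularization, pseudo-labels, or size-aware branches on top of this base loss to propagate supervision into the unlabeled regions.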
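
For region-level querying (point 3), the following is a generic illustration of superpixel-based selection rather than the ESA algorithm itself: per-pixel uncertainty is averaged over SLIC superpixels so that annotation effort is spent on coherent regions instead of scattered pixels. The image, the uncertainty map, and the SLIC parameters are placeholders.

    import numpy as np
    from skimage.segmentation import slic

    image = np.random.rand(128, 128, 3)        # stand-in RGB image
    uncertainty = np.random.rand(128, 128)     # e.g. per-pixel entropy from a segmentation model

    segments = slic(image, n_segments=100, compactness=10, start_label=0)
    region_score = np.array([uncertainty[segments == s].mean()
                             for s in np.unique(segments)])

    # Request labels for only the k most uncertain superpixels.
    k = 5
    query_regions = np.argsort(region_score)[-k:]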
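
Finally, a hedged sketch of how a size-aware, multi-branch head could be wired (point 4): two lightweight decoders operate at different output scales and their predictions are averaged. This is only one generic realization of the idea and is not the architecture of the cited paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoBranchHead(nn.Module):
        def __init__(self, in_ch=32, num_classes=4):
            super().__init__()
            self.fine = nn.Conv2d(in_ch, num_classes, 1)    # full-resolution branch (small targets)
            self.coarse = nn.Conv2d(in_ch, num_classes, 1)  # pooled branch (large targets)

        def forward(self, feats):
            fine = self.fine(feats)
            coarse = F.interpolate(self.coarse(F.avg_pool2d(feats, 4)),
                                   size=fine.shape[-2:], mode="bilinear", align_corners=False)
            return (fine + coarse) / 2                      # simple average fusion

    feats = torch.randn(1, 32, 64, 64)                      # toy encoder features
    out = TwoBranchHead()(feats)                            # shape: (1, 4, 64, 64)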

Noteworthy Papers

  • Active learning for efficient data selection in radio-signal based positioning via deep learning: Introduces a practical active learning approach for positioning, demonstrating clear gains in accuracy alongside a substantial reduction in the required dataset size.
  • Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets: Provides a comprehensive benchmark for scribble-labeled segmentation, offering datasets and algorithms to advance weakly supervised segmentation research.
  • ESA: Annotation-Efficient Active Learning for Semantic Segmentation: Proposes Entity-Superpixel Annotation, which cuts annotation costs and improves segmentation performance while issuing only a small number of annotator queries.
  • Size Aware Cross-shape Scribble Supervision for Medical Image Segmentation: Introduces cross-shape scribble annotation and a size-aware multi-branch method for medical imaging, yielding clear improvements in mDice scores.

These developments highlight the field's progress towards more efficient and scalable data-driven solutions, paving the way for future advancements in segmentation and positioning tasks.

Sources

Active learning for efficient data selection in radio-signal based positioning via deep learning

Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets

Semi-Supervised Variational Adversarial Active Learning via Learning to Rank and Agreement-Based Pseudo Labeling

From Few to More: Scribble-based Medical Image Segmentation via Masked Context Modeling and Continuous Pseudo Labels

ESA: Annotation-Efficient Active Learning for Semantic Segmentation

Size Aware Cross-shape Scribble Supervision for Medical Image Segmentation

Understanding Uncertainty-based Active Learning Under Model Mismatch