Recent developments in domain adaptation and semantic segmentation show a marked shift toward cross-modal learning and multi-granularity representations to improve model adaptability and performance across diverse domains. Fusion techniques such as the proposed fusion-then-distillation methods have demonstrated superior alignment of heterogeneous data modalities, particularly in 3D semantic segmentation. The integration of contrastive learning and context-aware knowledge has been a key advance in unsupervised domain adaptation, improving segmentation accuracy by exploiting intra-domain structure and pixel distributions. In medical image segmentation, adaptive amalgamation frameworks mitigate domain shift by merging knowledge from specialized expert models, showing improved adaptability to the heterogeneity of real-world data. Lightweight, efficient frequency-masking techniques are emerging as promising solutions for cross-domain few-shot segmentation, markedly improving robustness to domain gaps. Finally, analytic continual test-time adaptation methods for multi-modality corruption scenarios address error accumulation and catastrophic forgetting, enabling reliable model adaptation in continuously changing environments.
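To make the distillation half of "fusion-then-distillation" concrete, the following is a minimal sketch of a generic temperature-scaled knowledge-distillation loss (KL divergence between teacher and student class distributions), written in NumPy. It is illustrative only: the function names, the temperature parameter `t`, and the smoothing constant are assumptions, not details of any specific method surveyed above.

```python
import numpy as np

def softmax(logits, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / t
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, t=2.0):
    """Mean KL(teacher || student) over a batch of per-point logits.

    The t*t factor is the standard gradient rescaling used with
    temperature-softened distillation targets.
    """
    p = softmax(teacher_logits, t)  # soft teacher targets
    q = softmax(student_logits, t)  # student predictions
    kl = np.sum(p * (np.log(p + 1e-8) - np.log(q + 1e-8)), axis=-1)
    return float(np.mean(kl) * t * t)

# Toy usage: 4 points, 5 semantic classes.
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(4, 5))
student_logits = rng.normal(size=(4, 5))
loss = distill_loss(student_logits, teacher_logits)
```

In a fusion-then-distillation setting, `teacher_logits` would come from the fused multi-modal branch and `student_logits` from a single-modality branch; here both are random stand-ins.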
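As a flavor of the frequency-masking idea mentioned for cross-domain few-shot segmentation, below is a small sketch that zeroes a central (low-frequency) band of an image's 2D Fourier spectrum. The function name and the `mask_ratio` parameter are hypothetical; specific published methods may mask amplitude only, mask high frequencies instead, or sample the band randomly.

```python
import numpy as np

def frequency_mask(image, mask_ratio=0.2):
    """Zero out a centered low-frequency block of the 2D spectrum.

    Low-frequency components often carry domain-specific style
    (illumination, color statistics), so suppressing them is one
    cheap way to reduce sensitivity to domain gaps.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    ch, cw = h // 2, w // 2
    rh = max(1, int(h * mask_ratio / 2))
    rw = max(1, int(w * mask_ratio / 2))
    spectrum[ch - rh:ch + rh, cw - rw:cw + rw] = 0  # drop low frequencies
    masked = np.fft.ifft2(np.fft.ifftshift(spectrum))
    return np.real(masked)

# Toy usage on a random single-channel 64x64 "image".
rng = np.random.default_rng(0)
img = rng.random((64, 64))
out = frequency_mask(img, mask_ratio=0.2)
```

Such a transform is typically applied as an augmentation to source-domain images during training, rather than at test time.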