Contrastive Boundary-Aware Learning for Unsupervised Cross-Modality Whole Heart Segmentation

Authors

  • Anusha Kotte Department of CSE, Jawaharlal Nehru Technological University, Hyderabad, India
  • V. Kamakshi Prasad Department of CSE, Jawaharlal Nehru Technological University, Hyderabad, India
Volume: 15 | Issue: 4 | Pages: 24409-24416 | August 2025 | https://doi.org/10.48084/etasr.10892

Abstract

Whole heart segmentation plays a crucial role in the diagnosis of cardiovascular diseases and in treatment planning. Although many existing works achieve promising results, challenges remain due to domain discrepancies, the scarcity of annotated data, and the complex anatomy of the heart. Unsupervised Domain Adaptation (UDA) has emerged as a promising solution to the scarcity of annotated data by transferring knowledge from labeled to unlabeled modalities. Many existing domain adaptation methods address domain distribution gaps through adversarial training, yet often produce erroneous results for small cardiac structures such as the myocardium; this remains a significant challenge due to insufficient boundary preservation and feature misalignment. In this work, we propose Contrastive Learning (CL) for feature alignment across Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities without relying on global prototypes, especially for smaller and more complex regions such as the myocardium. Dice and boundary-aware losses are integrated to maximize region overlap and to penalize discrepancies at the boundaries, which also enhances boundary precision. A substantial set of experiments was conducted on the Multi-Modality Whole Heart Segmentation (MM-WHS) dataset. The results demonstrate significant improvements in segmentation accuracy, particularly in challenging regions such as the myocardium, yielding a mean Dice coefficient of 75.3% and an Average Symmetric Surface Distance (ASSD) of 2.7 mm, outperforming existing methods.
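To make the loss design concrete, the following is a minimal pure-Python sketch (not the authors' implementation) of the two terms the abstract combines: a soft Dice loss that rewards region overlap, and a boundary-aware penalty measured as a symmetric mean distance between mask contours (the same idea underlying the ASSD metric). Function names, the 4-neighbor boundary definition, and the weight `lam` are illustrative assumptions; the paper's exact formulation and weighting are not given in the abstract.

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flat lists of probabilities/labels in [0, 1]."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def boundary(mask):
    """Boundary pixels of a binary 2D mask: foreground pixels with at
    least one background (or out-of-image) 4-neighbor."""
    h, w = len(mask), len(mask[0])
    edge = set()
    for i in range(h):
        for j in range(w):
            if mask[i][j] != 1:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or mask[ni][nj] == 0:
                    edge.add((i, j))
                    break
    return edge

def boundary_penalty(pred_mask, gt_mask):
    """Symmetric mean nearest-boundary (Manhattan) distance, ASSD-style."""
    bp, bg = boundary(pred_mask), boundary(gt_mask)
    if not bp or not bg:
        return 0.0
    def mean_min_dist(a, b):
        return sum(min(abs(x - u) + abs(y - v) for (u, v) in b)
                   for (x, y) in a) / len(a)
    return 0.5 * (mean_min_dist(bp, bg) + mean_min_dist(bg, bp))

def combined_loss(pred_probs, gt_flat, pred_mask, gt_mask, lam=0.5):
    # lam trades off the boundary term against Dice; 0.5 is illustrative.
    return soft_dice_loss(pred_probs, gt_flat) + lam * boundary_penalty(pred_mask, gt_mask)
```

A perfect prediction drives both terms to zero, while a prediction whose contour drifts from the ground-truth contour is penalized in proportion to the drift, which is what makes the combined objective boundary-aware for thin structures such as the myocardium.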
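The contrastive feature-alignment idea can likewise be sketched with a standard InfoNCE-style objective: an anchor feature (e.g. from a CT slice) is pulled toward a positive feature of the same anatomical class (e.g. from MRI) and pushed away from negatives, without any global class prototype. This is a generic sketch under those assumptions, not the paper's loss; the temperature `tau` is illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    du = math.sqrt(sum(x * x for x in u))
    dv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (du * dv)

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss: -log softmax score of the positive pair among all pairs."""
    logits = [cosine(anchor, positive) / tau] + \
             [cosine(anchor, n) / tau for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

When the anchor already matches its positive, the loss is near zero; when it matches a negative instead, the loss grows, pulling same-class CT and MRI features together in the shared space.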

Keywords:

deep learning, cardiac CT, domain adaptation, contrastive learning, whole-heart segmentation



How to Cite

[1] A. Kotte and V. K. Prasad, “Contrastive Boundary-Aware Learning for Unsupervised Cross-Modality Whole Heart Segmentation”, Eng. Technol. Appl. Sci. Res., vol. 15, no. 4, pp. 24409–24416, Aug. 2025.
