Learning anatomy from unlabelled CT volumes: A self-supervised framework for improving prostate radiotherapy segmentation
Document Type
Article
Publication Date
2-19-2026
Abstract
Background: Accurate structure contouring on computed tomography (CT) is critical for prostate cancer radiotherapy, but it remains labour-intensive and prone to interobserver variability, particularly for small, low-contrast organs such as the prostate, seminal vesicles (SV), and penile bulb (PB). Deep-learning models can automate this task; however, they typically require large, fully labelled datasets that are often unavailable in clinical settings.

Purpose: This study evaluated whether self-supervised (label-free) slice-prediction pretraining could enhance segmentation performance, especially in scenarios with limited annotated data.

Methods: We used 322 pelvic CT volumes (215 from UMMC, 107 from TCIA), split 80:20 into training and testing sets (258 training, 64 testing patients). A novel lightweight 2D U-Net encoder was first pretrained on unlabelled data using a slice-prediction task across the axial, sagittal, and coronal planes. The pretrained model was then fine-tuned for multi-class segmentation using either the full dataset or a reduced subset of 60 labelled patients. Baselines trained from scratch with 1-channel or 3-channel input were included for comparison. Segmentation accuracy was assessed using mean distance agreement (MDA) and the Dice similarity coefficient (DSC). Paired t-tests with Bonferroni correction were applied to assess statistical significance.

Results: Models with self-supervised pretraining achieved consistently lower MDA across all major pelvic structures. Notable improvements included reductions in MDA for the bladder from 0.600 mm to 0.547 mm, femoral heads from 1.370 mm to 0.994 mm, PB from 1.470 mm to 1.283 mm, rectum from 0.792 mm to 0.669 mm, prostate from 1.281 mm to 1.183 mm, and SV from 1.175 mm to 0.893 mm.

Conclusions: Self-supervised pretraining via slice prediction enables anatomically informed feature learning and improves segmentation robustness under limited-data conditions. This strategy enhances accuracy without relying on manual labels during pretraining and is compatible with computationally lightweight architectures, making it well suited to resource-constrained clinical environments.
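The abstract does not spell out the exact form of the slice-prediction pretext task, but a common label-free formulation is to extract a random 2D slice from an unlabelled volume along one of the three anatomical planes and ask the network to regress the slice's normalized position along that axis. The following is a minimal sketch of that data-sampling step only, assuming a 3D numpy array as the CT volume and an assumed axis convention; function and variable names are illustrative, not from the paper:

```python
import numpy as np

def sample_slice_prediction_pair(volume, rng=None):
    """Draw one self-supervised training example from an unlabelled CT volume.

    Picks a random plane (axis 0/1/2, assumed to map to axial/coronal/sagittal),
    extracts one 2D slice, and returns it together with its normalized position
    along that axis as the regression target. No manual labels are needed.
    """
    rng = rng if rng is not None else np.random.default_rng()
    axis = int(rng.integers(0, 3))                    # which anatomical plane
    idx = int(rng.integers(0, volume.shape[axis]))    # which slice in that plane
    slice_2d = np.take(volume, idx, axis=axis)        # the 2D input to the encoder
    target = idx / (volume.shape[axis] - 1)           # normalized position in [0, 1]
    return slice_2d, target, axis

# Toy example: a 64x64x64 stand-in for a CT volume
vol = np.zeros((64, 64, 64), dtype=np.float32)
sl, pos, ax = sample_slice_prediction_pair(vol, rng=np.random.default_rng(0))
```

A 2D encoder pretrained on such (slice, position) pairs must learn where in the pelvis a slice sits, which is one plausible route to the anatomically informed features the authors describe; the paper itself should be consulted for the actual objective and architecture.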
Keywords
Deep learning segmentation, Pelvic, Prostate, Radiotherapy, Self-supervised, Unsupervised
Publication Title
Medical Physics
ISSN
0094-2405
DOI
10.1002/mp.70357
Recommended Citation
Hizam, Diyana Afrina; Ung, Ngie Min; Saad, Marniza; Mohd Salleh, Firdaus; Muaadz, Asyraf; and Tan, Li Kuo, "Learning anatomy from unlabelled CT volumes: A self-supervised framework for improving prostate radiotherapy segmentation" (2026). Research Publications (2026 to 2030). 7.
https://knova.um.edu.my/research_publications_2026_2030/7
Volume
53
Issue
2
First Page
e70357
Publisher
Wiley