Deep learning model development for auto-delineation organs at risk on CT simulation images of head and neck cancer

Authors

  • Rutthapong Nantaprae Medical Physics Program, Department of Radiological Technology, Faculty of Allied Health Sciences, Naresuan University
  • Titipong Kaewlek Department of Radiological Technology, Faculty of Allied Health Sciences, Naresuan University

Keywords:

U-Net architecture, Feature Pyramid Network (FPN) architecture, Transfer learning and fine-tuning

Abstract

Background: Delineation of organs at risk (OARs) plays an important role in treatment planning for head and neck radiation therapy. OARs that receive radiation doses exceeding their tolerance limits can lose normal function. In clinical practice, OARs are usually delineated manually by a radiation oncologist (RO). However, manual delineation is time-consuming and depends on the RO's experience, leading to intra- and inter-observer variability.

Objective: To develop a deep learning model for auto-delineation of OARs on computed tomography images of head and neck cancer patients, and to evaluate the model's performance.

Materials and Methods: Head and neck computed tomography simulation images and the corresponding DICOM radiation therapy structure sets were used as input data for training the deep learning models. The models were developed using the U-Net architecture and the Feature Pyramid Network (FPN) architecture, both modified to use a VGG19 backbone. Transfer learning and fine-tuning were used to train the models. The models were evaluated by the Dice Similarity Coefficient (DSC), the 95th-percentile Hausdorff Distance (95%HD), the model training time, and the model prediction time.
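The two evaluation metrics named above can be computed directly from a pair of binary segmentation masks. The following is a minimal NumPy sketch (not the authors' actual evaluation code) of the DSC and a brute-force 95%HD, assuming the masks are 2D arrays on an isotropic grid with a hypothetical `spacing` parameter giving the pixel size in millimeters:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / total

def hd95(pred, target, spacing=1.0):
    """95th-percentile Hausdorff distance between the foreground points
    of two binary masks, in the same units as `spacing` (e.g. mm)."""
    a = np.argwhere(pred.astype(bool)) * spacing
    b = np.argwhere(target.astype(bool)) * spacing
    # pairwise Euclidean distances between the two point sets
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    d_ab = d.min(axis=1)  # each pred point -> nearest target point
    d_ba = d.min(axis=0)  # each target point -> nearest pred point
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

A DSC of 1.0 means perfect overlap and 0.0 means none; a smaller 95%HD means the predicted contour lies closer to the reference contour. Production evaluation code would typically restrict the distance computation to boundary voxels and use a spatial data structure rather than the full pairwise distance matrix shown here.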

Results: The average DSC of the TUVGG19 and FFVGG19 models was at least 0.80 for all OARs, except for both parotid glands and the spinal cord, for which it was at least 0.72. The average 95%HD was no more than 2 mm for all OARs, except for the mandible and both parotid glands, for which it was no more than 4 mm. The average time to predict a mask was 1.286 and 1.534 seconds per image for the TUVGG19 and FFVGG19 models, respectively.

Conclusion: Deep learning models for auto-delineation of OARs were developed and showed good prediction performance. The DSC and 95%HD values were comparable to those reported in previous studies, and mask prediction was fast.


Published

2021-04-30

How to Cite

Nantaprae R, Kaewlek T. Deep learning model development for auto-delineation organs at risk on CT simulation images of head and neck cancer. J Thai Assn of Radiat Oncol [Internet]. 2021 Apr. 30 [cited 2024 May 11];27(1):R12-R28. Available from: https://he01.tci-thaijo.org/index.php/jtaro/article/view/248991

Section

Original articles