Abstract
Objective: Automatic delineation of organs at risk (OARs) helps physicians prepare radiotherapy plans efficiently and accurately, improving both the precision and the therapeutic effect of radiotherapy. This study proposes an automatic OAR delineation method applicable to both afterloading brachytherapy and external beam irradiation for cervical cancer, exploiting the structural similarity of OARs between the two scenarios to improve the segmentation accuracy of hard-to-segment OARs.
Methods: An ensemble learning strategy was adopted: models pre-trained separately on afterloading and external irradiation data were introduced into the ensemble model as feature extraction modules, and the model was trained alternately on data from the two scenarios, guiding it to learn both the scenario-specific features of each OAR and the features shared across scenarios. Computed tomography (CT) series from 84 afterloading and 46 external irradiation cervical cancer patients were collected, and five-fold cross-validation was used to split training and test sets. The five-fold average Dice similarity coefficient (DSC) served as the figure of merit for delineation accuracy.
Results: For most OARs (the rectum and bladder in the afterloading images and the bladder in the external irradiation images), the DSC exceeded 0.7. Compared with an independent residual U-net (Res-Unet) model, the proposed method segmented the difficult OARs (the sigmoid colon in the afterloading images and the rectum in the external irradiation images) better, with DSC improvements of more than 3%.
Conclusion: The proposed method segments OARs for multiple cervical cancer radiotherapy scenarios with a single model and achieves reasonably accurate results, which can greatly shorten the time physicians spend delineating OARs and improve their work efficiency.
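The evaluation metric above is the standard Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|), computed per organ and averaged over the five folds. A minimal sketch of this metric for binary segmentation masks, assuming NumPy arrays (the function name and example masks are illustrative, not taken from the paper's code):

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy example: two 4x4 masks, 3 foreground voxels each, overlapping in 2
a = np.zeros((4, 4)); a[0, :3] = 1
b = np.zeros((4, 4)); b[0, 1:4] = 1
print(round(dice_similarity_coefficient(a, b), 3))  # → 0.667
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the paper's threshold of 0.7 indicates substantial agreement between automatic and manual contours.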
Authors
程婷婷
张子健
杨馨
卢山富
钱东东
王先良
朱红
CHENG Tingting; ZHANG Zijian; YANG Xin; LU Shanfu; QIAN Dongdong; WANG Xianliang; ZHU Hong (Department of Oncology, Xiangya Hospital, Central South University, Changsha 410008; National Clinical Research Center for Geriatric Diseases, Xiangya Hospital, Changsha 410008; Guangzhou Perception Vision Medical Technologies Limited Company, Guangzhou 510530; Department of Radiotherapy Center, Sichuan Cancer Hospital, Chengdu 610041, China)
Source
《中南大学学报(医学版)》
CAS
CSCD
Peking University Core Journals (北大核心)
2022, Issue 8, pp. 1058-1064 (7 pages)
Journal of Central South University: Medical Science
Funding
Hunan Provincial Natural Science Foundation (2022JJ70072)
Xiangya Hospital Clinical Research Foundation (2016L06)
Sichuan Province Key Research and Development Project (2022YFG0194)
Chengdu Science and Technology Bureau Technology Innovation R&D Project (2021-YF05-02107-SN)
Keywords
deep learning
organs at risk
automatic delineation
cervical cancer radiotherapy
ensemble learning