Abstract
To address the problems of low utilization of multimodal information and low segmentation accuracy caused by small training samples in brain tumor segmentation tasks, a segmentation model named VAE U-Net is proposed. It takes the 3D U-Net model as its backbone and fuses a Variational AutoEncoder (VAE) with an attention module to realize automatic segmentation of multimodal brain tumor MRI images. The proposed method is evaluated on the BraTS 2020 dataset; on the test set, the Dice coefficients for the whole tumor, tumor core, and enhancing core regions are 81.44, 90.82, and 89.43, respectively, improvements of 2.03, 1.05, and 2.38 percentage points over the original 3D U-Net.
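The abstract only names the architectural ingredients (a 3D U-Net backbone, an auxiliary VAE branch, and an attention module), so the following is a minimal, hedged sketch of how such a combination is commonly wired up, not the authors' implementation. All module names, layer sizes, and hyperparameters (e.g. `VAEUNet3D`, `base=16`, `latent=128`) are assumptions for illustration only.

```python
# Illustrative sketch only: a small 3D U-Net-style encoder-decoder with an
# auxiliary VAE branch for regularization and a channel-attention gate.
# This is NOT the paper's code; names and sizes are assumed for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate that reweights feature channels."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3, 4)))      # global average pool -> (B, C)
        return x * w.view(*w.shape, 1, 1, 1)    # rescale each channel


class VAEUNet3D(nn.Module):
    """Encoder-decoder with a VAE head that reconstructs the input volume."""
    def __init__(self, in_ch=4, n_classes=3, base=16, latent=128):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, base), conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.att = ChannelAttention(base * 4)
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        self.seg_head = nn.Conv3d(base, n_classes, 1)
        # VAE branch: bottleneck features -> latent Gaussian -> reconstruction
        self.to_stats = nn.Conv3d(base * 4, 2 * latent, 1)
        self.recon = nn.Conv3d(latent, in_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        e3 = self.att(self.enc3(F.max_pool3d(e2, 2)))
        # Decoder with skip connections
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        seg = self.seg_head(d1)
        # VAE branch: sample z via the reparameterization trick, then reconstruct
        mu, logvar = self.to_stats(e3).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        rec = F.interpolate(self.recon(z), size=x.shape[2:])
        return seg, rec, mu, logvar


if __name__ == "__main__":
    model = VAEUNet3D()
    x = torch.randn(1, 4, 64, 64, 64)            # 4 MRI modalities, 64^3 patch
    seg, rec, mu, logvar = model(x)
    # Training would typically combine a Dice/cross-entropy segmentation loss
    # with an L2 reconstruction loss and a KL-divergence term on (mu, logvar).
    print(seg.shape, rec.shape)
```

In this kind of setup, the VAE branch acts as a regularizer on the shared encoder (helpful when training data are scarce), while the attention gate reweights bottleneck channels; the exact placement of both components in the paper may differ.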
Authors
ZHANG Dingke; YANG Wenxia; ZHANG Yuanzhou (Wuhan University of Technology, Wuhan 430070, China)
Source
Modern Information Technology (《现代信息科技》), 2023, Issue 13, pp. 80-83, 87 (5 pages in total)
Funding
Independent Innovation Project Fund of Wuhan University of Technology (216814016).