
CT image classification of liver tumors based on multi-scale and deep feature extraction (多尺度深度特征提取的肝脏肿瘤CT图像分类)  Cited by: 4
Abstract  Objective: Liver tumors are among the most aggressive malignancies in the human body. Traditional tumor diagnosis relies on reading the patient's computed tomography (CT) images; heavy workloads cause fatigue, and misdiagnosis is hard to avoid, which motivates computer-aided diagnosis. Existing deep learning methods, however, still suffer from low tumor classification accuracy and weak feature representation and feature extraction capability. This paper therefore designs a classification network model based on multi-scale and deep feature extraction. Method: Regions of interest are first selected from the original CT images, pixel values are converted according to the CT header files, and data augmentation is applied to expand the constructed dataset; the processed data are then fed into the proposed classification network, which outputs the classification result. The network uses a multi-scale feature extraction module to extract multi-scale image features and enlarge the receptive field, a deep feature extraction module to suppress background noise and focus on the effective features of the lesion region, parallel dilated (atrous) convolutions to diversify the scales, and octave convolution in place of ordinary convolution to reduce the parameter count and improve classification performance, finally achieving accurate classification of liver tumors. Result: The proposed model reaches a top accuracy of 87.74%, 9.92% higher than the original model; compared with mainstream classification networks it leads on multiple evaluation metrics, reaching 86.04% recall, 87% precision, and an F1-score of 86.42%. Ablation experiments further verify the effectiveness of the proposed method. Conclusion: The proposed method classifies liver tumors fairly accurately; integrated into professional medical software, it can provide a reliable basis for physicians' early diagnosis and treatment.

Objective  Liver tumors are among the most aggressive malignancies in the human body. Determining the lesion type and stage from computed tomography (CT) images guides diagnosis and treatment strategy, and classifying lesions requires professional knowledge and rich expert experience. Fatigue sets in when the workload is heavy, and even experienced senior experts have difficulty avoiding misdiagnosis. Deep learning avoids the drawback of traditional machine learning, which spends considerable time on manual feature extraction and dimensionality reduction, and can extract high-dimensional image features, so using deep learning to assist doctors in diagnosis is important. In existing medical image classification tasks, low tumor classification accuracy, weak feature extraction capability, and roughly constructed datasets remain challenges. To address these issues, this study presents a classification network with multi-scale and deep feature extraction.

Method  First, we extract regions of interest (ROIs) according to the liver tumor contours labeled by experienced radiologists, together with ROIs of healthy livers. Each ROI is sized relative to the lesion so that it captures the lesion area and its surrounding tissue; because lesions differ in size, the extracted ROIs also differ in size. Pixel values are then converted and data augmentation is performed. The dataset uses Hounsfield windows: CT values range over (-1 024, 3 071), whereas the stored pixel values of digital imaging and communications in medicine (DICOM) images range over (0, 4 096), so DICOM pixel values must be converted to CT values. We read the rescale intercept and rescale slope from the DICOM header file and apply the linear conversion formula (CT value = pixel value × rescale slope + rescale intercept). The CT values of the liver dataset are then limited to [-100, 400] HU to avoid background noise from unrelated organs and tissues. Several data augmentation methods, such as flipping, rotation, and related transformations, are applied to expand the diversity of the dataset.
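The conversion and windowing step can be illustrated with a minimal sketch. It assumes pydicom and NumPy; the function name, file path handling, and the final [0, 1] rescaling are illustrative choices rather than details given in the paper:

```python
import numpy as np
import pydicom

def dicom_to_windowed_hu(path, hu_min=-100, hu_max=400):
    """Convert one DICOM slice to Hounsfield units and apply the liver window."""
    ds = pydicom.dcmread(path)
    # Linear rescaling defined by the DICOM header: HU = pixel * slope + intercept.
    hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
    # Clip to [-100, 400] HU to suppress unrelated organs and background, as described above.
    hu = np.clip(hu, hu_min, hu_max)
    # Normalize to [0, 1] for network input (a common choice; not specified in the paper).
    return (hu - hu_min) / (hu_max - hu_min)
```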
The processed images are then fed into MDSENet for classification. MDSENet is an SEResNet-like convolutional neural network that performs end-to-end classification: SEResNet automatically learns the importance of each channel, strengthening useful features and suppressing useless ones, and MDSENet is considerably deeper than SEResNet. Our contributions are as follows (rough sketches of the first three modules are given after the abstract). 1) Hierarchical residual-like connections improve multi-scale representation and enlarge the receptive field of each network layer: the feature maps produced by a 1×1 convolution are split into four groups, and each group passes through 3×3 residual-like convolution groups, which strengthens multi-scale feature extraction and the capture of lesion-area features. 2) Channel attention and spatial attention further focus the network on the informative content of medical images: the feature maps first pass through the channel attention module and are multiplied by its output, then pass through the spatial attention module and are multiplied by its output, which emphasizes lesion-area features and reduces the influence of background noise. 3) Atrous convolutions connected in parallel, in the spirit of spatial pyramid pooling, are followed by 1×1 convolution layers to strengthen the features; the outputs are concatenated and softmax is used for classification. This enlarges the receptive field while preserving image resolution, improving feature expression and effectively preventing information loss. 4) Ordinary convolution is replaced by octave convolution to reduce the number of parameters and improve classification performance. We compared DenseNet, ResNet, MnasNet, MobileNet, ShuffleNet, SKResNet, and SEResNet with our MDSENet, all trained on the liver dataset. Owing to graphics processing unit (GPU) memory limits, we used a batch size of 16, Adam optimization, and a learning rate of 0.002 for 150 epochs. All experiments were implemented in the PyTorch framework on Ubuntu 16.04 with an NVIDIA GeForce GTX 1060 Ti GPU.

Result  The liver dataset contains 4 096 training images and 1 021 test images. The classification accuracy of the proposed method is 87.74%, 9.92% higher than the baseline (SEResNet101). Our model achieves the best results compared with state-of-the-art networks, reaching 86.04% recall, 87% precision, and an 86.42% F1-score across the evaluation metrics. Ablation experiments verify the effectiveness of the method.

Conclusion  This study proposes a method that classifies liver tumors accurately. Combined into professional medical software, it can provide a reliable foundation for physicians' early diagnosis and treatment.
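A rough PyTorch-style sketch of the hierarchical residual-like grouping in contribution 1) is given below. It only illustrates the split-into-four-groups idea; the channel width, module name, and surrounding block structure are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Res2Net-style hierarchical grouping: features after a 1x1 convolution are
    split into four groups; each later group adds the previous group's output
    before a 3x3 convolution, enlarging the receptive field at multiple scales."""

    def __init__(self, channels=64, groups=4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        width = channels // groups
        self.reduce = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(groups - 1)
        )

    def forward(self, x):
        splits = torch.chunk(self.reduce(x), self.groups, dim=1)
        out = [splits[0]]                  # first group passes through unchanged
        prev = None
        for i, conv in enumerate(self.convs):
            y = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = conv(y)
            out.append(prev)
        return torch.cat(out, dim=1)       # concatenate the multi-scale groups
```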
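Contribution 2) describes sequential channel and spatial attention with element-wise multiplication, close in spirit to CBAM. The sketch below follows that pattern; the reduction ratio and 7×7 kernel are common CBAM defaults assumed here, not values stated in the abstract:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Sequential channel-then-spatial attention: the input is reweighted by the
    channel attention output, then by the spatial attention output, emphasizing
    lesion-region features and suppressing background noise."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):
        # Channel attention: average- and max-pooled descriptors share one MLP.
        avg = self.channel_mlp(nn.functional.adaptive_avg_pool2d(x, 1))
        mx = self.channel_mlp(nn.functional.adaptive_max_pool2d(x, 1))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: channel-wise mean and max maps fed to a 7x7 convolution.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))
```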
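Contribution 3) uses parallel atrous convolutions in the spirit of spatial pyramid pooling. A minimal sketch of that structure follows; the dilation rates and 1×1 fusion are illustrative assumptions, since the abstract does not specify them:

```python
import torch
import torch.nn as nn

class ParallelAtrous(nn.Module):
    """ASPP-like branch: parallel dilated 3x3 convolutions with different rates
    enlarge the receptive field without reducing resolution; the concatenated
    outputs are fused by a 1x1 convolution."""

    def __init__(self, in_channels, out_channels, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size=3,
                      padding=r, dilation=r, bias=False)
            for r in rates
        )
        self.fuse = nn.Conv2d(out_channels * len(rates), out_channels, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation keeps all scales.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```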
Authors: Mao Jingyi (毛静怡), Song Yuqing (宋余庆), Liu Zhe (刘哲), School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
Source: Journal of Image and Graphics (中国图象图形学报), 2021, No. 7, pp. 1704-1715 (12 pages); indexed in CSCD and the Peking University Core Journals list
Funding: National Natural Science Foundation of China (61976106, 61772242, 61572239); China Postdoctoral Science Foundation (2017M611737); Jiangsu Province "Six Talent Peaks" High-Level Talent Project (DZXX-122); Zhenjiang Key Science and Technology Project of Health and Family Planning (SHW2017019)
Keywords: deep learning; liver lesion classification; multi-scale features; feature extraction; dilated convolution