Abstract
Aiming at the problems of existing Alzheimer's Disease (AD) auxiliary classification methods based on single-modal imaging data, namely the limited pathological information extractable from a single modality, unstable image feature extraction, and low classification accuracy, a multi-modal data-based AD classification method was proposed. Motivated by the fact that clinical AD diagnosis relies on the comprehensive analysis of multiple examinations, four types of multi-modal data, namely Magnetic Resonance Imaging (MRI), scales, biomarkers, and genes, were used for AD auxiliary diagnosis, and a multi-modal classification network was designed for the characteristics of these data. The network contains two feature extraction branches, one for image data and one for non-image data. In the image branch, the preprocessed MRI data were fed into an improved network for feature extraction; the improved network took the Residual Network (ResNet) as its backbone and embedded Coordinate Attention (CA) modules into the residual structure, so that the model focuses on the AD lesion regions in MRI images. In the non-image branch, feature information was extracted from the scale, biomarker, and gene data by a multilayer perceptron. Finally, the extracted MRI image features and non-image features were fused for classification. Experimental results on a leakage-free multi-modal dataset show that, compared with the baseline ResNet, the improved MRI feature extraction network increases the AD/Mild Cognitive Impairment (MCI)/Cognitively Normal (CN) three-class classification accuracy by 5.42 percentage points and the AD/CN binary classification accuracy by 8.87 percentage points, demonstrating the effectiveness of the network improvement. After multi-modal fusion, the AD/CN classification accuracy reaches 92.89%, an increase of 8.40 percentage points over single-modal MRI data, and the AD/MCI/CN classification accuracy increases by 13.51 percentage points, verifying that the proposed method can fuse pathological information from multiple modalities and effectively improve AD classification accuracy. In summary, the proposed method can effectively enhance the performance of AD auxiliary diagnosis.
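The abstract describes a two-branch architecture: a ResNet backbone with CA modules embedded in the residual blocks for MRI features, a multilayer perceptron for scale/biomarker/gene features, and a fusion step before classification. The following is a minimal PyTorch-style sketch of that kind of design; the CA module follows the published Coordinate Attention formulation, while the backbone depth, layer sizes, use of 2D inputs, and concatenation-based fusion are illustrative assumptions rather than the authors' exact implementation.

```python
# Hedged sketch of the architecture outlined in the abstract (not the paper's code).
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention: pools along H and W separately so the block can
    weight feature positions, e.g. AD lesion regions in MRI slices."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.size()
        x_h = x.mean(dim=3, keepdim=True)                          # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)       # (n, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (n, c, 1, w)
        return x * a_h * a_w

class CAResidualBlock(nn.Module):
    """ResNet basic block with a CA module inserted before the skip addition."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.ca = CoordinateAttention(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.ca(self.bn2(self.conv2(out)))
        return self.relu(out + x)

class MultiModalADClassifier(nn.Module):
    """Two branches: CA-ResNet features for MRI, MLP features for
    scale/biomarker/gene data, concatenated and classified jointly."""
    def __init__(self, non_image_dim, num_classes=3):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            CAResidualBlock(64), CAResidualBlock(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.non_image_branch = nn.Sequential(
            nn.Linear(non_image_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 32), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(64 + 32, num_classes)  # AD / MCI / CN

    def forward(self, mri, tabular):
        fused = torch.cat([self.image_branch(mri),
                           self.non_image_branch(tabular)], dim=1)
        return self.classifier(fused)
```

In this sketch, fusion is plain feature concatenation followed by a linear classifier; the abstract does not specify the fusion operator, so any learned fusion layer could be substituted without changing the two-branch structure.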
Authors
ZHANG Yunxiao, WU Xiaohong, TANG Lili, XU Qinghua, WANG Bin, HE Xiaohai (College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan 610065, China; Chengdu Yixinyuan Health Management Company Limited, Chengdu, Sichuan 610051, China)
Source
Journal of Computer Applications (《计算机应用》)
CSCD; Peking University Core Journal (北大核心)
2023, No. S02, pp. 298-305 (8 pages)
Fund
Chengdu Major Science and Technology Application Demonstration Project (2019-YF09-00120-SN).
Keywords
multi-modal data
deep learning
Residual Network (ResNet)
Coordinate Attention (CA)
Magnetic Resonance Imaging (MRI)
Alzheimer's Disease (AD)