Unsupervised Registration for Liver CT-MR Images Based on Deep Learning (Cited by: 5)

Abstract: Multimodal registration is a key step in medical image analysis and plays an important role in computer-aided diagnosis of liver cancer and in image-guided surgical treatment. To address the heavy computation, long runtime, and limited accuracy of traditional iterative multimodal liver registration, an unsupervised deep-learning registration algorithm based on multi-scale deformation fusion and dual-input spatial attention is proposed. A multi-scale deformation fusion framework extracts image features at different resolutions and registers the liver stage by stage in a coarse-to-fine manner, improving registration accuracy while preventing the network from falling into local optima. A dual-input spatial attention module fuses spatial and contextual information at different levels of the encoder-decoder to extract the discrepant features between the two images and enhance feature representation. A structural information loss based on neighborhood descriptors drives the iterative optimization of the network, so accurate unsupervised registration is achieved without any prior information. Experimental results on a clinical liver CT-MR dataset show that, compared with the traditional Affine, Elastix, and VoxelMorph algorithms, the proposed algorithm achieves the best Dice Similarity Coefficient (DSC) of 0.9261 ± 0.0186 and Target Registration Error (TRE) of 6.39 ± 3.03 mm, with an average registration time of 0.35 ± 0.018 s, nearly 380 times faster than Elastix. The algorithm accurately extracts features, estimates a regular deformation field, and delivers both high registration accuracy and fast registration speed.
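
The coarse-to-fine, multi-scale deformation fusion described above can be illustrated with a short sketch. This is a minimal interpretation, assuming each resolution level predicts a displacement field that is upsampled and composed with the next level's refinement; the paper's actual network layout and fusion operator are not given here, so the function names (warp, compose, fuse_multiscale) are illustrative only.

```python
# Hedged sketch of multi-scale deformation fusion (assumed structure, not the authors' code).
import torch
import torch.nn.functional as F

def warp(vol, flow):
    """Warp a volume (N, C, D, H, W) with a dense displacement field (N, 3, D, H, W) given in voxels."""
    n, _, d, h, w = vol.shape
    # Identity grid in voxel coordinates, ordered (z, y, x) to match the tensor layout.
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, device=vol.device),
        torch.arange(h, device=vol.device),
        torch.arange(w, device=vol.device),
        indexing="ij")
    grid = torch.stack((zz, yy, xx), dim=0).float().unsqueeze(0) + flow
    # Normalise to [-1, 1] as required by grid_sample.
    sizes = torch.tensor([d, h, w], device=vol.device, dtype=torch.float32).view(1, 3, 1, 1, 1)
    grid = 2.0 * grid / (sizes - 1) - 1.0
    # grid_sample expects shape (N, D, H, W, 3) with the last dim ordered (x, y, z).
    grid = grid.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]
    return F.grid_sample(vol, grid, align_corners=True)

def compose(coarse, fine):
    """Total displacement when the coarse field is applied first and the fine field refines it."""
    return fine + warp(coarse, fine)

def fuse_multiscale(flows):
    """`flows` is ordered coarse -> fine; assumes each level exactly doubles the grid size."""
    total = flows[0]
    for f in flows[1:]:
        # Upsample the accumulated field and scale its displacements to the finer voxel grid.
        total = 2.0 * F.interpolate(total, scale_factor=2, mode="trilinear", align_corners=True)
        total = compose(total, f)
    return total
```

Composing fields this way (rather than simply summing them) keeps the fused deformation consistent with applying the coarse warp first and then the finer refinement, which is the usual reason coarse-to-fine schemes avoid large local optima.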
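The dual-input spatial attention module is described only at a high level. A plausible PyTorch sketch, assuming the block concatenates fixed- and moving-image encoder features and predicts a single spatial weight map, could look as follows; the class name and layer sizes are assumptions, not the authors' implementation.

```python
# Hedged sketch of a dual-input spatial attention block (assumed design).
import torch
import torch.nn as nn

class DualInputSpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Mix the two feature streams, then predict a single [0, 1] spatial weight map.
        self.conv = nn.Sequential(
            nn.Conv3d(2 * channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_fixed, feat_moving):
        # The joint map is meant to highlight regions where the two images disagree.
        attn = self.conv(torch.cat([feat_fixed, feat_moving], dim=1))
        # Re-weight both streams so later decoder layers focus on discrepant regions.
        return feat_fixed * attn, feat_moving * attn

# Example with 16-channel encoder features of a 3D volume.
if __name__ == "__main__":
    f = torch.randn(1, 16, 32, 32, 32)
    m = torch.randn(1, 16, 32, 32, 32)
    f_w, m_w = DualInputSpatialAttention(16)(f, m)
    print(f_w.shape, m_w.shape)  # torch.Size([1, 16, 32, 32, 32]) twice
```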
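The structural information loss is said to be built on neighborhood descriptors; a common choice for CT-MR similarity is a MIND-style (Modality Independent Neighbourhood Descriptor) self-similarity measure. The sketch below follows that spirit under stated assumptions (six face-neighbor patch distances, softly normalized per voxel, compared with an L1 loss); the paper's exact descriptor and weighting may differ.

```python
# Rough, MIND-inspired neighborhood-descriptor loss (assumptions noted above).
import torch
import torch.nn.functional as F

def neighborhood_descriptor(img, radius=1):
    """Self-similarity of a single-channel volume (N, 1, D, H, W): patch SSD to six face neighbors."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    k = 2 * radius + 1
    box = torch.ones(1, 1, k, k, k, device=img.device) / k ** 3   # mean (box) filter
    feats = []
    for dz, dy, dx in offsets:
        shifted = torch.roll(img, shifts=(dz, dy, dx), dims=(2, 3, 4))
        ssd = F.conv3d((img - shifted) ** 2, box, padding=radius)  # patch-wise squared distance
        feats.append(ssd)
    d = torch.cat(feats, dim=1)                                    # (N, 6, D, H, W)
    return torch.exp(-d / (d.mean(dim=1, keepdim=True) + 1e-6))    # per-voxel normalization

def structural_loss(fixed, warped_moving):
    """L1 distance between the descriptors of the fixed and the warped moving image."""
    return F.l1_loss(neighborhood_descriptor(fixed), neighborhood_descriptor(warped_moving))
```

Because the descriptor depends on local self-similarity rather than raw intensities, it can be compared across CT and MR, which is what makes a loss of this kind usable for unsupervised multimodal training.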
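The reported DSC and TRE values follow the standard definitions; a brief reference implementation, assuming binary liver masks and paired anatomical landmarks given in millimetres (the landmark protocol is not described in the abstract), is sketched below.

```python
# Standard DSC and TRE metrics, assuming binary masks and corresponding landmark pairs in mm.
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def target_registration_error(landmarks_fixed_mm, landmarks_warped_mm):
    """Mean Euclidean distance (mm) between corresponding landmarks after registration."""
    diffs = np.asarray(landmarks_fixed_mm) - np.asarray(landmarks_warped_mm)
    return np.linalg.norm(diffs, axis=1).mean()
```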
Authors: WANG Shuaikun, ZHOU Zhiyong, HU Jisu, QIAN Xusheng, GENG Chen, CHEN Guangqiang, JI Jiansong, DAI Yakang (Division of Life Sciences and Medicine, School of Biomedical Engineering (Suzhou), University of Science and Technology of China, Suzhou, Jiangsu 215163, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; The Second Affiliated Hospital of Suzhou University, Suzhou, Jiangsu 215000, China; The Lishui Central Hospital, Lishui, Zhejiang 323000, China; Jinan Guoke Medical Engineering Technology Development Co., Ltd., Jinan 250000, China)
Source: Computer Engineering (《计算机工程》), CAS / CSCD / Peking University Core journal, 2023, No. 1, pp. 223-233 (11 pages)
Funding: National Natural Science Foundation of China (81971685); National Key Research and Development Program of China (2018YFA0703101); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2021324); Key Research and Development Program of Jiangsu Province (BE2021053); Suzhou Science and Technology Program (SS202054)
Keywords: deep learning; unsupervised registration; multimodal registration; deformation fusion; structural information loss; spatial attention