
CT and PET medical image fusion based on LL-GG-LG Net
Abstract: Multimodal medical image fusion plays a crucial role in clinical applications. Most existing methods focus on local feature extraction, explore global dependencies insufficiently, and ignore the interaction between global and local information, making it difficult to handle the pattern complexity and intensity similarity between surrounding tissue (background) and lesion areas (foreground). To address these issues, this paper proposes an LL-GG-LG Net model for PET and CT medical image fusion. First, a Local-Local fusion module (LL Module) is proposed, which uses a two-level attention mechanism to better attend to local detail features. Second, a Global-Global fusion module (GG Module) is designed, which introduces local information into the global representation by adding a residual connection mechanism to the Swin Transformer, improving the Transformer's attention to local information. Then, a Local-Global fusion module (LG Module) is proposed, based on an adaptive dense fusion network built with differentiable neural architecture search, which fully captures global relationships while retaining local cues, effectively resolving the high similarity between background and lesion areas. The model's effectiveness is validated on a clinical multimodal lung medical image dataset. Experimental results show that, compared with the best of seven other methods, the proposed method improves the perceptual image fusion quality metrics average gradient (AG), edge intensity (EI), QAB/F, spatial frequency (SF), standard deviation (SD), and information entropy (IE) by 21.5%, 11%, 4%, 13%, 9%, and 3% on average, respectively. The model highlights lesion-area information, and the fused images have clear structure and rich texture detail.
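The abstract does not give implementation details for the LL Module's two-level attention, so the following is only a minimal NumPy sketch of one common reading of "two-level attention": a channel-level reweighting followed by a spatial-level reweighting of a local feature map. All function names and the pooling/sigmoid gating choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Pool over spatial dims to get one gate per channel.
    w = sigmoid(feat.mean(axis=(1, 2)))        # shape (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # Pool over channels to get one gate per spatial position.
    m = sigmoid(feat.mean(axis=0))             # shape (H, W)
    return feat * m[None, :, :]

def two_level_attention(feat):
    # First level emphasizes informative channels,
    # second level emphasizes informative spatial locations.
    return spatial_attention(channel_attention(feat))

# Example: an 8-channel 16x16 local feature map (e.g. from a PET branch).
pet_feat = np.random.rand(8, 16, 16)
refined = two_level_attention(pet_feat)
```

Since both gates lie in (0, 1), the refined map is an element-wise attenuation of the input that preserves its shape; a learned version would replace the parameter-free pooling with small trainable layers.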
Authors: ZHOU Tao; ZHANG Xiangxiang; LU Huiling; LI Qi; CHENG Qianru (School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China; Key Laboratory of Image and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China; School of Medical Information Engineering, Ningxia Medical University, Yinchuan 750004, China)
Source: Optics and Precision Engineering (《光学精密工程》), indexed in EI, CAS, CSCD, Peking University Core, 2023, No. 20, pp. 3050-3064 (15 pages)
Funding: National Natural Science Foundation of China (No. 62062003); Ningxia Natural Science Foundation (No. 2022AAC03149); Ningxia Key R&D Program, Talent Introduction Special Project (No. 2020BEB04022); North Minzu University 2022 Graduate Innovation Project (No. YCX22190).
Keywords: medical image fusion; deep learning; attention mechanism; differentiable architecture search; dense network