
LGNet: Local and global representation learning for fast biomedical image segmentation (Cited by: 1)

Abstract: Medical image segmentation plays a crucial role in clinical diagnosis and therapy systems, yet it still faces many challenges. Built on convolutional neural networks (CNNs), medical image segmentation has achieved tremendous progress. However, owing to the locality of convolution operations, CNNs have an inherent limitation in learning global context. To address this limitation of CNNs in building global context relationships, in this paper we propose LGNet, a semantic segmentation network that learns local and global features for fast and accurate medical image segmentation. Specifically, we employ a two-branch architecture consisting of convolution layers in one branch to learn local features and transformer layers in the other branch to learn global features. LGNet has two key insights: (1) we bridge the two branches to learn local and global features in an interactive way; (2) we present a novel multi-feature fusion model (MSFFM) to leverage the global contextual information from the transformer and the local representational features from the convolutions. Our method achieves a state-of-the-art trade-off between accuracy and efficiency on several medical image segmentation benchmarks, including Synapse, ACDC and MOST. Specifically, LGNet achieves state-of-the-art performance with Dice indexes of 80.15% on Synapse, 91.70% on ACDC, and 95.56% on MOST. Meanwhile, the inference speed reaches 172 frames per second at 224×224 input resolution. Extensive experiments demonstrate the effectiveness of the proposed LGNet for fast and accurate medical image segmentation.
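To make the described design concrete, below is a minimal PyTorch-style sketch of a two-branch segmentation network in the spirit of the abstract: a convolutional branch for local features, a transformer branch for global context, and a simple fusion module before the segmentation head. All class names, channel sizes, patch size, and the concatenation-based fusion are illustrative assumptions, not the authors' LGNet implementation or its MSFFM.

```python
# Illustrative two-branch segmentation sketch (assumed design, not the paper's code).
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """Local-feature branch: a small stack of convolutions at 1/4 resolution."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=4, padding=1),   # downsample to 1/4
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)                                  # (B, ch, H/4, W/4)


class TransformerBranch(nn.Module):
    """Global-feature branch: patch embedding + transformer encoder."""
    def __init__(self, in_ch=3, dim=64, patch=4, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens = self.embed(x)                                # (B, dim, H/4, W/4)
        b, c, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)               # (B, HW, dim)
        seq = self.encoder(seq)                               # global self-attention
        return seq.transpose(1, 2).reshape(b, c, h, w)


class FusionHead(nn.Module):
    """Concatenation-based fusion + segmentation head (a stand-in for the
    paper's fusion module; the actual MSFFM design is not reproduced here)."""
    def __init__(self, ch=64, num_classes=9):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(ch, num_classes, 1)

    def forward(self, local_feat, global_feat, out_size):
        x = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        x = self.head(x)
        return nn.functional.interpolate(x, size=out_size,
                                         mode="bilinear", align_corners=False)


class TwoBranchSegNet(nn.Module):
    def __init__(self, num_classes=9):
        super().__init__()
        self.local_branch = ConvBranch()
        self.global_branch = TransformerBranch()
        self.fusion = FusionHead(num_classes=num_classes)

    def forward(self, x):
        return self.fusion(self.local_branch(x), self.global_branch(x),
                           out_size=x.shape[-2:])


if __name__ == "__main__":
    net = TwoBranchSegNet(num_classes=9)
    logits = net(torch.randn(1, 3, 224, 224))   # 224x224 input, as in the abstract
    print(logits.shape)                          # torch.Size([1, 9, 224, 224])
```

In this sketch the two branches run in parallel and are merged only once at the end; the abstract instead describes interactive bridging between the branches, which would add cross-branch connections at intermediate stages.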
Source: Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2023, Issue 4, pp. 29-39 (11 pages).
Funding: Supported by the Open Fund of WNLO (Grant No. 2018WNLOKF027) and the Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology (Grant No. HBIRL 202003).