

Retinal Image Registration Using Convolutional Neural Network
Abstract  The feature points extracted by traditional fundus image registration methods are too densely distributed, which leads to inaccurate alignment of the images to be registered. In contrast, retinal vessel bifurcation points are sparsely distributed and stable, and can therefore improve both the accuracy and the speed of registration. This paper proposes a deep-learning-based fundus image registration framework built around vessel segmentation and bifurcation feature point extraction. The framework consists of two deep convolutional neural networks: the first is the retinal vessel segmentation network SR-UNet, which adds channel attention (SE) and residual blocks to U-Net and segments the vessels to assist feature point extraction; the second is the feature point detection network FD-Net, which extracts bifurcation feature points from the vessel segmentation map. The proposed registration model is evaluated on the public fundus registration dataset FIRE, where it achieves a correct feature point matching rate of 90.03%. Compared with more advanced retinal image registration algorithms, the proposed method performs better in both quantitative and visual evaluation and shows strong robustness.
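The abstract describes SR-UNet as a U-Net whose convolution blocks are augmented with squeeze-and-excitation (SE) channel attention and residual connections. The sketch below is only a hedged illustration of what one such SE-residual building block could look like; the choice of PyTorch, the class names SEBlock and SEResBlock, and all layer sizes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's released code): one SE-residual
# block of the kind the abstract attributes to SR-UNet, i.e. a U-Net
# convolution block with a residual shortcut and SE channel attention.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # channel-wise reweighting

class SEResBlock(nn.Module):
    """Two conv-BN layers with a residual shortcut followed by SE attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.se = SEBlock(out_ch)
        # 1x1 projection so the shortcut matches the output channel count
        self.skip = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                     if in_ch != out_ch else nn.Identity())
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.se(self.body(x)) + self.skip(x))

if __name__ == "__main__":
    block = SEResBlock(3, 64)
    y = block(torch.randn(1, 3, 128, 128))   # e.g. an RGB fundus patch
    print(y.shape)                           # torch.Size([1, 64, 128, 128])
```

In such a design the SE branch reweights feature channels with global context while the residual shortcut eases optimization of the deeper encoder, which is the stated motivation for combining the two on top of U-Net; FD-Net would then take the resulting binary vessel map as input to localize bifurcation feature points.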
Authors  WU Lingyu (吴玲玉); LAN Yang (蓝洋); XIA Haiying (夏海英) (College of Electronics Engineering, Guangxi Normal University, Guilin, Guangxi 541004, China)
Source  Journal of Guangxi Normal University (Natural Science Edition) 《广西师范大学学报(自然科学版)》, 2021, No. 5, pp. 122-133 (12 pages). Indexed in CAS and the Peking University Core Journal list (北大核心).
Funding  National Natural Science Foundation of China (61762014); Innovation Project of Guangxi Graduate Education (YCSW2020101).
Keywords  retinal image registration; deep learning; vessel segmentation network; feature point detection network; U-Net

