Abstract
The feature points extracted by traditional fundus image registration methods are distributed too densely, which prevents the images to be registered from being aligned accurately. Retinal vessel bifurcation points, by contrast, are sparsely distributed and stable, and can therefore improve both the accuracy and the speed of registration. This paper proposes a deep-learning-based fundus image registration framework built around vessel segmentation and bifurcation feature point extraction. The framework consists of two deep convolutional neural networks: the first is the fundus vessel segmentation network SR-UNet, which adds channel attention (SE) and residual blocks to a U-Net backbone and segments the retinal vessels to assist feature point extraction; the second is the feature point detection network FD-Net, which extracts bifurcation feature points from the vessel segmentation map. The proposed registration model is evaluated on the public fundus registration dataset FIRE, where the correct matching rate of the feature points reaches 90.03%. Compared with state-of-the-art retinal image registration algorithms, the proposed method achieves better performance in both quantitative and visual analysis and shows strong robustness.
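The abstract does not give SR-UNet's exact layer configuration, so the following is a minimal PyTorch sketch of how a residual convolution block with SE channel attention might be assembled as a U-Net building block. The class names SEBlock and SEResBlock, the reduction ratio, the channel widths, and the input size are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of one SE-residual block, following the abstract's
# description of SR-UNet (U-Net backbone + channel attention (SE) + residual
# blocks). Names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excitation: two FC layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # reweight channels

class SEResBlock(nn.Module):
    """Residual double-conv block with SE attention (plausible SR-UNet unit)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.se = SEBlock(out_ch)
        # 1x1 conv on the identity path when channel counts differ
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.se(self.conv(x)) + self.skip(x))

if __name__ == "__main__":
    # Example: a single-channel (e.g. green-channel) fundus patch through one block
    block = SEResBlock(1, 64)
    y = block(torch.randn(1, 1, 512, 512))
    print(y.shape)  # torch.Size([1, 64, 512, 512])
```

In a full SR-UNet, blocks of this kind would presumably replace the plain double-convolution units at each encoder and decoder level of U-Net, with the usual skip connections between corresponding levels.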
Authors
WU Lingyu (吴玲玉), LAN Yang (蓝洋), XIA Haiying (夏海英) (College of Electronics Engineering, Guangxi Normal University, Guilin, Guangxi 541004, China)
Source
Journal of Guangxi Normal University: Natural Science Edition (《广西师范大学学报(自然科学版)》), 2021, No. 5, pp. 122-133 (12 pages)
Indexed in CAS and the Peking University Core Journals list (北大核心)
Funding
National Natural Science Foundation of China (61762014)
Innovation Project of Guangxi Graduate Education (YCSW2020101)
Keywords
retinal image registration
deep learning
vessel segmentation network
feature point detection network
U-Net