Abstract
To make full use of the feature information in images, an improved convolutional neural network (CNN) method is proposed for image matching. In the network training stage, the structure is modified to use multiple branches and convolution kernels of different sizes, enabling the extraction and fusion of multi-scale image information and making the computed image-patch similarity more reliable. In the disparity computation stage, the trained network model is used to measure the matching degree of image pairs, and this similarity initializes the matching cost. A coarse disparity map is then obtained through cross-based cost aggregation and optimization strategies, and the disparity map is finally refined. Experimental results show that the proposed method obtains more accurate disparities on the Middlebury datasets.
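The abstract describes the network only at a high level. Below is a minimal sketch of the multi-branch, multi-kernel-size idea, assuming a PyTorch implementation; the branch count, channel widths, kernel sizes, patch size, and cosine-similarity head are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: parallel branches with different kernel sizes extract
# multi-scale patch features, which are fused and compared to yield a
# similarity used to initialize the stereo matching cost.
# (Layer sizes and the similarity head are illustrative assumptions.)
import torch
import torch.nn as nn

class MultiScaleBranchNet(nn.Module):
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        # Two parallel branches with different receptive fields (3x3 vs 5x5).
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 5, padding=2), nn.ReLU(inplace=True),
        )
        # Fuse the concatenated multi-scale features into one descriptor.
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, 1)

    def forward(self, patch):
        f = torch.cat([self.branch3(patch), self.branch5(patch)], dim=1)
        return self.fuse(f)

def patch_similarity(net, left_patch, right_patch):
    """Cosine similarity of fused descriptors for a left/right patch pair."""
    fl = net(left_patch).flatten(1)
    fr = net(right_patch).flatten(1)
    return nn.functional.cosine_similarity(fl, fr, dim=1)

if __name__ == "__main__":
    net = MultiScaleBranchNet()
    left = torch.randn(4, 1, 11, 11)   # batch of 11x11 grayscale patches
    right = torch.randn(4, 1, 11, 11)
    sim = patch_similarity(net, left, right)
    cost = 1.0 - sim                   # higher similarity -> lower matching cost
    print(cost.shape)                  # torch.Size([4])
```

In a full pipeline of the kind the abstract outlines, such a cost would be computed per pixel and candidate disparity to build a cost volume, which is then aggregated (e.g., cross-based aggregation) and refined to produce the final disparity map.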
Authors
XI Lu; LU Ji-xiang; TU Ting (College of Science, Wuhan University of Technology, Wuhan 430070, China)
Source
Computer Engineering and Design (《计算机工程与设计》)
Peking University Core Journal (北大核心)
2018, Issue 9, pp. 2918-2922 (5 pages)
Funding
National Natural Science Foundation of China, General Program (61573012)