Abstract
Aiming at the large differences in viewing angle, resolution, and gray values between heterogeneous UAV images, a matching method for UAV thermal-infrared and visible-light images based on semantic deep local features is proposed. First, a fully convolutional neural network combined with an attention mechanism is used to extract deep local features carrying semantic information. Second, the multi-channel feature maps serve as descriptors for kd-tree matching. Finally, Vector Field Consensus (VFC) and Random Sample Consensus (RANSAC) are combined (VFC-RANSAC) to eliminate mismatches, yielding robust matching between UAV thermal-infrared and visible-light images. Matching experiments show that, compared with hand-crafted features such as SIFT and KAZE, the deep features tolerate larger geometric and radiometric differences between images; compared with RANSAC alone, VFC-RANSAC removes outliers more effectively and achieves a higher correct-match rate and higher matching accuracy.
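The matching stage described in the abstract (nearest-neighbour descriptor matching followed by consensus-based outlier rejection) can be illustrated with a minimal, self-contained sketch. This is not the authors' code: the brute-force nearest-neighbour search with a Lowe-style ratio test stands in for the kd-tree search, and a simplified RANSAC fitting a pure 2D translation stands in for the full VFC-RANSAC scheme; all function names and parameter values here are illustrative assumptions.

```python
import math
import random

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.

    Stand-in for the paper's kd-tree search over multi-channel
    deep descriptors; a kd-tree would be used at realistic scale.
    """
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        # accept only if the best match is clearly better than the second best
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

def ransac_translation(pts_a, pts_b, matches, thresh=2.0, iters=100, seed=0):
    """Toy RANSAC with a 2D-translation model (hypothetical simplification).

    The paper combines VFC with RANSAC over a full geometric model;
    here one correspondence hypothesises a translation and the
    largest inlier set wins.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        i, j = rng.choice(matches)                 # minimal sample: 1 match
        dx = pts_b[j][0] - pts_a[i][0]
        dy = pts_b[j][1] - pts_a[i][1]
        inliers = [(a, b) for a, b in matches
                   if math.hypot(pts_b[b][0] - pts_a[a][0] - dx,
                                 pts_b[b][1] - pts_a[a][1] - dy) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

On synthetic points related by a known translation plus one gross outlier, the outlier pair fails the residual test and is dropped while the consistent matches survive, which is the behaviour the abstract attributes to the consensus step.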
Authors
CUI Zhixiang; LAN Chaozhen; XIONG Xin; ZHANG Yongxian; HOU Huitai; LIU Chenbo (Information Engineering University, Zhengzhou 450001, China; 31682 Troops, Lanzhou 730020, China; 92880 Troops, Zhoushan 316000, China)
Source
Journal of Geomatics Science and Technology (《测绘科学技术学报》, PKU Core Journal)
2019, No. 6, pp. 609-613 (5 pages)
Funding
National Natural Science Foundation of China (Grant No. 41701463).