
Target Initialization Based on Random Ferns during Electro-Optical Imaging Terminal Guidance (cited by 3)
Abstract: To achieve adaptive target initialization during electro-optical imaging terminal guidance, and to address the scale, rotation, grayscale and 3D-viewpoint differences between images in terminal-guidance scene matching as well as the heavy computational load of traditional methods, a new scene-matching algorithm is constructed on the basis of a random-ferns classifier. The classifier is first trained offline on the reference image, which yields fast run-time performance, and candidate feature matches between the reference image and the run-time image are then found with the trained classifier. To reject false matches, scale-invariant feature transform (SIFT) descriptors are computed for the corresponding regions of each candidate match pair, and false pairs are removed under a Mahalanobis-distance criterion. The epipolar geometry of the two images is then estimated by applying the progressive sample consensus (PROSAC) algorithm to the remaining matches, and the target's location and size in the run-time image are finally computed from this epipolar geometry. Simulation results show that the proposed method provides robust target initialization during electro-optical imaging terminal guidance and is more stable than the original random-ferns method under severe conditions.
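The processing chain summarized above (classifier-based candidate matching, SIFT-based false-match rejection, PROSAC estimation of the epipolar geometry) can be illustrated with a short OpenCV sketch. This is a hedged approximation, not the authors' implementation: the random-ferns candidate matching and the Mahalanobis-distance test are stood in for by brute-force SIFT matching with Lowe's ratio test, and the function name verify_and_estimate and its parameters are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's code) of the verification stage: SIFT
# matching between the reference and run-time images, false-match rejection,
# and PROSAC/RANSAC estimation of the epipolar geometry with OpenCV.
import cv2
import numpy as np

def verify_and_estimate(ref_gray, run_gray):
    """Return the fundamental matrix and the inlier point pairs.

    Both inputs are assumed to be 8-bit grayscale images, e.g. loaded with
    cv2.imread(path, cv2.IMREAD_GRAYSCALE).
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(run_gray, None)

    # Brute-force matching plus Lowe's ratio test stands in here for the
    # random-ferns candidate matching and the Mahalanobis-distance
    # rejection described in the abstract.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 8:
        return None, None, None  # too few matches for a fundamental matrix

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # PROSAC-style sampling where the OpenCV build exposes the USAC flags
    # (>= 4.5); otherwise fall back to plain RANSAC.
    method = getattr(cv2, "USAC_PROSAC", cv2.FM_RANSAC)
    F, mask = cv2.findFundamentalMat(pts1, pts2, method, 3.0, 0.99)
    if F is None or mask is None:
        return None, None, None
    inliers = mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]
```

On OpenCV builds without the USAC flags the sketch silently falls back to classical RANSAC, which changes the sampling order but not the form of the estimated epipolar geometry.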
Source: Acta Optica Sinica (《光学学报》), 2010, No. 11, pp. 3164-3170 (7 pages). Indexed by: EI, CAS, CSCD, PKU Core Journals.
Keywords: pattern recognition; scene matching; random ferns classifier; scale-invariant feature transform (SIFT) descriptor; false-match rejection; epipolar geometry

References (19)

  • 1. T. Tuytelaars, L. V. Gool. Matching widely separated views based on affine invariant regions [J]. International Journal of Computer Vision, 2004, 59(1): 61-85.
  • 2. C. Schmid, R. Mohr. Local grayvalue invariants for image retrieval [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(5): 530-535.
  • 3. K. Mikolajczyk, C. Schmid. Scale & affine invariant interest point detectors [J]. International Journal of Computer Vision, 2004, 60(1): 63-86.
  • 4. D. G. Lowe. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
  • 5. Zhang Ruijuan, Zhang Jianqi, Yang Cui, Zhang Xiang. Color image registration based on CSIFT [J]. Acta Optica Sinica, 2008, 28(11): 2097-2103. (cited by 26)
  • 6. Tian Ying, Yuan Weiqi. Human ear recognition by fusing scale-invariant features and geometric features [J]. Acta Optica Sinica, 2008, 28(8): 1485-1491. (cited by 14)
  • 7. H. Bay, T. Tuytelaars, L. V. Gool. SURF: Speeded up robust features [C]. European Conference on Computer Vision, 2006, 1: 404-417.
  • 8. J. Matas, O. Chum, M. Urban et al. Robust wide baseline stereo from maximally stable extremal regions [C]. Proceedings of the British Machine Vision Conference, 2002, 384-393.
  • 9. P. E. Forssen, D. G. Lowe. Shape descriptors for maximally stable extremal regions [C]. International Conference on Computer Vision (ICCV), Brazil, 2007, 59-73.
  • 10. V. Lepetit, P. Fua. Keypoint recognition using randomized trees [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(9): 1465-1479.

Secondary references (48)

Co-cited literature (88)

Literature also cited by the citing articles (16)

  • 1. Lowe D G. Object recognition from local scale-invariant features. Proceedings of the Seventh IEEE International Conference on Computer Vision, Corfu, Greece. IEEE, 1999, 2: 1150-1157.
  • 2. Lowe D G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2): 91-110.
  • 3. Bay H, Tuytelaars T, Van Gool L. SURF: Speeded up robust features. Computer Vision - ECCV 2006. Springer, 2006: 404-417.
  • 4. Bay H, Ess A, Tuytelaars T, et al. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 2008, 110(3): 346-359.
  • 5. Calonder M, Lepetit V, Strecha C, et al. BRIEF: Binary robust independent elementary features. Computer Vision - ECCV 2010. Springer, 2010: 778-792.
  • 6. Calonder M, Lepetit V, Ozuysal M, et al. BRIEF: Computing a local binary descriptor very fast. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(7): 1281-1298.
  • 7. Rublee E, Rabaud V, Konolige K, et al. ORB: An efficient alternative to SIFT or SURF. 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011: 2564-2571.
  • 8. Rosten E, Drummond T. Fusing points and lines for high performance tracking. Tenth IEEE International Conference on Computer Vision (ICCV 2005). IEEE, 2005, 2: 1508-1515.
  • 9. Lepetit V, Fua P. Towards recognizing feature points using classification trees. Swiss Federal Institute of Technology, Lausanne, Switzerland, 2004.
  • 10. Lepetit V, Fua P. Keypoint recognition using randomized trees. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(9): 1465-1479.

Citing articles (3)

Secondary citing articles (7)
