Abstract
In complex environments, occlusion, target pose changes, and illumination changes often cause existing tracking algorithms to drift. This paper proposes an online visual target tracking algorithm based on the multiple instance learning (MIL) framework. The original MIL tracker cannot describe target appearance changes accurately because it relies on a single Haar-like feature; it also assigns the same weight to every positive and negative instance during learning, ignoring the fact that different instances contribute differently to their bags. The proposed algorithm therefore represents the target appearance with multiple features and constructs the corresponding classifiers; by integrating the complementary properties of these features into the online MIL process, it builds a more accurate target appearance model and overcomes the MIL tracker's insufficient description of appearance changes. In addition, instance weights are assigned according to the importance of each positive and negative instance to its bag, which improves tracking precision. Experimental results show that the proposed tracker is robust to severe illumination changes, occlusion, scale changes, and in-plane rotation. Across 5 test video sequences, its average center position error is only 10.14 pixels, far smaller than those of the compared incremental learning for visual tracking (IVT), MIL, and online AdaBoost (OAB) trackers, which are 17.99, 20.29, and 33.64 pixels, respectively.
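The two ideas in the abstract, fusing complementary feature scores per instance and weighting instances by their importance to the bag, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's exact formulation: it uses a weighted average for the bag probability and hand-picked weights that decay with distance from the estimated target center; the function names are invented for this example.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def fused_instance_prob(feature_scores):
    # Multi-feature fusion (illustrative): sum the per-feature
    # weak-classifier scores (e.g. Haar-like plus a complementary
    # feature) before squashing to a probability.
    return sigmoid(sum(feature_scores))

def weighted_bag_prob(instance_probs, weights):
    # Weighted bag probability: instances that matter more to the bag
    # (e.g. samples nearer the estimated target center) receive larger
    # weights, instead of the uniform weights of the original MIL tracker.
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, instance_probs)) / total

# One positive bag: three sampled instances, each with two feature scores.
instances = [[1.2, 0.8], [0.5, 0.3], [-0.4, 0.1]]
# Hypothetical weights decaying with distance from the tracker estimate.
weights = [1.0, 0.6, 0.3]

probs = [fused_instance_prob(fs) for fs in instances]
p_bag = weighted_bag_prob(probs, weights)
```

Because the instance nearest the target center carries the largest weight, the bag probability here exceeds the uniform average, which is the behavior the weighting scheme is meant to produce.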
Source
《北京航空航天大学学报》
EI
CAS
CSCD
Peking University Core Journal (北大核心)
2016, No. 10, pp. 2146-2154 (9 pages)
Journal of Beijing University of Aeronautics and Astronautics
Funding
Aeronautical Science Foundation of China (2012ZC53043)
Specialized Research Fund for the Doctoral Program of Higher Education (20096102110027)
Aerospace Science and Technology Innovation Fund (CASC201104)
Keywords
multiple instance learning
joint multiple feature representation
weight distribution
target tracking
classifier