Abstract
Existing spatio-temporal context based target tracking algorithms exploit the spatio-temporal relationship between the target and the background and thus handle static occlusion to some extent. However, when the target is largely occluded, or a fast-moving target is occluded by objects in the background (dynamic occlusion), tracking still tends to be inaccurate or fail. Therefore, a target tracking algorithm based on occlusion detection and spatio-temporal context information is proposed. Firstly, compressed illumination-invariant color features extracted from the first frame are used to construct and initialize the spatio-temporal context model. Then, occlusion in the incoming video frames is detected via the bidirectional trajectory error. If the bidirectional matching error of the key points in the target region between consecutive frames is below a preset threshold, neither dynamic occlusion nor severe static occlusion is present, and accurate tracking can be carried out with the spatio-temporal context model. Otherwise, the proposed combined classifier performs target detection on the subsequent frames until the target is re-detected, while the context model and the classifier are updated online. Experimental results on several video sequences show that the proposed method handles severe static occlusion and dynamic occlusion in complex scenes well.
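The bidirectional trajectory error mentioned in the abstract is essentially a forward-backward consistency check on tracked key points. Below is a minimal sketch of such a check, assuming OpenCV pyramidal Lucas-Kanade optical flow for the point tracking; the function name occlusion_suspected and the threshold fb_thresh are illustrative assumptions, since the abstract does not specify the authors' actual matching method or threshold.

# Hedged sketch of a forward-backward (bidirectional) trajectory error check.
# The key points are assumed to come from, e.g., cv2.goodFeaturesToTrack on the
# target region; names and the threshold value are assumptions for illustration.
import cv2
import numpy as np

def occlusion_suspected(prev_gray, curr_gray, points, fb_thresh=2.0):
    """Return True if the median forward-backward error of the key points
    exceeds fb_thresh, suggesting severe static or dynamic occlusion."""
    # Forward pass: track points from the previous frame into the current one.
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, points, None)
    # Backward pass: track the forward results back to the previous frame.
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None)
    valid = (st_f.ravel() == 1) & (st_b.ravel() == 1)
    if not valid.any():
        return True  # no point survived both passes: treat as occluded
    # Forward-backward error: distance between original and round-trip positions.
    fb_err = np.linalg.norm(points[valid] - bwd[valid], axis=-1).ravel()
    return np.median(fb_err) > fb_thresh

When this check reports no occlusion, tracking proceeds with the spatio-temporal context model; when it does, a re-detection stage (the combined classifier in the paper) takes over.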
Source
Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》)
Indexed in: EI, CSCD, Peking University Core Journals
2017, No. 8, pp. 718-727 (10 pages)
Funding
Supported by the National Natural Science Foundation of China (No. 61663031, 61462065) and the Special Fund for Postgraduate Innovation of Jiangxi Province (No. YC2015-S337).
Keywords
Target Tracking, Context Information, Occlusion Detection, Combined Classifier