Abstract
A human action recognition algorithm based on improved spatio-temporal interest point detection is proposed. To address the spatio-temporal characteristics of complex environments, background-point suppression and spatio-temporal interest-point constraints are added to the conventional interest-point detection algorithm, so as to reduce the interference of useless interest points with the information carried by effective ones. To this end, the Harris-Laplace algorithm is first improved to overcome the multi-scale problem and the excess of redundant points encountered during interest-point detection, and the filtered effective interest points are extracted as the motion-coordinate information of the target. Then, following the bag-of-words model, the HOG descriptor is used to extract features at the interest points and build a visual dictionary, and the AIB algorithm merges visual words with similar meanings to form the basic vocabulary of the word table. Finally, an SVM classifies human actions, realizing human action recognition in complex environments. To verify the effectiveness of the new algorithm, experiments are conducted on existing public human-action benchmark datasets as well as several complex scenes. The results show that suppressing useless interest points effectively reduces the computational complexity per frame, shortens feature-extraction time, and improves recognition accuracy.
In this paper, we present a human action recognition algorithm based on interest points under spatial and temporal constraints. To overcome the interference with useful information caused by complex background scenes, we propose an improved Spatio-Temporal Interest Point (STIP) detection approach that combines surround suppression with local and temporal constraints. First, an improved Harris-Laplace algorithm is proposed to solve the multi-scale problem. Then, based on the bag-of-words model, the HOG descriptor is used to extract feature vectors, and the Agglomerative Information Bottleneck (AIB) algorithm merges the visual vocabulary. A Support Vector Machine (SVM) is trained for action classification and prediction. To validate the effectiveness of the proposed method, experiments were carried out on existing public benchmark datasets of human actions and on other, more complex scenes. Experimental results demonstrate that the proposed human action recognition algorithm is both effective and efficient in a wide variety of complex scenes.
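For illustration only, the sketch below traces the general bag-of-words pipeline the abstract describes, assuming OpenCV and scikit-learn: frame-wise Harris interest points gated by a simple frame-difference mask (a crude stand-in for the paper's improved Harris-Laplace STIP detector with surround suppression and spatio-temporal constraints), HOG patch descriptors, a k-means visual dictionary (standing in for the AIB word merging), and an SVM classifier. The function names (`video_descriptors`, `bow_histogram`, `train`) and all parameter values are hypothetical choices, not the authors' implementation.

```python
# Illustrative sketch, not the authors' code: bag-of-words action recognition
# with interest points, HOG descriptors, a visual dictionary, and an SVM.
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# HOG over 32x32 patches centred on each surviving interest point
# (winSize, blockSize, blockStride, cellSize, nbins).
hog = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)

def video_descriptors(frames, diff_thresh=25, max_corners=100):
    """HOG descriptors at Harris corners that also show temporal change."""
    descs, prev = [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Frame differencing as a crude background/static-point suppressor.
            motion = cv2.absdiff(gray, prev) > diff_thresh
            corners = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 10,
                                              useHarrisDetector=True)
            if corners is not None:
                for x, y in corners.reshape(-1, 2).astype(int):
                    if not motion[y, x]:      # drop points with no temporal change
                        continue
                    patch = gray[max(y - 16, 0):y + 16, max(x - 16, 0):x + 16]
                    if patch.shape == (32, 32):
                        descs.append(hog.compute(patch).ravel())
        prev = gray
    if not descs:
        return np.empty((0, hog.getDescriptorSize()), dtype=np.float32)
    return np.array(descs)

def bow_histogram(descs, kmeans):
    """Quantise descriptors against the visual dictionary, L1-normalised."""
    hist = np.bincount(kmeans.predict(descs), minlength=kmeans.n_clusters)
    return hist / max(hist.sum(), 1)

def train(videos, labels, n_words=100):
    """videos: list of frame lists (BGR images); labels: action class per video."""
    per_video = [video_descriptors(v) for v in videos]
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(per_video))
    X = np.array([bow_histogram(d, kmeans) for d in per_video])
    clf = SVC(kernel="rbf").fit(X, labels)
    return kmeans, clf
```

At prediction time, an unseen clip would be passed through `video_descriptors` and `bow_histogram` with the trained `kmeans`, and the resulting histogram fed to `clf.predict`.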
Authors
Ding Songtao, Qu Shiru (School of Automation, Northwestern Polytechnical University, Xi'an 710072, China)
Source
Journal of Northwestern Polytechnical University (《西北工业大学学报》)
Indexed in EI, CAS, CSCD, and the Peking University Core Journals list
2016, No. 5, pp. 886-892 (7 pages)
Funding
Specialized Research Fund for the Doctoral Program of Higher Education, Ministry of Education (20096102110027)
Aerospace Science and Technology Innovation Fund (CASC201104)
Aeronautical Science Foundation of China (2012ZC53043)