Abstract
This paper presents a new local spatial-temporal feature for recognizing and classifying video sequences. Spatial-temporal interest points are detected by combining SURF and optical flow, and the corresponding descriptors are used to represent these interest points. Video data are represented with the bag-of-words model, and an SVM is trained to classify videos containing different human actions. To verify the effectiveness of the proposed feature, it is tested on the UCF YouTube dataset. Experimental results show that the proposed method can effectively recognize human actions under different scenes.
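The pipeline outlined in the abstract (SURF keypoints filtered by optical flow, bag-of-words encoding, SVM classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes OpenCV's SURF (opencv-contrib, non-free module) and Farnebäck optical flow, and the flow threshold and vocabulary size below are illustrative assumptions rather than values from the paper.

```python
# Sketch: SURF keypoints are kept only where optical flow indicates motion,
# their descriptors are quantized into a bag-of-words histogram per video,
# and an SVM classifies the resulting video vectors.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

FLOW_THRESHOLD = 1.0   # assumed minimum flow magnitude for a "moving" keypoint
VOCAB_SIZE = 500       # assumed visual-vocabulary size

def video_descriptors(path):
    """Collect SURF descriptors at spatio-temporal interest points of one video."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    descs = []
    if not ok:
        return np.empty((0, 64), np.float32)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)          # per-pixel motion magnitude
        kps, des = surf.detectAndCompute(gray, None)
        if des is not None:
            for kp, d in zip(kps, des):
                x, y = int(kp.pt[0]), int(kp.pt[1])
                if mag[y, x] > FLOW_THRESHOLD:      # keep only moving keypoints
                    descs.append(d)
        prev_gray = gray
    cap.release()
    return np.array(descs, np.float32)

def bow_histogram(descs, kmeans):
    """Quantize descriptors against the learned vocabulary into a normalized histogram."""
    hist = np.zeros(VOCAB_SIZE, np.float32)
    if len(descs):
        for word in kmeans.predict(descs):
            hist[word] += 1
        hist /= hist.sum()
    return hist

def train(video_paths, labels):
    """Build the visual vocabulary, encode every video, and fit a linear SVM."""
    all_descs = [video_descriptors(p) for p in video_paths]
    kmeans = KMeans(n_clusters=VOCAB_SIZE, n_init=4).fit(np.vstack(all_descs))
    X = np.array([bow_histogram(d, kmeans) for d in all_descs])
    clf = SVC(kernel='linear').fit(X, labels)
    return kmeans, clf
```

In this sketch the motion filter plays the role of the temporal component: a SURF keypoint only becomes a spatio-temporal interest point if the optical flow at its location exceeds the (assumed) threshold.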
Source
Application of Electronic Technique (《电子技术应用》)
Peking University Core Journal (北大核心)
2012, No. 7, pp. 123-125 (3 pages)
Funding
Scientific Research Funds for Central Universities (2010HGZX0019)
Keywords
action recognition
optical flow
bag-of-words
spatial-temporal feature
interest point