
Human Motion Recognition Based on Spatiotemporal Interest Points and Multivariate Generalized Gaussian Mixture Models
Abstract: Human action recognition has become a hot research topic in computer vision in recent years and is widely applied in human-computer interaction, virtual reality, and other fields. To address the problems of traditional human action recognition algorithms, namely excessive redundant points during feature extraction and neglect of the correlations in image data, this paper proposes a human action recognition method that fuses spatio-temporal interest points with a multivariate generalized Gaussian mixture model (MGGMM) combined with fixed-point estimation. By filtering out redundant feature points and exploiting the MGGMM, the method achieves effective feature-point extraction and makes full use of the correlations in the data. Feature points of the video sequence are extracted with an improved Harris-Laplace algorithm and the 3D-SIFT descriptor, visual words are clustered with a bag-of-words (BOW) model, and finally the improved multivariate generalized Gaussian mixture model performs modeling and classification. Experiments on the public KTH dataset show that the proposed method can effectively recognize and classify human actions in video.
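The pipeline summarized in the abstract (interest-point detection, descriptor extraction, BOW vocabulary, per-class mixture-model classification) can be sketched with off-the-shelf tools. The sketch below is illustrative only and assumes OpenCV and scikit-learn: per-frame Harris corners with standard 2D SIFT descriptors stand in for the paper's improved Harris-Laplace detector and 3D-SIFT, KMeans builds the visual vocabulary, and an ordinary Gaussian mixture per class stands in for the multivariate generalized Gaussian mixture with fixed-point estimation. Names such as VOCAB_SIZE, train_videos, and all parameter values are assumptions, not taken from the paper.

```python
# Illustrative sketch of a spatio-temporal-interest-point + BOW + mixture-model
# action recognizer, using 2D stand-ins for the paper's components.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

VOCAB_SIZE = 200  # size of the visual vocabulary (assumed value)

def frame_descriptors(video_path, max_frames=200):
    """Detect Harris corners per frame and describe them with standard SIFT."""
    sift = cv2.SIFT_create()
    cap = cv2.VideoCapture(video_path)
    descs = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01,
                                          minDistance=5, useHarrisDetector=True)
        if corners is None:
            continue
        kps = [cv2.KeyPoint(float(x), float(y), 7.0) for [[x, y]] in corners]
        _, d = sift.compute(gray, kps)
        if d is not None:
            descs.append(d)
    cap.release()
    return np.vstack(descs) if descs else np.empty((0, 128))

def bow_histogram(descs, vocab):
    """Quantize descriptors against the vocabulary into a normalized word histogram."""
    words = vocab.predict(descs.astype(np.float64))
    hist = np.bincount(words, minlength=VOCAB_SIZE).astype(float)
    return hist / max(hist.sum(), 1.0)

def train(train_videos):
    """train_videos: list of (video_path, action_label) pairs (placeholder structure)."""
    per_video = [frame_descriptors(path) for path, _ in train_videos]
    vocab = KMeans(n_clusters=VOCAB_SIZE, n_init=4).fit(np.vstack(per_video))
    hists = np.array([bow_histogram(d, vocab) for d in per_video])
    labels = np.array([label for _, label in train_videos])
    # One mixture model per action class, fitted on that class's BOW histograms.
    models = {c: GaussianMixture(n_components=2, covariance_type="diag", reg_covar=1e-3)
                  .fit(hists[labels == c])
              for c in np.unique(labels)}
    return vocab, models

def classify(video_path, vocab, models):
    """Pick the class whose mixture assigns the highest log-likelihood to the histogram."""
    h = bow_histogram(frame_descriptors(video_path), vocab).reshape(1, -1)
    return max(models, key=lambda c: models[c].score(h))
```

In the paper, the per-class models would be MGGMMs whose shape parameters are updated by fixed-point estimation; plain Gaussian mixtures are used above only because scikit-learn provides no generalized Gaussian mixture.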
Authors: HE Bingqian, WEI Wei, SONG Yanbei, GAO Lianxin, ZHANG Bin (College of Computer Science and Technology, Chengdu University of Information Technology, Chengdu 610225, China)
Source: Journal of Chengdu University of Information Technology, 2019, No. 4, pp. 358-364 (7 pages)
Funding: Key Scientific Research Project of the Sichuan Provincial Department of Education (17ZA0064)
Keywords: action recognition; spatio-temporal interest points; Harris-Laplace; 3D-SIFT; MGGMMs; feature extraction