
View-Invariant Action Recognition Based on Bilingual Bag of Dynamic Systems (Cited by: 2)
Abstract: View-invariant human action recognition is one of the hot and difficult topics in computer vision. The recognition rates of existing view-invariant algorithms vary considerably as the viewpoint changes, and recognition from the top view in particular remains unsatisfactory. A new framework based on a bilingual bag of dynamic systems is proposed for cross-view action recognition. First, spatio-temporal patches are extracted from video frames as low-level features by combining an interest point detector with dense sampling, and each patch is modeled as a linear dynamical system (LDS). Next, the LDSs are clustered after nonlinear dimensionality reduction to form a codebook, and, as the mid-level representation, each action sample is described by a bag of dynamical systems (BoDS) according to the distribution and weights of its LDSs over the codebook. Finally, the K-singular value decomposition (K-SVD) algorithm is applied simultaneously to the BoDS representations of two viewpoints to learn a transferable dictionary pair, and the orthogonal matching pursuit (OMP) algorithm is applied to this dictionary pair to obtain the sparse representation of each action under both views, so that the same action observed from the two views shares the same high-level representation. Experimental results on the IXMAS multi-view dataset demonstrate the stability and effectiveness of the proposed algorithm.
Source: Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition) (《南京邮电大学学报(自然科学版)》, PKU Core), 2014, No. 1, pp. 103-110 (8 pages)
Funding: National Natural Science Foundation of China (61172118, 61001152); Natural Science Foundation of Jiangsu Province (BK2010523); Natural Science Research Project of Jiangsu Higher Education Institutions (11KJB510012); Scientific Research Foundation of Nanjing University of Posts and Telecommunications (NY210073)
Keywords: view-invariant action recognition; transfer learning; bilingual bag of dynamic systems
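
The abstract above outlines a pipeline of BoDS features, K-SVD dictionary-pair learning and OMP sparse coding. Below is a minimal sketch of that pipeline, not the authors' implementation: the arrays bods_view1, bods_view2 and labels are hypothetical toy data standing in for precomputed BoDS histograms, scikit-learn's DictionaryLearning is used as a stand-in for K-SVD, and SparseCoder in OMP mode stands in for the OMP step. The coupled ("bilingual") dictionary pair is approximated by learning one dictionary over feature-wise stacked view pairs and splitting it per view, in the spirit of transferable dictionary-pair learning.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)

# Toy stand-ins for the mid-level BoDS representation: 60 paired action
# samples, each a 100-bin bag-of-dynamic-systems histogram per view.
n_samples, n_bins = 60, 100
bods_view1 = rng.random((n_samples, n_bins))   # e.g. a side view
bods_view2 = rng.random((n_samples, n_bins))   # e.g. the top view
labels = rng.integers(0, 5, size=n_samples)    # 5 hypothetical action classes

# Coupled ("bilingual") dictionary: stack the paired histograms feature-wise
# so each learned atom has a view-1 half and a view-2 half, which encourages
# corresponding samples in the two views to share one sparse code.
stacked = np.hstack([bods_view1, bods_view2])              # (n_samples, 2*n_bins)
dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, max_iter=50,
                          random_state=0).fit(stacked)
D1 = dico.components_[:, :n_bins]                          # view-1 dictionary
D2 = dico.components_[:, n_bins:]                          # view-2 dictionary
# Re-normalize each half so SparseCoder's unit-norm assumption holds.
D1 = D1 / (np.linalg.norm(D1, axis=1, keepdims=True) + 1e-12)
D2 = D2 / (np.linalg.norm(D2, axis=1, keepdims=True) + 1e-12)

# Sparse-code each view against its own half of the dictionary pair with OMP.
coder1 = SparseCoder(dictionary=D1, transform_algorithm="omp",
                     transform_n_nonzero_coefs=5)
coder2 = SparseCoder(dictionary=D2, transform_algorithm="omp",
                     transform_n_nonzero_coefs=5)
codes_train = coder1.transform(bods_view1)   # codes of the "seen" view
codes_test = coder2.transform(bods_view2)    # codes of the "unseen" view

# Nearest-neighbour matching in the shared sparse-code space (illustrative
# only; on random toy data the accuracy is meaningless).
dists = ((codes_test[:, None, :] - codes_train[None, :, :]) ** 2).sum(axis=-1)
nearest = dists.argmin(axis=1)
print("toy cross-view accuracy:", float((labels[nearest] == labels).mean()))
```

With real BoDS histograms in place of the toy data, the shared sparse codes would play the role of the high-level, view-invariant representation described in the abstract.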
