Adaptive Local Spatiotemporal Feature Extraction Based on RGB-D Data (cited by 3)
Abstract  For one-shot learning gesture recognition, noise and global empirical motion constraints severely hinder the accurate and sufficient extraction of spatiotemporal features. To address this, an adaptive local spatiotemporal feature extraction approach that fuses color and depth (RGB-D) information is proposed. First, pyramids of two successive gray frames and two depth frames, together with the corresponding optical-flow pyramids, are built as the scale space. Then, motion regions of interest (MRoIs) are adaptively extracted according to the horizontal and vertical variances of the gray and depth optical flow. Next, corners are detected as interest points only within the MRoIs; an interest point is selected as a keypoint only if its gray and depth optical flow simultaneously satisfy local motion constraints, which are determined adaptively within each MRoI. Finally, SIFT-like descriptors are computed in improved gradient and motion spaces. Experimental results on the ChaLearn dataset show that the proposed approach achieves high recognition accuracy and outperforms previously published approaches.
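The MRoI step in the abstract — adaptively keeping only regions whose optical-flow variance indicates motion — can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the block size and the mean-variance threshold are assumptions made here for clarity, and the flow field is a toy array rather than a computed pyramid flow.

```python
import numpy as np

def motion_rois(flow_u, flow_v, block=8):
    """Split a flow field into blocks and keep blocks whose combined
    horizontal/vertical flow variance exceeds the mean variance over
    all blocks (an adaptive threshold, assumed for this sketch)."""
    h, w = flow_u.shape
    variances, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            bu = flow_u[y:y + block, x:x + block]
            bv = flow_v[y:y + block, x:x + block]
            variances.append(bu.var() + bv.var())
            coords.append((y, x))
    thresh = np.mean(variances)  # adaptive: relative to the whole frame
    return [(y, x, block, block)
            for v, (y, x) in zip(variances, coords) if v > thresh]

# Toy flow field: static background, noisy motion in the top-left patch.
rng = np.random.default_rng(0)
u = np.zeros((32, 32))
v = np.zeros((32, 32))
u[:8, :8] = rng.normal(2.0, 1.0, (8, 8))  # horizontal motion + noise
v[:8, :8] = rng.normal(0.0, 1.0, (8, 8))
print(motion_rois(u, v))  # → [(0, 0, 8, 8)]
```

In a full pipeline this would run separately on the gray and depth optical flow, and subsequent corner detection would be restricted to the returned regions.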
Source: Journal of Beijing University of Technology (indexed in CAS, CSCD, PKU Core), 2016, No. 11, pp. 1643-1651 (9 pages).
Funding: National Basic Research Program of China ("973" Program, 2012CB720000); National Natural Science Foundation of China (61573029).
Keywords: gesture recognition; one-shot learning; spatiotemporal feature; adaptive; motion regions of interest
