
Generalized Pose Decoupled Network for Unsupervised 3D Skeleton Sequence-Based Action Representation Learning

Abstract: Human action representation is derived from the description of human shape and motion. The traditional unsupervised 3-dimensional (3D) human action representation learning method uses a recurrent neural network (RNN)-based autoencoder to reconstruct the input pose sequence and then takes the mid-level feature of the autoencoder as the representation. Although an RNN can implicitly learn a certain amount of motion information, the extracted representation mainly describes human shape and is insufficient to describe motion. Therefore, we first present a handcrafted motion feature, called pose flow, to guide the reconstruction of the autoencoder, whose mid-level feature is then expected to describe motion information. The performance is limited, however, as we observe that actions can be distinctive in either motion direction or motion norm. For example, we can distinguish "sitting down" from "standing up" by motion direction, yet distinguish "running" from "jogging" by motion norm. In such cases, it is difficult to learn distinctive features from pose flow, in which direction and norm are mixed. To this end, we present an explicit pose decoupled flow network (PDF-E) that learns from direction and norm in a multi-task learning framework, where one encoder generates the representation and two decoders generate the direction and the norm, respectively. Further, we use reconstruction of the input pose sequence as an additional constraint and present a generalized PDF network (PDF-G) that learns both motion and shape information, achieving state-of-the-art performance on large-scale and challenging 3D action recognition datasets, including the NTU RGB+D 60 and NTU RGB+D 120 datasets.
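The decoupling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes pose flow is the frame-to-frame displacement of each joint (a common convention for such features), and the helper names `pose_flow` and `decouple` are hypothetical. Direction and norm recombine exactly into the original flow, which is what lets the two decoders target them separately.

```python
import numpy as np

def pose_flow(poses: np.ndarray) -> np.ndarray:
    """Frame-to-frame joint displacement (assumed definition of pose flow).

    poses: array of shape (T, J, 3) -- T frames, J joints, 3D coordinates.
    Returns an array of shape (T-1, J, 3).
    """
    return poses[1:] - poses[:-1]

def decouple(flow: np.ndarray, eps: float = 1e-8):
    """Split pose flow into motion norm and unit motion direction.

    norm has shape (T-1, J, 1); direction has shape (T-1, J, 3) with
    unit length wherever the flow is nonzero, so direction * norm == flow.
    """
    norm = np.linalg.norm(flow, axis=-1, keepdims=True)
    direction = flow / np.maximum(norm, eps)  # guard against zero motion
    return direction, norm

# Usage: a toy 3-frame, 2-joint sequence.
poses = np.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
    [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]],
    [[0.0, 2.0, 0.0], [1.0, 3.0, 4.0]],
])
flow = pose_flow(poses)
direction, norm = decouple(flow)
```

In a PDF-E-style setup, one decoder would be trained to regress `direction` and the other `norm`, so neither signal can dominate or mask the other in the shared encoder's representation.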
Source: Cyborg and Bionic Systems, 2022, Issue 1, pp. 1-11.
Funding: Supported by the National Natural Science Foundation of China (grant nos. 62203476, 61871154, and 62031013) and the Youth Program of the National Natural Science Foundation of China (grant no. 61906103).