Abstract
The prevalent use of motion capture (MoCap) produces large volumes of data, and MoCap data retrieval becomes crucial for efficient data reuse. MoCap clips may not be neatly segmented and labeled, which increases the difficulty of retrieval. To effectively retrieve such data, we propose an elastic content-based retrieval scheme via unsupervised posture encoding and strided temporal alignment (PESTA) in this work. It retrieves similarities at the sub-sequence level, achieves robustness against singular frames, and enables control of the tradeoff between precision and efficiency. First, it learns a dictionary of encoded postures using unsupervised adversarial autoencoder techniques, based on which any MoCap sequence can be compactly symbolized. Second, it conducts strided temporal alignment to align a query sequence to repository sequences and retrieve the best-matching sub-sequences from the repository. Furthermore, it extends to finding matches for multiple sub-queries in a long query at sharply improved efficiency with only a slight sacrifice in precision. The outstanding performance of the proposed scheme is demonstrated by experiments on two public MoCap datasets and one MoCap dataset captured by ourselves.
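The two-stage pipeline outlined in the abstract (symbolizing pose frames against a learned posture dictionary, then sliding the symbolized query over repository sequences with a stride) can be illustrated with a minimal sketch. This is not the authors' implementation: the nearest-codeword symbolization and the `SequenceMatcher` similarity used below are stand-ins for the paper's adversarial-autoencoder dictionary and alignment score, and all function names are hypothetical.

```python
from difflib import SequenceMatcher  # stand-in similarity between symbol sequences


def symbolize(frames, codebook):
    """Map each pose frame (a feature vector) to the index of its nearest
    codeword, yielding a compact symbol sequence for the whole clip."""
    def nearest(frame):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], frame)))
    return [nearest(f) for f in frames]


def strided_best_match(query_syms, repo_syms, stride=2):
    """Slide the symbolized query over a repository symbol sequence at the
    given stride; return (start_index, score) of the best-matching window.
    A larger stride trades a little precision for faster retrieval."""
    best_start, best_score = 0, -1.0
    for s in range(0, len(repo_syms) - len(query_syms) + 1, stride):
        window = repo_syms[s:s + len(query_syms)]
        score = SequenceMatcher(None, query_syms, window).ratio()
        if score > best_score:
            best_start, best_score = s, score
    return best_start, best_score
```

The `stride` parameter mirrors the precision/efficiency control the abstract mentions: stride 1 checks every window, while larger strides skip candidate alignments.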
Authors
Zi-Fei Jiang, Wei Li, Yan Huang, Yi-Long Yin, C.-C. Jay Kuo, Jing-Liang Peng (School of Software, Shandong University, Jinan 250101, China; Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles 90089, U.S.A.; Shandong Provincial Key Laboratory of Network Based Intelligent Computing, Jinan 250022, China; School of Information Science and Engineering, University of Jinan, Jinan 250022, China)
Funding
Supported by the Shandong Provincial Natural Science Foundation of China under Grant No. ZR2022MF294.