The prevalent use of motion capture (MoCap) produces large volumes of data, and MoCap data retrieval becomes crucial for efficient data reuse. MoCap clips may not be neatly segmented and labeled, which increases the difficulty of retrieval. To retrieve such data effectively, we propose an elastic content-based retrieval scheme via unsupervised posture encoding and strided temporal alignment (PESTA). It retrieves similarities at the sub-sequence level, achieves robustness against singular frames, and enables control of the tradeoff between precision and efficiency. It first learns a dictionary of encoded postures using unsupervised adversarial autoencoder techniques, based on which any MoCap sequence can be compactly symbolized. Second, it performs strided temporal alignment to align a query sequence against repository sequences and retrieve the best-matching sub-sequences from the repository. Furthermore, the scheme extends to matching multiple sub-queries within a long query, greatly improving efficiency at a negligible cost in precision. The outstanding performance of the proposed scheme is demonstrated by experiments on two public MoCap datasets and one MoCap dataset captured by ourselves.
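The abstract gives no implementation details, so the following is a minimal sketch of the posture-encoding stage, not the authors' implementation: a small fully connected adversarial autoencoder (AAE) in PyTorch, with a k-means step to build the posture dictionary and nearest-neighbor quantization to symbolize a sequence. The pose dimensionality, layer sizes, Gaussian prior, and the use of k-means are all illustrative assumptions.

```python
# Hedged sketch of unsupervised posture encoding and symbolization.
# All architectural choices below are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

POSE_DIM, LATENT_DIM = 63, 16   # e.g., 21 joints x 3D coordinates (assumed)

encoder = nn.Sequential(nn.Linear(POSE_DIM, 128), nn.ReLU(),
                        nn.Linear(128, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                        nn.Linear(128, POSE_DIM))
# Discriminator pushes encoded postures toward a chosen prior (here Gaussian).
disc = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                     nn.Linear(64, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_ae = torch.optim.Adam(list(encoder.parameters()) +
                          list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(poses):                       # poses: (batch, POSE_DIM)
    # 1) Reconstruction phase: autoencoder learns to reproduce postures.
    opt_ae.zero_grad()
    recon_loss = nn.functional.mse_loss(decoder(encoder(poses)), poses)
    recon_loss.backward()
    opt_ae.step()
    # 2) Discriminator phase: prior samples are "real", encodings are "fake".
    opt_d.zero_grad()
    z_fake = encoder(poses).detach()
    z_real = torch.randn_like(z_fake)
    d_loss = (bce(disc(z_real), torch.ones(len(poses), 1)) +
              bce(disc(z_fake), torch.zeros(len(poses), 1)))
    d_loss.backward()
    opt_d.step()
    # 3) Regularization phase: encoder tries to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(disc(encoder(poses)), torch.ones(len(poses), 1))
    g_loss.backward()
    opt_g.step()

def build_dictionary(latents, n_symbols=64):
    # Assumption: the dictionary is built by clustering encoded postures.
    return KMeans(n_clusters=n_symbols, n_init=10).fit(latents).cluster_centers_

def symbolize(sequence, dictionary):         # sequence: (frames, POSE_DIM)
    # Quantize each frame's latent code to its nearest dictionary atom,
    # turning a MoCap sequence into a compact string of symbol indices.
    with torch.no_grad():
        z = encoder(torch.as_tensor(sequence, dtype=torch.float32)).numpy()
    dists = np.linalg.norm(z[:, None, :] - dictionary[None, :, :], axis=2)
    return dists.argmin(axis=1)              # one symbol per frame
```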
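Likewise, a hedged sketch of the alignment stage: standard sub-sequence dynamic time warping (DTW) over the symbol strings produced above, with free start and end points in the repository sequence so that the query can match any sub-sequence. How PESTA applies its stride is not specified in the abstract; here the stride simply downsamples both symbol strings before alignment, which is only one plausible reading of the precision/efficiency knob. The function names and the 0/1 mismatch cost are assumptions.

```python
import numpy as np

def subsequence_dtw(query, repo):
    """DTW where the query may match any sub-sequence of repo.
    query, repo: 1-D arrays of posture symbols; cost is 0/1 mismatch."""
    n, m = len(query), len(repo)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                        # free start anywhere in repo
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if query[i - 1] == repo[j - 1] else 1.0
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1   # free end: best endpoint in repo
    return D[n, end], end

def strided_retrieval(query, repo, stride=4):
    # Illustrative "strided" variant (an assumption): downsample both symbol
    # strings by `stride` before aligning; larger strides run faster but
    # localize the matching sub-sequence more coarsely.
    score, end = subsequence_dtw(query[::stride], repo[::stride])
    return score, end * stride           # approximate endpoint in frames
```

With stride = 1 this reduces to full sub-sequence DTW; a stride of s cuts the quadratic alignment cost by roughly a factor of s squared, at the price of coarser match boundaries, mirroring the precision/efficiency tradeoff the abstract describes.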
Funding: supported by the Shandong Provincial Natural Science Foundation of China under Grant No. ZR2022MF294.