Abstract
Generating dance that temporally and aesthetically matches the music is challenging in three respects. First, the generated motion should be beat-aligned with local musical features. Second, the global aesthetic style of the motion should match that of the music. Third, the generated motion should be diverse and non-self-repeating. To address these challenges, we propose ReChoreoNet, which re-choreographs high-quality dance motion for a given piece of music. A data-driven learning strategy is proposed to efficiently capture the temporal correspondence between music and motion in a progressively learned cross-modality embedding space. The beat-aligned content motion is subsequently used as autoregressive context and control signal for a normalizing-flow model, which transfers the style of a prototype motion to the final generated dance. In addition, we present an aesthetically labelled music-dance repertoire (MDR) for both efficient learning of the cross-modality embedding and understanding of the aesthetic connections between music and motion. We demonstrate that our repertoire-based framework is robustly extensible in both content and style. Both quantitative and qualitative experiments have been carried out to validate the effectiveness of the proposed model.
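To make the style-transfer step concrete, the sketch below shows one conditional affine coupling layer of a normalizing flow, where the scale and shift applied to a pose vector are predicted from the beat-aligned content-motion context. This is a minimal illustration of the general technique, not the paper's actual architecture; the class name `ConditionalAffineCoupling` and the dimensions `pose_dim` and `cond_dim` are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One affine coupling step of a conditional normalizing flow:
    half of the pose vector is transformed with a scale/shift predicted
    from the other half plus a condition vector (e.g. the beat-aligned
    content-motion context used as autoregressive control signal)."""

    def __init__(self, pose_dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.half = pose_dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (pose_dim - self.half)),
        )

    def forward(self, x, cond):
        # Split the pose vector; x1 and the condition drive the transform of x2.
        x1, x2 = x[:, : self.half], x[:, self.half :]
        log_s, t = self.net(torch.cat([x1, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)          # bound the scales for stability
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.sum(dim=-1)        # log-determinant for the flow likelihood
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y, cond):
        # Invert the coupling exactly, so poses can be sampled from latent noise.
        y1, y2 = y[:, : self.half], y[:, self.half :]
        log_s, t = self.net(torch.cat([y1, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=-1)


# Hypothetical usage: sample a stylized pose by inverting the flow,
# conditioned on a content-motion context vector.
flow = ConditionalAffineCoupling(pose_dim=72, cond_dim=128)
content_ctx = torch.randn(4, 128)    # beat-aligned context / control signal
z = torch.randn(4, 72)               # latent sample from the style prior
pose = flow.inverse(z, content_ctx)  # generated dance pose, shape (4, 72)
```

Because each coupling layer is exactly invertible with a cheap log-determinant, stacking several such layers gives a model that can be trained by maximum likelihood on prototype motions and then sampled under new content-motion conditions.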
Funding
Supported by the Theme-based Research Scheme, Research Grants Council of Hong Kong, China (T45-205/21-N).