ReChoreoNet: Repertoire-based Dance Re-choreography with Music-conditioned Temporal and Style Clues
Authors: Ho Yin Au, Jie Chen, Junkun Jiang, Yike Guo. Machine Intelligence Research (EI, CSCD), 2024, Issue 4, pp. 771-781 (11 pages).
Generating dance that temporally and aesthetically matches the music is a challenging problem in three aspects. First, the generated motion should be aligned with the beats of the local musical features. Second, the global aesthetic style should be matched between motion and music. Third, the generated motion should be diverse and non-self-repeating. To address these challenges, we propose ReChoreoNet, which re-choreographs high-quality dance motion for a given piece of music. A data-driven learning strategy is proposed to efficiently correlate the temporal connections between music and motion in a progressively learned cross-modality embedding space. The beats-aligned content motion is subsequently used as autoregressive context and control signal for a normalizing-flow model, which transfers the style of a prototype motion to the final generated dance. In addition, we present an aesthetically labelled music-dance repertoire (MDR) for both efficient learning of the cross-modality embedding and understanding of the aesthetic connections between music and motion. We demonstrate that our repertoire-based framework is robustly extensible in both content and style. Both quantitative and qualitative experiments have been carried out to validate the efficiency of our proposed model.
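The abstract describes controlling a normalizing-flow model with beats-aligned content motion as an autoregressive context and control signal. The sketch below is illustrative only: the abstract gives no architectural details, so the layer sizes, conditioning scheme, and class names are assumptions rather than the authors' implementation. It shows the general pattern of a conditional affine coupling layer in which a conditioning vector (here standing in for the beats-aligned content-motion context) modulates the flow's scale and shift.

```python
# Minimal sketch of a conditional normalizing-flow building block (assumed design,
# not taken from the paper): half of the input is transformed with scale/shift
# predicted from the other half plus a conditioning vector.
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        xa, xb = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(torch.cat([xa, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)              # keep scales well-behaved
        yb = xb * torch.exp(log_s) + t         # forward affine transform
        log_det = log_s.sum(dim=-1)            # log|det Jacobian| for the flow loss
        return torch.cat([xa, yb], dim=-1), log_det

    def inverse(self, y, cond):
        ya, yb = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(torch.cat([ya, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        xb = (yb - t) * torch.exp(-log_s)      # invert the affine map
        return torch.cat([ya, xb], dim=-1)

# Usage: encode a pose frame given the content-motion condition, then sample a
# new frame by inverting the flow from Gaussian noise with the same condition.
dim, cond_dim = 64, 128                        # hypothetical feature sizes
flow = ConditionalAffineCoupling(dim, cond_dim)
pose = torch.randn(8, dim)                     # batch of pose features
cond = torch.randn(8, cond_dim)                # beats-aligned content context
z, log_det = flow(pose, cond)
sampled = flow.inverse(torch.randn(8, dim), cond)
```

In a full model, several such coupling steps would typically be stacked and the condition would be produced per time step by an autoregressive context encoder; those details are not specified in the abstract.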
Keywords: generative model, cross-modality learning, normalizing flow, tempo synchronization, style transfer