Funding: Supported by Startup Fund 20019495, McMaster University.
Abstract: Background: Synthesizing dance motions to match musical input is a significant challenge in animation research. Compared with functional human motions such as locomotion, dance motions are creative and artistic, often influenced by music, and can stand on their own as expressive body language. Dance choreography requires the motion content to follow a general dance genre, whereas dance performances under musical influence are infused with diverse impromptu motion styles. Given this high expressiveness and the variation in both space and time, providing accessible and effective user control for tuning dance motion styles remains an open problem. Methods: In this study, we present a hierarchical framework that decouples the dance synthesis task into independent modules: a high-level choreography module, built as a Transformer-based sequence model, which predicts the long-term structure of a dance genre, and a low-level realization module, which implements dance stylization and synchronization to match the musical input or user preferences. This framework allows the individual modules to be trained separately. Because of the decoupling, dance composition can fully exploit existing high-quality dance datasets that lack musical accompaniment, and dance realization can conveniently incorporate user controls and edit motions through a decoder network. Each module is replaceable at runtime, which adds flexibility to the synthesis of dance sequences. Results: The synthesized results demonstrate that our framework generates high-quality, diverse dance motions that adapt well to varying musical conditions and user controls.
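To make the decoupling concrete, here is a minimal PyTorch-style sketch of the two-level design described in the abstract: a music-free Transformer choreography model over discrete dance tokens, and a separately trained realization decoder conditioned on music features. The module sizes, the token interface between the two levels, and all feature dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ChoreographyModule(nn.Module):
    """High-level sequence model: predicts the next latent dance token,
    capturing long-term genre structure. Trainable on music-free data."""
    def __init__(self, n_tokens=512, d_model=256, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_tokens)

    def forward(self, tokens):                # tokens: (B, T) token ids
        h = self.encoder(self.embed(tokens))  # causal masking omitted for brevity
        return self.head(h)                   # (B, T, n_tokens) next-token logits

class RealizationModule(nn.Module):
    """Low-level decoder: renders dance tokens into pose sequences,
    conditioned on music (or user-style) features; trained separately."""
    def __init__(self, n_tokens=512, music_dim=64, pose_dim=72, d_model=256):
        super().__init__()
        self.token_embed = nn.Embedding(n_tokens, d_model)
        self.cond_proj = nn.Linear(music_dim, d_model)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.to_pose = nn.Linear(d_model, pose_dim)

    def forward(self, tokens, music_feats):   # (B, T), (B, T, music_dim)
        x = self.token_embed(tokens) + self.cond_proj(music_feats)
        h, _ = self.decoder(x)
        return self.to_pose(h)                # (B, T, pose_dim) joint poses

# Either module can be swapped at runtime, e.g. a different realization
# decoder for a new style, while the choreography model stays fixed.
tokens = torch.randint(0, 512, (1, 120))      # illustrative token sequence
music = torch.randn(1, 120, 64)               # illustrative music features
poses = RealizationModule()(tokens, music)    # (1, 120, 72)
```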
Funding: Project supported by the National Natural Science Foundation of China (Nos. 60573153 and 60533080) and the Program for New Century Excellent Talents in University (No. NCET-05-0519), China.
Abstract: In this paper, a novel motion texture approach is presented for synthesizing long character motion (e.g., kungfu) that resembles a short input motion. First, a new motion with repeated frames is generated by exploiting the symmetric properties of the frames and reversing the playback of the given motion sequence. The resulting sequence is then rearranged so that its start and end frames are adjacent, and the graphcut algorithm is used to synthesize a seamless transition between them; we call the result a graphcut-based motion texton. Finally, these motion textons are patched together, much as image patches are in graphcut texture synthesis, to automatically form an arbitrarily long motion texture. Our approach is demonstrated by synthesizing long kungfu motion textures without visual artifacts, together with post-processing that includes our newly developed graphcut-based motion blending and Poisson-based motion smoothing techniques.
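The texton construction can be illustrated with a short NumPy sketch: lengthen the clip by reversed playback, cut it where a late frame best matches an early frame so it loops, then tile loops with a seam blend. The linear crossfade below is a simple stand-in for the paper's graphcut transition and Poisson smoothing, and the search and blend window sizes are illustrative assumptions.

```python
import numpy as np

def make_loop(motion, search=30):
    """Find a late frame i and early frame j that match best, then cut the
    clip so its end flows back into its start. motion: (T, D) pose array."""
    head, tail = motion[:search], motion[-search:]
    cost = np.linalg.norm(tail[:, None] - head[None, :], axis=-1)
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    end = len(motion) - search + i
    return motion[j:end + 1]      # last frame ~ frame j, so tiling is smooth

def crossfade(a, b, n=8):
    """Linearly blend the last n frames of a into the first n frames of b
    (stand-in for graphcut blending and Poisson smoothing)."""
    w = np.linspace(0.0, 1.0, n)[:, None]
    seam = (1 - w) * a[-n:] + w * b[:n]
    return np.concatenate([a[:-n], seam, b[n:]], axis=0)

# Build a texton: lengthen by reversed playback, then make it loopable.
motion = np.random.rand(200, 60)                # placeholder (T, D) clip
extended = np.concatenate([motion, motion[::-1]], axis=0)
texton = make_loop(extended)

# Tile textons into an arbitrarily long motion texture.
long_motion = texton
for _ in range(5):
    long_motion = crossfade(long_motion, texton)
```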
Abstract: A shape blending algorithm for 2-D curved shapes is presented in this paper. A curved shape is represented by a closed Non-Uniform Rational B-Spline (NURBS). We determine the intermediate shapes by interpolating the intrinsic definitions of the initial and final control polygons. This algorithm avoids the shrinkage that results from linear vertex interpolation and produces smooth intermediate shapes. Aliasing problems can also be easily eliminated.
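The core idea, interpolating intrinsic quantities rather than vertex positions, can be sketched for a plain closed polygon as below. This simplified sketch interpolates edge lengths and edge angles (not the paper's full NURBS pipeline) and re-closes the polygon by redistributing the residual edge sum; all names and the closure strategy are illustrative assumptions.

```python
import numpy as np

def intrinsic(poly):
    """Closed polygon (N, 2) -> per-edge lengths and absolute edge angles."""
    edges = np.roll(poly, -1, axis=0) - poly
    return np.linalg.norm(edges, axis=1), np.arctan2(edges[:, 1], edges[:, 0])

def blend(poly0, poly1, t):
    """Intermediate polygon at parameter t in [0, 1], free of the
    shrinkage produced by linear vertex interpolation."""
    (l0, a0), (l1, a1) = intrinsic(poly0), intrinsic(poly1)
    lengths = (1 - t) * l0 + t * l1
    # Interpolate each edge angle along the shorter arc.
    da = np.arctan2(np.sin(a1 - a0), np.cos(a1 - a0))
    angles = a0 + t * da
    # Reconstruct vertices by integrating the edges; subtract the mean edge
    # so the edge vectors sum to zero and the polygon closes exactly.
    edges = lengths[:, None] * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    edges -= edges.mean(axis=0)
    verts = np.vstack([np.zeros(2), np.cumsum(edges, axis=0)[:-1]])
    return verts + (1 - t) * poly0[0] + t * poly1[0]

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
diamond = np.array([[0.5, -0.2], [1.2, 0.5], [0.5, 1.2], [-0.2, 0.5]])
mid = blend(square, diamond, 0.5)    # shrinkage-free intermediate shape
```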
Funding: Supported by the National Natural Science Foundation of China (No. 60801053), the Beijing Natural Science Foundation (No. 4082025), the Doctoral Foundation of China (No. 20070004037), the Fundamental Research Funds for the Central Universities (Nos. 2009JBM135 and 2011JBM023), the BJTU Hongguoyuan Innovative Talent Program (No. 151139522), the Beijing Excellent Doctoral Thesis Program (No. YB20081000401), and the National Key Basic Research and Development (973) Program of China (No. 2006CB303105).
Abstract: Creating realistic virtual humans has been a long-standing challenge in computer science research. This paper describes an integrated framework for modeling virtual humans with a high degree of autonomy. The framework seeks to reproduce believable, human-like behavior and movement in virtual humans within a virtual environment. It comprises a visual and auditory information perception module, a decision-network-based behavior module, and a hierarchical autonomous motion control module, which cooperate to model realistic autonomous individual behavior in real-time interactive virtual environments. The framework was tested in a simulated virtual environment system, demonstrating its ability to create autonomous, perceptive, and intelligent virtual humans in real time.
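A minimal sketch of the three-module pipeline described above is given below. The class names, percept fields, and hand-written rules are illustrative assumptions standing in for the paper's perception, decision-network, and hierarchical motion components; only the perceive-decide-move structure is taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    sees_target: bool
    hears_noise: bool

class PerceptionModule:
    def sense(self, env) -> Percept:
        # Stand-in for visual/auditory sensing of the virtual environment.
        return Percept(env.get("target_visible", False),
                       env.get("noise_level", 0.0) > 0.5)

class BehaviorModule:
    def decide(self, p: Percept) -> str:
        # Stand-in for decision-network inference: percepts -> behavior label.
        if p.hears_noise:
            return "turn_toward_sound"
        return "approach_target" if p.sees_target else "wander"

class MotionModule:
    def execute(self, behavior: str) -> list[str]:
        # Hierarchical control: expand a behavior into low-level motion clips.
        plans = {"wander": ["walk_cycle"],
                 "approach_target": ["walk_cycle", "reach"],
                 "turn_toward_sound": ["turn", "look"]}
        return plans[behavior]

def tick(env, perc=PerceptionModule(), beh=BehaviorModule(), mot=MotionModule()):
    """One frame of the autonomy loop: perceive -> decide -> move."""
    return mot.execute(beh.decide(perc.sense(env)))

print(tick({"target_visible": True}))   # ['walk_cycle', 'reach']
```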