Journal Articles
5 articles found
1. Music-stylized hierarchical dance synthesis with user control
Authors: Yanbo CHENG, Yichen JIANG, Yingying WANG. 《虚拟现实与智能硬件(中英文)》, EI, 2024, Issue 5, pp. 339-357 (19 pages).
Background: Synthesizing dance motions to match musical inputs is a significant challenge in animation research. Compared to functional human motions, such as locomotion, dance motions are creative and artistic, often influenced by music, and can be independent body language expressions. Dance choreography requires motion content to follow a general dance genre, whereas dance performances under musical influence are infused with diverse impromptu motion styles. Considering the high expressiveness and variations in space and time, providing accessible and effective user control for tuning dance motion styles remains an open problem. Methods: In this study, we present a hierarchical framework that decouples the dance synthesis task into independent modules. We use a high-level choreography module built as a Transformer-based sequence model to predict the long-term structure of a dance genre and a low-level realization module that implements dance stylization and synchronization to match the musical input or user preferences. This novel framework allows the individual modules to be trained separately. Because of the decoupling, dance composition can fully utilize existing high-quality dance datasets that do not have musical accompaniments, and the dance implementation can conveniently incorporate user controls and edit motions through a decoder network. Each module is replaceable at runtime, which adds flexibility to the synthesis of dance sequences. Results: Synthesized results demonstrate that our framework generates high-quality diverse dance motions that are well adapted to varying musical conditions and user controls.
Keywords: Deep learning, Character animation, Motion synthesis, Motion stylization, Multimodal synchronization, User control
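The hierarchical decoupling described in the abstract above (a high-level sequence model that plans choreography tokens, and a separately trained low-level module that realizes poses against music and user style controls) can be illustrated with a minimal sketch. The class names, dimensions, and chaining below (ChoreographyPlanner, RealizationDecoder) are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a two-stage pipeline where a
# Transformer plans a sequence of choreography tokens and a separate
# decoder turns each token plus music/style features into pose frames.
import torch
import torch.nn as nn

class ChoreographyPlanner(nn.Module):            # hypothetical high-level module
    def __init__(self, n_tokens=64, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_tokens)  # next-token logits

    def forward(self, token_seq):                 # (batch, time) int tokens
        h = self.encoder(self.embed(token_seq))
        return self.head(h)                       # (batch, time, n_tokens)

class RealizationDecoder(nn.Module):              # hypothetical low-level module
    def __init__(self, n_tokens=64, music_dim=32, style_dim=8, pose_dim=63):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, 64)
        self.net = nn.GRU(64 + music_dim + style_dim, 128, batch_first=True)
        self.out = nn.Linear(128, pose_dim)

    def forward(self, tokens, music, style):      # style is broadcast over time
        style = style.unsqueeze(1).expand(-1, tokens.shape[1], -1)
        x = torch.cat([self.embed(tokens), music, style], dim=-1)
        h, _ = self.net(x)
        return self.out(h)                        # (batch, time, pose_dim)

# Usage: plan a token sequence, then realize poses conditioned on music + style.
planner, decoder = ChoreographyPlanner(), RealizationDecoder()
tokens = torch.randint(0, 64, (1, 30))            # stand-in choreography history
music  = torch.randn(1, 30, 32)                   # stand-in music features
style  = torch.randn(1, 8)                        # stand-in user style control
next_token_logits = planner(tokens)
poses = decoder(tokens, music, style)
print(next_token_logits.shape, poses.shape)
```

Because the two modules only communicate through the token sequence, either one could in principle be swapped out at runtime, which is the flexibility the abstract highlights.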
2. Animation Production Using Real-Time Facial Motion Capture: An Application Example of Character Animator
Author: 马金秀. 《现代电影技术》, 2020, Issue 10, pp. 42-47 (6 pages).
With the continuous advancement of computer technology, animation production software keeps evolving. Using real-time facial motion capture for animation provides a new production approach for animated shorts. Starting from a completed short video, this paper describes how real-time facial motion capture was used to create puppet animation during production, covering puppet import, skeleton rigging, and the recording and export of real-time animation in Character Animator, and discusses the prospects of this production method for short-video creation.
Keywords: Real-time facial motion capture, Rigging, Character Animator, Animation production
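Character Animator performs landmark tracking and rig binding inside the application, so the workflow above involves no code. As a rough illustration of the general principle behind face-driven puppet animation only, the sketch below maps a few facial landmark measurements per frame onto rig parameters. The function `drive_puppet` and the landmark dictionary are hypothetical stand-ins for the output of any face tracker; they have nothing to do with Character Animator's internals.

```python
# Illustrative sketch only: map per-frame facial landmark positions onto
# puppet rig parameters. The landmark dict is a hypothetical stand-in for
# a real tracker's output (normalized 2D image coordinates).
import math
from dataclasses import dataclass

@dataclass
class PuppetPose:
    mouth_open: float      # 0 = closed, 1 = fully open
    brow_raise: float      # 0 = neutral, 1 = fully raised
    head_tilt: float       # radians, positive = tilted toward the right eye

def drive_puppet(landmarks: dict) -> PuppetPose:
    """Convert raw landmark positions into normalized rig parameters."""
    face_h = abs(landmarks["chin"][1] - landmarks["forehead"][1]) + 1e-6
    mouth = abs(landmarks["lip_lower"][1] - landmarks["lip_upper"][1]) / face_h
    brow = abs(landmarks["brow"][1] - landmarks["eye"][1]) / face_h
    dx = landmarks["eye_right"][0] - landmarks["eye_left"][0]
    dy = landmarks["eye_right"][1] - landmarks["eye_left"][1]
    return PuppetPose(mouth_open=min(mouth / 0.15, 1.0),
                      brow_raise=min(max((brow - 0.10) / 0.05, 0.0), 1.0),
                      head_tilt=math.atan2(dy, dx))

# One fake frame of tracked landmarks, just to show the mapping.
frame = {"chin": (0.5, 0.9), "forehead": (0.5, 0.2),
         "lip_upper": (0.5, 0.68), "lip_lower": (0.5, 0.74),
         "brow": (0.42, 0.33), "eye": (0.42, 0.42),
         "eye_left": (0.40, 0.42), "eye_right": (0.60, 0.43)}
print(drive_puppet(frame))
```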
3. Motion texture using symmetric property and graphcut algorithm
Authors: SHEN Jian-bing, JIN Xiao-gang, ZHOU Chuan, ZHAO Han-li. 《Journal of Zhejiang University-Science A (Applied Physics & Engineering)》, SCIE EI CAS CSCD, 2006, Issue 7, pp. 1107-1114 (8 pages).
In this paper, a novel motion texture approach is presented for synthesizing long character motion (e.g., kungfu) that is similar to the original short input motion. First, a new motion with repeated frames is generated by exploiting the symmetric properties of the frames and reversing the motion sequence playback in a given motion sequence. Then, the order of the above motion sequence is rearranged by putting the start and the end frames together. The graphcut algorithm is used to seamlessly synthesize the transition between the start and the end frames, which is denoted as a graphcut-based motion texton. Finally, we utilize the motion textons to synthesize a long motion texture: they can be patched together, much like graphcut-based image texture synthesis, to automatically form a long motion texture endlessly. Our approach is demonstrated by synthesizing a long kungfu motion texture without visual artifacts, together with post-processing including our newly developed graphcut-based motion blending and Poisson-based motion smoothing techniques.
Keywords: Motion capture, Motion texture, Character animation, Graphcut
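The core idea in the abstract above (append the time-reversed clip so the motion returns to its start, then stitch the end back to the start to form a repeatable "motion texton") can be sketched very simply. The paper finds the seam with a graphcut; the sketch below substitutes a plain nearest-frame search and a Euclidean pose-distance metric, both of which are simplifying assumptions.

```python
# Simplified sketch of the "motion texton" idea: append the time-reversed clip
# to the original so the motion returns toward its start, pick the pair of
# frames (one near the end, one near the start) whose poses are closest, and
# loop the clip there. The original paper finds this seam with a graphcut;
# a brute-force nearest-frame search is used here for brevity.
import numpy as np

def frame_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two pose vectors (assumed joint parameters)."""
    return float(np.linalg.norm(a - b))

def build_texton(motion: np.ndarray) -> np.ndarray:
    """motion: (frames, pose_dim). Returns the motion followed by its reverse."""
    return np.concatenate([motion, motion[::-1]], axis=0)

def best_loop_seam(texton: np.ndarray, window: int = 20) -> tuple:
    """Find (end_frame, start_frame) with the most similar poses for looping."""
    n = len(texton)
    best, best_d = (n - 1, 0), np.inf
    for i in range(n - window, n):          # candidate cut near the end
        for j in range(0, window):          # candidate re-entry near the start
            d = frame_distance(texton[i], texton[j])
            if d < best_d:
                best, best_d = (i, j), d
    return best

def synthesize(motion: np.ndarray, repeats: int = 3) -> np.ndarray:
    """Tile the texton into a long sequence, cutting at the best seam."""
    texton = build_texton(motion)
    end, start = best_loop_seam(texton)
    loop = texton[start:end + 1]
    return np.concatenate([loop] * repeats, axis=0)

short_clip = np.random.randn(120, 60)       # stand-in for a captured short clip
long_motion = synthesize(short_clip)
print(short_clip.shape, long_motion.shape)
```

In the paper the transition is further cleaned up with graphcut-based motion blending and Poisson-based smoothing; the seam search here only stands in for that step.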
4. 2-D SHAPE BLENDING OF NURBS CURVE SHAPES (cited by 2)
Authors: Jin Xiaogang, Bao Hujun, Peng Qunsheng. 《Computer Aided Drafting, Design and Manufacturing》, 1994, Issue 1, pp. 13-19.
A shape blending algorithm for 2-D curved shapes is presented in this paper. A curved shape is represented by a closed Non-Uniform Rational B-Spline (NURBS). We determine the intermediate shapes by interpolating the intrinsic definitions of the initial and final control polygons. This algorithm can avoid the shrinkage that results from linear vertex interpolation and produces smooth intermediate shapes. Aliasing problems can also be easily eliminated.
Keywords: Shape blending, Character animation, NURBS
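The key step named in the abstract above is interpolating intrinsic quantities of the control polygons (edge lengths and edge directions) instead of vertex positions, which is what causes shrinkage under linear blending. Below is a minimal sketch of intrinsic interpolation for closed polygons under stated assumptions: the closure correction is a simple heuristic, and NURBS weights and knot vectors, which the paper does handle, are omitted.

```python
# Minimal sketch of intrinsic polygon blending: interpolate edge lengths and
# edge directions of the two control polygons instead of vertex positions,
# then rebuild the in-between polygon by walking along the interpolated edges.
import numpy as np

def to_intrinsic(poly: np.ndarray):
    """poly: (n, 2) closed polygon vertices. Returns edge lengths and directions."""
    edges = np.roll(poly, -1, axis=0) - poly
    lengths = np.linalg.norm(edges, axis=1)
    angles = np.arctan2(edges[:, 1], edges[:, 0])   # absolute edge directions
    return lengths, angles

def blend(poly_a: np.ndarray, poly_b: np.ndarray, t: float) -> np.ndarray:
    """Intrinsic in-between shape at parameter t in [0, 1]."""
    la, aa = to_intrinsic(poly_a)
    lb, ab = to_intrinsic(poly_b)
    lengths = (1 - t) * la + t * lb
    # Interpolate directions along the shorter arc to avoid 2*pi jumps.
    diff = np.arctan2(np.sin(ab - aa), np.cos(ab - aa))
    angles = aa + t * diff
    # Rebuild vertices by walking the interpolated edges from a blended start.
    start = (1 - t) * poly_a[0] + t * poly_b[0]
    steps = np.stack([lengths * np.cos(angles), lengths * np.sin(angles)], axis=1)
    verts = start + np.concatenate([[np.zeros(2)], np.cumsum(steps, axis=0)[:-1]])
    # Crude closure fix: spread the residual gap linearly over the vertices.
    gap = verts[0] - (verts[-1] + steps[-1])
    verts += np.linspace(0, 1, len(verts))[:, None] * gap
    return verts

square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)
diamond = np.array([[1, -1], [3, 1], [1, 3], [-1, 1]], float)
print(blend(square, diamond, 0.5))     # in-between control polygon at t = 0.5
```

Because lengths and directions are blended rather than positions, the in-between shape keeps its size instead of collapsing toward the centroid, which is the shrinkage the abstract refers to.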
5. Creating Autonomous, Perceptive and Intelligent Virtual Humans in a Real-Time Virtual Environment
Authors: 刘渭滨, 周亮, 邢薇薇, 刘幸奇, 袁保宗. 《Tsinghua Science and Technology》, SCIE EI CAS, 2011, Issue 3, pp. 233-240 (8 pages).
Creating realistic virtual humans has been a challenging objective in computer science research for some time. This paper describes an integrated framework for modeling virtual humans with a high level of autonomy. The framework seeks to reproduce human-like believable behavior and movement in virtual humans in a virtual environment. The framework includes a visual and auditory information perception module, a decision network based behavior decision module, and a hierarchical autonomous motion control module. These cooperate to model realistic autonomous individual behavior for virtual humans in real-time interactive virtual environments. The framework was tested in a simulated virtual environment system to demonstrate the ability of the framework to create autonomous, perceptive and intelligent virtual humans in real-time virtual environments.
Keywords: Virtual human, Autonomous behavior, Motion planning, Obstacle avoidance, Character animation, Behavior decision
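The framework in the abstract above chains three modules (perception, behavior decision, motion control) in a real-time loop. The sketch below only illustrates that sense-decide-act structure with placeholder logic; the class names and the decision rule are assumptions, not the paper's decision network or hierarchical motion controller.

```python
# Structural sketch of a sense -> decide -> act loop for an autonomous virtual
# human. The perception, decision, and motion-control logic is placeholder
# code; the paper uses a decision network and a hierarchical motion controller.
from dataclasses import dataclass, field

@dataclass
class Percept:
    visible_obstacles: list = field(default_factory=list)   # positions (x, y)
    heard_sound: bool = False

@dataclass
class VirtualHuman:
    position: tuple = (0.0, 0.0)
    goal: tuple = (10.0, 0.0)

    def perceive(self, world: dict) -> Percept:
        """Visual/auditory perception module (placeholder: read from a dict)."""
        near = [p for p in world.get("obstacles", [])
                if abs(p[0] - self.position[0]) + abs(p[1] - self.position[1]) < 3.0]
        return Percept(visible_obstacles=near, heard_sound=world.get("sound", False))

    def decide(self, percept: Percept) -> str:
        """Behavior decision module (placeholder rule, not a decision network)."""
        if percept.heard_sound:
            return "turn_toward_sound"
        if percept.visible_obstacles:
            return "avoid_obstacle"
        return "walk_to_goal"

    def act(self, behavior: str) -> None:
        """Motion control module (placeholder: just nudge the position)."""
        x, y = self.position
        if behavior == "walk_to_goal":
            x += 0.5
        elif behavior == "avoid_obstacle":
            y += 0.5                       # sidestep
        self.position = (x, y)

# One simulated frame of the real-time loop.
human = VirtualHuman()
world = {"obstacles": [(1.0, 0.0)], "sound": False}
behavior = human.decide(human.perceive(world))
human.act(behavior)
print(behavior, human.position)
```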