This paper proposes a novel method, called model transduction, to directly transfer pose between different meshes without the need to build skeleton configurations for the meshes. Unlike previous retargeting methods such as deformation transfer, model transduction does not require a reference source mesh to obtain the source deformation, and thus effectively avoids unsatisfactory results when the source and target have different reference poses. Moreover, we show two other applications of the model transduction method: pose correction after various mesh editing operations, and skeleton-free deformation animation based on 3D motion-capture (Mocap) data. Model transduction is based on two ingredients: model deformation and model correspondence. Specifically, based on the mean-value manifold operator, our mesh deformation method produces visually pleasing results under large-angle rotations or large-scale translations of handles. We then propose a novel scheme for shape-preserving correspondence between manifold meshes. Our method fits nicely into a unified framework in which the same type of operator is applied in all phases. The resulting quadratic formulation can be minimized efficiently by solving a sparse linear system. Experimental results show that model transduction can successfully transfer both complex skeletal structures and subtle skin deformations.
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 60903060 and 60675012, the National High-Tech Research and Development 863 Program of China under Grant No. 2009AA012104, and the China Postdoctoral Science Foundation under Grant No. 20080440258.
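The abstract gives no implementation details, but its final computational step, minimizing a quadratic deformation energy by solving a sparse linear system, is standard in Laplacian-style mesh editing. The sketch below illustrates only that generic step in Python/SciPy; the uniform edge weights, the toy mesh, and the soft handle constraints are illustrative assumptions and are not the paper's mean-value manifold operator or correspondence scheme.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy "mesh": 4 vertices on a strip with 3 edges (assumption, not real data).
V = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [2.0, 0.0, 0.0],
              [3.0, 0.0, 0.0]])
edges = [(0, 1), (1, 2), (2, 3)]
n = len(V)

# Sparse graph Laplacian with uniform weights (the paper uses mean-value
# weights instead; uniform weights are a simple stand-in here).
rows, cols, vals = [], [], []
deg = np.zeros(n)
for i, j in edges:
    rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
    deg[i] += 1.0; deg[j] += 1.0
rows += list(range(n)); cols += list(range(n)); vals += list(deg)
L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# Differential coordinates of the rest pose: delta = L @ V.
delta = L @ V

# Hypothetical handle constraints: pin vertex 0, lift vertex 3 by one unit.
constrained = {0: V[0], 3: V[3] + np.array([0.0, 1.0, 0.0])}
w = 100.0  # soft-constraint weight

# Quadratic energy  ||L x - delta||^2 + w^2 * sum_c ||x_c - p_c||^2,
# minimized via the normal equations  A^T A x = A^T b  (one sparse solve
# per coordinate), mirroring the "sparse linear system" step in the abstract.
C = sp.csr_matrix(
    ([w] * len(constrained),
     (list(range(len(constrained))), list(constrained))),
    shape=(len(constrained), n))
A = sp.vstack([L, C]).tocsr()
b = np.vstack([delta, w * np.array(list(constrained.values()))])

AtA = (A.T @ A).tocsc()
X = np.column_stack([spla.spsolve(AtA, A.T @ b[:, k]) for k in range(3)])
print(X)  # deformed vertex positions
```

Because the energy is quadratic, the minimizer is exact after a single sparse factorization, which is what makes this formulation efficient in practice; swapping in mean-value weights changes only how the entries of L are computed, not the solve.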