Funding: Project (20104307110003) supported by the Research Fund for the Doctoral Program of Higher Education of China; Projects (61379103, 61202333, 61303185) supported by the National Natural Science Foundation of China; Project (2012M520392) supported by the China Postdoctoral Science Foundation; Project (CX2012B027) supported by the Hunan Province Graduate Student Innovation Program, China.
Abstract: A data-driven method was proposed to realistically animate garments on human poses in a reduced space. First, a gradient-based method was extended to generate motion sequences, and garments were simulated on these sequences to form the training data. Based on these examples, the proposed method can quickly output realistic garments on new poses. The framework is divided into an offline phase and an online phase. In the offline phase, based on linear blend skinning (LBS), rigid bones and flex bones were estimated for the human bodies and the garments, respectively; rigid bone weight maps on the garment vertices were then learned from the examples. In the online phase, new human poses were taken as input to estimate the rigid bone transformations, and both rigid bones and flex bones were used to drive the garments to fit the new poses. Finally, a novel formulation was proposed to efficiently handle garment-body penetration. Experiments show that the method is fast and accurate: intersection artifacts are removed quickly, and the final garment results are realistic.
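As a point of reference for the skinning step named in the abstract, the sketch below shows standard linear blend skinning applied to garment vertices: each vertex is deformed by a weighted combination of rigid bone transformations. The function name, array shapes, and weight matrix are illustrative assumptions; the paper's learned weight maps, flex-bone refinement, and penetration-handling formulation are not reproduced here.

```python
import numpy as np

def lbs_deform(rest_vertices, bone_transforms, weights):
    """Minimal linear blend skinning sketch (not the paper's full pipeline).

    rest_vertices:   (V, 3) garment vertices in the rest pose
    bone_transforms: (B, 4, 4) rigid bone transforms estimated for the new pose
    weights:         (V, B) per-vertex bone weights, each row summing to 1
    """
    num_vertices = rest_vertices.shape[0]
    # Lift rest-pose vertices to homogeneous coordinates: (V, 4)
    rest_h = np.hstack([rest_vertices, np.ones((num_vertices, 1))])
    # Transform every vertex by every bone: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, rest_h)
    # Blend the per-bone results with the per-vertex weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

In the setting described by the abstract, the weights would be learned offline from the simulated examples and the bone transforms estimated online from the new pose; the rigid-skinned result above would then be further corrected by the flex bones and the penetration-handling step.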