Monocular Video Guided Garment Simulation

Abstract: We present a prototype to generate a garment-shape sequence guided by a monocular video sequence. It combines a physically-based simulation with a boundary-based modification. Given a garment worn on a mannequin in the video, the simulation generates an initial garment shape by exploiting the mannequin shapes estimated from the video. The modification then deforms the simulated 3D shape into one that matches the garment's 2D boundary extracted from the video. Based on the matching correspondences between vertices on the shape and points on the boundary, the modification attracts the matched vertices and their neighboring vertices. To obtain the best-matching correspondences efficiently, three criteria are introduced for selecting the candidate vertices for matching. Since modifying each garment shape independently may cause inter-frame oscillations, changes made by the modification are also propagated from one frame to the next. As a result, the generated 3D garment shape sequence is stable and closely resembles the garment in the video. We demonstrate the effectiveness of our prototype with a number of examples.
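The abstract describes two coupled steps: a boundary-based modification that attracts matched vertices (and their one-ring neighbors) toward targets derived from the extracted 2D garment boundary, and a propagation of those corrections to the next frame to avoid inter-frame oscillation. The sketch below illustrates only this general idea; the function names, the weights (alpha, beta, gamma), and the assumption that boundary points are back-projected to the matched vertices' depth are our own illustrative choices, not the paper's actual formulation.

```python
# Minimal sketch of the boundary-based modification step outlined in the
# abstract. All names and weights are hypothetical; the paper's three
# candidate-selection criteria and camera model are not reproduced here.
import numpy as np

def attract_to_boundary(vertices, matched_ids, targets, neighbors,
                        alpha=0.8, beta=0.4):
    """Pull matched vertices toward boundary-derived targets and spread a
    fraction of each correction to their one-ring neighbors.

    vertices    : (N, 3) simulated garment vertex positions
    matched_ids : (M,)   indices of vertices matched to 2D boundary points
    targets     : (M, 3) target positions (boundary points back-projected
                  to the matched vertices' depth -- an assumption here)
    neighbors   : list of index lists, one-ring neighborhood per vertex
    alpha, beta : assumed attraction weights for matched vertices and
                  their neighbors, respectively
    """
    corrected = vertices.copy()
    offsets = targets - vertices[matched_ids]      # per-match correction
    corrected[matched_ids] += alpha * offsets      # attract matched vertices
    for vid, off in zip(matched_ids, offsets):     # propagate to neighbors
        for nid in neighbors[vid]:
            corrected[nid] += beta * alpha * off
    return corrected

def propagate_to_next_frame(prev_correction, next_vertices, gamma=0.5):
    """Carry a damped fraction of the previous frame's correction into the
    next frame's simulated shape to suppress inter-frame oscillation."""
    return next_vertices + gamma * prev_correction
```

The correspondence search itself (matching silhouette vertices of the simulated mesh to points on the extracted 2D boundary under the paper's three selection criteria) is the part this sketch leaves out and must be supplied before calling `attract_to_boundary`.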
Source: Journal of Computer Science & Technology (SCIE, EI, CSCD), 2015, No. 3, pp. 528-539 (12 pages).
Funding: This work was partially supported by the National High Technology Research and Development 863 Program of China under Grant No. 2013AA013801, the National Natural Science Foundation of China under Grant No. 61325011, and the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20131102130002.
Keywords: garment simulation, monocular video, shape correspondence