Funding: supported by the National Key Research and Development Program of China under Grant No. 2018YFB2100602, the National Natural Science Foundation of China under Grant Nos. 61972459, 61971418 and 62071157, and Open Research Projects of Zhejiang Lab under Grant No. 2021KE0AB07.
Abstract: Existing physical cloth simulators suffer from expensive computation and difficulties in tuning mechanical parameters to obtain desired wrinkling behaviors. Data-driven methods provide an alternative solution: they typically synthesize cloth animation at a much lower computational cost and create wrinkling effects similar to the training data. In this paper we propose a deep-learning-based method for synthesizing cloth animation with high-resolution meshes. To do this, we first create a dataset for training: a pair of low- and high-resolution meshes is simulated and their motions are synchronized. As a result, the two meshes exhibit similar large-scale deformation but different small wrinkles. Each simulated mesh pair is then converted into a pair of low- and high-resolution "images" (2D arrays of samples), with each image pixel interpreted as any of three descriptors: the displacement, the normal, and the velocity. With these image pairs, we design a multi-feature super-resolution (MFSR) network that jointly trains an upsampling synthesizer for the three descriptors. The MFSR architecture consists of shared and task-specific layers to learn multi-level features when super-resolving the three descriptors simultaneously. Frame-to-frame consistency is well maintained thanks to the proposed kinematics-based loss function. Our method achieves realistic results at high frame rates: 12-14 times faster than traditional physical simulation. We demonstrate the performance of our method in various experimental scenes, including a dressed character with sophisticated collisions.
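The shared-plus-task-specific design described in the abstract can be illustrated with a minimal PyTorch sketch. The layer counts, channel widths, and upsampling factor below are illustrative assumptions rather than the paper's actual configuration, and the three descriptor images are assumed to be stored as 3-channel 2D arrays.

```python
import torch
import torch.nn as nn

class MFSRSketch(nn.Module):
    """Minimal sketch of a multi-feature super-resolution network:
    a shared trunk learns features common to all descriptors, while
    task-specific heads upsample the displacement, normal, and velocity
    images separately. Widths, depths, and the scale factor are assumptions."""

    def __init__(self, channels=3, width=64, scale=4):
        super().__init__()
        # Shared layers: extract features common to the three descriptors.
        self.shared = nn.Sequential(
            nn.Conv2d(3 * channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        # One task-specific upsampling head per descriptor.
        def head():
            return nn.Sequential(
                nn.Conv2d(width, channels * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),  # low-res feature map -> high-res image
            )
        self.heads = nn.ModuleDict({k: head() for k in ("disp", "normal", "vel")})

    def forward(self, disp, normal, vel):
        # Inputs: low-resolution descriptor images, shape [batch, channels, H, W].
        feat = self.shared(torch.cat([disp, normal, vel], dim=1))
        return {name: h(feat) for name, h in self.heads.items()}
```

The kinematics-based loss mentioned in the abstract would, presumably, add a temporal term on top of the per-descriptor reconstruction losses, for example penalizing disagreement between the predicted frame-to-frame displacement change and the predicted velocity so that consecutive frames stay consistent.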
Funding: supported by the National Natural Science Foundation of China under Grant No. 61972379.
Abstract: Synthesizing garment dynamics according to body motions is a vital technique in computer graphics. Physics-based simulation depends on an accurate model of the kinetics of cloth, which is time-consuming, hard to implement, and complex to control. Existing data-driven approaches either lack temporal consistency or fail to handle garments whose topology differs from that of the body. In this paper, we present a motion-inspired real-time garment synthesis workflow that enables high-level control of garment shape. Given a sequence of body motions, our workflow generates corresponding garment dynamics with both spatial and temporal coherence. To that end, we develop a transformer-based garment synthesis network to learn the mapping from body motions to garment dynamics. Frame-level attention is employed to capture the dependency between garments and body motions. Moreover, a post-processing procedure is applied to perform penetration removal and auto-texturing, yielding textured clothing animation that is collision-free and temporally consistent. We evaluate the proposed workflow quantitatively and qualitatively from different aspects. Extensive experiments demonstrate that our network delivers clothing dynamics that retain the wrinkles of physics-based simulation while running 1000 times faster. Moreover, our workflow achieves superior synthesis performance compared with alternative approaches. To stimulate further research in this direction, our code will be publicly available soon.
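A rough sketch of a frame-level-attention mapping from a body-motion sequence to garment dynamics is given below. The feature dimensions, the use of a standard transformer encoder, and the per-vertex offset output are assumptions made for illustration only; the paper's actual architecture and its post-processing stages (penetration removal, auto-texturing) are not reproduced here.

```python
import torch
import torch.nn as nn

class GarmentTransformerSketch(nn.Module):
    """Sketch of a transformer that maps per-frame body-motion features
    (e.g., pose parameters) to per-frame garment vertex offsets.
    Dimensions and layer counts are illustrative assumptions."""

    def __init__(self, motion_dim=72, num_verts=4096, d_model=256,
                 nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(motion_dim, d_model)       # per-frame motion -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Frame-level self-attention lets every output frame attend to the
        # whole motion sequence, which encourages temporal coherence.
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.decode = nn.Linear(d_model, num_verts * 3)   # token -> vertex offsets

    def forward(self, motion):                 # motion: [batch, frames, motion_dim]
        tokens = self.encoder(self.embed(motion))
        offsets = self.decode(tokens)          # [batch, frames, num_verts * 3]
        return offsets.view(*motion.shape[:2], -1, 3)
```

At inference time the predicted per-vertex offsets would be added to a template (or skinned) garment mesh, after which a post-processing step such as the one described in the abstract removes any remaining body-garment penetration and applies texturing.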