We present a method to dynamically deform 3D garments, represented as 3D polygon meshes, based on body shape, motion, and physical cloth material properties. Taking physical cloth properties into account allows us to learn a physically grounded model that is more accurate in terms of physically inspired metrics such as strain or curvature. Existing work studies pose-dependent garment modeling to generate garment deformations from example data, and in some cases data-driven dynamic cloth simulation to generate realistic garments in motion. We propose D-Garment, a learning-based approach trained on new data generated with a physics-based simulator. Compared to prior work, our 3D generative model learns garment deformations conditioned on physical material properties, which allows it to model loose cloth geometry, especially large deformations and dynamic wrinkles driven by body motion. Furthermore, the model can be efficiently fitted to observations captured with vision sensors, such as 3D point clouds. We leverage the ability of diffusion models to learn flexible and powerful generative priors by modeling the 3D garment in a 2D parameter space, independently of the mesh resolution. This representation allows us to learn a template-specific latent diffusion model whose global and local geometry can be conditioned on body and cloth material information. We quantitatively and qualitatively evaluate D-Garment on both simulations and data captured with a multi-view acquisition platform. Compared to recent baselines, our method is more realistic and accurate in terms of shape similarity and physical validity metrics.
We introduce a latent diffusion model that generates dynamic garment deformations from physical inputs defined by a cloth material and the underlying body shape and motion. Our model is capable of representing large deformations and fine wrinkles of dynamic loose clothing.
Our model generates garment deformations conditioned on body shape, motion, and cloth material. It builds on a 2D latent diffusion model that learns to deform a template in uv-space: the 3D displacement of each mesh vertex from the template is parameterized by a uv displacement map, on which the model is trained together with the conditioning inputs. At inference, the model generates the deformed garment by iteratively denoising Gaussian noise according to these conditioning inputs, as sketched below.
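For intuition, here is a minimal sketch of such a conditioned denoising loop, assuming a standard DDPM noise schedule. The denoiser eps_model, the decoder decode, the conditioning vector cond, and all hyperparameters are hypothetical placeholders for illustration, not the actual D-Garment architecture.

    # Minimal sketch of the inference loop described above. `eps_model`,
    # `decode`, and `cond` are hypothetical names; the real D-Garment
    # conditioning encoding and noise schedule may differ.
    import torch

    T = 1000                                # number of diffusion steps (assumed)
    betas = torch.linspace(1e-4, 0.02, T)   # standard linear DDPM schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    @torch.no_grad()
    def sample_displacement_map(eps_model, decode, cond, latent_shape):
        """Iteratively denoise Gaussian noise into a uv displacement latent.

        cond: conditioning built from body shape, motion, and cloth
              material parameters (the encoding is an assumption here).
        """
        z = torch.randn(latent_shape)       # start from pure Gaussian noise
        for t in reversed(range(T)):
            eps = eps_model(z, t, cond)     # predict noise given the conditions
            a, ab = alphas[t], alpha_bars[t]
            # DDPM posterior mean for step t -> t-1
            z = (z - (1 - a) / torch.sqrt(1 - ab) * eps) / torch.sqrt(a)
            if t > 0:
                z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
        return decode(z)                    # uv displacement map, e.g. (H, W, 3)

The resulting displacement map would then be sampled at each template vertex's uv coordinate and added to the template geometry to obtain the deformed mesh.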
If you find our work useful, consider citing:
@article{dumoulin2026dgarment,
  title={D-Garment: Physically Grounded Latent Diffusion for Dynamic Garment Deformations},
  author={Dumoulin, Antoine and Boukhayma, Adnane and Boissieux, Laurence and Damodaran, Bharath Bhushan and Hellier, Pierre and Wuhrer, Stefanie},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2026},
  url={https://openreview.net/forum?id=NrPyio1aUK}
}