D-Garment: Physics-Conditioned Latent Diffusion for Dynamic Garment Deformations

¹Inria Centre at the University Grenoble Alpes ²Inria, University of Rennes, CNRS, IRISA-UMR 6074 ³InterDigital Inc.
Model overview

TLDR: We introduce a latent diffusion model that generates dynamic garment deformations from physical inputs defined by the cloth material and the underlying body shape and motion. Our model can represent the large deformations and fine wrinkles of dynamic loose clothing.

Our model generates garment deformations conditioned on body shape, body motion, and cloth material. It builds upon a 2D latent diffusion model to learn how to deform a template in uv-space. The 3D mesh vertex displacements from the template are parameterized as a uv displacement map, and our model is trained on these maps along with the conditional inputs. At inference, our model generates the deformed garment by iteratively denoising Gaussian noise conditioned on these inputs.
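The inference procedure above can be sketched as follows. This is an illustrative, minimal reconstruction under stated assumptions, not the paper's actual implementation: the denoiser is a stand-in for the trained conditional network, the DDPM-style scheduler and all shapes and names (`ddpm_sample`, `displacement_map_to_vertices`, the `cond` dictionary) are hypothetical, and the latent decoder is omitted.

```python
import numpy as np

def ddpm_sample(denoise_fn, cond, shape, T=50, seed=0):
    """Generic DDPM-style ancestral sampling loop (illustrative scheduler,
    not necessarily the one used in the paper)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    z = rng.standard_normal(shape)          # start from Gaussian noise
    for t in reversed(range(T)):
        eps = denoise_fn(z, t, cond)        # predicted noise, conditioned on inputs
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                           # re-noise at every step except the last
            z = z + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return z

def displacement_map_to_vertices(disp_map, uv, template_verts):
    """Sample the uv displacement map at each vertex's uv coordinate
    (nearest-neighbour lookup for simplicity) and add it to the template."""
    H, W, _ = disp_map.shape
    px = np.clip((uv[:, 0] * (W - 1)).round().astype(int), 0, W - 1)
    py = np.clip((uv[:, 1] * (H - 1)).round().astype(int), 0, H - 1)
    return template_verts + disp_map[py, px]

# Dummy denoiser standing in for the trained conditional network.
def dummy_denoiser(z, t, cond):
    return 0.1 * z + 0.01 * cond["material"]

cond = {"material": 1.0}                          # placeholder conditioning signal
latent = ddpm_sample(dummy_denoiser, cond, shape=(16, 16, 3))
disp_map = latent                                 # decoder to uv-space omitted here
uv = np.array([[0.25, 0.5], [0.75, 0.5]])         # per-vertex uv coordinates
template = np.zeros((2, 3))                       # template vertex positions
verts = displacement_map_to_vertices(disp_map, uv, template)
```

Because the displacement map lives in a fixed 2D parameter space, this formulation is independent of the mesh resolution: any mesh with uv coordinates can sample the same map.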

Abstract

Adjusting and deforming 3D garments to body shapes, body motion, and cloth material is an important problem in virtual and augmented reality. Applications are numerous, ranging from virtual changing rooms to the entertainment and gaming industries. The problem is challenging because garment dynamics influence geometric details such as wrinkling patterns, which depend on physical inputs including the wearer's body shape and motion as well as cloth material properties. Existing work studies learning-based modeling techniques that generate pose-driven garment deformations from example data, and physics-inspired simulators that generate realistic garment dynamics. We propose a learning-based approach trained on data generated with a physics-based simulator. Compared to prior work, our 3D generative model learns garment deformations for loose cloth geometry, especially the large deformations and dynamic wrinkles driven by body motion and cloth material. Furthermore, the model can be efficiently fitted to observations captured with vision sensors. We leverage the capability of diffusion models to learn fine-scale detail: we model the 3D garment in a 2D parameter space and learn a latent diffusion model on this representation, independent of the mesh resolution. This allows conditioning global and local geometric information on body and material information. We quantitatively and qualitatively evaluate our method on both simulated data and data captured with a multi-view acquisition platform. Compared to strong baselines, our method is more accurate in terms of Chamfer distance.

BibTeX

If you find our work useful, consider citing:


@article{dumoulin2025dgarment,
  title={D-Garment: Physics-Conditioned Latent Diffusion for Dynamic Garment Deformations},
  author={Dumoulin, Antoine and Boukhayma, Adnane and Boissieux, Laurence and Damodaran, Bharath Bhushan and Hellier, Pierre and Wuhrer, Stefanie},
  journal={arXiv preprint arXiv:2504.03468},
  year={2025},
  url={https://doi.org/10.48550/arXiv.2504.03468}
}