High-fidelity 3D garment synthesis from text is desirable yet challenging for digital avatar creation. Recent diffusion-based approaches via Score Distillation Sampling (SDS) have enabled new possibilities, but they are either tightly coupled with the human body or difficult to reuse. We introduce ClotheDreamer, a 3D-Gaussian-based method for generating wearable, production-ready 3D garment assets from text prompts. We propose a novel representation, Disentangled Clothe Gaussian Splatting (DCGS), to enable separate optimization: DCGS represents the clothed avatar as a single Gaussian model while freezing the body Gaussian splats. To enhance quality and completeness, we incorporate bidirectional SDS to supervise RGBD renderings of the clothed avatar and of the garment, respectively, with pose conditions, and we propose a new pruning strategy for loose clothing. Our approach also supports custom clothing templates as input. Benefiting from this design, the synthesized 3D garments can be readily applied to virtual try-on and support physically accurate animation. Extensive experiments demonstrate that our method achieves superior quality and competitive performance.
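As a rough illustration of the separate-optimization idea behind DCGS (not the paper's implementation), the following minimal PyTorch sketch freezes a set of body Gaussian parameters while keeping garment Gaussians trainable, so a loss on the joint rendering only updates the garment. All tensor names, shapes, and the placeholder loss are assumptions for illustration.

```python
import torch

# Hypothetical parameter sets: only Gaussian centers are shown here; a real
# splatting model also carries scales, rotations, opacities, and colors.
body_xyz = torch.nn.Parameter(torch.randn(10_000, 3), requires_grad=False)  # frozen body splats
garment_xyz = torch.nn.Parameter(torch.randn(5_000, 3))  # trainable garment splats

# Because the body splats are frozen, an SDS-style gradient computed on the
# full clothed-avatar rendering flows only into the garment parameters.
optimizer = torch.optim.Adam([garment_xyz], lr=1e-3)

# One mock optimization step: the "rendering" is just the concatenated point
# set, and the loss is a stand-in for the real SDS supervision signal.
render_points = torch.cat([body_xyz, garment_xyz], dim=0)
loss = render_points.pow(2).mean()  # placeholder, not the actual rendering loss
optimizer.zero_grad()
loss.backward()
optimizer.step()

assert body_xyz.grad is None  # the frozen body received no gradient
```

Keeping body and garment in one Gaussian model while gating gradients this way is what lets the optimized garment be detached and reused (e.g., for try-on) without re-optimizing the body.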