We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physically accurate dynamics, even for loose clothing. Previous neural rendering-based representations of animatable clothed humans typically employ a single model for both the clothing and the underlying body. While efficient, such approaches often fail to capture complex garment dynamics, leading to incorrect deformations and noticeable rendering artifacts, especially for sliding or loose garments. Furthermore, previous works represent garment dynamics as pose-dependent deformations and drive novel-pose animation in a purely data-driven manner. This often produces motion that does not faithfully follow the underlying mechanics and is prone to artifacts for out-of-distribution poses. To address these issues, we adopt two separate 3D Gaussian Splatting (3DGS) models with different deformation behaviors, modeling the human body and the clothing individually. This separation allows each component to be handled according to its own motion characteristics. On top of this representation, we integrate a graph neural network (GNN)-based clothed-body physics simulation module to ensure an accurate representation of clothing dynamics. Through these design choices, our method achieves high-fidelity rendering of clothed human bodies under complex and novel driving poses, significantly outperforming previous methods in the same settings.
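To make the two-model design concrete, the minimal sketch below illustrates the data flow it implies: one set of body Gaussians posed by skeletal skinning and a separate set of clothing Gaussians updated by a simulation step, with both sets rendered jointly. All names here are hypothetical, and the `simulate_clothing` stub merely stands in for the paper's GNN-based simulation module, which is not reproduced here.

```python
# Hypothetical sketch, not the authors' code: body and clothing are kept as
# two separate Gaussian sets with different deformation rules.
import numpy as np

def skin_body_gaussians(means, weights, bone_transforms):
    """Deform body Gaussian centers with linear blend skinning.
    means: (N, 3), weights: (N, B), bone_transforms: (B, 4, 4)."""
    homo = np.concatenate([means, np.ones((means.shape[0], 1))], axis=1)  # (N, 4)
    # Blend per-bone transforms by skinning weights, then apply to each center.
    blended = np.einsum("nb,bij->nij", weights, bone_transforms)          # (N, 4, 4)
    return np.einsum("nij,nj->ni", blended, homo)[:, :3]

def simulate_clothing(cloth_means, body_means, gravity=(0.0, -9.8, 0.0), dt=1.0 / 30):
    """Placeholder for a learned clothed-body physics step (the paper uses a
    GNN-based simulator); here just a single explicit gravity update."""
    return cloth_means + dt * dt * np.asarray(gravity)

# Toy data: 100 body Gaussians skinned to 2 bones, 50 clothing Gaussians.
rng = np.random.default_rng(0)
body_means = rng.normal(size=(100, 3))
cloth_means = rng.normal(size=(50, 3))
weights = rng.random((100, 2))
weights /= weights.sum(axis=1, keepdims=True)
bones = np.stack([np.eye(4), np.eye(4)])

posed_body = skin_body_gaussians(body_means, weights, bones)
posed_cloth = simulate_clothing(cloth_means, posed_body)
all_means = np.concatenate([posed_body, posed_cloth])  # rendered jointly by 3DGS
print(all_means.shape)  # (150, 3)
```

The point of the separation is visible in the sketch: the body set follows the skeleton directly, while the clothing set is free to be driven by a physics step, so loose garments are not forced into purely pose-dependent deformation.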