URAvatar: Universal Relightable Gaussian Codec Avatars

We present a new approach to creating photorealistic and relightable head avatars from a phone scan with unknown illumination. The reconstructed avatars can be animated and relit in real time with the global illumination of diverse environments. Unlike existing approaches that estimate parametric reflectance parameters via inverse rendering, our approach directly models learnable radiance transfer that incorporates global light transport in an efficient manner for real-time rendering. However, learning such a complex light transport that can generalize across identities is non-trivial. A phone scan in a single environment lacks sufficient information to infer how the head would appear in general environments. To address this, we build a universal relightable avatar model represented by 3D Gaussians. We train on hundreds of high-quality multi-view human scans with controllable point lights. High-resolution geometric guidance further enhances the reconstruction accuracy and generalization. Once trained, we finetune the pretrained model on a phone scan using inverse rendering to obtain a personalized relightable avatar. Our experiments establish the efficacy of our design, outperforming existing approaches while retaining real-time rendering capability.
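The core idea of "learnable radiance transfer" echoes precomputed radiance transfer (PRT): if lighting is expressed in a spherical-harmonics (SH) basis, outgoing radiance at a point becomes a dot product between a learned per-point transfer vector and the environment's SH coefficients, folding global light transport into a single linear operation cheap enough for real time. Below is a minimal PyTorch sketch of that idea applied to per-Gaussian appearance; the class name, tensor shapes, and the choice of 3 SH bands are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class RelightableGaussians(nn.Module):
    """Hypothetical PRT-style relighting head for 3D Gaussians.
    Each Gaussian stores a learnable radiance-transfer vector per
    color channel; relighting is a dot product with the lighting SH."""

    def __init__(self, num_gaussians: int, sh_bands: int = 3):
        super().__init__()
        self.n_coeffs = sh_bands ** 2  # 9 SH coefficients for 3 bands
        # Per-Gaussian transfer vectors: (num_gaussians, RGB, n_coeffs).
        self.transfer = nn.Parameter(
            torch.zeros(num_gaussians, 3, self.n_coeffs))

    def forward(self, env_sh: torch.Tensor) -> torch.Tensor:
        # env_sh: (3, n_coeffs) SH projection of the environment map.
        # Contract over SH coefficients per channel -> (num_gaussians, 3).
        rgb = torch.einsum('gck,ck->gc', self.transfer, env_sh)
        return torch.relu(rgb)  # clamp negative radiance
```

Because the lighting enters only through `env_sh`, swapping environments at render time costs one small matrix contraction per Gaussian, which is what keeps global-illumination relighting compatible with real-time rendering.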

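The personalization stage finetunes the pretrained universal model on a phone scan via inverse rendering, which means the unknown capture illumination must be solved for jointly with the avatar parameters. A minimal sketch of that loop follows; the `avatar.render` interface, the SH parameterization of the unknown light, and the plain L1 photometric loss are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def personalize(avatar, phone_frames, cameras, steps=2000, lr=1e-3):
    """Hypothetical finetuning loop: optimize the pretrained avatar and
    the unknown capture illumination against phone-scan frames."""
    # Unknown scan illumination, parameterized as SH coefficients.
    env_sh = torch.zeros(3, 9, requires_grad=True)
    opt = torch.optim.Adam(list(avatar.parameters()) + [env_sh], lr=lr)
    for step in range(steps):
        i = step % len(phone_frames)
        frame, cam = phone_frames[i], cameras[i]
        pred = avatar.render(cam, env_sh)  # assumed renderer interface
        loss = F.l1_loss(pred, frame)      # photometric reconstruction
        opt.zero_grad()
        loss.backward()
        opt.step()
    return avatar, env_sh
```

Once fit, the estimated `env_sh` is discarded and the personalized avatar can be relit by substituting the SH coefficients of any target environment.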