Recent advances in head avatar rendering using Gaussian primitives have achieved remarkably high-fidelity results. Although precise head geometry is crucial for applications like mesh reconstruction and relighting, current methods struggle to capture intricate geometric details and to render unseen poses because they rely on similarity transformations, which cannot represent the stretch and shear transforms essential for detailed geometric deformation. To address this, we propose SurFhead, a novel method that reconstructs riggable head geometry from RGB videos using 2D Gaussian surfels. Surfels offer well-defined geometric properties, such as precise depth from fixed ray intersections and normals derived from their surface orientation, making them advantageous over their 3D counterparts. SurFhead ensures high-fidelity rendering of both normals and images, even in extreme poses, by leveraging classical mesh-based deformation transfer and affine transformation interpolation. SurFhead introduces precise geometric deformation and blends surfels through polar decomposition of transformations, including those affecting normals. Our key contribution lies in bridging classical graphics techniques, such as mesh-based deformation, with modern Gaussian primitives, achieving state-of-the-art geometry reconstruction and rendering quality. Unlike previous avatar rendering approaches, SurFhead enables efficient reconstruction driven by Gaussian primitives while preserving high-fidelity geometry.
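The polar-decomposition idea mentioned above can be illustrated with a minimal NumPy sketch, assuming the deformation of a surfel is described by a 3x3 affine Jacobian. This is not the paper's implementation; the function names are hypothetical. It factors an affine matrix into a proper rotation and a symmetric stretch/shear, and shows the standard inverse-transpose rule for deforming normals:

```python
import numpy as np

def polar_decompose(A):
    """Factor a 3x3 affine matrix as A = R @ S via SVD,
    where R is the closest proper rotation and S is a
    symmetric positive semi-definite stretch/shear factor."""
    U, sigma, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        sigma[-1] *= -1
        R = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return R, S

def transform_normal(A, n):
    """Normals deform with the inverse-transpose of the
    Jacobian A, not with A itself."""
    n_new = np.linalg.inv(A).T @ n
    return n_new / np.linalg.norm(n_new)

# Example: an affine transform with stretch and shear, which a
# similarity transform (uniform scale + rotation) cannot express.
A = np.array([[1.2, 0.3, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
R, S = polar_decompose(A)
n = transform_normal(A, np.array([0.0, 0.0, 1.0]))
```

Separating the rotation `R` from the stretch `S` is what makes it possible to blend (interpolate) transformations sensibly: rotations can be interpolated on the rotation group, while stretches are interpolated linearly, avoiding the artifacts of naively averaging full affine matrices.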