This paper tackles the challenging task of updating a radiance field after object removal in 3D Gaussian Splatting. The main difficulties lie in preserving geometric consistency and maintaining texture coherence given the highly discrete nature of Gaussian primitives. We introduce a robust framework specifically designed to overcome these obstacles. The key insight of our approach is to enhance information exchange between visible and invisible areas, facilitating content restoration in both geometry and texture. Our method first optimizes the positions of Gaussian primitives to improve geometric consistency across removed and visible areas, guided by an online registration process informed by monocular depth estimation. We then employ a novel feature propagation mechanism to strengthen texture coherence, leveraging a cross-attention design that bridges Gaussians sampled from uncertain and certain areas. This design significantly refines texture coherence in the final radiance field. Extensive experiments validate that our method not only improves the quality of novel view synthesis for scenes after object removal but also achieves notable efficiency gains in training and rendering speed.
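To make the cross-attention feature propagation concrete, the following is a minimal sketch of the general mechanism the abstract describes: queries come from Gaussians sampled in the uncertain (removed) region, while keys and values come from Gaussians in the certain (visible) region, so that each uncertain Gaussian receives a weighted blend of visible-region features. The function name, the use of plain NumPy, and the single-head, unprojected attention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_propagate(uncertain_feats, certain_feats):
    """Illustrative cross-attention from certain to uncertain Gaussians.

    uncertain_feats: (M, D) features of Gaussians in the removed region (queries).
    certain_feats:   (N, D) features of Gaussians in visible areas (keys/values).
    Returns (M, D) propagated features: each row is a convex combination
    of visible-region features, weighted by scaled dot-product similarity.
    """
    d_k = certain_feats.shape[-1]
    scores = uncertain_feats @ certain_feats.T / np.sqrt(d_k)  # (M, N)
    weights = softmax(scores, axis=-1)                         # rows sum to 1
    return weights @ certain_feats                             # (M, D)
```

In a full pipeline these propagated features would replace or regularize the appearance attributes of the uncertain Gaussians before the radiance field is re-optimized; here the sketch only shows the attention step itself.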