Point cloud registration is a fundamental problem in large-scale 3D scene scanning and reconstruction. With the help of deep learning, registration methods have evolved significantly, reaching a nearly mature stage. Since the introduction of Neural Radiance Fields (NeRF), it has become the most popular 3D scene representation owing to its powerful view synthesis capabilities. Registration of NeRF representations is likewise required for large-scale scene reconstruction, yet this topic remains largely unexplored. The difficulty stems from the inherent challenge of modeling the geometric relationship between two scenes expressed implicitly; existing methods therefore usually convert the implicit representation into an explicit one before registration. Most recently, Gaussian Splatting (GS) was introduced, which employs explicit 3D Gaussians and significantly accelerates rendering while maintaining high rendering quality. In this work, given two scenes with explicit GS representations, we explore the 3D registration task between them. To this end, we propose GaussReg, a novel coarse-to-fine framework that is both fast and accurate. The coarse stage follows existing point cloud registration methods and estimates a rough alignment between the point clouds derived from the GS models. We further present a novel image-guided fine registration approach that renders images from GS to provide more detailed geometric information for precise alignment. To support comprehensive evaluation, we carefully build a scene-level dataset called ScanNet-GSReg, comprising 1379 scenes obtained from the ScanNet dataset, and collect an in-the-wild dataset called GSReg. Experimental results demonstrate that our method achieves state-of-the-art performance on multiple datasets. Notably, GaussReg is 44 times faster than HLoc (with SuperPoint as the feature extractor and SuperGlue as the matcher) at comparable accuracy.
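The coarse stage treats the Gaussian centers of each GS scene as a point cloud and estimates a rough alignment between them. As a minimal illustrative sketch (not the paper's learned pipeline), once correspondences between two such point sets are available, a closed-form similarity transform (scale, rotation, translation) can be recovered with the classic Umeyama/Kabsch method:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Closed-form similarity transform aligning src to dst.

    src, dst: (N, 3) arrays of corresponding 3D points
    (e.g. matched Gaussian centers from two GS scenes).
    Returns (s, R, t) such that dst ≈ s * src @ R.T + t.
    Generic textbook alignment, not GaussReg's learned registration.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d          # centered point sets
    cov = xd.T @ xs / len(src)               # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    # Guard against reflections so R is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (xs ** 2).sum() / len(src)       # mean squared norm of src
    s = np.trace(np.diag(S) @ D) / var_s     # optimal uniform scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In practice a coarse registration pipeline would first establish correspondences (e.g. via learned features and robust matching) before solving for the transform; the solver above is only the final closed-form step under known matches.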