Gaussian splatting for radiance field rendering (3DGS) has recently emerged as an efficient approach to accurate scene representation. It optimizes the location, size, color, and shape of a cloud of 3D Gaussian elements so that, after projection, or splatting, they visually match a set of given images taken from various viewing directions. Yet, despite the proximity of the Gaussian elements to the shape boundaries, direct surface reconstruction of objects in the scene remains a challenge. We propose a novel approach for surface reconstruction from Gaussian splatting models. Rather than relying on the Gaussian elements' locations as a prior for surface reconstruction, we leverage the superior novel-view synthesis capabilities of 3DGS. To that end, we use the Gaussian splatting model to render pairs of stereo-calibrated novel views, from which we extract depth maps using a stereo matching method. We then fuse the extracted RGB-D images into a geometrically consistent surface. The resulting reconstruction is more accurate and shows finer detail than other methods for surface reconstruction from Gaussian splatting models, while requiring significantly less compute time. We performed extensive testing of the proposed method on in-the-wild scenes captured with a smartphone, showcasing its superior reconstruction abilities. Additionally, we tested the proposed method on the Tanks and Temples benchmark, where it surpasses the current leading method for surface reconstruction from Gaussian splatting models.
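The central geometric step of the pipeline, recovering depth from a rectified stereo pair and back-projecting it to 3D points for fusion, can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names and camera parameters are hypothetical, and it assumes the disparity map has already been produced by a stereo matching method on a calibrated, rectified pair of rendered views.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-6):
    """Convert a disparity map (pixels) from a rectified stereo pair to
    metric depth via depth = focal_length * baseline / disparity.
    Pixels with near-zero disparity are marked invalid (inf)."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full(d.shape, np.inf)
    valid = d > min_disp
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map into a camera-frame point cloud of
    shape (H*W, 3) using the pinhole intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example: a 10 px disparity with a 500 px focal length and a
# 0.1 m baseline corresponds to a depth of 500 * 0.1 / 10 = 5.0 m.
depth = disparity_to_depth(np.array([[10.0]]), focal_px=500.0, baseline_m=0.1)
points = backproject(depth, fx=500.0, fy=500.0, cx=0.0, cy=0.0)
```

In the full method, the per-view point clouds produced this way carry the rendered RGB values, and the resulting RGB-D images are fused into a single geometrically consistent surface.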