3D Gaussian Splatting (3DGS) creates a radiance field consisting of 3D Gaussians to represent a scene. With sparse training views, 3DGS easily suffers from overfitting, negatively impacting reconstruction quality. This paper introduces a new co-regularization perspective for improving sparse-view 3DGS. When training two 3D Gaussian radiance fields on the same sparse views of a scene, we observe that the two radiance fields exhibit point disagreement and rendering disagreement, stemming from the sampling implementation in densification, which can predict reconstruction quality without supervision. We further quantify the point disagreement and rendering disagreement by evaluating the registration between the Gaussians' point representations and by calculating the differences in their rendered pixels. Our empirical study demonstrates a negative correlation between the two disagreements and accurate reconstruction, which allows us to identify inaccurate reconstruction without accessing ground-truth information. Based on this study, we propose CoR-GS, which identifies and suppresses inaccurate reconstruction based on the two disagreements: (1) Co-pruning treats Gaussians that exhibit high point disagreement as lying at inaccurate positions and prunes them. (2) Pseudo-view co-regularization treats pixels that exhibit high rendering disagreement as inaccurately rendered and suppresses the disagreement. Results on LLFF, Mip-NeRF360, DTU, and Blender demonstrate that CoR-GS effectively regularizes the scene geometry, reconstructs compact representations, and achieves state-of-the-art novel view synthesis quality under sparse training views.
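The two disagreement measures can be illustrated with a minimal sketch. The abstract describes rendering disagreement as a difference over rendered pixels and point disagreement as a registration-based comparison of the two fields' Gaussian point representations; the concrete metrics below (mean absolute color difference per pixel, and the fraction of Gaussian centers in one field with no nearby counterpart in the other, with a hypothetical threshold `tau`) are simple stand-ins chosen here for illustration, not the paper's exact formulation.

```python
import numpy as np

def rendering_disagreement(img_a, img_b):
    """Per-pixel disagreement between the two fields' renderings.

    img_a, img_b: (H, W, 3) float arrays in [0, 1], the same view rendered
    by the two co-trained radiance fields. Mean absolute color difference
    is one simple instantiation (an assumption here); pixels with high
    values would be treated as inaccurately rendered and regularized.
    """
    return np.abs(img_a - img_b).mean(axis=-1)  # (H, W) disagreement map

def point_disagreement(pts_a, pts_b, tau=0.05):
    """Fraction of Gaussians in field A with no counterpart in field B.

    pts_a: (N, 3), pts_b: (M, 3) Gaussian centers. A brute-force
    nearest-neighbor distance stands in for the registration-based
    evaluation described in the paper; `tau` is a hypothetical distance
    threshold. Gaussians whose nearest counterpart is farther than `tau`
    would be candidates for co-pruning.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    nn = d.min(axis=1)               # distance to nearest center in B
    return float((nn > tau).mean())  # high value -> high disagreement
```

Both quantities are computable from the two fields alone, which is what makes them usable as an unsupervised proxy for reconstruction quality: no ground-truth image or geometry enters either function.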