We propose GGS, a Generalizable Gaussian Splatting method for autonomous driving that achieves realistic rendering under large viewpoint changes. Previous generalizable 3D Gaussian Splatting methods are limited to rendering novel views that are very close to the original pair of input images and cannot handle large viewpoint differences. This is especially problematic in autonomous driving scenarios, where images are typically collected from a single lane; the limited training perspectives make rendering images of a different lane very challenging. To improve the rendering capability of GGS under large viewpoint changes, we introduce a novel virtual lane generation module that enables high-quality lane switching even without a multi-lane dataset. In addition, we design a diffusion loss to supervise the generation of virtual lane images, further addressing the lack of data for the virtual lanes. Finally, we propose a depth refinement module to optimize depth estimation in the GGS model. Extensive experiments demonstrate that our method achieves state-of-the-art performance compared to existing approaches.
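The abstract does not specify how virtual lane viewpoints are constructed, so the following is only a rough illustration of the underlying idea: displacing a camera pose laterally by roughly one lane width to obtain a virtual lane viewpoint for supervision. The function name, the 4x4 camera-to-world convention, and the 3.5 m default offset are assumptions for this sketch, not the paper's actual implementation.

```python
import torch

def shift_to_virtual_lane(cam_to_world: torch.Tensor,
                          lateral_offset_m: float = 3.5) -> torch.Tensor:
    """Translate a camera pose along its right axis to emulate an adjacent lane.

    cam_to_world: (4, 4) camera-to-world extrinsic matrix.
    lateral_offset_m: signed lateral shift in meters (hypothetical default of
        3.5 m, a typical highway lane width; positive shifts one lane right).
    """
    shifted = cam_to_world.clone()
    right_axis = cam_to_world[:3, 0]          # camera x-axis expressed in world coords
    shifted[:3, 3] += lateral_offset_m * right_axis  # move the camera center laterally
    return shifted
```

Under this sketch, images rendered from the shifted pose have no ground-truth counterpart in a single-lane dataset, which is where a diffusion-based loss could plausibly stand in as supervision.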