A Construct-Optimize Approach to Sparse View Synthesis without Camera Pose

Novel view synthesis from a sparse set of input images is a challenging problem of great practical interest, especially when camera poses are absent or inaccurate. Direct optimization of camera poses and usage of estimated depths in neural radiance field algorithms usually do not produce good results because of the coupling between poses and depths, and inaccuracies in monocular depth estimation. In this paper, we leverage the recent 3D Gaussian splatting method to develop a novel construct-and-optimize method for sparse view synthesis without camera poses. Specifically, we construct a solution progressively by using monocular depth and projecting pixels back into the 3D world. During construction, we optimize the solution by detecting 2D correspondences between training views and the corresponding rendered images. We develop a unified differentiable pipeline for camera registration and adjustment of both camera poses and depths, followed by back-projection. We also introduce a novel notion of an expected surface in Gaussian splatting, which is critical to our optimization. These steps enable a coarse solution, which can then be low-pass filtered and refined using standard optimization methods. We demonstrate results on the Tanks and Temples and Static Hikes datasets with as few as three widely-spaced views, showing significantly better quality than competing methods, including those with approximate camera pose information. Moreover, our results improve with more views and outperform previous InstantNGP and Gaussian Splatting algorithms even when using half the dataset.
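The abstract's construction step, back-projecting pixels into the 3D world using monocular depth and a camera pose estimate, can be illustrated with a minimal sketch. The function name, arguments, and use of NumPy below are assumptions for illustration, not the paper's implementation; the paper's actual pipeline is differentiable and jointly adjusts poses and depths.

```python
import numpy as np

def backproject_pixels(depth, K, cam_to_world):
    """Back-project a per-pixel monocular depth map into 3D world points.

    depth        : (H, W) array of depths along the camera z-axis
    K            : (3, 3) camera intrinsics
    cam_to_world : (4, 4) camera-to-world pose estimate

    Returns an (H*W, 3) array of world-space points, e.g. to seed
    Gaussian centers in a progressively constructed solution
    (hypothetical usage, not the paper's exact procedure).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Unproject to camera space: X_cam = depth * K^{-1} [u, v, 1]^T
    rays = pix @ np.linalg.inv(K).T
    pts_cam = rays * depth.reshape(-1, 1)

    # Transform to world space with the estimated camera pose.
    pts_hom = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_world = (pts_hom @ cam_to_world.T)[:, :3]
    return pts_world
```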
