Pano2Room: Novel View Synthesis from a Single Indoor Panorama

Recent single-view 3D generative methods have made significant advancements by leveraging knowledge distilled from extensive 3D object datasets. However, challenges persist in the synthesis of 3D scenes from a single view, primarily due to the complexity of real-world environments and the limited availability of high-quality prior resources. In this paper, we introduce a novel approach called Pano2Room, designed to automatically reconstruct high-quality 3D indoor scenes from a single panoramic image. These panoramic images can be easily generated using a panoramic RGBD inpainter from captures at a single location with any camera. The key idea is to initially construct a preliminary mesh from the input panorama, and iteratively refine this mesh using a panoramic RGBD inpainter while collecting photo-realistic 3D-consistent pseudo novel views. Finally, the refined mesh is converted into a 3D Gaussian Splatting field and trained with the collected pseudo novel views. This pipeline enables the reconstruction of real-world 3D scenes, even in the presence of large occlusions, and facilitates the synthesis of photo-realistic novel views with detailed geometry. Extensive qualitative and quantitative experiments have been conducted to validate the superiority of our method in single-panorama indoor novel view synthesis compared to the state-of-the-art.
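The abstract describes a three-stage pipeline (preliminary mesh from the panorama, iterative refinement with an RGBD inpainter while collecting pseudo views, then conversion to a 3D Gaussian Splatting field trained on those views). The sketch below is a minimal, hypothetical illustration of that control flow only; every function and class name here (`estimate_panorama_depth`, `inpaint_rgbd`, `mesh_to_gaussians`, etc.) is an assumed placeholder with toy stub bodies, not the authors' code or API.

```python
"""Structural sketch of a Pano2Room-style pipeline (assumptions, not the paper's code).
All stages are toy placeholders so the script runs end to end with numpy only."""

from dataclasses import dataclass
from typing import List, Tuple

import numpy as np

H, W = 64, 128  # toy perspective-view resolution


@dataclass
class PseudoView:
    pose: np.ndarray    # 4x4 camera-to-world matrix
    rgb: np.ndarray     # HxWx3 image
    depth: np.ndarray   # HxW depth map


# --- placeholder stages (stand-ins for the real components) ------------------

def estimate_panorama_depth(pano_rgb: np.ndarray) -> np.ndarray:
    """Stand-in for a monocular panoramic depth estimator."""
    return np.ones(pano_rgb.shape[:2])


def build_preliminary_mesh(pano_rgb: np.ndarray, pano_depth: np.ndarray) -> dict:
    """Back-project the panorama into a coarse textured mesh (placeholder dict)."""
    return {"rgb": pano_rgb, "depth": pano_depth}


def render_mesh(mesh: dict, pose: np.ndarray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    """Render the mesh from a novel pose; disocclusions appear as holes."""
    rgb = np.zeros((H, W, 3))
    depth = np.ones((H, W))
    hole_mask = np.zeros((H, W), dtype=bool)
    hole_mask[:, : W // 4] = True  # pretend a quarter of the view is disoccluded
    return rgb, depth, hole_mask


def inpaint_rgbd(rgb: np.ndarray, depth: np.ndarray, mask: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    """Stand-in for the panoramic RGBD inpainter filling the holes."""
    rgb, depth = rgb.copy(), depth.copy()
    rgb[mask] = 0.5
    depth[mask] = depth[~mask].mean()
    return rgb, depth


def fuse_into_mesh(mesh: dict, rgb: np.ndarray, depth: np.ndarray, pose: np.ndarray) -> dict:
    """Fuse the completed RGBD back into the mesh (no-op placeholder)."""
    return mesh


def mesh_to_gaussians(mesh: dict) -> dict:
    """Initialize a 3D Gaussian Splatting field from the refined mesh (placeholder)."""
    return {"num_gaussians": 10_000}


def optimize_gaussians(gaussians: dict, views: List[PseudoView]) -> dict:
    """Fit the Gaussians to the collected pseudo novel views (placeholder)."""
    return gaussians


# --- pipeline -----------------------------------------------------------------

def pano2room_like(pano_rgb: np.ndarray, candidate_poses: List[np.ndarray]) -> dict:
    # 1. Preliminary mesh from the single input panorama.
    mesh = build_preliminary_mesh(pano_rgb, estimate_panorama_depth(pano_rgb))
    pseudo_views: List[PseudoView] = []

    # 2. Iterative refinement: render, inpaint disocclusions, fuse, collect views.
    for pose in candidate_poses:
        rgb, depth, holes = render_mesh(mesh, pose)
        if holes.any():
            rgb, depth = inpaint_rgbd(rgb, depth, holes)
            mesh = fuse_into_mesh(mesh, rgb, depth, pose)
        pseudo_views.append(PseudoView(pose, rgb, depth))

    # 3. Convert the refined mesh to 3DGS and train on the pseudo views.
    return optimize_gaussians(mesh_to_gaussians(mesh), pseudo_views)


if __name__ == "__main__":
    pano = np.random.rand(H, 2 * W, 3)        # toy equirectangular input
    poses = [np.eye(4) for _ in range(4)]     # toy candidate cameras
    print(pano2room_like(pano, poses))
```

The point of the sketch is the data flow: inpainted renders are kept as 3D-consistent pseudo views so that the later Gaussian Splatting stage has photo-realistic supervision even in regions occluded from the original capture position.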
