GaussianObject: Just Taking Four Images to Get A High-Quality 3D Object with Gaussian Splatting

Reconstructing and rendering 3D objects from highly sparse views is of critical importance for promoting applications of 3D vision techniques and improving user experience. However, images from sparse views contain only very limited 3D information, leading to two significant challenges: 1) difficulty in building multi-view consistency, as there are too few images for matching; 2) partially omitted or heavily compressed object information due to insufficient view coverage. To tackle these challenges, we propose GaussianObject, a framework that represents and renders 3D objects with Gaussian splatting and achieves high rendering quality from only 4 input images. We first introduce visual hull and floater elimination techniques, which explicitly inject structure priors into the initial optimization process to help build multi-view consistency, yielding a coarse 3D Gaussian representation. We then construct a Gaussian repair model based on diffusion models to supplement the omitted object information, further refining the Gaussians. We design a self-generating strategy to obtain image pairs for training the repair model. GaussianObject is evaluated on several challenging datasets, including MipNeRF360, OmniObject3D, and OpenIllumination, achieving strong reconstruction results from only 4 views and significantly outperforming previous state-of-the-art methods.
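The visual hull initialization mentioned above can be pictured as simple silhouette-based space carving: voxels that project inside the object mask in all available views are kept and used to seed the coarse Gaussians. The sketch below illustrates this idea under assumed inputs (object masks, pinhole intrinsics `K`, and world-to-camera matrices); the function name and parameters are illustrative and not the paper's actual code.

```python
import numpy as np

def carve_visual_hull(masks, intrinsics, world_to_cam, grid_res=64, extent=1.0):
    """Keep voxel centers whose projection lands inside the object mask in every view.
    Surviving centers could seed a coarse set of 3D Gaussians.
    All names and parameters here are illustrative, not the paper's API."""
    # Candidate voxel centers on a regular grid in world space.
    xs = np.linspace(-extent, extent, grid_res)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
    keep = np.ones(len(grid), dtype=bool)

    for mask, K, w2c in zip(masks, intrinsics, world_to_cam):
        # Project voxel centers with a pinhole model: world -> camera -> pixel.
        pts_h = np.concatenate([grid, np.ones((len(grid), 1))], axis=1)  # (N, 4)
        cam = (w2c @ pts_h.T).T[:, :3]                                   # (N, 3)
        in_front = cam[:, 2] > 1e-6
        uv = (K @ cam.T).T
        uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)

        h, w = mask.shape
        in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
        # A voxel survives only if it lies inside the silhouette in every view.
        keep &= in_front & in_img & (mask[v, u] > 0)

    return grid[keep]  # candidate centers for the coarse Gaussian initialization
```

With only 4 views the carved hull is a loose bound on the object, which is why the abstract pairs it with floater elimination during initial optimization and a diffusion-based repair model afterwards to fill in information the sparse views never observed.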