
# FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting

Novel view synthesis from limited observations remains an important and long-standing task. However, existing NeRF-based few-shot view synthesis methods often sacrifice efficiency in order to obtain an accurate 3D representation. To address this challenge, we propose a few-shot view synthesis framework based on 3D Gaussian Splatting that enables real-time, photo-realistic view synthesis with as few as three training views. The proposed method, dubbed FSGS, handles the extremely sparse SfM point initialization with a carefully designed Gaussian Unpooling process. Our method iteratively distributes new Gaussians around the most representative locations and subsequently fills in local detail in vacant areas. We also integrate a large-scale pre-trained monocular depth estimator into the Gaussian optimization process, leveraging online augmented views to guide the geometric optimization towards an optimal solution. Starting from sparse points observed from limited input viewpoints, our FSGS can accurately grow into unseen regions, comprehensively covering the scene and boosting the rendering quality of novel views. Overall, FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets, including LLFF, Mip-NeRF360, and Blender.
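The two ideas sketched in the abstract, Gaussian Unpooling to densify a sparse SfM initialization and depth guidance from a pre-trained monocular estimator, can be illustrated with a minimal example. The snippet below is not the authors' implementation: the function names (`grow_gaussians`, `depth_correlation_loss`) and parameters (`k`, `gap_threshold`) are hypothetical, and the actual densification rules and loss terms used in FSGS may differ.

```python
# Illustrative sketch only (NumPy); not the FSGS implementation.
import numpy as np

def grow_gaussians(means, k=2, gap_threshold=0.1):
    """Gaussian-Unpooling-style densification sketch: for each Gaussian
    centre, look at its k nearest neighbours and spawn a new Gaussian at
    the midpoint of any edge whose length exceeds gap_threshold, so that
    vacant regions between sparse points get filled."""
    n = means.shape[0]
    diff = means[:, None, :] - means[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)           # pairwise distances, N x N
    np.fill_diagonal(dist, np.inf)                 # ignore self-distances
    edges = set()
    for i in range(n):
        for j in np.argsort(dist[i])[:k]:          # k nearest neighbours of i
            j = int(j)
            if np.isfinite(dist[i, j]) and dist[i, j] > gap_threshold:
                edges.add((min(i, j), max(i, j)))  # dedupe symmetric edges
    if not edges:
        return means
    new_centres = [0.5 * (means[i] + means[j]) for i, j in edges]
    return np.vstack([means, np.array(new_centres)])

def depth_correlation_loss(rendered_depth, mono_depth, eps=1e-6):
    """Depth-guidance sketch: encourage the rendered depth map to agree with
    a pre-trained monocular depth estimate up to scale and shift by
    maximizing their Pearson correlation (loss is 0 at perfect correlation)."""
    r = rendered_depth.reshape(-1) - rendered_depth.mean()
    m = mono_depth.reshape(-1) - mono_depth.mean()
    corr = (r * m).sum() / (np.linalg.norm(r) * np.linalg.norm(m) + eps)
    return 1.0 - corr

# Example: three sparse SfM-initialised centres grow extra in-between Gaussians.
sparse_points = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]])
print(grow_gaussians(sparse_points).shape)         # (6, 3): three new midpoints
```

In a training loop, a densification step of this kind would typically run periodically between optimization iterations, with the depth term added to the photometric loss on both the training views and the online augmented views.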
