
DGTR: Distributed Gaussian Turbo-Reconstruction for Sparse-View Vast Scenes

Novel-view synthesis (NVS) approaches play a critical role in vast scene reconstruction, but they rely heavily on dense image inputs and prolonged training times, making them unsuitable in settings where computational resources are limited. Moreover, few-shot methods often suffer from poor reconstruction quality in vast environments. This paper presents DGTR, a novel distributed framework for efficient Gaussian reconstruction of sparse-view vast scenes. Our approach divides the scene into regions that are processed independently by drones with sparse image inputs. Using a feed-forward Gaussian model, we predict high-quality Gaussian primitives, then apply a global alignment algorithm to ensure geometric consistency across regions. Synthetic views and depth priors are incorporated to further enhance training, while a distillation-based model aggregation mechanism enables efficient reconstruction. Our method achieves high-quality large-scale scene reconstruction and novel-view synthesis with significantly reduced training time, outperforming existing approaches in both speed and scalability. We demonstrate the effectiveness of our framework on vast aerial scenes, achieving high-quality results within minutes.
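To make the pipeline in the abstract concrete, below is a minimal structural sketch of the stages it describes: per-region feed-forward reconstruction, global alignment, and distillation-based aggregation guided by synthetic views and depth priors. This is only an illustration of the described workflow; every class, function, and parameter name is a hypothetical placeholder, not the paper's actual interface.

```python
# Hypothetical sketch of the DGTR pipeline described in the abstract.
# All names are assumptions; each stage is stubbed so the skeleton runs end to end.

from dataclasses import dataclass, field
from typing import List


@dataclass
class GaussianModel:
    """Stand-in for a set of 3D Gaussian primitives (means, covariances, colors, opacities)."""
    primitives: List[dict] = field(default_factory=list)


def reconstruct_region(sparse_views: List[str]) -> GaussianModel:
    """Feed-forward prediction of Gaussian primitives from one drone's sparse images."""
    # Stub: in DGTR this is a learned feed-forward model, not per-scene optimization.
    return GaussianModel(primitives=[{"source_view": v} for v in sparse_views])


def globally_align(models: List[GaussianModel]) -> List[GaussianModel]:
    """Bring per-region Gaussians into one consistent world frame."""
    # Stub: a real implementation would estimate and apply inter-region transforms.
    return models


def distill(models: List[GaussianModel], synthetic_views, depth_priors) -> GaussianModel:
    """Aggregate regional models into a single scene model via distillation."""
    # Stub: merges primitives; per the abstract, this stage is additionally
    # supervised with synthetic views and depth priors.
    merged = GaussianModel()
    for m in models:
        merged.primitives.extend(m.primitives)
    return merged


def dgtr_pipeline(region_image_sets: List[List[str]],
                  synthetic_views, depth_priors) -> GaussianModel:
    # 1. Each region is reconstructed independently (e.g., one drone per region).
    regional = [reconstruct_region(views) for views in region_image_sets]
    # 2. Global alignment enforces geometric consistency across regions.
    aligned = globally_align(regional)
    # 3. Distillation-based aggregation yields the final scene model.
    return distill(aligned, synthetic_views, depth_priors)
```

The key design point this skeleton mirrors is that the per-region reconstructions are independent, so the first stage parallelizes naturally across drones, with consistency deferred to the alignment and aggregation stages.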
