Modeling Ambient Scene Dynamics for Free-view Synthesis

We introduce a novel method for dynamic free-view synthesis of ambient scenes from a monocular capture, bringing an immersive quality to the viewing experience. Our method builds upon recent advancements in 3D Gaussian Splatting (3DGS) that can faithfully reconstruct complex static scenes. Previous attempts to extend 3DGS to represent dynamics have been confined to bounded scenes or require multi-camera captures, and often fail to generalize to unseen motions, limiting their practical application. Our approach overcomes these constraints by leveraging the periodicity of ambient motions to learn a motion trajectory model, coupled with careful regularization. We also propose important practical strategies to improve the visual quality of the baseline 3DGS static reconstructions and to improve memory efficiency, which is critical for GPU-memory-intensive learning. We demonstrate high-quality photorealistic novel view synthesis of several ambient natural scenes with intricate textures and fine structural elements.
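The abstract describes the motion model only at a high level. As a rough illustration of what a periodic motion trajectory model for 3DGS could look like, the minimal sketch below parameterizes a per-Gaussian displacement with a low-order Fourier basis in time; the class name, the basis order `K`, and the shared period are hypothetical assumptions for illustration, not the paper's actual parameterization or regularization scheme.

```python
import torch
import torch.nn as nn

class PeriodicTrajectory(nn.Module):
    """Hypothetical sketch: per-Gaussian periodic displacement via a
    low-order Fourier basis in time. The basis order K and the shared
    period are assumptions, not the paper's parameterization."""

    def __init__(self, num_gaussians: int, K: int = 4, period: float = 2.0):
        super().__init__()
        self.K = K
        self.period = period
        # Learnable sine/cosine coefficients per Gaussian, per xyz axis:
        # shape (N, K, 2, 3).
        self.coeffs = nn.Parameter(torch.zeros(num_gaussians, K, 2, 3))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: scalar time tensor; returns (N, 3) displacements that would
        # be added to the static 3DGS means before rasterization.
        k = torch.arange(1, self.K + 1, device=t.device, dtype=t.dtype)
        phase = 2 * torch.pi * k * t / self.period                      # (K,)
        basis = torch.stack([torch.sin(phase), torch.cos(phase)], -1)   # (K, 2)
        return torch.einsum('nkcd,kc->nd', self.coeffs, basis)          # (N, 3)

model = PeriodicTrajectory(num_gaussians=100_000)
delta_xyz = model(torch.tensor(0.5))  # displacements at t = 0.5 s
```

Because every Gaussian's trajectory is constrained to be periodic with a shared period, motion at unseen times is extrapolated by construction; in practice the coefficients would also be regularized (e.g. penalizing high-order terms) to keep the learned trajectories smooth, in the spirit of the "careful regularization" the abstract mentions.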
