Dynamic Gaussian splatting has led to impressive advances in scene reconstruction and novel-view synthesis. Existing methods, however, rely heavily on poses and Gaussian initializations pre-computed by Structure-from-Motion (SfM) algorithms or captured by expensive sensors. This paper addresses this issue for the first time by integrating self-supervised visual odometry (VO) into our pose-free dynamic Gaussian method (VDG) to boost pose and depth initialization and static-dynamic decomposition. Moreover, VDG works with RGB image input alone, and it reconstructs dynamic scenes faster and at larger scale than prior pose-free dynamic view-synthesis methods. We demonstrate the robustness of our approach through extensive quantitative and qualitative experiments. Our results show favorable performance over state-of-the-art dynamic view-synthesis methods.
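To make the abstract's pipeline concrete, the sketch below is an illustrative reading of it, not the authors' released code. It assumes a self-supervised VO network outputs a per-frame depth map and a warped next frame under the predicted relative pose; the depth is unprojected to seed Gaussian centers, and photometric residuals surviving the VO warp give a rough static-dynamic mask. All names here (`unproject_depth`, `dynamic_mask`, the 0.1 residual threshold) are hypothetical stand-ins.

```python
# Minimal sketch (assumptions labeled): seeding a pose-free dynamic Gaussian
# pipeline from self-supervised VO outputs, i.e. a depth map and a warped frame.
import numpy as np

def unproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift an HxW depth map to an (H*W, 3) camera-space point cloud.
    These points would serve as Gaussian centers at initialization."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                                  # back-projected rays
    return rays * depth.reshape(-1, 1)                               # scale rays by depth

def dynamic_mask(frame_t: np.ndarray, frame_t1_warped: np.ndarray,
                 thresh: float = 0.1) -> np.ndarray:
    """Flag pixels whose photometric residual survives the VO warp:
    large residuals suggest independently moving (dynamic) content."""
    residual = np.abs(frame_t - frame_t1_warped).mean(axis=-1)
    return residual > thresh  # True = dynamic, False = static

# Toy usage with random stand-in data in place of real VO predictions.
K = np.array([[500.0,   0.0, 160.0],
              [  0.0, 500.0, 120.0],
              [  0.0,   0.0,   1.0]])
depth = np.random.uniform(1.0, 10.0, size=(240, 320))   # VO-predicted depth (stand-in)
gaussian_centers = unproject_depth(depth, K)             # (H*W, 3) Gaussian init
frame_t = np.random.rand(240, 320, 3)
frame_warp = np.random.rand(240, 320, 3)                 # frame t+1 warped by VO pose
mask = dynamic_mask(frame_t, frame_warp)                 # static-dynamic decomposition
print(gaussian_centers.shape, mask.mean())
```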