Understanding dynamic 3D scenes is fundamental for various applications, including extended reality (XR) and autonomous driving. Effectively integrating semantic information into 3D reconstruction enables a holistic representation that opens opportunities for immersive and interactive applications. We introduce SADG, Segment Any Dynamic Gaussian Without Object Trackers, a novel approach that combines dynamic Gaussian Splatting representation and semantic information without reliance on object IDs. In contrast to existing works, we do not rely on supervision based on object identities to enable consistent segmentation of dynamic 3D objects. To this end, we propose to learn semantically-aware features by leveraging masks generated from the Segment Anything Model (SAM) and utilizing our novel contrastive learning objective based on hard pixel mining. The learned Gaussian features can be effectively clustered without further post-processing. This enables fast computation for further object-level editing, such as object removal, composition, and style transfer, by manipulating the Gaussians in the scene. We further extend several dynamic novel-view datasets with segmentation benchmarks to enable testing of learned feature fields from unseen viewpoints. We evaluate SADG on the proposed benchmarks and demonstrate the superior performance of our approach in segmenting objects within dynamic scenes, as well as its effectiveness for further downstream editing tasks.
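To make the idea of a mask-guided contrastive objective with hard pixel mining concrete, the sketch below shows one plausible formulation over rendered per-pixel Gaussian features and SAM mask labels. The function name, margin value, sentinel constants, and mining strategy are illustrative assumptions under this reading, not the exact objective used in SADG.

```python
# Illustrative sketch (assumption, not the authors' exact objective): a
# mask-guided contrastive loss with hard pixel mining over rendered features.
import torch
import torch.nn.functional as F

def hard_pixel_contrastive_loss(features, mask_ids, margin=0.5):
    """features: (N, D) rendered per-pixel Gaussian features,
    mask_ids: (N,) integer SAM mask label per sampled pixel."""
    feats = F.normalize(features, dim=-1)          # work in cosine-similarity space
    sim = feats @ feats.t()                        # (N, N) pairwise similarities
    same = mask_ids.unsqueeze(0) == mask_ids.unsqueeze(1)
    eye = torch.eye(len(mask_ids), dtype=torch.bool, device=features.device)

    pos = same & ~eye                              # pixels from the same SAM mask
    neg = ~same                                    # pixels from different masks

    # Hard positive: least similar pixel sharing the mask (pull it closer).
    hard_pos = torch.where(pos, sim, torch.full_like(sim, 2.0)).min(dim=1).values
    # Hard negative: most similar pixel from another mask (push it away).
    hard_neg = torch.where(neg, sim, torch.full_like(sim, -2.0)).max(dim=1).values

    has_pos = pos.any(dim=1)
    has_neg = neg.any(dim=1)
    zero = features.sum() * 0.0                    # keeps the graph if a term is empty
    loss_pos = (1.0 - hard_pos[has_pos]).clamp(min=0).mean() if has_pos.any() else zero
    loss_neg = (hard_neg[has_neg] - (1.0 - margin)).clamp(min=0).mean() if has_neg.any() else zero
    return loss_pos + loss_neg

# Usage with random data, e.g. 1024 sampled pixels with 16-D features:
feats = torch.randn(1024, 16, requires_grad=True)
ids = torch.randint(0, 8, (1024,))
loss = hard_pixel_contrastive_loss(feats, ids)
loss.backward()
```

A loss of this form pulls features of pixels that fall inside the same SAM mask together while pushing the hardest confusable pixels from other masks apart, which is what would let the resulting per-Gaussian features be clustered directly, without further post-processing, for object-level selection and editing.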