SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model
We introduce SeaSplat, a method that enables real-time rendering of underwater scenes by leveraging recent advances in 3D radiance fields. Underwater scenes are challenging visual environments, as rendering through a medium such as water introduces range- and color-dependent effects on image capture. We constrain 3D Gaussian Splatting (3DGS), a recent advance in radiance fields that enables rapid training and real-time rendering of full 3D scenes, with a physically grounded underwater image formation model. Applying SeaSplat to real-world scenes from the SeaThru-NeRF dataset, to a scene collected by an underwater vehicle in the US Virgin Islands, and to simulation-degraded real-world scenes, we not only see improved quantitative performance when rendering novel viewpoints with the medium present, but are also able to recover the underlying true color of the scene and restore renders free of the intervening medium. We show that the underwater image formation model helps learn scene structure, yielding better depth maps, and that our improvements preserve the significant computational advantages afforded by the 3D Gaussian representation.
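For context, the physically grounded underwater image formation model referenced here typically follows the revised model used by SeaThru-style methods; the exact parameterization in SeaSplat may differ, but a common form expresses the observed color $I_c$ in channel $c$ as the true scene color $J_c$ attenuated with range $z$, plus a backscatter term that saturates toward the veiling light $B_c^\infty$:

$$ I_c = J_c \, e^{-\beta_c^D z} \;+\; B_c^\infty \left(1 - e^{-\beta_c^B z}\right), $$

where $\beta_c^D$ and $\beta_c^B$ are channel-dependent attenuation and backscatter coefficients. This is a sketch of the standard model, not necessarily the paper's exact formulation; in a 3DGS setting, $z$ would come from the rendered depth and $J_c$ from the splatted scene color.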