# E-3DGS: Gaussian Splatting with Exposure and Motion Events

Estimating Neural Radiance Fields (NeRFs) from images captured under optimal conditions has been extensively explored in the vision community. However, robotic applications often face challenges such as motion blur, insufficient illumination, and high computational overhead, which adversely affect downstream tasks like navigation, inspection, and scene visualization. To address these challenges, we propose E-3DGS, a novel event-based approach that partitions events into motion events (from camera or object movement) and exposure events (from camera exposure), using the former to handle fast-motion scenes and the latter to reconstruct grayscale images for high-quality training and optimization of event-based 3D Gaussian Splatting (3DGS). We introduce a novel integration of 3DGS with exposure events for high-quality reconstruction of explicit scene representations. Our versatile framework can operate on motion events alone for 3D reconstruction, enhance quality using exposure events, or adopt a hybrid mode that balances quality and efficiency by optimizing with initial exposure events followed by high-speed motion events. We also introduce EME-3D, a real-world 3D dataset with exposure events, motion events, camera calibration parameters, and sparse point clouds. Our method is faster and delivers better reconstruction quality than event-based NeRF, and, because it uses a single event sensor, it is more cost-effective than NeRF methods that combine event and RGB data. By combining motion and exposure events, E-3DGS sets a new benchmark for event-based 3D reconstruction, with robust performance in challenging conditions and lower hardware demands.
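The two mechanics named in the abstract — splitting the event stream by the camera's exposure window, and integrating exposure events into a grayscale frame that can supervise 3DGS training — can be sketched compactly. The snippet below is a minimal illustration, not the paper's implementation: the structured-array field names (`x`, `y`, `t`, `p`), the "inside the exposure window" partition rule, and the contrast threshold of 0.2 are all assumptions made for the sketch.

```python
import numpy as np

def partition_events(events, exposure_start, exposure_end):
    """Split an event stream into (exposure, motion) events by timestamp.

    Assumed layout: structured array with fields x, y (pixel coords),
    t (seconds), p (polarity in {-1, +1}). Treating events inside the
    camera's exposure window as exposure events is an illustrative
    reading of the abstract, not the paper's exact criterion.
    """
    in_window = (events["t"] >= exposure_start) & (events["t"] <= exposure_end)
    return events[in_window], events[~in_window]

def integrate_exposure_events(exposure_events, height, width, contrast=0.2):
    """Accumulate exposure events into a normalized grayscale image.

    Each event adds +/- one contrast step of log-brightness at its pixel;
    summing the steps and exponentiating approximates the intensity change
    over the exposure window (contrast=0.2 is a typical assumed threshold).
    """
    log_img = np.zeros((height, width), dtype=np.float32)
    np.add.at(log_img,
              (exposure_events["y"], exposure_events["x"]),
              contrast * exposure_events["p"].astype(np.float32))
    img = np.exp(log_img)                            # back to linear intensity
    return (img - img.min()) / (np.ptp(img) + 1e-8)  # normalize to [0, 1]
```

A grayscale frame produced this way would serve as a photometric target for optimizing the Gaussians, while the motion events outside the window supervise fast camera motion between exposures — matching the hybrid mode the abstract describes.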
