LiDAR simulation plays a crucial role in closed-loop simulation for autonomous driving. Although recent advancements, such as the use of reconstructed meshes and Neural Radiance Fields (NeRF), have made progress in simulating the physical properties of LiDAR, these methods have struggled to achieve satisfactory frame rates and rendering quality. To address these limitations, we present LiDAR-GS, the first LiDAR Gaussian Splatting method for real-time, high-fidelity re-simulation of LiDAR sensor scans in public urban road scenes. Vanilla Gaussian Splatting, designed for camera models, cannot be directly applied to LiDAR re-simulation. To bridge the gap between passive cameras and active LiDAR, LiDAR-GS introduces a differentiable laser beam splatting scheme grounded in the LiDAR range-view model. This innovation enables precise surface splatting by projecting lasers onto micro cross-sections, effectively eliminating the artifacts associated with local affine approximations. Additionally, LiDAR-GS leverages Neural Gaussian Fields, which further integrate view-dependent cues, to represent key LiDAR properties that are influenced by the incident angle and external factors. Combining these practices with several essential adaptations, e.g., dynamic instance decomposition, our approach succeeds in simultaneously re-simulating depth, intensity, and ray-drop channels, achieving state-of-the-art results in both rendering frame rate and quality on publicly available large-scale scene datasets.
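The abstract's laser beam splatting is grounded in the LiDAR range-view model, i.e., the standard spherical projection that maps each 3D return to a pixel of a range image via its azimuth and elevation. The sketch below illustrates that projection only; it is not the paper's splatting method, and the sensor parameters (64 beams, 1024 columns, a +10°/−30° vertical field of view) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def lidar_range_projection(points, H=64, W=1024, fov_up=10.0, fov_down=-30.0):
    """Project 3D LiDAR points (N, 3) onto an (H, W) range-view image.

    A minimal sketch of the spherical range-view model; H, W, and the
    vertical field of view are assumed sensor parameters.
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    depth = np.linalg.norm(points, axis=1)           # range per return
    yaw = np.arctan2(points[:, 1], points[:, 0])     # azimuth in (-pi, pi]
    pitch = np.arcsin(points[:, 2] / depth)          # elevation

    # Normalize angles into pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W                # column index
    v = (1.0 - (pitch - fov_down_rad) / fov) * H     # row index
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int64)

    # Scatter ranges into the image. Writing points far-to-near means the
    # nearest return wins when several points land in one pixel, giving a
    # simple z-buffer behavior; empty pixels stay at -1.
    order = np.argsort(-depth)
    image = np.full((H, W), -1.0, dtype=np.float32)
    image[v[order], u[order]] = depth[order]
    return image, u, v
```

A point straight ahead at (1, 0, 0) has zero azimuth and zero elevation, so it lands at column W/2 and, with the assumed +10°/−30° field of view, at row H/4.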