3D Gaussian Splatting has recently emerged as a highly promising technique for modeling static 3D scenes. In contrast to Neural Radiance Fields, it utilizes efficient rasterization, allowing for very fast rendering at high quality. However, its storage size is significantly higher, which hinders practical deployment, e.g.~on resource-constrained devices. In this paper, we introduce a compact scene representation that organizes the parameters of 3D Gaussian Splatting (3DGS) into a 2D grid with local homogeneity, achieving a drastic reduction in storage requirements without compromising visual quality during rendering. Central to our idea is the explicit exploitation of perceptual redundancies present in natural scenes. In essence, the inherent nature of a scene allows numerous permutations of the Gaussian parameters to represent it equivalently. To this end, we propose a novel, highly parallel algorithm that regularly arranges the high-dimensional Gaussian parameters into a 2D grid while preserving their neighborhood structure. During training, we further enforce local smoothness between the sorted parameters in the grid. The uncompressed Gaussians use the same structure as 3DGS, ensuring seamless integration with established renderers. Our method achieves a reduction factor of 8x to 26x in size for complex scenes with no increase in training time, marking a substantial leap forward in the domain of 3D scene distribution and consumption.
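The abstract does not spell out the form of the local smoothness term; as a minimal sketch, assuming a PyTorch-style training loop and that the Gaussian parameters have already been sorted into a grid of shape (H, W, C), such a regularizer could look like the following (all names, shapes, and the weight `lambda_smooth` are illustrative assumptions, not the paper's API):

```python
import torch

def grid_smoothness_loss(grid: torch.Tensor) -> torch.Tensor:
    """Penalize differences between neighboring cells of a 2D parameter grid.

    grid: tensor of shape (H, W, C), where each cell holds the C-dimensional
    parameters of one Gaussian after the neighborhood-preserving 2D sort.
    """
    # Differences along the vertical and horizontal grid axes.
    dy = grid[1:, :, :] - grid[:-1, :, :]
    dx = grid[:, 1:, :] - grid[:, :-1, :]
    # An L2 penalty on neighbor differences encourages locally smooth
    # (homogeneous) grids, which are cheaper to store after compression.
    return (dy ** 2).mean() + (dx ** 2).mean()

# Hypothetical usage: add the regularizer to the usual 3DGS rendering loss.
# grid = params.view(H, W, C)  # H * W == number of Gaussians
# loss = render_loss + lambda_smooth * grid_smoothness_loss(grid)
```

Here the smoothness term only shapes the layout of the parameters in the grid; the rendering loss is unchanged, which is consistent with the claim that the uncompressed Gaussians keep the standard 3DGS structure.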