TorchSparse is a high-performance neural network library for point cloud processing.
Here at Zendar, we rely on a fork of this repo.
If you need to update this code, the steps to do so are:
- make the code changes required, here in `ZendarInc/torchsparse`
  - note: if any Python dependencies were updated, ensure that you update the `pyproject.toml` and `pdm.lock` files correctly, using the same PDM version as our CI runners (see `.github/workflows/post-commit.yaml` for reference). The version of PDM used here likely does not match that used in other repositories, like `RadarProcessor`.
- increment the version in `torchsparse/version.py` appropriately (see the sketch after this list)
- merge those changes into `zendar-main`
- the `post-commit` GitHub action should then build the wheel and make it available for you to download on the action summary page. Unzip the file to get a `.whl` file, create a GitHub release (versioned the same as the code), and attach the resulting wheel file to it.
- in the `RadarProcessor` repo, update the `pyproject.toml` file appropriately
- merge your changes into the `develop` branch in `RadarProcessor`
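As an example, the version bump in `torchsparse/version.py` is a one-line change (a sketch only; check the file itself for its actual current contents):

```python
# torchsparse/version.py -- sketch; the real file may differ.
__version__ = '1.4.5'  # keep in sync with the GitHub release you will create
```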
To aid with integration of the new TorchSparse version 2.1.0 into `RadarProcessor`, the module `torchsparse` has been renamed `torchsparseplusplus`. This will be maintained on the branch `zendar-main-tspp`. TorchSparse version 1.4.5 will be maintained on the branch `zendar-main`.
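In practice this means imports change on the `zendar-main-tspp` branch. A minimal sketch, assuming the renamed module mirrors the original package layout (verify against the branch itself):

```python
# On zendar-main-tspp (v2.1.0), the package is imported under its new name.
# Aliasing it can keep downstream code unchanged (assumed layout).
import torchsparseplusplus as torchsparse
from torchsparseplusplus import SparseTensor
```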
TorchSparse depends on the Google Sparse Hash library.

- On Ubuntu, it can be installed by `sudo apt-get install libsparsehash-dev`.
- On macOS, it can be installed by `brew install google-sparsehash`.
- You can also compile the library locally (if you do not have sudo permission) and add the library path to the environment variable `CPLUS_INCLUDE_PATH`.
The latest released TorchSparse (v1.4.0) can then be installed by `pip install --upgrade git+https://github.com/mit-han-lab/[email protected]`.

If you use TorchSparse in your code, please remember to specify the exact version in your dependencies.
We compare TorchSparse with MinkowskiEngine (latency measured on an NVIDIA GTX 1080Ti):

| | MinkowskiEngine v0.4.3 | TorchSparse v1.0.0 |
|---|---|---|
| MinkUNet18C (MACs / 10) | 224.7 ms | 124.3 ms |
| MinkUNet18C (MACs / 4) | 244.3 ms | 160.9 ms |
| MinkUNet18C (MACs / 2.5) | 269.6 ms | 214.3 ms |
| MinkUNet18C | 323.5 ms | 294.0 ms |
A sparse tensor (`SparseTensor`) is the main data structure for point clouds. It has two data fields:

- Coordinates (`coords`): a 2D integer tensor with a shape of N x 4, where the first three dimensions correspond to quantized x, y, z coordinates, and the last dimension denotes the batch index.
- Features (`feats`): a 2D tensor with a shape of N x C, where C is the number of feature channels.
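For illustration, a tiny hand-built `SparseTensor` (the values are arbitrary; real pipelines produce `coords` and `feats` via quantization, as shown below):

```python
import torch
from torchsparse import SparseTensor

# Two voxels, both in batch 0; columns are x, y, z, batch index (N x 4).
coords = torch.tensor([[0, 0, 0, 0],
                       [1, 0, 2, 0]], dtype=torch.int)
# One 3-channel feature vector per voxel (N x C).
feats = torch.tensor([[0.5, 1.0, -0.2],
                      [0.1, 0.0,  0.7]], dtype=torch.float)
pc = SparseTensor(coords=coords, feats=feats)
```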
Most existing datasets provide raw point cloud data with float coordinates. We can use `sparse_quantize` (provided in `torchsparse.utils.quantize`) to voxelize x, y, z coordinates and remove duplicates:

```python
import numpy as np
import torch
from torchsparse import SparseTensor
from torchsparse.utils.quantize import sparse_quantize

coords -= np.min(coords, axis=0, keepdims=True)  # shift coordinates to be non-negative
coords, indices = sparse_quantize(coords, voxel_size, return_index=True)
coords = torch.tensor(coords, dtype=torch.int)
feats = torch.tensor(feats[indices], dtype=torch.float)
tensor = SparseTensor(coords=coords, feats=feats)
```
We can then use `sparse_collate_fn` (provided in `torchsparse.utils.collate`) to assemble a batch of `SparseTensor`s (and add the batch dimension to `coords`). Please refer to this example for more details.
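A minimal batching sketch (assuming `tensor_a` and `tensor_b` are `SparseTensor`s built as above; the dict key `'input'` is just an illustrative name):

```python
from torchsparse.utils.collate import sparse_collate_fn

# Each sample is a dict; sparse_collate_fn batches the SparseTensor fields
# and appends the batch index as the last coordinate column.
batch = sparse_collate_fn([{'input': tensor_a}, {'input': tensor_b}])
inputs = batch['input']  # a single batched SparseTensor

# It is also commonly passed straight to a DataLoader:
# DataLoader(dataset, batch_size=..., collate_fn=sparse_collate_fn)
```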
The neural network interface in TorchSparse is very similar to PyTorch:

```python
from torch import nn
from torchsparse import nn as spnn

model = nn.Sequential(
    spnn.Conv3d(in_channels, out_channels, kernel_size),
    spnn.BatchNorm(out_channels),
    spnn.ReLU(True),
)
```
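Running the model is likewise PyTorch-like: the forward pass consumes and produces a `SparseTensor`. A hedged sketch, assuming `batch['input']` from the batching sketch above:

```python
# Sketch only: the model and data are assumed to be on the same device, and
# in_channels must match the feature channels of the input SparseTensor.
output = model(batch['input'])   # output is again a SparseTensor
print(output.feats.shape)        # N x out_channels
```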
If you use TorchSparse in your research, please use the following BibTeX entry:

```bibtex
@inproceedings{tang2020searching,
  title     = {{Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution}},
  author    = {Tang, Haotian and Liu, Zhijian and Zhao, Shengyu and Lin, Yujun and Lin, Ji and Wang, Hanrui and Han, Song},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2020}
}
```
TorchSparse is inspired by many existing open-source libraries, including (but not limited to) MinkowskiEngine, SECOND, and SparseConvNet.