Jiawei Ren,
Lingdong Kong,
Liang Pan,
Ziwei Liu
S-Lab, Nanyang Technological University
PointCloud-C is the very first test suite for analyzing point cloud perception robustness under corruptions. It comprises two sets: ModelNet-C (ICML'22) for point cloud classification and ShapeNet-C (arXiv'22) for part segmentation.
Fig. Examples of point cloud corruptions in PointCloud-C.
Visit our project page to explore more details.
- [2024.03] - We have added a Leaderboard to this page. We welcome pull requests to submit your results!
- [2024.01] - The toolkit tailored for The RoboDrive Challenge has been released.
- [2023.12] - We are hosting The RoboDrive Challenge at ICRA 2024.
- [2023.03] - Want to test the robustness of your 3D perception models on real-world point clouds? Check out our recent work, Robo3D, a comprehensive suite that enables OoD robustness evaluation of 3D detectors and segmentors on our newly established datasets: KITTI-C, SemanticKITTI-C, nuScenes-C, and WOD-C.
- [2022.11] - The preprint of the PointCloud-C paper (ModelNet-C + ShapeNet-C) is available here.
- [2022.10] - We have successfully hosted the 2022 PointCloud-C Challenge. Congratulations to the winners: 🥇 Antins_cv, 🥈 DGPC & DGPS, and 🥉 BIT_gdxy_xtf.
- [2022.07] - Try a Gradio demo for PointCloud-C corruptions at Hugging Face Spaces!
- [2022.07] - Competition starts! Join now at our CodaLab page.
- [2022.06] - PointCloud-C is now live on Papers with Code. Join the benchmark today!
- [2022.06] - The 1st PointCloud-C challenge will be hosted in conjunction with the ECCV'22 SenseHuman workshop.
- [2022.06] - We are organizing the 1st PointCloud-C challenge! Click here to explore the competition details.
- [2022.05] - ModelNet-C is accepted to ICML 2022. Click here to check it out!
- Highlight
- Data Preparation
- Getting Started
- Leaderboard
- Benchmark Results
- Evaluation
- Customize Evaluation
- Build PointCloud-C
- TODO List
- License
- Acknowledgement
- Citation
Please refer to DATA_PREPARE.md for details on preparing the ModelNet-C and ShapeNet-C datasets.
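For a rough idea of what consuming the prepared data can look like, here is a minimal loading sketch, assuming the corrupted test splits are distributed as HDF5 files with `data` and `label` arrays; the file name and keys below are illustrative placeholders, and DATA_PREPARE.md remains the authoritative reference.

```python
# Minimal loading sketch. The file name and dataset keys ("data", "label")
# are assumptions for illustration -- follow DATA_PREPARE.md for the actual layout.
import h5py
import numpy as np

def load_corrupted_split(h5_path):
    """Load one corrupted test split: points of shape (N, P, 3) and labels of shape (N,)."""
    with h5py.File(h5_path, "r") as f:
        points = np.asarray(f["data"])             # per-sample point clouds
        labels = np.asarray(f["label"]).squeeze()  # integer class labels
    return points, labels

# Hypothetical usage: one corruption type at one severity level.
# points, labels = load_corrupted_split("modelnet_c/add_global_2.h5")
```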
Please refer to GET_STARTED.md to learn more about using this codebase.
| Method | Reference | Augmentation | mCE | Clean OA |
| --- | --- | --- | --- | --- |
| EPiC (RPC, WOLFMix) | Levi et al., ICCV 2023 | Yes | 0.501 | 0.927 |
| EPiC (PCT) | Levi et al., ICCV 2023 | No | 0.646 | 0.934 |
| WOLFMix (GDANet) | Ren et al., ICML 2022 | Yes | 0.571 | 0.934 |
| RPC | Ren et al., ICML 2022 | No | 0.863 | 0.930 |
| Method | Reference | Standalone | mCE | RmCE | Clean OA |
| --- | --- | --- | --- | --- | --- |
| DGCNN | Wang et al. | Yes | 1.000 | 1.000 | 0.926 |
| PointNet | Qi et al. | Yes | 1.422 | 1.488 | 0.907 |
| PointNet++ | Qi et al. | Yes | 1.072 | 1.114 | 0.930 |
| RSCNN | Liu et al. | Yes | 1.130 | 1.201 | 0.923 |
| SimpleView | Goyal et al. | Yes | 1.047 | 1.181 | 0.939 |
| GDANet | Xu et al. | Yes | 0.892 | 0.865 | 0.934 |
| CurveNet | Xiang et al. | Yes | 0.927 | 0.978 | 0.938 |
| PAConv | Xu et al. | Yes | 1.104 | 1.211 | 0.936 |
| PCT | Guo et al. | Yes | 0.925 | 0.884 | 0.930 |
| RPC | Ren et al. | Yes | 0.863 | 0.778 | 0.930 |
| OcCo (DGCNN) | Wang et al. | No | 1.248 | 1.262 | 0.922 |
| PointBERT | Yu et al. | No | 1.033 | 0.895 | 0.922 |
| PointMixUp (PointNet++) | Chen et al. | No | 1.028 | 0.785 | 0.915 |
| PointCutMix-K (PointNet++) | Zhang et al. | No | 0.806 | 0.808 | 0.933 |
| PointCutMix-R (PointNet++) | Zhang et al. | No | 0.796 | 0.809 | 0.929 |
| PointWOLF (DGCNN) | Kim et al. | No | 0.814 | 0.698 | 0.926 |
| RSMix (DGCNN) | Lee et al. | No | 0.745 | 0.839 | 0.930 |
| PointCutMix-R (DGCNN) | Zhang et al. | No | 0.627 | 0.504 | 0.926 |
| PointCutMix-K (DGCNN) | Zhang et al. | No | 0.659 | 0.585 | 0.932 |
| WOLFMix (DGCNN) | Ren et al. | No | 0.590 | 0.485 | 0.932 |
| WOLFMix (GDANet) | Ren et al. | No | 0.571 | 0.439 | 0.934 |
| WOLFMix (PCT) | Ren et al. | No | 0.574 | 0.653 | 0.934 |
| PointCutMix-K (PCT) | Zhang et al. | No | 0.644 | 0.565 | 0.931 |
| PointCutMix-R (PCT) | Zhang et al. | No | 0.608 | 0.518 | 0.928 |
| WOLFMix (RPC) | Ren et al. | No | 0.601 | 0.940 | 0.933 |
| Method | Reference | Standalone | mCE | RmCE | Clean mIoU |
| --- | --- | --- | --- | --- | --- |
| DGCNN | Wang et al. | Yes | 1.000 | 1.000 | 0.852 |
| PointNet | Qi et al. | Yes | 1.178 | 1.056 | 0.833 |
| PointNet++ | Qi et al. | Yes | 1.112 | 1.850 | 0.857 |
| OcCo-DGCNN | Wang et al. | No | 0.977 | 0.804 | 0.851 |
| OcCo-PointNet | Wang et al. | No | 1.130 | 0.937 | 0.832 |
| OcCo-PCN | Wang et al. | No | 1.173 | 0.882 | 0.815 |
| GDANet | Xu et al. | Yes | 0.923 | 0.785 | 0.857 |
| PAConv | Xu et al. | Yes | 0.927 | 0.848 | 0.859 |
| PointTransformers | Zhao et al. | Yes | 1.049 | 0.933 | 0.840 |
| PointMLP | Ma et al. | Yes | 0.977 | 0.810 | 0.853 |
| PointBERT | Yu et al. | No | 1.033 | 0.895 | 0.855 |
| PointMAE | Pang et al. | No | 0.927 | 0.703 | 0.860 |
*Note: Standalone indicates whether the method is a standalone architecture or a combination with augmentation or pretraining.*
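For context on the table columns: mCE and RmCE follow the corruption-error convention of ImageNet-C, normalizing each model's corruption error (and its accuracy drop relative to clean data) by DGCNN's. The snippet below is only a rough sketch of that computation with placeholder names, not the official evaluation code; see the paper and EVALUATE.md for the exact protocol.

```python
# Rough sketch of the mCE / RmCE computation (naming and structure are ours,
# not the official evaluation utilities).
import numpy as np

def mce(model_oa, dgcnn_oa):
    """Mean corruption error; model_oa / dgcnn_oa map corruption -> overall accuracy."""
    ces = [(1.0 - model_oa[c]) / (1.0 - dgcnn_oa[c]) for c in dgcnn_oa]
    return float(np.mean(ces))

def rmce(model_oa, model_clean_oa, dgcnn_oa, dgcnn_clean_oa):
    """Relative mCE: accuracy drop from clean data, normalized by DGCNN's drop."""
    rces = [(model_clean_oa - model_oa[c]) / (dgcnn_clean_oa - dgcnn_oa[c])
            for c in dgcnn_oa]
    return float(np.mean(rces))
```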
Evaluation commands are provided in EVALUATE.md.
We have provided evaluation utilities to help you evaluate on ModelNet-C using your own codebase. Please follow CUSTOMIZE.md.
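As a rough illustration of what such a custom evaluation can look like, here is a sketch of a loop over corruptions and severities; the corruption names, severity range, loader, and model API below are placeholders, and CUSTOMIZE.md documents the actual utilities.

```python
# Illustrative evaluation loop; all names here (corruption list, loader,
# model API) are placeholders for your own codebase.
import numpy as np

CORRUPTIONS = ["jitter", "scale", "dropout_global"]  # illustrative subset only
SEVERITIES = range(5)

def overall_accuracy(model, points, labels):
    preds = model.predict(points)  # your own inference routine
    return float((preds == labels).mean())

def evaluate_over_corruptions(model, load_split):
    """load_split(corruption, severity) -> (points, labels); returns corruption -> mean OA."""
    results = {}
    for corruption in CORRUPTIONS:
        accs = [overall_accuracy(model, *load_split(corruption, s)) for s in SEVERITIES]
        results[corruption] = float(np.mean(accs))
    return results
```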
You can also generate your own "PointCloud-C"! Follow the instructions in GENERATE.md.
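To give a feel for what a corruption is, here are two toy examples (Gaussian jitter and random point dropout); they are simplified sketches for illustration only, not the repository's implementations described in GENERATE.md.

```python
# Toy corruption sketches; the official corruption taxonomy and
# implementations live in GENERATE.md and the accompanying scripts.
import numpy as np

def jitter_points(points, sigma=0.01):
    """Add isotropic Gaussian noise to an (N, 3) point cloud."""
    return points + np.random.normal(scale=sigma, size=points.shape)

def drop_points(points, drop_ratio=0.25):
    """Randomly drop a fraction of the points."""
    n_keep = int(points.shape[0] * (1.0 - drop_ratio))
    keep = np.random.choice(points.shape[0], size=n_keep, replace=False)
    return points[keep]
```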
- Initial release.
- Add license. See here for more details.
- Release test sets. Download ModelNet-C and ShapeNet-C from our project page.
- Add evaluation scripts for classification models.
- Add evaluation scripts for part segmentation models.
- Add competition details.
- Clean and retouch codebase.
This work is under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
We acknowledge the use of the following public resources during the course of this work: SimpleView, PCT, GDANet, CurveNet, PAConv, RSMix, PointMixUp, PointCutMix, PointWOLF, PointTransformers, OcCo, PointMLP, PointBERT, and PointMAE.
If you find this work helpful, please kindly consider citing our papers:
@article{ren2022pointcloud-c,
title = {PointCloud-C: Benchmarking and Analyzing Point Cloud Perception Robustness under Corruptions},
author = {Jiawei Ren and Lingdong Kong and Liang Pan and Ziwei Liu},
journal = {Preprint},
year = {2022}
}
@inproceedings{ren2022modelnet-c,
title = {Benchmarking and Analyzing Point Cloud Classification under Corruptions},
author = {Jiawei Ren and Liang Pan and Ziwei Liu},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2022}
}