Benchmark
Metrics
In the PointCloud-C benchmark, we choose DGCNN (Wang et al., 2019), a classic point cloud recognition method, as the baseline for normalizing the severity of different corruptions. Following the 2D robustness benchmark (Hendrycks & Dietterich, 2019), we adopt the mean corruption error (mCE) as the primary metric for measuring the robustness of different algorithms. We also report the relative mCE (RmCE) and the clean scores, i.e., clean overall accuracy (OA) for point cloud classification and clean mIoU for part segmentation. Please refer to our papers for the detailed definitions.
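For concreteness, the sketch below computes mCE and RmCE from per-corruption scores, each averaged over severity levels and normalized by the corresponding DGCNN baseline score. This is a minimal illustration of the definitions above, not the official evaluation code; consult the papers for the authoritative formulas.

```python
def mce_rmce(oa_clean, oa_corr, base_clean, base_corr):
    """Compute mCE and RmCE against the DGCNN baseline.

    oa_clean   -- clean OA (or mIoU) of the evaluated model
    oa_corr    -- dict: corruption name -> OA averaged over severity levels
    base_clean -- clean OA of the DGCNN baseline
    base_corr  -- dict: corruption name -> baseline OA per corruption
    """
    # Corruption Error: error rate normalized by the baseline's error rate.
    ce = [(1.0 - oa_corr[c]) / (1.0 - base_corr[c]) for c in oa_corr]
    # Relative CE: performance drop from clean, normalized by the baseline's drop.
    rce = [(oa_clean - oa_corr[c]) / (base_clean - base_corr[c]) for c in oa_corr]
    return sum(ce) / len(ce), sum(rce) / len(rce)
```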
Participation
The PointCloud-C benchmark is live on Papers with Code.
Please evaluate your models on the official ModelNet-C and ShapeNet-C datasets and add your results to the benchmark.
Alternatively, you can send your results and a short method description directly to jiawei011@e.ntu.edu.sg and lingdong001@e.ntu.edu.sg.
We will then manually add your method to the benchmark.
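If you evaluate locally first, a hypothetical evaluation loop over the corruption suite is sketched below. The corruption names, the `<corruption>_<severity>.h5` file layout with `data`/`label` arrays, and the `model.predict` hook are assumptions made for illustration; follow the official dataset release for the exact file names and protocol.

```python
import h5py
import numpy as np

# The 7 ModelNet-C corruption types from the paper; the file naming is assumed.
CORRUPTIONS = ["scale", "rotate", "jitter", "dropout_global",
               "dropout_local", "add_global", "add_local"]

def eval_corruptions(model, data_dir, num_severities=5):
    """Return {corruption: OA averaged over severity levels} for one model."""
    oa = {}
    for corruption in CORRUPTIONS:
        accs = []
        for severity in range(num_severities):
            # Assumed layout: one HDF5 file per (corruption, severity) pair.
            with h5py.File(f"{data_dir}/{corruption}_{severity}.h5", "r") as f:
                points = f["data"][:]
                labels = f["label"][:].squeeze()
            preds = model.predict(points)          # user-supplied inference hook
            accs.append(float(np.mean(preds == labels)))
        oa[corruption] = sum(accs) / len(accs)     # average over severity levels
    return oa
```

The returned dictionary can then be passed to `mce_rmce` above, together with the published DGCNN baseline numbers, to reproduce the mCE and RmCE columns reported below.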
ModelNet-C
| Method | Reference | Standalone | mCE ↓ | RmCE ↓ | Clean OA ↑ |
|---|---|:---:|---|---|---|
| DGCNN | Wang et al., 2019 | ✔ | 1.000 | 1.000 | 0.926 |
| PointNet | Qi et al., 2017 | ✔ | 1.422 | 1.488 | 0.907 |
| PointNet++ | Qi et al., 2017 | ✔ | 1.072 | 1.114 | 0.930 |
| RSCNN | Liu et al., 2019 | ✔ | 1.130 | 1.201 | 0.923 |
| SimpleView | Goyal et al., 2021 | ✔ | 1.047 | 1.181 | 0.939 |
| GDANet | Xu et al., 2021 | ✔ | 0.892 | 0.865 | 0.934 |
| CurveNet | Xiang et al., 2021 | ✔ | 0.927 | 0.978 | 0.938 |
| PAConv | Xu et al., 2021 | ✔ | 1.104 | 1.211 | 0.936 |
| PCT | Guo et al., 2020 | ✔ | 0.925 | 0.884 | 0.930 |
| RPC | Ren et al., 2022 | ✔ | 0.863 | 0.778 | 0.930 |
| OcCo (DGCNN) | Wang et al., 2021 | | 1.047 | 1.302 | 0.922 |
| PointBERT | Yu et al., 2021 | | 1.248 | 1.262 | 0.922 |
| PointMixUp (PointNet++) | Chen et al., 2020 | | 1.028 | 0.785 | 0.915 |
| PointWOLF (DGCNN) | Kim et al., 2021 | | 0.814 | 0.698 | 0.926 |
| RSMix (DGCNN) | Lee et al., 2021 | | 0.745 | 0.839 | 0.930 |
| WOLFMix (DGCNN) | Ren et al., 2022 | | 0.590 | 0.485 | 0.932 |
| WOLFMix (GDANet) | Ren et al., 2022 | | 0.571 | 0.439 | 0.934 |
| WOLFMix (PCT) | Ren et al., 2022 | | 0.574 | 0.653 | 0.934 |
| WOLFMix (RPC) | Ren et al., 2022 | | 0.601 | 0.940 | 0.933 |
*Note: Standalone indicates whether the method is a standalone architecture, as opposed to a combination of a backbone with an augmentation or pretraining technique.
ShapeNet-C
| Method | Reference | Standalone | mCE ↓ | RmCE ↓ | Clean mIoU ↑ |
|---|---|:---:|---|---|---|
| DGCNN | Wang et al., 2019 | ✔ | 1.000 | 1.000 | 0.852 |
| PointNet | Qi et al., 2017 | ✔ | 1.178 | 1.056 | 0.833 |
| PointNet++ | Qi et al., 2017 | ✔ | 1.112 | 1.850 | 0.857 |
| OcCo (DGCNN) | Wang et al., 2021 | | 0.977 | 0.804 | 0.851 |
| OcCo (PointNet) | Wang et al., 2021 | | 1.130 | 0.937 | 0.832 |
| OcCo (PCN) | Wang et al., 2021 | | 1.173 | 0.882 | 0.815 |
| GDANet | Xu et al., 2021 | ✔ | 0.923 | 0.785 | 0.857 |
| PAConv | Xu et al., 2021 | ✔ | 0.927 | 0.848 | 0.859 |
| PointTransformers | Zhao et al., 2020 | ✔ | 1.049 | 0.933 | 0.840 |
| PointMLP | Ma et al., 2022 | ✔ | 0.977 | 0.810 | 0.853 |
| PointBERT | Yu et al., 2021 | | 1.033 | 0.895 | 0.855 |
| PointMAE | Pang et al., 2022 | | 0.927 | 0.703 | 0.860 |
*Note: Standalone indicates whether the method is a standalone architecture, as opposed to a combination of a backbone with an augmentation or pretraining technique.