Overview
The 1st PointCloud-C challenge is hosted in conjunction with the ECCV'22 SenseHuman workshop. It aims at providing a comprehensive test suite for point cloud robustness analysis under corruptions. It includes two separate sets for point cloud classification and part segmentation, respectively. For more details, please refer to our technical reports.
Timeline
Competition @ CodaLab
To participate, please register at the official CodaLab page here.
Download
Please prepare the data from the following resources:
| # | Description | Size | Number of Items | Data Format | Link |
|---|---|---|---|---|---|
| 1 | Classification | 0.33GB | 1 | .zip | Google Drive |
| 2 | Part Segmentation | 0.76GB | 1 | .zip | Google Drive |
Note: The above data are generated for this competition only. To participate in the official ModelNet-C and ShapeNet-C benchmark, please refer to this page.
Protocol
Users are required to use only the official ModelNet and ShapeNet training set during training for the ModelNet-C track and the ShapeNet-C track, respectively. Other training datasets are only allowed when they are unlabelled or used in an unlabelled way. Corruptions that are part of ModelNet-C and ShapeNet-C are strictly NOT allowed to be included as training augmentations. No test-time model ensemble is allowed.
Metrics
In this competition, we choose DGCNN (Wang et al., 2018), a classic point cloud recognition method, as the baseline to normalize the severity of different corruptions. We use the mean corruption error (mCE) as the primary metric to measure the robustness of different algorithms. We also report the relative mCE (RmCE) and the clean scores, i.e., clean OA for point cloud classification and clean mIoU for part segmentation. Please refer to our papers for more detailed definitions.
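To illustrate how the metrics are computed, the sketch below implements mCE and RmCE following the definitions above: per-corruption errors are normalized by the DGCNN baseline and then averaged. The corruption names and error values are hypothetical placeholders for illustration only, not real benchmark numbers; consult our papers for the exact corruption list and severity levels.

```python
# Sketch of the mCE / RmCE computation described above.
# All numeric values below are illustrative, not real benchmark results.

def mean_corruption_error(err, err_baseline):
    """mCE: average of per-corruption error rates, each normalized by
    the DGCNN baseline's error rate on the same corruption."""
    assert err.keys() == err_baseline.keys()
    ces = [err[c] / err_baseline[c] for c in err]
    return sum(ces) / len(ces)

def relative_mce(err, clean_err, err_baseline, clean_err_baseline):
    """RmCE: average performance drop from the clean score, normalized
    by the baseline's drop on the same corruption."""
    rces = [
        (err[c] - clean_err) / (err_baseline[c] - clean_err_baseline)
        for c in err
    ]
    return sum(rces) / len(rces)

# Hypothetical per-corruption error rates (1 - OA), for illustration only.
baseline = {"jitter": 0.40, "drop_local": 0.35, "rotate": 0.30}
model    = {"jitter": 0.30, "drop_local": 0.28, "rotate": 0.27}

print(round(mean_corruption_error(model, baseline), 3))  # -> 0.817
```

A model with mCE below 1.0 is more robust than the DGCNN baseline on average; lower is better for both metrics.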
Contact
If you encounter any problems, please get in touch with us at jiawei011@e.ntu.edu.sg and lingdong001@e.ntu.edu.sg.
License
This competition is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:
- That the data in this competition comes “AS IS”, without express or implied warranty. Although every effort has been made to ensure accuracy, we do not accept any responsibility for errors or omissions.
- That you may not use the data in this competition or any derivative work for commercial purposes, for example, licensing or selling the data, or using the data for the purpose of procuring commercial gain.
- That you include a reference to PointCloud-C (including ModelNet-C, ShapeNet-C, and the specially generated data for academic challenges) in any work that makes use of the benchmark. For research papers, please cite our preferred publications as listed on our webpage.