Conference Paper: OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation
Title | OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation |
---|---|
Authors | Peng, Bohao; Wu, Xiaoyang; Jiang, Li; Chen, Yukang; Zhao, Hengshuang; Tian, Zhuotao; Jia, Jiaya |
Issue Date | 17-Jun-2024 |
Abstract | The boom of 3D recognition in the 2020s began with the introduction of point cloud transformers, which quickly overtook sparse CNNs and became the state-of-the-art models, especially in 3D semantic segmentation. However, sparse CNNs remain valuable networks due to their efficiency and ease of application. In this work, we reexamine the design distinctions and test the limits of what a sparse CNN can achieve. We find that the key to the performance difference is adaptivity. Specifically, we propose two key components, i.e., spatially adaptive receptive fields and adaptive relation, to bridge the gap. This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module to greatly enhance the adaptivity of sparse CNNs at minimal computational cost. Without any self-attention modules, OA-CNNs surpass point transformers in accuracy in both indoor and outdoor scenes, with much lower latency and memory cost. Notably, they achieve 76.1%, 78.9%, and 70.6% mIoU on the ScanNet v2, nuScenes, and SemanticKITTI validation benchmarks respectively, while running up to 5× faster than transformer counterparts. This result highlights the potential of pure sparse CNNs to outperform transformer-based networks. Our code is built upon Pointcept [9] and is available at https://github.com/Pointcept/Pointcept. |
Persistent Identifier | http://hdl.handle.net/10722/350521 |
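To make the abstract's "adaptive receptive field" idea concrete, here is a minimal, hypothetical sketch: features aggregated at several neighborhood scales are mixed by a learned, input-dependent gate. This is not the authors' OA-CNN implementation (which operates on sparse voxels via Pointcept); dense 3D tensors and the `AdaptiveReceptiveField3D` module name are stand-ins purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveReceptiveField3D(nn.Module):
    """Hypothetical sketch: per-input soft selection over receptive-field scales."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        # One 3x3x3 branch per scale; dilation widens the receptive field
        # while padding keeps the spatial size unchanged.
        self.branches = nn.ModuleList(
            [nn.Conv3d(channels, channels, kernel_size=3, padding=s, dilation=s)
             for s in scales]
        )
        # Lightweight gate: global context -> one weight per scale.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, len(scales), kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) dense stand-in for a sparse voxel feature grid.
        weights = torch.softmax(self.gate(x), dim=1)  # (B, num_scales, 1, 1, 1)
        out = sum(
            weights[:, i : i + 1] * branch(x)
            for i, branch in enumerate(self.branches)
        )
        return F.relu(out)

# Usage: adaptively mix three dilation rates for a toy 16^3 feature volume.
feats = torch.randn(2, 32, 16, 16, 16)
print(AdaptiveReceptiveField3D(32)(feats).shape)  # torch.Size([2, 32, 16, 16, 16])
```

Because the gate is computed from pooled global context, it adds only a handful of parameters per block, which is in the spirit of the paper's claim that a lightweight module can buy sparse CNNs transformer-like adaptivity without self-attention.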
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Peng, Bohao | - |
dc.contributor.author | Wu, Xiaoyang | - |
dc.contributor.author | Jiang, Li | - |
dc.contributor.author | Chen, Yukang | - |
dc.contributor.author | Zhao, Hengshuang | - |
dc.contributor.author | Tian, Zhuotao | - |
dc.contributor.author | Jia, Jiaya | - |
dc.date.accessioned | 2024-10-29T00:32:02Z | - |
dc.date.available | 2024-10-29T00:32:02Z | - |
dc.date.issued | 2024-06-17 | - |
dc.identifier.uri | http://hdl.handle.net/10722/350521 | - |
dc.description.abstract | The boom of 3D recognition in the 2020s began with the introduction of point cloud transformers, which quickly overtook sparse CNNs and became the state-of-the-art models, especially in 3D semantic segmentation. However, sparse CNNs remain valuable networks due to their efficiency and ease of application. In this work, we reexamine the design distinctions and test the limits of what a sparse CNN can achieve. We find that the key to the performance difference is adaptivity. Specifically, we propose two key components, i.e., spatially adaptive receptive fields and adaptive relation, to bridge the gap. This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module to greatly enhance the adaptivity of sparse CNNs at minimal computational cost. Without any self-attention modules, OA-CNNs surpass point transformers in accuracy in both indoor and outdoor scenes, with much lower latency and memory cost. Notably, they achieve 76.1%, 78.9%, and 70.6% mIoU on the ScanNet v2, nuScenes, and SemanticKITTI validation benchmarks respectively, while running up to 5× faster than transformer counterparts. This result highlights the potential of pure sparse CNNs to outperform transformer-based networks. Our code is built upon Pointcept [9] and is available at https://github.com/Pointcept/Pointcept. | -
dc.language | eng | - |
dc.relation.ispartof | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (17/06/2024-21/06/2024, Seattle) | - |
dc.title | OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation | - |
dc.type | Conference_Paper | - |
dc.identifier.doi | 10.1109/CVPR52733.2024.02013 | - |