Appears in Collections: Conference Paper
Title | Context-Aware Transformer for 3D Point Cloud Automatic Annotation |
---|---|
Authors | Qian, Xiaoyan; Liu, Chang; Qi, Xiaojuan; Tan, Siew-Chong; Lam, Edmund; Wong, Ngai |
Issue Date | 26-Jun-2023 |
Publisher | Association for the Advancement of Artificial Intelligence (AAAI) |
Abstract | 3D automatic annotation has received increased attention since manually annotating 3D point clouds is laborious. However, existing methods are usually complicated, e.g., pipelined training for 3D foreground/background segmentation, cylindrical object proposals, and point completion. Furthermore, they often overlook the inter-object feature correlation that is particularly informative for hard samples in 3D annotation. To this end, we propose a simple yet effective end-to-end Context-Aware Transformer (CAT) as an automated 3D-box labeler to generate precise 3D box annotations from 2D boxes, trained with a small number of human annotations. We adopt the general encoder-decoder architecture, where the CAT encoder consists of an intra-object encoder (local) and an inter-object encoder (global), performing self-attention along the sequence and batch dimensions, respectively. The former models intra-object interactions among points and the latter extracts feature relations among different objects, thus boosting scene-level understanding. Via local and global encoders, CAT can generate high-quality 3D box annotations with a streamlined workflow, allowing it to outperform existing state-of-the-art methods by up to 1.79% 3D AP on the hard task of the KITTI test set. |
Persistent Identifier | http://hdl.handle.net/10722/339481 |
ISSN | 2159-5399 |
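The abstract describes two self-attention stages: an intra-object (local) encoder attending along the point/sequence axis, and an inter-object (global) encoder attending along the batch/object axis. The sketch below is a minimal single-head NumPy illustration of that axis-swapping idea only, with no learned projections, residuals, or decoder; all shapes and names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (..., L, C) -- single-head scaled dot-product self-attention
    # over the second-to-last axis L (no learned Q/K/V projections)
    c = x.shape[-1]
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(c)   # (..., L, L)
    return softmax(scores) @ x                     # (..., L, C)

rng = np.random.default_rng(0)
B, N, C = 4, 16, 32   # objects per scene, points per object, feature dim
feats = rng.standard_normal((B, N, C))

# Intra-object (local) encoder: attend along the point axis N within each object
local = self_attention(feats)                          # (B, N, C)

# Inter-object (global) encoder: attend along the object/batch axis B,
# realized here by swapping the object and point axes before attending
global_out = self_attention(local.swapaxes(0, 1)).swapaxes(0, 1)  # (B, N, C)

print(global_out.shape)  # (4, 16, 32)
```

Attending over a transposed view is one simple way to realize "self-attention along the batch dimension": each point position exchanges features with the corresponding positions of the other objects in the scene.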
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Qian, Xiaoyan | - |
dc.contributor.author | Liu, Chang | - |
dc.contributor.author | Qi, Xiaojuan | - |
dc.contributor.author | Tan, Siew-Chong | - |
dc.contributor.author | Lam, Edmund | - |
dc.contributor.author | Wong, Ngai | - |
dc.date.accessioned | 2024-03-11T10:36:59Z | - |
dc.date.available | 2024-03-11T10:36:59Z | - |
dc.date.issued | 2023-06-26 | - |
dc.identifier.issn | 2159-5399 | - |
dc.identifier.uri | http://hdl.handle.net/10722/339481 | - |
dc.description.abstract | 3D automatic annotation has received increased attention since manually annotating 3D point clouds is laborious. However, existing methods are usually complicated, e.g., pipelined training for 3D foreground/background segmentation, cylindrical object proposals, and point completion. Furthermore, they often overlook the inter-object feature correlation that is particularly informative for hard samples in 3D annotation. To this end, we propose a simple yet effective end-to-end Context-Aware Transformer (CAT) as an automated 3D-box labeler to generate precise 3D box annotations from 2D boxes, trained with a small number of human annotations. We adopt the general encoder-decoder architecture, where the CAT encoder consists of an intra-object encoder (local) and an inter-object encoder (global), performing self-attention along the sequence and batch dimensions, respectively. The former models intra-object interactions among points and the latter extracts feature relations among different objects, thus boosting scene-level understanding. Via local and global encoders, CAT can generate high-quality 3D box annotations with a streamlined workflow, allowing it to outperform existing state-of-the-art methods by up to 1.79% 3D AP on the hard task of the KITTI test set. | - |
dc.language | eng | - |
dc.publisher | Association for the Advancement of Artificial Intelligence (AAAI) | - |
dc.relation.ispartof | Proceedings of the AAAI Conference on Artificial Intelligence | - |
dc.title | Context-Aware Transformer for 3D Point Cloud Automatic Annotation | - |
dc.type | Conference_Paper | - |
dc.identifier.doi | 10.1609/aaai.v37i2.25301 | - |
dc.identifier.volume | 37 | - |
dc.identifier.issnl | 2159-5399 | - |