Links for fulltext (may require subscription)
- Publisher Website: 10.1109/ICCV48922.2021.01595
- Scopus: eid_2-s2.0-85117295748
- WOS: WOS:000798743206042
Conference Paper: Point Transformer
Title | Point Transformer |
---|---|
Authors | Zhao, Hengshuang; Jiang, Li; Jia, Jiaya; Torr, Philip; Koltun, Vladlen |
Issue Date | 2021 |
Citation | Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 16239-16248 |
Abstract | Self-attention networks have revolutionized natural language processing and are making impressive strides in image analysis tasks such as image classification and object detection. Inspired by this success, we investigate the application of self-attention networks to 3D point cloud processing. We design self-attention layers for point clouds and use these to construct self-attention networks for tasks such as semantic scene segmentation, object part segmentation, and object classification. Our Point Transformer design improves upon prior work across domains and tasks. For example, on the challenging S3DIS dataset for large-scale semantic scene segmentation, the Point Transformer attains an mIoU of 70.4% on Area 5, outperforming the strongest prior model by 3.3 absolute percentage points and crossing the 70% mIoU threshold for the first time. |
Persistent Identifier | http://hdl.handle.net/10722/333516 |
ISSN | 1550-5499 (2023 SCImago Journal Rankings: 12.263) |
ISI Accession Number ID | WOS:000798743206042 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhao, Hengshuang | - |
dc.contributor.author | Jiang, Li | - |
dc.contributor.author | Jia, Jiaya | - |
dc.contributor.author | Torr, Philip | - |
dc.contributor.author | Koltun, Vladlen | - |
dc.date.accessioned | 2023-10-06T05:20:06Z | - |
dc.date.available | 2023-10-06T05:20:06Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2021, p. 16239-16248 | - |
dc.identifier.issn | 1550-5499 | - |
dc.identifier.uri | http://hdl.handle.net/10722/333516 | - |
dc.description.abstract | Self-attention networks have revolutionized natural language processing and are making impressive strides in image analysis tasks such as image classification and object detection. Inspired by this success, we investigate the application of self-attention networks to 3D point cloud processing. We design self-attention layers for point clouds and use these to construct self-attention networks for tasks such as semantic scene segmentation, object part segmentation, and object classification. Our Point Transformer design improves upon prior work across domains and tasks. For example, on the challenging S3DIS dataset for large-scale semantic scene segmentation, the Point Transformer attains an mIoU of 70.4% on Area 5, outperforming the strongest prior model by 3.3 absolute percentage points and crossing the 70% mIoU threshold for the first time. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | - |
dc.title | Point Transformer | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/ICCV48922.2021.01595 | - |
dc.identifier.scopus | eid_2-s2.0-85117295748 | - |
dc.identifier.spage | 16239 | - |
dc.identifier.epage | 16248 | - |
dc.identifier.isi | WOS:000798743206042 | - |
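The abstract describes self-attention layers operating directly on point clouds. As a rough illustration of that idea (not the paper's actual implementation), the sketch below computes vector self-attention over each point's k nearest neighbors, with a relative positional encoding derived from point coordinates; all weight matrices here are random stand-ins for the learned projections, and the function name and shapes are illustrative assumptions.

```python
import numpy as np

def point_vector_self_attention(feats, coords, k=4, rng=None):
    """Illustrative vector self-attention over k-nearest neighbors.

    feats:  (N, C) per-point features
    coords: (N, 3) point positions
    Returns an (N, C) array of attended features.
    """
    n, c = feats.shape
    rng = np.random.default_rng(0) if rng is None else rng
    # Random linear maps standing in for the learned query/key/value
    # projections and the positional-encoding MLP (weights illustrative).
    Wq, Wk, Wv = (rng.standard_normal((c, c)) / np.sqrt(c) for _ in range(3))
    Wp = rng.standard_normal((3, c)) / np.sqrt(3)

    out = np.empty_like(feats)
    for i in range(n):
        # k nearest neighbors of point i (the point itself is included).
        dist = np.linalg.norm(coords - coords[i], axis=1)
        nbr = np.argsort(dist)[:k]
        # Relative positional encoding from the coordinate difference.
        delta = (coords[i] - coords[nbr]) @ Wp              # (k, C)
        # Vector attention: per-channel logits from query - key + position.
        logits = feats[i] @ Wq - feats[nbr] @ Wk + delta    # (k, C)
        w = np.exp(logits - logits.max(axis=0))
        w /= w.sum(axis=0)                                  # softmax over neighbors
        # Weighted aggregation of (value + positional encoding).
        out[i] = (w * (feats[nbr] @ Wv + delta)).sum(axis=0)
    return out
```

This captures only the local-neighborhood attention pattern; the published network additionally stacks such layers with pooling and unpooling stages for segmentation and classification.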