Article: Lowis3D: Language-Driven Open-World Instance-Level 3D Scene Understanding

Title: Lowis3D: Language-Driven Open-World Instance-Level 3D Scene Understanding
Authors: Ding, Runyu; Yang, Jihan; Xue, Chuhui; Zhang, Wenqing; Bai, Song; Qi, Xiaojuan
Keywords: 3D scene understanding; instance segmentation; Location awareness; open vocabulary; open world; panoptic segmentation; point clouds; Semantic segmentation; Semantics; Solid modeling; Task analysis; Three-dimensional displays; Training
Issue Date: 1-Dec-2024
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, v. 46, n. 12, p. 8517-8533
Abstract

Open-world instance-level scene understanding aims to locate and recognize unseen object categories that are not present in the annotated dataset. This task is challenging because the model needs to both localize novel 3D objects and infer their semantic categories. A key factor for the recent progress in 2D open-world perception is the availability of large-scale image-text pairs from the Internet, which cover a wide range of vocabulary concepts. However, this success is hard to replicate in 3D scenarios due to the scarcity of 3D-text pairs. To address this challenge, we propose to harness pre-trained vision-language (VL) foundation models that encode extensive knowledge from image-text pairs to generate captions for multi-view images of 3D scenes. This allows us to establish explicit associations between 3D shapes and semantic-rich captions. Moreover, to enhance the fine-grained visual-semantic representation learning from captions for object-level categorization, we design hierarchical point-caption association methods to learn semantic-aware embeddings that exploit the 3D geometry between 3D points and multi-view images. In addition, to tackle the localization challenge for novel classes in the open-world setting, we develop debiased instance localization, which involves training object grouping modules on unlabeled data using instance-level pseudo supervision. This significantly improves the generalization capabilities of instance grouping and, thus, the ability to accurately locate novel objects. We conduct extensive experiments on 3D semantic, instance, and panoptic segmentation tasks, covering indoor and outdoor scenes across three datasets. Our method outperforms baseline methods by a significant margin in semantic segmentation (e.g. 34.5% ∼ 65.3%), instance segmentation (e.g. 21.8% ∼ 54.0%), and panoptic segmentation (e.g. 14.7% ∼ 43.3%). Code will be available.
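
The point-caption association idea described in the abstract, supervising 3D points with captions generated for the multi-view images they project into, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' released implementation: the function names, camera conventions, cosine-distance loss, and toy data below are all assumptions, and the paper's hierarchical association methods and training objectives are richer.

```python
# Minimal sketch (illustrative assumptions, not the paper's code) of
# view-level point-caption association: project 3D points into each posed
# image and pull the features of visible points toward that image's
# caption embedding (e.g. produced by a frozen vision-language model).
import torch
import torch.nn.functional as F

def project_points(points, K, world2cam):
    """Project (N, 3) world points through one camera; return (N, 2) pixels and depth."""
    homog = torch.cat([points, torch.ones(points.shape[0], 1)], dim=1)  # (N, 4)
    cam = (world2cam @ homog.T).T[:, :3]                                # camera-frame coords
    depth = cam[:, 2]
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)                      # perspective divide
    return pix, depth

def point_caption_loss(point_feats, points, views, caption_embeds, hw=(480, 640)):
    """Average cosine distance between visible point features and their view's caption."""
    H, W = hw
    losses = []
    for (K, T), cap in zip(views, caption_embeds):
        pix, depth = project_points(points, K, T)
        inside = (depth > 0) & (pix[:, 0] >= 0) & (pix[:, 0] < W) \
                             & (pix[:, 1] >= 0) & (pix[:, 1] < H)
        if inside.any():
            f = F.normalize(point_feats[inside], dim=1)  # (M, D) point embeddings
            c = F.normalize(cap, dim=0)                  # (D,) caption embedding
            losses.append((1.0 - f @ c).mean())          # cosine distance to the caption
    return torch.stack(losses).mean()

# Toy usage: 2 captioned views, 1000 points, 512-dim embedding space.
K = torch.tensor([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
points = torch.rand(1000, 3)
feats = torch.rand(1000, 512, requires_grad=True)
views = [(K, torch.eye(4)) for _ in range(2)]
caps = [torch.rand(512) for _ in range(2)]
point_caption_loss(feats, points, views, caps).backward()
```

Per the abstract, this caption-derived signal handles categorization, while localization of novel classes is addressed separately by debiased instance localization, which trains the object grouping module on unlabeled data with instance-level pseudo supervision.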


Persistent Identifier: http://hdl.handle.net/10722/351086
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158

DC Field / Value
dc.contributor.author: Ding, Runyu
dc.contributor.author: Yang, Jihan
dc.contributor.author: Xue, Chuhui
dc.contributor.author: Zhang, Wenqing
dc.contributor.author: Bai, Song
dc.contributor.author: Qi, Xiaojuan
dc.date.accessioned: 2024-11-09T00:35:45Z
dc.date.available: 2024-11-09T00:35:45Z
dc.date.issued: 2024-12-01
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, v. 46, n. 12, p. 8517-8533
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/351086
dc.description.abstract: Open-world instance-level scene understanding aims to locate and recognize unseen object categories that are not present in the annotated dataset. This task is challenging because the model needs to both localize novel 3D objects and infer their semantic categories. A key factor for the recent progress in 2D open-world perception is the availability of large-scale image-text pairs from the Internet, which cover a wide range of vocabulary concepts. However, this success is hard to replicate in 3D scenarios due to the scarcity of 3D-text pairs. To address this challenge, we propose to harness pre-trained vision-language (VL) foundation models that encode extensive knowledge from image-text pairs to generate captions for multi-view images of 3D scenes. This allows us to establish explicit associations between 3D shapes and semantic-rich captions. Moreover, to enhance the fine-grained visual-semantic representation learning from captions for object-level categorization, we design hierarchical point-caption association methods to learn semantic-aware embeddings that exploit the 3D geometry between 3D points and multi-view images. In addition, to tackle the localization challenge for novel classes in the open-world setting, we develop debiased instance localization, which involves training object grouping modules on unlabeled data using instance-level pseudo supervision. This significantly improves the generalization capabilities of instance grouping and, thus, the ability to accurately locate novel objects. We conduct extensive experiments on 3D semantic, instance, and panoptic segmentation tasks, covering indoor and outdoor scenes across three datasets. Our method outperforms baseline methods by a significant margin in semantic segmentation (e.g. 34.5% ∼ 65.3%), instance segmentation (e.g. 21.8% ∼ 54.0%), and panoptic segmentation (e.g. 14.7% ∼ 43.3%). Code will be available.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: 3D scene understanding
dc.subject: instance segmentation
dc.subject: Location awareness
dc.subject: open vocabulary
dc.subject: open world
dc.subject: panoptic segmentation
dc.subject: point clouds
dc.subject: Semantic segmentation
dc.subject: Semantics
dc.subject: Solid modeling
dc.subject: Task analysis
dc.subject: Three-dimensional displays
dc.subject: Training
dc.title: Lowis3D: Language-Driven Open-World Instance-Level 3D Scene Understanding
dc.type: Article
dc.identifier.doi: 10.1109/TPAMI.2024.3410324
dc.identifier.scopus: eid_2-s2.0-85195375531
dc.identifier.volume: 46
dc.identifier.issue: 12
dc.identifier.spage: 8517
dc.identifier.epage: 8533
dc.identifier.eissn: 1939-3539
dc.identifier.issnl: 0162-8828
