Article: DCL: Differential Contrastive Learning for Geometry-Aware Depth Synthesis

Title: DCL: Differential Contrastive Learning for Geometry-Aware Depth Synthesis
Authors: Shen, Yuefan; Yang, Yanchao; Zheng, Youyi; Liu, C. Karen; Guibas, Leonidas J.
Keywords: deep learning for visual perception; Deep learning methods; transfer learning
Issue Date: 2022
Citation: IEEE Robotics and Automation Letters, 2022, v. 7, n. 2, p. 4845-4852
Abstract: We describe a method for unpaired realistic depth synthesis that learns diverse variations from real-world depth scans and ensures geometric consistency between the synthetic and synthesized depth. The synthesized realistic depth can then be used to train task-specific networks, facilitating label transfer from the synthetic domain. Unlike existing image synthesis pipelines, where geometries are mostly ignored, we treat the geometries carried by the depth scans in their own right. We propose differential contrastive learning, which explicitly enforces the underlying geometric properties to be invariant with respect to the real variations being learned. The resulting depth synthesis method is task-agnostic, and we demonstrate its effectiveness through extensive evaluations on real-world geometric reasoning tasks. Networks trained with the depth synthesized by our method consistently achieve better performance across a wide range of tasks than the state of the art, and can even surpass networks supervised with full real-world annotations when slightly fine-tuned, showing good transferability.
Persistent Identifier: http://hdl.handle.net/10722/325554
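
The abstract describes a contrastive objective that keeps geometric properties invariant while realistic sensor variations are learned. Since this record does not include the paper's actual formulation, the following is a minimal, hypothetical PyTorch sketch of that idea: it approximates "geometric properties" by surface normals estimated from depth gradients and applies an InfoNCE-style loss that pulls together normals at the same pixel of the input (clean synthetic) and output (synthesized realistic) depth while pushing apart normals at other pixels. The function names, the choice of normals as the geometric feature, and the temperature value are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch only; the real DCL formulation is in the paper.
    import torch
    import torch.nn.functional as F

    def depth_to_normals(depth: torch.Tensor) -> torch.Tensor:
        """Estimate per-pixel surface normals from a depth map of shape (B, 1, H, W)."""
        dz_dx = depth[:, :, :, 1:] - depth[:, :, :, :-1]   # horizontal depth gradient
        dz_dy = depth[:, :, 1:, :] - depth[:, :, :-1, :]   # vertical depth gradient
        dz_dx = F.pad(dz_dx, (0, 1))                       # restore original width
        dz_dy = F.pad(dz_dy, (0, 0, 0, 1))                 # restore original height
        normals = torch.cat([-dz_dx, -dz_dy, torch.ones_like(depth)], dim=1)
        return F.normalize(normals, dim=1)                 # unit normals, (B, 3, H, W)

    def geometric_contrastive_loss(depth_in, depth_out, num_samples=256, tau=0.07):
        """InfoNCE over normals: same pixel is the positive, other sampled pixels are negatives."""
        n_in = depth_to_normals(depth_in).flatten(2)       # (B, 3, H*W)
        n_out = depth_to_normals(depth_out).flatten(2)
        idx = torch.randint(n_in.shape[2], (num_samples,), device=depth_in.device)
        q = n_out[:, :, idx].transpose(1, 2)               # queries from synthesized depth, (B, N, 3)
        k = n_in[:, :, idx].transpose(1, 2)                # keys from synthetic input, (B, N, 3)
        logits = torch.bmm(q, k.transpose(1, 2)) / tau     # (B, N, N) similarity matrix
        labels = torch.arange(num_samples, device=depth_in.device)
        return F.cross_entropy(logits.flatten(0, 1), labels.repeat(depth_in.shape[0]))

    # Example use during generator training (generator name is hypothetical):
    # loss_geo = geometric_contrastive_loss(synthetic_depth, generator(synthetic_depth))

In the actual method, the contrasted features presumably come from learned encoders over differential quantities rather than raw normals; the sketch only illustrates how a contrastive loss can encode geometric invariance between a synthetic input and its synthesized counterpart.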

 

DC Field: Value
dc.contributor.author: Shen, Yuefan
dc.contributor.author: Yang, Yanchao
dc.contributor.author: Zheng, Youyi
dc.contributor.author: Liu, C. Karen
dc.contributor.author: Guibas, Leonidas J.
dc.date.accessioned: 2023-02-27T07:34:15Z
dc.date.available: 2023-02-27T07:34:15Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Robotics and Automation Letters, 2022, v. 7, n. 2, p. 4845-4852
dc.identifier.uri: http://hdl.handle.net/10722/325554
dc.description.abstract: We describe a method for unpaired realistic depth synthesis that learns diverse variations from real-world depth scans and ensures geometric consistency between the synthetic and synthesized depth. The synthesized realistic depth can then be used to train task-specific networks, facilitating label transfer from the synthetic domain. Unlike existing image synthesis pipelines, where geometries are mostly ignored, we treat the geometries carried by the depth scans in their own right. We propose differential contrastive learning, which explicitly enforces the underlying geometric properties to be invariant with respect to the real variations being learned. The resulting depth synthesis method is task-agnostic, and we demonstrate its effectiveness through extensive evaluations on real-world geometric reasoning tasks. Networks trained with the depth synthesized by our method consistently achieve better performance across a wide range of tasks than the state of the art, and can even surpass networks supervised with full real-world annotations when slightly fine-tuned, showing good transferability.
dc.language: eng
dc.relation.ispartof: IEEE Robotics and Automation Letters
dc.subject: deep learning for visual perception
dc.subject: Deep learning methods
dc.subject: transfer learning
dc.title: DCL: Differential Contrastive Learning for Geometry-Aware Depth Synthesis
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/LRA.2022.3148788
dc.identifier.scopus: eid_2-s2.0-85124733647
dc.identifier.volume: 7
dc.identifier.issue: 2
dc.identifier.spage: 4845
dc.identifier.epage: 4852
dc.identifier.eissn: 2377-3766
