Article: GlocalFuse-Depth: Fusing transformers and CNNs for all-day self-supervised monocular depth estimation

Title: GlocalFuse-Depth: Fusing transformers and CNNs for all-day self-supervised monocular depth estimation
Authors: Zhang, Zezheng; Chan, Ryan KY; Wong, Kenneth KY
Keywords: All-day image; Feature fusion; Monocular depth estimation; Self-supervised
Issue Date: 7-Feb-2024
Publisher: Elsevier
Citation: Neurocomputing, 2024, v. 569
Abstract: In recent years, self-supervised monocular depth estimation has drawn much attention, since it requires no depth annotations and achieves remarkable results on standard benchmarks. However, most existing methods focus only on either daytime or nighttime images, and their performance degrades on the other domain because of the large gap between daytime and nighttime images. To address this problem, we propose a two-branch network named GlocalFuse-Depth for self-supervised depth estimation of all-day images. The daytime and nighttime images of an input image pair are fed into the two branches, a CNN branch and a Transformer branch respectively, so that both local details and global dependencies can be effectively captured. In addition, a novel fusion module is proposed to fuse multi-dimensional features from the two branches. Extensive experiments demonstrate that GlocalFuse-Depth achieves state-of-the-art results for all-day images of the Oxford RobotCar dataset, which proves the superiority of our method.
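
For readers who want a concrete picture of the two-branch design the abstract describes, below is a minimal PyTorch-style sketch. It is an illustration only: the module sizes, patch size, and the simple concatenate-plus-1x1-convolution fusion are hypothetical stand-ins, not the paper's actual architecture or its fusion module.

# Minimal sketch of a two-branch (CNN + Transformer) fusion encoder,
# loosely following the abstract. All names, sizes, and the fusion rule
# are hypothetical illustrations, not the paper's architecture.
import torch
import torch.nn as nn

class TwoBranchFusionSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # CNN branch: captures local detail (e.g. from the daytime image).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer branch: captures global dependencies
        # (e.g. from the nighttime image), operating on image patches.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Fusion: a 1x1 conv over concatenated features stands in for the
        # paper's multi-dimensional fusion module (details not given here).
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, day_img, night_img):
        local_feat = self.cnn(day_img)          # (B, dim, H/4, W/4)
        tokens = self.patch_embed(night_img)    # (B, dim, H/16, W/16)
        b, c, h, w = tokens.shape
        tokens = self.transformer(tokens.flatten(2).transpose(1, 2))
        global_feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        # Upsample the coarse transformer features to the CNN resolution.
        global_feat = nn.functional.interpolate(
            global_feat, size=local_feat.shape[-2:],
            mode="bilinear", align_corners=False,
        )
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))

day = torch.randn(1, 3, 256, 256)
night = torch.randn(1, 3, 256, 256)
print(TwoBranchFusionSketch()(day, night).shape)  # torch.Size([1, 64, 64, 64])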
Persistent Identifier: http://hdl.handle.net/10722/348518
ISSN: 0925-2312
2023 Impact Factor: 5.5
2023 SCImago Journal Rankings: 1.815

DC Field: Value
dc.contributor.author: Zhang, Zezheng
dc.contributor.author: Chan, Ryan KY
dc.contributor.author: Wong, Kenneth KY
dc.date.accessioned: 2024-10-10T00:31:16Z
dc.date.available: 2024-10-10T00:31:16Z
dc.date.issued: 2024-02-07
dc.identifier.citation: Neurocomputing, 2024, v. 569
dc.identifier.issn: 0925-2312
dc.identifier.uri: http://hdl.handle.net/10722/348518
dc.description.abstract: In recent years, self-supervised monocular depth estimation has drawn much attention, since it requires no depth annotations and achieves remarkable results on standard benchmarks. However, most existing methods focus only on either daytime or nighttime images, and their performance degrades on the other domain because of the large gap between daytime and nighttime images. To address this problem, we propose a two-branch network named GlocalFuse-Depth for self-supervised depth estimation of all-day images. The daytime and nighttime images of an input image pair are fed into the two branches, a CNN branch and a Transformer branch respectively, so that both local details and global dependencies can be effectively captured. In addition, a novel fusion module is proposed to fuse multi-dimensional features from the two branches. Extensive experiments demonstrate that GlocalFuse-Depth achieves state-of-the-art results for all-day images of the Oxford RobotCar dataset, which proves the superiority of our method.
dc.language: eng
dc.publisher: Elsevier
dc.relation.ispartof: Neurocomputing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: All-day image
dc.subject: Feature fusion
dc.subject: Monocular depth estimation
dc.subject: Self-supervised
dc.title: GlocalFuse-Depth: Fusing transformers and CNNs for all-day self-supervised monocular depth estimation
dc.type: Article
dc.identifier.doi: 10.1016/j.neucom.2023.127122
dc.identifier.scopus: eid_2-s2.0-85180376525
dc.identifier.volume: 569
dc.identifier.eissn: 1872-8286
dc.identifier.issnl: 0925-2312
