Conference Paper: Learning depth-guided convolutions for monocular 3d object detection

Title: Learning depth-guided convolutions for monocular 3d object detection
Authors: Ding, M; Huo, Y; Yi, H; Wang, Z; Shi, J; Lu, Z; Luo, P
Issue Date: 2020
Publisher: IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1001809
Citation: Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, USA, 14-19 June 2020, p. 4306-4315
Abstract: 3D object detection from a single image without LiDAR is a challenging task due to the lack of accurate depth information. Conventional 2D convolutions are unsuitable for this task because they fail to capture local object and its scale information, which are vital for 3D object detection. To better represent 3D structure, prior arts typically transform depth maps estimated from 2D images into a pseudo-LiDAR representation, and then apply existing 3D point-cloud based object detectors. However, their results depend heavily on the accuracy of the estimated depth maps, resulting in suboptimal performance. In this work, instead of using pseudo-LiDAR representation, we improve the fundamental 2D fully convolutions by proposing a new local convolutional network (LCN), termed Depth-guided Dynamic-Depthwise-Dilated LCN (D4LCN), where the filters and their receptive fields can be automatically learned from image-based depth maps, making different pixels of different images have different filters. D4LCN overcomes the limitation of conventional 2D convolutions and narrows the gap between image representation and 3D point cloud representation. Extensive experiments show that D4LCN outperforms existing works by large margins. For example, the relative improvement of D4LCN against the state-of-the-art on KITTI is 9.1% in the moderate setting. D4LCN ranks 1st on KITTI monocular 3D object detection benchmark at the time of submission (car, December 2019). The code is available at https://github.com/dingmyu/D4LCN.
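The abstract describes filters whose weights and receptive fields are generated per pixel from depth-map features. As a rough illustration of that idea only (not the authors' implementation; see https://github.com/dingmyu/D4LCN for the released code), the PyTorch-style sketch below predicts a depthwise k x k kernel for every spatial location from depth features and applies it to the image features with an unfold/weighted-sum. The class name, layer shapes, and the softmax normalisation are assumptions made for this sketch.

```python
# Minimal sketch of a depth-guided dynamic local filtering layer (assumed
# design, not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthGuidedLocalConv(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        self.channels = channels
        self.k = kernel_size
        self.dilation = dilation
        # Predicts one k*k depthwise kernel per channel and per pixel from
        # features of the estimated depth map.
        self.filter_gen = nn.Conv2d(channels, channels * kernel_size * kernel_size,
                                    kernel_size=3, padding=1)

    def forward(self, img_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = img_feat.shape
        pad = self.dilation * (self.k - 1) // 2

        # Per-pixel depthwise kernels: (B, C, k*k, H, W)
        kernels = self.filter_gen(depth_feat).view(b, c, self.k * self.k, h, w)
        kernels = torch.softmax(kernels, dim=2)  # normalise each local kernel

        # Gather the dilated k*k neighbourhood of every pixel: (B, C, k*k, H, W)
        patches = F.unfold(img_feat, self.k, dilation=self.dilation, padding=pad)
        patches = patches.view(b, c, self.k * self.k, h, w)

        # Weighted sum over the neighbourhood = a local convolution whose
        # filter differs from pixel to pixel and from image to image.
        return (patches * kernels).sum(dim=2)


if __name__ == "__main__":
    layer = DepthGuidedLocalConv(channels=64, kernel_size=3, dilation=2)
    img_feat = torch.randn(2, 64, 32, 32)    # image-branch features
    depth_feat = torch.randn(2, 64, 32, 32)  # depth-branch features
    print(layer(img_feat, depth_feat).shape)  # torch.Size([2, 64, 32, 32])
```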
Description: CVPR 2020 Workshop held virtually due to COVID-19
Workshop session: Autonomous Driving
Persistent Identifier: http://hdl.handle.net/10722/284164
ISSN: 2160-7508
2020 SCImago Journal Rankings: 1.122
ISI Accession Number ID: WOS:000788279004077

 

DC Field | Value | Language
dc.contributor.author | Ding, M | -
dc.contributor.author | Huo, Y | -
dc.contributor.author | Yi, H | -
dc.contributor.author | Wang, Z | -
dc.contributor.author | Shi, J | -
dc.contributor.author | Lu, Z | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2020-07-20T05:56:35Z | -
dc.date.available | 2020-07-20T05:56:35Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, USA, 14-19 June 2020, p. 4306-4315 | -
dc.identifier.issn | 2160-7508 | -
dc.identifier.uri | http://hdl.handle.net/10722/284164 | -
dc.description | CVPR 2020 Workshop held virtually due to COVID-19 | -
dc.description | Workshop session: Autonomous Driving | -
dc.description.abstract | 3D object detection from a single image without LiDAR is a challenging task due to the lack of accurate depth information. Conventional 2D convolutions are unsuitable for this task because they fail to capture local object and its scale information, which are vital for 3D object detection. To better represent 3D structure, prior arts typically transform depth maps estimated from 2D images into a pseudo-LiDAR representation, and then apply existing 3D point-cloud based object detectors. However, their results depend heavily on the accuracy of the estimated depth maps, resulting in suboptimal performance. In this work, instead of using pseudo-LiDAR representation, we improve the fundamental 2D fully convolutions by proposing a new local convolutional network (LCN), termed Depth-guided Dynamic-Depthwise-Dilated LCN (D4LCN), where the filters and their receptive fields can be automatically learned from image-based depth maps, making different pixels of different images have different filters. D4LCN overcomes the limitation of conventional 2D convolutions and narrows the gap between image representation and 3D point cloud representation. Extensive experiments show that D4LCN outperforms existing works by large margins. For example, the relative improvement of D4LCN against the state-of-the-art on KITTI is 9.1% in the moderate setting. D4LCN ranks 1st on KITTI monocular 3D object detection benchmark at the time of submission (car, December 2019). The code is available at https://github.com/dingmyu/D4LCN. | -
dc.language | eng | -
dc.publisher | IEEE Computer Society. The Journal's web site is located at http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1001809 | -
dc.relation.ispartof | Proceedings of IEEE/CVF International Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | -
dc.rights | IEEE Conference on Computer Vision and Pattern Recognition Workshops Proceedings. Copyright © IEEE Computer Society. | -
dc.rights | ©2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | -
dc.title | Learning depth-guided convolutions for monocular 3d object detection | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.doi | 10.1109/CVPRW50498.2020.00508 | -
dc.identifier.scopus | eid_2-s2.0-85090147685 | -
dc.identifier.hkuros | 311024 | -
dc.identifier.spage | 4306 | -
dc.identifier.epage | 4315 | -
dc.identifier.isi | WOS:000788279004077 | -
dc.publisher.place | United States | -
dc.identifier.issnl | 2160-7508 | -
