Conference Paper: Learning depth-guided convolutions for monocular 3D object detection
Title | Learning depth-guided convolutions for monocular 3D object detection |
---|---|
Authors | Ding, M; Huo, Y; Yi, H; Wang, Z; Shi, J; Lu, Z; Luo, P |
Keywords | Monocular; 3D object detection; depth-guided; Dynamic local convolution |
Issue Date | 2020 |
Publisher | IEEE. |
Citation | IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (Virtual), 14-19 June 2020, p. 11669-11678 |
Abstract | 3D object detection from a single image without LiDAR is a challenging task due to the lack of accurate depth information. Conventional 2D convolutions are unsuitable for this task because they fail to capture the local object and its scale information, which are vital for 3D object detection. To better represent 3D structure, prior arts typically transform depth maps estimated from 2D images into a pseudo-LiDAR representation, and then apply existing 3D point-cloud-based object detectors. However, their results depend heavily on the accuracy of the estimated depth maps, resulting in suboptimal performance. In this work, instead of using the pseudo-LiDAR representation, we improve the fundamental 2D convolutions by proposing a new local convolutional network (LCN), termed Depth-guided Dynamic-Depthwise-Dilated LCN (D4LCN), where the filters and their receptive fields are automatically learned from image-based depth maps, so that different pixels of different images have different filters. D4LCN overcomes the limitation of conventional 2D convolutions and narrows the gap between image representation and 3D point-cloud representation. Extensive experiments show that D4LCN outperforms existing works by large margins. For example, the relative improvement of D4LCN over the state of the art on KITTI is 9.1% in the moderate setting. D4LCN ranked 1st on the KITTI monocular 3D object detection benchmark at the time of submission (car, December 2019). (An illustrative sketch of the depth-guided convolution follows this table.) |
Persistent Identifier | http://hdl.handle.net/10722/315804 |
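The abstract describes per-pixel filters whose weights and receptive fields are generated from an estimated depth map. The sketch below shows, in PyTorch, one way such a depth-guided dynamic depthwise convolution could look. It is a minimal illustration, not the authors' released implementation: the module name `DepthGuidedLocalConv`, the single dilation rate, and the one-layer filter-generation branch are simplifying assumptions (the paper's D4LCN combines multiple dilation rates and a fuller filter-generation network).

```python
# Minimal sketch (hypothetical, not the authors' code) of a depth-guided
# dynamic depthwise convolution: per-pixel k x k kernels are generated
# from depth features and applied to image features via unfold.
import torch
import torch.nn.functional as F
from torch import nn

class DepthGuidedLocalConv(nn.Module):
    def __init__(self, channels: int, k: int = 3, dilation: int = 1):
        super().__init__()
        self.k, self.d = k, dilation
        # Filter-generation branch: maps depth features to one k*k
        # depthwise kernel per channel and per spatial location.
        self.gen = nn.Conv2d(channels, channels * k * k, 3, padding=1)

    def forward(self, img_feat, depth_feat):
        b, c, h, w = img_feat.shape
        k, d = self.k, self.d
        # Per-pixel depthwise kernels predicted from the depth branch.
        kernels = self.gen(depth_feat).view(b, c, k * k, h, w)
        # k x k (dilated) neighborhoods of the image features.
        pad = d * (k - 1) // 2
        patches = F.unfold(img_feat, k, dilation=d, padding=pad)
        patches = patches.view(b, c, k * k, h, w)
        # Each pixel is filtered by its own kernel (sum over the window).
        return (kernels * patches).sum(dim=2)

# Toy usage: different pixels of different images get different filters.
img_feat = torch.randn(2, 64, 32, 32)
depth_feat = torch.randn(2, 64, 32, 32)  # e.g. features of a monocular depth map
out = DepthGuidedLocalConv(64)(img_feat, depth_feat)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Unlike an ordinary convolution, whose weights are shared across all locations and inputs, the kernel tensor here varies with both the pixel position and the image, which is the property the abstract credits with narrowing the gap to point-cloud representations.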
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ding, M | -
dc.contributor.author | Huo, Y | -
dc.contributor.author | Yi, H | -
dc.contributor.author | Wang, Z | -
dc.contributor.author | Shi, J | -
dc.contributor.author | Lu, Z | -
dc.contributor.author | Luo, P | - |
dc.date.accessioned | 2022-08-19T09:04:44Z | - |
dc.date.available | 2022-08-19T09:04:44Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (Virtual), 14-19 June 2020, p. 11669-11678 | - |
dc.identifier.uri | http://hdl.handle.net/10722/315804 | - |
dc.description.abstract | 3D object detection from a single image without LiDAR is a challenging task due to the lack of accurate depth information. Conventional 2D convolutions are unsuitable for this task because they fail to capture the local object and its scale information, which are vital for 3D object detection. To better represent 3D structure, prior arts typically transform depth maps estimated from 2D images into a pseudo-LiDAR representation, and then apply existing 3D point-cloud-based object detectors. However, their results depend heavily on the accuracy of the estimated depth maps, resulting in suboptimal performance. In this work, instead of using the pseudo-LiDAR representation, we improve the fundamental 2D convolutions by proposing a new local convolutional network (LCN), termed Depth-guided Dynamic-Depthwise-Dilated LCN (D4LCN), where the filters and their receptive fields are automatically learned from image-based depth maps, so that different pixels of different images have different filters. D4LCN overcomes the limitation of conventional 2D convolutions and narrows the gap between image representation and 3D point-cloud representation. Extensive experiments show that D4LCN outperforms existing works by large margins. For example, the relative improvement of D4LCN over the state of the art on KITTI is 9.1% in the moderate setting. D4LCN ranked 1st on the KITTI monocular 3D object detection benchmark at the time of submission (car, December 2019). | -
dc.language | eng | - |
dc.publisher | IEEE. | - |
dc.relation.ispartof | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: virtual, 14-19 June 2020 | -
dc.rights | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: virtual, 14-19 June 2020. Copyright © IEEE. | -
dc.subject | Monocular | - |
dc.subject | 3D object detection | -
dc.subject | depth-guided | - |
dc.subject | Dynamic local convolution | - |
dc.title | Learning depth-guided convolutions for monocular 3D object detection | -
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.identifier.doi | 10.1109/CVPR42600.2020.01169 | - |
dc.identifier.hkuros | 335606 | - |
dc.identifier.spage | 11669 | - |
dc.identifier.epage | 11678 | - |
dc.publisher.place | United States | - |