File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher website: 10.1109/IROS55552.2023.10341620
- Scopus: eid_2-s2.0-85169030676
Citations:
- Scopus: 0
Appears in Collections:
Conference Paper: Sparse Dense Fusion for 3D Object Detection
Title | Sparse Dense Fusion for 3D Object Detection |
---|---|
Authors | Gao, Yulu; Sima, Chonghao; Shi, Shaoshuai; Di, Shangzhe; Liu, Si; Li, Hongyang |
Issue Date | 2023 |
Citation | IEEE International Conference on Intelligent Robots and Systems, 2023, p. 10939-10946 |
Abstract | With the prevalence of multimodal learning, camera-LiDAR fusion has gained popularity in 3D object detection. Many fusion approaches have been proposed, falling into two main categories: sparse-only or dense-only, differentiated by their feature representation within the fusion module. We analyze these approaches within a shared taxonomy, identifying two key challenges: (1) Sparse-only methodologies maintain 3D geometric prior but fail to capture the semantic richness from camera data, and (2) Dense-only strategies preserve semantic continuity at the expense of precise geometric information derived from LiDAR. Upon analysis, we deduce that due to their respective architectural designs, some degree of information loss is inevitable. To counteract this loss, we introduce Sparse Dense Fusion (SD-Fusion), an innovative framework combining both sparse and dense fusion modules via the Transformer architecture. The simple yet effective fusion strategy enhances semantic texture and simultaneously leverages spatial structure data. Employing our SD-Fusion strategy, we assemble two popular methods with moderate performance, achieving a 4.3% increase in mAP and a 2.5% rise in NDS, thus ranking first in the nuScenes benchmark. Comprehensive ablation studies validate the effectiveness of our approach and empirically support our findings. |
Persistent Identifier | http://hdl.handle.net/10722/351474 |
ISSN | 2153-0858 |
SCImago Journal Rankings (2023) | 1.094 |
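The abstract describes fusing sparse (LiDAR-side) and dense (camera-side) feature representations through a Transformer. As a rough intuition for that idea only, the toy sketch below lets a small set of sparse query vectors attend to a dense flattened feature grid via scaled dot-product cross-attention. All shapes, names, and the residual-fusion step here are hypothetical illustrations for a generic query-to-grid attention pattern, not the authors' SD-Fusion implementation.

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: each sparse query attends
    over all dense feature cells and returns a weighted sum of them."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)           # (Nq, Nkv) similarity
    scores -= scores.max(axis=-1, keepdims=True)            # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over dense cells
    return weights @ keys_values                            # (Nq, d) attended features

rng = np.random.default_rng(0)
sparse_queries = rng.normal(size=(10, 64))   # hypothetical: 10 object proposals
dense_feats = rng.normal(size=(100, 64))     # hypothetical: flattened 10x10 feature grid

# Residual fusion: queries keep their geometric content and gain dense context.
fused = sparse_queries + cross_attention(sparse_queries, dense_feats)
print(fused.shape)  # (10, 64)
```

In a real detector the queries and grid would come from learned backbones and the attention would use learned projection matrices; this sketch omits those to show only the sparse-to-dense attention mechanics.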
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gao, Yulu | - |
dc.contributor.author | Sima, Chonghao | - |
dc.contributor.author | Shi, Shaoshuai | - |
dc.contributor.author | Di, Shangzhe | - |
dc.contributor.author | Liu, Si | - |
dc.contributor.author | Li, Hongyang | - |
dc.date.accessioned | 2024-11-20T03:56:30Z | - |
dc.date.available | 2024-11-20T03:56:30Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | IEEE International Conference on Intelligent Robots and Systems, 2023, p. 10939-10946 | - |
dc.identifier.issn | 2153-0858 | - |
dc.identifier.uri | http://hdl.handle.net/10722/351474 | - |
dc.description.abstract | With the prevalence of multimodal learning, camera-LiDAR fusion has gained popularity in 3D object detection. Many fusion approaches have been proposed, falling into two main categories: sparse-only or dense-only, differentiated by their feature representation within the fusion module. We analyze these approaches within a shared taxonomy, identifying two key challenges: (1) Sparse-only methodologies maintain 3D geometric prior but fail to capture the semantic richness from camera data, and (2) Dense-only strategies preserve semantic continuity at the expense of precise geometric information derived from LiDAR. Upon analysis, we deduce that due to their respective architectural designs, some degree of information loss is inevitable. To counteract this loss, we introduce Sparse Dense Fusion (SD-Fusion), an innovative framework combining both sparse and dense fusion modules via the Transformer architecture. The simple yet effective fusion strategy enhances semantic texture and simultaneously leverages spatial structure data. Employing our SD-Fusion strategy, we assemble two popular methods with moderate performance, achieving a 4.3% increase in mAP and a 2.5% rise in NDS, thus ranking first in the nuScenes benchmark. Comprehensive ablation studies validate the effectiveness of our approach and empirically support our findings. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE International Conference on Intelligent Robots and Systems | - |
dc.title | Sparse Dense Fusion for 3D Object Detection | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/IROS55552.2023.10341620 | - |
dc.identifier.scopus | eid_2-s2.0-85169030676 | - |
dc.identifier.spage | 10939 | - |
dc.identifier.epage | 10946 | - |
dc.identifier.eissn | 2153-0866 | - |