Conference Paper: 3D Data Augmentation for Driving Scenes on Camera
Title | 3D Data Augmentation for Driving Scenes on Camera |
---|---|
Authors | Tong, Wenwen; Xie, Jiangwei; Li, Tianyu; Li, Yang; Deng, Hanming; Dai, Bo; Lu, Lewei; Zhao, Hao; Yan, Junchi; Li, Hongyang |
Keywords | 3D Perception; Autonomous Driving; Data Augmentation; NeRF |
Issue Date | 2025 |
Citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2025, v. 15036 LNCS, p. 46-63 |
Abstract | Driving scenes are so diverse and complicated that it is impossible to collect all cases through human effort alone. While data augmentation is an effective technique for enriching training data, existing methods for camera data in autonomous driving applications are confined to the 2D image plane, which may not optimally increase data diversity in 3D real-world scenarios. To this end, we propose a 3D data augmentation approach termed Drive-3DAug, which augments camera driving scenes in 3D space. We first utilize Neural Radiance Fields (NeRF) to reconstruct 3D models of background and foreground objects. Augmented driving scenes can then be obtained by placing the 3D objects, with adapted location and orientation, in pre-defined valid regions of the backgrounds. As such, the training database can be effectively scaled up. However, 3D object modeling is constrained by image quality and the limited viewpoints. To overcome these problems, we modify the original NeRF by introducing a geometric rectified loss and a symmetric-aware training strategy. We evaluate our method on the camera-only monocular 3D detection task on the Waymo and nuScenes datasets. The proposed data augmentation approach yields gains in detection accuracy on both Waymo and nuScenes. Furthermore, the constructed 3D models serve as digital driving assets and can be recycled for different detectors or other 3D perception tasks. |
Persistent Identifier | http://hdl.handle.net/10722/352484 |
ISSN | 0302-9743 (2023 SCImago Journal Rankings: 0.606) |
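The abstract describes a two-stage pipeline: reconstruct 3D assets with NeRF, then compose augmented scenes by placing those assets at adapted locations and orientations inside pre-defined valid regions of a background. The paper's actual implementation is not reproduced here; the following is only a minimal sketch of the placement step, with all names (`augment_scene`, the region dictionary keys) invented for illustration.

```python
import random


def augment_scene(background, objects, valid_regions, rng=None):
    """Sketch of Drive-3DAug-style scene composition (hypothetical API).

    Each reconstructed 3D object is assigned a sampled pose (x, y, yaw)
    inside one of the pre-defined valid regions of the background.
    """
    rng = rng or random.Random(0)
    placements = []
    for obj in objects:
        region = rng.choice(valid_regions)  # pick a valid placement region
        placements.append({
            "object": obj,
            "x": rng.uniform(region["x_min"], region["x_max"]),  # adapted location
            "y": rng.uniform(region["y_min"], region["y_max"]),
            "yaw": rng.uniform(-3.14159, 3.14159),               # adapted orientation
        })
    return {"background": background, "placements": placements}
```

In the paper itself the composited images are then rendered from the NeRF models and fed to a monocular 3D detector; this sketch only illustrates the pose-sampling idea.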
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tong, Wenwen | - |
dc.contributor.author | Xie, Jiangwei | - |
dc.contributor.author | Li, Tianyu | - |
dc.contributor.author | Li, Yang | - |
dc.contributor.author | Deng, Hanming | - |
dc.contributor.author | Dai, Bo | - |
dc.contributor.author | Lu, Lewei | - |
dc.contributor.author | Zhao, Hao | - |
dc.contributor.author | Yan, Junchi | - |
dc.contributor.author | Li, Hongyang | - |
dc.date.accessioned | 2024-12-16T03:59:22Z | - |
dc.date.available | 2024-12-16T03:59:22Z | - |
dc.date.issued | 2025 | - |
dc.identifier.citation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2025, v. 15036 LNCS, p. 46-63 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352484 | - |
dc.description.abstract | Driving scenes are so diverse and complicated that it is impossible to collect all cases through human effort alone. While data augmentation is an effective technique for enriching training data, existing methods for camera data in autonomous driving applications are confined to the 2D image plane, which may not optimally increase data diversity in 3D real-world scenarios. To this end, we propose a 3D data augmentation approach termed Drive-3DAug, which augments camera driving scenes in 3D space. We first utilize Neural Radiance Fields (NeRF) to reconstruct 3D models of background and foreground objects. Augmented driving scenes can then be obtained by placing the 3D objects, with adapted location and orientation, in pre-defined valid regions of the backgrounds. As such, the training database can be effectively scaled up. However, 3D object modeling is constrained by image quality and the limited viewpoints. To overcome these problems, we modify the original NeRF by introducing a geometric rectified loss and a symmetric-aware training strategy. We evaluate our method on the camera-only monocular 3D detection task on the Waymo and nuScenes datasets. The proposed data augmentation approach yields gains in detection accuracy on both Waymo and nuScenes. Furthermore, the constructed 3D models serve as digital driving assets and can be recycled for different detectors or other 3D perception tasks. | - |
dc.language | eng | - |
dc.relation.ispartof | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.subject | 3D Perception | - |
dc.subject | Autonomous Driving | - |
dc.subject | Data Augmentation | - |
dc.subject | NeRF | - |
dc.title | 3D Data Augmentation for Driving Scenes on Camera | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/978-981-97-8508-7_4 | - |
dc.identifier.scopus | eid_2-s2.0-85209359372 | - |
dc.identifier.volume | 15036 LNCS | - |
dc.identifier.spage | 46 | - |
dc.identifier.epage | 63 | - |
dc.identifier.eissn | 1611-3349 | - |