File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website: 10.1002/aisy.202400243
- Scopus: eid_2-s2.0-85197905014
Citations:
- Scopus: 0
Article: EVI-SAM: Robust, Real-Time, Tightly-Coupled Event–Visual–Inertial State Estimation and 3D Dense Mapping
Title | EVI-SAM: Robust, Real-Time, Tightly-Coupled Event–Visual–Inertial State Estimation and 3D Dense Mapping |
---|---|
Authors | Guan, Weipeng; Chen, Peiyu; Zhao, Huibin; Wang, Yu; Lu, Peng |
Keywords | 6-DoF Pose Tracking; event cameras; event-based vision; robotics; simultaneous localization and mapping |
Issue Date | 1-Jan-2024 |
Publisher | Wiley Open Access |
Citation | Advanced Intelligent Systems, 2024 |
Abstract | Event cameras demonstrate substantial potential in handling challenging situations, such as motion blur and high dynamic range. Herein, event–visual–inertial state estimation and 3D dense mapping (EVI-SAM) are introduced to tackle the problem of pose tracking and 3D dense reconstruction using a monocular event camera. A novel event-based hybrid tracking framework is designed to estimate the pose, leveraging the robustness of feature matching and the precision of direct alignment. Specifically, an event-based 2D–2D alignment is developed to construct the photometric constraint, which is tightly integrated with the event-based reprojection constraint. The mapping module recovers the dense and colorful depth of the scene through the image-guided event-based mapping method. Subsequently, the appearance, texture, and surface mesh of the 3D scene can be reconstructed by fusing the dense depth maps from multiple viewpoints using truncated signed distance function fusion. To the best of our knowledge, this is the first nonlearning work to realize event-based dense mapping. Numerical evaluations are performed on publicly available datasets, which qualitatively and quantitatively demonstrate the superior performance of our method. EVI-SAM effectively balances accuracy and robustness while maintaining computational efficiency, showcasing superior pose tracking and dense mapping performance in challenging scenarios. |
Persistent Identifier | http://hdl.handle.net/10722/348558 |
ISSN | 2640-4567 (2023 Impact Factor: 6.8) |
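The abstract describes a hybrid tracking objective that tightly couples an event-based photometric (direct 2D–2D alignment) constraint with an event-based reprojection constraint. As a rough, hedged illustration of how such a combined residual can be formed, the Python sketch below sums a feature reprojection term and an event-image photometric term over a set of landmarks; the camera model, nearest-neighbour sampling, fixed weights, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the EVI-SAM code): a hybrid tracking cost that combines
# an event-based photometric residual with a feature reprojection residual.
import numpy as np

def project(K, T_cw, p_w):
    """Project a 3D world point into the image (3x3 intrinsics K, 4x4 world-to-camera pose T_cw).
    Assumes the point lies in front of the camera."""
    p_c = T_cw[:3, :3] @ p_w + T_cw[:3, 3]
    uvw = K @ p_c
    return uvw[:2] / uvw[2]

def sample(img, uv):
    """Nearest-neighbour sample of an image at a (u, v) location, clamped to the borders."""
    h, w = img.shape
    u = min(max(int(round(uv[0])), 0), w - 1)
    v = min(max(int(round(uv[1])), 0), h - 1)
    return float(img[v, u])

def hybrid_cost(T_cw, K, ref_event_img, cur_event_img, points_w, obs_uv,
                lambda_photo=1.0, lambda_reproj=1.0):
    """Sum of a reprojection term and an event-image photometric term over all landmarks."""
    cost = 0.0
    for p_w, uv_obs in zip(points_w, obs_uv):
        uv_pred = project(K, T_cw, p_w)
        # Reprojection residual: predicted vs. tracked 2D feature location.
        cost += lambda_reproj * float(np.sum((uv_pred - uv_obs) ** 2))
        # Photometric residual: event-image value at the predicted location in the
        # current frame vs. the value at the tracked location in the reference frame.
        cost += lambda_photo * (sample(cur_event_img, uv_pred)
                                - sample(ref_event_img, uv_obs)) ** 2
    return cost
```

In practice such a cost would be minimised over the pose with a nonlinear least-squares solver and robust loss functions; the fixed weights here merely stand in for that machinery.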
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Guan, Weipeng | - |
dc.contributor.author | Chen, Peiyu | - |
dc.contributor.author | Zhao, Huibin | - |
dc.contributor.author | Wang, Yu | - |
dc.contributor.author | Lu, Peng | - |
dc.date.accessioned | 2024-10-10T00:31:34Z | - |
dc.date.available | 2024-10-10T00:31:34Z | - |
dc.date.issued | 2024-01-01 | - |
dc.identifier.citation | Advanced Intelligent Systems, 2024 | - |
dc.identifier.issn | 2640-4567 | - |
dc.identifier.uri | http://hdl.handle.net/10722/348558 | - |
dc.description.abstract | Event cameras demonstrate substantial potential in handling challenging situations, such as motion blur and high dynamic range. Herein, event–visual–inertial state estimation and 3D dense mapping (EVI-SAM) are introduced to tackle the problem of pose tracking and 3D dense reconstruction using a monocular event camera. A novel event-based hybrid tracking framework is designed to estimate the pose, leveraging the robustness of feature matching and the precision of direct alignment. Specifically, an event-based 2D–2D alignment is developed to construct the photometric constraint, which is tightly integrated with the event-based reprojection constraint. The mapping module recovers the dense and colorful depth of the scene through the image-guided event-based mapping method. Subsequently, the appearance, texture, and surface mesh of the 3D scene can be reconstructed by fusing the dense depth maps from multiple viewpoints using truncated signed distance function fusion. To the best of our knowledge, this is the first nonlearning work to realize event-based dense mapping. Numerical evaluations are performed on publicly available datasets, which qualitatively and quantitatively demonstrate the superior performance of our method. EVI-SAM effectively balances accuracy and robustness while maintaining computational efficiency, showcasing superior pose tracking and dense mapping performance in challenging scenarios. | -
dc.language | eng | - |
dc.publisher | Wiley Open Access | - |
dc.relation.ispartof | Advanced Intelligent Systems | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject | 6-DoF Pose Tracking | - |
dc.subject | event cameras | - |
dc.subject | event-based vision | -
dc.subject | robotics | - |
dc.subject | simultaneous localization and mapping | - |
dc.title | EVI-SAM: Robust, Real-Time, Tightly-Coupled Event–Visual–Inertial State Estimation and 3D Dense Mapping | - |
dc.type | Article | - |
dc.identifier.doi | 10.1002/aisy.202400243 | - |
dc.identifier.scopus | eid_2-s2.0-85197905014 | - |
dc.identifier.eissn | 2640-4567 | - |
dc.identifier.issnl | 2640-4567 | - |
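The mapping description in this record ends with truncated signed distance function (TSDF) fusion of dense depth maps from multiple viewpoints. The sketch below shows the standard TSDF update such fusion typically relies on: each depth map is integrated into a voxel grid as a truncated signed distance with a running per-voxel weighted average. The grid layout, truncation distance, and unit observation weights are assumptions for illustration; this is not the EVI-SAM code.

```python
# Minimal illustrative sketch of TSDF fusion of one dense depth map into a voxel volume.
import numpy as np

def fuse_depth_into_tsdf(tsdf, weights, origin, voxel_size, depth, K, T_cw, trunc=0.05):
    """Integrate one dense depth map (3x3 intrinsics K, 4x4 world-to-camera pose T_cw)
    into a TSDF volume `tsdf` with a matching per-voxel weight grid `weights`."""
    nx, ny, nz = tsdf.shape
    # Voxel centers in world coordinates.
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    # Transform voxel centers into the camera frame and project into the depth map.
    pts_c = (T_cw[:3, :3] @ pts_w.T).T + T_cw[:3, 3]
    z = pts_c[:, 2]
    in_front = z > 1e-6
    uvw = (K @ pts_c.T).T
    u = np.zeros(z.shape, dtype=int)
    v = np.zeros(z.shape, dtype=int)
    u[in_front] = np.round(uvw[in_front, 0] / z[in_front]).astype(int)
    v[in_front] = np.round(uvw[in_front, 1] / z[in_front]).astype(int)
    h, w = depth.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], -1.0)
    # Signed distance along the viewing ray, truncated and normalised to [-1, 1].
    sdf = d - z
    keep = valid & (d > 0) & (sdf > -trunc)
    sdf = np.clip(sdf, -trunc, trunc) / trunc
    # Running weighted average per voxel (weight 1 per observation).
    flat_tsdf, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    new_w = flat_w[keep] + 1.0
    flat_tsdf[keep] = (flat_tsdf[keep] * flat_w[keep] + sdf[keep]) / new_w
    flat_w[keep] = new_w
    return flat_tsdf.reshape(tsdf.shape), flat_w.reshape(weights.shape)
```

A surface mesh of the kind mentioned in the abstract would then be extracted from the fused volume, for example with a marching-cubes step.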