Conference Paper: FAST-LIVO: Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry

Title: FAST-LIVO: Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry
Authors: Zheng, Chunran; Zhu, Qingyan; Xu, Wei; Liu, Xiyuan; Guo, Qizhi; Zhang, Fu
Issue Date: 26-Dec-2022
Abstract:

To achieve accurate and robust pose estimation in Simultaneous Localization and Mapping (SLAM) tasks, multi-sensor fusion has proven to be an effective solution and thus holds great potential for robotic applications. This paper proposes FAST-LIVO, a fast LiDAR-Inertial-Visual Odometry system that builds on two tightly-coupled, direct odometry subsystems: a VIO subsystem and a LIO subsystem. The LIO subsystem registers the raw points of a new scan (rather than feature points on, e.g., edges or planes) to an incrementally built point-cloud map. The map points are additionally attached with image patches, which the VIO subsystem then uses to align a new image by minimizing direct photometric errors, without extracting any visual features (e.g., ORB or FAST corner features). To further improve VIO robustness and accuracy, a novel outlier rejection method is proposed to reject unstable map points that lie on edges or are occluded in the image view. Experiments are conducted on both open data sequences and data from our customized device. The results show that the proposed system outperforms other counterparts and can handle challenging environments at reduced computational cost. The system supports both multi-line spinning LiDARs and emerging solid-state LiDARs with completely different scanning patterns, and can run in real time on both Intel and ARM processors. We open-source the code and dataset of this work on GitHub (https://github.com/hku-mars/FAST-LIVO) to benefit the robotics community.
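
The core VIO idea in the abstract, minimizing raw intensity differences between image patches attached to LiDAR map points and a newly captured image, can be illustrated with a short sketch. The C++ below is not the authors' implementation (their code is at the GitHub link above); the types and names here (GrayImage, MapPoint, photometric_error) are hypothetical, and it assumes a pinhole camera model and a map point already expressed in the current camera frame. It only evaluates the photometric error for one point at a fixed pose; the actual system minimizes this error over the camera pose, with patch warping and the outlier rejection described above on top.

    // Minimal sketch (not the authors' code) of a direct photometric error:
    // a map point carries a reference image patch, and alignment compares
    // that patch against intensities sampled around the point's projection
    // in a new image. Assumes the projection falls inside the image.
    #include <cmath>
    #include <vector>

    struct GrayImage {               // hypothetical grayscale image container
        int width, height;
        std::vector<float> data;     // row-major intensities
        float at(int u, int v) const { return data[v * width + u]; }
    };

    // Bilinear interpolation at a sub-pixel location (u, v).
    float interpolate(const GrayImage& img, float u, float v) {
        int u0 = static_cast<int>(std::floor(u));
        int v0 = static_cast<int>(std::floor(v));
        float du = u - u0, dv = v - v0;
        return (1 - du) * (1 - dv) * img.at(u0, v0)
             + du * (1 - dv)       * img.at(u0 + 1, v0)
             + (1 - du) * dv       * img.at(u0, v0 + 1)
             + du * dv             * img.at(u0 + 1, v0 + 1);
    }

    struct MapPoint {                // a LiDAR map point with an attached patch
        float pc[3];                 // position in the current camera frame
        std::vector<float> patch;    // reference intensities, patch_size^2 values
    };

    // Sum of squared photometric residuals for one map point, for a pinhole
    // camera with focal lengths (fx, fy) and principal point (cx, cy).
    // patch_size is assumed odd so the patch is centered on the projection.
    float photometric_error(const GrayImage& img, const MapPoint& p,
                            float fx, float fy, float cx, float cy,
                            int patch_size) {
        // Project the point into the new image.
        float u = fx * p.pc[0] / p.pc[2] + cx;
        float v = fy * p.pc[1] / p.pc[2] + cy;
        int half = patch_size / 2;
        float err = 0.f;
        int k = 0;
        for (int dv = -half; dv <= half; ++dv)
            for (int du = -half; du <= half; ++du) {
                float r = interpolate(img, u + du, v + dv) - p.patch[k++];
                err += r * r;        // residual accumulated over the patch
            }
        return err;
    }

    int main() {
        // Toy check: a uniform image and a matching uniform 3x3 patch
        // give exactly zero error at any in-bounds projection.
        GrayImage img{64, 64, std::vector<float>(64 * 64, 100.f)};
        MapPoint p{{0.f, 0.f, 2.f}, std::vector<float>(9, 100.f)};
        float e = photometric_error(img, p, 50.f, 50.f, 32.f, 32.f, 3);
        return e == 0.f ? 0 : 1;
    }

In a full pipeline this residual would be stacked over many visible map points and handed to an optimizer or filter that updates the camera pose estimate; the sketch deliberately leaves that loop out.
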


Persistent Identifier: http://hdl.handle.net/10722/333731

 

DC Field                   Value
dc.contributor.author      Zheng, Chunran
dc.contributor.author      Zhu, Qingyan
dc.contributor.author      Xu, Wei
dc.contributor.author      Liu, Xiyuan
dc.contributor.author      Guo, Qizhi
dc.contributor.author      Zhang, Fu
dc.date.accessioned        2023-10-06T08:38:37Z
dc.date.available          2023-10-06T08:38:37Z
dc.date.issued             2022-12-26
dc.identifier.uri          http://hdl.handle.net/10722/333731
dc.description.abstract    (see Abstract above)
dc.language                eng
dc.relation.ispartof       2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), 23-27 October 2022, Kyoto
dc.title                   FAST-LIVO: Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry
dc.type                    Conference_Paper
dc.identifier.doi          10.1109/IROS47612.2022.9981107
