
Postgraduate thesis: Toward robust, real-time, radiance map reconstruction with multi-sensor fusion

Title: Toward robust, real-time, radiance map reconstruction with multi-sensor fusion
Authors: Lin, Jiarong (林家荣)
Issue Date: 2023
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Lin, J. [林家荣]. (2023). Toward robust, real-time, radiance map reconstruction with multi-sensor fusion. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: Simultaneous Localization and Mapping (SLAM) plays a crucial role in robotics and automation, providing solutions for localization, feedback control, and environment mapping. It is widely used in Autonomous Ground Vehicles (AGVs), drones, and self-driving cars, and has applications in Augmented Reality (AR), Virtual Reality (VR), and Building Information Modeling (BIM). SLAM systems generate maps representing the robot's environment; the type of map produced depends on the sensor inputs, computational resources, and specific system requirements. These maps can be sparse feature maps, semi-dense maps, dense point cloud maps, triangle meshes, or 3D radiance maps, and the choice depends on the application and scenario. For example, sparse visual feature maps are suitable for camera-based localization, where sparse image features are used to calculate the camera's pose. Dense point cloud maps are valuable for robot navigation and obstacle avoidance because they capture detailed geometric structures. Radiance maps, which contain both geometry and radiance information, are employed in domains such as mobile mapping, AR/VR, video gaming, 3D simulation, and surveying, where both geometric structure and texture are required to create realistic virtual environments. This thesis focuses on the robust, real-time reconstruction of radiance maps, encompassing both geometric structure and radiance information. The initial chapters develop LiDAR-based SLAM systems and improve their robustness and accuracy in reconstructing the geometric structure of the environment. First, the thesis introduces Loam_livox, the first LiDAR odometry and mapping system in the academic literature specifically designed for solid-state LiDARs. Building on this contribution, the thesis presents a decentralized framework for simultaneous calibration, localization, and mapping with multiple LiDARs. The framework is based on an extended Kalman filter but is specifically formulated for decentralized implementation. The subsequent chapters shift toward radiance map reconstruction through the fusion of LiDAR and camera. First, a novel open-source multi-sensor fusion framework, R2LIVE, is proposed; it combines LiDAR, inertial, and visual data and exhibits robustness in various challenging scenarios. Building on the foundation laid by R2LIVE, the thesis presents R3LIVE++, a LiDAR-inertial-visual fusion framework designed to achieve robust and accurate state estimation while simultaneously reconstructing radiance maps in real time. R3LIVE++ comprises two subsystems: a LiDAR-inertial odometry (LIO) subsystem that leverages LiDAR measurements to reconstruct the geometric structure, and a visual-inertial odometry (VIO) subsystem that recovers radiance information from the input images. Because R3LIVE++ stores radiance information at map points, some radiance information can be lost due to the finite density of the point cloud. To overcome this limitation and recover radiance information without loss, the thesis introduces ImMesh, an online mesh reconstruction framework that reconstructs a surface triangle mesh in real time. By applying texture from camera-captured images onto the mesh facets, the radiance information captured by the camera is preserved in the map without loss of fidelity.
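The point-density limitation that motivates the move from point-based radiance maps to textured meshes can be illustrated with a minimal sketch (the function names and values below are invented for illustration and are not from the thesis): a point-based map can only return radiance stored at discrete points, whereas a textured mesh facet can interpolate radiance at any location on its surface.

```python
import math

def nearest_point_radiance(query, points):
    """Point-based map: return the radiance of the closest stored map point.
    `points` is a list of ((x, y), radiance) pairs."""
    return min(points, key=lambda p: math.dist(query, p[0]))[1]

def barycentric_radiance(query, tri):
    """Mesh-based map: interpolate per-vertex radiance over a 2D triangle
    using barycentric coordinates, so radiance varies continuously."""
    (a, ra), (b, rb), (c, rc) = tri
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (query[0] - c[0]) + (c[0] - b[0]) * (query[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (query[0] - c[0]) + (a[0] - c[0]) * (query[1] - c[1])) / det
    w2 = 1.0 - w0 - w1
    return w0 * ra + w1 * rb + w2 * rc

# Three map points at triangle vertices, with radiance 0, 0, and 1.
tri = [((0.0, 0.0), 0.0), ((1.0, 0.0), 0.0), ((0.0, 1.0), 1.0)]
centroid = (1.0 / 3.0, 1.0 / 3.0)

# The point map snaps to one vertex and loses the in-between radiance;
# the mesh interpolation recovers a blended value at the same location.
print(nearest_point_radiance(centroid, tri))   # radiance of nearest vertex
print(barycentric_radiance(centroid, tri))     # smoothly interpolated radiance
```

This is only a 2D toy of the underlying trade-off: between stored points the point-based map has no radiance information at all, which is the loss ImMesh avoids by texturing mesh facets.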
Degree: Doctor of Philosophy
Subject: Robot vision
Robots - Control systems
Robots - Motion
Wireless localization
Dept/Program: Mechanical Engineering
Persistent Identifier: http://hdl.handle.net/10722/335567

 

DC Field | Value | Language
dc.contributor.authorLin, Jiarong-
dc.contributor.author林家荣-
dc.date.accessioned2023-11-30T06:22:38Z-
dc.date.available2023-11-30T06:22:38Z-
dc.date.issued2023-
dc.identifier.citationLin, J. [林家荣]. (2023). Toward robust, real-time, radiance map reconstruction with multi-sensor fusion. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.-
dc.identifier.urihttp://hdl.handle.net/10722/335567-
dc.languageeng-
dc.publisherThe University of Hong Kong (Pokfulam, Hong Kong)-
dc.relation.ispartofHKU Theses Online (HKUTO)-
dc.rightsThe author retains all proprietary rights, (such as patent rights) and the right to use in future works.-
dc.rightsThis work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.-
dc.subject.lcshRobot vision-
dc.subject.lcshRobots - Control systems-
dc.subject.lcshRobots - Motion-
dc.subject.lcshWireless localization-
dc.titleToward robust, real-time, radiance map reconstruction with multi-sensor fusion-
dc.typePG_Thesis-
dc.description.thesisnameDoctor of Philosophy-
dc.description.thesislevelDoctoral-
dc.description.thesisdisciplineMechanical Engineering-
dc.description.naturepublished_or_final_version-
dc.date.hkucongregation2024-
dc.identifier.mmsid991044745658203414-
