Appears in Collections: Postgraduate thesis: Augmented reality-assisted surgical guidance for transnasal endoscopy
Field | Value |
---|---|
Title | Augmented reality-assisted surgical guidance for transnasal endoscopy |
Authors | Tong, Hon Sing (唐漢昇) |
Advisors | Kwok, KW |
Issue Date | 2022 |
Publisher | The University of Hong Kong (Pokfulam, Hong Kong) |
Citation | Tong, H. S. [唐漢昇]. (2022). Augmented reality-assisted surgical guidance for transnasal endoscopy. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. |
Abstract | Augmented reality (AR), a technology that enables direct overlay of virtual images onto camera views, sparks a new opportunity to shape the future of the healthcare industry. Compared with conventional surgical navigation, AR is expected to further enhance surgical ergonomics, efficiency, and safety. However, AR-assisted surgical guidance has not yet been adopted into mainstream clinical practice. One of the major concerns is the accuracy and stability of augmentation; inaccurate or unstable overlays not only cause visual disturbance but may also lead to unnecessary complications. Major factors affecting the accuracy and stability of AR-assisted guidance include, but are not limited to, the tracking modality and the quality of the patient's 3D anatomical models. Possible causes of sub-optimal tracking are electromagnetic (EM) tracking interference, optical tracking line-of-sight issues, and poor ergonomics due to bulky tracking tools. Regarding the patient's 3D anatomical models, their accuracy varies with scanning quality, reconstruction software, and human operation. Augmenting poorly segmented models onto the endoscopic view gives rise to an observed depth that is not representative of the real surface during surgery. The resulting visual inconsistency may cause surgeon fatigue; more severely, complications may arise if critical structures such as nerves and vessels are accidentally damaged. In light of these limitations, this thesis explores innovative and alternative sensing solutions for tracking and mapping in endoscopic procedures. The proposed approaches aim to provide more accurate, stable, and ergonomic AR-assisted guidance.
First, a visual-strain fusion-based camera tracking method is proposed, such that reliable pose estimation is maintained even under adverse visual conditions such as the presence of obstacles, complete darkness, and exaggerated lighting. Sparse strain measurements from a single-core fiber Bragg grating (FBG) fiber are used in an online learning process to estimate the tip pose of a soft manipulator. Simultaneously, an eye-in-hand monocular camera mounted at the soft manipulator tip estimates poses by simultaneous localization and mapping (SLAM). Sensor fusion is then performed between the FBG-derived pose and the SLAM-derived pose to give robust pose feedback. Pose estimation experiments were performed in a LEGO® scenario. The mean estimation error was reduced from 3.116 mm to 1.324 mm when fusion was used, compared with estimation by SLAM alone.
Second, a monocular depth estimation method is proposed, with the aim of obtaining depth information in situ without relying on pre-operatively segmented 3D anatomical models. A virtual endoscopic environment is used to train a supervised depth estimation network. During application, a generative adversarial network (GAN) first transfers the image style of the real endoscopic view to a synthetic-like view. Next, the trained depth estimation network predicts framewise depth from the synthetic-like images in real time. For accuracy evaluation, framewise depth was predicted from images captured within a nasal airway phantom and compared with ground truth, achieving a structural similarity (SSIM) value of 0.8310 ± 0.0655. In addition, 3D annotation on the endoscopic view was performed with the nasal airway phantom. The annotations created anchored stably onto target anatomical surfaces even with camera movement. |
Degree | Master of Philosophy |
Subject | Nose - Endoscopic surgery; Computer-assisted surgery; Augmented reality |
Dept/Program | Mechanical Engineering |
Persistent Identifier | http://hdl.handle.net/10722/322829 |
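
The visual-strain fusion summarised in the abstract combines an FBG-derived tip pose with a SLAM-derived camera pose into one robust estimate. The thesis's actual fusion scheme is not detailed in this record; the sketch below is an illustration only, blending two pose estimates with a hypothetical confidence weight `w_slam`, using a convex combination for translation and spherical linear interpolation for rotation.

```python
# Illustrative sketch only (assumed interfaces, not the thesis code): blending an
# FBG-derived tip pose with a SLAM-derived camera pose using a confidence weight.
# Both poses are assumed to be expressed in a common reference frame.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_poses(t_fbg, q_fbg, t_slam, q_slam, w_slam):
    """Blend two pose estimates.

    t_* : (3,) translations in mm; q_* : (4,) quaternions in (x, y, z, w) order.
    w_slam : weight given to the SLAM estimate in [0, 1]; a hypothetical
             confidence that would be lowered under occlusion or poor lighting.
    """
    # Translation: convex combination of the two position estimates.
    t_fused = (1.0 - w_slam) * np.asarray(t_fbg) + w_slam * np.asarray(t_slam)

    # Rotation: spherical linear interpolation from the FBG orientation toward
    # the SLAM orientation, parameterised by the SLAM confidence.
    key_rotations = Rotation.from_quat([q_fbg, q_slam])
    r_fused = Slerp([0.0, 1.0], key_rotations)(w_slam)

    return t_fused, r_fused.as_quat()

# Example: trust the SLAM pose less (w_slam = 0.3) when the view is degraded.
t, q = fuse_poses(t_fbg=[10.0, 2.0, 50.0], q_fbg=[0.0, 0.0, 0.0, 1.0],
                  t_slam=[12.0, 1.0, 52.0], q_slam=[0.0, 0.0, 0.02, 0.9998],
                  w_slam=0.3)
```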
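
The depth estimation work is evaluated with the structural similarity (SSIM) between predicted and ground-truth depth maps, reported as 0.8310 ± 0.0655. A minimal sketch of such a per-frame comparison is shown below, assuming scikit-image's `structural_similarity`; the depth range, frame size, and synthetic data are hypothetical stand-ins for the phantom frames.

```python
# Illustrative sketch only (hypothetical data, not the thesis code): scoring a
# predicted depth map against ground truth with SSIM, the metric quoted above.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def depth_ssim(pred: np.ndarray, gt: np.ndarray) -> float:
    """SSIM between two single-channel depth maps of identical shape."""
    # Normalise both maps with the ground-truth range so data_range is well defined.
    d_min, d_max = gt.min(), gt.max()
    pred_n = np.clip((pred - d_min) / (d_max - d_min), 0.0, 1.0)
    gt_n = (gt - d_min) / (d_max - d_min)
    return ssim(pred_n, gt_n, data_range=1.0)

if __name__ == "__main__":
    # Toy demonstration with synthetic depth maps standing in for phantom frames.
    rng = np.random.default_rng(0)
    gt_depths = [rng.uniform(5.0, 60.0, size=(128, 128)) for _ in range(3)]
    pred_depths = [g + rng.normal(0.0, 2.0, size=g.shape) for g in gt_depths]
    scores = [depth_ssim(p, g) for p, g in zip(pred_depths, gt_depths)]
    # Per-frame scores aggregated as mean ± standard deviation (cf. 0.8310 ± 0.0655).
    print(f"SSIM: {np.mean(scores):.4f} ± {np.std(scores):.4f}")
```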
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Kwok, KW | - |
dc.contributor.author | Tong, Hon Sing | - |
dc.contributor.author | 唐漢昇 | - |
dc.date.accessioned | 2022-11-18T10:40:54Z | - |
dc.date.available | 2022-11-18T10:40:54Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Tong, H. S. [唐漢昇]. (2022). Augmented reality-assisted surgical guidance for transnasal endoscopy. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. | - |
dc.identifier.uri | http://hdl.handle.net/10722/322829 | - |
dc.description.abstract | Augmented reality (AR), a technology that enables direct overlay of virtual images onto camera views, sparks a new opportunity to shape the future of the healthcare industry. Compared with conventional surgical navigation, AR is expected to further enhance surgical ergonomics, efficiency, and safety. However, AR-assisted surgical guidance has not yet been adopted into mainstream clinical practice. One of the major concerns is the accuracy and stability of augmentation; inaccurate or unstable overlays not only cause visual disturbance but may also lead to unnecessary complications. Major factors affecting the accuracy and stability of AR-assisted guidance include, but are not limited to, the tracking modality and the quality of the patient's 3D anatomical models. Possible causes of sub-optimal tracking are electromagnetic (EM) tracking interference, optical tracking line-of-sight issues, and poor ergonomics due to bulky tracking tools. Regarding the patient's 3D anatomical models, their accuracy varies with scanning quality, reconstruction software, and human operation. Augmenting poorly segmented models onto the endoscopic view gives rise to an observed depth that is not representative of the real surface during surgery. The resulting visual inconsistency may cause surgeon fatigue; more severely, complications may arise if critical structures such as nerves and vessels are accidentally damaged. In light of these limitations, this thesis explores innovative and alternative sensing solutions for tracking and mapping in endoscopic procedures. The proposed approaches aim to provide more accurate, stable, and ergonomic AR-assisted guidance. First, a visual-strain fusion-based camera tracking method is proposed, such that reliable pose estimation is maintained even under adverse visual conditions such as the presence of obstacles, complete darkness, and exaggerated lighting. Sparse strain measurements from a single-core fiber Bragg grating (FBG) fiber are used in an online learning process to estimate the tip pose of a soft manipulator. Simultaneously, an eye-in-hand monocular camera mounted at the soft manipulator tip estimates poses by simultaneous localization and mapping (SLAM). Sensor fusion is then performed between the FBG-derived pose and the SLAM-derived pose to give robust pose feedback. Pose estimation experiments were performed in a LEGO® scenario. The mean estimation error was reduced from 3.116 mm to 1.324 mm when fusion was used, compared with estimation by SLAM alone. Second, a monocular depth estimation method is proposed, with the aim of obtaining depth information in situ without relying on pre-operatively segmented 3D anatomical models. A virtual endoscopic environment is used to train a supervised depth estimation network. During application, a generative adversarial network (GAN) first transfers the image style of the real endoscopic view to a synthetic-like view. Next, the trained depth estimation network predicts framewise depth from the synthetic-like images in real time. For accuracy evaluation, framewise depth was predicted from images captured within a nasal airway phantom and compared with ground truth, achieving a structural similarity (SSIM) value of 0.8310 ± 0.0655. In addition, 3D annotation on the endoscopic view was performed with the nasal airway phantom. The annotations created anchored stably onto target anatomical surfaces even with camera movement. | - |
dc.language | eng | - |
dc.publisher | The University of Hong Kong (Pokfulam, Hong Kong) | - |
dc.relation.ispartof | HKU Theses Online (HKUTO) | - |
dc.rights | The author retains all proprietary rights (such as patent rights) and the right to use in future works. | - |
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
dc.subject.lcsh | Nose - Endoscopic surgery | - |
dc.subject.lcsh | Computer-assisted surgery | - |
dc.subject.lcsh | Augmented reality | - |
dc.title | Augmented reality-assisted surgical guidance for transnasal endoscopy | - |
dc.type | PG_Thesis | - |
dc.description.thesisname | Master of Philosophy | - |
dc.description.thesislevel | Master | - |
dc.description.thesisdiscipline | Mechanical Engineering | - |
dc.description.nature | published_or_final_version | - |
dc.date.hkucongregation | 2022 | - |
dc.identifier.mmsid | 991044609097603414 | - |