Artificial intelligence powered registration of volumetric medical images acquired from different medical systems enabling fast and accurate surgical planning


Grant Data
Project Title
Artificial intelligence powered registration of volumetric medical images acquired from different medical systems enabling fast and accurate surgical planning
Principal Investigator
Dr Cheung, Jason Pui Yin (Principal Investigator (PI))
Co-Investigator(s)
Miss ZHAO Moxin (Co-Investigator)
Dr Zhang Teng Grace (Co-Investigator)
Duration
12
Start Date
2022-06-15
Amount
150000
Keywords
CT, Registration, Surgical planning, Vertebrae
Discipline
Orthopaedics/Traumatology
HKU Project Code
202111160033
Grant Type
Seed Fund for PI Research – Translational and Applied Research
Funding Year
2021
Status
On-going
Objectives
Objectives of the proposal

1. To develop a fast and accurate algorithm to register medical images (MIs) acquired from different imaging equipment.
2. To quantitatively validate the registration result by comparing running time and final point cloud distances with other methods.
3. To develop software with a user-friendly interface to help clinicians and researchers use the system.

Background and other information

This project aims to develop a system that efficiently registers medical images (MIs) of rigid volumes acquired from different imaging equipment, thereby enabling computer-assisted surgery and disease progression tracking. The purpose of medical image registration (MIR) is to align two volumetric MIs so that the information they contain can be related. MIR is essential in both research and clinical practice. In research, MIR can be used to track disease progression or how tissue changes with ageing. In clinical settings, MIR supports both diagnosis and treatment, for example 1) liver cancer, heart disease, and Alzheimer's disease; and 2) computer-assisted surgery and I-125 implants.

The iterative closest point (ICP) algorithm is widely used in volumetric (3D) MIR for rigid tissues such as bone. ICP incrementally brings two point clouds, a moving (source) cloud and a target (fixed) cloud, into closer and closer alignment. In this MIR, the point clouds fed into the ICP algorithm are constructed from MIs. However, registration is time-consuming because the number of points is large, and the result is inaccurate if the point clouds are built from simply binarized image slices, since the anatomical marker points are submerged among noise points. We therefore propose a weakly supervised learning model that extracts contours and uses the contour points to generate the point clouds. The contour points act as anatomical landmarks, which improves accuracy.
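The vanilla ICP loop described above can be sketched in a few lines of NumPy/SciPy. This is an illustrative, generic implementation for context, not the project's code; the function names and signatures are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch / SVD solution for paired points)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c


def icp(moving, target, max_iters=50, tol=1e-6):
    """Vanilla ICP: alternate closest-point matching with a rigid fit
    until the mean match distance stops improving."""
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    pts, prev_err = moving.copy(), np.inf
    for _ in range(max_iters):
        dists, idx = tree.query(pts)  # match each point to its nearest target point
        R, t = best_rigid_transform(pts, target[idx])
        pts = pts @ R.T + t           # apply the incremental motion
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```

With a good initial transformation the loop converges in a few iterations; with a poor one the closest-point matches are wrong and it settles into a local minimum, which is exactly the failure mode the initial-transformation model in this proposal is meant to avoid.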
In addition, the number of points is reduced, which shortens the running time. Unlike a supervised learning model, where every image needs a ground-truth label for training, the weakly supervised model requires only a small number of perfect contours [1]. Labels for the remaining images can be generated by a rule-based algorithm; although these labels are imperfect, they are still usable for training the model.

The choice of ICP's initial transformation is vital: if the initial distance between the two point clouds is large, the algorithm fails. We therefore propose a supervised learning model to estimate the initial transformation, improving the accuracy of the outcome. Supervised learning is a type of deep learning in which each example pairs an input object with a label; in this setting, the inputs are moving point clouds and the labels are target point clouds.

Furthermore, MIs from different imaging equipment have different image resolutions, yielding point clouds of varying density. Vanilla ICP ignores these resolution differences, leading to inaccurate registration results, so we improve it by appropriately voxelizing the dense point cloud.

We will therefore develop a fast and accurate registration system that solves these problems and facilitates use in both research and the clinic, as follows.

Objective 1 is to develop a fast and accurate algorithm to register MIs from different systems. The pipeline includes three steps (Fig. 1):
a) To develop a weakly supervised learning model to extract bone contours from MI slices. Original MIs, a small batch of MIs with contour labels, and imperfect contour masks obtained by our rule-based algorithm are used to train the model. The output is the bone contour point images.
b) To develop a supervised learning model for rough point cloud registration, producing a transformation matrix that combines a rotation matrix and a translation matrix. The model's inputs are moving point clouds constructed from the contours obtained in a), and the labels are target point clouds constructed from the contours obtained in a). The output is the transformation matrix combining the rotation and translation matrices.
c) To improve vanilla iterative closest point (ICP) registration so as to reduce registration time and improve registration accuracy, and then obtain the final rotation and translation matrices. The rotation and translation matrices obtained in b) serve as the initial transformation.

Objective 2 is to quantitatively validate the registration outcome. Running time, registered point cloud distances, and the error of real-synthesized point cloud pair registration can be used to assess the algorithm. The proposed algorithm will be compared with vanilla ICP using whole point clouds and vanilla ICP using contour point clouds.

Objective 3 is to develop software embedding the registration algorithm. Its functions should include uploading target images, uploading moving images, exporting the rotation matrix, exporting the translation matrix, exporting registered point cloud files in PLY format, and exporting the loss.
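Three of the supporting pieces above admit compact sketches: voxel-grid downsampling to equalize point densities between scanners of different resolution, the mean closest-point distance usable as a quantitative metric for Objective 2, and an ASCII PLY export for Objective 3. These are generic illustrations under our own naming, not the project's actual API.

```python
import numpy as np
from scipy.spatial import cKDTree


def voxel_downsample(points, voxel_size):
    """Replace all points falling in the same voxel by the voxel centroid,
    so clouds from scanners of different resolution get comparable densities."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out


def registration_error(registered, target):
    """Mean nearest-neighbour distance from the registered cloud to the
    target cloud -- one possible point cloud distance metric for Objective 2."""
    dists, _ = cKDTree(target).query(registered)
    return dists.mean()


def write_ply(path, points):
    """Export a point cloud as an ASCII PLY file, as in Objective 3."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n"
                f"element vertex {len(points)}\n"
                "property float x\nproperty float y\nproperty float z\n"
                "end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```

Downsampling the denser cloud before ICP both reduces running time (fewer points per iteration) and removes the density imbalance that biases vanilla ICP's closest-point matches.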