Conference Paper: 3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks
Title | 3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks |
---|---|
Authors | Du, R; Vardhanabhuti, V |
Keywords | Transfer learning; Large dataset; data mining |
Issue Date | 2020 |
Citation | Proceedings of the International Conference on Medical Imaging with Deep Learning (MIDL) 2020, Virtual Meeting (online), Montreal, Canada, 6-8 July 2020 |
Abstract | Training deep convolution neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet have become important in increasing accuracy and lowering the number of training samples required. However, as of now, there has not been a popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that contain information on the appearance of the scans, in order to train a medical domain 3D convolution neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method and extracted labels from a large cancer imaging dataset from TCIA to train a medical domain 3D deep convolution neural network. We evaluated the effectiveness of using our proposed network in transfer learning for a liver segmentation task and found that our network achieved superior segmentation performance (DICE=90.0%) compared to training from scratch (DICE=41.8%). Our proposed network shows promising results for use as a backbone network for transfer learning to other tasks. Our approach, along with utilising our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets. |
Description | Poster Session #1 - no. P114; MIDL is organized by the MIDL Foundation |
Persistent Identifier | http://hdl.handle.net/10722/284689 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Du, R | - |
dc.contributor.author | Vardhanabhuti, V | - |
dc.date.accessioned | 2020-08-07T09:01:18Z | - |
dc.date.available | 2020-08-07T09:01:18Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Proceedings of the International Conference on Medical Imaging with Deep Learning (MIDL) 2020, Virtual Meeting (online), Montreal, Canada, 6-8 July 2020 | - |
dc.identifier.uri | http://hdl.handle.net/10722/284689 | - |
dc.description | Poster Session #1 - no. P114 | - |
dc.description | MIDL is organized by the MIDL Foundation | - |
dc.description.abstract | Training deep convolution neural networks requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet have become important in increasing accuracy and lowering the number of training samples required. However, as of now, there has not been a popular dataset for training on 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method for extracting labels from DICOM metadata that contain information on the appearance of the scans, in order to train a medical domain 3D convolution neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method and extracted labels from a large cancer imaging dataset from TCIA to train a medical domain 3D deep convolution neural network. We evaluated the effectiveness of using our proposed network in transfer learning for a liver segmentation task and found that our network achieved superior segmentation performance (DICE=90.0%) compared to training from scratch (DICE=41.8%). Our proposed network shows promising results for use as a backbone network for transfer learning to other tasks. Our approach, along with utilising our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets. | - |
dc.language | eng | - |
dc.relation.ispartof | Medical Imaging Deep Learning 2020 (MIDL) International Conference | - |
dc.subject | Transfer learning | - |
dc.subject | Large dataset | - |
dc.subject | data mining | - |
dc.title | 3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Vardhanabhuti, V: varv@hku.hk | - |
dc.identifier.authority | Vardhanabhuti, V=rp01900 | - |
dc.description.nature | published_or_final_version | - |
dc.identifier.hkuros | 312607 | - |
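The abstract describes deriving training labels (modality, patient view, contrast presence, slice spacing) from standard DICOM metadata tags. The following is a minimal sketch of that idea, not the authors' actual pipeline: it assumes the DICOM header has already been parsed into a plain dictionary (e.g. via pydicom), and the function name `derive_labels` and the specific rules are illustrative assumptions. The view is inferred from the dominant axis of the slice normal, which is the cross product of the row and column direction cosines in the standard `ImageOrientationPatient` tag.

```python
from typing import Dict, List, Optional


def slice_normal(iop: List[float]) -> List[float]:
    # ImageOrientationPatient holds six direction cosines:
    # the first three for image rows, the last three for columns.
    # The slice normal is their cross product.
    r, c = iop[:3], iop[3:]
    return [
        r[1] * c[2] - r[2] * c[1],
        r[2] * c[0] - r[0] * c[2],
        r[0] * c[1] - r[1] * c[0],
    ]


def derive_labels(meta: Dict[str, object]) -> Dict[str, Optional[str]]:
    """Derive simple appearance labels from parsed DICOM metadata (illustrative)."""
    labels: Dict[str, Optional[str]] = {}
    # Modality comes straight from the standard Modality tag, e.g. "CT" or "MR".
    labels["modality"] = meta.get("Modality")  # type: ignore[assignment]
    # View: the dominant component of the slice normal picks the anatomical plane.
    iop = meta.get("ImageOrientationPatient")
    if isinstance(iop, list) and len(iop) == 6:
        n = [abs(x) for x in slice_normal(iop)]
        labels["view"] = ("sagittal", "coronal", "axial")[n.index(max(n))]
    else:
        labels["view"] = None
    # Contrast: a non-empty ContrastBolusAgent tag suggests contrast was given.
    labels["contrast"] = "yes" if meta.get("ContrastBolusAgent") else "no"
    return labels
```

For an axial CT slice, `ImageOrientationPatient` is typically `[1, 0, 0, 0, 1, 0]`, so the normal points along z and the view resolves to "axial". Real headers are noisier than this (oblique scans, missing tags, free-text fields), which is part of why label extraction at scale is non-trivial.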