Article: Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation with Endoscopy Images of Gastrointestinal Tract

Title: Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation with Endoscopy Images of Gastrointestinal Tract
Authors: Wang, Shuai; Cong, Yang; Zhu, Hancan; Chen, Xianyi; Qu, Liangqiong; Fan, Huijie; Zhang, Qiang; Liu, Mingxia
Keywords: endoscopy image
fully convolutional network
gastrointestinal tract
lesion segmentation
Multi-scale Context
Issue Date: 2021
Citation: IEEE Journal of Biomedical and Health Informatics, 2021, v. 25, n. 2, p. 514-525
Abstract: Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves 74% and 85% mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
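The abstract reports results as mean intersection over union (mIoU), the standard segmentation metric: per-class IoU (overlap divided by union of predicted and ground-truth masks) averaged across classes. As a reference for how that figure is computed, here is a minimal sketch; the function name and toy masks are illustrative, not from the paper:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes.

    pred, target: integer label maps of identical shape.
    Classes absent from both maps are skipped so they
    do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class appears in neither map
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 label maps: 0 = background, 1 = lesion
pred = np.array([[0, 1],
                 [1, 1]])
target = np.array([[0, 1],
                   [0, 1]])
print(mean_iou(pred, target, num_classes=2))
```

Here class 0 has IoU 1/2 and class 1 has IoU 2/3, so the mIoU is their average, about 0.583. The paper's reported 74% and 85% are this quantity computed over the EndoVis-Ab and CVC-ClinicDB test images, respectively.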
Persistent Identifier: http://hdl.handle.net/10722/325515
ISSN: 2168-2194
2021 Impact Factor: 7.021
2020 SCImago Journal Rankings: 1.293

 

DC Field: Value
dc.contributor.author: Wang, Shuai
dc.contributor.author: Cong, Yang
dc.contributor.author: Zhu, Hancan
dc.contributor.author: Chen, Xianyi
dc.contributor.author: Qu, Liangqiong
dc.contributor.author: Fan, Huijie
dc.contributor.author: Zhang, Qiang
dc.contributor.author: Liu, Mingxia
dc.date.accessioned: 2023-02-27T07:33:55Z
dc.date.available: 2023-02-27T07:33:55Z
dc.date.issued: 2021
dc.identifier.citation: IEEE Journal of Biomedical and Health Informatics, 2021, v. 25, n. 2, p. 514-525
dc.identifier.issn: 2168-2194
dc.identifier.uri: http://hdl.handle.net/10722/325515
dc.description.abstract: Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves 74% and 85% mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
dc.language: eng
dc.relation.ispartof: IEEE Journal of Biomedical and Health Informatics
dc.subject: endoscopy image
dc.subject: fully convolutional network
dc.subject: gastrointestinal tract
dc.subject: lesion segmentation
dc.subject: Multi-scale Context
dc.title: Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation with Endoscopy Images of Gastrointestinal Tract
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/JBHI.2020.2997760
dc.identifier.pmid: 32750912
dc.identifier.scopus: eid_2-s2.0-85100823832
dc.identifier.volume: 25
dc.identifier.issue: 2
dc.identifier.spage: 514
dc.identifier.epage: 525
dc.identifier.eissn: 2168-2208
