Article: Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation

Title: Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation
Authors: Wang, Shujun; Yu, Lequan; Yang, Xin; Fu, Chi Wing; Heng, Pheng Ann
Keywords: deep learning; optic cup segmentation; domain adaptation; adversarial learning; optic disc segmentation
Issue Date: 2019
Citation: IEEE Transactions on Medical Imaging, 2019, v. 38, n. 11, p. 2485-2495
Abstract: Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and optic cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks have demonstrated promising progress in joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based output space adversarial learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as the backbone. Considering the specific morphology of the OD and OC, a novel morphology-aware segmentation loss is proposed to guide the network to generate accurate and smooth segmentations. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain-shift challenge by encouraging segmentations in the target domain to be similar to those in the source domain. Since a whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved first place in the OD and OC segmentation tasks in the MICCAI 2018 Retinal Fundus Glaucoma Challenge.
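The training idea summarized in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch-style illustration, not the authors' released implementation: TinySegNet stands in for the paper's lightweight segmentation backbone, a plain Dice loss stands in for the morphology-aware loss, and all hyperparameters (learning rates, the adversarial weight lam_adv) are illustrative. It shows the two alternating updates of output-space adversarial learning: the segmenter is trained with a supervised loss on source images plus an adversarial term that pushes its target-domain predictions to look source-like, while a fully convolutional patch discriminator scores each local patch of the predicted masks as source or target.

```python
# Hypothetical sketch of patch-based output-space adversarial training.
# Architectures, losses, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for the lightweight segmentation backbone.
    Outputs a 2-channel probability map (optic disc, optic cup)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 2, 1),
        )
    def forward(self, x):
        return torch.sigmoid(self.body(x))

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator: one source/target logit per local
    patch of the output-space segmentation map (fine-grained feedback)."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=2, padding=1),   # grid of patch-wise logits
        )
    def forward(self, seg_map):
        return self.body(seg_map)

def dice_loss(pred, target, eps=1e-6):
    """Overlap-based source-domain loss (simple proxy for the
    morphology-aware segmentation loss)."""
    inter = (pred * target).sum(dim=(2, 3))
    union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

seg_net, disc = TinySegNet(), PatchDiscriminator()
opt_s = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam_adv = 0.01  # weight of the adversarial term (illustrative value)

def train_step(src_img, src_mask, tgt_img):
    # 1) Segmenter update: supervised loss on labelled source images plus an
    #    adversarial term making target predictions look source-like to D.
    src_pred = seg_net(src_img)
    tgt_pred = seg_net(tgt_img)
    d_tgt = disc(tgt_pred)
    loss_s = dice_loss(src_pred, src_mask) + lam_adv * bce(d_tgt, torch.ones_like(d_tgt))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()

    # 2) Discriminator update: source predictions -> 1, target predictions -> 0.
    d_src = disc(seg_net(src_img).detach())
    d_tgt = disc(seg_net(tgt_img).detach())
    loss_d = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_s.item(), loss_d.item()

# Example usage with random tensors (batch of two 256x256 fundus crops):
# train_step(torch.rand(2, 3, 256, 256),
#            torch.randint(0, 2, (2, 2, 256, 256)).float(),
#            torch.rand(2, 3, 256, 256))
```

Because the discriminator emits a grid of patch-wise logits rather than a single image-level score, its feedback is localized, which is what lets the adversarial signal shape fine segmentation details instead of only the global mask shape.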
Persistent Identifier: http://hdl.handle.net/10722/299595
ISSN: 0278-0062
2023 Impact Factor: 8.9
2023 SCImago Journal Rankings: 3.703
ISI Accession Number ID: WOS:000494433300001

 

DC Field | Value | Language
dc.contributor.author | Wang, Shujun | -
dc.contributor.author | Yu, Lequan | -
dc.contributor.author | Yang, Xin | -
dc.contributor.author | Fu, Chi Wing | -
dc.contributor.author | Heng, Pheng Ann | -
dc.date.accessioned | 2021-05-21T03:34:45Z | -
dc.date.available | 2021-05-21T03:34:45Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | IEEE Transactions on Medical Imaging, 2019, v. 38, n. 11, p. 2485-2495 | -
dc.identifier.issn | 0278-0062 | -
dc.identifier.uri | http://hdl.handle.net/10722/299595 | -
dc.description.abstract | Glaucoma is a leading cause of irreversible blindness. Accurate segmentation of the optic disc (OD) and optic cup (OC) from fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks have demonstrated promising progress in joint OD and OC segmentation. However, affected by the domain shift among different datasets, deep networks are severely hindered in generalizing across different scanners and institutions. In this paper, we present a novel patch-based output space adversarial learning framework (pOSAL) to jointly and robustly segment the OD and OC from different fundus image datasets. We first devise a lightweight and efficient segmentation network as the backbone. Considering the specific morphology of the OD and OC, a novel morphology-aware segmentation loss is proposed to guide the network to generate accurate and smooth segmentations. Our pOSAL framework then exploits unsupervised domain adaptation to address the domain-shift challenge by encouraging segmentations in the target domain to be similar to those in the source domain. Since a whole-segmentation-based adversarial loss is insufficient to drive the network to capture segmentation details, we further design pOSAL in a patch-based fashion to enable fine-grained discrimination on local segmentation details. We extensively evaluate our pOSAL framework and demonstrate its effectiveness in improving segmentation performance on three public retinal fundus image datasets, i.e., Drishti-GS, RIM-ONE-r3, and REFUGE. Furthermore, our pOSAL framework achieved first place in the OD and OC segmentation tasks in the MICCAI 2018 Retinal Fundus Glaucoma Challenge. | -
dc.language | eng | -
dc.relation.ispartof | IEEE Transactions on Medical Imaging | -
dc.subject | deep learning | -
dc.subject | optic cup segmentation | -
dc.subject | domain adaptation | -
dc.subject | adversarial learning | -
dc.subject | Optic disc segmentation | -
dc.title | Patch-Based Output Space Adversarial Learning for Joint Optic Disc and Cup Segmentation | -
dc.type | Article | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/TMI.2019.2899910 | -
dc.identifier.pmid | 30794170 | -
dc.identifier.scopus | eid_2-s2.0-85069862302 | -
dc.identifier.volume | 38 | -
dc.identifier.issue | 11 | -
dc.identifier.spage | 2485 | -
dc.identifier.epage | 2495 | -
dc.identifier.eissn | 1558-254X | -
dc.identifier.isi | WOS:000494433300001 | -
