Conference Paper: AMOS: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation

Title: AMOS: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation
Authors: Ji, Y; Bai, H; Yang, J; Ge, C; Zhu, Y; Zhang, R; Li, Z; Zhang, L; Ma, W; Wang, X; Luo, P
Issue Date: 2022
Publisher: IEEE.
Citation: 36th Neural Information Processing Systems (NeurIPS) Benchmark and Dataset Track (Hybrid), November 28-December 9, 2022
Abstract: Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale benchmark from diverse clinical scenarios. Constrained by the high cost of collecting and labeling 3D medical data, most deep learning models to date are driven by datasets with a limited number of organs of interest or samples, which limits the power of modern deep models and makes it difficult to provide a fully comprehensive and fair assessment of various methods. To mitigate these limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test-bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research.
Persistent Identifier: http://hdl.handle.net/10722/315543
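
For readers who download the released data, the short sketch below shows one common way to score a segmentation model on a benchmark like this: load a ground-truth annotation and a model prediction, then compute a per-organ Dice overlap. Only the count of 15 organ labels comes from the abstract; the NIfTI file layout, the labelsTr/predictions paths, the case ID, and the use of the nibabel library are illustrative assumptions, not details taken from this record.

```python
# Hedged sketch: per-organ Dice evaluation of one segmentation case.
# Assumes volumes ship as NIfTI files with integer organ labels 1..15
# (0 = background); paths and case IDs below are hypothetical.
import numpy as np
import nibabel as nib

def dice(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice overlap for one organ label; 1.0 if both masks are empty."""
    p, g = pred == label, gt == label
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom

# Substitute the real annotation file and your model's output here.
gt = nib.load("labelsTr/amos_0001.nii.gz").get_fdata().astype(np.int16)
pred = nib.load("predictions/amos_0001.nii.gz").get_fdata().astype(np.int16)

for organ in range(1, 16):  # 15 abdominal organs per the abstract
    print(f"organ {organ:2d}: Dice = {dice(pred, gt, organ):.4f}")
```

Averaging this score over all cases and organs gives a single summary number comparable across models, which is how benchmarks of this kind typically rank submissions.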

 

DC Field | Value | Language
dc.contributor.author | Ji, Y | -
dc.contributor.author | Bai, H | -
dc.contributor.author | Yang, J | -
dc.contributor.author | Ge, C | -
dc.contributor.author | Zhu, Y | -
dc.contributor.author | Zhang, R | -
dc.contributor.author | Li, Z | -
dc.contributor.author | Zhang, L | -
dc.contributor.author | Ma, W | -
dc.contributor.author | Wang, X | -
dc.contributor.author | Luo, P | -
dc.date.accessioned | 2022-08-19T08:59:51Z | -
dc.date.available | 2022-08-19T08:59:51Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | 36th Neural Information Processing Systems (NeurIPS) Benchmark and Dataset Track (Hybrid), November 28-December 9, 2022 | -
dc.identifier.uri | http://hdl.handle.net/10722/315543 | -
dc.description.abstract | Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale benchmark from diverse clinical scenarios. Constrained by the high cost of collecting and labeling 3D medical data, most deep learning models to date are driven by datasets with a limited number of organs of interest or samples, which limits the power of modern deep models and makes it difficult to provide a fully comprehensive and fair assessment of various methods. To mitigate these limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test-bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research. | -
dc.language | eng | -
dc.publisher | IEEE. | -
dc.rights | Copyright © IEEE. | -
dc.title | AMOS: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation | -
dc.type | Conference_Paper | -
dc.identifier.email | Luo, P: pluo@hku.hk | -
dc.identifier.authority | Luo, P=rp02575 | -
dc.identifier.doi | 10.48550/arXiv.2206.08023 | -
dc.identifier.hkuros | 335570 | -
dc.publisher.place | United States | -
