File Download: There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1007/978-3-030-87240-3_29
- Scopus: eid_2-s2.0-85116425919
- Web of Science: WOS:000712025900029
Conference Paper: Multi-frame Collaboration for Effective Endoscopic Video Polyp Detection via Spatial-Temporal Feature Transformation
Title | Multi-frame Collaboration for Effective Endoscopic Video Polyp Detection via Spatial-Temporal Feature Transformation |
---|---|
Authors | Wu, L; Hu, Z; Ji, Y; Zhang, S; Luo, P |
Issue Date | 2021 |
Publisher | Springer |
Citation | Wu, L ... et al. Multi-frame Collaboration for Effective Endoscopic Video Polyp Detection via Spatial-Temporal Feature Transformation. In de Bruijne, M ... et al. (eds), The 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021), Virtual Conference, Strasbourg, France, 27 September - 1 October 2021. Proceedings, Part V, p. 302-312. Cham: Springer, 2021 |
Abstract | Precise localization of polyps is crucial for early cancer screening in gastrointestinal endoscopy. Videos captured during endoscopy bring both richer contextual information and more challenges than still images. The camera-moving situation, instead of the common camera-fixed-object-moving one, leads to significant background variation between frames. Severe internal artifacts (e.g. water flow in the human body, specular reflection by tissues) can make the quality of adjacent frames vary considerably. These factors hinder a video-based model from effectively aggregating features from neighboring frames and giving better predictions. In this paper, we present Spatial-Temporal Feature Transformation (STFT), a multi-frame collaborative framework to address these issues. Spatially, STFT mitigates inter-frame variations in the camera-moving situation with feature alignment by proposal-guided deformable convolutions. Temporally, STFT proposes a channel-aware attention module to simultaneously estimate the quality and correlation of adjacent frames for adaptive feature aggregation. Empirical studies and superior results demonstrate the effectiveness and stability of our method. For example, STFT improves the still-image baseline FCOS by 10.6% and 20.6% on the comprehensive F1-score of the polyp localization task on the CVC-Clinic and ASUMayo datasets, respectively, and outperforms the state-of-the-art video-based method by 3.6% and 8.0%, respectively. Code is available at https://github.com/lingyunwu14/STFT. |
Description | Poster Session We-S1: Topic: Computer Aided Diagnosis - no. 906 |
Persistent Identifier | http://hdl.handle.net/10722/301314 |
ISBN | 9783030872397 |
ISI Accession Number ID | WOS:000712025900029 |
Series/Report no. | Lecture Notes in Computer Science (LNCS) ; v. 12905 |
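The abstract's channel-aware attention module, which weights adjacent frames per channel before aggregation, can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation (see the linked repository for that); the per-channel cosine-similarity weighting here is one plausible reading of "estimate the quality and correlation of adjacent frames", and the function name and shapes are hypothetical.

```python
import numpy as np

def channel_aware_aggregate(ref, neighbors):
    """Aggregate neighbor-frame features into the reference frame.

    ref:       (C, H, W) feature map of the reference frame
    neighbors: list of (C, H, W) feature maps from adjacent frames,
               assumed already spatially aligned to the reference
               (the paper does this with proposal-guided deformable
               convolutions).

    For each channel, the similarity between the reference and each
    frame is turned into a softmax weight over the temporal axis, so
    low-quality or poorly correlated frames contribute less, channel
    by channel.
    """
    frames = np.stack([ref] + list(neighbors))                   # (T, C, H, W)
    flat = frames.reshape(frames.shape[0], frames.shape[1], -1)  # (T, C, HW)
    ref_flat = flat[0]                                           # (C, HW)

    # Per-frame, per-channel cosine similarity with the reference.
    num = (flat * ref_flat).sum(-1)                              # (T, C)
    den = np.linalg.norm(flat, axis=-1) * np.linalg.norm(ref_flat, axis=-1) + 1e-8
    sim = num / den                                              # (T, C)

    # Softmax over the temporal axis yields per-channel weights.
    w = np.exp(sim - sim.max(0, keepdims=True))
    w /= w.sum(0, keepdims=True)                                 # (T, C)

    # Weighted sum of aligned frames, broadcast over spatial dims.
    return (frames * w[:, :, None, None]).sum(0)                 # (C, H, W)
```

If all frames are identical, each gets weight 1/T per channel and the output equals the reference, which is the expected degenerate behavior of such an attention scheme.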
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wu, L | - |
dc.contributor.author | Hu, Z | - |
dc.contributor.author | Ji, Y | - |
dc.contributor.author | Zhang, S | - |
dc.contributor.author | Luo, P | - |
dc.date.accessioned | 2021-07-27T08:09:17Z | - |
dc.date.available | 2021-07-27T08:09:17Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Wu, L ... et al. Multi-frame Collaboration for Effective Endoscopic Video Polyp Detection via Spatial-Temporal Feature Transformation. In de Bruijne, M ... et al. (eds), The 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021), Virtual Conference, Strasbourg, France, 27 September - 1 October 2021. Proceedings, Part V, p. 302-312. Cham: Springer, 2021 | - |
dc.identifier.isbn | 9783030872397 | - |
dc.identifier.uri | http://hdl.handle.net/10722/301314 | - |
dc.description | Poster Session We-S1: Topic: Computer Aided Diagnosis - no. 906 | - |
dc.description.abstract | Precise localization of polyps is crucial for early cancer screening in gastrointestinal endoscopy. Videos captured during endoscopy bring both richer contextual information and more challenges than still images. The camera-moving situation, instead of the common camera-fixed-object-moving one, leads to significant background variation between frames. Severe internal artifacts (e.g. water flow in the human body, specular reflection by tissues) can make the quality of adjacent frames vary considerably. These factors hinder a video-based model from effectively aggregating features from neighboring frames and giving better predictions. In this paper, we present Spatial-Temporal Feature Transformation (STFT), a multi-frame collaborative framework to address these issues. Spatially, STFT mitigates inter-frame variations in the camera-moving situation with feature alignment by proposal-guided deformable convolutions. Temporally, STFT proposes a channel-aware attention module to simultaneously estimate the quality and correlation of adjacent frames for adaptive feature aggregation. Empirical studies and superior results demonstrate the effectiveness and stability of our method. For example, STFT improves the still-image baseline FCOS by 10.6% and 20.6% on the comprehensive F1-score of the polyp localization task on the CVC-Clinic and ASUMayo datasets, respectively, and outperforms the state-of-the-art video-based method by 3.6% and 8.0%, respectively. Code is available at https://github.com/lingyunwu14/STFT. | -
dc.language | eng | - |
dc.publisher | Springer | -
dc.relation.ispartof | International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2021 | - |
dc.relation.ispartofseries | Lecture Notes in Computer Science (LNCS) ; v. 12905 | - |
dc.title | Multi-frame Collaboration for Effective Endoscopic Video Polyp Detection via Spatial-Temporal Feature Transformation | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Luo, P: pluo@hku.hk | - |
dc.identifier.authority | Luo, P=rp02575 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/978-3-030-87240-3_29 | - |
dc.identifier.scopus | eid_2-s2.0-85116425919 | - |
dc.identifier.hkuros | 323753 | - |
dc.identifier.spage | 302 | - |
dc.identifier.epage | 312 | - |
dc.identifier.isi | WOS:000712025900029 | - |
dc.publisher.place | Cham | - |
dc.identifier.eisbn | 9783030872403 | - |