Article: SPFTN: A Joint Learning Framework for Localizing and Segmenting Objects in Weakly Labeled Videos

Title: SPFTN: A Joint Learning Framework for Localizing and Segmenting Objects in Weakly Labeled Videos
Authors: Zhang, Dingwen; Han, Junwei; Yang, Le; Xu, Dong
Keywords: deep neural networks
object segmentation
self-paced learning
video object localization
Weakly labeled videos
Issue Date: 2020
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, v. 42, n. 2, p. 475-489
Abstract: Object localization and segmentation in weakly labeled videos are two interesting yet challenging tasks. Models built for simultaneous object localization and segmentation have been explored in the conventional fully supervised learning scenario to boost the performance of each task. However, none of the existing works has attempted to jointly learn object localization and segmentation models under weak supervision. To this end, we propose a joint learning framework called Self-Paced Fine-Tuning Network (SPFTN) for localizing and segmenting objects in weakly labelled videos. Learning the deep model jointly for object localization and segmentation under weak supervision is very challenging as the learning process of each single task would face serious ambiguity issue due to the lack of bounding-box or pixel-level supervision. To address this problem, our proposed deep SPFTN model is carefully designed with a novel multi-task self-paced learning objective, which leverages the task-specific prior knowledge and the knowledge that has been already captured to infer the confident training samples for each task. By aggregating the confident knowledge from each single task to mine reliable patterns and learning deep feature representation for both tasks, the proposed learning framework can address the ambiguity issue under weak supervision with simple optimization. Comprehensive experiments on the large-scale YouTube-Objects and DAVIS datasets demonstrate that the proposed approach achieves superior performance when compared with other state-of-the-art methods and the baseline networks/models.
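The core mechanism named in the abstract, self-paced selection of confident training samples, can be illustrated with a generic sketch. The snippet below is a minimal toy example of the standard self-paced learning loop (alternately selecting low-loss samples under an age parameter and refitting on them); it is not the paper's multi-task SPFTN objective, and the toy 1-D "model", the loss, and the lambda growth schedule are assumptions made purely for illustration.

```python
# Minimal sketch of the generic self-paced learning (SPL) loop that SPFTN builds on:
# (1) select "easy" samples whose current loss is below an age parameter lambda,
# (2) refit the model on the selected samples, then grow lambda so that harder
# samples are admitted in later rounds. All concrete choices here are assumptions.
import numpy as np

def spl_select(losses, lam):
    """Binary self-paced weights: v_i = 1 if loss_i < lambda, else 0."""
    return (losses < lam).astype(float)

def fit_weighted_mean(targets, weights):
    """Toy 'model': a weighted mean of 1-D targets (stands in for network training)."""
    if weights.sum() == 0:
        return float(targets.mean())
    return float(np.average(targets, weights=weights))

rng = np.random.default_rng(0)
targets = np.concatenate([rng.normal(0.0, 0.2, 80),   # reliable samples
                          rng.normal(3.0, 0.2, 20)])  # ambiguous / noisy samples
model = float(targets.mean())   # initialize from all samples
lam, growth = 0.5, 1.5          # initial age parameter and its growth factor (assumed)

for it in range(5):
    losses = (targets - model) ** 2        # per-sample squared loss
    v = spl_select(losses, lam)            # keep currently confident samples
    model = fit_weighted_mean(targets, v)  # refit on the confident subset
    lam *= growth                          # admit harder samples next round
    print(f"iter {it}: selected {int(v.sum())}/{len(v)} samples, model = {model:.3f}")
```

In SPFTN, per the abstract, this kind of alternation is applied jointly to the localization and segmentation tasks, with task-specific prior knowledge and the knowledge already captured by the network guiding which samples are treated as confident for each task.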
Persistent Identifier: http://hdl.handle.net/10722/321821
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158
ISI Accession Number ID: WOS:000508386100017

 

DC Field: Value
dc.contributor.author: Zhang, Dingwen
dc.contributor.author: Han, Junwei
dc.contributor.author: Yang, Le
dc.contributor.author: Xu, Dong
dc.date.accessioned: 2022-11-03T02:21:40Z
dc.date.available: 2022-11-03T02:21:40Z
dc.date.issued: 2020
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, v. 42, n. 2, p. 475-489
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/321821
dc.description.abstract: Object localization and segmentation in weakly labeled videos are two interesting yet challenging tasks. Models built for simultaneous object localization and segmentation have been explored in the conventional fully supervised learning scenario to boost the performance of each task. However, none of the existing works has attempted to jointly learn object localization and segmentation models under weak supervision. To this end, we propose a joint learning framework called Self-Paced Fine-Tuning Network (SPFTN) for localizing and segmenting objects in weakly labelled videos. Learning the deep model jointly for object localization and segmentation under weak supervision is very challenging as the learning process of each single task would face serious ambiguity issue due to the lack of bounding-box or pixel-level supervision. To address this problem, our proposed deep SPFTN model is carefully designed with a novel multi-task self-paced learning objective, which leverages the task-specific prior knowledge and the knowledge that has been already captured to infer the confident training samples for each task. By aggregating the confident knowledge from each single task to mine reliable patterns and learning deep feature representation for both tasks, the proposed learning framework can address the ambiguity issue under weak supervision with simple optimization. Comprehensive experiments on the large-scale YouTube-Objects and DAVIS datasets demonstrate that the proposed approach achieves superior performance when compared with other state-of-the-art methods and the baseline networks/models.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.subject: deep neural networks
dc.subject: object segmentation
dc.subject: self-paced learning
dc.subject: video object localization
dc.subject: Weakly labeled videos
dc.title: SPFTN: A Joint Learning Framework for Localizing and Segmenting Objects in Weakly Labeled Videos
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPAMI.2018.2881114
dc.identifier.pmid: 30442600
dc.identifier.scopus: eid_2-s2.0-85056580958
dc.identifier.volume: 42
dc.identifier.issue: 2
dc.identifier.spage: 475
dc.identifier.epage: 489
dc.identifier.eissn: 1939-3539
dc.identifier.isi: WOS:000508386100017

Export via OAI-PMH Interface in XML Formats


OR


Export to Other Non-XML Formats