
Article: Textual query of personal photos facilitated by large-scale web data

Title: Textual query of personal photos facilitated by large-scale web data
Authors: Liu, Yiming; Xu, Dong; Tsang, Ivor W.; Luo, Jiebo
Keywords: cross-domain learning; large-scale Web data; textual query-based consumer photo retrieval
Issue Date: 2011
Citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, v. 33, n. 5, p. 1022-1036
Abstract: The rapid popularization of digital cameras and mobile phone cameras has led to an explosive growth of personal photo collections by consumers. In this paper, we present a real-time textual query-based personal photo retrieval system by leveraging millions of Web images and their associated rich textual descriptions (captions, categories, etc.). After a user provides a textual query (e.g., "water"), our system exploits the inverted file to automatically find the positive Web images that are related to the textual query "water" as well as the negative Web images that are irrelevant to the textual query. Based on these automatically retrieved relevant and irrelevant Web images, we employ three simple but effective classification methods, k-Nearest Neighbor (kNN), decision stumps, and linear SVM, to rank personal photos. To further improve the photo retrieval performance, we propose two relevance feedback methods via cross-domain learning, which effectively utilize both the Web images and personal images. In particular, our proposed cross-domain learning methods can learn robust classifiers with only a very limited amount of labeled personal photos from the user by leveraging the prelearned linear SVM classifiers in real time. We further propose an incremental cross-domain learning method in order to significantly accelerate the relevance feedback process on large consumer photo databases. Extensive experiments on two consumer photo data sets demonstrate the effectiveness and efficiency of our system, which is also inherently not limited by any predefined lexicon. © 2006 IEEE.
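To make the pipeline described in the abstract concrete, the following is a minimal, illustrative sketch, not the published system: it assumes a toy collection of Web images with text tags and random feature vectors, builds an inverted file mapping each tag to image indices, and, for a textual query, trains a linear SVM on the automatically selected relevant and irrelevant Web images to rank personal photos by decision score. All names, sizes, and features below are illustrative assumptions; the kNN, decision-stump, and cross-domain relevance-feedback components of the paper are omitted.

```python
# Illustrative sketch only (not the authors' implementation): rank personal photos
# for a textual query by training a linear SVM on Web images selected via an
# inverted file, as outlined in the abstract. Features, tags, and sizes are toy data.
import random
from collections import defaultdict

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy "Web image" collection: each image has a feature vector and a couple of tags.
N_WEB, N_PERSONAL, DIM = 500, 50, 64              # assumed sizes for this sketch
web_features = rng.normal(size=(N_WEB, DIM))
vocab = ["water", "beach", "city", "forest", "people"]
web_tags = [random.Random(i).sample(vocab, 2) for i in range(N_WEB)]

# Inverted file: tag -> indices of Web images annotated with that tag.
inverted_file = defaultdict(list)
for idx, tags in enumerate(web_tags):
    for tag in tags:
        inverted_file[tag].append(idx)

def rank_personal_photos(query, personal_features, n_neg=200):
    """Rank personal photos for a textual query using Web images as training data."""
    pos_idx = inverted_file.get(query, [])                       # relevant Web images
    neg_pool = [i for i in range(N_WEB) if query not in web_tags[i]]
    neg_idx = random.Random(1).sample(neg_pool, min(n_neg, len(neg_pool)))

    X = np.vstack([web_features[pos_idx], web_features[neg_idx]])
    y = np.concatenate([np.ones(len(pos_idx)), np.zeros(len(neg_idx))])

    clf = LinearSVC(C=1.0).fit(X, y)                    # query-specific linear classifier
    scores = clf.decision_function(personal_features)   # higher score = more relevant
    return np.argsort(-scores)                          # photo indices, best match first

personal_photos = rng.normal(size=(N_PERSONAL, DIM))
print(rank_personal_photos("water", personal_photos)[:10])
```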
Persistent Identifier: http://hdl.handle.net/10722/321437
ISSN: 0162-8828
2023 Impact Factor: 20.8
2023 SCImago Journal Rankings: 6.158
ISI Accession Number ID: WOS:000288677800012

 

DC Field: Value
dc.contributor.author: Liu, Yiming
dc.contributor.author: Xu, Dong
dc.contributor.author: Tsang, Ivor W.
dc.contributor.author: Luo, Jiebo
dc.date.accessioned: 2022-11-03T02:18:55Z
dc.date.available: 2022-11-03T02:18:55Z
dc.date.issued: 2011
dc.identifier.citation: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, v. 33, n. 5, p. 1022-1036
dc.identifier.issn: 0162-8828
dc.identifier.uri: http://hdl.handle.net/10722/321437
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Pattern Analysis and Machine Intelligence
dc.subject: cross-domain learning
dc.subject: large-scale Web data
dc.subject: Textual query-based consumer photo retrieval
dc.title: Textual query of personal photos facilitated by large-scale web data
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TPAMI.2010.142
dc.identifier.scopus: eid_2-s2.0-79953043001
dc.identifier.volume: 33
dc.identifier.issue: 5
dc.identifier.spage: 1022
dc.identifier.epage: 1036
dc.identifier.isi: WOS:000288677800012
