Article: Textual query of personal photos facilitated by large-scale web data
Title | Textual query of personal photos facilitated by large-scale web data |
---|---|
Authors | Liu, Yiming; Xu, Dong; Tsang, Ivor W.; Luo, Jiebo |
Keywords | cross-domain learning; large-scale Web data; Textual query-based consumer photo retrieval |
Issue Date | 2011 |
Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, v. 33, n. 5, p. 1022-1036 |
Abstract | The rapid popularization of digital cameras and mobile phone cameras has led to an explosive growth of personal photo collections by consumers. In this paper, we present a real-time textual query-based personal photo retrieval system by leveraging millions of Web images and their associated rich textual descriptions (captions, categories, etc.). After a user provides a textual query (e.g., "water"), our system exploits the inverted file to automatically find the positive Web images that are related to the textual query "water" as well as the negative Web images that are irrelevant to the textual query. Based on these automatically retrieved relevant and irrelevant Web images, we employ three simple but effective classification methods, k-Nearest Neighbor (kNN), decision stumps, and linear SVM, to rank personal photos. To further improve the photo retrieval performance, we propose two relevance feedback methods via cross-domain learning, which effectively utilize both the Web images and personal images. In particular, our proposed cross-domain learning methods can learn robust classifiers with only a very limited amount of labeled personal photos from the user by leveraging the prelearned linear SVM classifiers in real time. We further propose an incremental cross-domain learning method in order to significantly accelerate the relevance feedback process on large consumer photo databases. Extensive experiments on two consumer photo data sets demonstrate the effectiveness and efficiency of our system, which is also inherently not limited by any predefined lexicon. © 2011 IEEE. |
Persistent Identifier | http://hdl.handle.net/10722/321437 |
ISSN | 0162-8828 (2023 Impact Factor: 20.8; 2023 SCImago Journal Rankings: 6.158) |
ISI Accession Number ID | WOS:000288677800012 |
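The pipeline the abstract describes can be illustrated with a toy sketch: an inverted file maps a query word to the Web images whose captions contain it (positives), the remaining Web images serve as negatives, and personal photos are ranked by a kNN score against both sets. The captions, feature vectors, and scoring details below are purely illustrative assumptions, not the authors' actual data or implementation.

```python
import math
from collections import defaultdict

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy Web image collection: captions plus (hypothetical) 2-D features.
web_captions = {
    0: "water lake reflection",
    1: "ocean water waves",
    2: "desert sand dunes",
    3: "city skyline night",
}
web_features = {
    0: [0.9, 0.1], 1: [0.8, 0.2], 2: [0.1, 0.9], 3: [0.2, 0.8],
}

# Inverted file: word -> ids of Web images whose captions contain it.
inverted = defaultdict(set)
for img_id, caption in web_captions.items():
    for word in caption.split():
        inverted[word].add(img_id)

def rank_photos(query, photos, k=2):
    """Rank personal photos by kNN score: similarity to the query's
    positive Web images minus similarity to the negative ones."""
    pos = inverted.get(query, set())
    neg = set(web_features) - pos  # Web images unrelated to the query

    def knn_score(feat, ids):
        sims = sorted((cosine(feat, web_features[i]) for i in ids),
                      reverse=True)[:k]
        return sum(sims) / len(sims) if sims else 0.0

    scored = [(knn_score(f, pos) - knn_score(f, neg), name)
              for name, f in photos.items()]
    return [name for _, name in sorted(scored, reverse=True)]

personal = {"beach.jpg": [0.85, 0.15], "dune.jpg": [0.15, 0.85]}
print(rank_photos("water", personal))  # ['beach.jpg', 'dune.jpg']
```

The inverted file makes the positive/negative retrieval step a constant-time lookup regardless of collection size, which is what allows the full system to respond to arbitrary queries in real time without a predefined lexicon.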
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Yiming | - |
dc.contributor.author | Xu, Dong | - |
dc.contributor.author | Tsang, Ivor W. | - |
dc.contributor.author | Luo, Jiebo | - |
dc.date.accessioned | 2022-11-03T02:18:55Z | - |
dc.date.available | 2022-11-03T02:18:55Z | - |
dc.date.issued | 2011 | - |
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, v. 33, n. 5, p. 1022-1036 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321437 | - |
dc.description.abstract | The rapid popularization of digital cameras and mobile phone cameras has led to an explosive growth of personal photo collections by consumers. In this paper, we present a real-time textual query-based personal photo retrieval system by leveraging millions of Web images and their associated rich textual descriptions (captions, categories, etc.). After a user provides a textual query (e.g., "water"), our system exploits the inverted file to automatically find the positive Web images that are related to the textual query "water" as well as the negative Web images that are irrelevant to the textual query. Based on these automatically retrieved relevant and irrelevant Web images, we employ three simple but effective classification methods, k-Nearest Neighbor (kNN), decision stumps, and linear SVM, to rank personal photos. To further improve the photo retrieval performance, we propose two relevance feedback methods via cross-domain learning, which effectively utilize both the Web images and personal images. In particular, our proposed cross-domain learning methods can learn robust classifiers with only a very limited amount of labeled personal photos from the user by leveraging the prelearned linear SVM classifiers in real time. We further propose an incremental cross-domain learning method in order to significantly accelerate the relevance feedback process on large consumer photo databases. Extensive experiments on two consumer photo data sets demonstrate the effectiveness and efficiency of our system, which is also inherently not limited by any predefined lexicon. © 2011 IEEE. | - |
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.subject | cross-domain learning | - |
dc.subject | large-scale Web data | - |
dc.subject | Textual query-based consumer photo retrieval | - |
dc.title | Textual query of personal photos facilitated by large-scale web data | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TPAMI.2010.142 | - |
dc.identifier.scopus | eid_2-s2.0-79953043001 | - |
dc.identifier.volume | 33 | - |
dc.identifier.issue | 5 | - |
dc.identifier.spage | 1022 | - |
dc.identifier.epage | 1036 | - |
dc.identifier.isi | WOS:000288677800012 | - |