There are no files associated with this item.
Links for fulltext (May Require Subscription):
- Publisher Website: 10.1145/1631135.1631138
- Scopus: eid_2-s2.0-72249106177
Citations:
- Scopus: 0

Appears in Collections:
Conference Paper: Understanding tag-cloud and visual features for better annotation of concepts in NUS-WIDE dataset
Title | Understanding tag-cloud and visual features for better annotation of concepts in NUS-WIDE dataset |
---|---|
Authors | Gao, Shenghua; Chia, Liang Tien; Cheng, Xiangang |
Keywords | Concept prediction; Large-scale data set; Tag; Visual feature |
Issue Date | 2009 |
Citation | 1st International Workshop on Web-Scale Multimedia Corpus, WSMC'09, Co-located with the 2009 ACM International Conference on Multimedia, MM'09, 2009, p. 9-16 |
Abstract | Large-scale dataset construction requires a large amount of well-labeled ground truth. For the NUS-WIDE dataset, a less labor-intensive annotation process was used, and this paper focuses on improving that semi-manual annotation method. For the NUS-WIDE dataset, improving the average accuracy of the top retrievals for individual concepts will effectively improve the results of the semi-manual annotation method. For web images, both tags and visual features play important roles in predicting the concept of an image. For visual features, we adopt an adaptive feature selection method to construct a middle-level feature by concatenating the k-NN results for each type of visual feature. This middle-level feature is more robust than the average combination of single features, and we show that it achieves good performance for concept prediction. For the tag cloud, we construct a concept-tag co-occurrence matrix. The co-occurrence information is used, via Bayes' theorem, to compute the probability that an image with a given set of annotated tags belongs to a certain concept. By examining the WordNet taxonomy level, which indicates whether a concept is generic or specific, and exploring the tag cloud distribution, we propose a selection method that uses either the tag cloud or the visual features to enhance concept annotation performance. In this way, the advantages of both tags and visual features are exploited. Experimental results show that our method achieves very high average precision on the NUS-WIDE dataset, which greatly facilitates the construction of large-scale web image datasets. Copyright 2009 ACM. |
Persistent Identifier | http://hdl.handle.net/10722/345051 |
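The tag-cloud step described in the abstract — combining a concept-tag co-occurrence matrix with Bayes' theorem to score concepts for a tagged image — can be sketched as follows. The matrix values, the add-one smoothing, and the tag-independence assumption are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical concept-tag co-occurrence counts: rows = concepts, cols = tags.
# Columns correspond to the tags (dog, beach, sunset); values are made up.
cooccurrence = np.array([
    [30, 2, 1],   # concept "animal"
    [1, 25, 20],  # concept "sunset"
], dtype=float)
concept_prior = cooccurrence.sum(axis=1) / cooccurrence.sum()

def concept_posterior(tag_indices):
    """P(concept | tags) via Bayes' theorem, assuming tags are
    conditionally independent given the concept (naive-Bayes style)."""
    counts = cooccurrence + 1.0  # add-one smoothing (assumption)
    p_tag_given_concept = counts / counts.sum(axis=1, keepdims=True)
    likelihood = p_tag_given_concept[:, tag_indices].prod(axis=1)
    unnorm = concept_prior * likelihood
    return unnorm / unnorm.sum()

post = concept_posterior([0])  # image tagged "dog" → "animal" dominates
print(post)
```

The key design point is that the posterior needs only the co-occurrence counts gathered from already-tagged web images, so no extra manual labeling is required for the tag-based predictor.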
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gao, Shenghua | - |
dc.contributor.author | Chia, Liang Tien | - |
dc.contributor.author | Cheng, Xiangang | - |
dc.date.accessioned | 2024-08-15T09:24:53Z | - |
dc.date.available | 2024-08-15T09:24:53Z | - |
dc.date.issued | 2009 | - |
dc.identifier.citation | 1st International Workshop on Web-Scale Multimedia Corpus, WSMC'09, Co-located with the 2009 ACM International Conference on Multimedia, MM'09, 2009, p. 9-16 | - |
dc.identifier.uri | http://hdl.handle.net/10722/345051 | - |
dc.description.abstract | Large-scale dataset construction requires a large amount of well-labeled ground truth. For the NUS-WIDE dataset, a less labor-intensive annotation process was used, and this paper focuses on improving that semi-manual annotation method. For the NUS-WIDE dataset, improving the average accuracy of the top retrievals for individual concepts will effectively improve the results of the semi-manual annotation method. For web images, both tags and visual features play important roles in predicting the concept of an image. For visual features, we adopt an adaptive feature selection method to construct a middle-level feature by concatenating the k-NN results for each type of visual feature. This middle-level feature is more robust than the average combination of single features, and we show that it achieves good performance for concept prediction. For the tag cloud, we construct a concept-tag co-occurrence matrix. The co-occurrence information is used, via Bayes' theorem, to compute the probability that an image with a given set of annotated tags belongs to a certain concept. By examining the WordNet taxonomy level, which indicates whether a concept is generic or specific, and exploring the tag cloud distribution, we propose a selection method that uses either the tag cloud or the visual features to enhance concept annotation performance. In this way, the advantages of both tags and visual features are exploited. Experimental results show that our method achieves very high average precision on the NUS-WIDE dataset, which greatly facilitates the construction of large-scale web image datasets. Copyright 2009 ACM. | - |
dc.language | eng | - |
dc.relation.ispartof | 1st International Workshop on Web-Scale Multimedia Corpus, WSMC'09, Co-located with the 2009 ACM International Conference on Multimedia, MM'09 | - |
dc.subject | Concept prediction | - |
dc.subject | Large-scale data set | - |
dc.subject | Tag | - |
dc.subject | Visual feature | - |
dc.title | Understanding tag-cloud and visual features for better annotation of concepts in NUS-WIDE dataset | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/1631135.1631138 | - |
dc.identifier.scopus | eid_2-s2.0-72249106177 | - |
dc.identifier.spage | 9 | - |
dc.identifier.epage | 16 | - |
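The visual-feature side of the abstract — a middle-level feature built by concatenating k-NN results for each type of visual feature — can be sketched as below. The feature types, dimensions, and the choice of neighbour labels as the k-NN output are assumptions for illustration; the paper's adaptive selection step is not reproduced here.

```python
import numpy as np

# Hypothetical training data: two visual feature types (e.g., a color
# histogram and an edge histogram) for 20 images with binary concept labels.
rng = np.random.default_rng(0)
train_feats = {"color": rng.random((20, 8)), "edge": rng.random((20, 6))}
train_labels = rng.integers(0, 2, 20)

def middle_level_feature(query, k=3):
    """Concatenate, per feature type, the labels of the query's k nearest
    training neighbours into one middle-level feature vector (one plausible
    reading of the k-NN output used; an assumption, not the paper's spec)."""
    parts = []
    for name, feats in train_feats.items():
        dists = np.linalg.norm(feats - query[name], axis=1)  # Euclidean k-NN
        nearest = np.argsort(dists)[:k]
        parts.append(train_labels[nearest].astype(float))
    return np.concatenate(parts)

query = {"color": rng.random(8), "edge": rng.random(6)}
mid = middle_level_feature(query)
print(mid.shape)  # (6,) — k neighbours × 2 feature types
```

Because each feature type contributes its own block of neighbour evidence, a downstream classifier can weight feature types unevenly, which is what makes the concatenation more robust than simply averaging per-feature scores.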