Article: A discriminative and sparse topic model for image classification and annotation

Title: A discriminative and sparse topic model for image classification and annotation
Authors: Yang, Liu; Jing, Liping; Ng, Michael K.; Yu, Jian
Keywords: Image classification; Discriminative topic; Graphical model; Image annotation; Sparsity
Issue Date: 2016
Citation: Image and Vision Computing, 2016, v. 51, p. 22-35
Abstract: © 2016 Elsevier B.V. All rights reserved. Image classification assigns a category to an image, while image annotation describes the individual components of an image using annotation terms; the two learning tasks are strongly related. The main contribution of this paper is a new discriminative and sparse topic model (DSTM) for image classification and annotation that combines visual, annotation and label information from a set of training images. The essential features that distinguish DSTM from existing approaches are that (i) the label information is enforced in the generation of both visual words and annotation terms, so that each generative latent topic corresponds to a category; and (ii) a zero-mean Laplace distribution is employed to give a sparse representation of images in visual words and annotation terms, so that relevant words and terms are associated with latent topics. Experimental results demonstrate that the proposed method is discriminative in both classification and annotation, and that it outperforms the compared methods (sLDA-ann, abc-corr-LDA, SupDocNADE, SAGE and MedSTC) on the LabelMe, UIUC, NUS-WIDE and PascalVOC07 image sets.
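
As an informal illustration of point (ii), the Python sketch below is not taken from the paper; the array sizes, Laplace scale and near-zero threshold are arbitrary assumptions. It contrasts a zero-mean Laplace prior with a Gaussian prior of equal variance to show why the Laplace choice concentrates more coefficients near zero and therefore yields a sparser topic representation.

    import numpy as np

    # Toy comparison only: a zero-mean Laplace prior vs. a Gaussian prior of
    # equal variance.  More Laplace draws land near zero, which is the
    # sparsity effect the abstract refers to.
    rng = np.random.default_rng(0)
    n_images, n_topics = 1000, 50          # hypothetical corpus size
    b = 1.0                                # Laplace scale; variance = 2 * b**2

    laplace = rng.laplace(0.0, b, size=(n_images, n_topics))
    gauss = rng.normal(0.0, np.sqrt(2.0) * b, size=(n_images, n_topics))

    threshold = 0.1                        # treat |coefficient| < 0.1 as inactive
    print("near-zero fraction, Laplace :", np.mean(np.abs(laplace) < threshold))
    print("near-zero fraction, Gaussian:", np.mean(np.abs(gauss) < threshold))

Run as-is, this should report roughly 0.10 of the Laplace coefficients near zero versus roughly 0.06 of the Gaussian ones.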
Persistent Identifier: http://hdl.handle.net/10722/276504
ISSN: 0262-8856
2021 Impact Factor: 3.860
2020 SCImago Journal Rankings: 0.570
ISI Accession Number ID: WOS:000378455900003

 

Dublin Core metadata (field: value)
dc.contributor.author: Yang, Liu
dc.contributor.author: Jing, Liping
dc.contributor.author: Ng, Michael K.
dc.contributor.author: Yu, Jian
dc.date.accessioned: 2019-09-18T08:33:49Z
dc.date.available: 2019-09-18T08:33:49Z
dc.date.issued: 2016
dc.identifier.citation: Image and Vision Computing, 2016, v. 51, p. 22-35
dc.identifier.issn: 0262-8856
dc.identifier.uri: http://hdl.handle.net/10722/276504
dc.description.abstract: © 2016 Elsevier B.V. All rights reserved. Image classification assigns a category to an image, while image annotation describes the individual components of an image using annotation terms; the two learning tasks are strongly related. The main contribution of this paper is a new discriminative and sparse topic model (DSTM) for image classification and annotation that combines visual, annotation and label information from a set of training images. The essential features that distinguish DSTM from existing approaches are that (i) the label information is enforced in the generation of both visual words and annotation terms, so that each generative latent topic corresponds to a category; and (ii) a zero-mean Laplace distribution is employed to give a sparse representation of images in visual words and annotation terms, so that relevant words and terms are associated with latent topics. Experimental results demonstrate that the proposed method is discriminative in both classification and annotation, and that it outperforms the compared methods (sLDA-ann, abc-corr-LDA, SupDocNADE, SAGE and MedSTC) on the LabelMe, UIUC, NUS-WIDE and PascalVOC07 image sets.
dc.language: eng
dc.relation.ispartof: Image and Vision Computing
dc.subject: Image classification
dc.subject: Discriminative topic
dc.subject: Graphical model
dc.subject: Image annotation
dc.subject: Sparsity
dc.title: A discriminative and sparse topic model for image classification and annotation
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1016/j.imavis.2016.03.005
dc.identifier.scopus: eid_2-s2.0-84964253414
dc.identifier.volume: 51
dc.identifier.spage: 22
dc.identifier.epage: 35
dc.identifier.isi: WOS:000378455900003
dc.identifier.issnl: 0262-8856
