Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1109/TPAMI.2007.250598
- Scopus: eid_2-s2.0-33947194180
- PMID: 17108382
- Web of Science: WOS:000241988300004
Article: Graph embedding and extensions: A general framework for dimensionality reduction
Title | Graph embedding and extensions: A general framework for dimensionality reduction
---|---
Authors | Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong Jiang; Yang, Qiang; Lin, Steve
Keywords | Dimensionality reduction; Graph embedding framework; Manifold learning; Subspace learning
Issue Date | 2007
Citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, v. 29, n. 1, p. 40-51
Abstract | Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called marginal Fisher analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional linear discriminant analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions. © 2007 IEEE.
Persistent Identifier | http://hdl.handle.net/10722/321319
ISSN | 0162-8828 (2023 Impact Factor: 20.8; 2023 SCImago Journal Rankings: 6.158)
ISI Accession Number | WOS:000241988300004
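The abstract frames each dimensionality reduction algorithm as an embedding of an intrinsic graph, constrained by scale normalization or a penalty graph. The direct graph embedding step amounts to a generalized eigenproblem on the graph Laplacian. The following is a minimal sketch of that step, assuming a NumPy/SciPy formulation; the function name and the toy path graph are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def direct_graph_embedding(W, B, dim=2):
    """Sketch of direct graph embedding: find coordinates y minimizing
    sum_ij ||y_i - y_j||^2 * W_ij subject to y^T B y = const, where W is
    the intrinsic graph's symmetric weight matrix and B is the constraint
    matrix (scale normalization, e.g. the degree matrix, or a
    penalty-graph Laplacian). Solved as the generalized eigenproblem
    L y = lambda * B y with L = D - W."""
    D = np.diag(W.sum(axis=1))
    L = D - W                  # Laplacian of the intrinsic graph
    vals, vecs = eigh(L, B)    # eigenvalues returned in ascending order
    # keep the smallest nontrivial eigenvectors (skip the constant one)
    return vecs[:, 1:dim + 1]

# toy example: a 4-node path graph, constrained by its degree matrix
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = direct_graph_embedding(W, B=np.diag(W.sum(axis=1)))
print(Y.shape)  # (4, 2)
```

The paper's linear, kernel, and tensor extensions then restrict y to the form X^T w, a kernel expansion, or a multilinear projection, respectively, which changes the matrices in the eigenproblem but not its overall shape.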
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yan, Shuicheng | - |
dc.contributor.author | Xu, Dong | - |
dc.contributor.author | Zhang, Benyu | - |
dc.contributor.author | Zhang, Hong Jiang | - |
dc.contributor.author | Yang, Qiang | - |
dc.contributor.author | Lin, Steve | - |
dc.date.accessioned | 2022-11-03T02:18:07Z | - |
dc.date.available | 2022-11-03T02:18:07Z | - |
dc.date.issued | 2007 | - |
dc.identifier.citation | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, v. 29, n. 1, p. 40-51 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321319 | - |
dc.description.abstract | Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called marginal Fisher analysis (MFA), in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional linear discriminant analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions. © 2007 IEEE. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.subject | Dimensionality reduction | - |
dc.subject | Graph embedding framework | - |
dc.subject | Manifold learning | - |
dc.subject | Subspace learning | - |
dc.title | Graph embedding and extensions: A general framework for dimensionality reduction | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TPAMI.2007.250598 | - |
dc.identifier.pmid | 17108382 | - |
dc.identifier.scopus | eid_2-s2.0-33947194180 | - |
dc.identifier.volume | 29 | - |
dc.identifier.issue | 1 | - |
dc.identifier.spage | 40 | - |
dc.identifier.epage | 51 | - |
dc.identifier.isi | WOS:000241988300004 | - |