Links for fulltext (may require subscription):
- Publisher Website: 10.1109/MMMC.2005.57
- Scopus: eid_2-s2.0-56149118158
Citations:
- Scopus: 0
Appears in Collections:
Conference Paper: Parallel image matrix compression for face recognition
Title | Parallel image matrix compression for face recognition |
---|---|
Authors | Xu, Dong; Yan, Shuicheng; Zhang, Lei; Li, Mingjing; Ma, Weiying; Liu, Zhengkai; Zhang, Hongjiang |
Issue Date | 2005 |
Citation | Proceedings of the 11th International Multimedia Modelling Conference, MMM 2005, 2005, p. 232-238 How to Cite? |
Abstract | The canonical face recognition algorithms Eigenface and Fisherface are both based on one-dimensional vector representation. However, with high feature dimensionality and small training data, face recognition often suffers from the curse of dimensionality and the small sample size problem. Recent research [4] shows that face recognition based on a direct 2D matrix representation, i.e. 2DPCA, obtains better performance than that based on the traditional vector representation. However, three questions are left unresolved in the 2DPCA algorithm: 1) what is the meaning of the eigenvalues and eigenvectors of the covariance matrix in 2DPCA; 2) why can 2DPCA outperform Eigenface; and 3) how can the dimension be further reduced after 2DPCA. In this paper, we analyze 2DPCA from a different view and prove that 2DPCA is actually a "localized" PCA with each row vector of an image treated as an object. With this explanation, we discover that the intrinsic reason 2DPCA can outperform Eigenface is that fewer feature dimensions and more samples are used in 2DPCA compared with Eigenface. To further reduce the dimension after 2DPCA, a two-stage strategy, namely parallel image matrix compression (PIMC), is proposed to compress the image matrix redundancy that exists among row vectors and column vectors. Extensive experimental results demonstrate that PIMC is superior to 2DPCA and Eigenface, and that PIMC+LDA outperforms 2DPCA+LDA and Fisherface. © 2005 IEEE. |
Persistent Identifier | http://hdl.handle.net/10722/321361 |
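The two-stage idea the abstract describes — first compressing redundancy among row vectors (the 2DPCA direction), then among column vectors — can be sketched in NumPy. This is an illustrative reconstruction from the abstract alone, not the authors' implementation; the function name `two_stage_compress` and the exact normalization constants are assumptions.

```python
import numpy as np

def two_stage_compress(images, p, q):
    """Sketch of parallel image matrix compression (PIMC) as described in
    the abstract: stage 1 compresses redundancy among row vectors (2DPCA),
    stage 2 among column vectors. Hypothetical reconstruction."""
    n, h, w = images.shape
    centered = images - images.mean(axis=0)

    # Stage 1: scatter of the row vectors of every image -> (w, w) matrix.
    # This is the 2DPCA covariance; normalizing by n*h makes explicit that
    # each of the n*h row vectors is treated as a PCA sample ("localized" PCA).
    G_row = sum(A.T @ A for A in centered) / (n * h)
    _, vecs = np.linalg.eigh(G_row)          # eigenvalues ascending
    W = vecs[:, ::-1][:, :q]                 # top-q right projection, (w, q)

    # Stage 2: scatter of the column vectors -> (h, h) matrix.
    G_col = sum(A @ A.T for A in centered) / (n * w)
    _, vecs2 = np.linalg.eigh(G_col)
    U = vecs2[:, ::-1][:, :p]                # top-p left projection, (h, p)

    # Compressed feature matrices U^T A W of shape (p, q): both row and
    # column redundancy are reduced, unlike plain 2DPCA which only shrinks w.
    feats = np.array([U.T @ A @ W for A in images])
    return feats, U, W

# Toy usage: 10 random 8x6 "images" compressed to 3x2 feature matrices.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(10, 8, 6))
feats, U, W = two_stage_compress(imgs, p=3, q=2)
```

The resulting (p, q) feature matrices could then be vectorized and fed to LDA, which would correspond to the PIMC+LDA pipeline the abstract compares against 2DPCA+LDA and Fisherface.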
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xu, Dong | - |
dc.contributor.author | Yan, Shuicheng | - |
dc.contributor.author | Zhang, Lei | - |
dc.contributor.author | Li, Mingjing | - |
dc.contributor.author | Ma, Weiying | - |
dc.contributor.author | Liu, Zhengkai | - |
dc.contributor.author | Zhang, Hongjiang | - |
dc.date.accessioned | 2022-11-03T02:18:23Z | - |
dc.date.available | 2022-11-03T02:18:23Z | - |
dc.date.issued | 2005 | - |
dc.identifier.citation | Proceedings of the 11th International Multimedia Modelling Conference, MMM 2005, 2005, p. 232-238 | - |
dc.identifier.uri | http://hdl.handle.net/10722/321361 | - |
dc.description.abstract | The canonical face recognition algorithms Eigenface and Fisherface are both based on one-dimensional vector representation. However, with high feature dimensionality and small training data, face recognition often suffers from the curse of dimensionality and the small sample size problem. Recent research [4] shows that face recognition based on a direct 2D matrix representation, i.e. 2DPCA, obtains better performance than that based on the traditional vector representation. However, three questions are left unresolved in the 2DPCA algorithm: 1) what is the meaning of the eigenvalues and eigenvectors of the covariance matrix in 2DPCA; 2) why can 2DPCA outperform Eigenface; and 3) how can the dimension be further reduced after 2DPCA. In this paper, we analyze 2DPCA from a different view and prove that 2DPCA is actually a "localized" PCA with each row vector of an image treated as an object. With this explanation, we discover that the intrinsic reason 2DPCA can outperform Eigenface is that fewer feature dimensions and more samples are used in 2DPCA compared with Eigenface. To further reduce the dimension after 2DPCA, a two-stage strategy, namely parallel image matrix compression (PIMC), is proposed to compress the image matrix redundancy that exists among row vectors and column vectors. Extensive experimental results demonstrate that PIMC is superior to 2DPCA and Eigenface, and that PIMC+LDA outperforms 2DPCA+LDA and Fisherface. © 2005 IEEE. | - |
dc.language | eng | - |
dc.relation.ispartof | Proceedings of the 11th International Multimedia Modelling Conference, MMM 2005 | - |
dc.title | Parallel image matrix compression for face recognition | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/MMMC.2005.57 | - |
dc.identifier.scopus | eid_2-s2.0-56149118158 | - |
dc.identifier.spage | 232 | - |
dc.identifier.epage | 238 | - |