Conference Paper: A multi-scale hybrid linear model for lossy image representation

Title: A multi-scale hybrid linear model for lossy image representation
Authors: Hong, Wei; Wright, John; Huang, Kun; Ma, Yi
Issue Date: 2005
Citation: Proceedings of the IEEE International Conference on Computer Vision, 2005, v. I, p. 764-771
Abstract: This paper introduces a simple and efficient representation for natural images. We partition an image into blocks and treat the blocks as vectors in a high-dimensional space. We then fit a piece-wise linear model (i.e. a union of affine subspaces) to the vectors at each down-sampling scale. We call this a multi-scale hybrid linear model of the image. The hybrid and hierarchical structure of this model allows us effectively to extract and exploit multi-modal correlations among the imagery data at different scales. It conceptually and computationally remedies limitations of many existing image representation methods that are based on either a fixed linear transformation (e.g. DCT, wavelets), an adaptive uni-modal linear transformation (e.g. PCA), or a multi-modal model at a single scale. We will justify both analytically and experimentally why and how such a simple multi-scale hybrid model is able to reduce simultaneously the model complexity and computational cost. Despite a small overhead for the model, our results show that this new model gives more compact representations for a wide variety of natural images under a wide range of signal-to-noise ratio than many existing methods, including wavelets. © 2005 IEEE.
Persistent Identifier: http://hdl.handle.net/10722/326712
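
The abstract above outlines the core construction: partition the image into blocks, treat each block as a vector, and fit a union of affine subspaces to those vectors at each down-sampling scale. The sketch below is a minimal Python illustration of that idea, not the authors' algorithm: it substitutes a simple alternating assign-and-refit scheme (nearest-subspace assignment plus per-group PCA) for the paper's actual subspace estimation and coding steps, and all names and parameters (image_blocks, fit_hybrid_linear_model, block size b, n_subspaces, dim, scales) are illustrative assumptions.

    import numpy as np

    def image_blocks(img, b=8):
        # Partition a grayscale image into non-overlapping b x b blocks,
        # returned as row vectors in R^(b*b).
        h, w = img.shape
        h, w = h - h % b, w - w % b
        blocks = (img[:h, :w]
                  .reshape(h // b, b, w // b, b)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, b * b))
        return blocks.astype(np.float64)

    def downsample(img):
        # 2x down-sampling by averaging each 2x2 neighbourhood (one coarser scale).
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        x = img[:h, :w].astype(np.float64)
        return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

    def fit_hybrid_linear_model(X, n_subspaces=4, dim=4, n_iter=20, seed=0):
        # Fit a union of affine subspaces to block vectors X (N x D) by alternating:
        # (a) re-fit each subspace with PCA on its assigned vectors,
        # (b) re-assign every vector to the subspace that reconstructs it best.
        rng = np.random.default_rng(seed)
        labels = rng.integers(n_subspaces, size=len(X))
        for _ in range(n_iter):
            models = []
            for k in range(n_subspaces):
                Xk = X[labels == k]
                if len(Xk) <= dim:                    # degenerate group: reseed it
                    Xk = X[rng.choice(len(X), dim + 1, replace=False)]
                mu = Xk.mean(axis=0)
                _, _, Vt = np.linalg.svd(Xk - mu, full_matrices=False)
                models.append((mu, Vt[:dim]))         # (offset, orthonormal basis)
            residuals = np.stack([
                np.linalg.norm((X - mu) - (X - mu) @ B.T @ B, axis=1)
                for mu, B in models])
            labels = residuals.argmin(axis=0)
        return models, labels

    def multiscale_hybrid_model(img, scales=3, **kw):
        # Fit one hybrid linear model per down-sampled scale of the image.
        out = []
        for _ in range(scales):
            out.append(fit_hybrid_linear_model(image_blocks(img), **kw))
            img = downsample(img)
        return out

For a 512x512 grayscale array img, multiscale_hybrid_model(img) returns, for each of three scales, the fitted (offset, basis) pairs and the block-to-subspace assignment. This covers only the model-fitting half of the representation; the paper's lossy scheme additionally selects subspace dimensions and encodes the per-block coefficients to meet a target distortion, which the sketch omits.
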

 

DC Field | Value | Language
dc.contributor.author | Hong, Wei | -
dc.contributor.author | Wright, John | -
dc.contributor.author | Huang, Kun | -
dc.contributor.author | Ma, Yi | -
dc.date.accessioned | 2023-03-31T05:25:59Z | -
dc.date.available | 2023-03-31T05:25:59Z | -
dc.date.issued | 2005 | -
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2005, v. I, p. 764-771 | -
dc.identifier.uri | http://hdl.handle.net/10722/326712 | -
dc.description.abstract | This paper introduces a simple and efficient representation for natural images. We partition an image into blocks and treat the blocks as vectors in a high-dimensional space. We then fit a piece-wise linear model (i.e. a union of affine subspaces) to the vectors at each down-sampling scale. We call this a multi-scale hybrid linear model of the image. The hybrid and hierarchical structure of this model allows us effectively to extract and exploit multi-modal correlations among the imagery data at different scales. It conceptually and computationally remedies limitations of many existing image representation methods that are based on either a fixed linear transformation (e.g. DCT, wavelets), an adaptive uni-modal linear transformation (e.g. PCA), or a multi-modal model at a single scale. We will justify both analytically and experimentally why and how such a simple multi-scale hybrid model is able to reduce simultaneously the model complexity and computational cost. Despite a small overhead for the model, our results show that this new model gives more compact representations for a wide variety of natural images under a wide range of signal-to-noise ratio than many existing methods, including wavelets. © 2005 IEEE. | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | -
dc.title | A multi-scale hybrid linear model for lossy image representation | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ICCV.2005.12 | -
dc.identifier.scopus | eid_2-s2.0-33745934479 | -
dc.identifier.volume | I | -
dc.identifier.spage | 764 | -
dc.identifier.epage | 771 | -
