Article: Bayesian learning of sparse multiscale image representations

Title: Bayesian learning of sparse multiscale image representations
Authors: Hughes, James Michael; Rockmore, Daniel N.; Wang, Yang
Keywords: Bayesian statistical modeling; Dictionary learning; Multiscale image processing; Sparse coding
Issue Date: 2013
Citation: IEEE Transactions on Image Processing, 2013, v. 22, n. 12, p. 4972-4983
Abstract: Multiscale representations of images have become a standard tool in image analysis. Such representations offer a number of advantages over fixed-scale methods, including the potential for improved performance in denoising, compression, and the ability to represent distinct but complementary information that exists at various scales. A variety of multiresolution transforms exist, including both orthogonal decompositions such as wavelets as well as nonorthogonal, overcomplete representations. Recently, techniques for finding adaptive, sparse representations have yielded state-of-the-art results when applied to traditional image processing problems. Attempts at developing multiscale versions of these so-called dictionary learning models have yielded modest but encouraging results. However, none of these techniques has sought to combine a rigorous statistical formulation of the multiscale dictionary learning problem and the ability to share atoms across scales. We present a model for multiscale dictionary learning that overcomes some of the drawbacks of previous approaches by first decomposing an input into a pyramid of distinct frequency bands using a recursive filtering scheme, after which we perform dictionary learning and sparse coding on the individual levels of the resulting pyramid. The associated image model allows us to use a single set of adapted dictionary atoms that is shared - and learned - across all scales in the model. The underlying statistical model of our proposed method is fully Bayesian and allows for efficient inference of parameters, including the level of additive noise for denoising applications. We apply the proposed model to several common image processing problems including non-Gaussian and nonstationary denoising of real-world color images. © 1992-2012 IEEE.
Persistent Identifier: http://hdl.handle.net/10722/363185
ISSN: 1057-7149
2023 Impact Factor: 10.8
2023 SCImago Journal Rankings: 3.556
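The pipeline the abstract describes (recursive filtering into a pyramid of frequency bands, then sparse coding of patches at every level against a single shared dictionary) can be sketched as below. This is a minimal illustration only, under stated assumptions: a random dictionary stands in for the learned one, and a greedy orthogonal matching pursuit coder stands in for the paper's fully Bayesian inference; the function names are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian low-pass filter with reflect padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(np.convolve, 1, padded, kernel, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="valid")

def laplacian_pyramid(img, levels=3, sigma=1.0):
    """Recursive filtering: split an image into distinct frequency bands."""
    bands, cur = [], img.astype(float)
    for _ in range(levels - 1):
        low = gaussian_blur(cur, sigma)
        bands.append(cur - low)   # band-pass detail at this scale
        cur = low[::2, ::2]       # decimate and recurse
    bands.append(cur)             # coarsest low-pass band
    return bands

def extract_patches(band, p=4):
    """Non-overlapping p-by-p patches, flattened to vectors."""
    H, W = band.shape
    return np.array([band[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, p)
                     for j in range(0, W - p + 1, p)])

def omp(D, y, k=3):
    """Greedy orthogonal matching pursuit: code y with at most k atoms of D."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.standard_normal((32, 32))
    bands = laplacian_pyramid(image, levels=3)
    # One dictionary shared across every scale (random here; the paper learns it).
    D = rng.standard_normal((16, 32))
    D /= np.linalg.norm(D, axis=0)
    codes = [np.array([omp(D, y, k=3) for y in extract_patches(b, p=4)])
             for b in bands]
    for b, c in zip(bands, codes):
        print(b.shape, "->", c.shape)
```

Note the design point the abstract emphasizes: every pyramid level is coded against the same matrix `D`, so atoms are shared across scales rather than learned per level.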

 

Dublin Core metadata (field: value):
dc.contributor.author: Hughes, James Michael
dc.contributor.author: Rockmore, Daniel N.
dc.contributor.author: Wang, Yang
dc.date.accessioned: 2025-10-10T07:45:04Z
dc.date.available: 2025-10-10T07:45:04Z
dc.date.issued: 2013
dc.identifier.citation: IEEE Transactions on Image Processing, 2013, v. 22, n. 12, p. 4972-4983
dc.identifier.issn: 1057-7149
dc.identifier.uri: http://hdl.handle.net/10722/363185
dc.description.abstract: Multiscale representations of images have become a standard tool in image analysis. Such representations offer a number of advantages over fixed-scale methods, including the potential for improved performance in denoising, compression, and the ability to represent distinct but complementary information that exists at various scales. A variety of multiresolution transforms exist, including both orthogonal decompositions such as wavelets as well as nonorthogonal, overcomplete representations. Recently, techniques for finding adaptive, sparse representations have yielded state-of-the-art results when applied to traditional image processing problems. Attempts at developing multiscale versions of these so-called dictionary learning models have yielded modest but encouraging results. However, none of these techniques has sought to combine a rigorous statistical formulation of the multiscale dictionary learning problem and the ability to share atoms across scales. We present a model for multiscale dictionary learning that overcomes some of the drawbacks of previous approaches by first decomposing an input into a pyramid of distinct frequency bands using a recursive filtering scheme, after which we perform dictionary learning and sparse coding on the individual levels of the resulting pyramid. The associated image model allows us to use a single set of adapted dictionary atoms that is shared - and learned - across all scales in the model. The underlying statistical model of our proposed method is fully Bayesian and allows for efficient inference of parameters, including the level of additive noise for denoising applications. We apply the proposed model to several common image processing problems including non-Gaussian and nonstationary denoising of real-world color images. © 1992-2012 IEEE.
dc.language: eng
dc.relation.ispartof: IEEE Transactions on Image Processing
dc.subject: Bayesian statistical modeling
dc.subject: Dictionary learning
dc.subject: Multiscale image processing
dc.subject: Sparse coding
dc.title: Bayesian learning of sparse multiscale image representations
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/TIP.2013.2280188
dc.identifier.scopus: eid_2-s2.0-84885598278
dc.identifier.volume: 22
dc.identifier.issue: 12
dc.identifier.spage: 4972
dc.identifier.epage: 4983