File Download
There are no files associated with this item.
Links for fulltext (may require subscription):
- Publisher Website: 10.1109/TIP.2013.2280188
- Scopus: eid_2-s2.0-84885598278
Citations:
- Scopus: 0
Article: Bayesian learning of sparse multiscale image representations
| Title | Bayesian learning of sparse multiscale image representations |
|---|---|
| Authors | Hughes, James Michael; Rockmore, Daniel N.; Wang, Yang |
| Keywords | Bayesian statistical modeling; Dictionary learning; Multiscale image processing; Sparse coding |
| Issue Date | 2013 |
| Citation | IEEE Transactions on Image Processing, 2013, v. 22, n. 12, p. 4972-4983 |
| Abstract | Multiscale representations of images have become a standard tool in image analysis. Such representations offer a number of advantages over fixed-scale methods, including the potential for improved performance in denoising, compression, and the ability to represent distinct but complementary information that exists at various scales. A variety of multiresolution transforms exist, including both orthogonal decompositions such as wavelets as well as nonorthogonal, overcomplete representations. Recently, techniques for finding adaptive, sparse representations have yielded state-of-the-art results when applied to traditional image processing problems. Attempts at developing multiscale versions of these so-called dictionary learning models have yielded modest but encouraging results. However, none of these techniques has sought to combine a rigorous statistical formulation of the multiscale dictionary learning problem and the ability to share atoms across scales. We present a model for multiscale dictionary learning that overcomes some of the drawbacks of previous approaches by first decomposing an input into a pyramid of distinct frequency bands using a recursive filtering scheme, after which we perform dictionary learning and sparse coding on the individual levels of the resulting pyramid. The associated image model allows us to use a single set of adapted dictionary atoms that is shared - and learned - across all scales in the model. The underlying statistical model of our proposed method is fully Bayesian and allows for efficient inference of parameters, including the level of additive noise for denoising applications. We apply the proposed model to several common image processing problems including non-Gaussian and nonstationary denoising of real-world color images. © 1992-2012 IEEE. |
| Persistent Identifier | http://hdl.handle.net/10722/363185 |
| ISSN | 1057-7149 |
| Metrics | 2023 Impact Factor: 10.8; 2023 SCImago Journal Rankings: 3.556 |
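The pipeline the abstract describes — recursive filtering into a pyramid of band-pass levels, followed by sparse coding of each level against a single dictionary shared across all scales — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the helpers (`gaussian_blur`, `laplacian_pyramid`, `extract_patches`, `omp`) are hypothetical names, greedy orthogonal matching pursuit stands in for the paper's fully Bayesian inference, and a random normalized dictionary stands in for the learned one.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian low-pass filter (the recursive filtering step)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def laplacian_pyramid(img, levels=3):
    """Recursively split an image into band-pass levels plus a low-pass residual."""
    bands, cur = [], img
    for _ in range(levels):
        low = gaussian_blur(cur)
        bands.append(cur - low)   # band-pass detail at this scale
        cur = low[::2, ::2]       # downsample for the next, coarser scale
    bands.append(cur)             # low-pass residual
    return bands

def extract_patches(img, p=8):
    """Non-overlapping p x p patches, flattened into rows."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, p)
                     for j in range(0, W - p + 1, p)])

def omp(D, x, k=4):
    """Greedy orthogonal matching pursuit: a k-sparse code of x over dictionary D."""
    residual, idx = x.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
bands = laplacian_pyramid(img, levels=2)

# One shared set of 64 atoms used at every scale (random here, learned in the paper).
D = rng.standard_normal((64, 64))
D /= np.linalg.norm(D, axis=0)

codes = [np.array([omp(D, x) for x in extract_patches(b)]) for b in bands]
```

Sharing `D` across every level of `bands` is the point of contrast with earlier multiscale dictionary methods, which fit a separate dictionary per scale.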
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Hughes, James Michael | - |
| dc.contributor.author | Rockmore, Daniel N. | - |
| dc.contributor.author | Wang, Yang | - |
| dc.date.accessioned | 2025-10-10T07:45:04Z | - |
| dc.date.available | 2025-10-10T07:45:04Z | - |
| dc.date.issued | 2013 | - |
| dc.identifier.citation | IEEE Transactions on Image Processing, 2013, v. 22, n. 12, p. 4972-4983 | - |
| dc.identifier.issn | 1057-7149 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/363185 | - |
| dc.description.abstract | Multiscale representations of images have become a standard tool in image analysis. Such representations offer a number of advantages over fixed-scale methods, including the potential for improved performance in denoising, compression, and the ability to represent distinct but complementary information that exists at various scales. A variety of multiresolution transforms exist, including both orthogonal decompositions such as wavelets as well as nonorthogonal, overcomplete representations. Recently, techniques for finding adaptive, sparse representations have yielded state-of-the-art results when applied to traditional image processing problems. Attempts at developing multiscale versions of these so-called dictionary learning models have yielded modest but encouraging results. However, none of these techniques has sought to combine a rigorous statistical formulation of the multiscale dictionary learning problem and the ability to share atoms across scales. We present a model for multiscale dictionary learning that overcomes some of the drawbacks of previous approaches by first decomposing an input into a pyramid of distinct frequency bands using a recursive filtering scheme, after which we perform dictionary learning and sparse coding on the individual levels of the resulting pyramid. The associated image model allows us to use a single set of adapted dictionary atoms that is shared - and learned - across all scales in the model. The underlying statistical model of our proposed method is fully Bayesian and allows for efficient inference of parameters, including the level of additive noise for denoising applications. We apply the proposed model to several common image processing problems including non-Gaussian and nonstationary denoising of real-world color images. © 1992-2012 IEEE. | - |
| dc.language | eng | - |
| dc.relation.ispartof | IEEE Transactions on Image Processing | - |
| dc.subject | Bayesian statistical modeling | - |
| dc.subject | Dictionary learning | - |
| dc.subject | Multiscale image processing | - |
| dc.subject | Sparse coding | - |
| dc.title | Bayesian learning of sparse multiscale image representations | - |
| dc.type | Article | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1109/TIP.2013.2280188 | - |
| dc.identifier.scopus | eid_2-s2.0-84885598278 | - |
| dc.identifier.volume | 22 | - |
| dc.identifier.issue | 12 | - |
| dc.identifier.spage | 4972 | - |
| dc.identifier.epage | 4983 | - |
