Conference Paper: On the Feature Learning in Diffusion Models

Title: On the Feature Learning in Diffusion Models
Authors: Han, Andi; Huang, Wei; Cao, Yuan; Zou, Difan
Issue Date: 24-Apr-2025
Abstract:

The predominant success of diffusion models in generative modeling has spurred significant interest in understanding their theoretical foundations. In this work, we propose a feature learning framework aimed at analyzing and comparing the training dynamics of diffusion models with those of traditional classification models. Our theoretical analysis demonstrates that diffusion models, due to the denoising objective, are encouraged to learn more balanced and comprehensive representations of the data. In contrast, neural networks with a similar architecture trained for classification tend to prioritize learning specific patterns in the data, often focusing on easy-to-learn components. To support these theoretical insights, we conduct several experiments on both synthetic and real-world datasets, which empirically validate our findings and highlight the distinct feature learning dynamics in diffusion models compared to classification.
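
The contrast drawn in the abstract can be made concrete with a small sketch. The snippet below is illustrative only and does not reproduce the paper's actual model, data distribution, or diffusion parameterization: it simply shows how a denoising (diffusion-style) objective penalizes error on every component of the input, while a classification objective on a similar backbone depends only on label-relevant logits. All names here (net, clf_head, the toy data, and the noise scale) are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-layer backbone, used only to illustrate the comparison;
# the paper's architecture and data model may differ.
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))

x = torch.randn(128, 32)      # toy data batch (assumed, for illustration)
y = (x[:, 0] > 0).long()      # toy binary labels (assumed, for illustration)

# Denoising (diffusion-style) objective: predict the injected noise. The loss
# covers every coordinate of the input, so the network is pushed to represent
# all components of the data, not only the label-relevant ones.
noise = torch.randn_like(x)
x_noisy = x + 0.5 * noise
denoise_loss = F.mse_loss(net(x_noisy), noise)

# Classification objective on a similar architecture: the loss depends only on
# the class logits, so gradients concentrate on easy-to-learn patterns that
# separate the labels.
clf_head = nn.Linear(32, 2)
clf_loss = F.cross_entropy(clf_head(net(x)), y)
```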


Persistent Identifier: http://hdl.handle.net/10722/360544

 

DC Field: Value
dc.contributor.author: Han, Andi
dc.contributor.author: Huang, Wei
dc.contributor.author: Cao, Yuan
dc.contributor.author: Zou, Difan
dc.date.accessioned: 2025-09-12T00:36:54Z
dc.date.available: 2025-09-12T00:36:54Z
dc.date.issued: 2025-04-24
dc.identifier.uri: http://hdl.handle.net/10722/360544
dc.description.abstract: The predominant success of diffusion models in generative modeling has spurred significant interest in understanding their theoretical foundations. In this work, we propose a feature learning framework aimed at analyzing and comparing the training dynamics of diffusion models with those of traditional classification models. Our theoretical analysis demonstrates that diffusion models, due to the denoising objective, are encouraged to learn more balanced and comprehensive representations of the data. In contrast, neural networks with a similar architecture trained for classification tend to prioritize learning specific patterns in the data, often focusing on easy-to-learn components. To support these theoretical insights, we conduct several experiments on both synthetic and real-world datasets, which empirically validate our findings and highlight the distinct feature learning dynamics in diffusion models compared to classification.
dc.language: eng
dc.relation.ispartof: The 13th International Conference on Learning Representations (ICLR) (24/04/2025-28/04/2025, Singapore)
dc.title: On the Feature Learning in Diffusion Models
dc.type: Conference_Paper
