
Conference Paper: The Benefits of Mixup for Feature Learning

Title: The Benefits of Mixup for Feature Learning
Authors: Zou, Difan; Cao, Yuan; Li, Yuanzhi; Gu, Quanquan
Issue Date: 23-Jul-2023
Abstract

Mixup, a simple data augmentation method that randomly mixes two data points via linear interpolation, has been extensively applied in various deep learning applications to gain better generalization. However, the theoretical underpinnings of its efficacy are not yet fully understood. In this paper, we aim to develop a fundamental understanding of the benefits of Mixup. We first show that Mixup using different linear interpolation parameters for features and labels can still achieve performance similar to standard Mixup. This indicates that the intuitive linearity explanation in Zhang et al. (2018) may not fully explain the success of Mixup. We then perform a theoretical study of Mixup from the feature learning perspective. We consider a feature-noise data model and show that Mixup training can effectively learn the rare features (appearing in a small fraction of the data) from their mixture with the common features (appearing in a large fraction of the data). In contrast, standard training can only learn the common features and fails to learn the rare features, thus suffering from poor generalization performance. Moreover, our theoretical analysis shows that the benefits of Mixup for feature learning are mostly gained in the early training phase, based on which we propose to apply early stopping in Mixup. Experimental results verify our theoretical findings and demonstrate the effectiveness of early-stopped Mixup training.
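As a point of reference for the mechanism described in the abstract, the following is a minimal sketch of standard Mixup augmentation (Zhang et al., 2018): each example and its one-hot label are linearly interpolated with a randomly chosen partner from the same batch, using a single Beta-distributed coefficient. The function name, the default value of alpha, and the use of NumPy are illustrative assumptions, not taken from the paper's code.

    import numpy as np

    def mixup_batch(x, y, alpha=0.2, rng=None):
        """Standard Mixup: interpolate each example (and its one-hot label)
        with a randomly chosen partner from the same batch.

        x: array of shape (batch, ...) holding input features.
        y: array of shape (batch, num_classes) holding one-hot labels.
        alpha: concentration of the Beta(alpha, alpha) mixing distribution.
        """
        rng = rng or np.random.default_rng()
        lam = rng.beta(alpha, alpha)      # one coefficient shared by features and labels
        perm = rng.permutation(len(x))    # random partner for every example
        x_mix = lam * x + (1.0 - lam) * x[perm]
        y_mix = lam * y + (1.0 - lam) * y[perm]
        return x_mix, y_mix

The variant highlighted in the abstract would instead draw separate interpolation coefficients for the feature mixing and the label mixing; the precise construction is specified in the paper.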


Persistent Identifier: http://hdl.handle.net/10722/340297

 

DC Field                  Value
dc.contributor.author     Zou, Difan
dc.contributor.author     Cao, Yuan
dc.contributor.author     Li, Yuanzhi
dc.contributor.author     Gu, Quanquan
dc.date.accessioned       2024-03-11T10:43:06Z
dc.date.available         2024-03-11T10:43:06Z
dc.date.issued            2023-07-23
dc.identifier.uri         http://hdl.handle.net/10722/340297
dc.description.abstract   Mixup, a simple data augmentation method that randomly mixes two data points via linear interpolation, has been extensively applied in various deep learning applications to gain better generalization. However, the theoretical underpinnings of its efficacy are not yet fully understood. In this paper, we aim to develop a fundamental understanding of the benefits of Mixup. We first show that Mixup using different linear interpolation parameters for features and labels can still achieve performance similar to standard Mixup. This indicates that the intuitive linearity explanation in Zhang et al. (2018) may not fully explain the success of Mixup. We then perform a theoretical study of Mixup from the feature learning perspective. We consider a feature-noise data model and show that Mixup training can effectively learn the rare features (appearing in a small fraction of the data) from their mixture with the common features (appearing in a large fraction of the data). In contrast, standard training can only learn the common features and fails to learn the rare features, thus suffering from poor generalization performance. Moreover, our theoretical analysis shows that the benefits of Mixup for feature learning are mostly gained in the early training phase, based on which we propose to apply early stopping in Mixup. Experimental results verify our theoretical findings and demonstrate the effectiveness of early-stopped Mixup training.
dc.language               eng
dc.relation.ispartof      Fortieth International Conference on Machine Learning (ICML) (23/07/2023-29/07/2023, Honolulu, Hawaii, USA)
dc.title                  The Benefits of Mixup for Feature Learning
dc.type                   Conference_Paper
