Conference Paper: Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning
Field | Value |
---|---|
Title | Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning |
Authors | Gao, Jiahui; Pi, Renjie; Yong, Lin; Xu, Hang; Ye, Jiacheng; Wu, Zhiyong; Zhang, Weizhong; Liang, Xiaodan; Li, Zhenguo; Kong, Lingpeng |
Issue Date | 1-May-2023 |
Abstract | There is a rising interest in further exploring the zero-shot learning potential of large pre-trained language models (PLMs). A new paradigm called data-generation-based zero-shot learning has achieved impressive success. In this paradigm, the data synthesized by the PLM acts as the carrier of knowledge and is used to train a task-specific model with orders of magnitude fewer parameters than the PLM, achieving both higher performance and efficiency than prompt-based zero-shot learning methods on PLMs. The main hurdle of this approach is that the data synthesized by the PLM usually contains a significant portion of low-quality samples. Fitting on such data greatly hampers the performance of the task-specific model, making it unreliable for deployment. Previous methods remedy this issue mainly by filtering synthetic data using heuristic metrics (e.g., output confidence) or by refining the data with the help of a human expert, which comes with excessive manual tuning or expensive costs. In this paper, we propose SUNGEN, a novel noise-robust re-weighting framework that automatically constructs high-quality data for zero-shot classification problems. Our framework learns sample weights indicating data quality without requiring any human annotation. We theoretically and empirically verify the ability of our method to construct good-quality synthetic datasets. Notably, SUNGEN-LSTM yields a 9.8% relative improvement over the baseline in average accuracy across eight established text classification tasks. |
Persistent Identifier | http://hdl.handle.net/10722/333818 |
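For readers unfamiliar with the re-weighting idea described in the abstract, the following is a minimal sketch of learning per-sample weights against a bounded, noise-robust loss, so that no human-annotated clean data is required. Everything here is illustrative: the toy data, the softmax weight parameterization, the MAE-style bounded loss, the entropy regularizer, and the alternating update scheme are all simplifying assumptions, not the authors' exact SUNGEN objective or training procedure.

```python
# Hypothetical simplification of noise-robust sample re-weighting (not the
# authors' exact SUNGEN method). Per-sample weights are learned jointly with a
# tiny classifier; a bounded, noise-robust loss provides the weight-learning
# signal, so no clean validation set is needed.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for PLM-synthesized data: 2-D features, binary labels, 30% label noise.
n, d = 200, 2
x = torch.randn(n, d)
y_clean = (x[:, 0] > 0).long()
noisy = torch.rand(n) < 0.3
y = torch.where(noisy, 1 - y_clean, y_clean)

model = torch.nn.Linear(d, 2)                    # tiny task-specific model
w_logits = torch.zeros(n, requires_grad=True)    # learnable per-sample weight logits
opt_model = torch.optim.SGD(model.parameters(), lr=0.1)
opt_w = torch.optim.SGD([w_logits], lr=0.5)

for step in range(300):
    # Inner step: fit the model on the weighted cross-entropy loss.
    weights = torch.softmax(w_logits, dim=0).detach()   # weights sum to 1
    ce = F.cross_entropy(model(x), y, reduction="none")
    opt_model.zero_grad()
    (n * weights * ce).mean().backward()
    opt_model.step()

    # Outer step: re-score each sample with a bounded (MAE-style) loss, which is
    # known to be robust to label noise, and shift weight mass toward samples
    # that loss deems clean. The entropy term keeps the weights from collapsing
    # onto a handful of samples (minimizing sum(w * log w) maximizes entropy).
    with torch.no_grad():
        p_y = model(x).softmax(dim=-1).gather(1, y.unsqueeze(1)).squeeze(1)
        robust = 1.0 - p_y                              # bounded in [0, 1]
    w = torch.softmax(w_logits, dim=0)
    outer = (w * robust).sum() + 0.05 * (w * w.clamp_min(1e-8).log()).sum()
    opt_w.zero_grad()
    outer.backward()
    opt_w.step()

# Noisy samples should end up with visibly smaller learned weights.
with torch.no_grad():
    w = torch.softmax(w_logits, dim=0)
    print(f"mean weight (noisy):  {w[noisy].mean().item():.5f}")
    print(f"mean weight (clean): {w[~noisy].mean().item():.5f}")
```

The design point this sketch tries to capture is that the outer signal is a bounded loss rather than held-out clean data: because a bounded loss caps the influence any single mislabeled sample can exert, it can steer the weights toward clean samples, which mirrors the abstract's claim of learning quality-indicating weights "without requiring any human annotation".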
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gao, Jiahui | - |
dc.contributor.author | Pi, Renjie | - |
dc.contributor.author | Yong, Lin | - |
dc.contributor.author | Xu, Hang | - |
dc.contributor.author | Ye, Jiacheng | - |
dc.contributor.author | Wu, Zhiyong | - |
dc.contributor.author | Zhang, Weizhong | - |
dc.contributor.author | Liang, Xiaodan | - |
dc.contributor.author | Li, Zhenguo | - |
dc.contributor.author | Kong, Lingpeng | - |
dc.date.accessioned | 2023-10-06T08:39:19Z | - |
dc.date.available | 2023-10-06T08:39:19Z | - |
dc.date.issued | 2023-05-01 | - |
dc.identifier.uri | http://hdl.handle.net/10722/333818 | - |
dc.description.abstract | There is a rising interest in further exploring the zero-shot learning potential of large pre-trained language models (PLMs). A new paradigm called data-generation-based zero-shot learning has achieved impressive success. In this paradigm, the data synthesized by the PLM acts as the carrier of knowledge and is used to train a task-specific model with orders of magnitude fewer parameters than the PLM, achieving both higher performance and efficiency than prompt-based zero-shot learning methods on PLMs. The main hurdle of this approach is that the data synthesized by the PLM usually contains a significant portion of low-quality samples. Fitting on such data greatly hampers the performance of the task-specific model, making it unreliable for deployment. Previous methods remedy this issue mainly by filtering synthetic data using heuristic metrics (e.g., output confidence) or by refining the data with the help of a human expert, which comes with excessive manual tuning or expensive costs. In this paper, we propose SUNGEN, a novel noise-robust re-weighting framework that automatically constructs high-quality data for zero-shot classification problems. Our framework learns sample weights indicating data quality without requiring any human annotation. We theoretically and empirically verify the ability of our method to construct good-quality synthetic datasets. Notably, SUNGEN-LSTM yields a 9.8% relative improvement over the baseline in average accuracy across eight established text classification tasks. | - |
dc.language | eng | - |
dc.relation.ispartof | International Conference on Learning Representations (ICLR 2023) (01/05/2023-05/05/2023, Kigali, Rwanda) | - |
dc.title | Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning | - |
dc.type | Conference_Paper | - |
dc.identifier.doi | 10.48550/arXiv.2205.12679 | - |