Conference Paper: AutoBERT-Zero: Evolving BERT Backbone from Scratch

Title: AutoBERT-Zero: Evolving BERT Backbone from Scratch
Authors: Gao, J; Xu, H; Shi, H; Ren, X; Yu, PLH; Liang, X; Jiang, X; Li, Z
Keywords: Speech & Natural Language Processing (SNLP)
Issue Date: 2022
Publisher: Association for the Advancement of Artificial Intelligence.
Citation: 36th AAAI Conference on Artificial Intelligence (Virtual), February 22-March 1, 2022. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, v. 36 n. 10, p. 10663-10671
Abstract: Transformer-based pre-trained language models like BERT and its variants have recently achieved promising performance in various natural language processing (NLP) tasks. However, the conventional paradigm constructs the backbone by purely stacking manually designed global self-attention layers, which introduces inductive bias and thus leads to sub-optimal architectures. In this work, we make the first attempt to automatically discover a novel pre-trained language model (PLM) backbone from scratch, on a flexible search space containing the most fundamental operations. Specifically, we propose a well-designed search space which (i) contains primitive math operations at the intra-layer level to explore novel attention structures, and (ii) leverages convolution blocks at the inter-layer level to supplement attention and better capture local dependencies. To improve the efficiency of finding promising architectures, we propose an Operation-Priority Neural Architecture Search (OP-NAS) algorithm, which optimizes both the search algorithm and the evaluation of candidate models. Specifically, we propose an Operation-Priority (OP) evolution strategy that facilitates model search by balancing exploration and exploitation. Furthermore, we design a Bi-branch Weight-Sharing (BIWS) training strategy for fast model evaluation. Extensive experiments show that the searched architecture (named AutoBERT-Zero) significantly outperforms BERT and its variants of different model capacities on various downstream tasks, demonstrating the architecture's transfer and scaling abilities. Remarkably, AutoBERT-Zero-base outperforms RoBERTa-base (which uses much more data) and BERT-large (which has a much larger model size) by 2.4 and 1.4 points, respectively, on the GLUE test set.
Description: AAAI-22 Technical Tracks 10
Persistent Identifier: http://hdl.handle.net/10722/315046
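The abstract above only sketches the Operation-Priority (OP) evolution strategy at a high level: candidate backbones built from primitive operations are mutated while balancing exploration and exploitation. As a rough illustration of how such a priority-guided evolutionary search might look, the following Python sketch assumes a toy primitive-operation vocabulary, a UCB-style priority score, and a dummy proxy fitness; these names and details are illustrative assumptions, not the paper's actual OP-NAS or BIWS implementation.

# Illustrative sketch only: a toy priority-guided evolutionary search.
# The primitive set, UCB-style priority, and proxy fitness are assumptions
# made for illustration; they do not reproduce the paper's OP-NAS details.
import math
import random

PRIMITIVES = ["matmul", "softmax", "scale", "transpose", "conv3x1", "gelu"]  # assumed op vocabulary


class OpStats:
    """Track how often an op appears in sampled candidates and how well those candidates scored."""

    def __init__(self):
        self.count = 0
        self.total_reward = 0.0

    def priority(self, total_samples):
        # UCB-style score: exploit ops with high mean reward, explore rarely tried ops.
        if self.count == 0:
            return float("inf")
        mean = self.total_reward / self.count
        return mean + math.sqrt(2.0 * math.log(total_samples + 1) / self.count)


def proxy_fitness(arch):
    # Placeholder fitness; a real system would evaluate a proxy model
    # (e.g., with weight sharing) and return a validation metric.
    return random.Random(hash(tuple(arch))).random()


def mutate(arch, stats, total_samples):
    """Replace one op in the candidate, choosing the new op by its current priority."""
    child = list(arch)
    pos = random.randrange(len(child))
    child[pos] = max(PRIMITIVES, key=lambda op: stats[op].priority(total_samples))
    return child


def search(arch_len=6, population_size=8, generations=20):
    stats = {op: OpStats() for op in PRIMITIVES}
    population = [[random.choice(PRIMITIVES) for _ in range(arch_len)] for _ in range(population_size)]
    total_samples = 0
    best, best_score = None, -1.0
    for _ in range(generations):
        for arch in population:
            score = proxy_fitness(arch)
            total_samples += 1
            for op in arch:
                stats[op].count += 1
                stats[op].total_reward += score
            if score > best_score:
                best, best_score = arch, score
        # Keep the top half and refill with priority-guided mutations of survivors.
        population.sort(key=proxy_fitness, reverse=True)
        survivors = population[: population_size // 2]
        children = [mutate(random.choice(survivors), stats, total_samples)
                    for _ in range(population_size - len(survivors))]
        population = survivors + children
    return best, best_score


if __name__ == "__main__":
    arch, score = search()
    print("best architecture:", arch, "proxy score:", round(score, 3))

In this toy loop, unseen operations receive an unbounded priority (forcing exploration), while frequently successful operations keep a high mean-reward term (exploitation); the paper's actual search additionally relies on the BIWS weight-sharing strategy to make candidate evaluation fast.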

 

dc.contributor.author: Gao, J
dc.contributor.author: Xu, H
dc.contributor.author: Shi, H
dc.contributor.author: Ren, X
dc.contributor.author: Yu, PLH
dc.contributor.author: Liang, X
dc.contributor.author: Jiang, X
dc.contributor.author: Li, Z
dc.date.accessioned: 2022-08-05T09:39:13Z
dc.date.available: 2022-08-05T09:39:13Z
dc.date.issued: 2022
dc.identifier.citation: 36th AAAI Conference on Artificial Intelligence (Virtual), February 22-March 1, 2022. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, v. 36 n. 10, p. 10663-10671
dc.identifier.uri: http://hdl.handle.net/10722/315046
dc.description: AAAI-22 Technical Tracks 10
dc.description.abstract: Transformer-based pre-trained language models like BERT and its variants have recently achieved promising performance in various natural language processing (NLP) tasks. However, the conventional paradigm constructs the backbone by purely stacking manually designed global self-attention layers, which introduces inductive bias and thus leads to sub-optimal architectures. In this work, we make the first attempt to automatically discover a novel pre-trained language model (PLM) backbone from scratch, on a flexible search space containing the most fundamental operations. Specifically, we propose a well-designed search space which (i) contains primitive math operations at the intra-layer level to explore novel attention structures, and (ii) leverages convolution blocks at the inter-layer level to supplement attention and better capture local dependencies. To improve the efficiency of finding promising architectures, we propose an Operation-Priority Neural Architecture Search (OP-NAS) algorithm, which optimizes both the search algorithm and the evaluation of candidate models. Specifically, we propose an Operation-Priority (OP) evolution strategy that facilitates model search by balancing exploration and exploitation. Furthermore, we design a Bi-branch Weight-Sharing (BIWS) training strategy for fast model evaluation. Extensive experiments show that the searched architecture (named AutoBERT-Zero) significantly outperforms BERT and its variants of different model capacities on various downstream tasks, demonstrating the architecture's transfer and scaling abilities. Remarkably, AutoBERT-Zero-base outperforms RoBERTa-base (which uses much more data) and BERT-large (which has a much larger model size) by 2.4 and 1.4 points, respectively, on the GLUE test set.
dc.language: eng
dc.publisher: Association for the Advancement of Artificial Intelligence.
dc.relation.ispartof: Proceedings of the 36th AAAI Conference on Artificial Intelligence
dc.subject: Speech & Natural Language Processing (SNLP)
dc.title: AutoBERT-Zero: Evolving BERT Backbone from Scratch
dc.type: Conference_Paper
dc.identifier.doi: 10.1609/aaai.v36i10.21311
dc.identifier.hkuros: 335298
dc.identifier.volume: 36
dc.identifier.issue: 10
dc.identifier.spage: 10663
dc.identifier.epage: 10671
dc.publisher.place: United States
