Conference Paper: Random Feature Attention
Field | Value
---|---
Title | Random Feature Attention
Authors | Peng, H; Pappas, N; Yogatama, D; Schwartz, R; Smith, N; Kong, L
Keywords | Attention; transformers; machine translation; language modeling
Issue Date | 2021
Citation | The 9th International Conference on Learning Representations (ICLR 2021), Virtual Event, Austria, 3-7 May 2021
Abstract | Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep. While attention is powerful, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function, and explore its application in transformers. RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism. Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines. In the machine translation experiment, RFA decodes twice as fast as a vanilla transformer. Compared to existing efficient transformer variants, RFA is competitive in terms of both accuracy and efficiency on three long text classification datasets. Our analysis shows that RFA’s efficiency gains are especially notable on long sequences, suggesting that RFA will be particularly useful in tasks that require working with large inputs, fast decoding speed, or low memory footprints.
Description | Spotlight Presentation
Persistent Identifier | http://hdl.handle.net/10722/304336
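
For orientation, the core idea summarized in the abstract (approximating the softmax with random feature maps so attention runs in linear time and space) can be sketched in a few lines of NumPy. This is an illustrative sketch under simplifying assumptions (single head, non-causal, ℓ2-normalized queries and keys), not the authors' released implementation; the names `random_feature_map` and `rfa_attention` are ours.

```python
import numpy as np

def random_feature_map(x, W):
    # phi(x) = [sin(x W^T), cos(x W^T)] / sqrt(D): random Fourier features whose
    # inner product approximates the Gaussian kernel exp(-||q - k||^2 / 2).
    proj = x @ W.T                                    # (..., D)
    D = W.shape[0]
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(D)

def rfa_attention(Q, K, V, num_features=128, seed=0):
    # Linear time/space approximation of softmax attention (non-causal sketch).
    # Q, K: (n, d); V: (n, d_v). Keys and values are collapsed into fixed-size
    # summaries S = sum_i phi(k_i) v_i^T and z = sum_i phi(k_i), so each query
    # costs O(D * d_v) instead of O(n * d_v).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_features, Q.shape[-1]))
    # l2-normalize queries and keys so exp(q . k) is proportional to
    # exp(-||q - k||^2 / 2), which the random features approximate.
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    phi_q = random_feature_map(Qn, W)                 # (n, 2D)
    phi_k = random_feature_map(Kn, W)                 # (n, 2D)
    S = phi_k.T @ V                                   # (2D, d_v)
    z = phi_k.sum(axis=0)                             # (2D,)
    return (phi_q @ S) / (phi_q @ z)[:, None]         # (n, d_v)

# Toy usage: 8 positions, 16-dim queries/keys, 32-dim values.
rng = np.random.default_rng(1)
Q, K = rng.standard_normal((2, 8, 16))
V = rng.standard_normal((8, 32))
out = rfa_attention(Q, K, V, num_features=512)
print(out.shape)  # (8, 32)
```

Because the keys and values are reduced to the fixed-size summaries `S` and `z`, a causal variant can maintain them as running sums during decoding, which is where the fast decoding and low memory footprint described in the abstract come from; the optional gating mechanism mentioned there decays these sums to introduce recency bias.
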
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Peng, H | - |
dc.contributor.author | Pappas, N | - |
dc.contributor.author | Yogatama, D | - |
dc.contributor.author | Schwartz, R | - |
dc.contributor.author | Smith, N | - |
dc.contributor.author | Kong, L | - |
dc.date.accessioned | 2021-09-23T08:58:37Z | - |
dc.date.available | 2021-09-23T08:58:37Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | The 9th International Conference on Learning Representations (ICLR 2021), Virtual Event, Austria, 3-7 May 2021 | - |
dc.identifier.uri | http://hdl.handle.net/10722/304336 | - |
dc.description | Spotlight Presentation | - |
dc.description.abstract | Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep. While attention is powerful, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function, and explore its application in transformers. RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism. Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines. In the machine translation experiment, RFA decodes twice as fast as a vanilla transformer. Compared to existing efficient transformer variants, RFA is competitive in terms of both accuracy and efficiency on three long text classification datasets. Our analysis shows that RFA’s efficiency gains are especially notable on long sequences, suggesting that RFA will be particularly useful in tasks that require working with large inputs, fast decoding speed, or low memory footprints. | - |
dc.language | eng | - |
dc.relation.ispartof | International Conference on Learning Representations (ICLR 2021) | - |
dc.subject | Attention | - |
dc.subject | transformers | - |
dc.subject | machine translation | - |
dc.subject | language modeling | - |
dc.title | Random Feature Attention | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Kong, L: lpk@cs.hku.hk | - |
dc.identifier.authority | Kong, L=rp02775 | - |
dc.identifier.hkuros | 324952 | - |