Conference Paper: LiteGT: Efficient and Lightweight Graph Transformers

Title: LiteGT: Efficient and Lightweight Graph Transformers
Authors: Chen, C; Tao, C; Wong, N
Issue Date: 2021
Publisher: Association for Computing Machinery.
Citation: Proceedings of the 30th ACM International Conference on Information and Knowledge Management (CIKM2021), Online Meeting, Gold Coast, Queensland, Australia, 1-5 November 2021, p. 161-170
Abstract: Transformers have shown great potential for modeling long-term dependencies for natural language processing and computer vision. However, little study has applied transformers to graphs, which is challenging due to the poor scalability of the attention mechanism and the under-exploration of graph inductive bias. To bridge this gap, we propose a Lite Graph Transformer (LiteGT) that learns on arbitrary graphs efficiently. First, a node sampling strategy is proposed to sparsify the considered nodes in self-attention with only $\mathcal{O}(N\log N)$ time. Second, we devise two kernelization approaches to form two-branch attention blocks, which not only leverage graph-specific topology information, but also reduce computation further to $\mathcal{O}(\frac{1}{2}N\log N)$. Third, the nodes are updated with different attention schemes during training, thus largely mitigating over-smoothing problems when the model layers deepen. Extensive experiments demonstrate that LiteGT achieves competitive performance on both \textit{node classification} and \textit{link prediction} on datasets with millions of nodes. Specifically, \textit{Jaccard + Sampling + Dim. reducing} setting reduces more than $100\times$ computation and halves the model size without performance degradation.
Description: Full Papers
Persistent Identifier: http://hdl.handle.net/10722/301981
ISBN: 9781450384469
ISI Accession Number ID: WOS:001054156200019
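
The abstract above summarizes three efficiency ideas: attention over a sampled subset of nodes (bringing cost to roughly O(N log N)), two-branch kernelized attention that also exploits graph topology, and alternating attention schemes across layers. As a rough illustration of the first idea only, here is a minimal PyTorch-style sketch of attention computed over a sampled node subset. The function name, the uniform sampling rule, and the log-sized subset are all hypothetical simplifications; the actual LiteGT sampling criterion, Jaccard kernelization, and two-branch design are described in the paper itself and are not reproduced here.

    import math
    import torch

    def sampled_attention(x, num_samples=None):
        # x: (N, d) node features. Every node attends only to a sampled
        # subset of nodes, so cost is O(N * num_samples) instead of O(N^2).
        n, d = x.shape
        if num_samples is None:
            num_samples = max(1, math.ceil(math.log2(n)))  # roughly log N sampled nodes
        idx = torch.randperm(n)[:num_samples]              # hypothetical uniform sampling rule
        q = x                                              # queries: all N nodes
        k = x[idx]                                         # keys: sampled nodes only
        v = x[idx]                                         # values: sampled nodes only
        scores = q @ k.t() / math.sqrt(d)                  # (N, num_samples) scaled dot products
        attn = torch.softmax(scores, dim=-1)
        return attn @ v                                    # (N, d) updated node features

    # Example: 1,000 nodes with 64-dim features attend over ~10 sampled nodes.
    out = sampled_attention(torch.randn(1000, 64))
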

 

DC Field: Value
dc.contributor.author: Chen, C
dc.contributor.author: Tao, C
dc.contributor.author: Wong, N
dc.date.accessioned: 2021-08-21T03:29:49Z
dc.date.available: 2021-08-21T03:29:49Z
dc.date.issued: 2021
dc.identifier.citation: Proceedings of the 30th ACM International Conference on Information and Knowledge Management (CIKM2021), Online Meeting, Gold Coast, Queensland, Australia, 1-5 November 2021, p. 161-170
dc.identifier.isbn: 9781450384469
dc.identifier.uri: http://hdl.handle.net/10722/301981
dc.description: Full Papers
dc.description.abstract: Transformers have shown great potential for modeling long-term dependencies for natural language processing and computer vision. However, little study has applied transformers to graphs, which is challenging due to the poor scalability of the attention mechanism and the under-exploration of graph inductive bias. To bridge this gap, we propose a Lite Graph Transformer (LiteGT) that learns on arbitrary graphs efficiently. First, a node sampling strategy is proposed to sparsify the considered nodes in self-attention with only $\mathcal{O}(N\log N)$ time. Second, we devise two kernelization approaches to form two-branch attention blocks, which not only leverage graph-specific topology information, but also reduce computation further to $\mathcal{O}(\frac{1}{2}N\log N)$. Third, the nodes are updated with different attention schemes during training, thus largely mitigating over-smoothing problems when the model layers deepen. Extensive experiments demonstrate that LiteGT achieves competitive performance on both \textit{node classification} and \textit{link prediction} on datasets with millions of nodes. Specifically, \textit{Jaccard + Sampling + Dim. reducing} setting reduces more than $100\times$ computation and halves the model size without performance degradation.
dc.language: eng
dc.publisher: Association for Computing Machinery.
dc.relation.ispartof: The 30th ACM International Conference on Information and Knowledge Management (CIKM2021) Proceedings
dc.rights: The 30th ACM International Conference on Information and Knowledge Management (CIKM2021) Proceedings. Copyright © Association for Computing Machinery.
dc.title: LiteGT: Efficient and Lightweight Graph Transformers
dc.type: Conference_Paper
dc.identifier.email: Wong, N: nwong@eee.hku.hk
dc.identifier.authority: Wong, N=rp00190
dc.identifier.doi: 10.1145/3459637.3482272
dc.identifier.scopus: eid_2-s2.0-85119211996
dc.identifier.hkuros: 324505
dc.identifier.spage: 161
dc.identifier.epage: 170
dc.identifier.isi: WOS:001054156200019
dc.publisher.place: New York, NY
