Conference Paper: Understanding Programs by Exploiting (Fuzzing) Test Cases

Title: Understanding Programs by Exploiting (Fuzzing) Test Cases
Authors: Zhao, Jianyu; Rong, Yuyang; Guo, Yiwen; He, Yifeng; Chen, Hao
Issue Date: 2023
Citation: Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2023, p. 10667-10679
Abstract: Semantic understanding of programs has attracted great attention in the community. Inspired by recent successes of large language models (LLMs) in natural language understanding, tremendous progress has been made by treating programming language as another kind of natural language and training LLMs on corpora of program code. However, programs are essentially different from texts, in the sense that they are normally heavily structured and syntactically strict. In particular, programs and their basic units (i.e., functions and subroutines) are designed to demonstrate a variety of behaviors and/or provide possible outputs given different inputs. The relationship between inputs and possible outputs/behaviors represents the functions/subroutines and profiles the program as a whole. We therefore propose to incorporate this relationship into learning, to achieve a deeper semantic understanding of programs. To obtain inputs that are representative enough to trigger the execution of most parts of the code, we resort to fuzz testing and propose fuzz tuning to boost the performance of program understanding and code representation learning, given a pre-trained LLM. The effectiveness of the proposed method is verified on two program understanding tasks, code clone detection and code classification, on which it outperforms the current state of the art by large margins. Code is available at https://github.com/rabbitjy/FuzzTuning.
Persistent Identifier: http://hdl.handle.net/10722/347072
ISSN: 0736-587X
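The core idea in the abstract — profiling a function by the relationship between fuzzed inputs and the outputs they produce, then pairing that profile with the source code for a pre-trained LLM to learn from — can be illustrated with a minimal sketch. Everything below (the `target_program` toy function, the naive random `fuzz_inputs` helper, and the text layout) is an illustrative assumption, not the paper's actual pipeline; the real implementation uses proper fuzz testing and is available in the linked FuzzTuning repository.

```python
import random


def target_program(x: int) -> str:
    # Toy subroutine whose input/output behavior we want to profile.
    if x < 0:
        return "negative"
    if x % 2 == 0:
        return "even"
    return "odd"


def fuzz_inputs(n: int, lo: int = -100, hi: int = 100) -> list[int]:
    # Naive random fuzzer, for illustration only: real fuzz testing (as used
    # in the paper) mutates inputs to maximize code coverage, so that most
    # parts of the code get executed.
    rng = random.Random(0)  # seeded for reproducibility
    return [rng.randint(lo, hi) for _ in range(n)]


def build_training_text(source: str, inputs: list[int]) -> str:
    # Concatenate the source code with sampled input/output pairs, so that a
    # model fine-tuned on this text sees both the code and its behavior.
    pairs = [f"input: {x} -> output: {target_program(x)}" for x in inputs]
    return source + "\n# behavior profile:\n# " + "\n# ".join(pairs)


source = "def target_program(x): ..."
text = build_training_text(source, fuzz_inputs(5))
print(text)
```

The design point the abstract makes is that the input/output pairs carry semantic information that the raw token sequence of the code does not, which is why augmenting the training text with them can help tasks like clone detection (two syntactically different clones produce the same behavior profile).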


DC Field | Value | Language
dc.contributor.author | Zhao, Jianyu | -
dc.contributor.author | Rong, Yuyang | -
dc.contributor.author | Guo, Yiwen | -
dc.contributor.author | He, Yifeng | -
dc.contributor.author | Chen, Hao | -
dc.date.accessioned | 2024-09-17T04:15:10Z | -
dc.date.available | 2024-09-17T04:15:10Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2023, p. 10667-10679 | -
dc.identifier.issn | 0736-587X | -
dc.identifier.uri | http://hdl.handle.net/10722/347072 | -
dc.language | eng | -
dc.relation.ispartof | Proceedings of the Annual Meeting of the Association for Computational Linguistics | -
dc.title | Understanding Programs by Exploiting (Fuzzing) Test Cases | -
dc.type | Conference_Paper | -
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.scopus | eid_2-s2.0-85173691344 | -
dc.identifier.spage | 10667 | -
dc.identifier.epage | 10679 | -
