- Appears in Collections: Book: Explanation Regeneration via Information Bottleneck
Title | Explanation Regeneration via Information Bottleneck
---|---
Authors | Li, Qintong; Wu, Zhiyong; Kong, Lingpeng; Bi, Wei
Issue Date | 9-Jul-2023
Abstract | Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully-selected evidence to form supportive arguments for predictions. Thanks to the superior generative capacity of large pretrained language models (PLM), recent work built on prompt engineering enables explanations generated without specific training. However, explanations generated through single-pass prompting often lack sufficiency and conciseness, due to the prompt complexity and hallucination issues. To discard the dross and take the essence of current PLM’s results, we propose to produce sufficient and concise explanations via the information bottleneck (EIB) theory. EIB regenerates explanations by polishing the single-pass output of PLM but retaining the information that supports the contents being explained by balancing two information bottleneck objectives. Experiments on two different tasks verify the effectiveness of EIB through automatic evaluation and thoroughly-conducted human evaluation.
Persistent Identifier | http://hdl.handle.net/10722/350571
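
The abstract’s “balancing two information bottleneck objectives” refers to the standard IB trade-off between compressing an input and preserving task-relevant information. As an illustrative sketch only (the paper’s exact objective, variable names, and weighting are not reproduced here), the generic IB Lagrangian reads:

$$
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
$$

where, in this setting, $X$ would be the single-pass PLM explanation, $T$ the regenerated (bottlenecked) explanation, $Y$ the content being explained, $I(\cdot\,;\cdot)$ mutual information, and $\beta > 0$ a weight trading conciseness (compressing $X$) against sufficiency (retaining information about $Y$).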
DC Field | Value | Language |
---|---|---
dc.contributor.author | Li, Qintong | - |
dc.contributor.author | Wu, Zhiyong | - |
dc.contributor.author | Kong, Lingpeng | - |
dc.contributor.author | Bi, Wei | - |
dc.date.accessioned | 2024-10-30T00:30:10Z | - |
dc.date.available | 2024-10-30T00:30:10Z | - |
dc.date.issued | 2023-07-09 | - |
dc.identifier.uri | http://hdl.handle.net/10722/350571 | - |
dc.description.abstract | Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully-selected evidence to form supportive arguments for predictions. Thanks to the superior generative capacity of large pretrained language models (PLM), recent work built on prompt engineering enables explanations generated without specific training. However, explanations generated through single-pass prompting often lack sufficiency and conciseness, due to the prompt complexity and hallucination issues. To discard the dross and take the essence of current PLM’s results, we propose to produce sufficient and concise explanations via the information bottleneck (EIB) theory. EIB regenerates explanations by polishing the single-pass output of PLM but retaining the information that supports the contents being explained by balancing two information bottleneck objectives. Experiments on two different tasks verify the effectiveness of EIB through automatic evaluation and thoroughly-conducted human evaluation. | -
dc.language | eng | - |
dc.relation.ispartof | Annual Meeting of the Association for Computational Linguistics - ACL 2023 (09/07/2023-14/07/2023, Toronto, Canada) | - |
dc.title | Explanation Regeneration via Information Bottleneck | - |
dc.type | Book | - |
dc.description.nature | preprint | - |
dc.identifier.doi | 10.18653/v1/2023.findings-acl.765 | - |
dc.identifier.spage | 12081 | - |
dc.identifier.epage | 12102 | - |