
Conference paper: Explanation Regeneration via Information Bottleneck

Title: Explanation Regeneration via Information Bottleneck
Chinese Title: 基于信息瓶颈优化的解释再生成 (Explanation Regeneration via Information Bottleneck Optimization)
Authors: Li, Qintong; Wu, Zhiyong; Kong, Lingpeng; Bi, Wei
Issue Date: 9-Jul-2023
Abstract

Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully selected evidence that forms supportive arguments for the predictions. Thanks to the superior generative capacity of large pretrained language models (PLMs), recent work built on prompt engineering enables explanations to be generated without task-specific training. However, explanations generated through single-pass prompting often lack sufficiency and conciseness, owing to prompt complexity and hallucination. To discard the dross and keep the essence of current PLM outputs, we propose to produce sufficient and concise explanations via information bottleneck (EIB) theory. EIB regenerates explanations by polishing the single-pass output of a PLM while retaining the information that supports the content being explained, balancing two information bottleneck objectives. Experiments on two different tasks verify the effectiveness of EIB through automatic evaluation and a thoroughly conducted human evaluation.
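For orientation, below is a minimal sketch of the classical information bottleneck objective (Tishby et al.) that the abstract invokes; the mapping of variables to the explanation-regeneration setting is an assumption for illustration, not the paper's exact formulation:

\[
\min_{p(t \mid x)} \; I(X; T) \;-\; \beta \, I(T; Y)
\]

Here \(T\) is a compressed version of the source \(X\) (plausibly the single-pass PLM output), \(Y\) is the signal to preserve (plausibly the content being explained), and \(\beta\) trades off conciseness (compressing \(X\)) against sufficiency (retaining information about \(Y\)), mirroring the two balanced objectives mentioned in the abstract.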


Persistent Identifier: http://hdl.handle.net/10722/350571

 

DC Field / Value
dc.contributor.author: Li, Qintong
dc.contributor.author: Wu, Zhiyong
dc.contributor.author: Kong, Lingpeng
dc.contributor.author: Bi, Wei
dc.date.accessioned: 2024-10-30T00:30:10Z
dc.date.available: 2024-10-30T00:30:10Z
dc.date.issued: 2023-07-09
dc.identifier.uri: http://hdl.handle.net/10722/350571
dc.description.abstract: Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation. These free-text explanations are expected to contain sufficient and carefully selected evidence that forms supportive arguments for the predictions. Thanks to the superior generative capacity of large pretrained language models (PLMs), recent work built on prompt engineering enables explanations to be generated without task-specific training. However, explanations generated through single-pass prompting often lack sufficiency and conciseness, owing to prompt complexity and hallucination. To discard the dross and keep the essence of current PLM outputs, we propose to produce sufficient and concise explanations via information bottleneck (EIB) theory. EIB regenerates explanations by polishing the single-pass output of a PLM while retaining the information that supports the content being explained, balancing two information bottleneck objectives. Experiments on two different tasks verify the effectiveness of EIB through automatic evaluation and a thoroughly conducted human evaluation.
dc.language: eng
dc.relation.ispartof: Annual Meeting of the Association for Computational Linguistics - ACL 2023 (09/07/2023-14/07/2023, Toronto, Canada)
dc.title: Explanation Regeneration via Information Bottleneck
dc.title: 基于信息瓶颈优化的解释再生成 (Explanation Regeneration via Information Bottleneck Optimization)
dc.type: Conference paper
dc.description.nature: preprint
dc.identifier.doi: 10.18653/v1/2023.findings-acl.765
dc.identifier.spage: 12081
dc.identifier.epage: 12102

Export via OAI-PMH Interface in XML Formats
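The Dublin Core record above can be harvested programmatically through the repository's OAI-PMH interface. Below is a minimal sketch in Python, assuming a standard DSpace-style OAI-PMH endpoint; the base URL and OAI identifier are guesses derived from the handle, not values taken from this page:

```python
# Fetch this record as Dublin Core XML via OAI-PMH.
# BASE_URL and IDENTIFIER are assumptions -- consult the repository's
# OAI-PMH documentation for the actual endpoint and identifier scheme.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://hub.hku.hk/oai/request"   # hypothetical endpoint
IDENTIFIER = "oai:hub.hku.hk:10722/350571"    # hypothetical OAI identifier

params = urllib.parse.urlencode({
    "verb": "GetRecord",         # standard OAI-PMH verb
    "metadataPrefix": "oai_dc",  # request unqualified Dublin Core
    "identifier": IDENTIFIER,
})

with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
    root = ET.fromstring(resp.read())

# Print each Dublin Core element (dc.title, dc.creator, ...).
DC_NS = "{http://purl.org/dc/elements/1.1/}"
for elem in root.iter():
    if elem.tag.startswith(DC_NS):
        print(elem.tag[len(DC_NS):], ":", (elem.text or "").strip())
```

GetRecord with metadataPrefix=oai_dc is part of the OAI-PMH 2.0 specification, so the same sketch works against any compliant repository once the real endpoint and identifier are substituted.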

