File Download

There are no files associated with this item.

Conference Paper: SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving

Title: SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving
Authors: Zhao, Xueliang; Huang, Xinting; Bi, Wei; Kong, Lingpeng
Issue Date: 11-Aug-2024
Abstract:

Large Language Models (LLMs) have driven substantial progress in artificial intelligence in recent years, exhibiting impressive capabilities across a wide range of tasks, including mathematical problem-solving. Inspired by the success of subgoal-based methods, we propose a novel framework called SEquential subGoal Optimization (SEGO) to enhance LLMs’ ability to solve mathematical problems. By establishing a connection between the subgoal breakdown process and the probability of solving problems, SEGO aims to identify better subgoals with theoretical guarantees. Addressing the challenge of identifying suitable subgoals in a large solution space, our framework generates problem-specific subgoals and adjusts them according to carefully designed criteria. Incorporating these optimized subgoals into the policy model training leads to significant improvements in problem-solving performance. We validate SEGO’s efficacy through experiments on two benchmarks, GSM8K and MATH, where our approach outperforms existing methods, highlighting the potential of SEGO in AI-driven mathematical problem-solving.
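
Purely as an illustration of what sequential subgoal selection can look like in code, and not the algorithm described in the paper, the sketch below shows a toy greedy loop: propose candidate subgoals for the current state, score each by an estimated probability of leading to a solution, and keep the best one as the next link in the subgoal chain. Every function name and the scoring heuristic here are hypothetical stand-ins; in the paper the subgoal generation and the selection criteria are the carefully designed, theoretically grounded components.

```python
# Toy illustration of a greedy "sequential subgoal" loop.
# NOT the SEGO algorithm from the paper: all names and the scoring
# function are hypothetical placeholders.

import random
from typing import List


def generate_candidate_subgoals(state: str, n: int = 4) -> List[str]:
    """Stand-in for a model that proposes intermediate steps for a problem state."""
    return [f"{state} -> candidate step {i}" for i in range(n)]


def estimate_solve_probability(state: str, subgoal: str) -> float:
    """Stand-in for a learned critic estimating how likely this subgoal is to
    lead to a correct final answer; here it is just random noise."""
    return random.random()


def sequential_subgoal_search(problem: str, depth: int = 3) -> List[str]:
    """Greedily pick, at each step, the candidate subgoal with the highest
    estimated solve probability, producing a chain of subgoals."""
    chain, current = [], problem
    for _ in range(depth):
        candidates = generate_candidate_subgoals(current)
        best = max(candidates, key=lambda s: estimate_solve_probability(current, s))
        chain.append(best)
        current = best
    return chain


if __name__ == "__main__":
    for step in sequential_subgoal_search("Solve: 3x + 5 = 20"):
        print(step)
```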


Persistent Identifier: http://hdl.handle.net/10722/347178

 

DC Field | Value | Language
dc.contributor.author | Zhao, Xueliang | -
dc.contributor.author | Huang, Xinting | -
dc.contributor.author | Bi, Wei | -
dc.contributor.author | Kong, Lingpeng | -
dc.date.accessioned | 2024-09-18T00:30:55Z | -
dc.date.available | 2024-09-18T00:30:55Z | -
dc.date.issued | 2024-08-11 | -
dc.identifier.uri | http://hdl.handle.net/10722/347178 | -
dc.description.abstract | Large Language Models (LLMs) have driven substantial progress in artificial intelligence in recent years, exhibiting impressive capabilities across a wide range of tasks, including mathematical problem-solving. Inspired by the success of subgoal-based methods, we propose a novel framework called SEquential subGoal Optimization (SEGO) to enhance LLMs’ ability to solve mathematical problems. By establishing a connection between the subgoal breakdown process and the probability of solving problems, SEGO aims to identify better subgoals with theoretical guarantees. Addressing the challenge of identifying suitable subgoals in a large solution space, our framework generates problem-specific subgoals and adjusts them according to carefully designed criteria. Incorporating these optimized subgoals into the policy model training leads to significant improvements in problem-solving performance. We validate SEGO’s efficacy through experiments on two benchmarks, GSM8K and MATH, where our approach outperforms existing methods, highlighting the potential of SEGO in AI-driven mathematical problem-solving. | -
dc.language | eng | -
dc.relation.ispartof | The 62nd Annual Meeting of the Association for Computational Linguistics (11/08/2024-16/08/2024, Bangkok, Thailand) | -
dc.title | SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving | -
dc.type | Conference_Paper | -
