
Conference Paper: Affordances-Oriented Planning using Foundation Models for Continuous Vision-Language Navigation

Title: Affordances-Oriented Planning using Foundation Models for Continuous Vision-Language Navigation
Authors: Chen, Jiaqi; Lin, Bingqian; Liu, Xinmin; Ma, Lin; Liang, Xiaodan; Wong, Kwan-Yee K
Issue Date: 25-Feb-2025
Abstract

LLM-based agents have demonstrated impressive zero-shot performance on the vision-language navigation (VLN) task. However, existing LLM-based methods often focus only on high-level task planning, selecting nodes in predefined navigation graphs for movement and overlooking low-level control in navigation scenarios. To bridge this gap, we propose AO-Planner, a novel Affordances-Oriented Planner for the continuous VLN task. Our AO-Planner integrates various foundation models to achieve affordances-oriented low-level motion planning and high-level decision-making, both performed in a zero-shot setting. Specifically, we employ a Visual Affordances Prompting (VAP) approach, in which the visible ground is segmented by SAM to provide navigational affordances, based on which the LLM selects potential candidate waypoints and plans low-level paths towards the selected waypoints. We further propose a high-level PathAgent, which marks the planned paths on the image input and reasons about the most probable path by comprehending all environmental information. Finally, we convert the selected path into 3D coordinates using the camera intrinsic parameters and depth information, avoiding challenging 3D predictions for LLMs. Experiments on the challenging R2R-CE and RxR-CE datasets show that AO-Planner achieves state-of-the-art zero-shot performance (an 8.8% improvement in SPL). Our method can also serve as a data annotator to obtain pseudo-labels, distilling its waypoint prediction ability into a learning-based predictor. This new predictor requires no waypoint data from the simulator and achieves a 47% SR, competitive with supervised methods. We establish an effective connection between LLMs and the 3D world, presenting novel prospects for employing foundation models in low-level motion control.
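
The final step of the pipeline, converting a path selected in image space into 3D coordinates from camera intrinsics and depth, follows standard pinhole back-projection. Below is a minimal illustrative sketch of that kind of conversion, not the authors' implementation; the function name, intrinsic values, and depth map are placeholder assumptions.

import numpy as np

def backproject_path(path_uv, depth, K):
    """Back-project pixel waypoints (u, v) into 3D camera-frame points.

    path_uv : iterable of (u, v) pixel coordinates (u = column, v = row).
    depth   : (H, W) depth map in metres, aligned with the RGB image.
    K       : (3, 3) intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    points = []
    for u, v in path_uv:
        z = depth[int(v), int(u)]      # metric depth at the waypoint pixel
        x = (u - cx) * z / fx          # pinhole back-projection
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.asarray(points)

# Example with placeholder intrinsics and a constant depth map.
K = np.array([[320.0, 0.0, 320.0],
              [0.0, 320.0, 240.0],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.5)
print(backproject_path([(320, 400), (350, 380)], depth, K))

Because the LLM only has to choose among 2D waypoints drawn on the image, this back-projection step is what grounds its decisions in metric 3D space without requiring the model itself to predict 3D quantities.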


Persistent Identifier: http://hdl.handle.net/10722/354548

 

DC Field | Value | Language
dc.contributor.author | Chen, Jiaqi | -
dc.contributor.author | Lin, Bingqian | -
dc.contributor.author | Liu, Xinmin | -
dc.contributor.author | Ma, Lin | -
dc.contributor.author | Liang, Xiaodan | -
dc.contributor.author | Wong, Kwan-Yee K | -
dc.date.accessioned | 2025-02-13T00:35:16Z | -
dc.date.available | 2025-02-13T00:35:16Z | -
dc.date.issued | 2025-02-25 | -
dc.identifier.uri | http://hdl.handle.net/10722/354548 | -
dc.description.abstract | LLM-based agents have demonstrated impressive zero-shot performance on the vision-language navigation (VLN) task. However, existing LLM-based methods often focus only on high-level task planning, selecting nodes in predefined navigation graphs for movement and overlooking low-level control in navigation scenarios. To bridge this gap, we propose AO-Planner, a novel Affordances-Oriented Planner for the continuous VLN task. Our AO-Planner integrates various foundation models to achieve affordances-oriented low-level motion planning and high-level decision-making, both performed in a zero-shot setting. Specifically, we employ a Visual Affordances Prompting (VAP) approach, in which the visible ground is segmented by SAM to provide navigational affordances, based on which the LLM selects potential candidate waypoints and plans low-level paths towards the selected waypoints. We further propose a high-level PathAgent, which marks the planned paths on the image input and reasons about the most probable path by comprehending all environmental information. Finally, we convert the selected path into 3D coordinates using the camera intrinsic parameters and depth information, avoiding challenging 3D predictions for LLMs. Experiments on the challenging R2R-CE and RxR-CE datasets show that AO-Planner achieves state-of-the-art zero-shot performance (an 8.8% improvement in SPL). Our method can also serve as a data annotator to obtain pseudo-labels, distilling its waypoint prediction ability into a learning-based predictor. This new predictor requires no waypoint data from the simulator and achieves a 47% SR, competitive with supervised methods. We establish an effective connection between LLMs and the 3D world, presenting novel prospects for employing foundation models in low-level motion control. | -
dc.language | eng | -
dc.relation.ispartof | The 39th Annual AAAI Conference on Artificial Intelligence (AAAI) (25/02/2025-04/03/2025, Philadelphia, Pennsylvania) | -
dc.title | Affordances-Oriented Planning using Foundation Models for Continuous Vision-Language Navigation | -
dc.type | Conference_Paper | -
