
Article: Safe Navigation With Human Instructions in Complex Scenes

Title: Safe Navigation With Human Instructions in Complex Scenes
Authors: Hu, Z; Pan, J; Fan, T; Yang, R; Manocha, D
Keywords: Navigation; Grounding; Semantics; Task analysis; Robot kinematics
Issue Date: 2019
Publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=PER481-ELE
Citation: IEEE Robotics and Automation Letters, 2019, v. 4 n. 2, p. 753-760
Abstract: In this letter, we present a robotic navigation algorithm with natural language interfaces that enables a robot to safely walk through a changing environment with moving persons by following human instructions such as “go to the restaurant and keep away from people.” We first classify human instructions into three types: goal, constraints, and uninformative phrases. Next, we provide grounding in a dynamic manner for the extracted goal and constraint items along with the navigation process to deal with target objects that are too far away for sensor observation and the appearance of moving obstacles such as humans. In particular, for a goal phrase (e.g., “go to the restaurant”), we ground it to a location in a predefined semantic map and treat it as a goal for a global motion planner, which plans a collision-free path in the workspace for the robot to follow. For a constraint phrase (e.g., “keep away from people”), we dynamically add the corresponding constraint into a local planner by adjusting the values of a local costmap according to the results returned by the object detection module. The updated costmap is then used to compute a local collision avoidance control for the safe navigation of the robot. By combining natural language processing, motion planning, and computer vision, our developed system can successfully follow natural language navigation instructions to achieve navigation tasks in both simulated and real-world scenarios. Videos are available at https://sites.google.com/view/snhi.
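The two-stage pipeline the abstract describes (classify each instruction phrase as a goal, constraint, or uninformative phrase, then ground constraints by raising local costmap values around detected obstacles) can be sketched as follows. This is a minimal illustrative sketch only: the keyword lists, grid costmap representation, and penalty value are assumptions made for the example, not the authors' actual NLP or planning implementation.

```python
# Illustrative phrase classifier: the paper distinguishes goal,
# constraint, and uninformative phrases. These keyword sets are
# hypothetical stand-ins for the paper's language-grounding module.
GOAL_VERBS = {"go", "move", "navigate", "walk"}
CONSTRAINT_VERBS = {"keep", "avoid", "stay"}

def classify_phrase(phrase):
    """Classify an instruction phrase by its leading verb (assumed heuristic)."""
    first = phrase.lower().split()[0]
    if first in GOAL_VERBS:
        return "goal"
    if first in CONSTRAINT_VERBS:
        return "constraint"
    return "uninformative"

def inflate_costmap(costmap, detections, penalty=50):
    """Raise the cost of cells around detected obstacles (e.g. people),
    mimicking the dynamic constraint grounding described in the abstract.
    `costmap` is a 2-D list of integer costs; `detections` is a list of
    (row, col) cells where the object detector reported an obstacle."""
    rows, cols = len(costmap), len(costmap[0])
    out = [row[:] for row in costmap]  # do not mutate the input map
    for (r, c) in detections:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    out[rr][cc] = min(255, out[rr][cc] + penalty)
    return out
```

In this sketch, a goal phrase such as “go to the restaurant” would be routed to the global planner, while a constraint phrase such as “keep away from people” triggers the costmap inflation; the local planner then plans against the updated costs.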
Persistent Identifier: http://hdl.handle.net/10722/273148
ISSN: 2377-3766
2019 Impact Factor: 3.608

 

DC Field: Value
dc.contributor.author: Hu, Z
dc.contributor.author: Pan, J
dc.contributor.author: Fan, T
dc.contributor.author: Yang, R
dc.contributor.author: Manocha, D
dc.date.accessioned: 2019-08-06T09:23:25Z
dc.date.available: 2019-08-06T09:23:25Z
dc.date.issued: 2019
dc.identifier.citation: IEEE Robotics and Automation Letters, 2019, v. 4 n. 2, p. 753-760
dc.identifier.issn: 2377-3766
dc.identifier.uri: http://hdl.handle.net/10722/273148
dc.description.abstract: In this letter, we present a robotic navigation algorithm with natural language interfaces that enables a robot to safely walk through a changing environment with moving persons by following human instructions such as “go to the restaurant and keep away from people.” We first classify human instructions into three types: goal, constraints, and uninformative phrases. Next, we provide grounding in a dynamic manner for the extracted goal and constraint items along with the navigation process to deal with target objects that are too far away for sensor observation and the appearance of moving obstacles such as humans. In particular, for a goal phrase (e.g., “go to the restaurant”), we ground it to a location in a predefined semantic map and treat it as a goal for a global motion planner, which plans a collision-free path in the workspace for the robot to follow. For a constraint phrase (e.g., “keep away from people”), we dynamically add the corresponding constraint into a local planner by adjusting the values of a local costmap according to the results returned by the object detection module. The updated costmap is then used to compute a local collision avoidance control for the safe navigation of the robot. By combining natural language processing, motion planning, and computer vision, our developed system can successfully follow natural language navigation instructions to achieve navigation tasks in both simulated and real-world scenarios. Videos are available at https://sites.google.com/view/snhi.
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers. The Journal's web site is located at https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=PER481-ELE
dc.relation.ispartof: IEEE Robotics and Automation Letters
dc.rights: IEEE Robotics and Automation Letters. Copyright © Institute of Electrical and Electronics Engineers.
dc.rights: ©20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.subject: Navigation
dc.subject: Grounding
dc.subject: Semantics
dc.subject: Task analysis
dc.subject: Robot kinematics
dc.title: Safe Navigation With Human Instructions in Complex Scenes
dc.type: Article
dc.identifier.email: Pan, J: jpan@cs.hku.hk
dc.identifier.authority: Pan, J=rp01984
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/LRA.2019.2893432
dc.identifier.scopus: eid_2-s2.0-85063310810
dc.identifier.hkuros: 300342
dc.identifier.volume: 4
dc.identifier.issue: 2
dc.identifier.spage: 753
dc.identifier.epage: 760
dc.publisher.place: United States
