Article: Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints

Title: Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints
Authors: Chen, Lienhung; Jiang, Zhongliang; Cheng, Long; Knoll, Alois C.; Zhou, Mingchuan
Keywords: collision avoidance; neural networks; reinforcement learning; robotics; trajectory planning; uncertain environment
Issue Date: 2022
Citation: Frontiers in Neurorobotics, 2022, v. 16, article no. 883562
Abstract: With advances in algorithms, deep reinforcement learning (DRL) offers solutions to trajectory planning in uncertain environments. Unlike traditional trajectory planning, which requires considerable effort to tackle complicated high-dimensional problems, DRL enables a robot manipulator to autonomously learn and discover optimal trajectory planning by interacting with the environment. In this article, we present state-of-the-art DRL-based collision-avoidance trajectory planning for uncertain environments, such as a safe human-coexistent environment. Since the robot manipulator operates in high-dimensional continuous state-action spaces, the model-free, policy-gradient-based soft actor-critic (SAC) and deep deterministic policy gradient (DDPG) frameworks are adapted to our scenario for comparison. To assess our proposal, we simulate a 7-DOF Panda (Franka Emika) robot manipulator in the PyBullet physics engine and evaluate its trajectory planning in terms of reward, loss, safe rate, and accuracy. The results show the effectiveness of state-of-the-art DRL algorithms for trajectory planning under uncertain environments, with zero collisions after 5,000 episodes of training.
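The abstract evaluates trajectory planning by reward, safe rate, and accuracy, where the agent must reach a goal while avoiding collisions. As a minimal illustration of that kind of objective, the sketch below shapes a per-step reward from goal distance and obstacle clearance; all names, weights, and the safety margin are hypothetical, not taken from the paper:

```python
import math

def shaped_reward(ee_pos, goal_pos, min_obstacle_dist,
                  goal_weight=1.0, collision_penalty=10.0, safe_margin=0.05):
    """Hypothetical collision-avoidance reward (illustrative only).

    ee_pos / goal_pos: 3-D end-effector and goal positions.
    min_obstacle_dist: closest distance between the arm and any obstacle.
    """
    # Dense shaping term: move the end-effector toward the goal.
    r = -goal_weight * math.dist(ee_pos, goal_pos)
    # Large penalty when the arm enters the safety margin around an obstacle.
    if min_obstacle_dist < safe_margin:
        r -= collision_penalty
    return r

# 0.1 m from the goal with safe clearance: only the small shaping term applies.
print(round(shaped_reward([0.5, 0.0, 0.5], [0.5, 0.0, 0.4], 0.2), 3))  # prints -0.1
```

In practice such a scalar reward would be returned by the simulator wrapper at each control step and consumed unchanged by both SAC and DDPG, since both operate on generic scalar rewards over continuous state-action spaces.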
Persistent Identifier: http://hdl.handle.net/10722/365398

 

DC Field: Value
dc.contributor.author: Chen, Lienhung
dc.contributor.author: Jiang, Zhongliang
dc.contributor.author: Cheng, Long
dc.contributor.author: Knoll, Alois C.
dc.contributor.author: Zhou, Mingchuan
dc.date.accessioned: 2025-11-05T06:55:53Z
dc.date.available: 2025-11-05T06:55:53Z
dc.date.issued: 2022
dc.identifier.citation: Frontiers in Neurorobotics, 2022, v. 16, article no. 883562
dc.identifier.uri: http://hdl.handle.net/10722/365398
dc.language: eng
dc.relation.ispartof: Frontiers in Neurorobotics
dc.subject: collision avoidance
dc.subject: neural networks
dc.subject: reinforcement learning
dc.subject: robotics
dc.subject: trajectory planning
dc.subject: uncertain environment
dc.title: Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints
dc.type: Article
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.3389/fnbot.2022.883562
dc.identifier.scopus: eid_2-s2.0-85130018027
dc.identifier.volume: 16
dc.identifier.spage: article no. 883562
dc.identifier.epage: article no. 883562
dc.identifier.eissn: 1662-5218
