Conference Paper: Automated vehicle overtaking based on a multiple-goal reinforcement learning framework

Title: Automated vehicle overtaking based on a multiple-goal reinforcement learning framework
Authors: Ngai, DCK; Yung, NHC
Issue Date: 2007
Publisher: IEEE
Citation: IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2007, p. 818-823
Abstract: In this paper, we propose a reinforcement learning multiple-goal framework to solve the automated vehicle overtaking problem. Here, the overtaking problem is solved by considering the destination seeking goal and the collision avoidance goal simultaneously. The host vehicle uses Double-action Q-Learning for collision avoidance and Q-learning for destination seeking, learning to react to the different motions carried out by a leading vehicle. Simulations show that the proposed method performs well regardless of whether the vehicle to be overtaken holds a steady or unsteady course. Given the promising results, better navigation is expected if additional goals such as lane following are introduced in the multiple-goal framework. © 2007 IEEE.
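The full text is behind a subscription, so as a rough illustration only: the abstract describes one tabular Q-learner per goal (destination seeking, collision avoidance) whose value estimates are combined when choosing a maneuver. The sketch below shows the standard one-step Q-learning backup and a simple weighted-sum fusion of per-goal Q-values; the action names, fusion rule, and all identifiers are illustrative assumptions, not taken from the paper (which uses Double-action Q-Learning for the avoidance goal).

```python
import random
from collections import defaultdict

# Hypothetical sketch of a multiple-goal tabular Q-learning setup.
# One Q-table per goal; action selection fuses the per-goal values.
ACTIONS = ["keep_lane", "shift_left", "shift_right", "accelerate", "brake"]
ALPHA, GAMMA = 0.1, 0.9  # learning rate, discount factor (assumed values)

def make_q():
    """A Q-table mapping (state, action) pairs to values, default 0."""
    return defaultdict(float)

def q_update(q, state, action, reward, next_state):
    """Standard one-step Q-learning backup for a single goal."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

def fused_action(q_goals, weights, state, epsilon=0.1):
    """Epsilon-greedy action on a weighted sum of per-goal Q-values."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: sum(w * q[(state, a)]
                                          for q, w in zip(q_goals, weights)))
```

In such a framework, each goal's learner is updated with its own reward signal (e.g. progress toward the destination, or a penalty on near-collisions), while the fusion step arbitrates between goals at decision time.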
Persistent Identifier: http://hdl.handle.net/10722/99022

DC Field | Value | Language
dc.contributor.author | Ngai, DCK | en_HK
dc.contributor.author | Yung, NHC | en_HK
dc.date.accessioned | 2010-09-25T18:12:39Z | -
dc.date.available | 2010-09-25T18:12:39Z | -
dc.date.issued | 2007 | en_HK
dc.identifier.citation | IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2007, p. 818-823 | en_HK
dc.identifier.uri | http://hdl.handle.net/10722/99022 | -
dc.description.abstract | In this paper, we propose a reinforcement learning multiple-goal framework to solve the automated vehicle overtaking problem. Here, the overtaking problem is solved by considering the destination seeking goal and the collision avoidance goal simultaneously. The host vehicle uses Double-action Q-Learning for collision avoidance and Q-learning for destination seeking, learning to react to the different motions carried out by a leading vehicle. Simulations show that the proposed method performs well regardless of whether the vehicle to be overtaken holds a steady or unsteady course. Given the promising results, better navigation is expected if additional goals such as lane following are introduced in the multiple-goal framework. © 2007 IEEE. | en_HK
dc.language | eng | en_HK
dc.publisher | IEEE | en_HK
dc.relation.ispartof | IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC | en_HK
dc.title | Automated vehicle overtaking based on a multiple-goal reinforcement learning framework | en_HK
dc.type | Conference_Paper | en_HK
dc.identifier.email | Yung, NHC:nyung@eee.hku.hk | en_HK
dc.identifier.authority | Yung, NHC=rp00226 | en_HK
dc.description.nature | link_to_subscribed_fulltext | -
dc.identifier.doi | 10.1109/ITSC.2007.4357682 | en_HK
dc.identifier.scopus | eid_2-s2.0-49249113108 | en_HK
dc.identifier.hkuros | 143216 | en_HK
dc.relation.references | http://www.scopus.com/mlt/select.url?eid=2-s2.0-49249113108&selection=ref&src=s&origin=recordpage | en_HK
dc.identifier.spage | 818 | en_HK
dc.identifier.epage | 823 | en_HK
dc.identifier.scopusauthorid | Ngai, DCK=9332358900 | en_HK
dc.identifier.scopusauthorid | Yung, NHC=7003473369 | en_HK
