Conference Paper: Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach

Title: Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach
Authors: Liu, Xiao; Liu, Yuanwei; Chen, Yue; Hanzo, Lajos
Keywords: Autonomous driving; deep reinforcement learning; fuel consumption; safe driving; trajectory design; V2I communications
Issue Date: 2020
Citation: IEEE Transactions on Vehicular Technology, 2020, v. 69, n. 8, p. 8329-8342
Abstract: A novel framework is proposed for enhancing the driving safety and fuel economy of autonomous vehicles (AVs) with the aid of vehicle-to-infrastructure (V2I) communication networks. The problem of driving trajectory design is formulated for minimizing the total fuel consumption while enhancing driving safety (by obeying the traffic rules and avoiding obstacles). To solve this problem, a deep reinforcement learning (DRL) approach is proposed for making collision-free decisions. Firstly, a deep Q-network (DQN) aided algorithm is proposed for determining the trajectory and velocity of the AV, which receives real-time traffic information from the base stations (BSs). More particularly, the AV acts as an agent that carries out optimal actions, such as lane changes and velocity changes, by interacting with the environment. Secondly, to overcome the large overestimation of action values by the Q-learning model, a double deep Q-network (DDQN) algorithm is proposed that decomposes the max-Q-value operation into action selection and action evaluation. Additionally, three practical driving policies are proposed as benchmarks. Numerical results demonstrate that the proposed trajectory design algorithms are capable of enhancing the driving safety and fuel economy of AVs, and that the proposed DDQN-based algorithm outperforms the DQN-based algorithm. It is also demonstrated that the proposed fuel-economy (FE) based driving policy derived from the DRL algorithm achieves in excess of 24% fuel savings over the benchmarks.
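The DDQN decomposition described in the abstract curbs Q-value overestimation by letting the online network select the next action while a separate target network evaluates it, instead of taking a single max over one network's estimates. A minimal Python sketch of this standard double-DQN target computation (van Hasselt et al.) follows; the function and variable names are illustrative assumptions, not taken from the paper.

    import numpy as np

    def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
        # Double-DQN target: the online network's Q-values pick the action
        # (action selection); the target network's Q-values score that same
        # action (action evaluation), avoiding max-based overestimation.
        if done:
            return reward
        best_action = int(np.argmax(next_q_online))          # action selection
        return reward + gamma * next_q_target[best_action]   # action evaluation

    # Toy usage with stand-in Q-vectors for two actions (e.g. keep lane, change lane):
    q_online = np.array([1.0, 2.0])   # online network prefers action 1
    q_target = np.array([0.5, 1.5])   # target network's estimate per action
    print(ddqn_target(1.0, q_online, q_target))   # 1.0 + 0.99 * 1.5 = 2.485

In plain DQN the target would be reward + gamma * max(next_q_target), which couples selection and evaluation in one network and tends to overestimate; the split above is what the abstract's "action selection and action evaluation" refers to.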
Persistent Identifier: http://hdl.handle.net/10722/349461
ISSN: 0018-9545
2023 Impact Factor: 6.1
2023 SCImago Journal Rankings: 2.714

DC Field                   Value
dc.contributor.author      Liu, Xiao
dc.contributor.author      Liu, Yuanwei
dc.contributor.author      Chen, Yue
dc.contributor.author      Hanzo, Lajos
dc.date.accessioned        2024-10-17T06:58:41Z
dc.date.available          2024-10-17T06:58:41Z
dc.date.issued             2020
dc.identifier.citation     IEEE Transactions on Vehicular Technology, 2020, v. 69, n. 8, p. 8329-8342
dc.identifier.issn         0018-9545
dc.identifier.uri          http://hdl.handle.net/10722/349461
dc.description.abstract    A novel framework is proposed for enhancing the driving safety and fuel economy of autonomous vehicles (AVs) with the aid of vehicle-to-infrastructure (V2I) communication networks. The problem of driving trajectory design is formulated for minimizing the total fuel consumption while enhancing driving safety (by obeying the traffic rules and avoiding obstacles). To solve this problem, a deep reinforcement learning (DRL) approach is proposed for making collision-free decisions. Firstly, a deep Q-network (DQN) aided algorithm is proposed for determining the trajectory and velocity of the AV, which receives real-time traffic information from the base stations (BSs). More particularly, the AV acts as an agent that carries out optimal actions, such as lane changes and velocity changes, by interacting with the environment. Secondly, to overcome the large overestimation of action values by the Q-learning model, a double deep Q-network (DDQN) algorithm is proposed that decomposes the max-Q-value operation into action selection and action evaluation. Additionally, three practical driving policies are proposed as benchmarks. Numerical results demonstrate that the proposed trajectory design algorithms are capable of enhancing the driving safety and fuel economy of AVs, and that the proposed DDQN-based algorithm outperforms the DQN-based algorithm. It is also demonstrated that the proposed fuel-economy (FE) based driving policy derived from the DRL algorithm achieves in excess of 24% fuel savings over the benchmarks.
dc.language                eng
dc.relation.ispartof       IEEE Transactions on Vehicular Technology
dc.subject                 Autonomous driving
dc.subject                 deep reinforcement learning
dc.subject                 fuel consumption
dc.subject                 safe driving
dc.subject                 trajectory design
dc.subject                 V2I communications
dc.title                   Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach
dc.type                    Conference_Paper
dc.description.nature      link_to_subscribed_fulltext
dc.identifier.doi          10.1109/TVT.2020.2996187
dc.identifier.scopus       eid_2-s2.0-85090148317
dc.identifier.volume       69
dc.identifier.issue        8
dc.identifier.spage        8329
dc.identifier.epage        8342
dc.identifier.eissn        1939-9359
