Conference Paper: Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach
Title | Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach |
---|---|
Authors | Liu, Xiao; Liu, Yuanwei; Chen, Yue; Hanzo, Lajos |
Keywords | Autonomous driving; deep reinforcement learning; fuel consumption; safe driving; trajectory design; V2I communications |
Issue Date | 2020 |
Citation | IEEE Transactions on Vehicular Technology, 2020, v. 69, n. 8, p. 8329-8342 |
Abstract | A novel framework is proposed for enhancing the driving safety and fuel economy of autonomous vehicles (AVs) with the aid of vehicle-to-infrastructure (V2I) communication networks. The problem of driving trajectory design is formulated for minimizing the total fuel consumption, while enhancing driving safety (by obeying the traffic rules and avoiding obstacles). In an effort to solve this pertinent problem, a deep reinforcement learning (DRL) approach is proposed for making collision-free decisions. Firstly, a deep Q-network (DQN) aided algorithm is proposed for determining the trajectory and velocity of the AV, which receives real-time traffic information from the base stations (BSs). More particularly, the AV acts as an agent that carries out optimal actions, such as lane changes and velocity changes, by interacting with the environment. Secondly, to overcome the large overestimation of action values by the Q-learning model, a double deep Q-network (DDQN) algorithm is proposed by decomposing the max-Q-value operation into action selection and action evaluation. Additionally, three practical driving policies are proposed as benchmarks. Numerical results demonstrate that the proposed trajectory design algorithms are capable of enhancing the driving safety and fuel economy of AVs, and that the proposed DDQN-based algorithm outperforms the DQN-based algorithm. It is also demonstrated that the proposed fuel-economy (FE) based driving policy derived from the DRL algorithm achieves fuel savings in excess of 24% over the benchmarks. |
Persistent Identifier | http://hdl.handle.net/10722/349461 |
ISSN | 0018-9545 |
2023 Impact Factor | 6.1 |
2023 SCImago Journal Rankings | 2.714 |
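The key algorithmic point in the abstract is the DDQN decomposition: the online network selects the greedy next action while the target network evaluates it, curbing the Q-value overestimation that plain DQN suffers when one network both selects and evaluates. Below is a minimal illustrative sketch of that target computation only; it is not the authors' implementation, and the network architecture, state/action dimensions, and discount factor are all assumptions.

```python
# Minimal sketch of DQN vs. DDQN targets (illustrative only, not the paper's code).
import torch
import torch.nn as nn

STATE_DIM = 8    # assumed: e.g. AV kinematics plus V2I traffic info from the BS
N_ACTIONS = 5    # assumed: e.g. {keep lane, change left, change right, speed up, slow down}
GAMMA = 0.99     # discount factor (assumed value)

class QNet(nn.Module):
    """Small fully connected Q-network; the architecture is an assumption."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

def dqn_target(r, s_next, done, target_net):
    # Plain DQN: the target network both selects and evaluates the action.
    # Taking a max over noisy estimates tends to overestimate action values.
    q_next = target_net(s_next).max(dim=1).values
    return r + GAMMA * (1.0 - done) * q_next

def ddqn_target(r, s_next, done, online_net, target_net):
    # DDQN: decompose the max-Q operation into (i) action selection by the
    # online network and (ii) action evaluation by the target network.
    a_star = online_net(s_next).argmax(dim=1, keepdim=True)   # selection
    q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # evaluation
    return r + GAMMA * (1.0 - done) * q_eval

if __name__ == "__main__":
    online, target = QNet(), QNet()
    target.load_state_dict(online.state_dict())
    s_next = torch.randn(4, STATE_DIM)  # dummy batch of next states
    r = torch.zeros(4)                  # dummy rewards
    done = torch.zeros(4)               # episode-termination flags
    print(dqn_target(r, s_next, done, target))
    print(ddqn_target(r, s_next, done, online, target))
```

Because the two networks are unlikely to overestimate the same actions, the evaluated value is a less biased estimate, which is the mechanism the abstract credits for the DDQN algorithm outperforming the DQN one.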
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Xiao | - |
dc.contributor.author | Liu, Yuanwei | - |
dc.contributor.author | Chen, Yue | - |
dc.contributor.author | Hanzo, Lajos | - |
dc.date.accessioned | 2024-10-17T06:58:41Z | - |
dc.date.available | 2024-10-17T06:58:41Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | IEEE Transactions on Vehicular Technology, 2020, v. 69, n. 8, p. 8329-8342 | - |
dc.identifier.issn | 0018-9545 | - |
dc.identifier.uri | http://hdl.handle.net/10722/349461 | - |
dc.description.abstract | A novel framework is proposed for enhancing the driving safety and fuel economy of autonomous vehicles (AVs) with the aid of vehicle-to-infrastructure (V2I) communication networks. The problem of driving trajectory design is formulated for minimizing the total fuel consumption, while enhancing driving safety (by obeying the traffic rules and avoiding obstacles). In an effort to solve this pertinent problem, a deep reinforcement learning (DRL) approach is proposed for making collision-free decisions. Firstly, a deep Q-network (DQN) aided algorithm is proposed for determining the trajectory and velocity of the AV, which receives real-time traffic information from the base stations (BSs). More particularly, the AV acts as an agent that carries out optimal actions, such as lane changes and velocity changes, by interacting with the environment. Secondly, to overcome the large overestimation of action values by the Q-learning model, a double deep Q-network (DDQN) algorithm is proposed by decomposing the max-Q-value operation into action selection and action evaluation. Additionally, three practical driving policies are proposed as benchmarks. Numerical results demonstrate that the proposed trajectory design algorithms are capable of enhancing the driving safety and fuel economy of AVs, and that the proposed DDQN-based algorithm outperforms the DQN-based algorithm. It is also demonstrated that the proposed fuel-economy (FE) based driving policy derived from the DRL algorithm achieves fuel savings in excess of 24% over the benchmarks. | -
dc.language | eng | - |
dc.relation.ispartof | IEEE Transactions on Vehicular Technology | - |
dc.subject | Autonomous driving | - |
dc.subject | deep reinforcement learning | - |
dc.subject | fuel consumption | - |
dc.subject | safe driving | - |
dc.subject | trajectory design | - |
dc.subject | V2I communications | - |
dc.title | Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1109/TVT.2020.2996187 | - |
dc.identifier.scopus | eid_2-s2.0-85090148317 | - |
dc.identifier.volume | 69 | - |
dc.identifier.issue | 8 | - |
dc.identifier.spage | 8329 | - |
dc.identifier.epage | 8342 | - |
dc.identifier.eissn | 1939-9359 | - |