Deep Reinforcement Learning for Intelligent Internet of Vehicles: An Energy-Efficient Computational Offloading Scheme

Zhaolong Ning, Peiran Dong, Xiaojie Wang, Lei Guo, Joel J.P.C. Rodrigues, Xiangjie Kong, Jun Huang, Ricky Y.K. Kwok

Research output: Contribution to journal › Article › peer-review

145 Citations (Scopus)

Abstract

Emerging vehicular services call for updated communication and computing platforms. Fog computing, whose infrastructure is deployed in close proximity to terminals, extends the facilities of cloud computing. However, the limited resources of vehicular fog nodes make it challenging to satisfy users' quality of experience, motivating intelligent networks with enhanced computing capabilities. This paper constructs a three-layer offloading framework for the intelligent Internet of Vehicles (IoV) to minimize overall energy consumption while satisfying users' delay constraints. Because the formulated problem has high computational complexity, it is decomposed into two parts: 1) flow redirection and 2) offloading decision. A deep reinforcement learning-based scheme is then put forward to solve the optimization problem. Performance evaluations based on real-world traces of taxis in Shanghai, China demonstrate the effectiveness of our method: average energy consumption can be decreased by around 60% compared with the baseline algorithm.
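The offloading-decision step the abstract describes can be illustrated with a deliberately simplified sketch. The paper's scheme uses a deep reinforcement learning agent over a three-layer IoV framework; the toy below replaces it with single-state tabular Q-learning over three hypothetical execution targets (local, fog, cloud), and all energy/delay numbers are invented for illustration. The reward penalizes energy use and heavily penalizes violating the user's delay budget, mirroring the paper's objective of minimizing energy under a delay constraint.

```python
import random

# Hypothetical execution targets with assumed (made-up) per-task costs.
ACTIONS = ["local", "fog", "cloud"]
ENERGY = {"local": 5.0, "fog": 2.0, "cloud": 1.0}   # assumed energy cost
DELAY = {"local": 1.0, "fog": 2.0, "cloud": 4.0}    # assumed latency
DELAY_BUDGET = 3.0                                  # user's delay constraint

def reward(action):
    # Negative energy, plus a large penalty if the delay budget is violated.
    r = -ENERGY[action]
    if DELAY[action] > DELAY_BUDGET:
        r -= 100.0
    return r

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state Q-values (a bandit, really)
    for _ in range(episodes):
        # Epsilon-greedy: explore randomly, otherwise pick the best-known action.
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        q[a] += alpha * (reward(a) - q[a])  # incremental value update
    return q

q = train()
best = max(q, key=q.get)
# Under these assumed costs, "fog" wins: the cloud is cheapest in energy but
# misses the delay budget, while local execution costs the most energy.
```

This captures only the trade-off structure (energy vs. delay), not the paper's actual state space, flow-redirection subproblem, or deep network.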

Original language: English
Article number: 8770298
Pages (from-to): 1060-1072
Number of pages: 13
Journal: IEEE Transactions on Cognitive Communications and Networking
Volume: 5
Issue number: 4
DOIs
Publication status: Published - Dec 2019
Externally published: Yes

Keywords

  • Internet of Vehicles
  • computation offloading
  • deep reinforcement learning
  • energy efficiency
