A-DDPG: Attention Mechanism-based Deep Reinforcement Learning for NFV

Nan He, S. Yang, Fan Li, S. Trajanovski, F.A. Kuipers, Xiaoming Fu

Research output: Conference contribution in book/conference proceedings (scientific, peer-reviewed)

2 Citations (Scopus)
590 Downloads (Pure)


The efficacy of Network Function Virtualization (NFV) depends critically on (1) where the virtual network functions (VNFs) are placed and (2) how the traffic is routed. Unfortunately, these aspects are not easily optimized, especially under time-varying network states with different quality of service (QoS) requirements. Given the importance of NFV, many approaches have been proposed to solve the VNF placement and traffic routing problem. However, those prior approaches mainly assume that the state of the network is static and known, disregarding real-time network variations. To bridge that gap, in this paper, we formulate the VNF placement and traffic routing problem as a Markov Decision Process model to capture the dynamic network state transitions. In order to jointly minimize the delay and cost for NFV providers and maximize their revenue, we devise a customized Deep Reinforcement Learning (DRL) algorithm, called A-DDPG, for VNF placement and traffic routing in a real-time network. A-DDPG uses the attention mechanism to ascertain smooth network behavior within the general framework of network utility maximization (NUM). The simulation results show that A-DDPG outperforms the state-of-the-art in terms of network utility, delay, and cost.
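The abstract does not include code, but the core idea it describes can be sketched: an attention mechanism summarizes per-node network-state features into a fixed-size vector, which then feeds a DDPG-style deterministic actor that outputs bounded placement/routing actions. The snippet below is a minimal illustrative sketch only; all names, dimensions, and the scaled dot-product attention form are assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax for the attention weights
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_summary(features, query, keys):
    # features: (n, d) per-node network-state features (hypothetical layout).
    # Scaled dot-product scores turn into weights that pool the features
    # into one fixed-size state summary, regardless of network size n.
    scores = keys @ query / np.sqrt(len(query))
    weights = softmax(scores)          # weights sum to 1
    return weights @ features          # (d,) weighted state summary

def actor(state_summary, W, b):
    # DDPG-style deterministic policy head; tanh keeps actions in [-1, 1],
    # which would later be mapped to placement/routing decisions.
    return np.tanh(W @ state_summary + b)

# Toy rollout with random parameters (illustration only, no training).
rng = np.random.default_rng(0)
n, d, action_dim = 5, 4, 2
features = rng.normal(size=(n, d))
keys = rng.normal(size=(n, d))
query = rng.normal(size=d)

s = attention_summary(features, query, keys)
a = actor(s, rng.normal(size=(action_dim, d)), np.zeros(action_dim))
print(s.shape, a.shape)  # fixed-size summary and bounded action vector
```

In an actual A-DDPG training loop, the attention and actor parameters would be learned jointly with a critic via the usual DDPG updates (replay buffer, target networks); the sketch only shows how attention decouples the policy input from the varying network size.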
Original language: English
Title of host publication: IWQoS 2021 - IEEE/ACM International Symposium on Quality of Service
Number of pages: 10
ISBN (Print): 978-1-6654-3054-8
Publication status: Published - 2021

Bibliographical note

Accepted Author Manuscript


Keywords

  • Network function virtualization
  • Deep reinforcement learning
  • Placement
  • Routing


