DSpace@MIT

Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution

Author(s)
Shao, Yulin; Rezaee, Arman; Liew, Soung Chang; Chan, Vincent
Open Access Policy
Terms of use
Creative Commons Attribution-NonCommercial-ShareAlike 4.0: http://creativecommons.org/licenses/by-nc-sa/4.0/
Abstract
We face a growing ecosystem of applications that produce and consume data at unprecedented rates and with strict latency requirements. Meanwhile, the bursty and unpredictable nature of their traffic can induce highly dynamic environments within networks which endanger their own viability. Unencumbered operation of these applications requires rapid (re)actions by Network Management and Control (NMC) systems, which themselves depend on timely collection of network state information. Given the size of today's networks, collection of detailed network states is prohibitively costly for the network's transport and computational resources. Thus, judicious sampling of network states is necessary for a cost-effective NMC system. This paper proposes a deep reinforcement learning (DRL) solution that learns the principle of significant sampling and effectively balances the need for accurate state information against the cost of sampling. Modeling the problem as a Markov Decision Process, we treat the NMC system as an agent that samples the state of various network elements to make optimal routing decisions. The agent periodically receives a reward commensurate with the quality of its routing decisions, and its decision on when to sample progressively improves as it learns the relationship between the sampling frequency and the reward function. We show that our solution performs comparably to the recently published analytical optimum without explicit knowledge of the traffic model. Furthermore, we show that our solution can adapt to new environments, a feature that has been largely absent in the analytical treatments of the problem.
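The abstract's core trade-off (accurate state information vs. the cost of sampling it) can be sketched in miniature. The toy below is not the authors' model: it substitutes a simple tabular/bandit-style Q-learner for their deep RL agent, and a hypothetical drift-and-staleness environment for a real network, purely to illustrate how a reward that charges for each sample while paying for fresh routing state lets an agent learn a sensible sampling interval.

```python
import random

# Hypothetical stand-in for the significant-sampling MDP (illustrative only):
# the agent picks a sampling interval; sampling more often keeps routing state
# fresh but incurs a per-sample cost, mirroring the paper's reward trade-off.
INTERVALS = [1, 2, 4, 8]   # candidate sampling intervals (the action set)
SAMPLE_COST = 0.5          # cost charged each time the state is sampled
DRIFT = 0.2                # per-step chance the network state drifts (goes stale)

def episode_reward(interval, steps=64, rng=random):
    """Average reward: +1 per step routed on fresh state, minus sampling costs."""
    reward, stale = 0.0, False
    for t in range(steps):
        if t % interval == 0:        # take a sample: pay the cost, refresh state
            reward -= SAMPLE_COST
            stale = False
        elif rng.random() < DRIFT:   # no sample: state may silently go stale
            stale = True
        reward += 0.0 if stale else 1.0
    return reward / steps

def train(episodes=2000, eps=0.1, alpha=0.1, seed=0):
    """Epsilon-greedy Q-learning over the fixed interval choices."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in INTERVALS}
    for _ in range(episodes):
        a = rng.choice(INTERVALS) if rng.random() < eps else max(q, key=q.get)
        q[a] += alpha * (episode_reward(a, rng=rng) - q[a])
    return q

if __name__ == "__main__":
    q = train()
    print("learned Q-values per interval:", q)
```

Sampling every step (interval 1) pays the full cost each time, while sampling rarely (interval 8) lets the state go stale; the learner settles on an intermediate interval. The paper's DRL agent plays the same role over a far richer network state space and reward signal.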
Date issued
2020-02
URI
https://hdl.handle.net/1721.1/131056
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2019 IEEE Global Communications Conference (GLOBECOM)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Shao, Yulin et al. "Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution." 2019 IEEE Global Communications Conference, December 2019, Waikoloa, HI, Institute of Electrical and Electronics Engineers, February 2020. © 2019 IEEE
Version: Author's final manuscript
ISBN
9781728109626
ISSN
2576-6813

Collections
  • MIT Open Access Articles
