
dc.contributor.author: Shao, Yulin
dc.contributor.author: Rezaee, Arman
dc.contributor.author: Liew, Soung Chang
dc.contributor.author: Chan, Vincent
dc.date.accessioned: 2021-06-29T20:10:19Z
dc.date.available: 2021-06-29T20:10:19Z
dc.date.issued: 2020-02
dc.date.submitted: 2019-12
dc.identifier.isbn: 9781728109626
dc.identifier.issn: 2576-6813
dc.identifier.uri: https://hdl.handle.net/1721.1/131056
dc.description.abstract: We face a growing ecosystem of applications that produce and consume data at unprecedented rates and with strict latency requirements. Meanwhile, the bursty and unpredictable nature of their traffic can induce highly dynamic environments within networks that endanger their own viability. Unencumbered operation of these applications requires rapid (re)actions by Network Management and Control (NMC) systems, which themselves depend on timely collection of network state information. Given the size of today's networks, collecting detailed network states is prohibitively costly in terms of network transport and computational resources. Thus, judicious sampling of network states is necessary for a cost-effective NMC system. This paper proposes a deep reinforcement learning (DRL) solution that learns the principle of significant sampling and effectively balances the need for accurate state information against the cost of sampling. Modeling the problem as a Markov Decision Process, we treat the NMC system as an agent that samples the state of various network elements to make optimal routing decisions. The agent periodically receives a reward commensurate with the quality of its routing decisions. Its decision on when to sample progressively improves as the agent learns the relationship between the sampling frequency and the reward function. We show that our solution achieves performance comparable to the recently published analytical optimum without requiring explicit knowledge of the traffic model. Furthermore, we show that our solution can adapt to new environments, a feature that has been largely absent in analytical treatments of the problem. [en_US]
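The abstract describes the approach only at a high level (an MDP in which the NMC agent decides when to sample network state, trading sampling cost against routing quality) and gives no implementation details. The toy sketch below is purely illustrative and is not the paper's method: the environment, state discretization, reward weights, and the use of tabular Q-learning instead of a deep RL agent are all assumptions made for this example.

```python
# Illustrative sketch only: a toy "when to sample" MDP with tabular Q-learning.
# All names and parameters are hypothetical; the paper itself uses a deep RL
# agent and a richer network/routing model.
import random

class SamplingEnv:
    """Toy environment: a link delay drifts randomly; the agent either SAMPLEs
    (pays a cost, refreshes its estimate) or WAITs (routes on stale information)."""
    SAMPLE, WAIT = 0, 1

    def __init__(self, drift=0.3, sample_cost=1.0, error_weight=0.5, horizon=200):
        self.drift, self.sample_cost = drift, sample_cost
        self.error_weight, self.horizon = error_weight, horizon

    def reset(self):
        self.t = 0
        self.true_delay = 10.0      # actual link delay
        self.known_delay = 10.0     # delay as last observed by the agent
        self.age = 0                # steps since the last sample
        return self.age

    def step(self, action):
        # The link delay evolves as a random walk between decisions.
        self.true_delay = max(1.0, self.true_delay + random.gauss(0.0, self.drift))
        if action == self.SAMPLE:
            self.known_delay = self.true_delay
            self.age = 0
            reward = -self.sample_cost   # pay for the measurement
        else:
            self.age += 1
            # Routing on stale state costs roughly the estimation error.
            reward = -self.error_weight * abs(self.true_delay - self.known_delay)
        self.t += 1
        return self.age, reward, self.t >= self.horizon

def train(episodes=300, alpha=0.1, gamma=0.95, eps=0.1, max_age=20):
    env = SamplingEnv()
    q = {(a, act): 0.0 for a in range(max_age + 1) for act in (0, 1)}
    for _ in range(episodes):
        age, done = env.reset(), False
        while not done:
            s = min(age, max_age)
            act = random.randrange(2) if random.random() < eps else \
                  max((0, 1), key=lambda x: q[(s, x)])
            age, r, done = env.step(act)
            s2 = min(age, max_age)
            q[(s, act)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, act)])
    return q

if __name__ == "__main__":
    q = train()
    # Inspect the learned policy: sample once the state information is stale enough.
    for age in range(0, 21, 5):
        act = "SAMPLE" if q[(age, 0)] >= q[(age, 1)] else "WAIT"
        print(f"age={age:2d} -> {act}")
```

In this toy setting the agent tends to learn a threshold-style policy (wait while its information is fresh, sample once it grows stale), which mirrors the sampling-cost-versus-accuracy trade-off the abstract describes.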
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/globecom38437.2019.9013908 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: Prof. Chan via Phoebe Ayers [en_US]
dc.title: Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Shao, Yulin et al. "Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution." 2019 IEEE Global Communications Conference, December 2019, Waikoloa, HI, Institute of Electrical and Electronics Engineers, February 2020. © 2019 IEEE [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: 2019 IEEE Global Communications Conference (GLOBECOM) [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-06-28T16:52:28Z
dspace.orderedauthors: Shao, Y; Rezaee, A; Liew, SC; Chan, VWS [en_US]
dspace.date.submission: 2021-06-28T16:52:29Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

