Show simple item record

dc.contributor.author: Shao, Yulin
dc.contributor.author: Rezaee, Arman
dc.contributor.author: Liew, Soung Chang
dc.contributor.author: Chan, Vincent W. S.
dc.date.accessioned: 2021-06-29T20:11:33Z
dc.date.available: 2021-06-29T20:11:33Z
dc.date.issued: 2020-06
dc.identifier.issn: 0733-8716
dc.identifier.issn: 1558-0008
dc.identifier.uri: https://hdl.handle.net/1721.1/131057
dc.description.abstract: Significant sampling is an adaptive monitoring technique proposed for highly dynamic networks with centralized network management and control systems. The essential spirit of significant sampling is to collect and disseminate network-state information when it is of significant value to the optimal operation of the network, in particular when it helps identify the shortest routes. Discovering the optimal sampling policy, which specifies the optimal sampling frequency, is referred to as the significant sampling problem. Modeling the problem as a Markov decision process, this paper puts forth a deep reinforcement learning (DRL) approach to tackle the significant sampling problem. This approach is more flexible and general than prior approaches, as it can accommodate a diverse set of network environments. Experimental results show that: 1) by following the objectives set in the prior work, our DRL approach achieves performance comparable to the analytically derived policy $\phi'$; unlike the prior approach, ours is model-free and unaware of the underlying traffic model; 2) by appropriately modifying the objective functions, we obtain a new policy that addresses the never-sample problem of policy $\phi'$, consequently reducing the overall cost; 3) our DRL approach works well under different stochastic variations of the network environment, and can provide good solutions in complex network environments where analytically tractable solutions are not feasible. [en_US]
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/jsac.2020.3000364 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: Prof. Chan via Phoebe Ayers [en_US]
dc.title: Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Shao, Yulin et al. "Significant Sampling for Shortest Path Routing: A Deep Reinforcement Learning Solution." IEEE Journal on Selected Areas in Communications 38, 10 (October 2020): 2234-2248. © 2020 IEEE [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: IEEE Journal on Selected Areas in Communications [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2021-06-28T16:48:31Z
dspace.orderedauthors: Shao, Y; Rezaee, A; Liew, SC; Chan, VWS [en_US]
dspace.date.submission: 2021-06-28T16:48:33Z
mit.journal.volume: 38 [en_US]
mit.journal.issue: 10 [en_US]
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

