Unified and Generalizable Reinforcement Learning for Facility Location Problems on Graphs

Author(s)
Guo, Wenxuan; Wang, Runzhong; Xu, Yanyan; Jin, Yaohui
Download: 3696410.3714812.pdf (9.670 MB)
Publisher Policy

Terms of use
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
Facility location problems on graphs are ubiquitous in the real world and hold significant importance, yet their resolution is often impeded by NP-hardness. MIP solvers can find optimal solutions but fail to scale to large instances, while algorithmic efficiency takes higher priority in emergency scenarios. Recently, machine learning methods have been proposed to tackle such classical problems with fast inference, but they are limited to a myopic constructive pattern and only consider simple cases in Euclidean space. This paper introduces a unified and generalizable approach to tackling facility location problems on weighted graphs with deep reinforcement learning, demonstrating a keen awareness of complex graph structures. Striking a balance between solution quality and running time, our method stands out with superior efficiency and stable performance. Our model, trained on small graphs, is highly scalable and consistently generates high-quality solutions, achieving a speedup of more than 2000 times over Gurobi on instances with 1000 nodes. Experiments on Shanghai road networks further demonstrate its practical value in solving real-world problems. The source code is available at https://github.com/AryaGuo/PPO-swap.
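
To make the MIP baseline referenced in the abstract concrete, the following is a minimal sketch that solves one common facility location variant on a weighted graph, the p-median problem, with an exact solver. The choice of the p-median objective, the use of gurobipy and networkx, and the helper name solve_p_median are illustrative assumptions and are not taken from the authors' formulation or code.

# Hypothetical sketch: exact p-median on a weighted graph (not the authors' formulation).
import gurobipy as gp
from gurobipy import GRB
import networkx as nx


def solve_p_median(G: nx.Graph, p: int):
    """Open p facilities on G, minimizing total client-to-facility shortest-path distance."""
    # Assignment costs are shortest-path distances over edge weights.
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    nodes = list(G.nodes())

    m = gp.Model("p-median")
    y = m.addVars(nodes, vtype=GRB.BINARY, name="open")           # y[j] = 1 if a facility opens at node j
    x = m.addVars(nodes, nodes, vtype=GRB.BINARY, name="assign")  # x[i, j] = 1 if client i is served by node j

    m.addConstr(y.sum() == p, name="budget")                                      # exactly p facilities
    m.addConstrs((x.sum(i, "*") == 1 for i in nodes), name="serve")               # every client served once
    m.addConstrs((x[i, j] <= y[j] for i in nodes for j in nodes), name="link")    # only open facilities may serve
    m.setObjective(
        gp.quicksum(dist[i][j] * x[i, j] for i in nodes for j in nodes), GRB.MINIMIZE
    )
    m.optimize()
    return [j for j in nodes if y[j].X > 0.5], m.ObjVal


if __name__ == "__main__":
    # Tiny weighted example; the paper's benchmarks go up to 1000-node instances.
    G = nx.Graph()
    G.add_weighted_edges_from([(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (3, 0, 3.0), (1, 3, 1.5)])
    facilities, cost = solve_p_median(G, p=2)
    print(facilities, cost)

Exact models of this kind grow quickly with graph size (the assignment variables are quadratic in the number of nodes), which is the scalability gap that the learned policy described in the abstract is designed to close.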
Description
WWW ’25, Sydney, NSW, Australia
Date issued
2025-04-22
URI
https://hdl.handle.net/1721.1/164249
Department
Massachusetts Institute of Technology. Department of Chemical Engineering
Publisher
ACM, Proceedings of the ACM Web Conference 2025
Citation
Wenxuan Guo, Runzhong Wang, Yanyan Xu, and Yaohui Jin. 2025. Unified and Generalizable Reinforcement Learning for Facility Location Problems on Graphs. In Proceedings of the ACM on Web Conference 2025 (WWW '25). Association for Computing Machinery, New York, NY, USA, 1182–1195.
Version: Final published version
ISBN
979-8-4007-1274-6

Collections
  • MIT Open Access Articles
