DSpace@MIT

Surpassing Legacy Approaches to PWR Core Reload Optimization with Single-Objective Reinforcement Learning

Author(s)
Seurin, Paul; Shirvan, Koroush
Download: Published version (32.31 MB)

Terms of use
Creative Commons Attribution-NonCommercial-NoDerivatives https://creativecommons.org/licenses/by-nc-nd/4.0/
Abstract
Optimizing the fuel cycle cost through the optimization of nuclear reactor core loading patterns (LPs) involves multiple objectives and constraints, leading to a vast number of candidate solutions that cannot be evaluated explicitly. To advance the state of the art in core reload optimization, we have developed methods based on deep Reinforcement Learning (RL) for both single- and multi-objective optimization. Our previous research laid the groundwork for these approaches and demonstrated their ability to discover high-quality patterns within a reasonable time frame. Stochastic Optimization (SO) approaches, by contrast, are commonly used in the literature, but there is no rigorous analysis showing which approach is better in which scenario. In this paper, we demonstrate the advantage of our RL-based approach, specifically Proximal Policy Optimization (PPO), over the most commonly used SO-based methods: the Genetic Algorithm, Parallel Simulated Annealing with mixing of states, and Tabu Search, as well as an ensemble-based method, the Prioritized replay Evolutionary and Swarm Algorithm. We found that the LP scenarios derived in this paper benefit from an initial global search that rapidly identifies promising directions, followed by a transition to a local search that exploits those directions efficiently and avoids getting stuck in local optima. PPO adapts its search capability through a policy with learnable weights, allowing it to act as both a global and a local search method. We then compared all algorithms against PPO in long runs, which exacerbated the differences seen in the shorter cases. Overall, the work demonstrates the statistical superiority of PPO over the other algorithms considered.
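To make the global-to-local search behavior concrete, the sketch below is a minimal, illustrative PPO-clip loop (written in PyTorch) on a toy placement problem: one of K candidate fuel-assembly types is chosen independently for each of N core positions, and an entropy bonus keeps exploration broad early on while the policy's learnable logits gradually sharpen into a local search. The toy reward, policy parameterization, and hyperparameters are assumptions made purely for illustration and are not taken from the paper or its benchmark problems.

# Illustrative PPO-clip sketch on a toy "assign one of K fuel types to each of
# N core positions" problem. Not the authors' code or reward model.
import torch
from torch.distributions import Categorical

N_POS, K_TYPES = 20, 4                          # toy problem size (assumed)
target = torch.randint(0, K_TYPES, (N_POS,))    # hidden "good" pattern (toy)

# One independent categorical head per position, parameterized by learnable logits.
logits = torch.zeros(N_POS, K_TYPES, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)
clip_eps = 0.2

def sample_batch(logits, batch=64):
    dist = Categorical(logits=logits)           # batch of N_POS categoricals
    acts = dist.sample((batch,))                # (batch, N_POS) candidate LPs
    logp = dist.log_prob(acts).sum(dim=1)       # joint log-prob of each LP
    # Toy reward: fraction of positions matching the hidden target pattern.
    reward = (acts == target).float().mean(dim=1)
    return acts, logp, reward

for it in range(200):
    with torch.no_grad():
        acts, logp_old, reward = sample_batch(logits)
        adv = reward - reward.mean()            # simple mean baseline
    for _ in range(4):                          # a few PPO epochs per batch
        dist = Categorical(logits=logits)
        logp = dist.log_prob(acts).sum(dim=1)
        ratio = torch.exp(logp - logp_old)
        surr = torch.min(ratio * adv,
                         torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv)
        entropy = dist.entropy().sum()          # high early -> broad (global) search
        loss = -(surr.mean() + 1e-3 * entropy)  # entropy shrinks as policy sharpens
        opt.zero_grad(); loss.backward(); opt.step()

best = (torch.argmax(logits, dim=-1) == target).float().mean().item()
print("fraction of positions matching target:", best)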
URI
https://hdl.handle.net/1721.1/159863
Department
Massachusetts Institute of Technology. Department of Nuclear Science and Engineering
Journal
Nuclear Science and Engineering
Publisher
Informa UK Limited
Citation
Seurin, P., & Shirvan, K. (2025). Surpassing Legacy Approaches to PWR Core Reload Optimization with Single-Objective Reinforcement Learning. Nuclear Science and Engineering, 1–32.
Version: Final published version

Collections
  • MIT Open Access Articles
