Show simple item record

dc.contributor.author	Seurin, Paul
dc.contributor.author	Shirvan, Koroush
dc.date.accessioned	2025-07-03T15:13:50Z
dc.date.available	2025-07-03T15:13:50Z
dc.identifier.uri	https://hdl.handle.net/1721.1/159863
dc.description.abstract	Optimizing the fuel cycle cost through the optimization of nuclear reactor core loading patterns (LPs) involves multiple objectives and constraints, leading to a vast number of candidate solutions that cannot be explicitly solved. To advance the state of the art in core reload patterns, we have developed methods based on deep Reinforcement Learning (RL) for both single- and multi-objective optimization. Our previous research laid the groundwork for these approaches and demonstrated their ability to discover high-quality patterns within a reasonable time frame. On the other hand, Stochastic Optimization (SO) approaches are commonly used in the literature, but there is no rigorous explanation that shows which approach is better in which scenario. In this paper, we demonstrate the advantage of our RL-based approach, specifically Proximal Policy Optimization (PPO), against the most commonly used SO-based methods: Genetic Algorithm, Parallel Simulated Annealing with mixing of states, and Tabu Search, as well as an ensemble-based method, i.e., the Prioritized replay Evolutionary and Swarm Algorithm. We found that the LP scenarios derived in this paper are amenable to a global search to identify promising research directions rapidly but then need to transition into a local search to exploit these directions efficiently and prevent getting stuck in local optima. PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global search method and a local search method. Subsequently, we compared all algorithms against PPO in long runs, which exacerbated the differences seen in the shorter cases. Overall, the work demonstrates the statistical superiority of PPO compared to the other considered algorithms.	en_US
dc.language.iso	en
dc.publisher	Informa UK Limited	en_US
dc.relation.isversionof	10.1080/00295639.2025.2488702	en_US
dc.rights	Creative Commons Attribution-NonCommercial-NoDerivatives	en_US
dc.rights.uri	https://creativecommons.org/licenses/by-nc-nd/4.0/	en_US
dc.source	Informa UK Limited	en_US
dc.title	Surpassing Legacy Approaches to PWR Core Reload Optimization with Single-Objective Reinforcement Learning	en_US
dc.type	Article	en_US
dc.identifier.citation	Seurin, P., & Shirvan, K. (2025). Surpassing Legacy Approaches to PWR Core Reload Optimization with Single-Objective Reinforcement Learning. Nuclear Science and Engineering, 1–32.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Nuclear Science and Engineering	en_US
dc.relation.journal	Nuclear Science and Engineering	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2025-07-02T20:38:40Z
dspace.orderedauthors	Seurin, P; Shirvan, K	en_US
dspace.date.submission	2025-07-02T20:38:45Z
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US


