DSpace@MIT


Physics-informed reinforcement learning optimization of nuclear assembly design

Author(s)
Radaideh, Majdi I.; Wolverton, Isaac; Joseph, Joshua Mason; Tusar, James J.; Otgonbaatar, Uuganbayar; Roy, Nicholas; Forget, Benoit Robert Yves; Shirvan, Koroush
Download
NED-S-20-00912.pdf (2.601 MB)
Publisher with Creative Commons License: Creative Commons Attribution
Additional downloads
NED-S-20-00912.pdf (2.115 MB)
Publisher with Creative Commons License: Creative Commons Attribution

Terms of use
Creative Commons Attribution-NonCommercial-NoDerivs License http://creativecommons.org/licenses/by-nc-nd/4.0/
Abstract
Optimization of nuclear fuel assemblies, if performed effectively, leads to improved fuel efficiency, reduced costs, and assured safety. However, assembly optimization involves solving high-dimensional and computationally expensive combinatorial problems. As such, fuel designers' expert judgement has commonly prevailed over the use of stochastic optimization (SO) algorithms such as genetic algorithms and simulated annealing. To improve the state of the art, this work explores a class of artificial intelligence (AI) algorithms, namely reinforcement learning (RL). We propose a physics-informed AI optimization methodology that uses reward shaping to connect RL with the tactics fuel designers follow in practice: moving fuel rods within the assembly to meet specific constraints and objectives. The methodology employs two RL algorithms, deep Q-learning and proximal policy optimization, and compares their performance to SO algorithms. It is applied to two boiling water reactor assemblies: a low-dimensional problem (~2 × 10⁶ combinations) and a high-dimensional one (~10³¹ combinations). The results demonstrate that RL is more effective than SO in solving high-dimensional problems, i.e., the 10 × 10 assembly, by embedding expert knowledge in the form of game rules and effectively exploring the search space. For computational resources and timeframes relevant to fuel designers, the RL algorithms outperformed SO by finding 4–5 times more feasible patterns and by searching faster, as indicated by their outstanding computational efficiency. These results clearly demonstrate RL's effectiveness as another decision-support tool for nuclear fuel assembly optimization.
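The reward-shaping idea described in the abstract — encoding designer constraints as "game rules" that penalize infeasible fuel-rod arrangements — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the quantities (`k_inf`, `peaking`, `enrichment`), targets, and penalty weights are hypothetical stand-ins for the assembly objectives and constraints the methodology uses.

```python
# Illustrative sketch of a shaped reward for a combinatorial assembly-design
# problem. All parameter names and limits below are hypothetical examples,
# not values from the paper.

def shaped_reward(k_inf, peaking, enrichment,
                  k_inf_target=1.10, peaking_limit=1.50, enrich_limit=4.5):
    """Reward = closeness of the candidate pattern to the target objective,
    minus penalties for violating designer-imposed constraints."""
    reward = -abs(k_inf - k_inf_target)        # objective: hit target reactivity
    if peaking > peaking_limit:                # constraint: pin power peaking
        reward -= 10.0 * (peaking - peaking_limit)
    if enrichment > enrich_limit:              # constraint: average enrichment
        reward -= 10.0 * (enrichment - enrich_limit)
    return reward

# A feasible candidate scores higher than one that violates a constraint,
# steering the RL agent toward patterns a fuel designer would accept.
good = shaped_reward(k_inf=1.09, peaking=1.40, enrichment=4.2)
bad = shaped_reward(k_inf=1.09, peaking=1.80, enrichment=4.2)
assert good > bad
```

In an RL loop, an agent (e.g., trained with deep Q-learning or proximal policy optimization, as in the paper) would propose rod placements, receive this shaped reward after each evaluation, and thereby learn the constraint structure rather than rediscovering it by blind search.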
Date issued
2021-02
URI
https://hdl.handle.net/1721.1/130571
Department
Massachusetts Institute of Technology. Department of Nuclear Science and Engineering; MIT Intelligence Initiative
Journal
Nuclear Engineering and Design
Publisher
Elsevier BV
Citation
Radaideh, Majdi I. et al. "Physics-informed reinforcement learning optimization of nuclear assembly design." Nuclear Engineering and Design 372 (February 2021): 110966.
Version: Author's final manuscript
ISSN
0029-5493

Collections
  • MIT Open Access Articles
