Show simple item record

dc.contributor.author	Radaideh, Majdi I.
dc.contributor.author	Wolverton, Isaac
dc.contributor.author	Joseph, Joshua Mason
dc.contributor.author	Tusar, James J.
dc.contributor.author	Otgonbaatar, Uuganbayar
dc.contributor.author	Roy, Nicholas
dc.contributor.author	Forget, Benoit Robert Yves
dc.contributor.author	Shirvan, Koroush
dc.date.accessioned	2021-05-11T21:34:31Z
dc.date.available	2021-05-11T21:34:31Z
dc.date.issued	2021-02
dc.date.submitted	2020-09
dc.identifier.issn	0029-5493
dc.identifier.uri	https://hdl.handle.net/1721.1/130571
dc.description.abstract	Optimization of nuclear fuel assemblies, if performed effectively, will lead to fuel efficiency improvement, cost reduction, and safety assurance. However, assembly optimization involves solving high-dimensional and computationally expensive combinatorial problems. As such, fuel designers' expert judgement has commonly prevailed over the use of stochastic optimization (SO) algorithms such as genetic algorithms and simulated annealing. To improve the state of the art, we explore a class of artificial intelligence (AI) algorithms, namely, reinforcement learning (RL), in this work. We propose a physics-informed AI optimization methodology by establishing a connection, through reward shaping, between RL and the tactics fuel designers follow in practice: moving fuel rods in the assembly to meet specific constraints and objectives. The methodology utilizes the RL algorithms deep Q learning and proximal policy optimization, and compares their performance to SO algorithms. The methodology is applied to two boiling water reactor assemblies of low-dimensional (~2 × 10⁶ combinations) and high-dimensional (~10³¹ combinations) natures. The results demonstrate that RL is more effective than SO in solving high-dimensional problems, i.e., the 10 × 10 assembly, through embedding expert knowledge in the form of game rules and effectively exploring the search space. For computational resources and a timeframe relevant to fuel designers, the RL algorithms outperformed SO by finding 4–5 times more feasible patterns and by increasing search speed, as indicated by RL's outstanding computational efficiency. The results of this work clearly demonstrate RL's effectiveness as another decision support tool for nuclear fuel assembly optimization.	en_US
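The reward-shaping idea in the abstract — giving the agent graded feedback for partially satisfying design constraints, rather than a binary feasible/infeasible signal — can be illustrated with a minimal sketch. Everything here is a hypothetical toy, not the paper's implementation: a 4-position "assembly", a single average-enrichment constraint standing in for the real neutronics objectives, and tabular Monte Carlo updates standing in for deep Q learning / PPO.

```python
import random

N_POSITIONS = 4                   # toy assembly of 4 rod positions (hypothetical)
ENRICHMENTS = [1.5, 2.5, 3.5]     # candidate rod enrichments, wt% U-235 (hypothetical)
TARGET_AVG = 2.5                  # toy design constraint on mean enrichment

def shaped_reward(pattern):
    """Negative distance to the target average: a dense, shaped signal,
    instead of rewarding only fully feasible patterns."""
    avg = sum(pattern) / len(pattern)
    return -abs(avg - TARGET_AVG)

def train(episodes=2000, eps=0.2, alpha=0.5, seed=0):
    rng = random.Random(seed)
    # Q[(position, enrichment)] -> learned value of placing that rod there
    Q = {(p, e): 0.0 for p in range(N_POSITIONS) for e in ENRICHMENTS}
    for _ in range(episodes):
        # Build one candidate pattern with an epsilon-greedy policy.
        pattern = []
        for p in range(N_POSITIONS):
            if rng.random() < eps:
                a = rng.choice(ENRICHMENTS)
            else:
                a = max(ENRICHMENTS, key=lambda e: Q[(p, e)])
            pattern.append(a)
        r = shaped_reward(pattern)        # terminal shaped reward
        for p, a in enumerate(pattern):   # simple Monte Carlo value update
            Q[(p, a)] += alpha * (r - Q[(p, a)])
    return Q

def best_pattern(Q):
    """Greedy pattern under the learned values."""
    return [max(ENRICHMENTS, key=lambda e: Q[(p, e)]) for p in range(N_POSITIONS)]
```

The shaped reward is what makes the search tractable: a pattern that nearly meets the constraint scores close to zero, so the agent receives gradient-like guidance through the combinatorial space, mirroring how a designer iteratively nudges rod placements toward feasibility.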
dc.publisher	Elsevier BV	en_US
dc.relation.isversionof	http://dx.doi.org/10.1016/j.nucengdes.2020.110966	en_US
dc.rights	Creative Commons Attribution-NonCommercial-NoDerivs License	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-nd/4.0/	en_US
dc.source	Prof. Roy	en_US
dc.title	Physics-informed reinforcement learning optimization of nuclear assembly design	en_US
dc.type	Article	en_US
dc.identifier.citation	Radaideh, Majdi I. et al. "Physics-informed reinforcement learning optimization of nuclear assembly design." Nuclear Engineering and Design 372 (February 2021): 110966.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Nuclear Science and Engineering	en_US
dc.contributor.department	MIT Intelligence Initiative	en_US
dc.relation.journal	Nuclear Engineering and Design	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dspace.date.submission	2021-05-07T17:18:26Z
mit.journal.volume	372	en_US
mit.license	PUBLISHER_CC
mit.metadata.status	Complete


Files in this item

Thumbnail

This item appears in the following Collection(s)
