Show simple item record

dc.contributor.authorRadaideh, Majdi I
dc.contributor.authorShirvan, Koroush
dc.date.accessioned2021-10-27T19:53:05Z
dc.date.available2021-10-27T19:53:05Z
dc.date.issued2021
dc.identifier.urihttps://hdl.handle.net/1721.1/133486
dc.description.abstractFor practical engineering optimization problems, the design space is typically narrow, given all the real-world constraints. Reinforcement Learning (RL) has commonly been guided by stochastic algorithms to tune hyperparameters and leverage exploration. Conversely, in this work we propose a rule-based RL methodology to guide evolutionary algorithms (EA) in constrained optimization. First, RL proximal policy optimization agents are trained to satisfy some of the problem rules/constraints; the trained agents are then used to inject experiences that guide various evolutionary/stochastic algorithms such as genetic algorithms, simulated annealing, particle swarm optimization, differential evolution, and natural evolution strategies. Accordingly, we develop RL-guided EAs, which are benchmarked against their standalone counterparts. In continuous optimization, RL-guided EA demonstrates significant improvement over standalone EA on two engineering benchmarks. The main problem analyzed is combinatorial optimization of nuclear fuel assemblies, which involves high-dimensional and computationally expensive physics. The results demonstrate the ability of RL to efficiently learn the rules that nuclear fuel engineers follow to realize candidate solutions; without these rules, the design space is too large for RL/EA to find many candidates. By imposing the rule-based RL methodology, we find that RL-guided EA outperforms the standalone algorithms by a wide margin, with more than a tenfold improvement in exploration capability and computational efficiency. These insights imply that, when facing a constrained problem with numerous local optima, RL can be useful for focusing the search on areas where expert knowledge has demonstrated merit, while evolutionary/stochastic algorithms use their exploratory features to increase the number of feasible solutions.en_US
dc.language.isoen
dc.publisherElsevier BVen_US
dc.relation.isversionof10.1016/J.KNOSYS.2021.106836en_US
dc.rightsCreative Commons Attribution-NonCommercial-NoDerivs Licenseen_US
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/4.0/en_US
dc.sourceOther repositoryen_US
dc.titleRule-based reinforcement learning methodology to inform evolutionary algorithms for constrained optimization of engineering applicationsen_US
dc.typeArticleen_US
dc.contributor.departmentMassachusetts Institute of Technology. Department of Nuclear Science and Engineering
dc.relation.journalKnowledge-Based Systemsen_US
dc.eprint.versionAuthor's final manuscripten_US
dc.type.urihttp://purl.org/eprint/type/JournalArticleen_US
eprint.statushttp://purl.org/eprint/status/PeerRevieweden_US
dc.date.updated2021-08-11T17:48:00Z
dspace.orderedauthorsRadaideh, MI; Shirvan, Ken_US
dspace.date.submission2021-08-11T17:48:01Z
mit.journal.volume217en_US
mit.licensePUBLISHER_CC
mit.metadata.statusAuthority Work and Publication Information Needed
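
The abstract describes training PPO agents to learn problem rules and then injecting their experience to guide evolutionary algorithms. The following is a minimal, hypothetical Python sketch of that guidance pattern only: a mocked "trained policy" samples rule-feasible candidates to seed a simple genetic algorithm, which then explores from that focused region. All names (rl_policy, satisfies_rules, fitness) and the toy rule/objective are illustrative assumptions, not the paper's actual implementation or physics.

# Minimal sketch of RL-guided EA: RL experiences seed the EA population.
# The PPO agent is mocked; in the paper, a trained agent plays this role.
import random

N_GENES = 8        # length of a candidate solution (e.g., an assembly pattern)
POP_SIZE = 20
GENERATIONS = 30

def fitness(x):
    # Toy objective: maximize the sum of the genes.
    return sum(x)

def satisfies_rules(x):
    # Stand-in for the engineering rules/constraints the RL agent learns;
    # here, a toy rule that adjacent genes must differ.
    return all(x[i] != x[i + 1] for i in range(len(x) - 1))

def rl_policy():
    # Mock of a trained PPO agent: rejection-samples until a
    # rule-feasible candidate is produced.
    while True:
        x = [random.randint(0, 3) for _ in range(N_GENES)]
        if satisfies_rules(x):
            return x

def guided_initial_population():
    # RL experiences are "injected" by seeding the EA with feasible candidates.
    return [rl_policy() for _ in range(POP_SIZE)]

def mutate(x):
    # Random single-gene mutation; may break feasibility, which the
    # penalized scoring below then discourages.
    x = x[:]
    x[random.randrange(N_GENES)] = random.randint(0, 3)
    return x

def genetic_algorithm(population):
    score = lambda x: fitness(x) if satisfies_rules(x) else -1
    for _ in range(GENERATIONS):
        parents = sorted(population, key=score, reverse=True)[: POP_SIZE // 2]
        children = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=score)

best = genetic_algorithm(guided_initial_population())
print("best candidate:", best, "fitness:", fitness(best))

The design choice mirrored here is that RL narrows the search to the rule-feasible region while the EA supplies exploration; a standalone EA would instead start from unconstrained random candidates, most of which violate the rules.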

