Show simple item record

dc.contributor.author      Foucart, Corbin
dc.contributor.author      Charous, Aaron
dc.contributor.author      Lermusiaux, Pierre F.J.
dc.date.accessioned        2024-03-15T19:18:48Z
dc.date.available          2024-03-15T19:18:48Z
dc.date.issued             2023-10
dc.identifier.issn         0021-9991
dc.identifier.uri          https://hdl.handle.net/1721.1/153763
dc.description.abstract    Finite element discretizations of problems in computational physics often rely on adaptive mesh refinement (AMR) to preferentially resolve regions containing important features during simulation. However, these spatial refinement strategies are often heuristic and rely on domain-specific knowledge or trial-and-error. We treat the process of adaptive mesh refinement as a local, sequential decision-making problem under incomplete information, formulating AMR as a partially observable Markov decision process. Using a deep reinforcement learning approach, we train policy networks for AMR strategy directly from numerical simulation. The training process does not require an exact solution or a high-fidelity ground truth to the partial differential equation at hand, nor does it require a pre-computed training dataset. The local nature of our reinforcement learning formulation allows the policy network to be trained inexpensively on much smaller problems than those on which it is deployed. The methodology is not specific to any particular partial differential equation, problem dimension, or numerical discretization, and can flexibly incorporate diverse problem physics. To that end, we apply the approach to a diverse set of partial differential equations, using a variety of high-order discontinuous Galerkin and hybridizable discontinuous Galerkin finite element discretizations. We show that the resultant deep reinforcement learning policies are competitive with common AMR heuristics, generalize well across problem classes, and strike a favorable balance between accuracy and cost such that they often lead to higher accuracy per problem degree of freedom.  en_US
dc.language.iso            en
dc.publisher               Elsevier BV  en_US
dc.relation.isversionof    10.1016/j.jcp.2023.112381  en_US
dc.rights                  Creative Commons Attribution-Noncommercial-ShareAlike  en_US
dc.rights.uri              http://creativecommons.org/licenses/by-nc-sa/4.0/  en_US
dc.source                  arxiv  en_US
dc.subject                 Computer Science Applications  en_US
dc.subject                 Physics and Astronomy (miscellaneous)  en_US
dc.subject                 Applied Mathematics  en_US
dc.subject                 Computational Mathematics  en_US
dc.subject                 Modeling and Simulation  en_US
dc.subject                 Numerical Analysis  en_US
dc.title                   Deep reinforcement learning for adaptive mesh refinement  en_US
dc.type                    Article  en_US
dc.identifier.citation     Foucart, Corbin, Charous, Aaron and Lermusiaux, Pierre F.J. 2023. "Deep reinforcement learning for adaptive mesh refinement." Journal of Computational Physics, 491.
dc.contributor.department  Massachusetts Institute of Technology. Department of Mechanical Engineering
dc.contributor.department  Massachusetts Institute of Technology. Center for Computational Science and Engineering
dc.relation.journal        Journal of Computational Physics  en_US
dc.eprint.version          Author's final manuscript  en_US
dc.type.uri                http://purl.org/eprint/type/JournalArticle  en_US
eprint.status              http://purl.org/eprint/status/PeerReviewed  en_US
dc.date.updated            2024-03-15T19:13:27Z
dspace.orderedauthors      Foucart, C; Charous, A; Lermusiaux, PFJ  en_US
dspace.date.submission     2024-03-15T19:13:31Z
mit.journal.volume         491  en_US
mit.license                OPEN_ACCESS_POLICY
mit.metadata.status        Authority Work and Publication Information Needed  en_US
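
The abstract above describes an element-local decision loop: each mesh element observes only local information, and a trained policy network chooses whether to coarsen, keep, or refine it. The following is a minimal, self-contained Python sketch of such a loop, not the authors' implementation; the 1D setting, the feature vector, the three-action space, and the random stub weights standing in for a trained policy are all illustrative assumptions.

    # Illustrative sketch only: the paper formulates AMR as a partially
    # observable Markov decision process with per-element actions; every
    # name and choice below is an assumption, not the authors' API.
    import numpy as np

    rng = np.random.default_rng(0)
    ACTIONS = ("coarsen", "keep", "refine")   # assumed per-element action space

    W = rng.normal(size=(3, 4))  # stub weights in place of a trained policy network

    def policy(obs):
        # Logits over the three per-element actions.
        return W @ obs

    def local_observation(u, h, e):
        # Element-local features only, mirroring the partially observable
        # formulation: solution value, jumps to neighbors, element size, bias.
        left = u[e - 1] if e > 0 else u[e]
        right = u[e + 1] if e < len(u) - 1 else u[e]
        return np.array([u[e], abs(u[e] - left) + abs(u[e] - right), h[e], 1.0])

    def amr_sweep(u, h):
        # One policy-driven sweep: every element decides from local
        # information alone, the "local, sequential decision-making"
        # described in the abstract.
        return [ACTIONS[int(np.argmax(policy(local_observation(u, h, e))))]
                for e in range(len(u))]

    # Toy example: a step-like solution on a uniform 1D mesh of 16 elements.
    u = np.tanh(20.0 * (np.linspace(0.0, 1.0, 16) - 0.5))
    h = np.full(16, 1.0 / 16)
    print(amr_sweep(u, h))

In the paper, the policy is trained with deep reinforcement learning on small problems and then deployed on larger ones; here the random weights merely make the loop executable end to end.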

