| dc.contributor.author | Foucart, Corbin | |
| dc.contributor.author | Charous, Aaron | |
| dc.contributor.author | Lermusiaux, Pierre F.J. | |
| dc.date.accessioned | 2024-03-15T19:18:48Z | |
| dc.date.available | 2024-03-15T19:18:48Z | |
| dc.date.issued | 2023-10 | |
| dc.identifier.issn | 0021-9991 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/153763 | |
| dc.description.abstract | Finite element discretizations of problems in computational physics often rely on adaptive mesh refinement (AMR) to preferentially resolve regions containing important features during simulation. However, these spatial refinement strategies are often heuristic and rely on domain-specific knowledge or trial-and-error. We treat the process of adaptive mesh refinement as a local, sequential decision-making problem under incomplete information, formulating AMR as a partially observable Markov decision process. Using a deep reinforcement learning approach, we train policy networks for AMR strategy directly from numerical simulation. The training process requires neither an exact solution nor a high-fidelity ground truth for the partial differential equation at hand, nor a pre-computed training dataset. The local nature of our reinforcement learning formulation allows the policy networks to be trained inexpensively on much smaller problems than those on which they are deployed. The methodology is not specific to any particular partial differential equation, problem dimension, or numerical discretization, and can flexibly incorporate diverse problem physics. To that end, we apply the approach to a diverse set of partial differential equations, using a variety of high-order discontinuous Galerkin and hybridizable discontinuous Galerkin finite element discretizations. We show that the resultant deep reinforcement learning policies are competitive with common AMR heuristics, generalize well across problem classes, and strike a favorable balance between accuracy and cost, such that they often achieve higher accuracy per problem degree of freedom. | en_US |
| dc.language.iso | en | |
| dc.publisher | Elsevier BV | en_US |
| dc.relation.isversionof | 10.1016/j.jcp.2023.112381 | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-ShareAlike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | arxiv | en_US |
| dc.subject | Computer Science Applications | en_US |
| dc.subject | Physics and Astronomy (miscellaneous) | en_US |
| dc.subject | Applied Mathematics | en_US |
| dc.subject | Computational Mathematics | en_US |
| dc.subject | Modeling and Simulation | en_US |
| dc.subject | Numerical Analysis | en_US |
| dc.title | Deep reinforcement learning for adaptive mesh refinement | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Foucart, Corbin, Charous, Aaron and Lermusiaux, Pierre F.J. 2023. "Deep reinforcement learning for adaptive mesh refinement." Journal of Computational Physics, 491, 112381. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Mechanical Engineering | |
| dc.contributor.department | Massachusetts Institute of Technology. Center for Computational Science and Engineering | |
| dc.relation.journal | Journal of Computational Physics | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
| eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
| dc.date.updated | 2024-03-15T19:13:27Z | |
| dspace.orderedauthors | Foucart, C; Charous, A; Lermusiaux, PFJ | en_US |
| dspace.date.submission | 2024-03-15T19:13:31Z | |
| mit.journal.volume | 491 | en_US |
| mit.license | OPEN_ACCESS_POLICY | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |