
dc.contributor.author: Harper, Daniel R
dc.contributor.author: Nandy, Aditya
dc.contributor.author: Arunachalam, Naveen
dc.contributor.author: Duan, Chenru
dc.contributor.author: Janet, Jon Paul
dc.contributor.author: Kulik, Heather J
dc.date.accessioned: 2022-09-19T12:10:40Z
dc.date.available: 2022-09-19T12:10:40Z
dc.date.issued: 2022
dc.identifier.uri: https://hdl.handle.net/1721.1/145470
dc.description.abstract: Strategies for machine-learning (ML)-accelerated discovery that are general across materials composition spaces are essential, but demonstrations of ML have been primarily limited to narrow composition variations. By addressing the scarcity of data in promising regions of chemical space for challenging targets like open-shell transition-metal complexes, general representations and transferable ML models that leverage known relationships in existing data will accelerate discovery. Over a large set (ca. 1000) of isovalent transition-metal complexes, we quantify evident relationships for different properties (i.e., spin-splitting and ligand dissociation) between rows of the periodic table (i.e., 3d/4d metals and 2p/3p ligands). We demonstrate an extension to the graph-based revised autocorrelation (RAC) representation (i.e., eRAC) that incorporates the effective nuclear charge alongside the nuclear charge heuristic that otherwise overestimates the dissimilarity of isovalent complexes. To address the common challenge of discovery in a new space where data are limited, we introduce a transfer learning approach in which we seed models trained on a large amount of data from one row of the periodic table with a small number of data points from the additional row. We demonstrate the synergistic value of the eRACs alongside this transfer learning strategy to consistently improve model performance. Analysis of these models highlights how the approach succeeds by reordering the distances between complexes to be more consistent with the periodic table, a property we expect to be broadly useful for other materials domains.
dc.language.iso: en
dc.publisher: AIP Publishing
dc.relation.isversionof: 10.1063/5.0082964
dc.rights: Creative Commons Attribution 4.0 International license
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: American Institute of Physics (AIP)
dc.title: Representations and strategies for transferable machine learning improve model performance in chemical discovery
dc.type: Article
dc.identifier.citation: Harper, Daniel R, Nandy, Aditya, Arunachalam, Naveen, Duan, Chenru, Janet, Jon Paul et al. 2022. "Representations and strategies for transferable machine learning improve model performance in chemical discovery." The Journal of Chemical Physics, 156 (7).
dc.contributor.department: Massachusetts Institute of Technology. Department of Chemistry
dc.contributor.department: Massachusetts Institute of Technology. Department of Chemical Engineering
dc.relation.journal: The Journal of Chemical Physics
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2022-09-19T11:59:37Z
dspace.orderedauthors: Harper, DR; Nandy, A; Arunachalam, N; Duan, C; Janet, JP; Kulik, HJ
dspace.date.submission: 2022-09-19T11:59:42Z
mit.journal.volume: 156
mit.journal.issue: 7
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

