Simple item record

dc.contributor.author: Bau, D Anthony
dc.contributor.author: Andreas, Jacob
dc.date.accessioned: 2022-06-02T18:44:03Z
dc.date.available: 2022-06-02T18:44:03Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/1721.1/142869
dc.language.iso: en
dc.publisher: Association for Computational Linguistics (ACL)
dc.relation.isversionof: 10.18653/V1/2021.EMNLP-MAIN.448
dc.rights: Creative Commons Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0
dc.source: Association for Computational Linguistics
dc.title: How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction
dc.type: Article
dc.identifier.citation: Bau, D Anthony and Andreas, Jacob. 2021. "How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction." Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2022-06-02T18:40:09Z
dspace.orderedauthors: Bau, DA; Andreas, J
dspace.date.submission: 2022-06-02T18:40:11Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

