
dc.contributor.advisor: Kim, Yoon
dc.contributor.author: Sun, Melinda
dc.date.accessioned: 2023-11-02T20:21:05Z
dc.date.available: 2023-11-02T20:21:05Z
dc.date.issued: 2023-09
dc.date.submitted: 2023-10-03T18:21:20.340Z
dc.identifier.uri: https://hdl.handle.net/1721.1/152839
dc.description.abstract: Transformers are powerful and effective tools in natural language processing, but their scalability is limited by the quadratic complexity of attention. Several transformer variants that address this problem have recently been proposed, including Moving Average Equipped Gated Attention (Mega). In this thesis, we evaluate how effectively Mega uses past context by comparing its perplexity trend, as context length varies, against that of a standard transformer. We find that Mega does not benefit more from longer context in a Wikipedia or book setting, though it extrapolates much better to context lengths beyond those seen during training.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Long Sequence Transformer Variants on Varying Context Length
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science

