| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Kim, Yoon | |
| dc.contributor.author | Sun, Melinda | |
| dc.date.accessioned | 2023-11-02T20:21:05Z | |
| dc.date.available | 2023-11-02T20:21:05Z | |
| dc.date.issued | 2023-09 | |
| dc.date.submitted | 2023-10-03T18:21:20.340Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/152839 | |
| dc.description.abstract | Transformers are powerful and effective tools in natural language processing, but their scalability is limited by the quadratic complexity of attention. Several transformer variants that address this problem have recently been proposed, including Moving Average Equipped Gated Attention (Mega). In this thesis, we evaluate how effectively Mega uses past context by comparing its perplexity trend as context length varies against that of a standard transformer. We find that Mega does not benefit more from longer context in a Wikipedia or book setting, though it is much better able to extrapolate beyond the training context lengths. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Long Sequence Transformer Variants on Varying Context Length | |
| dc.type | Thesis | |
| dc.description.degree | M.Eng. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |