Show simple item record

dc.contributor.advisor: Kim, Yoon
dc.contributor.author: Zeitoun, Abbas
dc.date.accessioned: 2023-07-31T19:49:37Z
dc.date.available: 2023-07-31T19:49:37Z
dc.date.issued: 2023-06
dc.date.submitted: 2023-07-13T14:31:10.958Z
dc.identifier.uri: https://hdl.handle.net/1721.1/151573
dc.description.abstract: Recent work has shown that large language models can be made to parse the contents of non-text embeddings and use those contents to perform various tasks. However, work on audio inputs to large language models has thus far either trained a joint audio-text model from scratch on large amounts of data or trained the model only on surface-level audio-text classification tasks. In this work, we show that a pretrained T5 encoder-decoder language model fine-tuned on as little as 10 hours of speech data can transcribe the contents of input audio embeddings, and even outperforms a specialized baseline speech-to-text model at transcribing more difficult speech utterances. The resulting model serves as a first step towards language models that can manipulate audio inputs just as well as text inputs and can leverage the additional information in audio inputs to perform tasks that are not possible with text inputs alone.
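The abstract's core idea, a pretrained text language model consuming audio embeddings in place of token embeddings, can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the shapes, the single linear projection, and the 512-dimensional model size typical of T5-base); the thesis itself may use a different speech encoder and projection scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 200 audio frames of 80-dim log-mel features,
# projected into a 512-dim embedding space (T5-base's model size).
n_frames, n_mels, d_model = 200, 80, 512

# Stand-in for a speech encoder's output: one frame per row.
audio_features = rng.standard_normal((n_frames, n_mels))

# A learned linear projection mapping audio frames into the language
# model's input embedding space -- the kind of lightweight adapter
# that fine-tuning on a few hours of speech could train.
W = rng.standard_normal((n_mels, d_model)) * 0.02
b = np.zeros(d_model)

audio_embeddings = audio_features @ W + b  # shape: (n_frames, d_model)

# These vectors would then be fed to the encoder in place of token
# embeddings (e.g. via `inputs_embeds` in Hugging Face Transformers),
# with the decoder trained to emit the transcript.
print(audio_embeddings.shape)
```

The sketch only shows the interface: to the language model, the projected audio sequence looks exactly like a sequence of token embeddings, which is what lets a text-pretrained model be repurposed for transcription.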
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Recognizing Speech with Large Language Models
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Electrical Engineering and Computer Science

