
dc.contributor.advisor  Thompson, Neil
dc.contributor.author  Ou, Anthony C.
dc.date.accessioned  2024-03-21T19:10:03Z
dc.date.available  2024-03-21T19:10:03Z
dc.date.issued  2024-02
dc.date.submitted  2024-03-04T16:38:12.047Z
dc.identifier.uri  https://hdl.handle.net/1721.1/153846
dc.description.abstract  There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets with which to compare them. While some models dominate these benchmarks, no single model typically achieves the best accuracy across all tasks and use cases. Given a new dataset, it can be difficult to determine which LLM is best suited to the task. In this work we address the challenges of selecting the best LLM from a collection for a new task. To do so, benchmark datasets are repurposed to learn a “router” model for LLM selection, in which the “router” solves a collection of binary classification tasks. This work demonstrates the utility and limitations of learning model routers from various benchmark datasets, showing that routing improves performance over using any single model for all tasks.
dc.publisher  Massachusetts Institute of Technology
dc.rights  In Copyright - Educational Use Permitted
dc.rights  Copyright retained by author(s)
dc.rights.uri  https://rightsstatements.org/page/InC-EDU/1.0/
dc.title  Large Language Model Routing with Benchmark Datasets
dc.type  Thesis
dc.description.degree  M.Eng.
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree  Master
thesis.degree.name  Master of Engineering in Electrical Engineering and Computer Science
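
The abstract above frames LLM routing as a set of binary classification problems learned from benchmark data. Below is a minimal sketch of how such a router could be built, assuming scikit-learn with TF-IDF prompt features, logistic-regression classifiers, and a toy per-model correctness table; the feature choice, classifiers, model names, and data are illustrative assumptions, not the thesis's actual setup.

# Minimal sketch of a benchmark-derived LLM router (illustrative assumptions,
# not the thesis's actual method). One binary classifier per candidate LLM
# predicts whether that model answers a prompt correctly; routing picks the
# model with the highest predicted probability of success.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Benchmark data repurposed as router training data (hypothetical example):
# for each prompt, a 0/1 label per candidate model marking whether it was correct.
prompts = [
    "Solve 12 * 7.",
    "Translate 'good morning' to French.",
    "Write a haiku about autumn.",
    "What is the capital of Australia?",
]
correct = {
    "model_a": [1, 0, 1, 1],
    "model_b": [0, 1, 1, 0],
}

# Featurize prompts; a learned sentence embedding could replace TF-IDF here.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(prompts)

# Fit one binary "will this model get it right?" classifier per candidate LLM.
routers = {
    name: LogisticRegression().fit(X, np.array(labels))
    for name, labels in correct.items()
}

def route(prompt: str) -> str:
    """Return the candidate model with the highest predicted success probability."""
    x = vectorizer.transform([prompt])
    scores = {name: clf.predict_proba(x)[0, 1] for name, clf in routers.items()}
    return max(scores, key=scores.get)

print(route("Compute 45 * 3."))

In this sketch each classifier is trained independently and the routing rule simply sends a new prompt to whichever model is predicted most likely to answer it correctly, which mirrors the binary-classification framing described in the abstract.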

