| dc.contributor.advisor | Rubinfeld, Ronitt | |
| dc.contributor.advisor | Indyk, Piotr | |
| dc.contributor.author | Quaye, Isabelle A. | |
| dc.date.accessioned | 2024-03-21T19:10:50Z | |
| dc.date.available | 2024-03-21T19:10:50Z | |
| dc.date.issued | 2024-02 | |
| dc.date.submitted | 2024-03-04T16:38:08.247Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/153854 | |
| dc.description.abstract | The use of machine learning models in algorithm design is a rapidly growing field, often termed learning-augmented algorithms. A notable advancement in this field is the use of reinforcement learning for algorithm discovery. Developing algorithms in this manner offers certain advantages, novelty and adaptability being chief among them. In this thesis, we put reinforcement learning to the task of discovering an algorithm for the list update problem, a classic problem with applications in caching and databases. In the process of uncovering a new list update algorithm, we also prove a competitive ratio for the transposition heuristic, a well-known algorithm for the list update problem. Finally, we discuss key ideas and insights from the reinforcement learning agent that hint toward optimal behavior for the list update problem. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Learning to Update: Using Reinforcement Learning to Discover Policies for List Update | |
| dc.type | Thesis | |
| dc.description.degree | M.Eng. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Master | |
| thesis.degree.name | Master of Engineering in Electrical Engineering and Computer Science | |