Efficient Algorithms for Vector Similarities
Author(s)
Silwal, Sandeep B.
Advisor
Indyk, Piotr
Abstract
A key cog in machine learning is the humble embedding: vector representations of real-world objects such as text, images, graphs, or molecules, whose geometric similarities capture intuitive notions of semantic similarity. It is thus common to curate massive datasets of embeddings by running inference on a machine learning model of choice. However, the sheer dataset size and high dimensionality are often the central bottleneck in effectively leveraging and learning from this rich data. Motivated by this computational bottleneck in modern machine learning pipelines, we study the following question:
"How can we efficiently compute on large scale high dimensional data?"
In this thesis, we focus on two aspects of this question.
1) Efficient local similarity computation: we give faster algorithms for individual similarity computations, such as computing notions of similarity between collections of vectors, as well as dimensionality reduction techniques that preserve similarities (see the first sketch after this list). Beyond computational efficiency, we also consider other resource constraints such as space and privacy.
2) Efficient global similarity analysis: we study algorithms for analyzing global relationships between vectors encoded in similarity matrices. Our algorithms compute on similarity matrices, such as distance or kernel matrices, without ever explicitly forming them, thus avoiding an otherwise infeasible quadratic-time bottleneck (see the second sketch after this list).
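To make item 1 concrete, here is a minimal sketch (not drawn from the thesis itself) of one classical similarity-preserving dimensionality reduction: a Johnson-Lindenstrauss-style random Gaussian projection. The function name jl_project and all parameter choices are illustrative assumptions.

```python
import numpy as np

def jl_project(X, target_dim, seed=0):
    """Project rows of X to target_dim via a random Gaussian map.

    By the Johnson-Lindenstrauss lemma, pairwise Euclidean distances
    are preserved up to a (1 +/- eps) factor with high probability
    when target_dim is on the order of log(n) / eps^2.
    (Illustrative sketch; not the thesis's specific construction.)
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    # Entries are i.i.d. N(0, 1/target_dim), so squared norms are
    # preserved in expectation after projection.
    G = rng.normal(0.0, 1.0 / np.sqrt(target_dim), size=(d, target_dim))
    return X @ G

# Example: 10,000 embeddings in 768 dimensions, compressed to 128.
X = np.random.default_rng(1).normal(size=(10_000, 768))
Y = jl_project(X, target_dim=128)
# Distances between rows of Y approximate distances between rows of X.
```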
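Similarly, for item 2, the following sketch illustrates the general idea of computing on a similarity matrix without ever forming it: estimating the average entry of the n x n Euclidean distance matrix by sampling random pairs, in time independent of the matrix's quadratic size. The helper name estimate_mean_pairwise_distance is hypothetical, not from the thesis.

```python
import numpy as np

def estimate_mean_pairwise_distance(X, num_samples=10_000, seed=0):
    """Unbiased estimate of the average entry of the n x n Euclidean
    distance matrix of the rows of X, obtained by sampling random
    ordered pairs. The quadratic-size matrix is never instantiated;
    cost is O(num_samples * d) instead of O(n^2 * d).
    (Illustrative sketch of the implicit-matrix idea.)"""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    i = rng.integers(0, n, size=num_samples)
    j = rng.integers(0, n, size=num_samples)
    return np.linalg.norm(X[i] - X[j], axis=1).mean()
```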
Overall, the main message of this thesis is that sublinear algorithm design principles are instrumental in designing scalable algorithms for big data.
Date issued
2024-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology