Geometric Properties of Learned Representations
Author(s)
Wang, Tongzhou
Thesis PDF (19.23Mb)
Advisor
Isola, Phillip
Torralba, Antonio
Abstract
In machine learning, representation learning refers to optimizing a mapping from data to some representation space (usually generic vectors in Rᵈ for some pre-determined 𝑑 much lower than the data dimension). While such training often uses no supervised labels, the learned representations have proved very useful for solving downstream tasks. These successes have sparked enormous interest in representation learning methods among both academic researchers and practitioners. Despite this popularity, it is not always clear what representation learning objectives are optimizing for, or how to design representation learning methods for new domains and tasks (such as reinforcement learning). In this thesis, we consider the structures captured by two geometric properties of learned representations: invariances and distances. From these two perspectives, we begin with a thorough analysis of the widely adopted contrastive representation learning, uncovering that it learns certain structures and relations among data. We then describe two new representation learning methods for reinforcement learning and control, which respectively capture the optimal planning cost (a distance) and the information invariant to environment noise.
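To make the contrastive setup concrete, the following is a minimal NumPy sketch of a standard InfoNCE-style contrastive objective over a batch of paired representations. It is an illustrative example of the general technique, not the thesis's exact formulation; the function name and the choice of temperature are assumptions for this sketch.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss: z_a[i] and z_b[i] form a positive
    pair; every other z_b[j] serves as a negative for z_a[i].
    (Illustrative sketch; names and temperature are assumptions.)"""
    # Project representations onto the unit hypersphere.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    # Pairwise cosine similarities, scaled by the temperature.
    logits = z_a @ z_b.T / temperature
    # Numerically stable log-softmax over each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching pair (the diagonal) as the target.
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls positive pairs together (an invariance to whatever transformation relates them) while pushing apart non-matching pairs, which is one way the learned geometry encodes structure among data points.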
Date issued
2022-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology