Geometric Properties of Learned Representations
In machine learning, representation learning refers to optimizing a mapping from data to some representation space (usually generic vectors in Rᵈ for some pre-determined 𝑑 much lower than the data dimension). While such training often uses no supervised labels, the learned representations have proved very useful for solving downstream tasks. These successes have sparked enormous interest in representation learning methods among both academic researchers and practitioners. Despite this popularity, it is not always clear what representation learning objectives are optimizing for, or how to design representation learning methods for new domains and tasks (such as reinforcement learning). In this thesis, we consider the structures captured by two geometric properties of learned representations: invariances and distances. From these two perspectives, we begin by thoroughly analyzing the widely adopted contrastive representation learning, uncovering that it learns certain structures and relations among data. We then describe two new representation learning methods for reinforcement learning and control, which respectively capture the optimal planning cost (a distance) and the information invariant to environment noise.
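As background for the contrastive representation learning analyzed above, the sketch below shows a minimal NumPy version of the InfoNCE-style objective commonly used in this family of methods: each anchor embedding is pulled toward its paired positive and pushed away from the other samples in the batch. This is an illustrative sketch of the general technique, not the thesis's exact formulation; the function name, temperature value, and toy data are assumptions for the example.

```python
import numpy as np

def info_nce_loss(z_x, z_y, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    Row i of z_x is treated as an anchor whose positive is row i of z_y;
    all other rows of z_y serve as in-batch negatives.
    """
    # L2-normalize so dot products are cosine similarities
    z_x = z_x / np.linalg.norm(z_x, axis=1, keepdims=True)
    z_y = z_y / np.linalg.norm(z_y, axis=1, keepdims=True)
    logits = z_x @ z_y.T / temperature            # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; minimize their negative log-probability
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 4))
aligned = anchors + 0.01 * rng.normal(size=(8, 4))  # nearly identical pairs
unrelated = rng.normal(size=(8, 4))                 # random, unpaired vectors

loss_aligned = info_nce_loss(anchors, aligned)
loss_random = info_nce_loss(anchors, unrelated)
print(loss_aligned < loss_random)  # aligned pairs yield a lower loss
```

Minimizing this loss encourages embeddings of paired (similar) data to be close while spreading apart embeddings of unrelated data — the kind of learned structure among data points that the thesis's invariance and distance perspectives examine.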
Department: Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science