Show simple item record

dc.contributor.advisor: Daniel, Luca
dc.contributor.author: Ko, Ching-Yun
dc.date.accessioned: 2022-08-29T16:29:44Z
dc.date.available: 2022-08-29T16:29:44Z
dc.date.issued: 2022-05
dc.date.submitted: 2022-06-21T19:25:45.703Z
dc.identifier.uri: https://hdl.handle.net/1721.1/145052
dc.description.abstract: As a seminal tool in self-supervised representation learning, contrastive learning has gained unprecedented attention in recent years. In essence, contrastive learning aims to leverage pairs of positive and negative samples for representation learning, which relates to exploiting neighborhood information in a feature space. However, as self-supervised learning methods, current contrastive learning methods implicitly encode priors on downstream classification tasks. In this thesis, by investigating the connection between contrastive learning and neighborhood component analysis (NCA), we provide a novel stochastic nearest neighbor viewpoint of contrastive learning and subsequently propose a series of contrastive losses that outperform the existing ones. Under our proposed framework, we show a new methodology for designing integrated contrastive losses that simultaneously achieve good accuracy and robustness on downstream tasks.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis
dc.type: Thesis
dc.description.degree: S.M.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.orcid: 0000-0002-8966-8570
mit.thesis.degree: Master
thesis.degree.name: Master of Science in Electrical Engineering and Computer Science
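The abstract's stochastic nearest neighbor viewpoint can be illustrated with a minimal sketch: in NCA, each anchor "selects" a neighbor with probability given by a softmax over pairwise similarities, and a standard contrastive (InfoNCE-style) loss is the negative log-probability of selecting the positive sample. This is an illustrative reconstruction, not the thesis's actual code; the function name, temperature value, and cosine similarity choice are assumptions.

```python
import numpy as np

def nca_style_contrastive_loss(z, pos_idx, temperature=0.5):
    """Sketch of an NCA-style view of a contrastive loss.

    For each anchor i, the probability of selecting sample j as its
    neighbor is a softmax over similarities; the loss is the negative
    log-probability assigned to the positive sample pos_idx[i].
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # an anchor never selects itself
    # Row-wise softmax: stochastic neighbor distribution p_i(j)
    p = np.exp(sim - sim.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    n = len(z)
    return -np.mean(np.log(p[np.arange(n), pos_idx]))
```

With one positive per anchor (e.g., an augmented view of the same image, so `pos_idx` pairs rows 0↔1, 2↔3, ...), this reduces to the familiar InfoNCE objective; the NCA perspective suggests generalizing the numerator to sum over several stochastic neighbors.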

