
dc.contributor.advisor    Rus, Daniela L.
dc.contributor.author    Phillips, Jacob
dc.date.accessioned    2022-06-15T13:12:39Z
dc.date.available    2022-06-15T13:12:39Z
dc.date.issued    2022-02
dc.date.submitted    2022-02-22T18:32:27.546Z
dc.identifier.uri    https://hdl.handle.net/1721.1/143324
dc.description.abstract    Traditional training regimens for time-series models have been shown to encode the biases of their training corpora into the models themselves. We aim to train unbiased time-series models from existing biased datasets. However, most debiasing techniques rely on explicit labels that encapsulate the bias, such as pairs of words along a sensitive axis such as race or gender for language models. We propose an unsupervised latent debiasing training regimen, based on [2], that simultaneously learns the latent distribution of the dataset and a separate language task; datapoints are selected for training batches with sampling weights inversely proportional to how common they are, as determined by their placement in the latent space. We adapt [2] to time-series datasets and show algorithmic improvements in bias identification and bias reduction for models trained on toy and real datasets.
dc.publisher    Massachusetts Institute of Technology
dc.rights    In Copyright - Educational Use Permitted
dc.rights    Copyright MIT
dc.rights.uri    http://rightsstatements.org/page/InC-EDU/1.0/
dc.title    Unsupervised Latent Debiasing of Time-Series Models
dc.type    Thesis
dc.description.degree    M.Eng.
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree    Master
thesis.degree.name    Master of Engineering in Electrical Engineering and Computer Science
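
The abstract describes drawing training batches with weights inversely related to how common each datapoint is in the learned latent space. Below is a minimal sketch of that reweighting step, assuming per-example latent means are already available from a trained encoder; the function name inverse_density_weights, the per-dimension histogram density estimate, the smoothing constant alpha, and the synthetic latent_means array are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def inverse_density_weights(latent_means, num_bins=10, alpha=0.01):
    """Sampling weights that upweight datapoints in sparsely populated
    regions of the latent space.

    latent_means: (n_examples, latent_dim) array of encoder outputs.
    Returns a probability vector over the n_examples datapoints.
    """
    n, d = latent_means.shape
    weights = np.ones(n)
    for j in range(d):
        # Histogram-based density estimate along this latent dimension.
        hist, edges = np.histogram(latent_means[:, j], bins=num_bins, density=True)
        bin_idx = np.clip(np.digitize(latent_means[:, j], edges) - 1, 0, num_bins - 1)
        density = hist[bin_idx]
        # Rare (low-density) examples receive larger weights; alpha smooths
        # the weighting toward uniform sampling.
        weights *= 1.0 / (density + alpha)
    return weights / weights.sum()

# Hypothetical usage: oversample rare latent regions when forming a batch.
rng = np.random.default_rng(0)
latent_means = rng.normal(size=(1000, 8))      # stand-in for encoder outputs
p = inverse_density_weights(latent_means)
batch_idx = rng.choice(len(p), size=64, p=p)   # indices for the next training batch
```

Per-dimension histograms keep the density estimate cheap as the latent dimension grows; a joint kernel density estimate over the full latent space would be a heavier alternative with the same intent.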

