Show simple item record

dc.contributor.advisor: Wojciech Matusik (en_US)
dc.contributor.author: Wang, Andy (Andy L.), M. Eng. Massachusetts Institute of Technology (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2019-12-05T18:04:35Z
dc.date.available: 2019-12-05T18:04:35Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/123120
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. (en_US)
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 (en_US)
dc.description: Cataloged from student-submitted PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 49-51). (en_US)
dc.description.abstract: In this thesis, we present a novel neural network method for synthesizing imagery of a person's face with a frontal pose and neutral expression, given a single unconstrained face photograph. We take a data-driven approach, training neural networks on a large-scale in-the-wild dataset of face images. The most common way to tackle this problem is supervised learning, which requires many ground-truth input-output pairs; moreover, in our problem context, collecting clean frontal, neutral-expression faces without occlusions raises further challenges. To avoid this, we take a neural knowledge transfer approach: we first train modular networks for each well-defined sub-task, then exploit them to instill semantic knowledge into the face decoder, i.e., the neutral face synthesizer. For the sub-tasks, we use face landmark detection and face recognition modules, for which curated datasets exist. In particular, the face recognition sub-task learns features that are strongly invariant to variations in lighting, pose, and facial expression. We leverage this invariance to train our face decoder to produce consistent frontal, neutral-expression faces from the recognition feature, while constraining each generated face: 1) to have a forward-facing pose, using the network trained for landmark detection, and 2) to preserve the same identity as the input face, using the network trained for face recognition. Furthermore, we boost the realism of the output faces with an adversarial loss, in which a discriminator competes with the generator network and guides it toward higher-quality faces. At test time, only the face recognition network and the face decoder are used to synthesize neutral faces. Our approach does not require supervised data and further minimizes sensitive data pre-processing pipelines. Compared to competing fully supervised methods, our method produces comparable or often more favorable face appearances. (en_US)
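The abstract describes a training objective with three constraints on each generated face: identity preservation via the recognition network, a frontal pose via the landmark network, and realism via an adversarial term. The sketch below illustrates how such a combined loss could be composed. It is an illustrative assumption, not code from the thesis: the feature extractors are hypothetical stand-ins for the pretrained modules, and the loss weights are invented for the example.

```python
import numpy as np

# Hypothetical stand-ins for the pretrained modules named in the abstract;
# in the thesis these are trained neural networks.
def recognition_features(face):
    # Identity embedding, assumed invariant to pose/lighting/expression.
    return face.mean(axis=0)

def landmark_positions(face):
    # Predicted 2D landmark coordinates (assumed shape: [n_landmarks, 2]).
    return face[:5, :2]

def combined_loss(input_face, generated_face, frontal_landmarks,
                  discriminator_score, w_id=1.0, w_lm=1.0, w_adv=0.1):
    """Sketch of the three-part objective described in the abstract.
    The weights w_* are illustrative, not taken from the thesis."""
    # 1) Identity loss: the generated face should keep the input's identity.
    id_loss = np.mean((recognition_features(input_face)
                       - recognition_features(generated_face)) ** 2)
    # 2) Landmark loss: the generated face should match a frontal layout.
    lm_loss = np.mean((landmark_positions(generated_face)
                       - frontal_landmarks) ** 2)
    # 3) Adversarial loss: the generator is rewarded when the discriminator
    #    scores its output as real (non-saturating GAN form).
    adv_loss = -np.log(discriminator_score + 1e-8)
    return w_id * id_loss + w_lm * lm_loss + w_adv * adv_loss
```

At test time, per the abstract, only the recognition network and the face decoder would run; the landmark and discriminator terms are used only during training.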
dc.description.statementofresponsibility: by Andy Wang (en_US)
dc.format.extent: 51 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science (en_US)
dc.title: High resolution neural frontal face synthesis from face encodings using adversarial loss (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1128187166 (en_US)
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2019-12-05T18:04:33Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)
