Show simple item record

dc.contributor.advisor: Aleksander Ma̧dry (en_US)
dc.contributor.author: Venigalla, Abhinav S. (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2019-12-05T18:04:47Z
dc.date.available: 2019-12-05T18:04:47Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/123124
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. (en_US)
dc.description: Thesis: M. Eng. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 (en_US)
dc.description: Cataloged from student-submitted PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 59-60). (en_US)
dc.description.abstract: Training deep neural networks requires large quantities of labeled training data, on the order of thousands of examples per class. These requirements make model training both time-consuming and expensive, which provides an incentive for adversaries to steal, or copy, other users' models. In this work, we examine a recent defense method called neural network watermarking via memorized examples, where an owner intentionally trains his model to mislabel particular inputs. We try to isolate the mechanism by which memorized examples are learned by a model in order to better evaluate their robustness. We find that memorized examples are indeed strongly embedded in trained models and actually transfer to stolen models under one form of model stealing. When access to local input-logit gradient information is used by an attacker, the stolen model also learns to mislabel the memorized examples. We show that this transfer is robust to architecture mismatch and perturbations of the query set used for stealing. We present different possible mechanisms for memorized example transfer and find that local input geometry is insufficient to explain the phenomenon. Finally, we describe a simple method for a model owner to boost the transfer rate of memorized examples, increasing their effectiveness as a defense against model stealing. (en_US)
dc.description.statementofresponsibility: by Abhinav S. Venigalla. (en_US)
dc.format.extent: 60 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science (en_US)
dc.title: Strongly-transferring memorized examples in deep neural networks (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. in Computer Science and Engineering (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1128270896 (en_US)
dc.description.collection: M. Eng. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2019-12-05T18:04:46Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)

