dc.contributor.advisor   Madhu Sudan.   en_US
dc.contributor.author   Smith, Adam (Adam Davidson), 1977-   en_US
dc.contributor.other   Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.   en_US
dc.date.accessioned   2005-09-27T18:06:19Z
dc.date.available   2005-09-27T18:06:19Z
dc.date.copyright   2004   en_US
dc.date.issued   2004   en_US
dc.identifier.uri   http://hdl.handle.net/1721.1/28744
dc.description   Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.   en_US
dc.description   Includes bibliographical references (p. 109-115).   en_US
dc.description.abstract   Sharing and maintaining long, random keys is one of the central problems in cryptography. This thesis is about ensuring the security of a cryptographic key when partial information about it has been, or must be, leaked to an adversary. We consider two basic approaches: 1. Extracting a new, shorter secret key from one that has been partially compromised. Specifically, we study the use of noisy data, such as biometrics and personal information, as cryptographic keys. Such data can vary drastically from one measurement to the next. We would like to store enough information to handle these variations, without relying on any secure storage; in particular, without storing the key itself in the clear. We solve the problem by casting it in terms of key extraction. We give a precise definition of what "security" should mean in this setting, and design practical, general solutions with rigorous analyses. Prior to this work, no solutions were known with satisfactory provable security guarantees. 2. Ensuring that whatever is revealed is not actually useful. This is most relevant when the key itself is sensitive, for example when it is based on a person's iris scan or Social Security Number. This second approach requires the user to have some control over exactly what information is revealed, but this is often the case: for example, the user may need to reveal enough information to allow another user to correct errors in a corrupted key. How can the user ensure that whatever information the adversary learns is not useful to her? We answer by developing a theoretical framework for separating leaked information from useful information. Our definition strengthens the notion of entropic security, considered before in a few different contexts.   en_US
dc.description.abstract   (cont.) We apply the framework to get new results, creating (a) encryption schemes with very short keys, and (b) hash functions that leak no information about their input, yet, paradoxically, allow testing whether a candidate vector is close to the input. One of the technical contributions of this research is to provide new, cryptographic uses of mathematical tools from complexity theory known as randomness extractors.   en_US
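The first approach in the abstract is commonly realized as "sketch then extract": publish non-secret helper data that lets a later, noisy reading be corrected back to the enrolled reading, then hash the corrected reading down to a uniform key with a randomness extractor. The Python fragment below is a minimal sketch of that pattern under toy assumptions; the repetition code, the random GF(2) matrix used as a universal hash, and all parameter names are illustrative choices, not the thesis's actual constructions.

    import secrets

    # Toy parameters: illustrative only, far too small for real security.
    N_BITS = 15              # length of the noisy reading w (e.g., a biometric)
    REP = 3                  # repetition factor of the error-correcting code
    K_BITS = N_BITS // REP   # message length of the repetition code
    OUT_BITS = 3             # length of the extracted key

    def random_bits(n):
        return [secrets.randbits(1) for _ in range(n)]

    def rep_encode(msg):
        # Repetition code: corrects up to REP // 2 bit flips per block.
        return [bit for bit in msg for _ in range(REP)]

    def rep_decode(word):
        # Majority vote within each block of REP bits.
        return [int(sum(word[i:i + REP]) > REP // 2)
                for i in range(0, len(word), REP)]

    def xor(a, b):
        return [x ^ y for x, y in zip(a, b)]

    def sketch(w):
        # Code-offset secure sketch: publish s = w XOR c for a random codeword c.
        return xor(w, rep_encode(random_bits(K_BITS)))

    def recover(w_noisy, s):
        # w_noisy XOR s is a codeword corrupted only by the measurement errors;
        # decode it, then shift back to obtain the enrolled reading w.
        codeword = rep_encode(rep_decode(xor(w_noisy, s)))
        return xor(codeword, s)

    def extract(w, seed):
        # Universal hash: multiply w by a random 0/1 matrix over GF(2).
        return [sum(r & b for r, b in zip(row, w)) % 2 for row in seed]

    # Enrollment: one reading, public helper data, public extractor seed.
    w = random_bits(N_BITS)
    s = sketch(w)
    seed = [random_bits(N_BITS) for _ in range(OUT_BITS)]

    # Authentication: a fresh reading differs in one bit yet yields the same key.
    w_noisy = list(w)
    w_noisy[4] ^= 1
    assert recover(w_noisy, s) == w
    assert extract(recover(w_noisy, s), seed) == extract(w, seed)

Here the helper data s reveals at most N_BITS - K_BITS bits of information about w, and the final hashing step shortens the key to compensate; the leftover hash lemma makes this trade-off precise. For the second approach, entropic security asks, roughly, that for a high-entropy input W, seeing a published value Y(W) helps no adversary predict any function of W: for every adversary A there is an A' such that |Pr[A(Y(W)) = f(W)] - Pr[A'() = f(W)]| <= eps for all functions f.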
dc.description.statementofresponsibility   by Adam Davidson Smith.   en_US
dc.format.extent   121 p.   en_US
dc.format.extent   9107724 bytes
dc.format.extent   9122773 bytes
dc.format.mimetype   application/pdf
dc.format.mimetype   application/pdf
dc.language.iso   en_US
dc.publisher   Massachusetts Institute of Technology   en_US
dc.rights   M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.   en_US
dc.rights.uri   http://dspace.mit.edu/handle/1721.1/7582
dc.subject   Electrical Engineering and Computer Science.   en_US
dc.title   Maintaining secrecy when information leakage is unavoidable   en_US
dc.type   Thesis   en_US
dc.description.degree   Ph.D.   en_US
dc.contributor.department   Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc   59669706   en_US

