
Computational disclosure control : a primer on data privacy protection

Research and Teaching Output of the MIT Community

Show simple item record

dc.contributor.advisor Hal Abelson. en_US
dc.contributor.author Sweeney, Latanya. en_US
dc.contributor.other Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. en_US
dc.date.accessioned 2005-08-23T21:31:24Z
dc.date.available 2005-08-23T21:31:24Z
dc.date.copyright 2001 en_US
dc.date.issued 2001 en_US
dc.description Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. en_US
dc.description Includes bibliographical references (leaves 213-216) and index. en_US
dc.description.abstract Today's globally networked society places great demand on the dissemination and sharing of person-specific data for many new and exciting uses. When these data are linked together, they provide an electronic shadow of a person or organization that is as identifying and personal as a fingerprint, even when the information contains no explicit identifiers such as name and phone number. Other distinctive data, such as birth date and ZIP code, often combine uniquely and can be linked to publicly available information to re-identify individuals. Producing anonymous data that remain specific enough to be useful is often very difficult, and current practice tends either to assume incorrectly that confidentiality is maintained when it is not, or to produce data that are practically useless. The goal of the work presented in this book is to explore computational techniques for releasing useful information in such a way that the identity of any individual or entity contained in the data cannot be recognized while the data remain practically useful. I begin by demonstrating ways to learn information about entities from publicly available information. I then provide a formal framework for reasoning about disclosure control and the ability to infer the identities of entities contained within the data. I formally define and present null-map, k-map and wrong-map as models of protection. Each model provides protection by ensuring that released information maps to no, k or incorrect entities, respectively. The book ends by examining four computational systems that attempt to maintain privacy while releasing electronic information.
These systems are: (1) my Scrub System, which locates personally identifying information in letters between doctors and notes written by clinicians; (2) my Datafly II System, which generalizes and suppresses values in field-structured data sets; (3) Statistics Netherlands' μ-Argus System, which is becoming a European standard for producing public-use data; and (4) my k-Similar algorithm, which finds optimal solutions such that data are minimally distorted while still providing adequate protection. By introducing anonymity and quality metrics, I show that Datafly II can overprotect data, Scrub and μ-Argus can fail to provide adequate protection, but k-Similar finds optimal results. en_US
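The abstract's core ideas — quasi-identifiers such as ZIP code combining to re-identify people, the k-map protection model, and generalization with suppression as used in Datafly II — can be illustrated with a small sketch. This is not the thesis's actual algorithm: the function names, the ZIP-truncation generalization hierarchy, and the use of the released table itself as the population (the thesis defines k-map against an external population) are all simplifying assumptions for illustration.

```python
from collections import Counter

def generalize_zip(zipcode, level):
    """Replace the last `level` digits of a ZIP code with '*'."""
    return zipcode if level == 0 else zipcode[:-level] + "*" * level

def satisfies_k(rows, quasi_ids, k):
    """True when every combination of quasi-identifier values occurs at
    least k times -- the spirit of the k-map requirement, with the
    released table standing in for the external population."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(c >= k for c in combos.values())

def anonymize(rows, k, max_level=5):
    """Generalize the 'zip' quasi-identifier stepwise until the k
    condition holds; if no level suffices, suppress rows left in
    undersized groups. A simplified sketch of generalization and
    suppression, not the actual Datafly II algorithm."""
    for level in range(max_level + 1):
        out = [dict(r, zip=generalize_zip(r["zip"], level)) for r in rows]
        if satisfies_k(out, ["zip"], k):
            return out
    counts = Counter(r["zip"] for r in out)
    return [r for r in out if counts[r["zip"]] >= k]

rows = [{"zip": z, "birth_year": 1965}
        for z in ["02139", "02139", "02144", "02145"]]
print(anonymize(rows, k=2))
# One level of generalization yields groups 0213*/0214*, each of size 2.
```

The trade-off the abstract describes is visible here: generalizing further than necessary (overprotection, as with Datafly II) destroys utility, while stopping too early (as Scrub and μ-Argus can) leaves combinations that map to fewer than k entities.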
dc.description.statementofresponsibility by Latanya Sweeney. en_US
dc.format.extent 217 leaves en_US
dc.format.extent 19145835 bytes
dc.format.extent 19145596 bytes
dc.format.mimetype application/pdf
dc.format.mimetype application/pdf
dc.language.iso eng en_US
dc.publisher Massachusetts Institute of Technology en_US
dc.rights M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. en_US
dc.subject Electrical Engineering and Computer Science. en_US
dc.title Computational disclosure control : a primer on data privacy protection en_US
dc.type Thesis en_US
thesis.degree.name Ph.D. en_US
dc.contributor.department Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. en_US
dc.identifier.oclc 49279409 en_US

Files in this item

Name Size Format Description
49279409-MIT.pdf 20.86Mb PDF Full printable version

