Show simple item record

dc.contributor.author: Tegmark, Max Erik
dc.contributor.author: Wu, Tailin
dc.date.accessioned: 2020-05-28T15:16:20Z
dc.date.available: 2020-05-28T15:16:20Z
dc.date.issued: 2019-12-19
dc.date.submitted: 2019-10
dc.identifier.issn: 1099-4300
dc.identifier.uri: https://hdl.handle.net/1721.1/125546
dc.description.abstract: The goal of lossy data compression is to reduce the storage cost of a data set X while retaining as much information as possible about something (Y) that you care about. For example, what aspects of an image X contain the most information about whether it depicts a cat? Mathematically, this corresponds to finding a mapping X→Z≡f(X) that maximizes the mutual information I(Z,Y) while the entropy H(Z) is kept below some fixed threshold. We present a new method for mapping out the Pareto frontier for classification tasks, reflecting the tradeoff between retained entropy and class information. We first show how a random variable X (an image, say) drawn from a class Y∈{1,…,n} can be distilled into a vector W=f(X)∈R^(n−1) losslessly, so that I(W,Y)=I(X,Y); for example, for a binary classification task of cats and dogs, each image X is mapped into a single real number W retaining all information that helps distinguish cats from dogs. For the n=2 case of binary classification, we then show how W can be further compressed into a discrete variable Z=g_β(W)∈{1,…,m_β} by binning W into m_β bins, in such a way that varying the parameter β sweeps out the full Pareto frontier, solving a generalization of the discrete information bottleneck (DIB) problem. We argue that the most interesting points on this frontier are "corners" maximizing I(Z,Y) for a fixed number of bins m=2,3,…, which can conveniently be found without multiobjective optimization. We apply this method to the CIFAR-10, MNIST and Fashion-MNIST datasets, illustrating how it can be interpreted as an information-theoretically optimal image clustering algorithm. We find that these Pareto frontiers are not concave, and that recently reported DIB phase transitions correspond to transitions between these corners, changing the number of clusters. Keywords: information; bottleneck; compression; classification
dc.description.sponsorship: TWCF (grant no. 0322)
dc.publisher: Multidisciplinary Digital Publishing Institute
dc.relation.isversionof: 10.3390/e22010007
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Multidisciplinary Digital Publishing Institute
dc.title: Pareto-optimal data compression for binary classification tasks
dc.identifier.citation: Tegmark, Max, and Tailin Wu. "Pareto-optimal data compression for binary classification tasks." Entropy 22, no. 1 (December 2019): 7. doi:10.3390/e22010007. © 2019 Author(s)
dc.contributor.department: Massachusetts Institute of Technology. Department of Physics
dc.contributor.department: MIT Kavli Institute for Astrophysics and Space Research
dc.relation.journal: Entropy
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2020-03-02T13:00:09Z
dspace.date.submission: 2020-03-02T13:00:09Z
mit.journal.volume: 22
mit.journal.issue: 1
mit.license: PUBLISHER_CC
mit.metadata.status: Complete
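
The abstract's core idea for the binary (n=2) case is that a one-dimensional distilled score W can be discretized into m bins, choosing bin boundaries to maximize the mutual information I(Z,Y). The following is a minimal illustrative sketch of that idea (not the paper's actual algorithm): it generates a toy class-dependent score W, then grid-searches a single boundary (the m=2 case) for the binning that maximizes empirical I(Z;Y). All variable names and the toy data are assumptions for illustration only.

```python
import numpy as np

def mutual_information(z, y):
    """Empirical mutual information I(Z;Y) in bits from paired discrete samples."""
    mi = 0.0
    for zi in np.unique(z):
        for yi in np.unique(y):
            p_zy = np.mean((z == zi) & (y == yi))  # joint probability estimate
            if p_zy > 0:
                p_z = np.mean(z == zi)
                p_y = np.mean(y == yi)
                mi += p_zy * np.log2(p_zy / (p_z * p_y))
    return mi

# Toy binary task: W plays the role of the 1-D distilled score, Y the class label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)
w = rng.normal(loc=y.astype(float), scale=1.0)  # score distribution depends on class

# m = 2 bins: grid-search the single bin boundary that maximizes I(Z;Y).
boundaries = np.linspace(w.min(), w.max(), 200)
best = max(boundaries, key=lambda b: mutual_information((w > b).astype(int), y))
z = (w > best).astype(int)
print(f"best boundary ~ {best:.2f}, I(Z;Y) ~ {mutual_information(z, y):.3f} bits")
```

For m > 2 bins one would search over m−1 boundaries instead of one; the paper's "corners" correspond to the I(Z,Y)-maximizing binning at each fixed m.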

