DSpace@MIT

Pareto-optimal data compression for binary classification tasks

Author(s)
Tegmark, Max Erik; Wu, Tailin
Download: entropy-22-00007-v3.pdf (5.378 MB)
Terms of use
Creative Commons Attribution https://creativecommons.org/licenses/by/4.0/
Abstract
The goal of lossy data compression is to reduce the storage cost of a data set X while retaining as much information as possible about something (Y) that you care about. For example, what aspects of an image X contain the most information about whether it depicts a cat? Mathematically, this corresponds to finding a mapping X→Z≡f(X) that maximizes the mutual information I(Z,Y) while the entropy H(Z) is kept below some fixed threshold. We present a new method for mapping out the Pareto frontier for classification tasks, reflecting the tradeoff between retained entropy and class information. We first show how a random variable X (an image, say) drawn from a class Y∈{1,…,n} can be distilled into a vector W=f(X)∈Rn−1 losslessly, so that I(W,Y)=I(X,Y); for example, for a binary classification task of cats and dogs, each image X is mapped into a single real number W retaining all information that helps distinguish cats from dogs. For the n=2 case of binary classification, we then show how W can be further compressed into a discrete variable Z=gβ(W)∈{1,…,mβ} by binning W into mβ bins, in such a way that varying the parameter β sweeps out the full Pareto frontier, solving a generalization of the discrete information bottleneck (DIB) problem. We argue that the most interesting points on this frontier are “corners” maximizing I(Z,Y) for a fixed number of bins m=2,3,…, which can conveniently be found without multiobjective optimization. We apply this method to the CIFAR-10, MNIST and Fashion-MNIST datasets, illustrating how it can be interpreted as an information-theoretically optimal image clustering algorithm. We find that these Pareto frontiers are not concave, and that recently reported DIB phase transitions correspond to transitions between these corners, changing the number of clusters.
Keywords
information; bottleneck; compression; classification
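The following is a minimal sketch, in Python, of the quantities the abstract describes: compressing a scalar distillation W into a discrete Z by binning, then measuring the retained class information I(Z;Y) against the compression cost H(Z) for a fixed number of bins m. It is not the authors' released code; the synthetic data, the choice of W as a noisy score for the binary label, and the equal-frequency (quantile) bin placement are illustrative assumptions, whereas the paper optimizes the bin boundaries to reach the Pareto-optimal corners.

import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector p."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(z, y, m, n_classes=2):
    """I(Z;Y) in bits from integer-coded samples z in {0..m-1}, y in {0..n_classes-1}."""
    joint = np.zeros((m, n_classes))
    for zi, yi in zip(z, y):
        joint[zi, yi] += 1
    joint /= joint.sum()
    # I(Z;Y) = H(Z) + H(Y) - H(Z,Y)
    return entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0)) - entropy(joint.ravel())

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)                      # binary class labels Y
w = np.clip(y + rng.normal(0, 0.6, size=y.size), 0, 1)   # synthetic scalar distillation W = f(X)

# One candidate "corner" per bin count m: bin W into m bins and report (H(Z), I(Z;Y)).
for m in (2, 3, 4, 5):
    edges = np.quantile(w, np.linspace(0, 1, m + 1)[1:-1])  # m-1 interior bin boundaries
    z = np.digitize(w, edges)                                # Z = g(W), bin index in {0..m-1}
    pz = np.bincount(z, minlength=m) / z.size
    print(f"m={m}:  H(Z)={entropy(pz):.3f} bits   I(Z;Y)={mutual_information(z, y, m):.3f} bits")

Sweeping m while maximizing I(Z;Y) at each fixed bin count traces the corner points discussed in the abstract; the full frontier would additionally require optimizing where the bin boundaries fall, which this sketch does not attempt.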
Date issued
2019-12-19
URI
https://hdl.handle.net/1721.1/125546
Department
Massachusetts Institute of Technology. Department of Physics; MIT Kavli Institute for Astrophysics and Space Research
Journal
Entropy
Publisher
Multidisciplinary Digital Publishing Institute
Citation
Tegmark, Max, and Tailin Wu. "Pareto-optimal data compression for binary classification tasks." Entropy 22, no. 1 (December 2019): 7. doi:10.3390/e22010007. © 2019 The Author(s).
Version: Final published version
ISSN
1099-4300

Collections
  • MIT Open Access Articles
