dc.contributor.author | Polyanskiy, Yury | |
dc.contributor.author | Wu, Yihong | |
dc.date.accessioned | 2021-02-23T21:57:11Z | |
dc.date.available | 2021-02-23T21:57:11Z | |
dc.date.issued | 2017-04 | |
dc.identifier.isbn | 9781493970049 | |
dc.identifier.isbn | 9781493970056 | |
dc.identifier.issn | 0940-6573 | |
dc.identifier.issn | 2198-3224 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/129982 | |
dc.description | Part of the IMA Volumes in Mathematics and its Applications book series (IMA, volume 161). | en_US |
dc.description.abstract | The data-processing inequality, that is, I(U;Y) ≤ I(U;X) for a Markov chain U → X → Y, has been the method of choice for proving impossibility (converse) results in information theory and many other disciplines. Various channel-dependent improvements of this inequality, called strong data-processing inequalities (SDPIs), have been proposed both classically and more recently. In this note we first survey known results relating various notions of contraction for a single channel. We then consider the basic extension: given an SDPI for each constituent channel in a Bayesian network, how does one produce an end-to-end SDPI?
Our approach is based on the (extract of the) Evans-Schulman method, which is demonstrated for three different kinds of SDPIs, namely, the usual Ahlswede-Gács type contraction coefficients (mutual information), Dobrushin's contraction coefficients (total variation), and finally the F_I-curve (the best possible non-linear SDPI for a given channel). The resulting bounds on the contraction coefficients are interpreted as probabilities of site percolation. As an example, we demonstrate how to obtain an SDPI for an n-letter memoryless channel with feedback given an SDPI for n = 1.
Finally, we discuss a simple observation on the equivalence of a linear SDPI and comparison to an erasure channel (in the sense of “less noisy” order). This leads to a simple proof of a curious inequality of Samorodnitsky (2015), and sheds light on how information spreads in the subsets of inputs of a memoryless channel. | en_US |
dc.language.iso | en | |
dc.publisher | Springer | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1007/978-1-4939-7005-6_7 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | arXiv | en_US |
dc.title | Strong Data-Processing Inequalities for Channels and Bayesian Networks | en_US |
dc.type | Book | en_US |
dc.identifier.citation | Polyanskiy, Yury and Yihong Wu. "Strong Data-Processing Inequalities for Channels and Bayesian Networks." Convexity and Concentration, IMA Volumes in Mathematics and its Applications, 161, Springer, 2017, 211-249. © 2017 Springer Science+Business Media LLC | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.relation.journal | Convexity and Concentration | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2020-06-23T14:57:11Z | |
dspace.date.submission | 2020-06-23T14:57:14Z | |
mit.journal.volume | 161 | en_US |
mit.license | OPEN_ACCESS_POLICY | |
mit.metadata.status | Complete | |