
dc.contributor.author: Polyanskiy, Yury
dc.contributor.author: Wu, Yihong
dc.date.accessioned: 2021-02-23T21:57:11Z
dc.date.available: 2021-02-23T21:57:11Z
dc.date.issued: 2017-04
dc.identifier.isbn: 9781493970049
dc.identifier.isbn: 9781493970056
dc.identifier.issn: 0940-6573
dc.identifier.issn: 2198-3224
dc.identifier.uri: https://hdl.handle.net/1721.1/129982
dc.description: Part of The IMA Volumes in Mathematics and its Applications book series (IMA, volume 161).
dc.description.abstract: The data-processing inequality, that is, I(U;Y) ≤ I(U;X) for a Markov chain U → X → Y, has been the method of choice for proving impossibility (converse) results in information theory and many other disciplines. Various channel-dependent improvements (called strong data-processing inequalities, or SDPIs) of this inequality have been proposed both classically and more recently. In this note we first survey known results relating various notions of contraction for a single channel. Then we consider the basic extension: given an SDPI for each constituent channel in a Bayesian network, how does one produce an end-to-end SDPI? Our approach is based on the (extract of the) Evans-Schulman method, which is demonstrated for three different kinds of SDPIs, namely, the usual Ahlswede-Gács type contraction coefficients (mutual information), Dobrushin's contraction coefficients (total variation), and finally the F_I-curve (the best possible non-linear SDPI for a given channel). The resulting bounds on the contraction coefficients are interpreted as probabilities of site percolation. As an example, we demonstrate how to obtain an SDPI for an n-letter memoryless channel with feedback given an SDPI for n = 1. Finally, we discuss a simple observation on the equivalence of a linear SDPI and comparison to an erasure channel (in the sense of the "less noisy" order). This leads to a simple proof of a curious inequality of Samorodnitsky (2015), and sheds light on how information spreads in the subsets of inputs of a memoryless channel.
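As a concrete illustration of the total-variation notion mentioned in the abstract (an editorial sketch, not part of the record or the paper), the following minimal Python snippet computes Dobrushin's contraction coefficient η_TV(K) = max_{x,x'} TV(K(·|x), K(·|x')) for a discrete channel given as a row-stochastic matrix. By Dobrushin's theorem this yields the linear SDPI TV(PK, QK) ≤ η_TV(K)·TV(P, Q) for all input distributions P, Q. The function name is illustrative; the binary symmetric channel BSC(δ), for which η_TV = |1 − 2δ|, serves as a sanity check.

```python
import numpy as np

def dobrushin_coefficient(K: np.ndarray) -> float:
    """Dobrushin's contraction coefficient eta_TV of a discrete channel.

    K is row-stochastic: K[x, y] = P(Y = y | X = x).
    eta_TV(K) is the maximum, over input pairs (x, x'), of the total
    variation distance between the output distributions K[x] and K[x'].
    """
    n = K.shape[0]
    return max(
        0.5 * np.abs(K[x] - K[xp]).sum()  # TV distance between two rows
        for x in range(n)
        for xp in range(x + 1, n)
    )

# Sanity check on the binary symmetric channel BSC(delta):
# rows are (1-delta, delta) and (delta, 1-delta), so eta_TV = |1 - 2*delta|.
delta = 0.11
bsc = np.array([[1 - delta, delta],
                [delta, 1 - delta]])
assert np.isclose(dobrushin_coefficient(bsc), abs(1 - 2 * delta))
print(dobrushin_coefficient(bsc))  # 0.78
```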
dc.language.iso: en
dc.publisher: Springer
dc.relation.isversionof: http://dx.doi.org/10.1007/978-1-4939-7005-6_7
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Strong Data-Processing Inequalities for Channels and Bayesian Networks
dc.type: Book
dc.identifier.citation: Polyanskiy, Yury and Yihong Wu. "Strong Data-Processing Inequalities for Channels and Bayesian Networks." Convexity and Concentration, IMA Volumes in Mathematics and its Applications, 161, Springer, 2017, 211-249. © 2017 Springer Science+Business Media LLC
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: Convexity and Concentration
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2020-06-23T14:57:11Z
dspace.date.submission: 2020-06-23T14:57:14Z
mit.journal.volume: 161
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

