Capacity-Achieving Guessing Random Additive Noise Decoding
Author(s)
Li, Jiange; Médard, Muriel
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We introduce a new algorithm for realizing maximum likelihood (ML) decoding of arbitrary codebooks in discrete channels with or without memory, in which the receiver rank-orders noise sequences from most likely to least likely. Subtracting noise from the received signal in that order, the first instance that results in a member of the codebook is the ML decoding. We name this algorithm GRAND, for Guessing Random Additive Noise Decoding. We establish that GRAND is capacity-achieving when used with random codebooks. For rates below capacity we identify error exponents, and for rates beyond capacity we identify success exponents. We determine the scheme's complexity in terms of the number of computations the receiver performs. For rates beyond capacity, this analysis reveals thresholds on the number of guesses within which, if a member of the codebook is identified, it is likely to be the transmitted code word. We introduce an approximate ML decoding scheme in which the receiver abandons the search after a fixed number of queries, an approach we dub GRANDAB, for GRAND with ABandonment. While not an ML decoder, GRANDAB is also capacity-achieving for an appropriate choice of abandonment threshold, and we characterize its complexity, error, and success exponents. Worked examples are presented for Markovian noise that indicate these decoding schemes substantially outperform brute-force decoding.
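To make the guessing procedure concrete, the following is a minimal, illustrative sketch of hard-detection GRAND for a binary channel. It assumes independent bit flips with flip probability below one half, so querying noise patterns in increasing Hamming weight visits them from most likely to least likely, which makes the first codebook hit the ML decoding; the `max_queries` parameter plays the role of the GRANDAB abandonment threshold. The function name, codebook, and block length are hypothetical choices for demonstration, not the paper's implementation.

```python
# Sketch of hard-detection GRAND over a binary channel, assuming i.i.d.
# bit flips with probability < 1/2, so increasing Hamming weight orders
# noise guesses from most likely to least likely.
from itertools import combinations

def grand(received, codebook, n, max_queries=None):
    """Return (codeword, num_queries), or (None, num_queries) on abandonment."""
    queries = 0
    # Enumerate putative noise patterns: weight 0 (no errors), then all
    # single flips, then all double flips, and so on.
    for weight in range(n + 1):
        for flips in combinations(range(n), weight):
            queries += 1
            if max_queries is not None and queries > max_queries:
                return None, queries           # GRANDAB: abandon the search
            candidate = list(received)
            for i in flips:                    # subtract (XOR) the noise guess
                candidate[i] ^= 1
            if tuple(candidate) in codebook:   # first hit is the ML decoding
                return tuple(candidate), queries
    return None, queries

# Toy usage: a 2-codeword codebook of length 4, received word with one flip.
codebook = {(0, 0, 0, 0), (1, 1, 1, 1)}
decoded, q = grand((0, 1, 0, 0), codebook, n=4, max_queries=50)
print(decoded, q)  # -> (0, 0, 0, 0) after 3 queries
```

Note that the decoder only ever tests codebook membership, so the same loop works for any codebook, structured or random, with the noise-guess ordering adapted to the channel's noise statistics.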
Date issued
2019-07
Department
Massachusetts Institute of Technology. Research Laboratory of Electronics
Journal
IEEE Transactions on Information Theory
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Duffy, Ken R. et al. “Capacity-Achieving Guessing Random Additive Noise Decoding.” IEEE Transactions on Information Theory 65, 7 (July 2019): 4023–4040. © 2019 The Author(s)
Version: Author's final manuscript
ISSN
0018-9448