
dc.contributor.advisor: Mohammad Alizadeh (en_US)
dc.contributor.author: Khani Shirkoohi, Mehrdad (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2019-10-11T22:11:25Z
dc.date.available: 2019-10-11T22:11:25Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/122549
dc.description: Thesis: S.M. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 (en_US)
dc.description: Cataloged from PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 55-58). (en_US)
dc.description.abstract: Massive Multiple-Input Multiple-Output (MIMO) is a key enabler for fifth-generation (5G) cellular communication systems. Massive MIMO gives rise to challenging signal detection problems for which traditional detectors are either impractical or suffer from performance limitations. Recent work has proposed several learning approaches to MIMO detection with promising results on simple channel models (e.g., i.i.d. Gaussian entries). However, we find that the performance of these schemes degrades significantly in real-world scenarios in which the channels of different receivers are spatially correlated. The root of this poor performance is that these schemes either do not exploit the problem structure (requiring models with millions of training parameters), or are overly constrained to mimic algorithms that require very specific assumptions about the channel matrix. We propose MMNet, a deep learning MIMO detection scheme that significantly outperforms existing approaches on realistic channel matrices with the same or lower computational complexity. MMNet's design builds on the theory of iterative soft-thresholding algorithms to identify the right degree of model complexity, and it uses a novel training algorithm that leverages temporal and frequency locality of channel matrices at a receiver to accelerate training. Together, these innovations allow MMNet to train online for every realization of the channel. On i.i.d. Gaussian channels, MMNet requires 2 orders of magnitude fewer operations than existing deep learning schemes but achieves near-optimal performance. On spatially-correlated realistic channels, MMNet achieves the same error rate as the next-best learning scheme (OAMPNet [1]) at 2.5 dB lower Signal-to-Noise Ratio (SNR) and with at least 10x lower computational complexity. MMNet is also 4-8 dB better overall than a classic linear scheme like the minimum mean square error (MMSE) detector. (en_US)
dc.description.statementofresponsibility: by Mehrdad Khani Shirkoohi (en_US)
dc.format.extent: 58 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science (en_US)
dc.title: Adaptive Neural Signal Detection for Massive MIMO (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. in Computer Science and Engineering (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1122565262 (en_US)
dc.description.collection: S.M. in Computer Science and Engineering; Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2019-10-11T22:11:24Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)

