
dc.contributor.author: Gupta, Anjali
dc.contributor.author: Krohn, Maxwell
dc.contributor.author: Walfish, Michael
dc.contributor.other: Parallel and Distributed Operating Systems
dc.date.accessioned: 2005-12-22T01:30:43Z
dc.date.available: 2005-12-22T01:30:43Z
dc.date.issued: 2004-05-05
dc.identifier.other: MIT-CSAIL-TR-2004-027
dc.identifier.other: MIT-LCS-TM-643
dc.identifier.uri: http://hdl.handle.net/1721.1/30467
dc.description.abstract: The recently developed rateless erasure codes are a near-optimal channel coding technique that guarantees low overhead and fast decoding. The underlying theory, and current implementations, of these codes assume that a network transmitter encodes according to a pre-specified probability distribution. In this report, we use basic Machine Learning techniques to try to understand what happens when this assumption is false. We train several classes of models using certain features that describe the empirical distribution realized at a network receiver, and we investigate whether these models can “learn” to predict whether a given encoding will require extra overhead. Our results are mixed.
dc.format.extent: 15 p.
dc.format.extent: 20087603 bytes
dc.format.extent: 791808 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
dc.title: Can Basic ML Techniques Illuminate Rateless Erasure Codes?

