Show simple item record

dc.contributor.advisor  Srinivas Devadas and Joel Emer.  en_US
dc.contributor.author  Yang, Hsin-Jung  en_US
dc.contributor.other  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.  en_US
dc.date.accessioned  2017-10-30T15:28:37Z
dc.date.available  2017-10-30T15:28:37Z
dc.date.copyright  2017  en_US
dc.date.issued  2017  en_US
dc.identifier.uri  http://hdl.handle.net/1721.1/112034
dc.description  Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.  en_US
dc.description  Cataloged from PDF version of thesis.  en_US
dc.description  Includes bibliographical references (pages 159-167).  en_US
dc.description.abstract  FPGA-based accelerators have great potential to achieve better performance and energy efficiency than general-purpose solutions because FPGAs permit the tailoring of hardware to a particular application. This hardware malleability extends to FPGA memory systems: unlike conventional processors, in which the memory system is fixed at design time, the cache algorithms and network topologies of FPGA memory hierarchies may all be tuned to improve application performance. As FPGAs have grown in size and capacity, FPGA physical memories have become richer and more diverse in order to support the increased computational capacity of FPGA fabrics. Using these resources, and using them well, has become commensurately more difficult, especially in the context of legacy designs ported from smaller, simpler FPGA systems. This growing complexity necessitates automated build procedures that can make good use of memory resources by performing resource-aware, application-specific optimizations. In this thesis, we leverage the freedom of abstraction to build program-optimized memory hierarchies on behalf of the user, making FPGA programming easier and more efficient. To enable better generation of these memory hierarchies, we first provide a set of easy-to-use memory abstractions and apply several optimization mechanisms beneath these abstractions to construct various memory building blocks with different performance and cost tradeoffs. Then, we introduce a program introspection mechanism to analyze the runtime memory access characteristics of a given application. Finally, we propose a feedback-directed memory compiler that automatically synthesizes customized memory hierarchies tailored to different FPGA applications and platforms, enabling user programs to take advantage of the increasing memory capabilities of modern FPGAs.  en_US
dc.description.statementofresponsibility  by Hsin-Jung Yang.  en_US
dc.format.extent  xvii, 167 pages  en_US
dc.language.iso  eng  en_US
dc.publisher  Massachusetts Institute of Technology  en_US
dc.rights  MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.  en_US
dc.rights.uri  http://dspace.mit.edu/handle/1721.1/7582  en_US
dc.subject  Electrical Engineering and Computer Science.  en_US
dc.title  Automatic application-specific optimizations under FPGA memory abstractions  en_US
dc.type  Thesis  en_US
dc.description.degree  Ph. D.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc  1006384698  en_US
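
The abstract above describes three pieces: a memory abstraction that hides the physical memory system from the user program, runtime introspection of the application's memory access behavior, and a feedback-directed compiler that chooses a memory implementation from that profile. The following C++ sketch is purely illustrative and is not drawn from the thesis itself: the interface name MemoryIfc, the two backing implementations, the reuse statistic, and the selection threshold are all hypothetical, and the whole thing is a software model of the idea rather than actual FPGA hardware.

```cpp
// Hypothetical sketch (not from the thesis): a flat memory abstraction,
// two interchangeable backing implementations, and a toy feedback-directed
// selector that picks one from profiled access statistics.
#include <cstdint>
#include <cstdio>
#include <memory>
#include <unordered_map>
#include <vector>

// Abstract interface the application kernel programs against.
struct MemoryIfc {
    virtual uint32_t read(uint64_t addr) = 0;
    virtual void write(uint64_t addr, uint32_t data) = 0;
    virtual ~MemoryIfc() = default;
};

// Backing store shared by both implementations (stands in for board DRAM).
using Dram = std::unordered_map<uint64_t, uint32_t>;

// Implementation A: every access goes straight to "DRAM".
struct UncachedMemory : MemoryIfc {
    Dram& dram;
    explicit UncachedMemory(Dram& d) : dram(d) {}
    uint32_t read(uint64_t addr) override { return dram[addr]; }
    void write(uint64_t addr, uint32_t data) override { dram[addr] = data; }
};

// Implementation B: a small direct-mapped cache in front of "DRAM"
// (stands in for a BRAM-backed private cache inserted by the build flow).
struct CachedMemory : MemoryIfc {
    struct Line { bool valid = false; uint64_t tag = 0; uint32_t data = 0; };
    Dram& dram;
    std::vector<Line> lines;
    explicit CachedMemory(Dram& d, size_t n = 256) : dram(d), lines(n) {}
    uint32_t read(uint64_t addr) override {
        Line& l = lines[addr % lines.size()];
        if (!l.valid || l.tag != addr) l = {true, addr, dram[addr]};  // fill on miss
        return l.data;
    }
    void write(uint64_t addr, uint32_t data) override {
        lines[addr % lines.size()] = {true, addr, data};
        dram[addr] = data;  // write-through for simplicity
    }
};

// "Introspection": estimate locality as the fraction of accesses in a
// profiled address trace that repeat a recently seen address.
double estimate_reuse(const std::vector<uint64_t>& trace, size_t window = 256) {
    size_t reused = 0;
    std::vector<uint64_t> recent;
    for (uint64_t a : trace) {
        for (uint64_t r : recent) if (r == a) { ++reused; break; }
        recent.push_back(a);
        if (recent.size() > window) recent.erase(recent.begin());
    }
    return trace.empty() ? 0.0 : double(reused) / double(trace.size());
}

// "Feedback-directed" step: pick a memory implementation for the next build
// from the profiled reuse ratio (the 0.5 threshold is made up here).
std::unique_ptr<MemoryIfc> build_memory(Dram& dram, double reuse_ratio) {
    if (reuse_ratio > 0.5) return std::make_unique<CachedMemory>(dram);
    return std::make_unique<UncachedMemory>(dram);
}

int main() {
    Dram dram;
    // Profiling run: a strided loop touched three times, so most addresses repeat.
    std::vector<uint64_t> trace;
    for (int pass = 0; pass < 3; ++pass)
        for (uint64_t a = 0; a < 128; a += 4) trace.push_back(a);
    double reuse = estimate_reuse(trace);
    auto mem = build_memory(dram, reuse);      // high reuse -> cached variant
    mem->write(16, 42);
    std::printf("reuse=%.2f read back %u\n", reuse, (unsigned)mem->read(16));
    return 0;
}
```

In this model, choosing CachedMemory when the profiled reuse ratio is high mirrors the kind of decision a feedback-directed memory compiler could make when deciding whether to instantiate a private cache for a client; the statistic and threshold here are stand-ins for whatever the real flow measures.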

