dc.contributor.advisor    Anant Agarwal.    en_US
dc.contributor.author    Psota, James Ryan    en_US
dc.contributor.other    Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science.    en_US
dc.date.accessioned    2007-01-10T16:48:13Z
dc.date.available    2007-01-10T16:48:13Z
dc.date.copyright    2005    en_US
dc.date.issued    2006    en_US
dc.identifier.uri    http://hdl.handle.net/1721.1/35612
dc.description    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006.    en_US
dc.description    Includes bibliographical references (leaves 113-115).    en_US
dc.description.abstract    Next-generation microprocessors will increasingly rely on parallelism, as opposed to frequency scaling, for improvements in performance. Microprocessor designers are attaining such parallelism by placing multiple processing cores on a single piece of silicon. As the architecture of modern computer systems evolves from single monolithic cores to multiple cores, its programming models continue to evolve. Programming parallel computer systems has historically been quite challenging because the programmer must orchestrate both computation and communication. A number of different models have evolved to help the programmer with this arduous task, from standardized shared memory and message passing application programming interfaces, to automatically parallelizing compilers that attempt to achieve performance and correctness similar to that of hand-coded programs. One of the most widely used standard programming interfaces is the Message Passing Interface (MPI). This thesis contributes rMPI, a robust, deadlock-free, high-performance design and implementation of MPI for the Raw tiled architecture.    en_US
dc.description.abstract    (cont.) rMPI's design constitutes the marriage of the MPI interface and the Raw system, allowing programmers to apply a well-understood programming model to a novel high-performance parallel computer. rMPI introduces robust, deadlock-free, and high-performance mechanisms to program Raw; offers an interface to Raw that is compatible with current MPI software; gives programmers already familiar with MPI an easy interface with which to program Raw; and gives programmers fine-grain control over their programs when trusting automatic parallelization tools is not desirable. Experimental evaluations show that the resulting library has relatively low overhead, scales well with increasing message sizes for a number of collective algorithms, and enables respectable speedups for real applications.    en_US
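Because the abstract states that rMPI is compatible with current MPI software, a minimal sketch of a standard MPI point-to-point program may help illustrate the programming model it supports. This example uses only standard MPI calls (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, MPI_Finalize); it is not code from the thesis, and the rank roles, tag value, and message payload are illustrative assumptions.

    /* Illustrative sketch of a standard MPI program; not taken from the thesis.
     * Rank 0 sends one integer to rank 1, which receives and prints it. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size, value = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        if (rank == 0 && size > 1) {
            /* Send one MPI_INT to rank 1 with tag 0 (tag chosen arbitrarily). */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Receive the integer from rank 0. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Under the compatibility claim above, a program written in this style could target rMPI on Raw without source changes, since the library exposes the standard MPI interface.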
dc.description.statementofresponsibility    by James Ryan Psota.    en_US
dc.format.extent    115 leaves    en_US
dc.format.extent    6753474 bytes
dc.format.extent    7102742 bytes
dc.format.mimetype    application/pdf
dc.format.mimetype    application/pdf
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582
dc.subject    Electrical Engineering and Computer Science.    en_US
dc.title    rMPI : an MPI-compliant message passing library for tiled architectures    en_US
dc.title.alternative    MPI-compliant message passing library for tiled architectures    en_US
dc.type    Thesis    en_US
dc.description.degree    S.M.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc    75293670    en_US

