dc.contributor.advisor | Hari Balakrishnan. | en_US |
dc.contributor.author | Netravali, Ravi Arun | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2015-07-17T19:12:48Z | |
dc.date.available | 2015-07-17T19:12:48Z | |
dc.date.copyright | 2015 | en_US |
dc.date.issued | 2015 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/97765 | |
dc.description | Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. | en_US |
dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US |
dc.description | Cataloged from student-submitted PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 77-80). | en_US |
dc.description.abstract | This thesis first presents a measurement toolkit, Mahimahi, that records websites and replays them under emulated network conditions. Mahimahi improves on prior record-and-replay frameworks by emulating the multi-origin nature of Web pages, isolating its network traffic, and enabling evaluations of a larger set of target applications beyond browsers. Using Mahimahi, we perform a case study comparing current multiplexing protocols, HTTP/1.1 and SPDY, and a protocol in development, QUIC, to a hypothetical optimal protocol. We find that all three protocols are significantly suboptimal and that their gaps from the optimal only increase with higher link speeds and RTTs. The reason for these trends is the same for each protocol: inherent source-level dependencies between objects on a Web page and browser limits on the number of parallel flows lead to serialized HTTP requests and prevent links from being fully occupied. To mitigate the effect of these dependencies, we built Cumulus, a user-deployable combination of a content-distribution network and a cloud browser that improves page load times when the user is at a significant delay from a Web page's servers. Cumulus contains a "Mini-CDN" (a transparent proxy running on the user's machine) and a "Puppet" (a headless browser run by the user on a well-connected public cloud). When the user loads a Web page, the Mini-CDN forwards the user's request to the Puppet, which loads the entire page and pushes all of the page's objects to the Mini-CDN, which caches them locally. Cumulus benefits from the finding that dependency resolution, the process of learning which objects make up a Web page, accounts for a considerable amount of user-perceived wait time. By moving this task to the Puppet, Cumulus can accelerate page loads without modifying existing Web browsers or servers. We find that on cellular, in-flight Wi-Fi, and transcontinental networks, Cumulus accelerated the page loads of Google's Chrome browser by 1.13-2.36×. Performance was 1.19-2.13× faster than Opera Turbo and 0.99-1.66× faster than Chrome with Google's Data Compression Proxy. | en_US |
dc.description.statementofresponsibility | by Ravi Arun Netravali. | en_US |
dc.format.extent | 80 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Understanding and improving Web page load times on modern networks | en_US |
dc.type | Thesis | en_US |
dc.description.degree | S.M. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 913219297 | en_US |
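The abstract above describes the Cumulus page-load flow: a local Mini-CDN forwards the user's first request to a cloud Puppet, which resolves all of the page's dependencies and pushes every object back for local caching. The sketch below is a minimal, hypothetical illustration of that flow; the class names, the toy dependency map, and the in-memory "push" are assumptions for illustration, not the thesis implementation.

```python
# Hypothetical sketch of the Cumulus flow (Mini-CDN + Puppet) from the abstract.
# The dependency map and object contents are illustrative stand-ins.

# Toy dependency graph: loading one object reveals further objects to fetch.
PAGE_DEPENDENCIES = {
    "http://example.com/index.html": ["http://example.com/app.js",
                                      "http://example.com/style.css"],
    "http://example.com/app.js": ["http://example.com/data.json"],
    "http://example.com/style.css": [],
    "http://example.com/data.json": [],
}


class Puppet:
    """Headless browser on a well-connected cloud: resolves all dependencies."""

    def load(self, root_url):
        # Walk the dependency graph breadth-first, mimicking how a browser
        # discovers objects while parsing HTML/CSS/JS.
        discovered, queue = [], [root_url]
        while queue:
            url = queue.pop(0)
            if url in discovered:
                continue
            discovered.append(url)
            queue.extend(PAGE_DEPENDENCIES.get(url, []))
        # Push every object (URLs stand in for the actual bytes) in one batch.
        return {url: f"<contents of {url}>" for url in discovered}


class MiniCDN:
    """Transparent proxy on the user's machine: caches objects pushed by the Puppet."""

    def __init__(self, puppet):
        self.puppet = puppet
        self.cache = {}

    def fetch(self, url):
        if url not in self.cache:
            # One long round trip to the Puppet retrieves the whole page,
            # so subsequent object requests are served from the local cache.
            self.cache.update(self.puppet.load(url))
        return self.cache[url]


if __name__ == "__main__":
    cdn = MiniCDN(Puppet())
    # The browser's first request triggers the Puppet; later ones hit the cache.
    for obj in PAGE_DEPENDENCIES:
        print(obj, "->", cdn.fetch(obj))
```

In this sketch the expensive dependency-resolution loop runs entirely in Puppet.load, so the user-side Mini-CDN pays only a single round trip to the cloud regardless of how many objects the page contains, which is the intuition behind the speedups reported in the abstract.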