
dc.contributor.author: Kasture, Harshad
dc.contributor.author: Sanchez, Daniel
dc.date.accessioned: 2017-12-19T18:02:01Z
dc.date.available: 2017-12-19T18:02:01Z
dc.date.issued: 2016-09
dc.identifier.isbn: 978-1-5090-3896-1
dc.identifier.isbn: 978-1-5090-3895-4
dc.identifier.isbn: 978-1-5090-3897-8
dc.identifier.uri: http://hdl.handle.net/1721.1/112803
dc.description.abstract: Latency-critical applications, common in datacenters, must achieve small and predictable tail (e.g., 95th or 99th percentile) latencies. Their strict performance requirements limit utilization and efficiency in current datacenters. These problems have sparked research in hardware and software techniques that target tail latency. However, research in this area is hampered by the lack of a comprehensive suite of latency-critical benchmarks. We present TailBench, a benchmark suite and evaluation methodology that makes latency-critical workloads as easy to run and characterize as conventional, throughput-oriented ones. TailBench includes eight applications that span a wide range of latency requirements and domains, and a harness that implements a robust and statistically sound load-testing methodology. The modular design of the TailBench harness facilitates multiple load-testing scenarios, ranging from multi-node configurations that capture network overheads, to simplified single-node configurations that allow measuring tail latency in simulation. Validation results show that the simplified configurations are accurate for most applications. This flexibility enables rapid prototyping of hardware and software techniques for latency-critical workloads. [en_US]
dc.description.sponsorship: National Science Foundation (U.S.) (CCF-1318384) [en_US]
dc.description.sponsorship: Qatar Computing Research Institute [en_US]
dc.description.sponsorship: Google (Firm) (Google Research Award) [en_US]
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/IISWC.2016.7581261 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT Web Domain [en_US]
dc.title: TailBench: a benchmark suite and evaluation methodology for latency-critical applications [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Kasture, Harshad, and Daniel Sanchez. “TailBench: A Benchmark Suite and Evaluation Methodology for Latency-Critical Applications.” 2016 IEEE International Symposium on Workload Characterization (IISWC), IEEE, September 2016, pp. 1–10. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.mitauthor: Kasture, Harshad
dc.contributor.mitauthor: Sanchez, Daniel
dc.relation.journal: 2016 IEEE International Symposium on Workload Characterization (IISWC) [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Kasture, Harshad; Sanchez, Daniel [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-3964-9064
mit.license: OPEN_ACCESS_POLICY [en_US]
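
The abstract above describes measuring tail (e.g., 95th or 99th percentile) latency under a statistically sound load-testing methodology. As a minimal illustration of that idea (not the TailBench harness itself; the queue model, request rate, and service time below are hypothetical stand-ins), the following Python sketch runs an open-loop load generator with Poisson arrivals against a simulated single-server service and reports mean, p95, and p99 latency:

import random

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(samples)
    idx = max(0, min(len(s) - 1, int(round(p / 100.0 * len(s))) - 1))
    return s[idx]

def open_loop_run(qps, service_time_s, duration_s, seed=42):
    """Open-loop load test of a hypothetical single-server queue:
    requests arrive per a Poisson process (exponential inter-arrival
    times) and each request's latency = queueing delay + service time.
    This stands in for a real client/server pair like TailBench's."""
    rng = random.Random(seed)
    t = 0.0                 # current arrival time
    server_free_at = 0.0    # when the server finishes its current request
    latencies = []
    while t < duration_s:
        t += rng.expovariate(qps)        # next arrival (mean gap = 1/qps)
        start = max(t, server_free_at)   # wait if the server is busy
        finish = start + service_time_s
        server_free_at = finish
        latencies.append(finish - t)     # end-to-end latency sample
    return latencies

if __name__ == "__main__":
    lat = open_loop_run(qps=800, service_time_s=0.001, duration_s=30)
    print(f"requests: {len(lat)}")
    print(f"mean: {sum(lat) / len(lat) * 1e3:.3f} ms")
    print(f"p95:  {percentile(lat, 95) * 1e3:.3f} ms")
    print(f"p99:  {percentile(lat, 99) * 1e3:.3f} ms")

In an open-loop generator like this, arrivals do not slow down when the server falls behind, which is why tail percentiles grow far faster than the mean as offered load approaches saturation; this is the behavior a latency-critical benchmark harness must capture.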

