Work-sharing framework for Apache Spark
Author(s)
Yu, Lucy, M. Eng., Massachusetts Institute of Technology
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Matei Zaharia.
Abstract
Apache Spark is a popular framework for distributed data processing that generalizes the MapReduce model and significantly improves the performance of many use cases. Spark lets users query enormous data sets far faster than earlier systems, and in industry such ad-hoc queries often perform similar work, creating an opportunity to share computation across queries and reduce total computation time even further. We have developed a Wrapper class that performs such optimizations. In particular, its lazy-evaluation strategy avoids duplicate computation and executes multiple related Spark jobs at the same time, reducing scheduling overhead. Overall, the system demonstrates significant efficiency gains over default Spark.
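The thesis's actual Wrapper class is not reproduced on this page; the following is a minimal Scala sketch of the lazy-evaluation idea the abstract describes. The deferCount/run API, the Demo object, and the input path "events.log" are all assumptions invented for illustration. The sketch records actions instead of running them eagerly, caches the shared input so its lineage is not recomputed per query, and submits the batched jobs together via Spark's countAsync; a real system would also coordinate caching across concurrently submitted jobs.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.rdd.RDD
    import scala.collection.mutable.ArrayBuffer
    import scala.concurrent.{Await, Future}
    import scala.concurrent.duration.Duration

    // Hypothetical sketch: defer related actions, share the cached input,
    // and submit all recorded jobs to the scheduler in one batch.
    class Wrapper[T](base: RDD[T]) {
      base.cache()  // shared input is materialized once, then reused
      private val pending = ArrayBuffer.empty[() => Future[Long]]

      // Record a filtered count; nothing executes yet (lazy evaluation).
      def deferCount(pred: T => Boolean): Unit =
        pending += (() => base.filter(pred).countAsync())

      // Launch every recorded job at once, then collect the results.
      def run(): Seq[Long] = {
        val futures = pending.map(f => f())
        pending.clear()
        futures.map(Await.result(_, Duration.Inf)).toSeq
      }
    }

    object Demo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("work-sharing-sketch").setMaster("local[*]"))
        val w = new Wrapper(sc.textFile("events.log"))  // hypothetical path
        w.deferCount(_.contains("ERROR"))
        w.deferCount(_.contains("WARN"))
        println(w.run())  // both counts reuse the cached input
        sc.stop()
      }
    }

Batching the submissions is what reduces scheduling overhead: both counts reach the Spark scheduler together instead of one job being planned only after the previous job finishes.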
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (page 39).
Date issued
2016
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.