Work-sharing framework for Apache Spark
Author(s): Yu, Lucy, M. Eng., Massachusetts Institute of Technology
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Apache Spark is a popular framework for distributed data processing that generalizes the MapReduce model and significantly improves performance for many workloads. Spark lets users query enormous data sets far faster than earlier systems, yielding insights that provide a competitive edge in industry. These ad-hoc queries often perform similar work, creating an opportunity to share computation across queries and reduce total computation time even further. We have developed a Wrapper class that performs such optimizations. In particular, its strategy of lazy evaluation avoids duplicate computation and executes multiple related Spark jobs at the same time, reducing scheduling overhead. Overall, the system demonstrates significant efficiency gains compared to default Spark.
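The abstract's core idea — a Wrapper that defers execution, reuses results of duplicate computations, and submits pending work as a batch — can be illustrated with a minimal sketch. This is not the thesis's actual implementation and does not use Spark itself; the `Wrapper` class, its `map`/`filter`/`collect` interface, and the plan-based cache key are all hypothetical, chosen only to show how lazy evaluation enables deduplication and batched execution:

```python
class Wrapper:
    """Hypothetical sketch of work sharing via lazy evaluation.

    Each transformation only records a step in a plan; nothing runs until
    collect(). Identical (data, plan) pairs are evaluated once and the
    result shared (deduplication), and all pending plans are evaluated
    together on the first collect() (batching), mimicking how related
    Spark jobs could be scheduled at the same time.
    """

    _cache = {}    # (data id, plan) -> computed result, shared globally
    _pending = []  # wrappers whose plans have not been evaluated yet

    def __init__(self, data, plan=()):
        self._data = data
        self._plan = plan  # tuple of ("map" | "filter", fn) steps
        Wrapper._pending.append(self)

    def map(self, fn):
        # Lazy: return a new wrapper with an extended plan, compute nothing.
        return Wrapper(self._data, self._plan + (("map", fn),))

    def filter(self, fn):
        return Wrapper(self._data, self._plan + (("filter", fn),))

    def collect(self):
        # Evaluate every pending plan in one batch; duplicate plans over
        # the same data hit the cache and are computed only once.
        for w in Wrapper._pending:
            key = (id(w._data), w._plan)
            if key not in Wrapper._cache:
                out = list(w._data)
                for op, fn in w._plan:
                    if op == "map":
                        out = [fn(x) for x in out]
                    else:
                        out = [x for x in out if fn(x)]
                Wrapper._cache[key] = out
        Wrapper._pending.clear()
        return Wrapper._cache[(id(self._data), self._plan)]
```

In this sketch, two queries built from the same function objects produce equal plan tuples, so the second `collect()` is served from the cache; the real system described in the thesis operates on Spark jobs rather than in-memory lists.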
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (page 39).
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.