A container-based lightweight fault tolerance framework for high performance computing workloads
Author(s)
Sindi, Mohamad (Mohamad Othman)
Download: 1144931624-MIT.pdf (14.45 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Civil and Environmental Engineering.
Advisor
John R. Williams.
Abstract
According to the latest list of the world's top 500 supercomputers, ~90% of the top High Performance Computing (HPC) systems are based on commodity hardware clusters, which are typically designed for performance rather than reliability. The Mean Time Between Failures (MTBF) for some current petascale systems has been reported to be several days, while studies estimate it may be less than 60 minutes for future exascale systems. One of the largest studies on HPC system failures showed that more than 50% of failures were due to hardware, and that failure rates grew with system size. Hence, running extended workloads on such systems becomes more challenging as system sizes grow. In this work, we design and implement a lightweight fault tolerance framework to improve the sustainability of running workloads on HPC clusters. The framework consists mainly of a fault prediction component and a remedy component. The fault prediction component is implemented as a parallel algorithm that proactively predicts hardware issues with no overhead, allowing remedial actions to be taken before failures impact workloads. The algorithm applies machine learning to supercomputer system logs. We test it on actual logs from three supercomputers at Sandia National Laboratories (SNL), a massive dataset of ~750 million log messages (~86 GB of data). The algorithm is also tested online on our test cluster. We demonstrate the algorithm's high accuracy and performance in predicting cluster nodes with potential issues. The remedy component is implemented using Linux container technology. Container technology has proven successful in the microservices domain, and we adapt it to HPC workloads to exploit its resilience potential. By running workloads inside containers, we are able to migrate running workloads from nodes predicted to have hardware issues to healthy nodes. This introduces no major interruption or performance overhead to the workload and requires no application modification. We test with multiple real HPC applications that use the Message Passing Interface (MPI) standard. Tests are performed on various cluster platforms using different MPI implementations. Results demonstrate successful migration of HPC workloads while maintaining the integrity of the results produced.
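The prediction component described above applies machine learning to raw supercomputer system logs to flag nodes likely to develop hardware problems. As a rough illustration of that general idea (not the thesis's actual parallel algorithm), the following Python sketch trains a simple text classifier on labeled log lines; the file name train_logs.csv, its columns, and the choice of TF-IDF features with logistic regression are assumptions made only for this example.

    # Illustrative sketch only: classify system log lines as "precedes a node
    # failure" vs. "benign". The input file, columns, and model choice are
    # hypothetical; the thesis's parallel prediction algorithm may differ.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import classification_report

    # Hypothetical dataset: one row per log message, with a binary label
    # indicating whether the originating node failed shortly afterwards.
    df = pd.read_csv("train_logs.csv")          # columns: "message", "failed"
    X_train, X_test, y_train, y_test = train_test_split(
        df["message"], df["failed"], test_size=0.2, random_state=0)

    # Bag-of-words features over the log text feeding a linear classifier.
    model = make_pipeline(
        TfidfVectorizer(min_df=5, ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)

    print(classification_report(y_test, model.predict(X_test)))

In practice the thesis operates at the node level and at scale, so a real deployment would aggregate per-node log streams and run prediction in parallel rather than scoring individual lines as shown here.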
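The remedy component migrates a running containerized workload off a node that the predictor flags. The abstract does not prescribe the exact tooling, so the snippet below is only a hedged sketch of one common mechanism for moving a running Linux container between hosts: Docker's experimental checkpoint/restore support (backed by CRIU). Host names, the container name, the checkpoint directory, and the ssh-based orchestration are all placeholders, not the framework's actual implementation.

    # Illustrative sketch only: checkpoint a running container on a suspect
    # node and resume it on a healthy node. Names and paths are placeholders.
    import subprocess

    SUSPECT_NODE = "node07"        # node predicted to develop a hardware issue
    HEALTHY_NODE = "node12"        # migration target
    CONTAINER = "mpi_job"          # container name used on both nodes
    CKPT_DIR = "/shared/ckpt"      # directory visible to both nodes (e.g. NFS)

    def ssh(host, *cmd):
        """Run a command on a remote node and fail loudly if it errors."""
        subprocess.run(["ssh", host, *cmd], check=True)

    # 1. Checkpoint the running container on the suspect node
    #    (docker checkpoint create stops the container by default).
    ssh(SUSPECT_NODE, "docker", "checkpoint", "create",
        "--checkpoint-dir", CKPT_DIR, CONTAINER, "ckpt1")

    # 2. Resume from that checkpoint on a healthy node, assuming an identically
    #    named container has been created there from the same image.
    ssh(HEALTHY_NODE, "docker", "start",
        "--checkpoint-dir", CKPT_DIR, "--checkpoint", "ckpt1", CONTAINER)

A real MPI job spans many nodes and keeps open network connections between ranks, so the actual framework must coordinate far more than this single-container sketch suggests.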
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 122-130).
Date issued
2019
Department
Massachusetts Institute of Technology. Department of Civil and Environmental Engineering
Publisher
Massachusetts Institute of Technology
Keywords
Civil and Environmental Engineering.