DSpace@MIT

GPU-Accelerated Machine Learning Inference as a Service for Computing in Neutrino Experiments

Author(s)
Wang, Michael; Yang, Tingjun; Flechas, Maria Acosta; Harris, Philip; Hawks, Benjamin; Holzman, Burt; Knoepfel, Kyle; Krupa, Jeffrey; Pedro, Kevin; Tran, Nhan; et al.
Download
Published version (1.443 MB)
Terms of use
Creative Commons Attribution 4.0 International license https://creativecommons.org/licenses/by/4.0/
Abstract
Machine learning algorithms are becoming increasingly prevalent and performant in the reconstruction of events in accelerator-based neutrino experiments. These sophisticated algorithms can be computationally expensive. At the same time, the data volumes of such experiments are rapidly increasing. The demand to process billions of neutrino events with many machine learning algorithm inferences creates a computing challenge. We explore a computing model in which heterogeneous computing with GPU coprocessors is made available as a web service. The coprocessors can be efficiently and elastically deployed to provide the right amount of computing for a given processing task. With our approach, Services for Optimized Network Inference on Coprocessors (SONIC), we integrate GPU acceleration specifically for the ProtoDUNE-SP reconstruction chain without disrupting the native computing workflow. With our integrated framework, we accelerate the most time-consuming task, track and particle shower hit identification, by a factor of 17. This results in a factor of 2.7 reduction in the total processing time when compared with CPU-only production. For this particular task, only 1 GPU is required for every 68 CPU threads, providing a cost-effective solution.
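The inference-as-a-service pattern the abstract describes separates the experiment's CPU-bound reconstruction code from the GPU running the neural network: the client serializes input tensors, sends them over the network to a remote inference server, and receives the predictions back. As a minimal sketch of that client side, the snippet below uses the Python gRPC client for NVIDIA Triton Inference Server (the server type used in SONIC deployments); the server address, model name, tensor names, and input shape are illustrative assumptions, not values taken from the paper.

    # Sketch of a SONIC-style inference client: reconstruction code running on a
    # CPU sends hit-patch tensors to a remote GPU server and gets scores back.
    # Server URL, model name, tensor names, and shapes are hypothetical.
    import numpy as np
    import tritonclient.grpc as grpcclient

    # Connect to a (hypothetical) Triton server exposing gRPC on port 8001.
    client = grpcclient.InferenceServerClient(url="gpu-service.example.org:8001")

    # Stand-in batch of detector inputs: 16 wire-plane patches of 48x48 pixels,
    # 3 channels. Real inputs would come from the reconstruction chain.
    batch = np.random.rand(16, 48, 48, 3).astype(np.float32)

    # Describe the input tensor expected by the (hypothetical) model and
    # attach the data to the request.
    infer_input = grpcclient.InferInput("wire_patches", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)
    requested = grpcclient.InferRequestedOutput("track_shower_scores")

    # Blocking remote call: serialization, transfer, and GPU inference happen
    # server-side; the client pays only the round-trip latency.
    result = client.infer(model_name="hit_classifier",
                          inputs=[infer_input],
                          outputs=[requested])
    scores = result.as_numpy("track_shower_scores")
    print(scores.shape)  # e.g. (16, n_classes)

Because each request is an independent network call, many such clients can share one server concurrently; this is what the abstract's ratio of 1 GPU per 68 CPU threads reflects, and it is what lets the GPU pool be scaled elastically to match the processing load.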
Date issued
2021
URI
https://hdl.handle.net/1721.1/142103
Department
Massachusetts Institute of Technology. Department of Physics
Journal
Frontiers in Big Data
Publisher
Frontiers Media SA
Citation
Wang, Michael, Yang, Tingjun, Flechas, Maria Acosta, Harris, Philip, Hawks, Benjamin et al. 2021. "GPU-Accelerated Machine Learning Inference as a Service for Computing in Neutrino Experiments." Frontiers in Big Data, 3.
Version: Final published version

Collections
  • MIT Open Access Articles
