Simple item record

dc.contributor.author: Wang, Michael
dc.contributor.author: Yang, Tingjun
dc.contributor.author: Flechas, Maria Acosta
dc.contributor.author: Harris, Philip
dc.contributor.author: Hawks, Benjamin
dc.contributor.author: Holzman, Burt
dc.contributor.author: Knoepfel, Kyle
dc.contributor.author: Krupa, Jeffrey
dc.contributor.author: Pedro, Kevin
dc.contributor.author: Tran, Nhan
dc.date.accessioned: 2022-04-26T15:08:08Z
dc.date.available: 2022-04-26T15:08:08Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/1721.1/142103
dc.description.abstract: Machine learning algorithms are becoming increasingly prevalent and performant in the reconstruction of events in accelerator-based neutrino experiments. These sophisticated algorithms can be computationally expensive. At the same time, the data volumes of such experiments are rapidly increasing. The demand to process billions of neutrino events with many machine learning algorithm inferences creates a computing challenge. We explore a computing model in which heterogeneous computing with GPU coprocessors is made available as a web service. The coprocessors can be efficiently and elastically deployed to provide the right amount of computing for a given processing task. With our approach, Services for Optimized Network Inference on Coprocessors (SONIC), we integrate GPU acceleration specifically for the ProtoDUNE-SP reconstruction chain without disrupting the native computing workflow. With our integrated framework, we accelerate the most time-consuming task, track and particle shower hit identification, by a factor of 17. This results in a factor of 2.7 reduction in the total processing time when compared with CPU-only production. For this particular task, only 1 GPU is required for every 68 CPU threads, providing a cost-effective solution.
dc.language.iso: en
dc.publisher: Frontiers Media SA
dc.relation.isversionof: 10.3389/FDATA.2020.604083
dc.rights: Creative Commons Attribution 4.0 International license
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Frontiers
dc.title: GPU-Accelerated Machine Learning Inference as a Service for Computing in Neutrino Experiments
dc.type: Article
dc.identifier.citation: Wang, Michael, Yang, Tingjun, Flechas, Maria Acosta, Harris, Philip, Hawks, Benjamin et al. 2021. "GPU-Accelerated Machine Learning Inference as a Service for Computing in Neutrino Experiments." Frontiers in Big Data, 3.
dc.contributor.department: Massachusetts Institute of Technology. Department of Physics
dc.relation.journal: Frontiers in Big Data
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2022-04-26T15:02:51Z
dspace.orderedauthors: Wang, M; Yang, T; Flechas, MA; Harris, P; Hawks, B; Holzman, B; Knoepfel, K; Krupa, J; Pedro, K; Tran, N
dspace.date.submission: 2022-04-26T15:02:53Z
mit.journal.volume: 3
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed
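The performance figures quoted in the abstract (a factor-17 speedup on the dominant task yielding a factor-2.7 reduction in total processing time) are mutually consistent under a simple Amdahl's-law estimate. The sketch below is illustrative only: the ~0.67 fraction of CPU-only time spent in the accelerated task is derived here, not stated in the record.

```python
# Amdahl's-law consistency check of the abstract's reported speedups.
# task_speedup: the factor-17 acceleration of hit identification.
# total_speedup: the reported factor-2.7 reduction in total processing time.
task_speedup = 17.0
total_speedup = 2.7

# Amdahl's law: total = 1 / ((1 - f) + f / task_speedup), where f is the
# fraction of CPU-only processing time spent in the accelerated task.
# Solving for f (this fraction is NOT given in the record; it is implied):
f = (1.0 - 1.0 / total_speedup) / (1.0 - 1.0 / task_speedup)
print(f"implied fraction of time in accelerated task: {f:.2f}")

# Sanity check: substituting f back in should recover the total speedup.
recovered = 1.0 / ((1.0 - f) + f / task_speedup)
print(f"recovered total speedup: {recovered:.2f}")
```

Running this gives an implied fraction of roughly 0.67, i.e. hit identification would have accounted for about two-thirds of the CPU-only processing time for the two reported factors to hold together.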

