
dc.contributor.author: Cai, Tejin
dc.contributor.author: Herner, Kenneth
dc.contributor.author: Yang, Tingjun
dc.contributor.author: Wang, Michael
dc.contributor.author: Acosta Flechas, Maria
dc.contributor.author: Harris, Philip
dc.contributor.author: Holzman, Burt
dc.contributor.author: Pedro, Kevin
dc.contributor.author: Tran, Nhan
dc.date.accessioned: 2023-10-30T19:57:31Z
dc.date.available: 2023-10-30T19:57:31Z
dc.date.issued: 2023-10-27
dc.identifier.uri: https://hdl.handle.net/1721.1/152552
dc.description.abstract: We study the performance of a cloud-based GPU-accelerated inference server to speed up event reconstruction in neutrino data batch jobs. Using detector data from the ProtoDUNE experiment and employing the standard DUNE grid job submission tools, we attempt to reprocess the data by running several thousand concurrent grid jobs, a rate we expect to be typical of current and future neutrino physics experiments. We process most of the dataset with the GPU version of our processing algorithm and the remainder with the CPU version for timing comparisons. We find that a 100-GPU cloud-based server is able to easily meet the processing demand, and that using the GPU version of the event processing algorithm is two times faster than processing these data with the CPU version when comparing to the newest CPUs in our sample. The amount of data transferred to the inference server during the GPU runs can overwhelm even the highest-bandwidth network switches, however, unless care is taken to observe network facility limits or otherwise distribute the jobs to multiple sites. We discuss the lessons learned from this processing campaign and several avenues for future improvements. (en_US)
dc.publisher: Springer International Publishing (en_US)
dc.relation.isversionof: https://doi.org/10.1007/s41781-023-00101-0 (en_US)
dc.rights: Creative Commons Attribution (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: Springer International Publishing (en_US)
dc.title: Accelerating Machine Learning Inference with GPUs in ProtoDUNE Data Processing (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Computing and Software for Big Science. 2023 Oct 27;7(1):11 (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Physics
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.date.updated: 2023-10-29T04:15:43Z
dc.language.rfc3066: en
dc.rights.holder: This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply
dspace.embargo.terms: N
dspace.date.submission: 2023-10-29T04:15:43Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)
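
The abstract above describes an inference-as-a-service workflow: CPU grid jobs ship neural-network inputs over the network to a remote GPU-backed inference server and receive the model outputs in return, which is why network bandwidth becomes a limiting factor. As a rough illustration only, and not code taken from the paper, the sketch below shows what a single remote inference request might look like assuming an NVIDIA Triton-style server; the server URL, model name, tensor names, and shapes are placeholders invented for this example.

# Illustrative sketch of one remote inference request from a CPU job to a
# GPU-backed, Triton-style inference server. The URL, model name, tensor
# names, and shapes are placeholders, not values from the paper.
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to the (hypothetical) cloud inference endpoint over gRPC.
client = grpcclient.InferenceServerClient(url="inference.example.org:8001")

# A batch of dummy detector patches standing in for the real network inputs.
batch = np.random.rand(64, 48, 48, 3).astype(np.float32)

# Describe the input tensor and attach the data.
infer_input = grpcclient.InferInput("main_input", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

# Ask the server to return the classifier scores.
requested_output = grpcclient.InferRequestedOutput("softmax_output")

# One network round trip: the server runs the model on its GPUs and replies.
result = client.infer(model_name="example_model",
                      inputs=[infer_input],
                      outputs=[requested_output])
scores = result.as_numpy("softmax_output")
print(scores.shape)

In such a setup the client side stays lightweight (only input and output tensors cross the network), which matches the paper's observation that aggregate data transfer, rather than GPU capacity, is what must be managed across sites.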

