Show simple item record

dc.contributor.author	Hayrapetyan, A.
dc.contributor.author	Tumasyan, A.
dc.contributor.author	Adam, W.
dc.contributor.author	Andrejkovic, J. W.
dc.contributor.author	Bergauer, T.
dc.contributor.author	Chatterjee, S.
dc.contributor.author	Damanakis, K.
dc.contributor.author	Dragicevic, M.
dc.contributor.author	Hussain, P. S.
dc.contributor.author	Jeitler, M.
dc.contributor.author	Krammer, N.
dc.contributor.author	Li, A.
dc.contributor.author	Liko, D.
dc.contributor.author	Mikulec, I.
dc.contributor.author	Schieck, J.
dc.contributor.author	Schöfbeck, R.
dc.contributor.author	Schwarz, D.
dc.date.accessioned	2024-09-11T18:34:29Z
dc.date.available	2024-09-11T18:34:29Z
dc.date.issued	2024-09-04
dc.identifier.uri	https://hdl.handle.net/1721.1/156703
dc.description.abstract	Computing demands for large scientific experiments, such as the CMS experiment at the CERN LHC, will increase dramatically in the next decades. To complement the future performance increases of software running on central processing units (CPUs), explorations of coprocessor usage in data processing hold great potential and interest. Coprocessors are a class of computer processors that supplement CPUs, often improving the execution of certain functions due to architectural design choices. We explore the approach of Services for Optimized Network Inference on Coprocessors (SONIC) and study the deployment of this as-a-service approach in large-scale data processing. In the studies, we take a data processing workflow of the CMS experiment and run the main workflow on CPUs, while offloading several machine learning (ML) inference tasks onto either remote or local coprocessors, specifically graphics processing units (GPUs). With experiments performed at Google Cloud, the Purdue Tier-2 computing center, and combinations of the two, we demonstrate the acceleration of these ML algorithms individually on coprocessors and the corresponding throughput improvement for the entire workflow. This approach can be easily generalized to different types of coprocessors and deployed on local CPUs without decreasing the throughput performance. We emphasize that the SONIC approach enables high coprocessor usage and the portability to run workflows on different types of coprocessors.	en_US
dc.publisher	Springer International Publishing	en_US
dc.relation.isversionof	https://doi.org/10.1007/s41781-024-00124-1	en_US
dc.rights	Creative Commons Attribution	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Springer International Publishing	en_US
dc.title	Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service	en_US
dc.type	Article	en_US
dc.identifier.citation	The CMS Collaboration, Hayrapetyan, A., Tumasyan, A., et al. Portable Acceleration of CMS Computing Workflows with Coprocessors as a Service. Comput Softw Big Sci 8, 17 (2024).	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Physics
dc.relation.journal	Computing and Software for Big Science	en_US
dc.identifier.mitlicense	PUBLISHER_CC
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2024-09-08T03:08:37Z
dc.language.rfc3066	en
dc.rights.holder	The Author(s)
dspace.embargo.terms	N
dspace.date.submission	2024-09-08T03:08:37Z
mit.journal.volume	8	en_US
mit.journal.issue	17	en_US
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US


