Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/131804.2

dc.contributor.author: Sudarsanam, Nandan
dc.contributor.author: Pitchai Kannu, Balaji
dc.contributor.author: Frey, Daniel D
dc.date.accessioned: 2021-09-20T17:30:20Z
dc.date.available: 2021-09-20T17:30:20Z
dc.date.issued: 2019-02-22
dc.identifier.uri: https://hdl.handle.net/1721.1/131804
dc.description.abstract: This paper explores the use of designed experiments in an online environment. Motivated by real-world examples, we model a scenario where the practitioner is given a finite set of units and needs to select a subset of these which are expended toward a one-shot, multi-factor designed experiment. Following this phase, the designer is left with the remaining set of unused units to implement any learnings from the experiments. With this setting, we answer the key design question of how much to experiment, which translates to choosing the number of replicates for a given design. We construct a Bayesian framework that captures the expected cumulative gain across the entire set of units. We derive theoretical results for the optimal number of replicates for all two-level, full and fractional factorial designs with seven factors or fewer. We conduct simulations that serve as validation of the theoretical results, as well as enabling us to explore scenarios and techniques of analysis that are not captured in the theoretical studies. Our overall results indicate that the optimal allocation of units for experimentation varies from 1 to 20% of the total units available, which is mainly governed by the experimental environment and the total number of units. We conclude that experimenting with the optimal number of replicates recommended by our study can lead to a cumulative improvement which is 80–95% greater than the expected cumulative improvement gained when a practitioner chooses the number of replicates randomly. [en_US]
dc.publisher: Springer London [en_US]
dc.relation.isversionof: https://doi.org/10.1007/s00163-019-00311-x [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: Springer London [en_US]
dc.title: Optimal replicates for designed experiments under the online framework [en_US]
dc.type: Article [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2020-09-24T20:41:39Z
dc.language.rfc3066: en
dc.rights.holder: Springer-Verlag London Ltd., part of Springer Nature
dspace.embargo.terms: Y
dspace.date.submission: 2020-09-24T20:41:39Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed


