Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/132287.2

Simple item record

dc.contributor.author: Hu, Kevin
dc.contributor.author: Demiralp, Çağatay
dc.contributor.author: Gaikwad, Snehalkumar 'Neil' S
dc.contributor.author: Hulsebos, Madelon
dc.contributor.author: Bakker, Michiel A
dc.contributor.author: Zgraggen, Emanuel
dc.contributor.author: Hidalgo, César
dc.contributor.author: Kraska, Tim
dc.contributor.author: Li, Guoliang
dc.contributor.author: Satyanarayan, Arvind
dc.date.accessioned: 2021-09-20T18:21:40Z
dc.date.available: 2021-09-20T18:21:40Z
dc.identifier.uri: https://hdl.handle.net/1721.1/132287
dc.description.abstract: © 2019 Copyright held by the owner/author(s). Researchers currently rely on ad hoc datasets to train automated visualization tools and evaluate the effectiveness of visualization designs. These exemplars often lack the characteristics of real-world datasets, and their one-off nature makes it difficult to compare different techniques. In this paper, we present VizNet: a large-scale corpus of over 31 million datasets compiled from open data repositories and online visualization galleries. On average, these datasets comprise 17 records over 3 dimensions, and across the corpus we find 51% of the dimensions record categorical data, 44% quantitative, and only 5% temporal. VizNet provides the necessary common baseline for comparing visualization design techniques, and developing benchmark models and algorithms for automating visual analysis. To demonstrate VizNet's utility as a platform for conducting online crowdsourced experiments at scale, we replicate a prior study assessing the influence of user task and data distribution on visual encoding effectiveness, and extend it by considering an additional task: outlier detection. To contend with running such studies at scale, we demonstrate how a metric of perceptual effectiveness can be learned from experimental results, and show its predictive power across test datasets. [en_US]
dc.language.iso: en
dc.publisher: Association for Computing Machinery (ACM) [en_US]
dc.relation.isversionof: 10.1145/3290605.3300892 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT web domain [en_US]
dc.title: VizNet: Towards A Large-Scale Visualization Learning and Benchmarking Repository [en_US]
dc.type: Article [en_US]
dc.relation.journal: Conference on Human Factors in Computing Systems - Proceedings [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-01-11T17:35:50Z
dspace.orderedauthors: Hu, K; Demiralp, Ç; Gaikwad, SNS; Hulsebos, M; Bakker, MA; Zgraggen, E; Hidalgo, C; Kraska, T; Li, G; Satyanarayan, A [en_US]
dspace.date.submission: 2021-01-11T17:35:59Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed


