Notice
This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/132287.2
VizNet: Towards A Large-Scale Visualization Learning and Benchmarking Repository
dc.contributor.author | Hu, Kevin | |
dc.contributor.author | Demiralp, Çağatay | |
dc.contributor.author | Gaikwad, Snehalkumar 'Neil' S | |
dc.contributor.author | Hulsebos, Madelon | |
dc.contributor.author | Bakker, Michiel A | |
dc.contributor.author | Zgraggen, Emanuel | |
dc.contributor.author | Hidalgo, César | |
dc.contributor.author | Kraska, Tim | |
dc.contributor.author | Li, Guoliang | |
dc.contributor.author | Satyanarayan, Arvind | |
dc.date.accessioned | 2021-09-20T18:21:40Z | |
dc.date.available | 2021-09-20T18:21:40Z | |
dc.identifier.uri | https://hdl.handle.net/1721.1/132287 | |
dc.description.abstract | © 2019 Copyright held by the owner/author(s). Researchers currently rely on ad hoc datasets to train automated visualization tools and evaluate the effectiveness of visualization designs. These exemplars often lack the characteristics of real-world datasets, and their one-off nature makes it difficult to compare different techniques. In this paper, we present VizNet: a large-scale corpus of over 31 million datasets compiled from open data repositories and online visualization galleries. On average, these datasets comprise 17 records over 3 dimensions and across the corpus, we find 51% of the dimensions record categorical data, 44% quantitative, and only 5% temporal. VizNet provides the necessary common baseline for comparing visualization design techniques, and developing benchmark models and algorithms for automating visual analysis. To demonstrate VizNet’s utility as a platform for conducting online crowdsourced experiments at scale, we replicate a prior study assessing the influence of user task and data distribution on visual encoding effectiveness, and extend it by considering an additional task: outlier detection. To contend with running such studies at scale, we demonstrate how a metric of perceptual effectiveness can be learned from experimental results, and show its predictive power across test datasets. | en_US |
dc.language.iso | en | |
dc.publisher | Association for Computing Machinery (ACM) | en_US |
dc.relation.isversionof | 10.1145/3290605.3300892 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | MIT web domain | en_US |
dc.title | VizNet: Towards A Large-Scale Visualization Learning and Benchmarking Repository | en_US |
dc.type | Article | en_US |
dc.relation.journal | Conference on Human Factors in Computing Systems - Proceedings | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dc.date.updated | 2021-01-11T17:35:50Z | |
dspace.orderedauthors | Hu, K; Demiralp, Ç; Gaikwad, SNS; Hulsebos, M; Bakker, MA; Zgraggen, E; Hidalgo, C; Kraska, T; Li, G; Satyanarayan, A | en_US |
dspace.date.submission | 2021-01-11T17:35:59Z | |
mit.license | OPEN_ACCESS_POLICY | |
mit.metadata.status | Authority Work and Publication Information Needed |