
dc.contributor.author: Lundgard, Alan
dc.contributor.author: Satyanarayan, Arvind
dc.date.accessioned: 2022-07-19T15:38:20Z
dc.date.available: 2022-07-19T15:38:20Z
dc.date.issued: 2022
dc.identifier.uri: https://hdl.handle.net/1721.1/143862
dc.description.abstract: Natural language descriptions sometimes accompany visualizations to better communicate and contextualize their insights, and to improve their accessibility for readers with disabilities. However, it is difficult to evaluate the usefulness of these descriptions, and how effectively they improve access to meaningful information, because we have little understanding of the semantic content they convey, and how different readers receive this content. In response, we introduce a conceptual model for the semantic content conveyed by natural language descriptions of visualizations. Developed through a grounded theory analysis of 2,147 sentences, our model spans four levels of semantic content: enumerating visualization construction properties (e.g., marks and encodings); reporting statistical concepts and relations (e.g., extrema and correlations); identifying perceptual and cognitive phenomena (e.g., complex trends and patterns); and elucidating domain-specific insights (e.g., social and political context). To demonstrate how our model can be applied to evaluate the effectiveness of visualization descriptions, we conduct a mixed-methods evaluation with 30 blind and 90 sighted readers, and find that these reader groups differ significantly on which semantic content they rank as most useful. Together, our model and findings suggest that access to meaningful information is strongly reader-specific, and that research in automatic visualization captioning should orient toward descriptions that more richly communicate overall trends and statistics, sensitive to reader preferences. Our work further opens a space of research on natural language as a data interface coequal with visualization.
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: 10.1109/TVCG.2021.3114770
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: MIT web domain
dc.title: Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content
dc.type: Article
dc.identifier.citation: Lundgard, Alan and Satyanarayan, Arvind. 2022. "Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content." IEEE Transactions on Visualization and Computer Graphics, 28 (1).
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: IEEE Transactions on Visualization and Computer Graphics
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2022-07-19T15:35:50Z
dspace.orderedauthors: Lundgard, A; Satyanarayan, A
dspace.date.submission: 2022-07-19T15:35:51Z
mit.journal.volume: 28
mit.journal.issue: 1
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

