| dc.contributor.author | Lundgard, Alan | |
| dc.contributor.author | Satyanarayan, Arvind | |
| dc.date.accessioned | 2022-07-19T15:38:20Z | |
| dc.date.available | 2022-07-19T15:38:20Z | |
| dc.date.issued | 2022 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/143862 | |
| dc.description.abstract | Natural language descriptions sometimes accompany visualizations to better communicate and contextualize their insights, and to improve their accessibility for readers with disabilities. However, it is difficult to evaluate the usefulness of these descriptions, and how effectively they improve access to meaningful information, because we have little understanding of the semantic content they convey, and how different readers receive this content. In response, we introduce a conceptual model for the semantic content conveyed by natural language descriptions of visualizations. Developed through a grounded theory analysis of 2,147 sentences, our model spans four levels of semantic content: enumerating visualization construction properties (e.g., marks and encodings); reporting statistical concepts and relations (e.g., extrema and correlations); identifying perceptual and cognitive phenomena (e.g., complex trends and patterns); and elucidating domain-specific insights (e.g., social and political context). To demonstrate how our model can be applied to evaluate the effectiveness of visualization descriptions, we conduct a mixed-methods evaluation with 30 blind and 90 sighted readers, and find that these reader groups differ significantly on which semantic content they rank as most useful. Together, our model and findings suggest that access to meaningful information is strongly reader-specific, and that research in automatic visualization captioning should orient toward descriptions that more richly communicate overall trends and statistics, sensitive to reader preferences. Our work further opens a space of research on natural language as a data interface coequal with visualization. | en_US |
| dc.language.iso | en | |
| dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
| dc.relation.isversionof | 10.1109/TVCG.2021.3114770 | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | MIT web domain | en_US |
| dc.title | Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Lundgard, Alan and Satyanarayan, Arvind. 2022. "Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content." IEEE Transactions on Visualization and Computer Graphics, 28 (1). | |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | |
| dc.relation.journal | IEEE Transactions on Visualization and Computer Graphics | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
| eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
| dc.date.updated | 2022-07-19T15:35:50Z | |
| dspace.orderedauthors | Lundgard, A; Satyanarayan, A | en_US |
| dspace.date.submission | 2022-07-19T15:35:51Z | |
| mit.journal.volume | 28 | en_US |
| mit.journal.issue | 1 | en_US |
| mit.license | OPEN_ACCESS_POLICY | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |