Show simple item record

dc.contributor.author: Mei⁎, Catherine
dc.contributor.author: Pollock⁎, Josh
dc.contributor.author: Hajas, Daniel
dc.contributor.author: Zong, Jonathan
dc.contributor.author: Satyanarayan, Arvind
dc.date.accessioned: 2025-11-26T16:47:21Z
dc.date.available: 2025-11-26T16:47:21Z
dc.date.issued: 2025-10-22
dc.identifier.uri: https://hdl.handle.net/1721.1/164074
dc.description: ASSETS ’25, Denver, CO, USA (en_US)
dc.description.abstract: Graphical representations — such as charts and diagrams — have a visual structure that communicates the relationship between visual elements. For instance, we might consider two elements to be connected when there is a line or arrow between them, or for there to be a part-to-whole relationship when one element is contained within the other. Yet, existing screen reader solutions rarely surface this structure for blind and low-vision readers. Recent approaches explore hierarchical trees or adjacency graphs, but these structures capture only parts of the visual structure — containment or direct connections, respectively. In response, we present Benthic, a system that supports perceptually congruent screen reader structures, which align screen reader navigation with a graphic’s visual structure. Benthic models graphical representations as hypergraphs: a relaxed tree structure that allows a single hyperedge to connect a parent to a set of children nodes. In doing so, Benthic is able to capture both hierarchical and adjacent visual relationships in a manner that is domain-agnostic and enables fluid (i.e., concise and reversible) traversal. To evaluate Benthic, we conducted a study with 15 blind participants who were asked to explore two kinds of graphical representations that have previously been studied with sighted readers. We find that Benthic’s perceptual congruence enabled flexible, goal-driven exploration and supported participants in building a clear understanding of each diagram’s structure. (en_US)
dc.language.iso: en
dc.publisher: ACM | Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility (en_US)
dc.relation.isversionof: 10.1145/3663547.3746342 (en_US)
dc.rights: Creative Commons Attribution (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: Association for Computing Machinery (en_US)
dc.title: Benthic: Perceptually Congruent Structures for Accessible Charts and Diagrams (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Catherine Mei, Josh Pollock, Daniel Hajas, Jonathan Zong, and Arvind Satyanarayan. 2025. Benthic: Perceptually Congruent Structures for Accessible Charts and Diagrams. In The 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’25), October 26–29, 2025, Denver, CO, USA. ACM, New York, NY, USA, 17 pages. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (en_US)
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/ConferencePaper (en_US)
eprint.status: http://purl.org/eprint/status/NonPeerReviewed (en_US)
dc.date.updated: 2025-11-26T16:40:03Z
dspace.orderedauthors: Mei⁎, C; Pollock⁎, J; Hajas, D; Zong, J; Satyanarayan, A (en_US)
dspace.date.submission: 2025-11-26T16:40:08Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)

