Show simple item record

dc.contributor.author: Zhang, Zhuohao (Jerry)
dc.contributor.author: Li, Haichang
dc.contributor.author: Yu, Chun Meng
dc.contributor.author: Faruqi, Faraz
dc.contributor.author: Xie, Junan
dc.contributor.author: Kim, Gene
dc.contributor.author: Fan, Mingming
dc.contributor.author: Forbes, Angus
dc.contributor.author: Wobbrock, Jacob
dc.contributor.author: Guo, Anhong
dc.contributor.author: He, Liang
dc.date.accessioned: 2025-11-26T16:52:49Z
dc.date.available: 2025-11-26T16:52:49Z
dc.date.issued: 2025-10-22
dc.identifier.isbn: 979-8-4007-0676-9
dc.identifier.uri: https://hdl.handle.net/1721.1/164075
dc.description: ASSETS ’25, Denver, CO, USA (en_US)
dc.description.abstract: Building 3-D models is challenging for blind and low-vision (BLV) users due to the inherent complexity of 3-D models and the lack of support for non-visual interaction in existing tools. To address this issue, we introduce A11yShape, a novel system designed to help BLV users who possess basic programming skills understand, modify, and iterate on 3-D models. A11yShape leverages LLMs and integrates with OpenSCAD, a popular open-source editor that generates 3-D models from code. Key functionalities of A11yShape include accessible descriptions of 3-D models, version control to track changes in models and code, and a hierarchical representation of model components. Most importantly, A11yShape employs a cross-representation highlighting mechanism to synchronize semantic selections across all model representations: code, semantic hierarchy, AI description, and 3-D rendering. We conducted a multi-session user study with four BLV programmers in which, after an initial tutorial session, participants independently completed 12 distinct models across two testing sessions, producing results they themselves found satisfactory. The results demonstrate that participants were able to comprehend provided 3-D models, as well as independently create and modify 3-D models, tasks that were previously impossible without assistance from sighted individuals. (en_US)
dc.publisher: ACM|The 27th International ACM SIGACCESS Conference on Computers and Accessibility (en_US)
dc.relation.isversionof: https://doi.org/10.1145/3663547.3746362 (en_US)
dc.rights: Creative Commons Attribution (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: Association for Computing Machinery (en_US)
dc.title: A11yShape: AI-Assisted 3-D Modeling for Blind and Low-Vision Programmers (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Zhuohao (Jerry) Zhang, Haichang Li, Chun Meng Yu, Faraz Faruqi, Junan Xie, Gene S-H Kim, Mingming Fan, Angus Forbes, Jacob O. Wobbrock, Anhong Guo, and Liang He. 2025. A11yShape: AI-Assisted 3-D Modeling for Blind and Low-Vision Programmers. In The 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’25), October 26–29, 2025, Denver, CO, USA. ACM, New York, NY, USA, 20 pages. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (en_US)
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/ConferencePaper (en_US)
eprint.status: http://purl.org/eprint/status/NonPeerReviewed (en_US)
dc.date.updated: 2025-11-01T07:47:00Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-11-01T07:47:01Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)


