Show simple item record

dc.contributor.author      Bigham, Jeffrey P.
dc.contributor.author      Yeh, Tom
dc.contributor.author      Jayant, Chandrika
dc.contributor.author      Ji, Hanjie
dc.contributor.author      Miller, Andrew
dc.contributor.author      White, Brandyn
dc.contributor.author      White, Samuel
dc.contributor.author      Little, Danny Greg
dc.contributor.author      Miller, Robert C
dc.contributor.author      Tatarowicz, Aubrey L
dc.date.accessioned        2017-04-20T20:54:55Z
dc.date.available          2017-04-20T20:54:55Z
dc.date.issued             2010-04
dc.identifier.issn         978-1-4503-0045-2
dc.identifier.uri          http://hdl.handle.net/1721.1/108326
dc.description.abstract    The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative to answering visual questions in nearly real-time—asking multiple people on the web. To support answering questions quickly, we introduce a general approach for intelligently recruiting human workers in advance called quikTurkit so that workers are available when new questions arrive. A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems.  en_US
dc.language.iso            en_US
dc.publisher               Association for Computing Machinery (ACM)  en_US
dc.relation.isversionof    http://dx.doi.org/10.1145/1805986.1806020  en_US
dc.rights                  Attribution-Noncommercial-Share Alike 3.0  en_US
dc.rights.uri              http://creativecommons.org/licenses/by-nc-sa/3.0/  en_US
dc.source                  Robert C, Miller  en_US
dc.title                   VizWiz  en_US
dc.type                    Article  en_US
dc.identifier.citation     Bigham, Jeffrey P.; Yeh, Tom; Jayant, Chandrika; Ji, Hanjie; Little, Greg; Miller, Andrew; Miller, Robert C.; Tatarowicz, Aubrey; White, Brandyn and White, Samuel. “VizWiz.” Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A), April 26-27 2010, Raleigh, North Carolina, Association for Computing Machinery (ACM), April 2010.  en_US
dc.contributor.department  Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  en_US
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  en_US
dc.contributor.approver    Miller, Robert  en_US
dc.contributor.mitauthor   Little, Danny Greg
dc.contributor.mitauthor   Miller, Robert C
dc.contributor.mitauthor   Tatarowicz, Aubrey L
dc.relation.journal        Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A) - W4A '10  en_US
dc.eprint.version          Author's final manuscript  en_US
dc.type.uri                http://purl.org/eprint/type/ConferencePaper  en_US
eprint.status              http://purl.org/eprint/status/NonPeerReviewed  en_US
dspace.orderedauthors      Bigham, Jeffrey P.; Yeh, Tom; Jayant, Chandrika; Ji, Hanjie; Little, Greg; Miller, Andrew; Miller, Robert C.; Tatarowicz, Aubrey; White, Brandyn; White, Samuel  en_US
dspace.embargo.terms       N  en_US
dc.identifier.orcid        https://orcid.org/0000-0002-0442-691X
mit.license                OPEN_ACCESS_POLICY  en_US

