Show simple item record

dc.contributor.advisor	Frédo Durand.	en_US
dc.contributor.author	Alsheikh, Sami Thabet	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2017-12-20T17:24:26Z
dc.date.available	2017-12-20T17:24:26Z
dc.date.copyright	2017	en_US
dc.date.issued	2017	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/112830
dc.description	Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.	en_US
dc.description	This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.	en_US
dc.description	Cataloged from student-submitted PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 63-65).	en_US
dc.description.abstract	When a person views a data visualization (graph, chart, infographic, etc.), they read the text and process the images to quickly understand the communicated message. This research works toward emulating this ability in computers. In pursuing this goal, we have explored three primary research objectives: 1) extracting and ranking the most relevant keywords in a data visualization, 2) predicting a sensible topic and multiple subtopics for a data visualization, and 3) extracting relevant pictographs from a data visualization. For the first task, we create an automatic text extraction and ranking system, which we evaluate on 202 MASSVIS data visualizations. For the last two objectives, we curate a more diverse and complex dataset, Visually. We devise a computational approach that automatically outputs the textual and visual elements predicted to be representative of the data visualization's content. Concretely, from the curated Visually dataset of 29K large infographic images sampled across 26 categories and 391 tags, we present an automated two-step approach: first, we use extracted text to predict the text tags indicative of the infographic content, and second, we use these predicted text tags to localize the most diagnostic visual elements (which we call "visual tags"). We report performance on a categorization and multi-label tag prediction problem and compare the results to human annotations. Our results show promise for automated, human-like understanding of data visualizations.	en_US
dc.description.statementofresponsibility	by Sami Thabet Alsheikh.	en_US
dc.format.extent	65 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Automated understanding of data visualizations	en_US
dc.type	Thesis	en_US
dc.description.degree	M. Eng.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc	1015183375	en_US
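The abstract above outlines an automated two-step pipeline: first predict text tags from the text extracted out of an infographic, then use those predicted tags to localize the most diagnostic visual elements ("visual tags"). The record does not include the thesis's implementation, so the Python sketch below is only a toy illustration of that flow: the tag vocabulary, the `Region` structure, `predict_text_tags`, `localize_visual_tags`, and the keyword-overlap scoring are hypothetical stand-ins for the learned models the thesis actually evaluates.

```python
# Toy sketch of the two-step approach described in the abstract.
# Step 1: predict text tags from text extracted out of an infographic.
# Step 2: use the predicted tags to rank candidate regions ("visual tags").
# All names, the tag vocabulary, and the scoring scheme are hypothetical
# stand-ins for the learned models the thesis describes.

from collections import Counter
from dataclasses import dataclass
from typing import List

# Hypothetical tag vocabulary (the real Visually dataset uses 391 tags).
TAG_VOCABULARY = {"health", "fitness", "nutrition", "economy", "technology"}


@dataclass
class Region:
    """A candidate visual element with any words found near it."""
    bbox: tuple                # (x, y, width, height) in pixels
    nearby_words: List[str]


def predict_text_tags(extracted_words: List[str], top_k: int = 3) -> List[str]:
    """Step 1: rank vocabulary tags by how often they appear in the extracted text."""
    counts = Counter(word.lower() for word in extracted_words)
    scored = [(tag, counts[tag]) for tag in TAG_VOCABULARY if counts[tag] > 0]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [tag for tag, _ in scored[:top_k]]


def localize_visual_tags(regions: List[Region], text_tags: List[str]) -> List[Region]:
    """Step 2: keep regions whose nearby text overlaps the predicted tags, best first."""
    tag_set = set(text_tags)

    def score(region: Region) -> int:
        return len(tag_set & {word.lower() for word in region.nearby_words})

    return sorted((r for r in regions if score(r) > 0), key=score, reverse=True)


if __name__ == "__main__":
    words = ["Daily", "nutrition", "and", "fitness", "facts", "nutrition"]
    regions = [
        Region(bbox=(40, 120, 200, 200), nearby_words=["fitness", "icon"]),
        Region(bbox=(300, 50, 150, 150), nearby_words=["logo"]),
    ]
    tags = predict_text_tags(words)
    print("predicted text tags:", tags)
    print("diagnostic regions:", localize_visual_tags(regions, tags))
```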

