Generating annotations for how-to videos using crowdsourcing
Author(s)
Nguyen, Phu
Download: Accepted version (2.013 MB)
Terms of use
Metadata
Abstract
How-to videos can be valuable for learning, but searching for them and following along with them can be difficult. Labeled events, such as the tools used in a how-to video, could improve video indexing, searching, and browsing. We introduce a crowdsourcing annotation tool for Photoshop how-to videos with a three-stage method: (1) gathering timestamps of important events, (2) labeling each event, and (3) capturing how each event affects the task of the tutorial. Our ultimate goal is to generalize this method to other domains of how-to videos. We evaluate our annotation tool with Amazon Mechanical Turk workers to investigate the accuracy, cost, and feasibility of the three-stage method for annotating large numbers of video tutorials. Stages 1 and 3 leave room for improvement, but stage 2 produces accurate labels over 90% of the time using majority voting. We observed that changes to the instructions and interfaces of each task can significantly improve the accuracy of the results.
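The stage-2 aggregation mentioned above can be sketched as a simple majority vote over worker-supplied labels. This is a minimal illustration only; the function name and sample labels are assumptions, not the paper's implementation:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among worker responses.

    Hypothetical helper illustrating majority-vote aggregation
    of crowdsourced event labels; not the authors' actual code.
    """
    if not labels:
        return None
    return Counter(labels).most_common(1)[0][0]

# Three workers label the same video event; the agreed label wins.
worker_labels = ["Brush tool", "Brush tool", "Eraser tool"]
print(majority_vote(worker_labels))  # Brush tool
```

In practice, ties and low-agreement events would need an extra resolution step (e.g. recruiting additional workers), which this sketch omits.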
Date issued
2013-05
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
CHI EA '13: CHI '13 Extended Abstracts on Human Factors in Computing Systems
Publisher
Association for Computing Machinery (ACM)
Citation
Nguyen, Phu. "Generating annotations for how-to videos using crowdsourcing." In CHI EA '13: CHI '13 Extended Abstracts on Human Factors in Computing Systems, Paris, France, April 27-May 2, 2013, pp. 835-840.
Version: Author's final manuscript
ISBN
978-1-4503-1952-2