Grounding spatial prepositions for video search
Author(s)
Tellex, Stefanie A.; Roy, Deb K.
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Metadata
Abstract
Spatial language video retrieval is an important real-world problem that forms a test bed for evaluating semantic structures for natural language descriptions of motion on naturalistic data. Video search by natural language query requires that linguistic input be converted into structures that operate on video in order to find clips that match a query. This paper describes a framework for grounding the meaning of spatial prepositions in video. We present a library of features that can be used to automatically classify a video clip based on whether it matches a natural language query. To evaluate these features, we collected a corpus of natural language descriptions about the motion of people in video clips. We characterize the language used in the corpus, and use it to train and test models for the meanings of the spatial prepositions "to," "across," "through," "out," "along," "towards," and "around." The classifiers can be used to build a spatial language video retrieval system that finds clips matching queries such as "across the kitchen."
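The abstract describes classifiers that decide whether a clip matches a preposition such as "towards" by computing features over a person's motion track relative to a landmark. As a rough illustration of that idea, the sketch below scores a 2D trajectory against a "towards" query using a single hand-picked feature and threshold; the feature, function names, and threshold are assumptions for illustration, whereas the paper trains models over a library of features on annotated data.

```python
# Hypothetical sketch of a feature-based spatial-preposition matcher.
# The "fraction of steps that decrease distance to the landmark" feature
# and the 0.8 threshold are illustrative, not taken from the paper.
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def towards_score(trajectory, landmark):
    """Fraction of consecutive steps that move closer to the landmark."""
    if len(trajectory) < 2:
        return 0.0
    closer = sum(
        1 for a, b in zip(trajectory, trajectory[1:])
        if dist(b, landmark) < dist(a, landmark)
    )
    return closer / (len(trajectory) - 1)

def matches_towards(trajectory, landmark, threshold=0.8):
    # A trained classifier would replace this hand-set threshold.
    return towards_score(trajectory, landmark) >= threshold

# Example: a track heading straight at a landmark at (10, 0).
path = [(0, 0), (2, 0), (5, 0), (8, 0)]
print(matches_towards(path, (10, 0)))  # True
```

A retrieval system in this style would evaluate such a score for every clip in the corpus and return those above threshold, one classifier per preposition.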
Date issued
2009-11
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Media Laboratory
Journal
Proceedings of the 2009 international conference on Multimodal interfaces
Publisher
Association for Computing Machinery
Citation
Tellex, Stefanie, and Deb Roy. “Grounding Spatial Prepositions for Video Search.” Proceedings of the 2009 International Conference on Multimodal Interfaces - ICMI-MLMI ’09. Cambridge, Massachusetts, USA, 2009. 253.
Version: Author's final manuscript
ISBN
978-1-60558-772-1