Gestures Everywhere: A Multimodal Sensor Fusion and Analysis Framework for Pervasive Displays
Author(s)
Gillian, Nicholas Edward; Pfenninger, Sara; Paradiso, Joseph A.; Russell, Spencer Franklin
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Metadata
Abstract
Gestures Everywhere is a dynamic framework for multimodal sensor fusion, pervasive analytics and gesture recognition. Our framework aggregates real-time data from approximately 100 sensors, including RFID readers, depth cameras and RGB cameras, distributed across 30 interactive displays located in key public areas of the MIT Media Lab. Gestures Everywhere fuses the multimodal sensor data using radial basis function particle filters and performs real-time analysis on the aggregated data. This analysis covers key spatio-temporal properties such as presence, location and identity, as well as higher-level inferences including social clustering and gesture recognition. We describe the algorithms and architecture of our system and discuss the lessons learned from the system's deployment.
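For readers curious how a radial basis function particle filter can fuse observations of differing precision, the sketch below gives one minimal, illustrative interpretation in Python: particles hypothesise a person's 2-D location, each sensor modality contributes a Gaussian (RBF) likelihood with its own noise scale, and systematic resampling concentrates particles on mutually consistent hypotheses. Multiplying the per-modality likelihoods treats the sensors as conditionally independent. All names, noise values and the two example observations are assumptions for illustration, not details taken from the paper.

    import numpy as np

    # Minimal sketch of a radial basis function particle filter for 2-D
    # location tracking; illustrative only, not the paper's implementation.
    rng = np.random.default_rng(42)
    N = 500
    particles = rng.uniform(0.0, 10.0, size=(N, 2))  # (x, y) hypotheses in metres
    weights = np.full(N, 1.0 / N)

    def rbf_likelihood(particles, z, sigma):
        # Gaussian RBF kernel: particles near observation z score higher.
        d2 = np.sum((particles - z) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def fuse_and_update(weights, observations):
        # Multiply per-modality RBF likelihoods into the particle weights.
        # `observations` holds (position_estimate, sigma) pairs, one per
        # modality, each with its own assumed noise scale.
        for z, sigma in observations:
            weights = weights * rbf_likelihood(particles, np.asarray(z), sigma)
        weights = weights + 1e-300  # avoid an all-zero weight vector
        return weights / weights.sum()

    def resample(particles, weights):
        # Systematic resampling: keep N particles, drop unlikely hypotheses.
        positions = (rng.random() + np.arange(N)) / N
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
        return particles[idx], np.full(N, 1.0 / N)

    # One fusion step over two hypothetical modalities observing the same person:
    observations = [((3.1, 4.2), 0.2),  # depth-camera estimate: precise
                    ((3.5, 4.0), 1.0)]  # RFID-derived estimate: coarse
    particles += rng.normal(0.0, 0.1, size=particles.shape)  # motion diffusion
    weights = fuse_and_update(weights, observations)
    particles, weights = resample(particles, weights)
    print("fused location estimate:", particles.mean(axis=0))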
Date issued
2014-06
Department
Massachusetts Institute of Technology. Media Laboratory; Massachusetts Institute of Technology. Responsive Environments Group; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Journal
Proceedings of The International Symposium on Pervasive Displays (PerDis '14)
Publisher
Association for Computing Machinery (ACM)
Citation
Nicholas Gillian, Sara Pfenninger, Spencer Russell, and Joseph A. Paradiso. 2014. Gestures Everywhere: A Multimodal Sensor Fusion and Analysis Framework for Pervasive Displays. In Proceedings of The International Symposium on Pervasive Displays (PerDis '14), Sven Gehring (Ed.). ACM, New York, NY, USA, Article 98, 6 pages.
Version: Author's final manuscript
ISBN
978-1-4503-2952-1