Gestures Everywhere: A Multimodal Sensor Fusion and Analysis Framework for Pervasive Displays
Author(s): Gillian, Nicholas Edward; Pfenninger, Sara; Paradiso, Joseph A.; Russell, Spencer Franklin
Gestures Everywhere is a dynamic framework for multimodal sensor fusion, pervasive analytics, and gesture recognition. Our framework aggregates real-time data from approximately 100 sensors, including RFID readers, depth cameras, and RGB cameras, distributed across 30 interactive displays located in key public areas of the MIT Media Lab. Gestures Everywhere fuses the multimodal sensor data using radial basis function particle filters and performs real-time analysis on the aggregated data. This includes key spatio-temporal properties such as presence, location, and identity, as well as higher-level analysis including social clustering and gesture recognition. We describe the algorithms and architecture of our system and discuss the lessons learned from the system's deployment.
Department: Massachusetts Institute of Technology. Media Laboratory; Massachusetts Institute of Technology. Responsive Environments Group; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Proceedings of The International Symposium on Pervasive Displays (PerDis '14)
Association for Computing Machinery (ACM)
Nicholas Gillian, Sara Pfenninger, Spencer Russell, and Joseph A. Paradiso. 2014. Gestures Everywhere: A Multimodal Sensor Fusion and Analysis Framework for Pervasive Displays. In Proceedings of The International Symposium on Pervasive Displays (PerDis '14), Sven Gehring (Ed.). ACM, New York, NY, USA, Article 98, 6 pages.
Author's final manuscript