FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing
Author(s)
Yao, Lining; DeVincenzi, Anthony; Pereira, Anna; Ishii, Hiroshi
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
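The paper does not publish its blur algorithm, but the core idea of depth-based synthetic blur can be sketched as follows: pixels whose sensed depth lies near a focal plane stay sharp, while everything else is diminished with a blur filter. This is a minimal, hypothetical illustration (the function name, the box-blur choice, and all parameters are assumptions, not the authors' implementation):

```python
import numpy as np

def synthetic_blur(image, depth, focal_depth=1.0, tolerance=0.5, kernel=5):
    """Diminish out-of-focus regions of a grayscale frame.

    image: (H, W) float array; depth: (H, W) depth map (e.g. meters
    from a depth sensor). Pixels within `tolerance` of `focal_depth`
    are kept sharp; all others receive a kernel x kernel mean blur.
    All parameter values here are illustrative, not from the paper.
    """
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Separable-free mean filter: sum the shifted copies of the frame.
    blurred = np.zeros(image.shape, dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= kernel * kernel
    # In-focus mask: depth close to the focal plane stays untouched.
    focus = np.abs(depth - focal_depth) <= tolerance
    return np.where(focus, image.astype(float), blurred)
```

A real system would use a smoother kernel (e.g. Gaussian) and a gradual falloff with depth rather than a hard mask, and would drive `focal_depth` from the multimodal cues (voice, gesture, proximity) the paper describes.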
Date issued
2013-07
Department
Massachusetts Institute of Technology. Media Laboratory; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Journal
Proceedings of the 1st symposium on Spatial user interaction (SUI '13)
Publisher
Association for Computing Machinery (ACM)
Citation
Lining Yao, Anthony DeVincenzi, Anna Pereira, and Hiroshi Ishii. 2013. FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing. In Proceedings of the 1st symposium on Spatial user interaction (SUI '13). ACM, New York, NY, USA, 73-76.
Version: Author's final manuscript
ISBN
9781450321419