Connecting Touch and Vision via Cross-Modal Prediction
Author(s)
Li, Yunzhu; Zhu, Jun-Yan; Tedrake, Russ; Torralba, Antonio
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike (Open Access Policy)
Abstract
© 2019 IEEE. Humans perceive the world using multi-modal sensory inputs such as vision, audition, and touch. In this work, we investigate the cross-modal connection between vision and touch. The main challenge in this cross-domain modeling task lies in the significant scale discrepancy between the two: while our eyes perceive an entire visual scene at once, humans can only feel a small region of an object at any given moment. To connect vision and touch, we introduce new tasks of synthesizing plausible tactile signals from visual inputs and of imagining how we interact with objects given tactile data as input. To accomplish our goals, we first equip robots with both visual and tactile sensors and collect a large-scale dataset of corresponding vision and tactile image sequences. To close the scale gap, we present a new conditional adversarial model that incorporates the scale and location information of the touch. Human perceptual studies demonstrate that our model can produce realistic visual images from tactile data and vice versa. Finally, we present both qualitative and quantitative experimental results for different system designs, as well as visualizations of the learned representations of our model.
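To make the idea of a scale- and location-conditioned adversarial model concrete, the sketch below shows a minimal conditional GAN training step in PyTorch for the vision-to-touch direction: a generator maps a visual frame plus a single-channel touch-location map to a tactile image, and a discriminator scores (visual input, touch map, tactile image) triples. This is only an illustrative sketch under assumed names, shapes, and encodings (TouchGenerator, PairDiscriminator, the mask-as-extra-channel conditioning, the dummy data); it is not the architecture described in the paper.

# Minimal, illustrative conditional-GAN sketch for vision -> touch prediction.
# NOT the authors' model; module names, shapes, and the touch scale/location
# encoding (an extra conditioning channel) are assumptions for illustration.
import torch
import torch.nn as nn

class TouchGenerator(nn.Module):
    """Maps a visual frame plus a touch-location mask to a tactile image."""
    def __init__(self, in_ch=4, out_ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1),               # encode
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),  # decode
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(width, out_ch, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, rgb, touch_mask):
        # touch_mask: 1-channel map marking where (and how large) the contact is.
        return self.net(torch.cat([rgb, touch_mask], dim=1))

class PairDiscriminator(nn.Module):
    """Scores (visual frame + touch mask, tactile image) pairs as real or fake."""
    def __init__(self, in_ch=7, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 1, 4, stride=2, padding=1),  # patch-level logits
        )

    def forward(self, rgb, touch_mask, tactile):
        return self.net(torch.cat([rgb, touch_mask, tactile], dim=1))

# One illustrative training step on random (dummy) data.
G, D = TouchGenerator(), PairDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

rgb = torch.randn(2, 3, 64, 64)         # visual frames (dummy)
mask = torch.rand(2, 1, 64, 64)         # touch scale/location as a conditioning map
tactile_real = torch.randn(2, 3, 64, 64)

# Discriminator update: real pairs vs. generated pairs.
fake = G(rgb, mask).detach()
d_real, d_fake = D(rgb, mask, tactile_real), D(rgb, mask, fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator update: try to fool the discriminator on the conditioned pair.
fake = G(rgb, mask)
d_fake = D(rgb, mask, fake)
loss_g = bce(d_fake, torch.ones_like(d_fake))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Feeding the touch mask to both networks is what makes the model conditional on the contact's scale and location; the touch-to-vision direction would follow the same pattern with the roles of the tactile and visual images swapped.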
Date issued
2019-06
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Journal
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Li, Yunzhu, Zhu, Jun-Yan, Tedrake, Russ, and Torralba, Antonio. 2019. "Connecting Touch and Vision via Cross-Modal Prediction." Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2019.
Version: Author's final manuscript