Learning to Act Properly: Predicting and Explaining Affordances from Images
Author(s)
Chuang, Ching-Yao; Li, Jiaman; Torralba, Antonio; Fidler, Sanja
Abstract
We address the problem of affordance reasoning in diverse scenes that appear in the real world. Affordances relate an agent's actions to their effects when taken on the surrounding objects. In our work, we take the egocentric view of the scene and aim to reason about action-object affordances that respect both the physical world and the social norms imposed by society. We also aim to teach artificial agents why some actions should not be taken in certain situations, and what would likely happen if they were taken. We collect a new dataset that builds upon ADE20k [32], referred to as ADE-Affordance, which contains annotations enabling such rich visual reasoning. We propose a model that exploits Graph Neural Networks to propagate contextual information from the scene in order to perform detailed affordance reasoning about each object. Our model is showcased through various ablation studies, pointing to successes and challenges in this complex task.
Keywords
cognition; visualization; neural networks; knowledge based systems; task analysis; data collection; robots
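The abstract mentions using Graph Neural Networks to propagate contextual information between objects in the scene. The snippet below is a minimal, illustrative sketch of one generic message-passing step over a scene graph; the feature array, adjacency list, weight matrices, and dimensions are all assumptions for illustration and are not taken from the paper's implementation.

    # Minimal sketch (not the authors' implementation) of one round of message
    # passing over a scene graph: each object node aggregates transformed
    # features from its neighbours and updates its own representation.
    import numpy as np

    def message_passing_step(node_feats, edges, w_msg, w_update):
        """node_feats: (num_nodes, d) per-object features.
        edges: list of (src, dst) pairs describing scene-graph connectivity.
        w_msg, w_update: (d, d) weight matrices (randomly initialized here)."""
        num_nodes, d = node_feats.shape
        messages = np.zeros((num_nodes, d))
        for src, dst in edges:
            # Each neighbour sends a transformed copy of its features.
            messages[dst] += node_feats[src] @ w_msg
        # Combine each node's own features with the aggregated context.
        return np.tanh(node_feats @ w_update + messages)

    # Toy usage: three objects, connected in both directions.
    d = 8
    feats = np.random.randn(3, d)
    edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
    w_msg, w_update = np.random.randn(d, d), np.random.randn(d, d)
    updated = message_passing_step(feats, edges, w_msg, w_update)
    print(updated.shape)  # (3, 8)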
Date issued
2018-12-17
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher
IEEE
Citation
Chuang, Ching-Yao et al. "Learning to Act Properly: Predicting and Explaining Affordances from Images." 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 18-23, 2018, Salt Lake City, Utah, USA, IEEE, 2018.
Version: Author's final manuscript
ISBN
9781538664209
9781538664216
ISSN
2575-7075
1063-6919