dc.contributor.author    Wang, Yu-Siang
dc.contributor.author    Liu, Chenxi
dc.contributor.author    Zeng, Xiaohui
dc.contributor.author    Yuille, Alan L.
dc.date.accessioned    2018-05-15T15:59:52Z
dc.date.available    2018-05-15T15:59:52Z
dc.date.issued    2018-05-10
dc.identifier.uri    http://hdl.handle.net/1721.1/115375
dc.description.abstract    In this paper, we study the problem of parsing structured knowledge graphs from textual descriptions. In particular, we consider the scene graph representation, which captures objects together with their attributes and relations: this representation has proved useful across a variety of vision and language applications. We begin by introducing an alternative but equivalent edge-centric view of scene graphs that connects to dependency parses. Together with a careful redesign of the label and action space, we combine the two-stage pipeline used in prior work (generic dependency parsing followed by simple post-processing) into a single stage, enabling end-to-end training. The scene graphs generated by our learned neural dependency parser achieve an F-score similarity of 49.67% to ground-truth graphs on our evaluation set, surpassing the best previous approaches by 5%. We further demonstrate the effectiveness of our learned parser on image retrieval applications.    en_US
dc.description.sponsorship    This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.    en_US
dc.language.iso    en_US    en_US
dc.publisher    Center for Brains, Minds and Machines (CBMM)    en_US
dc.relation.ispartofseries    CBMM Memo Series;082
dc.title    Scene Graph Parsing as Dependency Parsing    en_US
dc.type    Technical Report    en_US
dc.type    Working Paper    en_US
dc.type    Other    en_US

