
dc.contributor.author: Yamada, Moyuru
dc.contributor.author: D'Amario, Vanessa
dc.contributor.author: Takemoto, Kentaro
dc.contributor.author: Boix, Xavier
dc.contributor.author: Sasaki, Tomotake
dc.date.accessioned: 2022-02-03T18:30:49Z
dc.date.available: 2022-02-03T18:30:49Z
dc.date.issued: 2022-02-03
dc.identifier.uri: https://hdl.handle.net/1721.1/139843
dc.description.abstract: Transformer-based models achieve great performance on Visual Question Answering (VQA). However, when we evaluate them on systematic generalization, i.e., handling novel combinations of known concepts, their performance degrades. Neural Module Networks (NMNs) are a promising approach for systematic generalization that consists of composing modules, i.e., neural networks that tackle a sub-task. Inspired by Transformers and NMNs, we propose Transformer Module Network (TMN), a novel Transformer-based model for VQA that dynamically composes modules into a question-specific Transformer network. TMNs achieve state-of-the-art systematic generalization performance on three VQA datasets, namely CLEVR-CoGenT, CLOSURE, and GQA-SGL, in some cases improving more than 30% over standard Transformers. [en_US]
dc.description.sponsorship: This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. [en_US]
dc.publisher: Center for Brains, Minds and Machines (CBMM) [en_US]
dc.relation.ispartofseries: CBMM Memo;121
dc.title: Transformer Module Networks for Systematic Generalization in Visual Question Answering [en_US]
dc.type: Article [en_US]
dc.type: Technical Report [en_US]
dc.type: Other [en_US]
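
Below is a minimal sketch of the architectural idea stated in the abstract: a library of Transformer modules, each responsible for a sub-task, dynamically composed into a question-specific network according to a program derived from the question. This is an illustration under stated assumptions, not the authors' implementation; the class names (TransformerModule, TMNSketch), the module names (filter_color, relate, query_shape), the program format, and all dimensions are hypothetical.

```python
# Sketch only: hypothetical names and dimensions, not the TMN paper's code.
import torch
import torch.nn as nn

class TransformerModule(nn.Module):
    """One sub-task module: a small stack of Transformer encoder layers."""
    def __init__(self, d_model=64, nhead=4, num_layers=1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x):
        return self.encoder(x)

class TMNSketch(nn.Module):
    """Composes modules from a shared library into a question-specific stack."""
    def __init__(self, module_names, d_model=64):
        super().__init__()
        self.library = nn.ModuleDict(
            {name: TransformerModule(d_model) for name in module_names}
        )

    def forward(self, visual_tokens, program):
        # `program` is a list of module names derived from the question,
        # e.g. ["filter_color", "relate", "query_shape"] (hypothetical).
        h = visual_tokens
        for name in program:
            h = self.library[name](h)   # chain modules in program order
        return h.mean(dim=1)            # pooled answer representation

# Usage: a batch of 2 images with 10 region tokens each, a 3-step program.
model = TMNSketch(["filter_color", "relate", "query_shape"])
tokens = torch.randn(2, 10, 64)         # (batch, regions, d_model)
out = model(tokens, ["filter_color", "relate", "query_shape"])
print(out.shape)                        # torch.Size([2, 64])
```

A different question yields a different program, so the same module parameters are re-wired into a new network per question; this composition is what distinguishes the approach from a fixed, monolithic Transformer.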