
    • Do Neural Networks for Segmentation Understand Insideness? 

      Villalobos, Kimberly; Štih, Vilim; Ahmadinejad, Amineh; Sundaram, Shobhita; Dozier, Jamell; et al. (Center for Brains, Minds and Machines (CBMM), 2020-04-04)
      The insideness problem is an image segmentation modality that consists of determining which pixels are inside and outside a region. Deep Neural Networks (DNNs) excel in segmentation benchmarks, but it is unclear that they ...
    • On the Capability of Neural Networks to Generalize to Unseen Category-Pose Combinations 

      Madan, Spandan; Henry, Timothy; Dozier, Jamell; Ho, Helen; Bhandari, Nishchal; et al. (Center for Brains, Minds and Machines (CBMM), 2020-07-17)
      Recognizing an object’s category and pose lies at the heart of visual understanding. Recent works suggest that deep neural networks (DNNs) often fail to generalize to category-pose combinations not seen during training. ...
    • Three approaches to facilitate DNN generalization to objects in out-of-distribution orientations and illuminations 

      Sakai, Akira; Sunagawa, Taro; Madan, Spandan; Suzuki, Kanata; Katoh, Takashi; et al. (Center for Brains, Minds and Machines (CBMM), 2022-01-26)
      The training data distribution is often biased towards objects in certain orientations and illumination conditions. While humans have a remarkable capability of recognizing objects in out-of-distribution (OoD) orientations ...
    • Transformer Module Networks for Systematic Generalization in Visual Question Answering 

      Yamada, Moyuru; D'Amario, Vanessa; Takemoto, Kentaro; Boix, Xavier; Sasaki, Tomotake (Center for Brains, Minds and Machines (CBMM), 2022-02-03)
      Transformer-based models achieve great performance on Visual Question Answering (VQA). However, when we evaluate them on systematic generalization, i.e., handling novel combinations of known concepts, their performance ...