Detecting Semantic Parts on Partially Occluded Objects
Author(s)
Wang, Jianyu; Xie, Cihang; Zhang, Zhishuai; Zhu, Jun; Xie, Lingxi; Yuille, Alan L.
Abstract
In this paper, we address the task of detecting semantic parts on partially occluded objects. We consider a scenario in which the model is trained on non-occluded images but tested on occluded images. The motivation is that there are infinitely many occlusion patterns in the real world, which cannot be fully covered by the training data. Models should therefore be inherently robust and adaptive to occlusion rather than fitting the occlusion patterns present in the training data. Our approach detects semantic parts by accumulating the confidence of local visual cues. Specifically, it uses a simple voting method, based on log-likelihood ratio tests and spatial constraints, to combine the evidence from local cues. These cues are called visual concepts and are derived by clustering the internal states of deep networks. We evaluate our voting scheme on the VehicleSemanticPart dataset, which has dense part annotations. We randomly place two, three, or four irrelevant objects onto the target object to generate test images with varying degrees of occlusion. Experiments show that our algorithm outperforms several competitors in semantic part detection when occlusions are present.
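The abstract describes a voting scheme that accumulates log-likelihood ratio scores from local visual cues (visual concepts) under spatial constraints. Below is a minimal Python sketch of that general idea; the function name, array shapes, offset representation, and the positive-score threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def vote_for_part(concept_scores: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Accumulate evidence from local cues into a vote map for one semantic part.

    concept_scores: (n_concepts, H, W) log-likelihood ratio of each visual
        concept firing at each spatial position (hypothetical input).
    offsets: (n_concepts, 2) expected integer (dy, dx) offset from the cue
        position to the part center, learned from non-occluded images.
    Returns an (H, W) vote map; its peak is taken as the detected part location.
    """
    n_concepts, H, W = concept_scores.shape
    votes = np.zeros((H, W), dtype=np.float64)
    for k in range(n_concepts):
        dy, dx = offsets[k]
        for y in range(H):
            for x in range(W):
                score = concept_scores[k, y, x]
                if score <= 0:           # keep only positive evidence for the part
                    continue
                ty, tx = y + dy, x + dx  # location this cue votes for
                if 0 <= ty < H and 0 <= tx < W:
                    votes[ty, tx] += score
    return votes

# Usage with random scores, just to exercise the sketch.
rng = np.random.default_rng(0)
scores = rng.normal(size=(5, 32, 32))
offsets = rng.integers(-3, 4, size=(5, 2))
vote_map = vote_for_part(scores, offsets)
part_yx = np.unravel_index(np.argmax(vote_map), vote_map.shape)
print("detected part location:", part_yx)
```

Because each cue contributes independently, occluding some cues removes their votes but does not corrupt the remaining evidence, which is the intuition behind the robustness to occlusion claimed in the abstract.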
Date issued
2017-09-04
Publisher
Center for Brains, Minds and Machines (CBMM)
Series/Report no.
CBMM Memo Series;078