Segmentation according to natural examples: Learning static segmentation from motion segmentation
Author(s): Kaelbling, Leslie P.; Ross, Michael G.
The segmentation according to natural examples (SANE) algorithm learns to segment objects in static images from video training data. SANE uses background subtraction to find the segmentation of moving objects in videos. This provides object segmentation information for each video frame. The collection of frames and segmentations forms a training set that SANE uses to learn the image and shape properties of the observed motion boundaries. When presented with new static images, the trained model infers segmentations similar to the observed motion segmentations. SANE is a general method for learning environment-specific segmentation models. Because it can automatically generate training data from video, it can adapt to a new environment and new objects with relative ease, an advantage over untrained segmentation methods or those that require human-labeled training data. By using the local shape information in the training data, it outperforms a trained local boundary detector. Its performance is competitive with a trained top-down segmentation algorithm that uses global shape. The shape information it learns from one class of objects can assist the segmentation of other classes.
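The abstract's key step is generating segmentation labels automatically via background subtraction: pixels that differ from a background model are marked as moving, and those per-frame masks become training data. As a rough illustration only (the paper's actual procedure is more sophisticated), a minimal running-average background subtractor might look like this; the function name, parameters, and toy data are all hypothetical:

```python
import numpy as np

def motion_masks(frames, alpha=0.05, thresh=25):
    """Hypothetical sketch of background subtraction, not the paper's method.

    frames: iterable of 2-D grayscale uint8 arrays (a video).
    Returns one boolean mask per frame: True marks "moving object" pixels,
    the kind of automatic label SANE-style training could consume.
    """
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(np.float64)
        if background is None:
            # Initialize the background model from the first frame.
            background = f.copy()
        # Pixels far from the running background estimate count as moving.
        mask = np.abs(f - background) > thresh
        # Slowly blend the current frame into the background model.
        background = (1 - alpha) * background + alpha * f
        masks.append(mask)
    return masks

# Toy video: a bright 3x3 square sliding across a dark 10x10 scene.
frames = []
for t in range(5):
    img = np.zeros((10, 10), dtype=np.uint8)
    img[3:6, t:t + 3] = 200
    frames.append(img)

masks = motion_masks(frames)
```

Each mask highlights only the pixels that changed since the background estimate, so the square's leading and trailing edges are labeled as it moves; collecting (frame, mask) pairs over many frames yields the kind of environment-specific training set the abstract describes.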
Department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Publisher: Institute of Electrical and Electronics Engineers
Citation: Ross, M.G., and L.P. Kaelbling. "Segmentation According to Natural Examples: Learning Static Segmentation from Motion Segmentation." IEEE Transactions on Pattern Analysis and Machine Intelligence 31.4 (2009): 661-676. ©2009 IEEE.
Version: Final published version
INSPEC Accession Number: 10476222