dc.contributor.advisor | John W. Fisher, III. | en_US |
dc.contributor.author | Chang, Jason, Ph. D. Massachusetts Institute of Technology | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2010-05-25T20:53:51Z | |
dc.date.available | 2010-05-25T20:53:51Z | |
dc.date.copyright | 2009 | en_US |
dc.date.issued | 2009 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/55147 | |
dc.description | Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (p. 109-112). | en_US |
dc.description.abstract | The work in this thesis focuses on two main computer vision research topics: image segmentation and texture modeling. Information-theoretic measures have been applied to image segmentation algorithms for the past decade. In previous work, common measures such as mutual information or the J divergence have been used. Algorithms typically differ both in the measure they use and in the features they use to segment an image. When both the information measure and the features change, it is difficult to determine which algorithm actually performs better and why. Though we do not provide a solution to this problem, we do compare and contrast three distances under two different measures. This thesis considers two forms of information-theoretic image segmentation algorithms that have previously been considered, denoted here as the label method and the conditional method. Gradient ascent velocities are derived for a general Ali-Silvey distance for both methods, and a unique bijective mapping is shown to exist between the two methods when the Ali-Silvey distance takes on a specific form. While the conditional method is more commonly considered, it is implicitly limited to two-region segmentations by construction. Using the derived mapping, one can easily extend a binary segmentation algorithm based on the conditional method to a multiregion segmentation algorithm based on the label method. The importance of initializations and local extrema is also considered, and a method of multiple random initializations is shown to produce better results. | en_US |
dc.description.abstract | (cont.) Additionally, segmentation results and methods for comparing the utility of the different measures are presented. This thesis also presents a novel texture model for representing textured regions with smooth variations in orientation and scale. By utilizing the steerable pyramid of Simoncelli and Freeman, the textured regions of natural images are decomposed into explicit local attributes of contrast, bias, scale, and orientation. Once found, smoothness in these attributes is imposed via estimation of Markov random fields. This combination allows for demonstrable improvements in common scene analysis applications, including segmentation, reflectance and shading estimation, and estimation of the radiometric response function from a single grayscale image. | en_US |
dc.description.statementofresponsibility | by Jason Chang. | en_US |
dc.format.extent | 112 p. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Extracting orientation and scale from smoothly varying textures with application to segmentation | en_US |
dc.type | Thesis | en_US |
dc.description.degree | S.M. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 599958937 | en_US |