
dc.contributor.advisor: H. Sebastian Seung, Frédo Durand and Nir Shavit. [en_US]
dc.contributor.author: Zlateski, Aleksandar [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2016-12-22T15:16:21Z
dc.date.available: 2016-12-22T15:16:21Z
dc.date.copyright: 2016 [en_US]
dc.date.issued: 2016 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/105955
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. [en_US]
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. [en_US]
dc.description: Cataloged from student-submitted PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 139-145). [en_US]
dc.description.abstract: I present a set of fast and scalable algorithms for segmenting very large 3D images of brain tissue. Light and electron microscopy can now produce terascale 3D images within hours. Extracting information about the shapes and connectivity of neurons requires fast and accurate image segmentation algorithms. Due to the sheer size of the problem, traditional approaches can be computationally infeasible. I focus on a segmentation pipeline that breaks the segmentation problem into multiple stages, each of which can be improved independently. In the first step of the pipeline, convolutional neural networks are used to predict segment boundaries. A watershed transform is then used to obtain an over-segmentation, which is subsequently reduced using agglomerative clustering algorithms. Finally, manual or computer-assisted proofreading is done by experts. In this thesis, I revisit the traditional approaches for training and applying convolutional neural networks, and propose:

- A fast and scalable 3D convolutional network training algorithm suited for multi-core and many-core shared-memory machines. The two main components of the algorithm are: (1) minimizing the required computation by using FFT-based convolution with memoization, and (2) a parallelization approach that can utilize a large number of CPUs while minimizing any required synchronization.
- A high-throughput inference algorithm that can utilize all available computational resources, both CPUs and GPUs. I introduce a set of highly parallel algorithms for different layer types and architectures, and show how to combine them to achieve very high throughput.

Additionally, I study the theoretical properties of the watershed transform of edge-weighted graphs and propose a linear-time algorithm. I also propose a set of modifications to the standard algorithm and a quasi-linear agglomerative clustering algorithm that can greatly reduce the over-segmentation produced by the standard watershed algorithm. [en_US]
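A note on the two algorithmic ideas named in the abstract. First, "FFT-based convolution with memoization": the saving comes from computing one forward FFT per input feature map and reusing it for every filter, rather than recomputing it per input/output pair. The sketch below is a minimal NumPy illustration under assumed array shapes, not the thesis's implementation; the class name FFTConvLayer is hypothetical.

```python
import numpy as np

class FFTConvLayer:
    """Minimal sketch of FFT-based 3D convolution with memoization."""

    def __init__(self, kernels):
        # kernels: (n_out, n_in, kz, ky, kx)
        self.kernels = kernels

    def forward(self, inputs):
        # inputs: (n_in, z, y, x) -> (n_out, z-kz+1, y-ky+1, x-kx+1)
        fft_shape = inputs.shape[1:]
        k_shape = self.kernels.shape[2:]
        # memoization: one forward FFT per input feature map, reused below
        in_ffts = [np.fft.rfftn(x, fft_shape) for x in inputs]
        # the tail of a circular convolution is wrap-free, i.e. "valid"
        valid = tuple(slice(k - 1, n) for k, n in zip(k_shape, fft_shape))
        outputs = []
        for out_kernels in self.kernels:        # one output map at a time
            acc = np.zeros_like(in_ffts[0])     # accumulate in Fourier domain
            for in_fft, kernel in zip(in_ffts, out_kernels):
                acc += in_fft * np.fft.rfftn(kernel, fft_shape)
            outputs.append(np.fft.irfftn(acc, fft_shape)[valid])
        return np.stack(outputs)

# e.g. 3 input maps, 2 output maps, 5^3 kernels on a 32^3 volume:
# FFTConvLayer(np.random.randn(2, 3, 5, 5, 5)).forward(
#     np.random.randn(3, 32, 32, 32)).shape == (2, 28, 28, 28)
```

Second, reducing a watershed over-segmentation by merging regions across high-affinity edges can be sketched with a union-find structure; sorting dominates the cost, so the procedure is quasi-linear in the number of edges. This is a generic single-linkage-style schematic under an assumed edge format, not the thesis's exact merge rule.

```python
def agglomerate(n_nodes, edges, threshold):
    """Merge segments across edges in decreasing-affinity order.

    edges: iterable of (affinity, u, v) tuples; merging stops once
    affinities fall below `threshold`.
    """
    parent = list(range(n_nodes))

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for affinity, u, v in sorted(edges, reverse=True):
        if affinity < threshold:
            break                       # all remaining edges are weaker
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv             # merge the two segments
    return [find(x) for x in range(n_nodes)]
```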
dc.description.statementofresponsibility: by Aleksandar Zlateski. [en_US]
dc.format.extent: 145 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: Scalable algorithms for semi-automatic segmentation of electron microscopy images of the brain tissue [en_US]
dc.type: Thesis [en_US]
dc.description.degree: Ph. D. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 965446890 [en_US]

