
dc.contributor.advisor: Aleksander Mądry (en_US)
dc.contributor.author: Trinh, Loc Quang (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.date.accessioned: 2019-12-05T18:04:59Z
dc.date.available: 2019-12-05T18:04:59Z
dc.date.copyright: 2019 (en_US)
dc.date.issued: 2019 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/123128
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. (en_US)
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019 (en_US)
dc.description: Cataloged from student-submitted PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 61-63). (en_US)
dc.description.abstract: Layerwise training presents an alternative approach to end-to-end back-propagation for training deep convolutional neural networks. Although previous work was unsuccessful in demonstrating the viability of layerwise training, especially on large-scale datasets such as ImageNet, recent work has shown that layerwise training of specific architectures can yield highly competitive performance. On ImageNet, layerwise-trained networks perform comparably to many state-of-the-art end-to-end trained networks. In this thesis, we compare the performance gap between the two training procedures across a wide range of network architectures and further analyze the possible limitations of layerwise training. Our results show that layerwise training quickly saturates after a certain critical layer because the early layers of the network overfit. We discuss several approaches we took to address this issue and to help layerwise training improve across multiple architectures. From a fundamental standpoint, this study emphasizes the need to open the black box that is the modern deep neural network and to investigate the interactions between intermediate hidden layers, all through the lens of layerwise training. (en_US)
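
For context on the training procedure described in the abstract, the following is a minimal sketch of greedy layerwise training, in which each convolutional block is trained with its own auxiliary classifier while earlier blocks stay frozen. The block definitions, auxiliary-head design, and hyperparameters below are illustrative assumptions and are not taken from the thesis.

```python
# Minimal sketch of greedy layerwise training (illustrative; assumes 32x32 RGB
# inputs with 10 classes, e.g. a CIFAR-10-style loader). Not the thesis's exact setup.
import torch
import torch.nn as nn

# Convolutional blocks trained one at a time, each with its own auxiliary classifier.
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
])

def make_aux_head(channels, num_classes=10):
    # Auxiliary classifier attached to the output of the block currently being trained.
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(channels, num_classes))

def train_layerwise(blocks, loader, channels=(64, 128, 256), epochs_per_block=1):
    frozen = []  # blocks already trained; kept fixed while later blocks train
    for k, block in enumerate(blocks):
        head = make_aux_head(channels[k])
        opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()),
                              lr=0.1, momentum=0.9)
        for _ in range(epochs_per_block):
            for x, y in loader:
                with torch.no_grad():            # earlier blocks receive no gradient
                    for f in frozen:
                        x = f(x)
                logits = head(block(x))          # only block k and its head are updated
                loss = nn.functional.cross_entropy(logits, y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        for p in block.parameters():             # freeze block k before moving on
            p.requires_grad_(False)
        frozen.append(block)
    return frozen
```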
dc.description.statementofresponsibility: by Loc Quang Trinh (en_US)
dc.format.extent: 63 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science (en_US)
dc.title: Greedy layerwise training of convolutional neural networks (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1128279897 (en_US)
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2019-12-05T18:04:58Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)

