DSpace@MIT

Exploring neural network architectures for acoustic modeling

Author(s)
Zhang, Yu, Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Full printable version (10.09 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
James R. Glass.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Deep neural network (DNN)-based acoustic models (AMs) have significantly improved automatic speech recognition (ASR) on many tasks. However, ASR performance still suffers from speaker and environment variability, especially under low-resource, distant microphone, noisy, and reverberant conditions. The goal of this thesis is to explore novel neural architectures that can effectively improve ASR performance.

In the first part of the thesis, we present a well-engineered, efficient open-source framework to enable the creation of arbitrary neural networks for speech recognition. We first design essential components to simplify the creation of a neural network with recurrent loops. Next, we propose several algorithms to speed up neural network training based on this framework. We demonstrate the flexibility and scalability of the toolkit across different benchmarks.

In the second part of the thesis, we propose several new neural models to reduce ASR word error rates (WERs) using the toolkit we created. First, we formulate a new neural architecture loosely inspired by humans to process low-resource languages. Second, we demonstrate a way to enable very deep neural network models by adding more non-linearities and expressive power while keeping the model optimizable and generalizable. Experimental results demonstrate that our approach outperforms several ASR baselines and model variants, yielding a 10% relative WER gain. Third, we incorporate these techniques into an end-to-end recognition model. We experiment with the Wall Street Journal ASR task and achieve 10.5% WER without any dictionary or language model, an 8.5% absolute improvement over the best published result.
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 121-132).
 
Date issued
2017
URI
http://hdl.handle.net/1721.1/113981
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses
