dc.contributor.advisor | Fredo Durand. | en_US |
dc.contributor.author | Punwaney, Nikhil Narendra | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2018-12-18T19:47:19Z | |
dc.date.available | 2018-12-18T19:47:19Z | |
dc.date.copyright | 2018 | en_US |
dc.date.issued | 2018 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/119723 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. | en_US |
dc.description | This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. | en_US |
dc.description | Cataloged from student-submitted PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (page 53). | en_US |
dc.description.abstract | In the seventeenth century, philosophers such as Leibniz and Descartes put forward proposals for codes to relate words between languages. The first patents for "translating machines" were applied for in the mid-1930s. Up to the 1980s, most Natural Language Processing (NLP) systems were based on complex sets of hand-written rules. At that time, however, the introduction of machine learning algorithms for language processing revolutionized NLP.[5] In 2008, Collobert and Weston exhibited the power of pre-trained word embeddings in a paper called A unified architecture for natural language processing, which highlights word embeddings for their effectiveness in downstream tasks. They also discuss a neural network architecture that many of today's approaches are built upon. In 2013, Mikolov created word2vec, a toolkit that enabled the training and use of pre-trained embeddings. In 2014, Pennington introduced GloVe, a competitive set of pre-trained embeddings. To start, a single word or group of words can be converted into a vector. This vector can be created using the skip-gram method, which predicts the words likely to appear nearby; the LSTM-RNN method, which forms semantic representations of sentences by accumulating information as it iterates through a sentence; single convolutional neural networks; and several other methods. Using these techniques, we build a Similarity Engine that provides machine-learning-based content search and classification of data. | en_US |
dc.description.statementofresponsibility | by Nikhil Narendra Punwaney. | en_US |
dc.format.extent | 53 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Building a similarity engine | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 1078639078 | en_US |