
dc.contributor.advisor: Fredo Durand
dc.contributor.author: Punwaney, Nikhil Narendra
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2018-12-18T19:47:19Z
dc.date.available: 2018-12-18T19:47:19Z
dc.date.copyright: 2018
dc.date.issued: 2018
dc.identifier.uri: http://hdl.handle.net/1721.1/119723
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description: Cataloged from student-submitted PDF version of thesis.
dc.description: Includes bibliographical references (page 53).
dc.description.abstract: In the seventeenth century, philosophers such as Leibniz and Descartes put forward proposals for codes to relate words between languages. The first patents for "translating machines" were applied for in the mid-1930s. Up to the 1980s, most natural language processing (NLP) systems were based on complex sets of hand-written rules; at that time, however, the introduction of machine learning algorithms for language processing revolutionized NLP [5]. In 2008, Collobert and Weston demonstrated the power of pre-trained word embeddings in a paper titled "A unified architecture for natural language processing," which highlighted the value of word embeddings in downstream tasks and described a neural network architecture that many of today's approaches build upon. In 2013, Mikolov created word2vec, a toolkit that enabled the training and use of pre-trained embeddings. In 2014, Pennington introduced GloVe, a competitive set of pre-trained embeddings. As a starting point, a single word or group of words can be converted into a vector. This vector can be created using the skip-gram method, which predicts the words likely to appear nearby; the LSTM-RNN method, which forms semantic representations of sentences by learning more about a sentence as it iterates through it; single convolutional neural networks; and several other methods. Using these techniques, we build a Similarity Engine that provides machine-learning-based content search and classification of data (see the sketch after this record).
dc.description.statementofresponsibility: by Nikhil Narendra Punwaney.
dc.format.extent: 53 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science.
dc.title: Building a similarity engine
dc.type: Thesis
dc.description.degree: M. Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1078639078
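
The abstract describes a pipeline of embedding text as vectors and comparing those vectors for content search. The following is a minimal illustrative sketch of that idea, not the thesis's actual implementation: it assumes gensim's packaged pre-trained GloVe vectors ("glove-wiki-gigaword-50") and simple word-vector averaging; the embed and cosine helpers, the corpus, and the query are hypothetical examples introduced here.

import numpy as np
import gensim.downloader as api

# Pre-trained 50-dimensional GloVe vectors (an assumption; the thesis
# does not specify which embeddings or library were used).
vectors = api.load("glove-wiki-gigaword-50")

def embed(text: str) -> np.ndarray:
    """Average the word vectors of all in-vocabulary tokens."""
    tokens = [t for t in text.lower().split() if t in vectors]
    if not tokens:
        return np.zeros(vectors.vector_size)
    return np.mean([vectors[t] for t in tokens], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two document vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Hypothetical corpus and query used only to exercise the sketch.
corpus = [
    "machine learning based content search",
    "classification of data",
    "recipes for chocolate cake",
]
query = "search documents with machine learning"
q = embed(query)
ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
print(ranked[0])  # the content-search sentence should rank first

Averaging word vectors is the simplest of the methods the abstract lists; the skip-gram, LSTM-RNN, and convolutional approaches it mentions would replace the embed step with a learned sentence encoder while leaving the similarity ranking unchanged.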

