dc.contributor.advisor | Cynthia Breazeal. | en_US |
dc.contributor.author | Moreno, Felipe (Felipe I.) | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2021-05-24T19:52:20Z | |
dc.date.available | 2021-05-24T19:52:20Z | |
dc.date.copyright | 2021 | en_US |
dc.date.issued | 2021 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/130700 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021 | en_US |
dc.description | Cataloged from the official PDF of thesis. | en_US |
dc.description | Includes bibliographical references (pages 95-102). | en_US |
dc.description.abstract | We have developed a framework to analyze the decisions of deep neural networks trained on facial videos, and we test it on Automatic Depression Detection (ADD). We start from deep convolutional neural networks (DCNNs) pre-trained on action-recognition datasets and fine-tune them on the facial videos. We then interpret the models' saliency maps by analyzing face regions and the temporal semantics of expressions. Our framework generates both visual and quantitative explanations of a model's decisions. At the same time, our video-based modeling improves on previous single-face benchmarks for visual ADD. We conclude that we have developed the ability to generate hypotheses from a facial model's decisions and improved ADD's predictive performance. | en_US |
dc.description.statementofresponsibility | by Felipe Moreno. | en_US |
dc.format.extent | 102 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Expresso-AI : a framework for explainable video based deep learning models through gestures and expressions | en_US |
dc.title.alternative | Framework for explainable video based deep learning models through gestures and expressions | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1251800404 | en_US |
dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2021-05-24T19:52:20Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |
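For illustration only, below is a minimal sketch of the kind of pipeline the abstract above summarizes: fine-tuning a 3D CNN pre-trained on an action-recognition dataset on facial video clips, then computing a simple gradient-based saliency map over frames. The model choice (torchvision's r3d_18 with Kinetics-400 weights), the binary depression label head, and all hyperparameters are assumptions for the sketch, not the thesis's actual setup.

```python
# Hypothetical sketch, not the thesis's implementation: fine-tune an
# action-recognition-pretrained 3D CNN on face clips and compute an
# input-gradient saliency map for one clip.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# 3D ResNet pre-trained on the Kinetics-400 action-recognition dataset (assumed backbone).
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # assumed binary depressed / not-depressed head
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(clips, labels):
    """One fine-tuning step on a batch of face clips shaped (B, 3, T, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(clips.to(device)), labels.to(device))
    loss.backward()
    optimizer.step()
    return loss.item()

def saliency_map(clip, target_class):
    """Input-gradient saliency: |d score / d pixel|, maxed over RGB channels.
    Returns a (T, H, W) map highlighting the frames and face regions that
    most influence the prediction for `target_class`."""
    model.eval()
    clip = clip.unsqueeze(0).to(device).requires_grad_(True)
    score = model(clip)[0, target_class]
    score.backward()
    return clip.grad.abs().max(dim=1).values.squeeze(0).cpu()
```

In this kind of setup, the per-frame saliency maps would then be aggregated over detected face regions and over time to produce the visual and quantitative explanations the abstract describes.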