Show simple item record

dc.contributor.advisor    Cynthia Breazeal    en_US
dc.contributor.author    Moreno, Felipe (Felipe I.)    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.    en_US
dc.date.accessioned    2021-05-24T19:52:20Z
dc.date.available    2021-05-24T19:52:20Z
dc.date.copyright    2021    en_US
dc.date.issued    2021    en_US
dc.identifier.uri    https://hdl.handle.net/1721.1/130700
dc.description    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021    en_US
dc.description    Cataloged from the official PDF of thesis.    en_US
dc.description    Includes bibliographical references (pages 95-102).    en_US
dc.description.abstract    We have developed a framework to analyze the decisions of deep neural networks trained on facial videos, and we apply it to Automatic Depression Detection (ADD). We start from Deep Convolutional Neural Networks (DCNNs) pre-trained on action recognition datasets and fine-tune them on the facial videos. We then interpret the model's saliency maps by analyzing face regions and temporal expression semantics. Our framework generates both visual and quantitative explanations of the model's decisions. At the same time, our video-based modeling improves on previous single-face benchmarks for visual ADD. We conclude that we can generate hypotheses from a facial model's decisions and that we have improved ADD's predictive performance.    en_US
dc.description.statementofresponsibility    by Felipe Moreno.    en_US
dc.format.extent    102 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Electrical Engineering and Computer Science.    en_US
dc.title    Expresso-AI : a framework for explainable video based deep learning models through gestures and expressions    en_US
dc.title.alternative    Framework for explainable video based deep learning models through gestures and expressions    en_US
dc.type    Thesis    en_US
dc.description.degree    M. Eng.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.identifier.oclc    1251800404    en_US
dc.description.collection    M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science    en_US
dspace.imported    2021-05-24T19:52:20Z    en_US
mit.thesis.degree    Master    en_US
mit.thesis.department    EECS    en_US
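
The abstract above describes fine-tuning a video DCNN that was pre-trained on action recognition data and then interpreting the model's decisions through saliency maps. The following is a minimal, hypothetical PyTorch sketch of that kind of pipeline; the backbone (torchvision's r3d_18 pre-trained on Kinetics-400), the binary classification head, the hyperparameters, and the plain gradient saliency are illustrative assumptions, not the thesis's exact models or attribution method.

# Hypothetical sketch: fine-tune an action-recognition 3D CNN on facial video
# clips for binary depression detection, then compute a simple gradient
# saliency map. Model choice and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# Video backbone pre-trained on an action-recognition dataset (Kinetics-400).
model = r3d_18(weights="KINETICS400_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # depressed vs. not depressed

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(clips, labels):
    """One fine-tuning step on facial video clips of shape (N, C, T, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(clips), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def saliency_map(clip, target_class):
    """Gradient of the target logit w.r.t. one input clip (1, C, T, H, W)."""
    model.eval()
    clip = clip.clone().requires_grad_(True)
    score = model(clip)[0, target_class]
    score.backward()
    return clip.grad.abs().amax(dim=1)  # (1, T, H, W): per-frame saliency

In the setting the abstract describes, the per-frame saliency magnitudes would then be pooled over face regions and over time to produce the visual and quantitative explanations of the model's decisions.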

