dc.contributor.author: Tavabi, N.
dc.contributor.author: Stück, D.
dc.contributor.author: Signorini, A.
dc.contributor.author: Karjadi, C.
dc.contributor.author: Hanai, T. A.
dc.contributor.author: Sandoval, M.
dc.contributor.author: Lemke, C.
dc.contributor.author: Glass, J.
dc.contributor.author: Hardy, S.
dc.contributor.author: Lavallee, M.
dc.contributor.author: Wasserman, B.
dc.contributor.author: Ang, T. F. A.
dc.contributor.author: Nowak, C. M.
dc.contributor.author: Kainkaryam, R.
dc.contributor.author: Foschini, L.
dc.contributor.author: Au, Rhoda
dc.date.accessioned: 2022-07-18T12:03:45Z
dc.date.available: 2022-07-18T12:03:45Z
dc.date.issued: 2022-07-13
dc.identifier.uri: https://hdl.handle.net/1721.1/143781
dc.description.abstract [en_US]:
Background: Although patients with Alzheimer's disease and other cognitive-related neurodegenerative disorders may benefit from early detection, development of a reliable diagnostic test has remained elusive. The penetration of digital voice-recording technologies and the multiple cognitive processes deployed when constructing spoken responses might offer an opportunity to predict cognitive status.
Objective: To determine whether cognitive status might be predicted from voice recordings of neuropsychological testing.
Design: Comparison of acoustic and (para)linguistic variables from low-quality automated transcriptions of neuropsychological testing (n = 200) versus variables from high-quality manual transcriptions (n = 127). We trained a logistic regression classifier to predict cognitive status and tested it against actual diagnoses.
Setting: Observational cohort study.
Participants: 146 participants in the Framingham Heart Study.
Measurements: Acoustic and either paralinguistic variables (e.g., speaking time) from automated transcriptions, or linguistic variables (e.g., phrase complexity) from manual transcriptions.
Results: Models based on demographic features alone were not robust (area under the receiver operating characteristic curve [AUROC] 0.60). Adding clinical and standard acoustic features boosted the AUROC to 0.81. Further inclusion of transcription-related features yielded an AUROC of 0.90.
Conclusions: The use of voice-based digital biomarkers derived from automated processing methods, combined with standard patient screening, might constitute a scalable way to enable early detection of dementia.
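The modelling setup described in the abstract (a logistic regression classifier over concatenated demographic, acoustic, and transcription-derived features, evaluated by AUROC) can be sketched roughly as below. This is a minimal illustration only, not the authors' code: the feature blocks, their sizes, and the cross-validation scheme are assumptions, and random placeholders stand in for the actual Framingham Heart Study data.

```python
# Minimal sketch (assumed setup, not the study's pipeline): logistic regression
# over demographic + acoustic + transcription-derived features, scored by AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder feature blocks (n participants x features); the real study used
# demographic/clinical variables plus acoustic and transcription-based features.
n = 146
demographic = rng.normal(size=(n, 3))     # e.g., age, sex, education (hypothetical)
acoustic = rng.normal(size=(n, 40))       # e.g., pitch, pause, energy statistics
transcription = rng.normal(size=(n, 25))  # e.g., speaking time, phrase complexity
y = rng.integers(0, 2, size=n)            # cognitive status label (0 = normal, 1 = impaired)

X = np.hstack([demographic, acoustic, transcription])

# Standardize features, then fit an L2-regularized logistic regression.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated predicted probabilities give an out-of-sample AUROC estimate.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print(f"AUROC: {roc_auc_score(y, proba):.2f}")
```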
dc.publisher: Springer International Publishing [en_US]
dc.relation.isversionof: https://doi.org/10.14283/jpad.2022.66 [en_US]
dc.rights: Creative Commons Attribution [en_US]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ [en_US]
dc.source: Springer International Publishing [en_US]
dc.title: Cognitive Digital Biomarkers from Automated Transcription of Spoken Language [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Tavabi, N., Stück, D., Signorini, A., Karjadi, C., Hanai, T. A. et al. 2022. "Cognitive Digital Biomarkers from Automated Transcription of Spoken Language."
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2022-07-17T03:16:12Z
dc.language.rfc3066: en
dc.rights.holder: The Author(s)
dspace.embargo.terms: N
dspace.date.submission: 2022-07-17T03:16:11Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed [en_US]

