Show simple item record

dc.contributor.author	Zeulner, Tobias
dc.contributor.author	Hagerer, Gerhard Johann
dc.contributor.author	Müller, Moritz
dc.contributor.author	Vazquez, Ignacio
dc.contributor.author	Gloor, Peter A.
dc.date.accessioned	2024-04-26T13:36:29Z
dc.date.available	2024-04-26T13:36:29Z
dc.date.issued	2024-04-12
dc.identifier.issn	2078-2489
dc.identifier.uri	https://hdl.handle.net/1721.1/154293
dc.description.abstract	Current methods for assessing individual well-being in team collaboration at the workplace often rely on manually collected surveys. This limits continuous real-world data collection and proactive measures to improve team member workplace satisfaction. We propose a method to automatically derive social signals related to individual well-being in team collaboration from raw audio and video data collected in teamwork contexts. The goal was to develop computational methods and measurements to facilitate the mirroring of individuals’ well-being to themselves. We focus on how speech behavior is perceived by team members to improve their well-being. Our main contribution is the assembly of an integrated toolchain to perform multi-modal extraction of robust speech features in noisy field settings and to explore which features are predictors of self-reported satisfaction scores. We applied the toolchain to a case study, where we collected videos of 20 teams with 56 participants collaborating over a four-day period in a team project in an educational environment. Our audiovisual speaker diarization extracted individual speech features from a noisy environment. As the dependent variable, team members filled out a daily PERMA (positive emotion, engagement, relationships, meaning, and accomplishment) survey. These well-being scores were predicted using speech features extracted from the videos using machine learning. The results suggest that the proposed toolchain was able to automatically predict individual well-being in teams, leading to better teamwork and happier team members.	en_US
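The abstract describes a pipeline that first diarizes team recordings into per-speaker segments, then aggregates those segments into individual speech features, which are finally regressed onto daily PERMA survey scores. As an illustration only — the segment format, feature names, and function below are invented for this sketch and are not taken from the paper — the feature-aggregation step could look like:

```python
# Hypothetical sketch of aggregating per-speaker speech features from
# diarization output. Segments and feature definitions are assumptions,
# not the authors' actual toolchain.
from collections import defaultdict

def speech_features(segments):
    """segments: list of (speaker, start_sec, end_sec) diarization tuples.

    Returns per-speaker features: share of total team speaking time and
    number of speaking turns.
    """
    total = sum(end - start for _, start, end in segments)
    stats = defaultdict(lambda: {"speaking_time": 0.0, "turns": 0})
    for speaker, start, end in segments:
        stats[speaker]["speaking_time"] += end - start
        stats[speaker]["turns"] += 1
    # Normalize speaking time to a share of all speech in the meeting.
    return {
        spk: {"speech_share": s["speaking_time"] / total, "turns": s["turns"]}
        for spk, s in stats.items()
    }

# Toy example: speaker A talks twice (7.5 s total), speaker B once (2.5 s).
segments = [("A", 0.0, 5.0), ("B", 5.0, 7.5), ("A", 7.5, 10.0)]
features = speech_features(segments)
```

Feature vectors like these, one per team member per day, would then serve as inputs to a standard supervised regressor with the self-reported PERMA scores as targets.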
dc.publisher	MDPI AG	en_US
dc.relation.isversionof	10.3390/info15040217	en_US
dc.rights	Creative Commons Attribution	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Multidisciplinary Digital Publishing Institute	en_US
dc.title	Predicting Individual Well-Being in Teamwork Contexts Based on Speech Features	en_US
dc.type	Article	en_US
dc.identifier.citation	Zeulner, T.; Hagerer, G.J.; Müller, M.; Vazquez, I.; Gloor, P.A. Predicting Individual Well-Being in Teamwork Contexts Based on Speech Features. Information 2024, 15, 217.	en_US
dc.contributor.department	System Design and Management Program.
dc.contributor.department	Massachusetts Institute of Technology. Center for Collective Intelligence
dc.relation.journal	Information	en_US
dc.identifier.mitlicense	PUBLISHER_CC
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2024-04-26T13:09:06Z
dspace.date.submission	2024-04-26T13:09:06Z
mit.journal.volume	15	en_US
mit.journal.issue	4	en_US
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US

