
dc.contributor.author: Chen, Wenqiang
dc.contributor.author: Hu, Yexin
dc.contributor.author: Song, Wei
dc.contributor.author: Liu, Yingcheng
dc.contributor.author: Torralba, Antonio
dc.contributor.author: Matusik, Wojciech
dc.date.accessioned: 2024-02-01T15:13:27Z
dc.date.available: 2024-02-01T15:13:27Z
dc.date.issued: 2024-01-12
dc.identifier.issn: 2474-9567
dc.identifier.uri: https://hdl.handle.net/1721.1/153446
dc.description.abstract: Human mesh reconstruction is essential for various applications, including virtual reality, motion capture, sports performance analysis, and healthcare monitoring. In healthcare contexts such as nursing homes, it is crucial to employ plausible and non-invasive methods for human mesh reconstruction that preserve privacy and dignity. Traditional vision-based techniques encounter challenges related to occlusion, viewpoint limitations, lighting conditions, and privacy concerns. In this research, we present CAvatar, a real-time human mesh reconstruction approach that innovatively utilizes pressure maps recorded by a tactile carpet as input. This advanced, non-intrusive technology obviates the need for cameras during usage, thereby safeguarding privacy. Our approach addresses several challenges, such as the limited spatial resolution of tactile sensors, the extraction of meaningful information from noisy pressure maps, and the accommodation of user variations and multiple users. We have developed an attention-based deep learning network, complemented by a discriminator network, to predict 3D human pose and shape from 2D pressure maps with notable accuracy. Our model demonstrates promising results, with a mean per joint position error (MPJPE) of 5.89 cm and a per vertex error (PVE) of 6.88 cm. To the best of our knowledge, we are the first to generate 3D meshes of human activities solely using tactile carpet signals, offering a novel approach that addresses privacy concerns and surpasses the limitations of existing vision-based and wearable solutions.
dc.publisher: ACM
dc.relation.isversionof: https://doi.org/10.1145/3631424
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Association for Computing Machinery
dc.title: CAvatar: Real-time Human Activity Mesh Reconstruction via Tactile Carpets
dc.type: Article
dc.identifier.citation: Chen, Wenqiang, Hu, Yexin, Song, Wei, Liu, Yingcheng, Torralba, Antonio et al. 2024. "CAvatar: Real-time Human Activity Mesh Reconstruction via Tactile Carpets." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7 (4).
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2024-02-01T08:45:35Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2024-02-01T08:45:35Z
mit.journal.volume: 7
mit.journal.issue: 4
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
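
For context, the abstract above reports accuracy as MPJPE (5.89 cm) and PVE (6.88 cm). The following is a minimal Python sketch, assuming only the conventional definitions of these metrics (mean Euclidean distance over joints, respectively mesh vertices); the function names, array shapes, and NumPy usage are illustrative assumptions, not code from the paper.

import numpy as np

# Hedged sketch: conventional definitions of the two metrics quoted in
# the abstract. Shapes and names are assumptions, not the paper's code.

def mpjpe(pred_joints: np.ndarray, gt_joints: np.ndarray) -> float:
    # Mean per joint position error: Euclidean distance between predicted
    # and ground-truth 3D joints, averaged over joints and frames.
    # Both arrays have shape (frames, joints, 3), in a common length unit
    # (the abstract reports centimetres).
    return float(np.linalg.norm(pred_joints - gt_joints, axis=-1).mean())

def pve(pred_vertices: np.ndarray, gt_vertices: np.ndarray) -> float:
    # Per vertex error: the same distance averaged over all mesh vertices.
    # Both arrays have shape (frames, vertices, 3).
    return float(np.linalg.norm(pred_vertices - gt_vertices, axis=-1).mean())

Note that papers sometimes root-align predictions before computing MPJPE; whether CAvatar does so is not stated in this record, so the sketch computes the plain unaligned variant.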

