
dc.contributor.author    Kubota, Alyssa
dc.contributor.author    Iqbal, Tariq
dc.contributor.author    Shah, Julie A
dc.contributor.author    Riek, Laurel D.
dc.date.accessioned    2020-06-19T18:25:06Z
dc.date.available    2020-06-19T18:25:06Z
dc.date.issued    2019
dc.identifier.isbn    978-1-5386-6027-0
dc.identifier.issn    2577-087X
dc.identifier.uri    https://hdl.handle.net/1721.1/125890
dc.description.abstract    In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g., motion capture or wearable sensors), it is unclear how the relationship between sensor modality and the motion granularity (e.g., gross or fine) of an activity impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to that of a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (up to 28.06%). These results suggest that the two sensor modalities are complementary, and that robots may benefit from systems that use multiple modalities to simultaneously, but independently, detect gross and fine-grained motion. Our findings will help guide researchers in numerous fields of robotics, including learning from demonstration and grasping, to effectively choose the sensor modalities most suitable for their applications.    en_US
dc.description.sponsorship    National Science Foundation (grant nos. IIS-1724982 and IIS-1734482)    en_US
dc.language.iso    en
dc.publisher    IEEE    en_US
dc.relation.isversionof    10.1109/ICRA.2019.8793954    en_US
dc.rights    Creative Commons Attribution-Noncommercial-Share Alike    en_US
dc.rights.uri    http://creativecommons.org/licenses/by-nc-sa/4.0/    en_US
dc.source    other univ website    en_US
dc.title    Activity recognition in manufacturing: the roles of motion capture and sEMG+inertial wearables in detecting fine vs gross motion    en_US
dc.type    Article    en_US
dc.identifier.citation    Kubota, Alyssa, et al. "Activity recognition in manufacturing: the roles of motion capture and sEMG+inertial wearables in detecting fine vs gross motion." 2019 International Conference on Robotics and Automation (ICRA), May 20-24, 2019, Montreal, QC. IEEE, 2019, pp. 6533-6539. doi: 10.1109/ICRA.2019.8793954. ©2019 Author(s)    en_US
dc.contributor.department    Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory    en_US
dc.relation.journal    International Conference on Robotics and Automation (ICRA)    en_US
dc.eprint.version    Author's final manuscript    en_US
dc.type.uri    http://purl.org/eprint/type/ConferencePaper    en_US
eprint.status    http://purl.org/eprint/status/NonPeerReviewed    en_US
dc.date.updated    2019-11-01T13:04:39Z
dspace.date.submission    2019-11-01T13:04:47Z
mit.metadata.status    Complete
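
The abstract above describes comparing the accuracy of a Vicon motion capture system against a Myo armband using three widely used HAR algorithms, which this record does not name. As a rough illustration of that kind of modality comparison, the following is a minimal, hypothetical Python sketch, not the authors' pipeline: it trains the same classifier on two placeholder feature sets (one standing in for motion-capture features, one for sEMG+inertial features) and reports cross-validated accuracy for each. The random-forest model, feature dimensions, window count, and synthetic data are all illustrative assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder data: 600 sliding windows drawn from recordings of
    # 4 activity classes (both numbers are arbitrary stand-ins).
    n_windows = 600
    labels = rng.integers(0, 4, n_windows)

    # Stand-in per-window feature vectors for each modality: motion capture
    # (e.g., joint positions/velocities) vs. wearable (e.g., sEMG envelopes
    # plus inertial statistics). Dimensions are assumptions.
    X_mocap = rng.normal(size=(n_windows, 60))
    X_wearable = rng.normal(size=(n_windows, 24))

    # Use the same classifier for both modalities so any accuracy gap
    # reflects the features rather than the model.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for name, X in [("motion capture", X_mocap), ("wearable", X_wearable)]:
        acc = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")

On real data, the gross-versus-fine pattern reported in the abstract would appear as per-class or per-task accuracy differences rather than the single aggregate score printed here.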

