Vision-based Proprioceptive and Force Sensing for Soft Robotic Actuators
Author(s)
Zhang, Annan
Advisor
Rus, Daniela
Abstract
Developing reliable control strategies for soft robots requires advances in soft robot perception. Because soft robots have near-infinite degrees of freedom, obtaining useful sensory feedback from them remains a long-standing challenge, and sensorization methods must be co-developed with more robust approaches to soft robotic actuation. However, current soft robotic sensors suffer from significant performance limitations, and the available materials and manufacturing techniques complicate the design of sensorized soft robots. To address these needs, we introduce a vision-based method for sensorizing robust, electrically-driven soft robotic actuators constructed from a new class of architected materials. Specifically, we position cameras within the hollow interiors of actuators based on handed shearing auxetics (HSAs) to record their deformation. Using external motion capture data as ground truth, we train a convolutional neural network (CNN) that maps the visual feedback to the pose of the actuator’s tip. Our model predicts tip pose with sub-millimeter accuracy from only six minutes of training data, while remaining lightweight, with 300,000 parameters and an inference time of 18 milliseconds per frame on a single-board computer. We also develop a model that additionally predicts the horizontal tip force acting on the actuator, and we demonstrate its ability to generalize to previously unseen forces. Overall, our methods offer a reliable vision-based approach to designing sensorized soft robots built from electrically-actuated, architected materials.
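The abstract gives only the model's rough size and latency, not its architecture. As a minimal sketch of the kind of pose-regression CNN it describes, the following PyTorch snippet maps a single internal-camera frame to a tip pose. The layer sizes, the 64x64 input resolution, and the position-plus-quaternion pose parameterization are illustrative assumptions, not the thesis's actual design; only the approximate parameter budget (~300,000) and per-frame inference time (18 ms) come from the abstract.

```python
# Illustrative sketch: a small CNN regressing actuator tip pose from one
# camera frame. Architecture details are assumptions; the thesis model is
# reported only as having ~300,000 parameters and ~18 ms/frame inference.
import torch
import torch.nn as nn

class TipPoseCNN(nn.Module):
    def __init__(self, out_dim: int = 7):  # 3D position + unit quaternion (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool -> (B, 64, 1, 1)
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pose = self.head(self.features(x))
        # Re-normalize the quaternion component so the orientation is valid.
        pos, quat = pose[:, :3], pose[:, 3:]
        quat = quat / quat.norm(dim=1, keepdim=True).clamp_min(1e-8)
        return torch.cat([pos, quat], dim=1)

# Example: one 3x64x64 camera frame -> predicted 7-dim tip pose.
model = TipPoseCNN()
frame = torch.rand(1, 3, 64, 64)
print(model(frame).shape)  # torch.Size([1, 7])
```

The force-sensing variant mentioned in the abstract could, in this sketch, simply widen the output head (e.g., `out_dim = 8` with one extra regression target for horizontal tip force); whether the thesis shares a backbone between the pose and force models is not stated here.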
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology