dc.contributor.author | Ronfard, Samuel | |
dc.contributor.author | Harris, Paul L. | |
dc.contributor.author | DeSteno, David | |
dc.contributor.author | Kory Westlund, Jacqueline Marie | |
dc.contributor.author | Jeong, Sooyeon | |
dc.contributor.author | Park, Hae Won | |
dc.contributor.author | Adhikari, Aradhana | |
dc.contributor.author | Breazeal, Cynthia Lynn | |
dc.date.accessioned | 2018-03-23T23:03:05Z | |
dc.date.available | 2018-03-23T23:03:05Z | |
dc.date.issued | 2017-06 | |
dc.date.submitted | 2016-10 | |
dc.identifier.issn | 1662-5161 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/114283 | |
dc.description.abstract | Prior research with preschool children has established that dialogic or active book reading is an effective method for expanding young children’s vocabulary. In this exploratory study, we asked whether similar benefits are observed when a robot engages in dialogic reading with preschoolers. Given the established effectiveness of active reading, we also asked whether this effectiveness was critically dependent on the expressive characteristics of the robot. For approximately half the children, the robot’s active reading was expressive; the robot’s voice included a wide range of intonation and emotion (Expressive). For the remaining children, the robot read and conversed with a flat voice, which sounded similar to a classic text-to-speech engine and had little dynamic range (Flat). The robot’s movements were kept constant across conditions. We performed a verification study using Amazon Mechanical Turk (AMT) to confirm that the Expressive robot was viewed as significantly more expressive, more emotional, and less passive than the Flat robot. We invited 45 preschoolers with an average age of 5 years who were either English Language Learners (ELL), bilingual, or native English speakers to engage in the reading task with the robot. The robot narrated a story from a picture book, using active reading techniques and including a set of target vocabulary words in the narration. Children were post-tested on the vocabulary words and were also asked to retell the story to a puppet. A subset of 34 children performed a second story retelling 4–6 weeks later. Children reported liking and learning from the robot to a similar degree in the Expressive and Flat conditions. However, compared to children in the Flat condition, children in the Expressive condition showed greater concentration and engagement, as indexed by their facial expressions; they emulated the robot’s story more closely in their retellings; and they told longer stories during their delayed retelling. Furthermore, children who responded to the robot’s active reading questions were more likely to correctly identify the target vocabulary words in the Expressive condition than in the Flat condition. Taken together, these results suggest that children may benefit more from the expressive robot than from the flat robot. | en_US |
dc.description.sponsorship | National Science Foundation (U.S.) (Grant IIS-11228860) | en_US |
dc.description.sponsorship | National Science Foundation (U.S.) (Grant IIS-1123085) | en_US |
dc.description.sponsorship | National Science Foundation (U.S.) (Grant IIS-1122845) | en_US |
dc.publisher | Frontiers Research Foundation | en_US |
dc.relation.isversionof | http://dx.doi.org/10.3389/FNHUM.2017.00295 | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.source | Frontiers | en_US |
dc.title | Flat vs. Expressive Storytelling: Young Children’s Learning and Retention of a Social Robot’s Narrative | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Kory Westlund, Jacqueline M., Sooyeon Jeong, Hae W. Park, Samuel Ronfard, Aradhana Adhikari, Paul L. Harris, David DeSteno, and Cynthia L. Breazeal. “Flat vs. Expressive Storytelling: Young Children’s Learning and Retention of a Social Robot’s Narrative.” Frontiers in Human Neuroscience 11 (June 7, 2017): 295. © 2017 Kory Westlund, Jeong, Park, Ronfard, Adhikari, Harris, DeSteno and Breazeal | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Media Laboratory | en_US |
dc.contributor.department | Program in Media Arts and Sciences (Massachusetts Institute of Technology) | en_US |
dc.contributor.mitauthor | Kory Westlund, Jacqueline Marie | |
dc.contributor.mitauthor | Jeong, Sooyeon | |
dc.contributor.mitauthor | Park, Hae Won | |
dc.contributor.mitauthor | Adhikari, Aradhana | |
dc.contributor.mitauthor | Breazeal, Cynthia L. | |
dc.relation.journal | Frontiers in Human Neuroscience | en_US |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
dc.date.updated | 2018-02-16T19:25:07Z | |
dspace.orderedauthors | Kory Westlund, Jacqueline M.; Jeong, Sooyeon; Park, Hae W.; Ronfard, Samuel; Adhikari, Aradhana; Harris, Paul L.; DeSteno, David; Breazeal, Cynthia L. | en_US |
dspace.embargo.terms | N | en_US |
dc.identifier.orcid | https://orcid.org/0000-0002-0418-4674 | |
dc.identifier.orcid | https://orcid.org/0000-0003-0085-8130 | |
dc.identifier.orcid | https://orcid.org/0000-0002-0587-2065 | |
mit.license | PUBLISHER_CC | en_US |