Abstracts

2024

Özçalışkan, Ş., Lucero, C., & Goldin‐Meadow, S. (2024). Is vision necessary for the timely acquisition of language‐specific patterns in co‐speech gesture and their lack in silent gesture? Developmental Science. https://doi.org/10.1111/desc.13507

Blind adults display language-specificity in their packaging and ordering of events in speech. These differences affect the representation of events in co-speech gesture (gesturing with speech) but not in silent gesture (gesturing without speech). Here we examine when in development blind children begin to show adult-like patterns in co-speech and silent gesture. We studied speech and gestures produced by 30 blind and 30 sighted children learning Turkish, equally divided into 3 age groups: 5-6, 7-8, 9-10 years. The children were asked to describe three-dimensional spatial event scenes (e.g., running out of a house) first with speech, and then without speech using only their hands. We focused on physical motion events, which, in blind adults, elicit cross-linguistic differences in speech and co-speech gesture, but cross-linguistic similarities in silent gesture. Our results showed an effect of language on gesture when it was accompanied by speech (co-speech gesture), but not when it was used without speech (silent gesture), across both blind and sighted learners. The language-specific co-speech gesture pattern for both packaging and ordering semantic elements was present at the earliest ages we tested the blind and sighted children. The silent gesture pattern appeared later for blind children than for sighted children for both packaging and ordering. Our findings highlight gesture as a robust and integral aspect of the language acquisition process at early ages and provide insight into when language does and does not have an effect on gesture, even in blind children who lack visual access to gesture. RESEARCH HIGHLIGHTS: Gestures, when produced with speech (i.e., co-speech gesture), follow language-specific patterns in event representation in both blind and sighted children. Gestures, when produced without speech (i.e., silent gesture), do not follow language-specific patterns in event representation in either blind or sighted children. Language-specific patterns in speech and co-speech gestures are observable at the same time in blind and sighted children. The cross-linguistic similarities in silent gestures begin slightly later in blind children than in sighted children.

Tamis‐LeMonda, C. S., Kachergis, G., Masek, L. R., Gonzalez, S. L., Soska, K. C., Herzberg, O., Xu, M., Adolph, K. E., Gilmore, R. O., Bornstein, M. H., Casasola, M., Fausey, C. M., Frank, M. C., Goldin‐Meadow, S., Gros‐Louis, J., Hirsh‐Pasek, K., Iverson, J., Lew‐Williams, C., MacWhinney, B., … Yurovsky, D. (2024). Comparing apples to Manzanas and oranges to Naranjas: A new measure of English‐Spanish vocabulary for dual language learners. Infancy, 29(3), 302–326. https://doi.org/10.1111/infa.12571

The valid assessment of vocabulary development in dual-language-learning infants is critical to developmental science. We developed the Dual Language Learners English-Spanish (DLL-ES) Inventories to measure vocabularies of U.S. English-Spanish DLLs. The inventories provide translation equivalents for all Spanish and English items on Communicative Development Inventory (CDI) short forms; extended inventories based on CDI long forms; and Spanish language-variety options. Item-Response Theory analyses applied to Wordbank and Web-CDI data (n = 2603, 12-18 months; n = 6722, 16-36 months; half female; 1% Asian, 3% Black, 2% Hispanic, 30% White, 64% unknown) showed near-perfect associations between DLL-ES and CDI long-form scores. Interviews with 10 Hispanic mothers of 18- to 24-month-olds (2 White, 1 Black, 7 multi-racial; 6 female) provide a proof of concept for the value of the DLL-ES for assessing the vocabularies of DLLs.

2023

Gomes, V., Doherty, R., Smits, D., Goldin-Meadow, S., Trueswell, J. C., & Feiman, R. (2023). It’s not just what we don’t know: The mapping problem in the acquisition of negation. Cognitive Psychology, 145, 101592. doi:10.1016/j.cogpsych.2023.101592

How do learners learn what no and not mean when they are only presented with what is? Given its complexity, abstractness, and roles in logic, truth-functional negation might be a conceptual accomplishment. As a result, young children’s gradual acquisition of negation words might be due to their undergoing a gradual conceptual change that is necessary to represent those words’ logical meaning. However, it’s also possible that linguistic expressions of negation take time to learn because of children’s gradually increasing grasp of their language. To understand what no and not mean, children might first need to understand the rest of the sentences in which those words are used. We provide experimental evidence that conceptually equipped learners (adults) face the same acquisition challenges that children do when their access to linguistic information is restricted, which simulates how much language children understand at different points in acquisition. When watching a silenced video of naturalistic uses of negators by parents speaking to their children, adults could tell when the parent was prohibiting the child and struggled with inferring that negators were used to express logical negation. However, when provided with additional information about what else the parent said, guessing that the parent had expressed logical negation became easy for adults. Though our findings do not rule out that young learners also undergo conceptual change, they show that increasing understanding of language alone, with no accompanying conceptual change, can account for the gradual acquisition of negation words.

Goldin-Meadow, S. (2023). Thinking with Your Hands: The Surprising Science Behind How Gestures Shape Our Thoughts. Basic Books.

We all know people who talk with their hands—but do they know what they’re saying with them? Our gestures can reveal and contradict us, and express thoughts we may not even know we’re thinking. In Thinking with Your Hands, esteemed cognitive psychologist Susan Goldin-Meadow argues that gesture is vital to how we think, learn, and communicate. She shows us, for instance, how the height of our gestures can reveal unconscious bias, or how the shape of a student’s gestures can track their mastery of a new concept—even when they’re still giving wrong answers. She compels us to rethink everything from how we set child development milestones, to what’s admissible in a court of law, to whether Zoom is an adequate substitute for in-person conversation.  Sweeping and ambitious, Thinking with Your Hands promises to transform the way we think about language and communication.

Alda, A. (Host). (2023, April 26). Susan Goldin-Meadow: Thinking with your Hands [Audio podcast episode]. In Clear+Vivid with Alan Alda. Apple Podcasts. https://podcasts.apple.com/us/podcast/clear-vivid-with-alan-alda/id1400082430?i=1000610499411

Decades spent studying the way we use our hands when we talk have convinced Susan Goldin-Meadow that not only do gestures help our listeners understand us; gestures also help us understand ourselves. They help us think and, as children, even learn.

Rissman, L., Horton, L., & Goldin-Meadow, S. (2023). Universal constraints on linguistic event categories: A cross-cultural study of child homesign. Psychological Science. https://doi.org/10.1177/09567976221140328

Languages carve up conceptual space in varying ways—for example, English uses the verb cut both for cutting with a knife and for cutting with scissors, but other languages use distinct verbs for these events. We asked whether, despite this variability, there are universal constraints on how languages categorize events involving tools (e.g., knife-cutting). We analyzed descriptions of tool events from two groups: (a) 43 hearing adult speakers of English, Spanish, and Chinese and (b) 10 deaf child homesigners ages 3 to 11 (each of whom has created a gestural language without input from a conventional language model) in five different countries (Guatemala, Nicaragua, United States, Taiwan, Turkey). We found alignment across these two groups—events that elicited tool-prominent language among the spoken-language users also elicited tool-prominent language among the homesigners. These results suggest ways of conceptualizing tool events that are so prominent as to constitute a universal constraint on how events are categorized in language.

2022

Novack, M., & Goldin-Meadow, S. (2022). Harnessing Gesture to Understand and Support Healthy Development. Reference Module In Biomedical Sciences. https://doi.org/10.1016/b978-0-12-818872-9.00075-3

Communication is a critical skill in development—it allows children to convey the contents of their minds, and gain access to the thoughts of those around them. When we think of early communication, we may think mostly of children’s first words. But, in fact, early communication is led not by the mouth, but by the hands. We gesture from early in development and throughout the lifespan. For instance, babies point to objects they want a parent to see, school-aged children use their hands to describe their reasoning about complex concepts such as conservation or mathematical equivalence, and adults use gestures when talking to each other, when talking to their children, and even when they are by themselves thinking through a problem. Most incredibly from a developmental perspective, gestures lead the way in communicative and language development, and tie specifically to cognitive advancements. In this chapter, we discuss the role of gestures as they contribute to developmental outcomes. We begin by reviewing how children gain the ability to both produce and comprehend gestures, and then discuss how gesture is linked to communicative development more broadly. We then discuss how gesture, combined with language, has the unique ability to shed light on cognitive advancements, both by providing a window onto children’s conceptual state and also by playing a functional role in the learning process itself. Next, we review the role of gesture in cases of atypical development, outline how gesture can be used as a diagnostic tool and discuss its potential in intervention. We emphasize the benefit of considering gesture as part of neurodevelopmental evaluations, and discuss evidence suggesting that delays in gesture production might indicate greater concerns for language or cognitive development. Finally, we offer recommendations to parents, teachers and clinicians regarding the importance of paying attention to gesture in developmental populations.

Fay, N., Walker, B., Ellison, T. M., Blundell, Z., De Kleine, N., Garde, M., Lister, C. J., & Goldin-Meadow, S. (2022). Gesture is the primary modality for language creation. Proceedings of the Royal Society B: Biological Sciences, 289(1970). https://doi.org/10.1098/rspb.2022.0066

How language began is one of the oldest questions in science, but theories remain speculative due to a lack of direct evidence. Here, we report two experiments that generate empirical evidence to inform gesture-first and vocal-first theories of language origin; in each, we tested modern humans’ ability to communicate a range of meanings (995 distinct words) using either gesture or non-linguistic vocalization. Experiment 1 is a cross-cultural study, with signal Producers sampled from Australia (n = 30, Mage = 32.63, s.d. = 12.42) and Vanuatu (n = 30, Mage = 32.40, s.d. = 11.76). Experiment 2 is a cross-experiential study in which Producers were either sighted (n = 10, Mage = 39.60, s.d. = 11.18) or severely vision-impaired (n = 10, Mage = 39.40, s.d. = 10.37). A group of undergraduate student Interpreters (n = 140) guessed the meaning of the signals created by the Producers. Communication success was substantially higher in the gesture modality than the vocal modality (twice as high overall; 61.17% versus 29.04% success). This was true within cultures, across cultures and even for the signals produced by severely vision-impaired participants. The success of gesture is attributed in part to its greater universality (i.e. similarity in form across different Producers). Our results support the hypothesis that gesture is the primary modality for language creation.

Motamedi, Y., Montemurro, K., Abner, N., Flaherty, M., Kirby, S., & Goldin-Meadow, S. (2022). The Seeds of the Noun–Verb Distinction in the Manual Modality: Improvisation and Interaction in the Emergence of Grammatical Categories. Languages, 7(2), 95. https://doi.org/10.3390/languages7020095

The noun–verb distinction has long been considered a fundamental property of human language, and has been found in some form even in the earliest stages of language emergence, including homesign and the early generations of emerging sign languages. We present two experimental studies that use silent gesture to investigate how noun–verb distinctions develop in the manual modality through two key processes: (i) improvising using novel signals by individuals, and (ii) using those signals in the interaction between communicators. We operationalise communicative interaction in two ways: a setting in which members of the dyad were in separate booths and were given a comprehension test after each stimulus vs. a more naturalistic face-to-face conversation without comprehension checks. There were few differences between the two conditions, highlighting the robustness of the paradigm. Our findings from both experiments reflect patterns found in naturally emerging sign languages. Some formal distinctions arise in the earliest stages of improvisation and do not require interaction to develop. However, the full range of formal distinctions between nouns and verbs found in naturally emerging language did not appear with either improvisation or interaction, suggesting that transmitting the language to a new generation of learners might be necessary for these properties to emerge. 

2021

Flaherty, M., Hunsicker, D., & Goldin-Meadow, S. (2021). Structural biases that children bring to language learning: A cross-cultural look at gestural input to homesign. Cognition211, 104608. https://doi.org/10.1016/j.cognition.2021.104608

Linguistic input has an immediate effect on child language, making it difficult to discern whatever biases children may bring to language-learning. To discover these biases, we turn to deaf children who cannot acquire spoken language and are not exposed to sign language. These children nevertheless produce gestures, called homesigns, which have structural properties found in natural language. We ask whether these properties can be traced to gestures produced by hearing speakers in Nicaragua, a gesture-rich culture, and in the USA, a culture where speakers rarely gesture without speech. We studied 7 homesigning children and hearing family members in Nicaragua, and 4 in the USA. As expected, family members produced more gestures without speech, and longer gesture strings, in Nicaragua than in the USA. However, in both cultures, homesigners displayed more structural complexity than family members, and there was no correlation between individual homesigners and family members with respect to structural complexity. The findings replicate previous work showing that the gestures hearing speakers produce do not offer a model for the structural aspects of homesign, thus suggesting that children bring to language-learning biases that lead them to construct, or learn, these properties. The study also goes beyond the current literature in three ways. First, it extends homesign findings to Nicaragua, where homesigners received a richer gestural model than USA homesigners. Moreover, the relatively large numbers of gestures in Nicaragua made it possible to take advantage of more sophisticated statistical techniques than were used in the original homesign studies. Second, the study extends the discovery of complex noun phrases to Nicaraguan homesign. The almost complete absence of complex noun phrases in the hearing family members of both cultures provides the most convincing evidence to date that homesigners, and not their hearing family members, are the ones who introduce structural properties into homesign. Finally, by extending the homesign phenomenon to Nicaragua, the study offers insight into the gestural precursors of an emerging sign language. The findings shed light on the types of structures that an individual can introduce into communication before that communication is shared within a community of users, and thus on the roots of linguistic structure.

Goldin-Meadow, S. (2021). Gesture is an intrinsic part of modern-day human communication and may always have been so. The Oxford Handbook of Human Symbolic Evolution. https://doi.org/10.1093/oxfordhb/9780198813781.013.12

This chapter reviews three types of evidence from current-day languages consistent with the view that human language has always drawn upon both the manual and oral modalities, a view that is contra the gesture-first theory of language evolution. First, gesture and speech form a single system, with speech using a categorical format and gesture a mimetic format. Second, when this system is disrupted, as when speech is not possible, the manual modality takes over the categorical forms typical of speech. Finally, when the manual modality assumes a categorical format, as in sign languages of the Deaf, mimetic forms do not disappear but arise in the gestures signers produce as they sign. This picture of modern-day language is consistent with the view that gesture and speech have both been part of human language from the beginning.

Frausel, R. R., Vollman, E., Muzard, A., Richland, L. E., Goldin‐Meadow, S., & Levine, S. C. (2021). Developmental Trajectories of Early Higher‐Order Thinking Talk Differ for Typically Developing Children and Children With Unilateral Brain Injuries. Mind, Brain, and Education, 16(2), 153–166. https://doi.org/10.1111/mbe.12301

The use of higher-order thinking talk (HOTT), where speakers identify relations between representations (e.g., comparison, causality, abstraction) is examined in the spontaneous language produced by 64 typically developing (TD) and 46 brain-injured children, observed from 14–58 months at home. HOTT is less frequent in lower-income children and children with brain injuries, but effects differed depending on HOTT complexity and type of brain injury. Controlling for income, children with larger and later-occurring cerebrovascular infarcts produce fewer surface (where relations are more perceptual) and structure (where relations are more abstract) HOTT utterances than TD children. In contrast, children with smaller and earlier occurring periventricular lesions produce HOTT at comparable rates to TD children. This suggests that examining HOTT development may be an important tool for understanding the impacts of brain injury in children. Theoretically, these data reveal that both neurological (size and timing of brain injury) and environmental (family income) factors contribute to these skills.

Padilla-Iglesias, C., Woodward, A. L., Goldin-Meadow, S., & Shneidman, L. A. (2021). Changing language input following market integration in a Yucatec Mayan community. PLOS ONE, 16(6), e0252926. https://doi.org/10.1371/journal.pone.0252926

Like many indigenous populations worldwide, Yucatec Maya communities are rapidly undergoing change as they become more connected with urban centers and as formal education, wage labour, and market goods become more accessible to their inhabitants. However, little is known about how these changes affect children’s language input. Here, we provide the first systematic assessment of the quantity, type, source, and language of the input received by 29 Yucatec Maya infants born six years apart in communities where increased contact with urban centres has resulted in greater exposure to the dominant surrounding language, Spanish. Results show that infants from the second cohort received less directed input than infants in the first and, when directly addressed, most of their input was in Spanish. To investigate the mechanisms driving the observed patterns, we interviewed 126 adults from the communities. Against common assumptions, we showed that reductions in Mayan input did not simply result from speakers devaluing the Maya language. Instead, changes in input could be attributed to changes in childcare practices, as well as to caregiver ethnotheories regarding the relative acquisition difficulty of each of the languages. Our study highlights the need to understand the drivers of individual behaviour in the face of socio-demographic and economic change, as doing so is key for determining the fate of linguistic diversity.

Novack, M., Brentari, D., Goldin-Meadow, S., & Waxman, S. (2021). Sign language, like spoken language, promotes object categorization in young hearing infants. Cognition, (Vol. 215). Elsevier BV. https://doi.org/10.1016/j.cognition.2021.104845

The link between language and cognition is unique to our species and emerges early in infancy. Here, we provide the first evidence that this precocious language-cognition link is not limited to spoken language, but is instead sufficiently broad to include sign language, a language presented in the visual modality. Four- to six-month-old hearing infants, never before exposed to sign language, were familiarized to a series of category exemplars, each presented by a woman who either signed in American Sign Language (ASL) while pointing and gazing toward the objects, or pointed and gazed without language (control). At test, infants viewed two images: one, a new member of the now-familiar category; and the other, a member of an entirely new category. Four-month-old infants who observed ASL distinguished between the two test objects, indicating that they had successfully formed the object category; they were as successful as age-mates who listened to their native (spoken) language. Moreover, it was specifically the linguistic elements of sign language that drove this facilitative effect: infants in the control condition, who observed the woman only pointing and gazing, failed to form object categories. Finally, the cognitive advantages of observing ASL quickly narrow in hearing infants: by 5 to 6 months, watching ASL no longer supports categorization, although listening to their native spoken language continues to do so. Together, these findings illuminate the breadth of infants’ early link between language and cognition and offer insight into how it unfolds.

Abner, N., Namboodiripad, S., Spaepen, E., & Goldin-Meadow, S. (2021). Emergent morphology in child homesign: Evidence from number language. Language Learning and Development, 18:1, 16-40. https://doi.org/10.1080/15475441.2021.1922281

Human languages, signed and spoken, can be characterized by the structural patterns they use to associate communicative forms with meanings. One such pattern is paradigmatic morphology, where complex words are built from the systematic use and re-use of sub-lexical units. Here, we provide evidence of emergent paradigmatic morphology akin to number inflection in a communication system developed without input from a conventional language, homesign. We study the communication systems of four deaf child homesigners (mean age 8;02). Although these idiosyncratic systems vary from one another, we nevertheless find that all four children use handshape and movement devices productively to express cardinal and non-cardinal number information, and that their number expressions are consistent in both form and meaning. Our study shows, for the first time, that all four homesigners not only incorporate number devices into representational devices used as predicates, but also into gestures functioning as nominals, including deictic gestures. In other words, the homesigners express number by systematically combining and re-combining additive markers for number (qua inflectional morphemes) with representational and deictic gestures (qua bases). The creation of new, complex forms with predictable meanings across gesture types and linguistic functions constitutes evidence for an inflectional morphological paradigm in homesign and expands our understanding of the structural patterns of language that are, and are not, dependent on linguistic input.

Wakefield, E., & Goldin-Meadow, S. (2021). How gesture helps learning: Exploring the benefits of gesture within an embodied framework. In S. Stolz (ed.), The body, embodiment, and education: An interdisciplinary approach. Routledge. https://doi.org/10.4324/9781003142010

Notions of the body and embodiment have become prominent across a number of established discipline areas, like philosophy, sociology, and psychology. While there has been a paradigmatic shift towards this topic, there is a notable gap in the literature as it relates to education and educational research. The Body, Embodiment and Education addresses the gap between embodiment and education by exploring conceptualisations of the body and embodiment from interdisciplinary perspectives. With contributions from international experts in philosophy, sociology, and psychology, as well as in emerging related fields such as embodied cognition, neuroscience, and cognitive science, this book sets a new research agenda in education and educational research. Each chapter makes a case for expanding the field and adds to the call for further exploration. The Body, Embodiment and Education will be of great interest to academics, researchers and postgraduate students who are interested in the body and embodiment and/or its relationship with education or educational research.

Kısa, Y. D., Goldin-Meadow, S., & Casasanto, D. (2021). Do gestures really facilitate speech production? Journal of Experimental Psychology: General. Advance online publication. https://doi.org/10.1037/xge0001135

Why do people gesture when they speak? According to one influential proposal, the Lexical Retrieval Hypothesis (LRH), gestures serve a cognitive function in speakers’ minds by helping them find the right spatial words. Do gestures also help speakers find the right words when they talk about abstract concepts that are spatialized metaphorically? If so, then preventing people from gesturing should increase the rate of disfluencies during speech about both literal and metaphorical space. Here, we sought to conceptually replicate the finding that preventing speakers from gesturing increases disfluencies in speech with literal spatial content (e.g., the rocket went up), which has been interpreted as evidence for the LRH, and to extend this pattern to speech with metaphorical spatial content (e.g., my grades went up). Across three measures of speech disfluency (disfluency rate, speech rate, and rate of nonjuncture filled pauses), we found no difference in disfluency between speakers who were allowed to gesture freely and speakers who were not allowed to gesture, for any category of speech (literal spatial content, metaphorical spatial content, and no spatial content). This large dataset (7,969 phrases containing 2,075 disfluencies) provided no support for the idea that gestures help speakers find the right words, even for speech with literal spatial content. Upon reexamining studies cited as evidence for the LRH and related proposals over the past 5 decades, we conclude that there is, in fact, no reliable evidence that preventing gestures impairs speaking. Together, these findings challenge long-held beliefs about why people gesture when they speak.

Vilà-Giménez, I., Dowling, N., Demir-Lira, Ö. E., Prieto, P., & Goldin-Meadow, S. (2021). The Predictive Value of Non-Referential Beat Gestures: Early Use in Parent–Child Interactions Predicts Narrative Abilities at 5 Years of Age. Child Development, 92, 2335–2355. https://doi.org/10.1111/cdev.13583

A longitudinal study with 45 children (Hispanic, 13%; non-Hispanic, 87%) investigated whether the early production of non-referential beat and flip gestures, as opposed to referential iconic gestures, in parent–child naturalistic interactions from 14 to 58 months old predicts narrative abilities at age 5. Results revealed that only non-referential beats significantly (p < .01) predicted later narrative productions. The pragmatic functions of the children’s speech that accompany these gestures were also analyzed in a representative sample of 18 parent-child dyads, revealing that beats were typically associated with biased assertions or questions. These findings show that the early use of beats predicts narrative abilities later in development, and suggest that this relation is likely due to the pragmatic–structuring function that beats reflect in early discourse.

Goldin-Meadow, S., & Karmiloff-Smith, A. (2021). The cognitive underpinnings of relative clause comprehension in children. In Taking development SERIOUSLY: A festschrift for ANNETTE Karmiloff-Smith: NEUROCONSTRUCTIVISM and the Multi-disciplinary approach to understanding the emergence of mind (pp. 16–32). essay, Routledge.

Frausel, R. R., Richland, L. E., Levine, S. C., & Goldin-Meadow, S. (2021). Personal narrative as a “breeding ground” for higher-order thinking talk in early parent–child interactions. Developmental Psychology, 57(4), 519–534. https://doi.org/10.1037/dev0001166

Personal narrative is decontextualized talk where individuals recount stories of personal experience about past or future events. As an everyday discursive speech type, narrative potentially invites parents and children to explicitly link together, generalize from, and make inferences about representations—that is, to engage in higher-order thinking talk (HOTT). Here we ask whether narratives in early parent–child interactions include proportionally more HOTT than other forms of everyday home language. Sixty-four children (31 girls; 36 White, 14 Black, 8 Hispanic, 6 mixed/other race) and their primary caregiver(s) (Mincome = $61,000) were recorded in 90-minute spontaneous home interactions every 4 months from 14–58 months. Speech was transcribed and coded for narrative and HOTT. We found that parents at all visits and children after 38 months used more HOTT in narrative than non-narrative, and more HOTT than expected by chance. At 38 and 50 months, we examined HOTT in a related but distinct form of decontextualized talk—pretend, or talk during imaginary episodes of interaction—as a control to test whether other forms of decontextualized talk also relate to HOTT. While pretend contained more HOTT than other (non-narrative/non-pretend) talk, it generally contained less HOTT than narrative. Additionally, unlike HOTT during narrative, the amount of HOTT during pretend did not exceed the amount expected by chance, suggesting narrative serves as a particularly rich “breeding ground” for HOTT in parent–child interactions. These findings provide insight into the nature of narrative discourse, and suggest that narrative may be used as a lever to increase children’s higher-order thinking.

Brown, A. R., Pouw, W., Brentari, D., & Goldin-Meadow, S. (2021). People Are Less Susceptible to Illusion When They Use Their Hands to Communicate Rather Than Estimate. Psychological Science. https://doi.org/10.1177/0956797621991552

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.

Demir-Lira, Ö. E., Asaridou, S. S., Nolte, C., Small, S. L., & Goldin-Meadow, S. (2021). Parent Language Input Prior to School Forecasts Change in Children’s Language-Related Cortical Structures During Mid-Adolescence. Frontiers in Human Neuroscience, 15. https://doi.org/10.3389/fnhum.2021.650152 

Children differ widely in their early language development, and this variability has important implications for later life outcomes. Parent language input is a strong experiential factor predicting the variability in children’s early language skills. However, little is known about the brain or cognitive mechanisms that underlie the relationship. In addressing this gap, we used longitudinal data spanning 15 years to examine the role of early parental language input that children receive during preschool years in the development of brain structures that support language processing during school years. Using naturalistic parent–child interactions, we measured parental language input (amount and complexity) to children between the ages of 18 and 42 months (n = 23). We then assessed longitudinal changes in children’s cortical thickness measured at five time points between 9 and 16 years of age. We focused on specific regions of interest (ROIs) that have been shown to play a role in language processing. Our results support the view that, even after accounting for important covariates such as parental intelligence quotient (IQ) and education, the amount and complexity of language input to a young child prior to school forecasts the rate of change in cortical thickness during the 7-year period from 5½ to 12½ years later. Examining the proximal correlates of change in brain and cognitive differences has the potential to inform targets for effective prevention and intervention strategies.

Cooperrider, K., Fenlon, J., Keane, J., Brentari, D., & Goldin-Meadow, S. (2021). How Pointing is Integrated into Language: Evidence From Speakers and Signers. Frontiers in Communication, 6. https://doi.org/10.3389/fcomm.2021.567774

When people speak or sign, they not only describe using words but also depict and indicate. How are these different methods of communication integrated? Here, we focus on pointing and, in particular, on commonalities and differences in how pointing is integrated into language by speakers and signers. One aspect of this integration is semantic—how pointing is integrated with the meaning conveyed by the surrounding language. Another aspect is structural—how pointing as a manual signal is integrated with other signals, vocal in speech, or manual in sign. We investigated both of these aspects of integration in a novel pointing elicitation task. Participants viewed brief live-action scenarios and then responded to questions about the locations and objects involved. The questions were designed to elicit utterances in which pointing would serve different semantic functions, sometimes bearing the full load of reference (‘load-bearing points’) and other times sharing this load with lexical resources (‘load-sharing points’). The elicited utterances also provided an opportunity to investigate issues of structural integration. We found that, in both speakers and signers, pointing was produced with greater arm extension when it was load bearing, reflecting a common principle of semantic integration. However, the duration of the points patterned differently in the two groups. Speakers’ points tended to span across words (or even bridge over adjacent utterances), whereas signers’ points tended to slot in between lexical signs. Speakers and signers thus integrate pointing into language according to common principles, but in a way that reflects the differing structural constraints of their language. These results shed light on how language users integrate gradient, less conventionalized elements with those elements that have been the traditional focus of linguistic inquiry.

Silvey, C., Demir-Lira, Ö. E., Goldin-Meadow, S., & Raudenbush, S. W. (2021). Effects of time-varying parent input on children’s language outcomes differ for vocabulary and syntax. Psychological Science, 1-13. https://doi.org/10.1177/0956797620970559

Early linguistic input is a powerful predictor of children’s language outcomes. We investigated two novel questions about this relationship: Does the impact of language input vary over time, and does the impact of time-varying language input on child outcomes differ for vocabulary and for syntax? Using methods from epidemiology to account for baseline and time-varying confounding, we predicted 64 children’s outcomes on standardized tests of vocabulary and syntax in kindergarten from their parents’ vocabulary and syntax input when the children were 14 and 30 months old. For vocabulary, children whose parents provided diverse input earlier as well as later in development were predicted to have the highest outcomes. For syntax, children whose parents’ input substantially increased in syntactic complexity over time were predicted to have the highest outcomes. The optimal sequence of parents’ linguistic input for supporting children’s language acquisition thus varies for vocabulary and for syntax.

Carrazza, C., Wakefield, E. M., Hemani-Lopez, N., Plath, K., & Goldin-Meadow, S. (2021). Children integrate speech and gesture across a wider temporal window than speech and action when learning a math concept. Cognition, 210, 104604. https://doi.org/10.1016/j.cognition.2021.104604

It is well established that gesture facilitates learning, but understanding the best way to harness gesture and how gesture helps learners are still open questions. Here, we consider one of the properties that may make gesture a powerful teaching tool: its temporal alignment with spoken language. Previous work shows that the simultaneity of speech and gesture matters when children receive instruction from a teacher (Congdon et al., 2017). In Study 1, we ask whether simultaneity also matters when children themselves are the ones who produce speech and gesture strategies. Third-graders (N = 75) were taught to produce one strategy in speech and one strategy in gesture for correctly solving mathematical equivalence problems; they were told to produce these strategies either simultaneously (S + G) or sequentially (S➔G; G➔S) during a training session. Learning was assessed immediately after training, at a 24-h follow-up, and at a 4-week follow-up. Children showed evidence of learning and retention across all three conditions. Study 2 was conducted to explore whether it was the special relationship between speech and gesture that helped children learn. Third-graders (N = 87) were taught an action strategy instead of a gesture strategy; all other aspects of the design were the same. Children again learned across all three conditions. But only children who produced simultaneous speech and action retained what they had learned at the follow-up sessions. Results have implications for why gesture is beneficial to learners and, taken in relation to previous literature, reveal differences in the mechanisms by which doing versus seeing gesture facilitates learning.

Brentari, D., Horton, L., & Goldin-Meadow, S. (2021). Crosslinguistic similarity and variation in the simultaneous morphology of sign languages. The Linguistic Review (ahead-of-print).

Two differences between signed and spoken languages that have been widely discussed in the literature are: the degree to which morphology is expressed simultaneously (rather than sequentially), and the degree to which iconicity is used, particularly in predicates of motion and location, often referred to as classifier predicates. In this paper we analyze a set of properties marking agency and number in four sign languages for their crosslinguistic similarities and differences regarding simultaneity and iconicity. Data from American Sign Language (ASL), Italian Sign Language (LIS), British Sign Language (BSL), and Hong Kong Sign Language (HKSL) are analyzed. We find that iconic, cognitive, phonological, and morphological factors contribute to the distribution of these properties. We conduct two analyses—one of verbs and one of verb phrases. The analysis of classifier verbs shows that, as expected, all four languages exhibit many common formal and iconic properties in the expression of agency and number. The analysis of classifier verb phrases (VPs)—particularly, multiple-verb predicates—reveals (a) that it is grammatical in all four languages to express agency and number within a single verb, but also (b) that there is crosslinguistic variation in expressing agency and number across the four languages. We argue that this variation is motivated by how each language prioritizes, or ranks, several constraints. The rankings can be captured in Optimality Theory. Some constraints in this account, such as a constraint to be redundant, are found in all information systems and might be considered non-linguistic; however, the variation in constraint ranking in verb phrases reveals the grammatical and arbitrary nature of linguistic systems.

Ping, R., Church, R. B., Decatur, M., Larson, S. W., Zinchenko, E., & Goldin-Meadow, S. (2021). Unpacking the gestures of chemistry learners: What the hands tell us about correct and incorrect conceptions of stereochemistry, Discourse Processes, 58(3), 213-232, DOI: 10.1080/0163853X.2020.1839343.

In this study, adults naïve to organic chemistry drew stereoisomers of molecules and explained their drawings. From these explanations, we identified nine strategies that participants expressed. Five of the nine strategies referred to properties of the molecule that were explanatorily irrelevant to solving the problem; the remaining four referred to properties that were explanatorily relevant to the solution. For each problem, we tallied which of the nine strategies were expressed within the explanation for that problem and determined whether each strategy was expressed in speech only, gesture only, or in both speech and gesture within the explanation. After these explanations, all participants watched the experimenter deliver a 2-minute training module on stereoisomers. Following the training, participants repeated the drawing + explanation task on six new problems. The number of relevant strategies that participants expressed in speech (alone or with gesture) before training did not predict their post-training scores. However, the number of relevant strategies participants expressed in gesture only before training did predict their post-training scores. Conveying relevant information about stereoisomers uniquely in gesture prior to a brief training is thus a good index of who is most likely to learn from the training. We suggest that gesture reveals explanatorily relevant implicit knowledge that reflects (and perhaps even promotes) acquisition of new understanding.

2019

Fenlon, J., Cooperrider, K., Keane, J., Brentari, D., & Goldin-Meadow, S. (2019). Comparing sign language and gesture: Insights and pointing. Glossa: A Journal of General Linguistics, 4(1): 2, 1-26, DOI:10.5334/gjgl.499.

How do the signs of sign language differ from the gestures that speakers produce when they talk? We address this question by focusing on pointing. Pointing signs play an important role in sign languages, with some types functioning like pronouns in spoken language (e.g., Sandler & Lillo-Martin 2006). Pointing gestures, in contrast, are not usually described in linguistic terms even though they play an important role in everyday communication. Researchers have focused on the similarities between pointing in signers and speakers (e.g., Cormier et al. 2013), but no studies to date have directly compared the two at a fine-grained level. In this paper, we compare the formational features of 574 pointing signs produced by British Sign Language signers (BSL Corpus) and 543 pointing gestures produced by American English speakers (Tavis Smiley Corpus) with respect to three characteristics typically associated with language systems: conventionalization, reduction, and integration. We find that, although pointing signs and pointing gestures both exhibit regularities of form, pointing signs are more consistent across uses, more reduced, and more integrated into prosodic structure than pointing gestures. Pointing is thus constrained differently when it is produced along with a signed language vs. when it is produced along with a spoken language; we discuss possible sources of these constraints.


Zhen, A., Van Hedger, S., Heald, S., Goldin-Meadow, S., & Tian, X. (2019). Manual directional gestures facilitate cross-modal perceptual learning. Cognition, 187, 178-187, DOI:10.1016/j.cognition.2019.03.004.

Action and perception interact in complex ways to shape how we learn. In the context of language acquisition, for example, hand gestures can facilitate learning novel sound-to-meaning mappings that are critical to successfully understanding a second language. However, the mechanisms by which motor and visual information influence auditory learning are still unclear. We hypothesize that the extent to which cross-modal learning occurs is directly related to the common representational format of perceptual features across motor, visual, and auditory domains (i.e., the extent to which changes in one domain trigger similar changes in another). Furthermore, to the extent that information across modalities can be mapped onto a common representation, training in one domain may lead to learning in another domain. To test this hypothesis, we taught native English speakers Mandarin tones using directional pitch gestures. Watching or performing gestures that were congruent with pitch direction (e.g., an up gesture moving up, and a down gesture moving down, in the vertical plane) significantly enhanced tone category learning, compared to auditory-only training. Moreover, when gestures were rotated (e.g., an up gesture moving away from the body, and a down gesture moving toward the body, in the horizontal plane), performing the gestures resulted in significantly better learning, compared to watching the rotated gestures. Our results suggest that when a common representational mapping can be established between motor and sensory modalities, auditory perceptual learning is likely to be enhanced.


Gleitman, L., Senghas, A., Flaherty, M., Coppola, M., & Goldin-Meadow, S. (2019). The emergence of the formal category “symmetry” in a new sign language. Proceedings of the National Academy of Sciences, 116(24), 11705-11711, DOI: 10.1073/pnas.1819872116.

Logical properties such as negation, implication, and symmetry, despite the fact that they are foundational and threaded through the vocabulary and syntax of known natural languages, pose a special problem for language learning. Their meanings are much harder to identify and isolate in the child’s everyday interaction with referents in the world than concrete things (like spoons and horses) and happenings and acts (like running and jumping) that are much more easily identified, and thus more easily linked to their linguistic labels (spoon, horse, run, jump). Here we concentrate attention on the category of symmetry [a relation R is symmetrical if and only if (iff) for all x, y: if R(x,y), then R(y,x)], expressed in English by such terms as similar, marry, cousin, and near. After a brief introduction to how symmetry is expressed in English and other well-studied languages, we discuss the appearance and maturation of this category in Nicaraguan Sign Language (NSL). NSL is an emerging language used as the primary, daily means of communication among a population of deaf individuals who could not acquire the surrounding spoken language because they could not hear it, and who were not exposed to a preexisting sign language because there was none available in their community. Remarkably, these individuals treat symmetry, in both semantic and syntactic regards, much as do learners exposed to a previously established language. These findings point to deep human biases in the structures underpinning and constituting human language.


Wakefield, E. M., Congdon, E. L., Novack, M. A., Goldin-Meadow, S., & James, K. H. (2019). Learning math by hand: The neural effects of gesture-based instruction in 8-year-old children. Attention, Perception, & Psychophysics, 1-11, DOI: 10.3758/s13414-019-01755-y.

Producing gesture can be a powerful tool for facilitating learning. This effect has been replicated across a variety of academic domains, including algebra, chemistry, geometry, and word learning. Yet the mechanisms underlying the effect are poorly understood. Here we address this gap using functional magnetic resonance imaging (fMRI). We examine the neural correlates underlying how children solve mathematical equivalence problems learned with the help of either a speech + gesture strategy or a speech-alone strategy. Children who learned through speech + gesture were more likely to recruit motor regions when subsequently solving problems during a scan than children who learned through speech alone. This suggests that gesture promotes learning, at least in part, because it is a type of action. In an exploratory analysis, we also found that children who learned through speech + gesture showed subthreshold activation in regions outside the typical action-learning network, corroborating behavioral findings suggesting that the mechanisms supporting learning through gesture and action are not identical. This study is one of the first to explore the neural mechanisms of learning through gesture.


Wakefield, E. M., Foley, A. E., Ping, R., Villarreal, J. N., Goldin-Meadow, S., & Levine, S. C. (2019). Breaking down gesture and action in mental rotation: Understanding the components of movement that promote learning. Developmental Psychology, 1-13, DOI: 10.1037/dev0000697.

Past research has shown that children’s mental rotation skills are malleable and can be improved through action experience—physically rotating objects— or gesture experience—showing how objects could rotate (e.g., Frick, Ferrara, & Newcombe, 2013; Goldin-Meadow et al., 2012; Levine, Goldin-Meadow, Carlson, & Hemani-Lopez, 2018). These two types of movements both involve rotation, but differ on a number of components. Here, we break down action and gesture into components—feeling an object during rotation, using a grasping handshape during rotation, tracing the trajectory of rotation, and seeing the outcome of rotation—and ask, in two studies, how training children on a mental rotation task through different combinations of these components impacts learning gains across a delay. Our results extend the literature by showing that, although all children benefit from training experiences, some training experiences are more beneficial than others, and the pattern differs by sex. Not seeing the outcome of rotation emerged as a crucial training component for both males and females. However, not seeing the outcome turned out to be the only necessary component for males (who showed equivalent gains when imagining or gesturing object rotation). Females, in contrast, only benefitted from not seeing the outcome when it involved producing a relevant motor movement (i.e., when gesturing the rotation of the object and not simply imagining the rotation of the object). Results are discussed in relation to potential mechanisms driving these effects and practical implications.


Rissman, L., Woodward, A., & Goldin-Meadow, S. (2019). Occluding the face diminishes the conceptual accessibility of an animate agent. Language, Cognition, and Neuroscience, 34(3), 271-288, DOI: 10.1080/23273798.2018.1525495.

The language that people use to describe events reflects their perspective on the event. This linguistic encoding is influenced by conceptual accessibility, particularly whether individuals in the event are animate or agentive – animates are more likely than inanimates to appear as Subject of a sentence, and agents are more likely than patients to appear as Subject. We tested whether perceptual aspects of a scene can override these two conceptual biases when they are aligned: whether a visually prominent inanimate patient will be selected as Subject when pitted against a visually backgrounded animate agent. We manipulated visual prominence by contrasting scenes in which the face/torso/hand of the agent were visible vs. scenes in which only the hand was visible. Events with only a hand were more often associated with passive descriptions, in both production and comprehension tasks. These results highlight the power of visual prominence to guide how people conceptualise events.

2018

Gibson, D. J., Gunderson, E. A., Spaepen, E., Levine, S. C., & Goldin-Meadow, S. (2018). Number gestures predict learning of number words. Developmental Science, 22(3), 1-14, DOI: 10.1111/desc.12791.

When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture‐speech “mismatches” on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture‐speech mismatches when identifying numbers at the cusp of their knowledge, for example, a child incorrectly labels a set of two objects with the word “three” and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number‐word instruction. We used the Give‐a‐Number task to measure number knowledge in 47 children (Mage = 4.1 years, SD = 0.58), and used the What’s on this Card task to assess whether children produced gesture‐speech mismatches above their knower level. Children who were early in their number learning trajectories (“one‐knowers” and “two‐knowers”) were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects; or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture‐speech mismatches at pretest. The findings suggest that numerical gesture‐speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number‐learning.


Demir-Lira, Ö. E., Applebaum, L. R., Goldin-Meadow, S., & Levine, S. C. (2018). Parents’ early book reading to children: Relation to children’s later language and literacy outcomes controlling for other parent language input. Developmental Science, e12764, 1-16, DOI: 10.1111/desc.12764.

It is widely believed that reading to preschool children promotes their language and literacy skills. Yet, whether early parent–child book reading is an index of generally rich linguistic input or a unique predictor of later outcomes remains unclear. To address this question, we asked whether naturally occurring parent–child book reading interactions between 1 and 2.5 years-of-age predict elementary school language and literacy outcomes, controlling for the quantity of other talk parents provide their children, family socioeconomic status, and children’s own early language skill. We find that the quantity of parent–child book reading interactions predicts children’s later receptive vocabulary, reading comprehension, and internal motivation to read (but not decoding, external motivation to read, or math skill), controlling for these other factors. Importantly, we also find that parent language that occurs during book reading interactions is more sophisticated than parent language outside book reading interactions in terms of vocabulary diversity and syntactic complexity.


Novack, M., Filippi, C. A., Goldin-Meadow, S., & Woodward, A. L. (2018). Actions speak louder than gestures when you are 2 years old. Developmental Psychology, 54(10), 1809-1821, DOI: 10.1037/dev0000553.

Interpreting iconic gestures can be challenging for children. Here, we explore the features and functions of iconic gestures that make them more challenging for young children to interpret than instrumental actions. In Study 1, we show that 2.5-year-olds are able to glean size information from handshape in a simple gesture, although their performance is significantly worse than 4-year-olds’. Studies 2 to 4 explore the boundary conditions of 2.5-year-olds’ gesture understanding. In Study 2, 2.5-year-old children have an easier time interpreting size information in hands that reach than in hands that gesture. In Study 3, we tease apart the perceptual features and functional objectives of reaches and gestures. We created a context in which an action has the perceptual features of a reach (extending the hand toward an object) but serves the function of a gesture (the object is behind a barrier and not obtainable; the hand thus functions to represent, rather than reach for, the object). In this context, children struggle to interpret size information in the hand, suggesting that gesture’s representational function (rather than its perceptual features) is what makes it hard for young children to interpret. A distance control (Study 4) in which a person holds a box in gesture space (close to the body) demonstrates that children’s difficulty interpreting static gesture cannot be attributed to the physical distance between a gesture and its referent. Together, these studies provide evidence that children’s struggle to interpret iconic gesture may stem from its status as representational action.


Wakefield, E., Hall, C., James, J., & Goldin-Meadow, S. (2018). Gesture for generalization: Gesture facilitates flexible learning of words for actions on objects. Developmental Science, 21(5), 1-14, DOI: 10.1111/desc.12656.

Verb learning is difficult for children (Gentner, 1982), partially because children have a bias to associate a novel verb not only with the action it represents, but also with the object on which it is learned (Kersten & Smith, 2002). Here we investigate how well 4- and 5-year-old children (N = 48) generalize novel verbs for actions on objects after doing or seeing the action (e.g., twisting a knob on an object) or after doing or seeing a gesture for the action (e.g., twisting in the air near an object). We find not only that children generalize more effectively through gesture experience, but also that this ability to generalize persists after a 24-hour delay.


Goldin-Meadow, S. (2018). Taking a hands-on approach to learning. Policy Insights from the Behavioral and Brain Sciences, 5(2), 163–170, DOI: 10.1177/2372732218785393.

When people talk, they gesture. These gestures often convey substantive information that is related, but not always identical, to the information conveyed in speech. Gesture thus offers listeners insight into a speaker’s unspoken cognition. But gesture can do more than reflect cognition—it can play a role in changing cognition and, as a result, contribute to learning. This article has two goals: (a) to make the case that gesture can promote growth early in development when children are learning language and also later in development when children learn about math, and (b) to explore the implications of these findings for practice—how gesture can be recruited in everyday teaching situations by parents and teachers. Because our hands are always with us and require little infrastructure to implement in teaching situations, gesture has the potential to boost learning in all children and thus perhaps reduce social inequalities in achievement in language and math.


Lu, J. & Goldin-Meadow, S. (2018). Creating images with the stroke of a hand: Depiction of size and shape in sign language. Frontiers in Psychology, 9(1276), 1-15, DOI: 10.3389/fpsyg.2018.01276.

In everyday communication, not only do speakers describe, but they also depict. When depicting, speakers take on the role of other people and quote their speech or imitate their actions. In previous work, we developed a paradigm to elicit depictions in speakers. Here we apply this paradigm to signers to explore depiction in the manual modality, with a focus on depiction of the size and shape of objects. We asked signers to describe two objects that could easily be characterized using lexical signs (Descriptive Elicitation), and objects that were more difficult to distinguish using lexical signs, thus encouraging the signers to depict (Depictive Elicitation). We found that signers used two types of depicting constructions (DCs), conventional DCs and embellished DCs. Both conventional and embellished DCs make use of categorical handshapes to identify objects. But embellished DCs also capture imagistic aspects of the objects, either by adding a tracing movement to gradiently depict the contours of the object, or by adding a second handshape to depict the configuration of the object. Embellished DCs were more frequent in the Depictive Elicitation context than in the Descriptive Elicitation context; lexical signs showed the reverse pattern; and conventional DCs were equally likely in the two contexts. In addition, signers produced iconic mouth movements, which are temporally and semantically integrated with the signs they accompany and depict the size and shape of objects, more often with embellished DCs than with either lexical signs or conventional DCs. Embellished DCs share a number of properties with embedded depictions, constructed action, and constructed dialog in signed and spoken languages. We discuss linguistic constraints on these gradient depictions, focusing on how handshape constrains the type of depictions that can be formed, and the function of depiction in everyday discourse.


Spaepen, E., Gunderson, E., Gibson, D., Goldin-Meadow, S., & Levine, S. (2018). Meaning before order: Cardinal principle knowledge predicts improvement in understanding the successor principle and exact ordering. Cognition, 180, 59-81, DOI: 10.1016/j.cognition.2018.06.012.

Learning the cardinal principle (the last word reached when counting a set represents the size of the whole set) is a major milestone in early mathematics. But researchers disagree about the relationship between cardinal principle knowledge and other concepts, including how counting implements the successor function (for each number word N representing a cardinal value, the next word in the count list represents the cardinal value N + 1) and exact ordering (cardinal values can be ordered such that each is one more than the value before it and one less than the value after it). No studies have investigated acquisition of the successor principle and exact ordering over time, and in relation to cardinal principle knowledge. An open question thus remains: Is the cardinal principle a “gatekeeper” concept children must acquire before learning about succession and exact ordering, or can these concepts develop separately? Preschoolers (N = 127) who knew the cardinal principle (CP-knowers) or who knew the cardinal meanings of number words up to “three” or “four” (3–4-knowers) completed succession and exact ordering tasks at pretest and posttest. In between, children completed one of two trainings: counting only versus counting, cardinal labeling, and comparison. CP-knowers started out better than 3–4-knowers on succession and exact ordering. Controlling for this disparity, we found that CP-knowers improved over time on succession and exact ordering; 3–4-knowers did not. Improvement did not differ between the two training conditions. We conclude that children can learn the cardinal principle without understanding succession or exact ordering and hypothesize that children must understand the cardinal principle before learning these concepts.


Cooperrider, K., Abner, N., & Goldin-Meadow, S. (2018). The palm-up puzzle: Meanings and origins of a widespread form in gesture and sign. Frontiers in Communication, 3(23), 1-16, DOI: 10.3389/fcomm.2018.00023.

During communication, speakers commonly rotate their forearms so that their palms turn upward. Yet despite more than a century of observations of such palm-up gestures, their meanings and origins have proven difficult to pin down. We distinguish two gestures within the palm-up form family: the palm-up presentational and the palm-up epistemic. The latter is a term we introduce to refer to a variant of the palm-up that prototypically involves lateral separation of the hands. This gesture—our focus—is used in speaking communities around the world to express a recurring set of epistemic meanings, several of which seem quite distinct. More striking, a similar palm-up form is used to express the same set of meanings in many established sign languages and in emerging sign systems. Such observations present a two-part puzzle: the first part is how this set of seemingly distinct meanings for the palm-up epistemic are related, if indeed they are; the second is why the palm-up form is so widely used to express just this set of meanings. We propose a network connecting the different attested meanings of the palm-up epistemic, with a kernel meaning of absence of knowledge, and discuss how this proposal could be evaluated through additional developmental, corpus-based, and experimental research. We then assess two contrasting accounts of the connection between the palm-up form and this proposed meaning network, and consider implications for our understanding of the palm-up form family more generally. By addressing the palm-up puzzle, we aim, not only to illuminate a widespread form found in gesture and sign, but also to provide insights into fundamental questions about visual-bodily communication: where communicative forms come from, how they take on new meanings, and how they become integrated into language in signing communities.


Wakefield, E., Novack, M.A., Congdon, E.L., Franconeri, S., & Goldin-Meadow, S. (2018). Gesture helps learners learn, but not merely by guiding their visual attention. Developmental Science, 21(6), DOI: 10.1111/desc.12664.

Teaching a new concept through gestures—hand movements that accompany speech—facilitates learning above‐and‐beyond instruction through speech alone (e.g., Singer & Goldin‐Meadow, 2005). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism—gesture’s ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture—they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor’s speech (i.e., follow along with speech) than children who watch the no‐gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning—following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture’s beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.


Gunderson, E. A., Sorhagen, N., Gripshover, S. J., Dweck, C. S., Goldin-Meadow, S., & Levine, S. C. (2018). Parent praise to toddlers predicts fourth grade academic achievement via children’s incremental mindsets. Developmental Psychology, 54(3), 397-409. DOI: 10.1037/dev0000444.

In a previous study, parent–child praise was observed in natural interactions at home when children were 1, 2, and 3 years of age. Children who received a relatively high proportion of process praise (e.g., praise for effort and strategies) showed stronger incremental motivational frameworks, including a belief that intelligence can be developed and a greater desire for challenge, when they were in 2nd or 3rd grade (Gunderson et al., 2013). The current study examines these same children’s (n = 53) academic achievement 1 to 2 years later, in 4th grade. Results provide the first evidence that process praise to toddlers predicts children’s academic achievement (in math and reading comprehension) 7 years later, in elementary school, via their incremental motivational frameworks. Further analysis of these motivational frameworks shows that process praise had its effect on fourth grade achievement through children’s trait beliefs (e.g., believing that intelligence is fixed vs. malleable), rather than through their learning goals (e.g., preference for easy vs. challenging tasks). Implications for the socialization of motivation are discussed.


Brooks, N., Barner, D., Frank, M., & Goldin-Meadow, S. (2018). The role of gesture in supporting mental representations: The case of mental abacus arithmetic. Cognitive Science, 42(2), 554-575. DOI: 10.1111/cogs.12527.

People frequently gesture when problem-solving, particularly on tasks that require spatial transformation. Gesture often facilitates task performance by interacting with internal mental representations, but how this process works is not well understood. We investigated this question by exploring the case of mental abacus (MA), a technique in which users not only imagine moving beads on an abacus to compute sums, but also produce movements in gestures that accompany the calculations. Because the content of MA is transparent and readily manipulated, the task offers a unique window onto how gestures interface with mental representations. We find that the size and number of MA gestures reflect the length and difficulty of math problems. Also, by selectively interfering with aspects of gesture, we find that participants perform significantly worse on MA under motor interference, but that perceptual feedback is not critical for success on the task. We conclude that premotor processes involved in the planning of gestures are critical to mental representation in MA.


Levine, S., Goldin-Meadow, S., Carlson, M., & Hemani-Lopez, N. (2018). Mental transformation skill in young children: The role of concrete and abstract motor training. Cognitive Science, 42, 1207–1228, DOI: 10.1111/cogs.12603.

We examined the effects of three different training conditions, all of which involve the motor system, on kindergarteners’ mental transformation skill. We focused on three main questions. First, we asked whether training that involves making a motor movement that is relevant to the mental transformation—either concretely through action (action training) or more abstractly through gestural movements that represent the action (move-gesture training)—resulted in greater gains than training using motor movements irrelevant to the mental transformation (point-gesture training). We tested children prior to training, immediately after training (posttest), and 1 week after training (retest), and we found greater improvement in mental transformation skill in both the action and move-gesture training conditions than in the point-gesture condition, at both posttest and retest. Second, we asked whether the total gain made by retest differed depending on the abstractness of the movement-relevant training (action vs. move-gesture), and we found that it did not. Finally, we asked whether the time course of improvement differed for the two movement-relevant conditions, and we found that it did—gains in the action condition were realized immediately at posttest, with no further gains at retest; gains in the move-gesture condition were realized throughout, with comparable gains from pretest-to-posttest and from posttest-to-retest. Training that involves movement, whether concrete or abstract, can thus benefit children’s mental transformation skill. However, the benefits unfold differently over time—the benefits of concrete training unfold immediately after training (online learning); the benefits of more abstract training unfold in equal steps immediately after training (online learning) and during the intervening week with no additional training (offline learning). These findings have implications for the kinds of instruction that can best support spatial learning.


Congdon, E., Novack, M., & Goldin-Meadow, S. (2018). Gesture in Experimental Studies: How Videotape Technology Can Advance Psychological Theory. Organizational Research Methods, 21(2), 489-499, DOI:10.1177/1094428116654548.

Video recording technology allows for the discovery of psychological phenomena that might otherwise go unnoticed. We focus here on gesture as an example of such a phenomenon. Gestures are movements of the hands or body that people spontaneously produce while speaking or thinking through a difficult problem. Despite their ubiquity, speakers are not always aware that they are gesturing, and listeners are not always aware that they are observing gesture. We review how video technology has facilitated major insights within the field of gesture research by allowing researchers to capture, quantify, and better understand these transient movements. We propose that gesture, which can be easily missed if it is not a researcher’s focus, has the potential to affect thinking and learning in the people who produce it, as well as in the people who observe it, and that it can alter the communicative context of an experiment or social interaction. Finally, we discuss the challenges of using video technology to capture gesture in psychological studies, and we discuss opportunities and suggestions for making use of this rich source of information both within the field of developmental psychology and within the field of organizational psychology. 


Uccelli, P., Demir-Lira, O. E., Rowe, M., Levine, S., & Goldin-Meadow, S. (2018). Children’s Early Decontextualized Talk Predicts Academic Language Proficiency in Midadolescence. Child Development, DOI: 10.1111/cdev.13034.

This study examines whether children’s decontextualized talk—talk about nonpresent events, explanations, or pretend—at 30 months predicts seventh-grade academic language proficiency (age 12). Academic language (AL) refers to the language of school texts. AL proficiency has been identified as an important predictor of adolescent text comprehension. Yet research on precursors to AL proficiency is scarce. Child decontextualized talk is known to be a predictor of early discourse development, but its relation to later language outcomes remains unclear. Forty-two children and their caregivers participated in this study. The proportion of child talk that was decontextualized emerged as a significant predictor of seventh-grade AL proficiency, even after controlling for socioeconomic status, parent decontextualized talk, child total words, child vocabulary, and child syntactic comprehension.


Demir-Lira, O.E., Asaridou, S., Beharelle, A.R., Holt, A., Goldin-Meadow, S., & Small, S. (2018). Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing. Developmental Science, 21(5), DOI:10.1111/desc.12648.

Gesture is an integral part of children’s communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture–speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., “pet” + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., “bird” + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture– speech integration in children overlaps with—but is broader than—the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children’s increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.

2017

Goldin-Meadow, S., & Brentari, D. (2017). Gesture, sign, and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences. DOI: 10.1017/S0140525X1600039X

How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.


Cartmill, E., Rissman, L., Novack, M., & Goldin-Meadow, S. (2017). The development of iconicity in children’s co-speech gesture and homesign. Language, Interaction and Acquisition, 8(1). DOI: 10.1075/lia.8.1.03car

Gesture can illustrate objects and events in the world by iconically reproducing elements of those objects and events. Children do not begin to express ideas iconically, however, until after they have begun to use conventional forms. In this paper, we investigate how children’s use of iconic resources in gesture relates to the developing structure of their communicative systems. Using longitudinal video corpora, we compare the emergence of manual iconicity in hearing children who are learning a spoken language (co-speech gesture) to the emergence of manual iconicity in a deaf child who is creating a manual system of communication (homesign). We focus on one particular element of iconic gesture – the shape of the hand (handshape). We ask how handshape is used as an iconic resource in 1–5-year-olds, and how it relates to the semantic content of children’s communicative acts. We find that patterns of handshape development are broadly similar between co-speech gesture and homesign, suggesting that the building blocks underlying children’s ability to iconically map manual forms to meaning are shared across different communicative systems: those where gesture is produced alongside speech, and those where gesture is the primary mode of communication.


Brentari, D., & Goldin-Meadow, S. (2017). Language emergence. Annual Review of Linguistics, 3, 363-388. DOI: 10.1146/annurev-linguistics-011415-040743

Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these co-speech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.


Cooperrider, K., & Goldin-Meadow, S. (2017). When gesture becomes analogy. Topics in Cognitive Science, 1-17. DOI: 10.1111/tops.12276

Analogy researchers do not often examine gesture, and gesture researchers do not often borrow ideas from the study of analogy. One borrowable idea from the world of analogy is the importance of distinguishing between attributes and relations. Gentner (1983, 1988) observed that some metaphors highlight attributes and others highlight relations, and called the latter analogies. Mirroring this logic, we observe that some metaphoric gestures represent attributes and others represent relations, and propose to call the latter analogical gestures. We provide examples of such analogical gestures and show how they relate to the categories of iconic and metaphoric gestures described previously. Analogical gestures represent different types of relations and different degrees of relational complexity, and sometimes cohere into larger analogical models. Treating analogical gestures as a distinct phenomenon prompts new questions and predictions, and illustrates one way that the study of gesture and the study of analogy can be mutually informative.


Ozcaliskan, S., Lucero, C., & Goldin-Meadow, S. (2017). Blind speakers show language-specific patterns in co-speech gesture but not silent gesture. Cognitive Science, 1-14. DOI: 10.1111/cogs.12502

Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.


Brookshire, G., Lu, J., Nusbaum, H., Goldin-Meadow, S., & Casasanto, D. (2017). Visual cortex entrains to sign language. PNAS. DOI: 10.1073/pnas.1620350114

Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ∼1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.


Wakefield, E. M., Novack, M., & Goldin-Meadow, S. (2017). Unpacking the ontogeny of gesture understanding: How movement becomes meaningful across development. Child Development, 1-16. DOI: 10.1111/cdev.12817

Gestures, hand movements that accompany speech, affect children’s learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects (but not touching them), or (c) an actor moving her hands in the absence of objects. Participants across all ages were equally able to identify actions on objects as goal directed, but the ability to identify empty-handed movements as representational actions (i.e., as gestures) increased with age and was influenced by the presence of objects, especially in older children.


Congdon, E. L., Novack, M. A., Brooks, N., Hemani-Lopez, N., O’Keefe, L., & Goldin-Meadow, S. (2017). Better together: Simultaneous presentation of speech and gesture in math instruction supports generalization and retention. Learning and Instruction, 50, 65-74. DOI: 10.1016/j.learninstruc.2017.03.005

When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.


Rissman, L., & Goldin-Meadow, S. (2017). The development of causal structure without a language model. Language Learning and Development. DOI: 10.1080/15475441.2016.1254633

Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not exposed to structured linguistic input could create linguistic devices for encoding causation and, if so, whether the emergence of this causal language would follow a trajectory similar to the one observed for children learning language from linguistic input. We show that the child in our study did develop causation-encoding morphology, but only after initially using verbs that lacked internal causal structure. These results suggest that the ability to encode causation linguistically can emerge in the absence of a language model, and that exposure to linguistic input is not the only factor guiding children from one stage to the next in their production of causal language.


Goldin-Meadow, S. (2017). What the hands can tell us about language emergence. Psychonomic Bulletin & Review, 24(1), 213-218. DOI: 10.3758/s13423-016-1074-x

Why, in all cultures in which hearing is possible, has language become the province of speech and the oral modality? I address this question by widening the lens with which we look at language to include the manual modality. I suggest that human communication is most effective when it makes use of two types of formats––a discrete and segmented code, produced simultaneously along with an analog and mimetic code. The segmented code is supported by both the oral and the manual modalities. However, the mimetic code is more easily handled by the manual modality. We might then expect mimetic encoding to be done preferentially in the manual modality (gesture), leaving segmented encoding to the oral modality (speech). This argument rests on two assumptions: (1) The manual modality is as good at segmented encoding as the oral modality; sign languages, established and idiosyncratic, provide evidence for this assumption. (2) Mimetic encoding is important to human communication and best handled by the manual modality; co-speech gesture provides evidence for this assumption. By including the manual modality in two contexts––when it takes on the primary function of communication (sign language), and when it takes on a complementary communicative function (gesture)––in our analysis of language, we gain new perspectives on the origins and continuing development of language.


Goldin-Meadow, S., & Yang, C. (2017). Statistical evidence that a child can create a combinatorial linguistic system without external linguistic input: Implications for language evolution. Neuroscience & Biobehavioral Reviews, 81(Pt B), 150-157. DOI: 10.1016/j.neubiorev.2016.12.016

Can a child who is not exposed to a model for language nevertheless construct a communication system characterized by combinatorial structure? We know that deaf children whose hearing losses prevent them from acquiring spoken language, and whose hearing parents have not exposed them to sign language, use gestures, called homesigns, to communicate. In this study, we call upon a new formal analysis that characterizes the statistical profile of grammatical rules and, when applied to child language data, finds that young children’s language is consistent with a productive grammar rather than rote memorization of specific word combinations in caregiver speech. We apply this formal analysis to homesign, and find that homesign can also be characterized as having productive grammar. Our findings thus provide evidence that a child can create a combinatorial linguistic system without external linguistic input, and offer unique insight into how the capacity of language evolved as part of human biology.


Novack, M., & Goldin-Meadow, S. (2017). Gesture as representational action: A paper about function. Psychonomic Bulletin & Review, 24, 652-665. DOI: 10.3758/s13423-016-1145-z

A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal––that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495-514, 2008)––has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon and that is to understand its function. A phenomenon’s function is its purpose rather than its precipitating cause––the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.

2016

Cooperrider, K., Gentner, D., & Goldin-Meadow, S. (2016). Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures. Cognitive Research: Principles and Implications, 1(28). DOI: 10.1186/s41235-016-0024-5

How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase) and sometimes more complex relational structures (e.g., negative feedback). Moreover, over the course of their explanations, participants’ gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.


Andric, M., Goldin-Meadow, S., Small, S., & Hasson, U. (2016). Repeated movie viewings produce similar local activity patterns but different network configurations. NeuroImage, 142, 613-627. DOI: 10.1016/j.neuroimage.2016.07.061

People seek novelty in everyday life, but they also enjoy viewing the same movies or reading the same novels a second time. What changes and what stays the same when re-experiencing a narrative? In examining this question with functional neuroimaging, we found that brain activity reorganizes in a hybrid, scale-dependent manner when individuals processed the same audiovisual narrative a second time. At the most local level, sensory systems (occipital and temporal cortices) maintained a similar temporal activation profile during the two viewings. Nonetheless, functional connectivity between these same lateral temporal regions and other brain regions was stronger during the second viewing. Furthermore, at the level of whole-brain connectivity, we found a significant rearrangement of network partition structure: lateral temporal and inferior frontal regions clustered together during the first viewing but merged within a fronto-parietal cluster in the second. Our findings show that repetition maintains local activity profiles. However, at the same time, it is associated with multiple network-level connectivity changes on larger scales, with these changes strongly involving regions considered core to language processing.


Goldin-Meadow, S. (2016). Using our hands to change our minds. WIREs Cognitive Science. DOI: 10.1002/wcs.1368

Jean Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how children understand the task at each point, but also about how they progress from one point to the next. This article examines a routine behavior that Piaget overlooked—the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. Gesture can do more than reflect ideas—it can also change them. Observing the gestures that others produce can change a learner’s ideas, as can producing one’s own gestures. In this sense, gesture behaves like any other action. But gesture differs from many other actions in that it also promotes generalization of new ideas. Gesture represents the world rather than directly manipulating the world (gesture does not move objects around) and is thus a special kind of action. As a result, the mechanisms by which gesture and action promote learning may differ. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.


Asaridou, S., Demir-Lira, O. E., Goldin-Meadow, S., & Small, S. L. (2016). The pace of vocabulary growth during preschool predicts cortical structure at school age. Neuropsychologia, 98, 13-23. DOI: 10.1016/j.neuropsychologia.2016.05.018

Children vary greatly in their vocabulary development during preschool years. Importantly, the pace of this early vocabulary growth predicts vocabulary size at school entrance. Despite its importance for later academic success, not much is known about the relation between individual differences in early vocabulary development and later brain structure and function. Here we examined the association between vocabulary growth in children, as estimated from longitudinal measurements from 14 to 58 months, and individual differences in brain structure measured in 3rd and 4th grade (8–10 years old). Our results show that the pace of vocabulary growth uniquely predicts cortical thickness in the left supramarginal gyrus. Probabilistic tractography revealed that this region is directly connected to the inferior frontal gyrus (pars opercularis) and the ventral premotor cortex, via what is most probably the superior longitudinal fasciculus III. Our findings demonstrate, for the first time, the relation between the pace of vocabulary learning in children and a specific change in the structure of the cerebral cortex, specifically, cortical thickness in the left supramarginal gyrus. They also highlight the fact that differences in the pace of vocabulary growth are associated with the dorsal language stream, which is thought to support speech perception and articulation.


Cooperrider, K., Gentner, D., & Goldin-Meadow, S. (2016). Gesture reveals spatial analogies during complex relational reasoning. In Proceedings of the 38th Annual Meeting of the Cognitive Science Society (pp. 692-697). Austin, TX: Cognitive Science Society.

How do people think about complex relational phenomena like the behavior of the stock market? Here we hypothesize that people reason about such phenomena in part by creating spatial analogies, and we explore this possibility by examining people’s spontaneous gestures. Participants read a written lesson describing positive and negative feedback systems and then explained the key differences between them. Though the lesson was highly abstract and free of concrete imagery, participants produced spatial gestures in abundance during their explanations. These spatial gestures, despite being fundamentally abstract, showed clear regularities and often built off of each other to form larger spatial models of relational structure—that is, spatial analogies. Importantly, the spatial richness and systematicity revealed in participants’ gestures was largely divorced from spatial language. These results provide evidence for the spontaneous use of spatial analogy during complex relational reasoning.


Novack, M. A., Wakefield, E. M., Congdon, E. L., Franconeri, S., & Goldin-Meadow, S. (2016). There is more to gesture than meets the eye: Visual attention to gesture’s referents cannot account for its facilitative effects during math instruction. In Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2141-2146). Austin, TX: Cognitive Science Society.

Teaching a new concept with gestures – hand movements that accompany speech – facilitates learning above-and-beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still being explored. Here, we use eye tracking to explore one mechanism – gesture’s ability to direct visual attention. We examine how children allocate their visual attention during a mathematical equivalence lesson that either contains gesture or does not. We show that gesture instruction improves posttest performance, and additionally that gesture does change how children visually attend to instruction: children look more to the problem being explained, and less to the instructor. However, looking patterns alone cannot explain gesture’s effect, as posttest performance is not predicted by any of our looking-time measures. These findings suggest that gesture does guide visual attention, but that attention alone cannot account for its facilitative learning effects.


Ozcaliskan, S., Lucero, C., & Goldin-Meadow, S. Is seeing gesture necessary to gesture like a native speaker?  Psychological Science, 2016. Doi:10.1177/0956797616629931

Speakers of all languages gesture, but there are differences in the gestures that they produce. Do speakers learn language-specific gestures by watching others gesture or by learning to speak a particular language? We examined this question by studying the speech and gestures produced by 40 congenitally blind adult native speakers of English and Turkish (n = 20/language), and comparing them with the speech and gestures of 40 sighted adult speakers in each language (20 wearing blindfolds, 20 not wearing blindfolds). We focused on speakers’ descriptions of physical motion, which display strong cross-linguistic differences in patterns of speech and gesture use. Congenitally blind speakers of English and Turkish produced speech that resembled the speech produced by sighted speakers of their native language. More important, blind speakers of each language used gestures that resembled the gestures of sighted speakers of that language. Our results suggest that hearing a particular language is sufficient to gesture like a native speaker of that language.


Ozcaliskan, S., Lucero, C., & Goldin-Meadow, S. Does language shape silent gesture? Cognition, 2016, 148, 10-18. Doi: 10.1016/j.cognition.2015.12.001

Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (n = 20 per language) asked to describe physical motion events (e.g., running down a path)—a domain known to elicit distinct patterns of speech and co-speech gesture in English- and Turkish-speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech—co-speech gestures produced by English-speakers differed from co-speech gestures produced by Turkish-speakers. However, we found no effect of language on gesture when it was produced on its own—silent gestures produced by English-speakers were identical to silent gestures produced by Turkish-speakers in how motion elements were packaged and ordered. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language.


Trueswell, J., Lin, Y., Armstrong III, B., Cartmill, E., Goldin-Meadow, S., & Gleitman, L. Perceiving referential intent: Dynamics of reference in natural parent–child interactions. Cognition, 2016, 148, 117-135. Doi:10.1016/j.cognition.2015.11.002

Two studies are presented which examined the temporal dynamics of the social-attentive behaviors that co-occur with referent identification during natural parent–child interactions in the home. Study 1 focused on 6.2 h of videos of 56 parents interacting during everyday activities with their 14–18 month-olds, during which parents uttered common nouns as parts of spontaneously occurring utterances. Trained coders recorded, on a second-by-second basis, parent and child attentional behaviors relevant to reference in the period (40 s) immediately surrounding parental naming. The referential transparency of each interaction was independently assessed by having naïve adult participants guess what word the parent had uttered in these video segments, but with the audio turned off, forcing them to use only non-linguistic evidence available in the ongoing stream of events. We found a great deal of ambiguity in the input along with a few potent moments of word-referent transparency; these transparent moments have a particular temporal signature with respect to parent and child attentive behavior: it was the object’s appearance and/or the fact that it captured parent/child attention at the moment the word was uttered, not the presence of the object throughout the video, that predicted observers’ accuracy. Study 2 experimentally investigated the precision of the timing relation, and whether it has an effect on observer accuracy, by disrupting the timing between when the word was uttered and the behaviors present in the videos as they were originally recorded. Disrupting timing by only ±1 to 2 s reduced participant confidence and significantly decreased their accuracy in word identification. The results enhance an expanding literature on how dyadic attentional factors can influence early vocabulary growth. By hypothesis, this kind of time-sensitive data-selection process operates as a filter on input, removing many extraneous and ill-supported word-meaning hypotheses from consideration during children’s early vocabulary learning.


Novack, M., Wakefield, E., & Goldin-Meadow, S. What makes a movement a gesture? Cognition, 2016, 146, 339-348. Doi:10.1016/j.cognition.2015.10.014

Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement—movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features—the form of an actor’s hands and the presence of speech-like sounds—to test the effect of context on observers’ classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation.

2015
Abner, N., Cooperrider, K., & Goldin-Meadow, S. Gesture for linguists: A handy primer. Language and Linguistics Compass, 2015, 9(11), 437-449. Doi:10.1111/lnc3.12168

Humans communicate using language, but they also communicate using gesture – spontaneous movements of the hands and body that universally accompany speech. Gestures can be distinguished from other movements, segmented, and assigned meaning based on their forms and functions. Moreover, gestures systematically integrate with language at all levels of linguistic structure, as evidenced in both production and perception. Viewed typologically, gesture is universal, but nevertheless exhibits constrained variation across language communities (as does language itself). Finally, gesture has rich cognitive dimensions in addition to its communicative dimensions. In overviewing these and other topics, we show that the study of language is incomplete without the study of its communicative partner, gesture.


Horton, L., Goldin-Meadow, S., Coppola, M., Senghas, A., & Brentari, D. Forging a morphological system out of two dimensions: Agentivity and number. Open Linguistics, 2015, 1, 596-613. Doi: 10.1515/opli-2015-0021

Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in a young sign language in Nicaragua (NSL). We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature – unpunctuated repetition – in the sign systems (ASL, LIS, NSL, Homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form, but limit its use to No-Agent contexts; NSL2 signers use the form across No-Agent and Agent contexts. A single individual can thus construct a marker for number without benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1), but requires transmission across generations of learners (NSL2).


Brooks, N. & Goldin-Meadow, S. Moving to learn: How guiding the hands can set the stage for learning. Cognitive Science, 2015, 1-19. Doi: 10.1111/cogs.12292

Previous work has found that guiding problem-solvers’ movements can have an immediate effect on their ability to solve a problem. Here we explore these processes in a learning paradigm. We ask whether guiding a learner’s movements can have a delayed effect on learning, setting the stage for change that comes about only after instruction. Children were taught movements that were either relevant or irrelevant to solving mathematical equivalence problems and were told to produce the movements on a series of problems before they received instruction in mathematical equivalence. Children in the relevant movement condition improved after instruction significantly more than children in the irrelevant movement condition, despite the fact that the children showed no improvement in their understanding of mathematical equivalence on a ratings task or on a paper-and-pencil test taken immediately after the movements but before instruction. Movements of the body can thus be used to sow the seeds of conceptual change. But those seeds do not necessarily come to fruition until after the learner has received explicit instruction in the concept, suggesting a “sleeper effect” of gesture on learning.


Novack, M., Goldin-Meadow, S., & Woodward, A. Learning from gesture: How early does it happen? Cognition, 2015, 142, 138-147. Doi: 10.1016/j.cognition.2015.05.018

Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form—a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter’s gesture as it was performed). Study 2 compared 2-year-olds’ performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner’s attention, it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation.


Novack, M. & Goldin-Meadow, S. Learning from gesture: How our hands change our minds. Educational Psychology Review, 2015, 27(3), 405-412. Doi: 10.1007/s10648-015-9325-3

When people talk, they gesture, and those gestures often reveal information that cannot be found in speech. Learners are no exception. A learner’s gestures can index moments of conceptual instability, and teachers can make use of those gestures to gain access into a student’s thinking. Learners can also discover novel ideas from the gestures they produce during a lesson or from the gestures they see their teachers produce. Gesture thus has the power not only to reflect a learner’s understanding of a problem but also to change that understanding. This review explores how gesture supports learning across development and ends by offering suggestions for ways in which gesture can be recruited in educational settings.


Goldin-Meadow, S. From action to abstraction: Gesture as a mechanism of change. Developmental Review, 2015, 38. Doi: 10.1016/j.dr.2015.07.007

Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked – the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. But gesture can do more than reflect ideas – it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ – gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.


Gunderson, E., Spaepen, E., Gibson, D., Goldin-Meadow, S., & Levine, S. Gesture as a window onto children’s number knowledge. Cognition, 2015, 144, 14-28. Doi:10.1016/j.cognition.2015.07.008

Before learning the cardinal principle (knowing that the last word reached when counting a set represents the size of the whole set), children do not use number words accurately to label most set sizes. However, it remains unclear whether this difficulty reflects a general inability to conceptualize and communicate about number, or a specific problem with number words. We hypothesized that children’s gestures might reflect knowledge of number concepts that they cannot yet express in speech, particularly for numbers they do not use accurately in speech (numbers above their knower-level). Number gestures are iconic in the sense that they are item-based (i.e., each finger maps onto one item in a set) and therefore may be easier to map onto sets of objects than number words, whose forms do not map transparently onto the number of items in a set and, in this sense, are arbitrary. In addition, learners in transition with respect to a concept often produce gestures that convey different information than the accompanying speech. We examined the number words and gestures 3- to 5-year-olds used to label small set sizes exactly (1–4) and larger set sizes approximately (5–10). Children who had not yet learned the cardinal principle were more than twice as accurate when labeling sets of 2 and 3 items with gestures than with words, particularly if the values were above their knower-level. They were also better at approximating set sizes 5–10 with gestures than with words. Further, gesture was more accurate when it differed from the accompanying speech (i.e., a gesture–speech mismatch). These results show that children convey numerical information in gesture that they cannot yet convey in speech, and raise the possibility that number gestures play a functional role in children’s development of number concepts.


Suskind, D., Leffel, K. R., Leininger, L., Gunderson, E. A., Sapolich, S. G., Suskind, E., Hernandez, M.W., Goldin-Meadow, S., Graf, E., & Levine, S. A parent-directed language intervention for children of low socioeconomic status: A randomized controlled pilot study. Journal of Child Language, Available on CJO 2015. Doi:10.1017/S0305000915000033

We designed a parent-directed home-visiting intervention targeting socioeconomic status (SES) disparities in children’s early language environments. A randomized controlled trial was used to evaluate whether the intervention improved parents’ knowledge of child language development and increased the amount and diversity of parent talk. Twenty-three mother–child dyads (12 experimental, 11 control, aged 1;5–3;0) participated in eight weekly hour-long home-visits. In the experimental group, but not the control group, parent knowledge of language development increased significantly one week and four months after the intervention. In lab-based observations, parent word types and tokens and child word types increased significantly one week, but not four months, post-intervention. In home-based observations, adult word tokens, conversational turn counts, and child vocalization counts increased significantly during the intervention, but not post-intervention. The results demonstrate the malleability of child-directed language behaviors and knowledge of child language development among low-SES parents.


Goldin-Meadow, S., Brentari, D., Coppola, M., Horton, L., & Senghas, A. Watching language grow in the manual modality: Nominals, predicates, and handshapes. Cognition, 2015, 135, 381-395.

All languages, both spoken and signed, make a formal distinction between two types of terms in a proposition – terms that identify what is to be talked about (nominals) and terms that say something about this topic (predicates). Here we explore conditions that could lead to this property by charting its development in a newly emerging language – Nicaraguan Sign Language (NSL). We examine how handshape is used in nominals vs. predicates in three Nicaraguan groups: (1) homesigners who are not part of the Deaf community and use their own gestures, called homesigns, to communicate; (2) NSL cohort 1 signers who fashioned the first stage of NSL; (3) NSL cohort 2 signers who learned NSL from cohort 1. We compare these three groups to a fourth: (4) native signers of American Sign Language (ASL), an established sign language. We focus on handshape in predicates that are part of a productive classifier system in ASL; handshape in these predicates varies systematically across agent vs. no-agent contexts, unlike handshape in the nominals we study, which does not vary across these contexts. We found that all four groups, including homesigners, used handshape differently in nominals vs. predicates – they displayed variability in handshape form across agent vs. no-agent contexts in predicates, but not in nominals. Variability thus differed in predicates and nominals: (1) In predicates, the variability across grammatical contexts (agent vs. no-agent) was systematic in all four groups, suggesting that handshape functioned as a productive morphological marker on predicate signs, even in homesign. This grammatical use of handshape can thus appear in the earliest stages of an emerging language. (2) In nominals, there was no variability across grammatical contexts (agent vs. no-agent), but there was variability within- and across-individuals in the handshape used in the nominal for a particular object. This variability was striking in homesigners (an individual homesigner did not necessarily use the same handshape in every nominal he produced for a particular object), but decreased in the first cohort of NSL and remained relatively constant in the second cohort. Stability in the lexical use of handshape in nominals thus does not seem to emerge unless there is pressure from a peer linguistic community. Taken together, our findings argue that a community of users is essential to arrive at a stable nominal lexicon, but not to establish a productive morphological marker in predicates. Examining the steps a manual communication system takes as it moves toward becoming a fully-fledged language offers a unique window onto factors that have made human language what it is.


Goldin-Meadow, S. Gesture as a window onto communicative abilities: Implications for diagnosis and intervention. SIG 1 Perspectives on Language Learning and Education, 2015, 22, 50-60. Doi:10.1044/lle22.2.50

Speakers around the globe gesture when they talk, and young children are no exception. In fact, children’s first foray into communication tends to be through their hands rather than their mouths. There is now good evidence that children typically express ideas in gesture before they express the same ideas in speech. Moreover, the age at which these ideas are expressed in gesture predicts the age at which the same ideas are first expressed in speech. Gesture thus not only precedes, but also predicts, the onset of linguistic milestones. These facts set the stage for using gesture in two ways in children who are at risk for language delay. First, gesture can be used to identify individuals who are not producing gesture in a timely fashion, and can thus serve as a diagnostic tool for pinpointing subsequent difficulties with spoken language. Second, gesture can facilitate learning, including word learning, and can thus serve as a tool for intervention, one that can be implemented even before a delay in spoken language is detected.


Demir, O.E., Rowe, M., Heller, G., Goldin-Meadow, S., & Levine, S.C. Vocabulary, syntax, and narrative development in typically developing children and children with early unilateral brain injury: Early parental talk about the “there-and-then” matters. Developmental Psychology, 2015, 51(2), 161-175. Doi: 10.1037/a0038476

This study examines the role of a particular kind of linguistic input—talk about the past and future, pretend, and explanations, that is, talk that is decontextualized—in the development of vocabulary, syntax, and narrative skill in typically developing (TD) children and children with pre- or perinatal brain injury (BI). Decontextualized talk has been shown to be particularly effective in predicting children’s language skills, but it is not clear why. We first explored the nature of parent decontextualized talk and found it to be linguistically richer than contextualized talk in parents of both TD and BI children. We then found, again for both groups, that parent decontextualized talk at child age 30 months was a significant predictor of child vocabulary, syntax, and narrative performance at kindergarten, above and beyond the child’s own early language skills, parent contextualized talk and demographic factors. Decontextualized talk played a larger role in predicting kindergarten syntax and narrative outcomes for children with lower syntax and narrative skill at age 30 months, and also a larger role in predicting kindergarten narrative outcomes for children with BI than for TD children. The difference between the 2 groups stemmed primarily from the fact that children with BI had lower narrative (but not vocabulary or syntax) scores than TD children. When the 2 groups were matched in terms of narrative skill at kindergarten, the impact that decontextualized talk had on narrative skill did not differ for children with BI and for TD children. Decontextualized talk is thus a strong predictor of later language skill for all children, but may be particularly potent for children at the lower-end of the distribution for language skill. The findings also suggest that variability in the language development of children with BI is influenced not only by the biological characteristics of their lesions, but also by the language input they receive.


Goldin-Meadow, S. Studying the mechanisms of language learning by varying the learning environment and the learner. Language, Cognition & Neuroscience, 2015, 30(8), 899-911. Doi:10.1080/23273798.2015.1016978

Language learning is a resilient process, and many linguistic properties can be developed under a wide range of learning environments and learners. The first goal of this review is to describe properties of language that can be developed without exposure to a language model – the resilient properties of language – and to explore conditions under which more fragile properties emerge. But even if a linguistic property is resilient, the developmental course that the property follows is likely to vary as a function of learning environment and learner, that is, there are likely to be individual differences in the learning trajectories children follow. The second goal is to consider how the resilient properties are brought to bear on language learning when a child is exposed to a language model. The review ends by considering the implications of both sets of findings for mechanisms, focusing on the role that the body and linguistic input play in language learning.


Ozyurek, A., Furman, R., & Goldin-Meadow, S. On the way to language: Event segmentation in homesign and gesture. Journal of Child Language, 2015, 42(1), 64-94. Doi:10.1017/S0305000913000512

Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.


Goldin-Meadow, S. The impact of time on predicate forms in the manual modality: Signers, homesigners, and silent gesturers. Topics in Cognitive Science, 2015, 7, 169-184. doi:10.1111/tops.12119.

It is difficult to create spoken forms that can be understood on the spot. But the manual modality, in large part because of its iconic potential, allows us to construct forms that are immediately understood, thus requiring essentially no time to develop. This paper contrasts manual forms for actions produced over three time spans—by silent gesturers who are asked to invent gestures on the spot; by homesigners who have created gesture systems over their life spans; and by signers who have learned a conventional sign language from other signers—and finds that properties of the predicate differ across these time spans. Silent gesturers use location to establish co-reference in the way established sign languages do, but they show little evidence of the segmentation sign languages display in motion forms for manner and path, and little evidence of the finger complexity sign languages display in handshapes in predicates representing events. Homesigners, in contrast, not only use location to establish co-reference but also display segmentation in their motion forms for manner and path and finger complexity in their object handshapes, although they have not yet decreased finger complexity to the levels found in sign languages in their handling handshapes. The manual modality thus allows us to watch language as it grows, offering insight into factors that may have shaped and may continue to shape human language.


Goldin-Meadow, S., Namboodiripad, S., Mylander, C., Ozyurek, A., & Sancar, B. The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children. Journal of Cognition and Development, 2015, 16(1), 55-80. doi:10.1080/15248372.2013.803970

Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, which have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also used the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children’s gestures. Although cospeech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language.


Demir, O.E., Levine, S., & Goldin-Meadow, S. A tale of two hands: Children’s gesture use in narrative production predicts later narrative structure in speech. Journal of Child Language, 2015, 42(3), 662-681. doi:10.1017/S0305000914000415

Speakers of all ages spontaneously gesture as they talk. These gestures predict children’s milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age five and then again at six, seven, and eight. Children’s narrative structure in speech improved across these ages. At age five, many of the children expressed a character’s viewpoint in gesture, and these children were more likely to tell better-structured stories at the later ages than children who did not produce character-viewpoint gestures at age five. In contrast, framing narratives from a character’s perspective in speech at age five did not predict later narrative structure in speech. Gesture thus continues to act as a harbinger of change even as it assumes new roles in relation to discourse.


Trofatter, C., Kontra, C., Beilock, S., Goldin-Meadow, S. Gesturing has a larger impact on problem-solving than action, even when action is accompanied by words. Language, Cognition and Neuroscience, 2015, 30(3), 251-260. doi:10.1080/23273798.2014.905692.

The coordination of speech with gesture elicits changes in speakers’ problem-solving behaviour beyond the changes elicited by the coordination of speech with action. Participants solved the Tower of Hanoi puzzle (TOH1); explained their solution using speech coordinated with either Gestures (Gesture + Talk) or Actions (Action + Talk), or demonstrated their solution using Actions alone (Action); then solved the puzzle again (TOH2). For some participants (Switch group), disc weights during TOH2 were reversed (smallest = heaviest). Only in the Gesture + Talk Switch group did performance worsen from TOH1 to TOH2 – for all other groups, performance improved. In the Gesture + Talk Switch group, more one-handed gestures about the smallest disc during the explanation hurt subsequent performance compared to all other groups. These findings contradict the hypothesis that gesture affects thought by promoting the coordination of task-relevant hand movements with task-relevant speech, and lend support to the hypothesis that gesture grounds thought in action via its representational properties.


LeBarton, E. S., Raudenbush, S., & Goldin-Meadow, S. Experimentally-induced increases in early gesture lead to increases in spoken vocabulary. Journal of Cognition and Development, 2015, 16(2), 199-220. doi:10.1080/15248372.2013.858041.

Differences in vocabulary that children bring with them to school can be traced back to the gestures they produce at 1;2, which, in turn, can be traced back to the gestures their parents produce at the same age (Rowe & Goldin-Meadow, 2009b). We ask here whether child gesture can be experimentally increased and, if so, whether the increases lead to increases in spoken vocabulary. Fifteen children aged 1;5 participated in an 8-week at-home intervention study (6 weekly training sessions plus follow-up 2 weeks later) in which all were exposed to object words, but only some were told to point at the named objects. Before each training session and at follow-up, children interacted naturally with caregivers to establish a baseline against which changes in communication were measured. Children who were told to gesture increased the number of gesture meanings they conveyed, not only during training but also during interactions with caregivers. These experimentally-induced increases in gesture led to larger spoken repertoires at follow-up.

2014
Goldin-Meadow, S. Widening the lens: What the manual modality reveals about language, learning, and cognition. Philosophical Transactions of the Royal Society, Series B, 2014, 369. Doi: 10.1098/rstb.2013.0295

The goal of this paper is to widen the lens on language to include the manual modality. We look first at hearing children who are acquiring language from a spoken language model and find that even before they use speech to communicate, they use gesture. Moreover, those gestures precede, and predict, the acquisition of structures in speech. We look next at deaf children whose hearing losses prevent them from using the oral modality, and whose hearing parents have not presented them with a language model in the manual modality. These children fall back on the manual modality to communicate and use gestures, which take on many of the forms and functions of natural language. These homemade gesture systems constitute the first step in the emergence of manual sign systems that are shared within deaf communities and are full-fledged languages. We end by widening the lens on sign language to include gesture and find that signers not only gesture, but they also use gesture in learning contexts just as speakers do. These findings suggest that what is key in gesture’s ability to predict learning is its ability to add a second representational format to communication, rather than a second modality. Gesture can thus be language, assuming linguistic forms and functions, when other vehicles are not available; but when speech or sign is possible, gesture works along with language, providing an additional representational format that can promote learning. 


Beaudoin-Ryan, L. & Goldin-Meadow, S. Teaching moral reasoning through gesture. Developmental Science, 2014. Doi: 10.1111/desc.1218

Stem-cell research. Euthanasia. Personhood. Marriage equality. School shootings. Gun control. Death penalty. Ethical dilemmas regularly spark fierce debate about the underlying moral fabric of societies. How do we prepare today’s children to be fully informed and thoughtful citizens, capable of moral and ethical decisions? Current approaches to moral education are controversial, requiring adults to serve as either direct (‘top-down’) or indirect (‘bottom-up’) conduits of information about morality. A common thread weaving throughout these two educational initiatives is the ability to take multiple perspectives–increases in perspective taking ability have been found to precede advances in moral reasoning. We propose gesture as a behavior uniquely situated to augment perspective taking ability. Requiring gesture during spatial tasks has been shown to catalyze the production of more sophisticated problem-solving strategies, allowing children to profit from instruction. Our data demonstrate that requiring gesture during moral reasoning tasks has similar effects, resulting in increased perspective taking ability subsequent to instruction.


Ping, R., Goldin-Meadow, S., & Beilock S. Understanding gesture: Is the listener’s motor system involved? Journal of Experimental Psychology: General, 2014, 143(1), 195-204. Doi: 10.1037/a0032246

Listeners are able to glean information from the gestures that speakers produce, seemingly without conscious awareness. However, little is known about the mechanisms that underlie this process. Research on human action understanding shows that perceiving another’s actions results in automatic activation of the motor system in the observer, which then affects the observer’s understanding of the actor’s goals. We ask here whether perceiving another’s gesture can similarly result in automatic activation of the motor system in the observer. In Experiment 1, we first established a new procedure in which listener response times are used to study how gesture impacts sentence comprehension. In Experiment 2, we used this procedure, in conjunction with a secondary motor task, to investigate whether the listener’s motor system is involved in this process. We showed that moving arms and hands (but not legs and feet) interferes with the listener’s ability to use information conveyed in a speaker’s hand gestures. Our data thus suggest that understanding gesture relies, at least in part, on the listener’s own motor system.


Goldin-Meadow, S. How gesture works to change our minds. Trends in Neuroscience and Education, 2014. Doi:10.1016/j.tine.2014.01.002

When people talk, they gesture. We now know that these gestures are associated with learning—they can index moments of cognitive instability and reflect thoughts not yet found in speech. But gesture has the potential to do more than just reflect learning—it might be involved in the learning process itself. This review focuses on two non-mutually exclusive possibilities: (1) The gestures we see others produce have the potential to change our thoughts. (2) The gestures that we ourselves produce have the potential to change our thoughts, perhaps by spatializing ideas that are not inherently spatial. The review ends by exploring the mechanisms responsible for gesture’s impact on learning, and by highlighting ways in which gesture can be effectively used in educational settings.


Novack, M.A., Congdon, E.L., Hemani-Lopez, N., & Goldin-Meadow, S. From action to abstraction: Using the hands to learn math. Psychological Science, 2014, 1-8. Doi:10.1177/0956797613518351

Previous research has shown that children benefit from gesturing during math instruction. We asked whether gesturing promotes learning because it is itself a physical action, or because it uses physical action to represent abstract ideas. To address this question, we taught third-grade children a strategy for solving mathematical-equivalence problems that was instantiated in one of three ways: (a) in a physical action children performed on objects, (b) in a concrete gesture miming that action, or (c) in an abstract gesture. All three types of hand movements helped children learn how to solve the problems on which they were trained. However, only gesture led to success on problems that required generalizing the knowledge gained. The results provide the first evidence that gesture promotes transfer of knowledge better than direct action on objects and suggest that the beneficial effects gesture has on learning may reside in the features that differentiate it from action.


Cartmill, E. A., Hunsicker, D., & Goldin-Meadow, S. Pointing and naming are not redundant: Children use gesture to modify nouns before they modify nouns in speech. Developmental Psychology, 2014. Doi: 10.1037/a0036003

Nouns form the first building blocks of children’s language but are not consistently modified by other words until around 2.5 years of age. Before then, children often combine their nouns with gestures that indicate the object labeled by the noun, for example, pointing at a bottle while saying “bottle.” These gestures are typically assumed to be redundant with speech. Here we present data challenging this assumption, suggesting that these early pointing gestures serve a determiner-like function (i.e., point at bottle + “bottle” = that bottle). Using longitudinal data from 18 children (8 girls), we analyzed all utterances containing nouns and focused on (a) utterances containing an unmodified noun combined with a pointing gesture and (b) utterances containing a noun modified by a determiner. We found that the age at which children first produced point + noun combinations predicted the onset age for determiner + noun combinations. Moreover, point + noun combinations decreased following the onset of determiner + noun constructions. Importantly, combinations of pointing gestures with other types of speech (e.g., point at bottle + “gimme” = gimme that) did not relate to the onset or offset of determiner + noun constructions. Point + noun combinations thus appear to selectively predict the development of a new construction in speech. When children point to an object and simultaneously label it, they are beginning to develop their understanding of nouns as a modifiable unit of speech.


Fay, N., Lister, C., Ellison, T.M., & Goldin-Meadow, S. Creating a communication system from scratch: Gesture beats vocalization hands down. Frontiers in Psychology (Language Sciences), 2014, 5(354). Doi: 10.3389/fpsyg.2014.0035

How does modality affect people’s ability to create a communication system from scratch? The present study experimentally tests this question by having pairs of participants communicate a range of pre-specified items (emotions, actions, objects) over a series of trials to a partner using either non-linguistic vocalization, gesture or a combination of the two. Gesture-alone outperformed vocalization-alone, both in terms of successful communication and in terms of the creation of an inventory of sign-meaning mappings shared within a dyad (i.e., sign alignment). Combining vocalization with gesture did not improve performance beyond gesture-alone. In fact, for action items, gesture-alone was a more successful means of communication than the combined modalities. When people do not share a system for communication they can quickly create one, and gesture is the best means of doing so.


Demir, O. E., Fisher, J. A., Goldin-Meadow, S. & Levine, S.C. Narrative processing in typically developing children and children with early unilateral brain injury: Seeing gestures matters. Developmental Psychology, 2014, 50(3), 815-828. Doi: 10.1037/a0034322

Narrative skill in kindergarteners has been shown to be a reliable predictor of later reading comprehension and school achievement. However, we know little about how to scaffold children’s narrative skill. Here we examine whether the quality of kindergarten children’s narrative retellings depends on the kind of narrative elicitation they are given. We asked this question with respect to typically developing (TD) kindergarten children and children with pre- or perinatal unilateral brain injury (PL), a group that has been shown to have difficulty with narrative production. We compared children’s skill in retelling stories originally presented to them in 4 different elicitation formats: (a) wordless cartoons, (b) stories told by a narrator through the auditory modality, (c) stories told by a narrator through the audiovisual modality without co-speech gestures, and (d) stories told by a narrator in the audiovisual modality with co-speech gestures. We found that children told better structured narratives in response to the audiovisual + gesture elicitation format than in response to the other 3 elicitation formats, consistent with findings that co-speech gestures can scaffold other aspects of language and memory. The audiovisual + gesture elicitation format was particularly beneficial for children who had the most difficulty telling a well-structured narrative, a group that included children with larger lesions associated with cerebrovascular infarcts.


Goldin-Meadow, S., Levine, S.C., Hedges, L. V., Huttenlocher, J., Raudenbush, S., & Small, S. New evidence about language and cognitive development based on a longitudinal study: Hypotheses for intervention. American Psychologist, 2014, 69(6), 588-599.

We review findings from a four-year longitudinal study of language learning conducted on two samples: a sample of typically developing children whose parents vary substantially in socioeconomic status, and a sample of children with pre- or perinatal brain injury. This design enables us to study language development across a wide range of language learning environments and a wide range of language learners. We videotaped samples of children’s and parents’ speech and gestures during spontaneous interactions at home every four months, and then we transcribed and coded the tapes. We focused on two behaviors known to vary across individuals and environments – child gesture and parent speech – behaviors that have the possibility to index, and perhaps even play a role in creating, differences across children in linguistic and other cognitive skills. Our observations have led to four hypotheses that have promise for the development of diagnostic tools and interventions to enhance language and cognitive development and brain plasticity after neonatal injury. One kind of hypothesis involves tools that could identify children who may be at risk for later language deficits. The other involves interventions that have the potential to promote language development. We present our four hypotheses as a summary of the findings from our study because there is scientific evidence behind them and because this evidence has the potential to be put to practical use in improving education.


Goldin-Meadow, S. In search of resilient and fragile properties of language. Journal of Child Language, 2014, 41, 64-77. 

Young children are skilled language learners. They apply their skills to the language input they receive from their parents and, in this way, derive patterns that are statistically related to their input. But being an excellent statistical learner does not explain why children who are not exposed to usable linguistic input nevertheless communicate using systems containing the fundamental properties of language. Nor does it explain why learners sometimes alter the linguistic input to which they are exposed (input from either a natural or an artificial language). These observations suggest that children are prepared to learn language. Our task now, as it was in 1974, is to figure out what they are prepared with – to identify properties of language that are relatively easy to learn, the resilient properties, as well as properties of language that are more difficult to learn, the fragile properties. The new tools and paradigms for describing and explaining language learning that have been introduced into the field since 1974 offer great promise for accomplishing this task.


Applebaum, L., Coppola, M., & Goldin-Meadow, S. Prosody in a communication system developed without a language model. Sign Language and Linguistics, 2014, 17(2), 181-212. doi: 10.1075/sll.17.2.02app

Prosody, the “music” of language, is an important aspect of all natural languages, spoken and signed. We ask here whether prosody is also robust across learning conditions. If a child were not exposed to a conventional language and had to construct his own communication system, would that system contain prosodic structure? We address this question by observing a deaf child who received no sign language input and whose hearing loss prevented him from acquiring spoken language. Despite his lack of a conventional language model, this child developed his own gestural system. In this system, features known to mark phrase and utterance boundaries in established sign languages were used to consistently mark the ends of utterances, but not to mark phrase or utterance internal boundaries. A single child can thus develop the seeds of a prosodic system, but full elaboration may require more time, more users, or even more generations to blossom.


Dick, A. S., Mok, E. H., Beharelle, A. R., Goldin-Meadow, S., & Small, S. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech. Human Brain Mapping, 2014, 35(3), 900-971. doi:10.1002/hbm.22222.

In everyday conversation, listeners often rely on a speaker’s gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers’ iconic gestures. We focused on iconic gestures that contribute information not found in the speaker’s talk, compared with those that convey information redundant with the speaker’s talk. We found that three regions—left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)— responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech.


Özçalişkan, S., Gentner, D., & Goldin-Meadow, S. Do iconic gestures pave the way for children’s early verbs? Applied Psycholinguistics, 2014, 35(6), 1143-1162. doi:10.1017/S0142716412000720.

Children produce a deictic gesture for a particular object (point at dog) approximately 3 months before they produce the verbal label for that object (“dog”; Iverson & Goldin-Meadow, 2005). Gesture thus paves the way for children’s early nouns. We ask here whether the same pattern of gesture preceding and predicting speech holds for iconic gestures. In other words, do gestures that depict actions precede and predict early verbs? We observed spontaneous speech and gestures produced by 40 children (22 girls, 18 boys) from age 14 to 34 months. Children produced their first iconic gestures 6 months later than they produced their first verbs. Thus, unlike the onset of deictic gestures, the onset of iconic gestures conveying action meanings followed, rather than preceded, children’s first verbs. However, iconic gestures increased in frequency at the same time as verbs did and, at that time, began to convey meanings not yet expressed in speech. Our findings suggest that children can use gesture to expand their repertoire of action meanings, but only after they have begun to acquire the verb system underlying their language.


 

2012

Goldin-Meadow, S. Homesign: gesture to language. In R. Pfau, M. Steinbach & B. Woll (eds.), Sign language: An international handbook (pp. 601-625). Berlin: Mouton de Gruyter.

Deaf children whose hearing losses are so severe that they cannot acquire the spoken language that surrounds them and whose hearing parents have not exposed them to sign language lack a usable model for language. If a language model is essential to activate whatever skills children bring to language-learning, deaf children in these circumstances ought not communicate in language-like ways. It turns out, however, that these children do communicate and they use their hands to do so. They invent gesture systems, called “homesigns”, that have many of the properties of natural language. The chapter begins by describing properties of language that have been identified in homesign: the fact that it has a stable lexicon, has both morphological and syntactic structure, and is used for many of the functions language serves. Although homesigners are not exposed to a conventional sign language, they do see the gestures that their hearing parents produce when they talk. The second section argues that these gestures do not serve as a full-blown model for the linguistic properties found in homesign. The final section then explores how deaf children transform the gestural input they receive from their hearing parents into homesign.


Brentari, D., Coppola, M., Mazzoni, L., & Goldin-Meadow, S. When does a system become phonological? Handshape production in gesturers, signers, and homesigners. Natural Language and Linguistic Theory, 30(1), 1-31.

Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit of a linguistic community. Finally, we propose that iconicity, morphology, and phonology each play an important role in the system of sign language classifiers to create the earliest markers of phonology at the morphophonological interface.


Sauter, A., Uttal, D., Alman, A. S., Goldin-Meadow, S., & Levine, S. C. Learning what children know about space from looking at their hands: The added value of gesture in spatial communication. Journal of Experimental Child Psychology, 111(4), 587-606.

This article examines two issues: the role of gesture in the communication of spatial information and the relation between communication and mental representation. Children (8–10 years) and adults walked through a space to learn the locations of six hidden toy animals and then explained the space to another person. In Study 1, older children and adults typically gestured when describing the space and rarely provided spatial information in speech without also providing the information in gesture. However, few 8-year-olds communicated spatial information in speech or gesture. Studies 2 and 3 showed that 8-year-olds did understand the spatial arrangement of the animals and could communicate spatial information if prompted to use their hands. Taken together, these results indicate that gesture is important for conveying spatial relations at all ages and, as such, provides us with a more complete picture of what children do and do not know about communicating spatial relations.


Rowe, M. L., Raudenbush, S. W., & Goldin-Meadow, S. The pace of early vocabulary growth helps predict later vocabulary skill. Child Development, 83(2), 508-525.

Children vary widely in the rate at which they acquire words: some start slow and speed up, others start fast and continue at a steady pace. Do early developmental variations of this sort help predict vocabulary skill just prior to kindergarten entry? This longitudinal study starts by examining important predictors (socioeconomic status [SES], parent input, child gesture) of vocabulary growth between 14 and 46 months (n=62) and then uses growth estimates to predict children’s vocabulary at 54 months. Velocity and acceleration in vocabulary development at 30 months predicted later vocabulary, particularly for children from low-SES backgrounds. Understanding the pace of early vocabulary growth thus improves our ability to predict school readiness and may help identify children at risk for starting behind.
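
A hedged worked example of what "velocity" and "acceleration" mean here (the quadratic form and the centering at 30 months are illustrative assumptions, not necessarily the authors' exact model): if each child i's cumulative vocabulary is fit as a curve over age t in months,

\[ \mathrm{vocab}_i(t) = \beta_{0i} + \beta_{1i}(t - 30) + \beta_{2i}(t - 30)^2 , \]

then the velocity at 30 months is the first derivative of that curve, \( \beta_{1i} \), and the acceleration is the second derivative, \( 2\beta_{2i} \); child-specific estimates of this kind are the quantities that can then be used to predict vocabulary at 54 months.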


Dick, A., Goldin-Meadow, S., Solodkin, A., & Small, S. Gesture in the developing brain. Developmental Science, 15(2), 165-180.

Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movement, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp) regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture.


Goldin-Meadow, S., Shield, A., Lenzen, D., Herzig, M., & Padden, C. The gestures ASL signers use tell us when they are ready to learn math. Cognition, 123, 448-453.

The manual gestures that hearing children produce when explaining their answers to math problems predict whether they will profit from instruction in those problems. We ask here whether gesture plays a similar role in deaf children, whose primary communication system is in the manual modality. Forty ASL-signing deaf children explained their solutions to math problems and were then given instruction in those problems. Children who produced many gestures conveying different information from their signs (gesture-sign mismatches) were more likely to succeed after instruction than children who produced few, suggesting that mismatch can occur within-modality, and paving the way for using gesture-based teaching strategies with deaf learners.


Cook, S. W., Yip, T., & Goldin-Meadow, S. Gestures, but not meaningless movements, lighten working memory load when explaining math. Language and Cognitive Processes, 27, 594-610.

Gesturing is ubiquitous in communication and serves an important function for listeners, who are able to glean meaningful information from the gestures they see. But gesturing also functions for speakers, whose own gestures reduce demands on their working memory. Here we ask whether gesture’s beneficial effects on working memory stem from its properties as a rhythmic movement, or as a vehicle for representing meaning. We asked speakers to remember letters while explaining their solutions to math problems and producing varying types of movements. Speakers recalled significantly more letters when producing movements that coordinated with the meaning of the accompanying speech, i.e., when gesturing, than when producing meaningless movements or no movement. The beneficial effects that accrue to speakers when gesturing thus seem to stem not merely from the fact that their hands are moving, but from the fact that their hands are moving in coordination with the content of speech.


Cartmill, E. A., Beilock, S., & Goldin-Meadow, S. A word in the hand: Action, gesture, and mental representation in human evolution. Philosophical Transactions of the Royal Society, Series B, 367, 129-143.

The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates’ lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements.


Quandt, L. C., Marshall, P.J., Shipley, T.F., Beilock, S.L., & Goldin-Meadow, S. Sensitivity of alpha and beta oscillations to sensorimotor characteristics of action: An EEG study of action production and gesture observation. Neuropsychologia, 50(12), 2745-51.

The sensorimotor experiences we gain when performing an action have been found to influence how our own motor systems are activated when we observe others performing that same action. Here we asked whether this phenomenon applies to the observation of gesture. Would the sensorimotor experiences we gain when performing an action on an object influence activation in our own motor systems when we observe others performing a gesture for that object? Participants were given sensorimotor experience with objects that varied in weight, and then observed video clips of an actor producing gestures for those objects. Electroencephalography (EEG) was recorded while participants first observed either an iconic gesture (pantomiming lifting an object) or a deictic gesture (pointing to an object) for an object, and then grasped and lifted the object indicated by the gesture. We analyzed EEG during gesture observation to determine whether oscillatory activity was affected by the observer’s sensorimotor experiences with the object represented in the gesture. Seeing a gesture for an object previously experienced as light was associated with a suppression of power in alpha and beta frequency bands, particularly at posterior electrodes. A similar pattern was found when participants lifted the light object, but over more diffuse electrodes. Moreover, alpha and beta bands at right parieto-occipital electrodes were sensitive to the type of gesture observed (iconic vs. deictic). These results demonstrate that sensorimotor experience with an object affects how a gesture for that object is processed, as measured by the gesture-observer’s EEG, and suggest that different types of gestures recruit the observer’s own motor system in different ways.
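
A rough, self-contained sketch of the kind of band-power measure described above (a sketch only, not the authors' analysis pipeline; the sampling rate, window length, band edges, and random placeholder signal are assumptions):

import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Mean power spectral density of one EEG channel within [lo, hi] Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # 2-second Welch windows
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

fs = 500                               # assumed sampling rate (Hz)
segment = np.random.randn(3 * fs)      # placeholder for 3 s of one-channel EEG

alpha_power = band_power(segment, fs, 8, 12)   # alpha band (approx. 8-12 Hz)
beta_power = band_power(segment, fs, 13, 30)   # beta band (approx. 13-30 Hz)

# Lower alpha/beta power during gesture observation, relative to a baseline
# period, is the kind of suppression (desynchronization) the abstract reports.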


Demir, O.E., So, W-C., Ozyurek, A., & Goldin-Meadow, S. Turkish- and English-speaking children display sensitivity to perceptual context in the referring expressions they produce in speech and gesture. Language and Cognitive Processes, 27(6), 844-867.

Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children’s sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.


Goldin-Meadow, S., & Cook, S. W. Gesture in thought. In K. J. Holyoak & R. G. Morrison (eds.), Oxford handbook of thinking and reasoning (pp. 631-649). N.Y.: Oxford University Press.

The spontaneous gestures that speakers produce when they talk about a task reflect aspects of the speakers’ knowledge about that task, aspects that are often not found in the speech that accompanies the gestures. But gesture can go beyond reflecting a speaker’s current knowledge—it frequently presages the next steps the speaker will take in acquiring new knowledge, suggesting that gesture may play a role in cognitive change. To investigate this hypothesis, we explore the functions gesture serves with respect to both communication (the effects gesture has on listeners) and cognition (the effects gesture has on speakers themselves). We also explore the mechanisms that underlie the production of gesture, and we provide evidence that gesture has roots in speech, visuospatial thinking, and action. Gesturing is not merely hand waving, nor is it merely a window into the mind. It can affect how we think and reason and, as such, offers a useful tool to both learners and researchers.


Shneidman, L. A., & Goldin-Meadow, S. Language input and acquisition in a Mayan village: How important is directed speech? Developmental Science, 15(5), 659-673.

Theories of language acquisition have highlighted the importance of adult speakers as active participants in children’s language learning. However, in many communities children are reported to be directly engaged by their caregivers only rarely (Lieven, 1994). This observation raises the possibility that these children learn language from observing, rather than participating in, communicative exchanges. In this paper, we quantify naturally occurring language input in one community where directed interaction with children has been reported to be rare (Yucatec Mayan). We compare this input to the input heard by children growing up in large families in the United States, and we consider how directed and overheard input relate to Mayan children’s later vocabulary. In Study 1, we demonstrate that 1-year-old Mayan children do indeed hear a smaller proportion of total input in directed speech than children from the US. In Study 2, we show that for Mayan (but not US) children, there are great increases in the proportion of directed input that children receive between 13 and 35 months. In Study 3, we explore the validity of using videotaped data in a Mayan village. In Study 4, we demonstrate that word types directed to Mayan children from adults at 24 months (but not word types overheard by children or word types directed from other children) predict later vocabulary. These findings suggest that adult talk directed to children is important for early word learning, even in communities where much of children’s early language input comes from overheard speech.


Kontra, C. E., Goldin-Meadow, S., & Beilock, S. L. Embodied learning across the lifespan. Topics in Cognitive Science, 1-9.

Developmental psychologists have long recognized the extraordinary influence of action on learning (Held & Hein, 1963; Piaget, 1952). Action experiences begin to shape our perception of the world during infancy (e.g., as infants gain an understanding of others’ goal-directed actions; Woodward, 2009) and these effects persist into adulthood (e.g., as adults learn about complex concepts in the physical sciences; Kontra, Lyons, Fischer, & Beilock, 2012). Theories of embodied cognition provide a structure within which we can investigate the mechanisms underlying action’s impact on thinking and reasoning. We argue that theories of embodiment can shed light on the role of action experience in early learning contexts, and further that these theories hold promise for using action to scaffold learning in more formal educational settings later in development.


Goldin-Meadow, S., Levine, S. C., Zinchenko, E., Yip, T. K-Y., Hemani, N., & Factor, L. Doing gesture promotes learning a mental transformation task better than seeing gesture. Developmental Science, 15(6), 876-884.

Performing action has been found to have a greater impact on learning than observing action. Here we ask whether a particular type of action – the gestures that accompany talk – affect learning in a comparable way. We gave 158 6-year-old children instruction in a mental transformation task. Half the children were asked to produce a Move gesture relevant to the task; half were asked to produce a Point gesture. The children also observed the experimenter producing either a Move or Point gesture. Children who produced a Move gesture improved more than children who observed the Move gesture. Neither producing nor observing the Point gesture facilitated learning. Doing gesture promotes learning better than seeing gesture, as long as the gesture conveys information that could help solve the task.


Hunsicker, D., & Goldin-Meadow, S. Hierarchical structure in a self-created communication system: Building nominal constituents in homesign. Language, 88(4), 732-763.

Deaf children whose hearing losses are so severe that they cannot acquire spoken language and whose hearing parents have not exposed them to sign language nevertheless use gestures, called homesigns, to communicate. Homesigners have been shown to refer to entities by pointing at that entity (a demonstrative, that). They also use iconic gestures and category points that refer, not to a particular entity, but to its class (a noun, bird). We used longitudinal data from a homesigner called David to test the hypothesis that these different types of gestures are combined to form larger, multi-gesture nominal constituents (that bird). We verified this hypothesis by showing that David’s multi-gesture combinations served the same semantic and syntactic functions as demonstrative gestures or noun gestures used on their own. In other words, the larger unit substituted for the smaller units and, in this way, functioned as a nominal constituent. Children are thus able to refer to entities using multi-gesture units that contain both nouns and demonstratives, even when they do not have a conventional language to provide a model for this type of hierarchical constituent structure.

2013
Hunsicker, D., & Goldin-Meadow, S. How handshape can distinguish between nouns and verbs in homesign. Gesture, 13(3), 354-376. Doi: 10.1075/gest.13.3.05hun

All established languages, spoken or signed, make a distinction between nouns and verbs. Even a young sign language emerging within a family of deaf individuals has been found to mark the noun-verb distinction, and to use handshape type to do so. Here we ask whether handshape type is used to mark the noun-verb distinction in a gesture system invented by a deaf child who does not have access to a usable model of either spoken or signed language. The child produces homesigns that have linguistic structure, but receives from his hearing parents co-speech gestures that are structured differently from his own gestures. Thus, unlike users of established and emerging languages, the homesigner is a producer of his system but does not receive it from others. Nevertheless, we found that the child used handshape type to mark the distinction between nouns and verbs at the early stages of development. The noun-verb distinction is thus so fundamental to language that it can arise in a homesign system that is not shared with others. We also found that the child abandoned handshape type as a device for distinguishing nouns from verbs at just the moment when he developed a combinatorial system of handshape and motion components that marked the distinction. The way the noun-verb distinction is marked thus depends on the full array of linguistic devices available within the system.


Brentari, D., Coppola, M., Jung, A., & Goldin-Meadow, S. Acquiring word class distinctions in American Sign Language: Evidence from handshape. Language Learning and Development, 9(2), 130-150. Doi: 10.1080/15475441.2012.679540

Handshape works differently in nouns versus a class of verbs in American Sign Language (ASL) and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself (object handshapes) and handshapes representing how the object is handled (handling handshapes) appear in both nouns and a particular type of verb, classifier predicates, in ASL. When used as nouns, object and handling handshapes are phonemic; that is, they are specified in dictionary entries and do not vary with grammatical context. In contrast, when used as classifier predicates, object and handling handshapes do vary with grammatical context for both morphological and syntactic reasons. We ask here when young deaf children learning ASL acquire the word class distinction signaled by handshape. Specifically, we determined the age at which children systematically vary object versus handling handshapes as a function of grammatical context in classifier predicates but not in the nouns that accompany those predicates. We asked 4–6-year-old children, 7–10-year-old children, and adults, all of whom were native ASL signers, to describe a series of vignettes designed to elicit object and handling handshapes in both nouns and classifier predicates. We found that all of the children behaved like adults with respect to all nouns, systematically varying object and handling handshapes as a function of type of item and not grammatical context. The children also behaved like adults with respect to certain classifiers, systematically varying handshape type as a function of grammatical context for items whose nouns have handling handshapes. The children differed from adults in that they did not systematically vary handshape as a function of grammatical context for items whose nouns have object handshapes. These findings extend previous work by showing that children require developmental time to acquire the full morphological system underlying classifier predicates in sign language, just as children acquiring complex morphology in spoken languages do. In addition, we show for the first time that children acquiring ASL treat object and handling handshapes differently as a function of their status as nouns vs. classifier predicates, and thus display a distinction between these word classes as early as 4 years of age.


Gentner, D., Özyurek, A., Gurcanli, O., & Goldin-Meadow, S. Spatial language facilitates spatial cognition: Evidence from children who lack language input. Cognition, 127(3), 318–330.

Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a Spatial Mapping Task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space.


Ozcaliskan, S., Levine, S., & Goldin-Meadow, S. Gesturing with an injured brain: How gesture helps children with early brain injury learn linguistic constructions. Journal of Child Language, 40(5), 69-105. Doi: 10.1017/S0305000912000220

Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing eleven children with PL – matched to thirty TD children on expressive vocabulary – in the second year of life. Children with PL showed similarities to TD children for simple but not complex sentence types. Children with PL produced simple sentences across gesture and speech several months before producing them entirely in speech, exhibiting parallel delays in both gesture + speech and speech-alone. However, unlike TD children, children with PL produced complex sentence types first in speech-alone. Overall, the gesture–speech system appears to be a robust feature of language learning for simple – but not complex – sentence constructions, acting as a harbinger of change in language development even when that language is developing in an injured brain. 


Goldin-Meadow, S., & Alibali, M. W. Gesture’s role in speaking, learning, and creating language. Annual Review of Psychology, 64, 257-283. Doi: 10.1146/annurev-psych-113011-143802

When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture’s contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers’ thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers’ thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.


Shneidman, L. A., Arroyo, M. E., Levine, S., & Goldin-Meadow, S. What counts as effective input for word learning?  Journal of Child Language, 40(3), 672-86.

The talk children hear from their primary caregivers predicts the size of their vocabularies. But children who spend time with multiple individuals also hear talk that others direct to them, as well as talk not directed to them at all. We investigated the effect of linguistic input on vocabulary acquisition in children who routinely spent time with one vs. multiple individuals. For all children, the number of words primary caregivers directed to them at age 2;6 predicted vocabulary size at age 3;6. For children who spent time with multiple individuals, child-directed words from ALL household members also predicted later vocabulary and accounted for more variance in vocabulary than words from primary caregivers alone. Interestingly, overheard words added no predictive value to the model. These findings suggest that speech directed to children is important for early word learning, even in households where a sizable proportion of input comes from overheard speech.


Andric, M., Solodkin, A., Buccino, G., Goldin-Meadow, S., Rizzolatti, G., & Small, S. L. Brain function overlaps when people observe emblems, speech, and grasping. Neuropsychologia, 51(8), 1619-1629.

A hand grasping a cup or gesturing “thumbs-up”, while both manual actions, have different purposes and effects. Grasping directly affects the cup, whereas gesturing “thumbs-up” has an effect through an implied verbal (symbolic) meaning. Because grasping and emblematic gestures (“emblems”) are both goal-oriented hand actions, we pursued the hypothesis that observing each should evoke similar activity in neural regions implicated in processing goal-oriented hand actions. However, because emblems express symbolic meaning, observing them should also evoke activity in regions implicated in interpreting meaning, which is most commonly expressed in language. Using fMRI to test this hypothesis, we had participants watch videos of an actor performing emblems, speaking utterances matched in meaning to the emblems, and grasping objects. Our results show that lateral temporal and inferior frontal regions respond to symbolic meaning, even when it is expressed by a single hand action. In particular, we found that left inferior frontal and right lateral temporal regions are strongly engaged when people observe either emblems or speech. In contrast, we also replicate and extend previous work that implicates parietal and premotor responses in observing goal-oriented hand actions. For hand actions, we found that bilateral parietal and premotor regions are strongly engaged when people observe either emblems or grasping. These findings thus characterize converging brain responses to shared features (e.g., symbolic or manual), despite their encoding and presentation in different stimulus modalities.


Göksun, T., Goldin-Meadow, S., Newcombe, N., & Shipley, T. Individual differences in mental rotation: What does gesture tell us? Cognitive Processing, 14, 153-162.

Gestures are common when people convey spatial information, for example, when they give directions or describe motion in space. Here, we examine the gestures speakers produce when they explain how they solved mental rotation problems (Shepard and Metzler in Science 171:701–703, 1971). We asked whether speakers gesture differently while describing their problems as a function of their spatial abilities. We found that low-spatial individuals (as assessed by a standard paper-and-pencil measure) gestured more to explain their solutions than high-spatial individuals. While this finding may seem surprising, finer-grained analyses showed that low-spatial participants used gestures more often than high-spatial participants to convey “static only” information but less often than high-spatial participants to convey dynamic information. Furthermore, the groups differed in the types of gestures used to convey static information: high-spatial individuals were more likely than low-spatial individuals to use gestures that captured the internal structure of the block forms. Our gesture findings thus suggest that encoding block structure may be as important as rotating the blocks in mental spatial transformation.


Gunderson, E. A., Gripshover, S. J., Romero, C., Dweck, C. S., Goldin-Meadow, S., & Levine, S. C. Parent praise to 1- to 3-year-olds predicts children’s motivational frameworks 5 years later. Child Development, 84(5), 1526-1541. Doi: 10.1111/cdev.12064

In laboratory studies, praising children’s effort encourages them to adopt incremental motivational frameworks—they believe ability is malleable, attribute success to hard work, enjoy challenges, and generate strategies for improvement. In contrast, praising children’s inherent abilities encourages them to adopt fixed-ability frameworks. Does the praise parents spontaneously give children at home show the same effects? Although parents’ early praise of inherent characteristics was not associated with children’s later fixed-ability frameworks, parents’ praise of children’s effort at 14–38 months (N=53) did predict incremental frameworks at 7–8 years, suggesting that causal mechanisms identified in experimental work may be operating in home environments.


So, W-C., Kita, S., & Goldin-Meadow, S. When do speakers use gesture to specify who does what to whom in a narrative? The role of language proficiency and type of gesture.  Journal of Psycholinguistic Research, 42, 581-594. Doi: 10.1007/s10936-012-9230-6

Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context.


Coppola, M., Spaepen, E., & Goldin-Meadow, S. Communicating about quantity without a language model: Number devices in homesign grammar. Cognitive Psychology, 67, 1-25.

All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner’s gesture system and, in this sense, linguistic. The number gestures produced by the homesigners’ hearing communication partners displayed some, but not all, of the homesigners’ linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child’s number gestures displayed all of the properties found in the adult homesigners’ gestures, but his mother’s gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners’ linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input.


Spaepen, E., Flaherty, M., Coppola, M., Spelke, E., & Goldin-Meadow, S. Generating a lexicon without a language model: Do words for number count? Journal of Memory and Language. Doi: 10.1016/j.jml.2013.05.004

Homesigns are communication systems created by deaf individuals without access to conventional linguistic input. To investigate how homesign gestures for number function in short-term memory compared to homesign gestures for objects, actions, or attributes, we conducted memory span tasks with adult homesigners in Nicaragua, and with comparison groups of unschooled hearing Spanish speakers and deaf Nicaraguan Sign Language signers. There was no difference between groups in recall of gestures or words for objects, actions or attributes; homesign gestures therefore can function as word units in short-term memory. However, homesigners showed poorer recall of numbers than the other groups. Increasing the numerical value of the to-be-remembered quantities negatively affected recall in homesigners but not in the comparison groups. When developed without linguistic input, gestures for number do not seem to function as summaries of the cardinal values of the sets (four), but rather as indexes of items within a set (one–one–one–one).


Cartmill, E. A., Armstrong, B. F., Gleitman, L. R., Goldin-Meadow, S., Medina, T. N., & Trueswell, J. C. Quality of early parent input predicts child vocabulary three years later. Proceedings of the National Academy of Sciences of the United States of America, 110(28), 11278-11283. Doi: 10.1073/pnas.1309518110

Children vary greatly in the number of words they know when they enter school, a major factor influencing subsequent school and workplace success. This variability is partially explained by the differential quantity of parental speech to preschoolers. However, the contexts in which young learners hear new words are also likely to vary in referential transparency; that is, in how clearly word meaning can be inferred from the immediate extralinguistic context, an aspect of input quality. To examine this aspect, we asked 218 adult participants to guess 50 parents’ words from (muted) videos of their interactions with their 14- to 18-mo-old children. We found systematic differences in how easily individual parents’ words could be identified purely from this socio-visual context. Differences in this kind of input quality correlated with the size of the children’s vocabulary 3 y later, even after controlling for differences in input quantity. Although input quantity differed as a function of socioeconomic status, input quality (as here measured) did not, suggesting that the quality of nonverbal cues to word meaning that parents offer to their children is an individual matter, widely distributed across the population of parents.
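
A minimal sketch of what "controlling for differences in input quantity" amounts to (the simulated numbers and variable names are purely illustrative assumptions, not the authors' data or model):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50
quality = rng.uniform(0.2, 0.6, n)     # hypothetical proportion of words guessable from muted video
quantity = rng.uniform(500, 3000, n)   # hypothetical word tokens addressed to the child
vocab_later = 100 + 80 * quality + 0.01 * quantity + rng.normal(0, 5, n)

df = pd.DataFrame({"quality": quality, "quantity": quantity, "vocab_later": vocab_later})

# The coefficient on 'quality' estimates its association with later vocabulary
# while holding input quantity constant.
model = smf.ols("vocab_later ~ quality + quantity", data=df).fit()
print(model.params)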

**Additional years are available upon request.