Abstracts

2018

Lu, J., & Goldin-Meadow, S. Creating Images With the Stroke of a Hand: Depiction of Size and Shape in Sign Language. Frontiers in Psychology, doi: 10.3389/fpsyg.2018.01276, PDF

In everyday communication, not only do speakers describe, but they also depict. When depicting, speakers take on the role of other people and quote their speech or imitate their actions. In previous work, we developed a paradigm to elicit depictions in speakers. Here we apply this paradigm to signers to explore depiction in the manual modality, with a focus on depiction of the size and shape of objects. We asked signers to describe two objects that could easily be characterized using lexical signs (Descriptive Elicitation), and objects that were more difficult to distinguish using lexical signs, thus encouraging the signers to depict (Depictive Elicitation). We found that signers used two types of depicting constructions (DCs), conventional DCs and embellished DCs. Both conventional and embellished DCs make use of categorical handshapes to identify objects. But embellished DCs also capture imagistic aspects of the objects, either by adding a tracing movement to gradiently depict the contours of the object, or by adding a second handshape to depict the configuration of the object. Embellished DCs were more frequent in the Depictive Elicitation context than in the Descriptive Elicitation context; lexical signs showed the reverse pattern; and conventional DCs were equally likely in the two contexts. In addition, signers produced iconic mouth movements, which are temporally and semantically integrated with the signs they accompany and depict the size and shape of objects, more often with embellished DCs than with either lexical signs or conventional DCs. Embellished DCs share a number of properties with embedded depictions, constructed action, and constructed dialog in signed and spoken languages. We discuss linguistic constraints on these gradient depictions, focusing on how handshape constrains the type of depictions that can be formed, and the function of depiction in everyday discourse.


Spaepen, E., Gunderson, E., Gibson, D., Goldin-Meadow, S., & Levine, S. Meaning before order: Cardinal principle knowledge predicts improvement in understanding the successor principle and exact ordering. Cognition, doi: 10.1016/j.cognition.2018.06.012, PDF

Learning the cardinal principle (the last word reached when counting a set represents the size of the whole set) is a major milestone in early mathematics. But researchers disagree about the relationship between cardinal principle knowledge and other concepts, including how counting implements the successor function (for each number word N representing a cardinal value, the next word in the count list represents the cardinal value N + 1) and exact ordering (cardinal values can be ordered such that each is one more than the value before it and one less than the value after it). No studies have investigated acquisition of the successor principle and exact ordering over time, and in relation to cardinal principle knowledge. An open question thus remains: Is the cardinal principle a “gatekeeper” concept children must acquire before learning about succession and exact ordering, or can these concepts develop separately? Preschoolers (N = 127) who knew the cardinal principle (CP-knowers) or who knew the cardinal meanings of number words up to “three” or “four” (3–4-knowers) completed succession and exact ordering tasks at pretest and posttest. In between, children completed one of two trainings: counting only versus counting, cardinal labeling, and comparison. CP-knowers started out better than 3–4-knowers on succession and exact ordering. Controlling for this disparity, we found that CP-knowers improved over time on succession and exact ordering; 3–4-knowers did not. Improvement did not differ between the two training conditions. We conclude that children can learn the cardinal principle without understanding succession or exact ordering and hypothesize that children must understand the cardinal principle before learning these concepts.


Cooperrider, K., Abner, N., & Goldin-Meadow, S. The Palm-Up Puzzle: Meanings and Origins of a Widespread Form in Gesture and Sign. Frontiers in Communication, doi: 10.3389/fcomm.2018.00023, PDF

During communication, speakers commonly rotate their forearms so that their palms turn upward. Yet despite more than a century of observations of such palm-up gestures, their meanings and origins have proven difficult to pin down. We distinguish two gestures within the palm-up form family: the palm-up presentational and the palm-up epistemic. The latter is a term we introduce to refer to a variant of the palm-up that prototypically involves lateral separation of the hands. This gesture—our focus—is used in speaking communities around the world to express a recurring set of epistemic meanings, several of which seem quite distinct. More striking, a similar palm-up form is used to express the same set of meanings in many established sign languages and in emerging sign systems. Such observations present a two-part puzzle: the first part is how this set of seemingly distinct meanings for the palm-up epistemic are related, if indeed they are; the second is why the palm-up form is so widely used to express just this set of meanings. We propose a network connecting the different attested meanings of the palm-up epistemic, with a kernel meaning of absence of knowledge, and discuss how this proposal could be evaluated through additional developmental, corpus-based, and experimental research. We then assess two contrasting accounts of the connection between the palm-up form and this proposed meaning network, and consider implications for our understanding of the palm-up form family more generally. By addressing the palm-up puzzle, we aim, not only to illuminate a widespread form found in gesture and sign, but also to provide insights into fundamental questions about visual-bodily communication: where communicative forms come from, how they take on new meanings, and how they become integrated into language in signing communities.


Wakefield, E., Novack, M.A., Congdon, E.L., Franconeri, S., & Goldin-Meadow, S. Gesture helps learners learn, but not merely by guiding their visual attention. Developmental Science, doi: 10.1111/desc.12664, PDF

Teaching a new concept through gestures—hand movements that accompany speech—facilitates learning above‐and‐beyond instruction through speech alone (e.g., Singer & Goldin‐Meadow, 2005). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism—gesture’s ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture—they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor’s speech (i.e., follow along with speech) than children who watch the no‐gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning—following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture’s beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.


Gunderson, E. A., Sorhagen, N., Gripshover, S. J., Dweck, C.S., Goldin-Meadow, S., & Levine, S. C. Parent praise to toddlers predicts fourth grade academic achievement via children’s incremental mindsets. Developmental Psychology, 2018, 54(3), 397-409. doi: 10.1037/dev0000444, PDF

In a previous study, parent praise was observed in natural parent–child interactions at home when children were 1, 2, and 3 years of age. Children who received a relatively high proportion of process praise (e.g., praise for effort and strategies) showed stronger incremental motivational frameworks, including a belief that intelligence can be developed and a greater desire for challenge, when they were in 2nd or 3rd grade (Gunderson et al., 2013). The current study examines these same children’s (n = 53) academic achievement 1 to 2 years later, in 4th grade. Results provide the first evidence that process praise to toddlers predicts children’s academic achievement (in math and reading comprehension) 7 years later, in elementary school, via their incremental motivational frameworks. Further analysis of these motivational frameworks shows that process praise had its effect on fourth grade achievement through children’s trait beliefs (e.g., believing that intelligence is fixed vs. malleable), rather than through their learning goals (e.g., preference for easy vs. challenging tasks). Implications for the socialization of motivation are discussed.


Brooks, N., Barner, D., Frank, M., & Goldin-Meadow, S. The role of gesture in supporting mental representations: The case of mental abacus arithmetic. Cognitive Science, 2018, 42(2), 554-575. doi: 10.1111/cogs.12527, PDF

People frequently gesture when problem-solving, particularly on tasks that require spatial transformation. Gesture often facilitates task performance by interacting with internal mental representations, but how this process works is not well understood. We investigated this question by exploring the case of mental abacus (MA), a technique in which users not only imagine moving beads on an abacus to compute sums, but also produce movements in gestures that accompany the calculations. Because the content of MA is transparent and readily manipulated, the task offers a unique window onto how gestures interface with mental representations. We find that the size and number of MA gestures reflect the length and difficulty of math problems. Also, by selectively interfering with aspects of gesture, we find that participants perform significantly worse on MA under motor interference, but that perceptual feedback is not critical for success on the task. We conclude that premotor processes involved in the planning of gestures are critical to mental representation in MA.


Levine, S., Goldin-Meadow, S., Carlson, M., & Hemani-Lopez, N. Mental Transformation Skill in Young Children: The Role of Concrete and Abstract Motor Training. Cognitive Science, 2018, 1-22. doi: 10.1111/cogs.12603, PDF

We examined the effects of three different training conditions, all of which involve the motor system, on kindergarteners’ mental transformation skill. We focused on three main questions. First, we asked whether training that involves making a motor movement that is relevant to the mental transformation—either concretely through action (action training) or more abstractly through gestural movements that represent the action (move-gesture training)—resulted in greater gains than training using motor movements irrelevant to the mental transformation (point-gesture training). We tested children prior to training, immediately after training (posttest), and 1 week after training (retest), and we found greater improvement in mental transformation skill in both the action and move-gesture training conditions than in the point-gesture condition, at both posttest and retest. Second, we asked whether the total gain made by retest differed depending on the abstractness of the movement-relevant training (action vs. move-gesture), and we found that it did not. Finally, we asked whether the time course of improvement differed for the two movement-relevant conditions, and we found that it did—gains in the action condition were realized immediately at posttest, with no further gains at retest; gains in the move-gesture condition were realized throughout, with comparable gains from pretest-to-posttest and from posttest-to-retest. Training that involves movement, whether concrete or abstract, can thus benefit children’s mental transformation skill. However, the benefits unfold differently over time—the benefits of concrete training unfold immediately after training (online learning); the benefits of more abstract training unfold in equal steps immediately after training (online learning) and during the intervening week with no additional training (offline learning). These findings have implications for the kinds of instruction that can best support spatial learning.


Wakefield, E., Hall, C., James, J., & Goldin-Meadow, S. Gesture for generalization: Gesture facilitates flexible learning of words for actions on objects. Developmental Science, doi: 10.1111/desc.12656, PDF

Verb learning is difficult for children (Gentner, 1982), partially because children have a bias to associate a novel verb not only with the action it represents, but also with the object on which it is learned (Kersten & Smith, 2002). Here we investigate how well 4- and 5-year-old children (N = 48) generalize novel verbs for actions on objects after doing or seeing the action (e.g., twisting a knob on an object) or after doing or seeing a gesture for the action (e.g., twisting in the air near an object). We find not only that children generalize more effectively through gesture experience, but also that this ability to generalize persists after a 24-hour delay.


Congdon, E., Novack, M., & Goldin-Meadow, S. Gesture in Experimental Studies: How Videotape Technology Can Advance Psychological Theory. Organizational Research Methods, 2018, 21(2), 489-499. doi: 10.1177/1094428116654548, PDF

Video recording technology allows for the discovery of psychological phenomena that might otherwise go unnoticed. We focus here on gesture as an example of such a phenomenon. Gestures are movements of the hands or body that people spontaneously produce while speaking or thinking through a difficult problem. Despite their ubiquity, speakers are not always aware that they are gesturing, and listeners are not always aware that they are observing gesture. We review how video technology has facilitated major insights within the field of gesture research by allowing researchers to capture, quantify, and better understand these transient movements. We propose that gesture, which can be easily missed if it is not a researcher’s focus, has the potential to affect thinking and learning in the people who produce it, as well as in the people who observe it, and that it can alter the communicative context of an experiment or social interaction. Finally, we discuss the challenges of using video technology to capture gesture in psychological studies, and we discuss opportunities and suggestions for making use of this rich source of information both within the field of developmental psychology and within the field of organizational psychology.


Uccelli, P., Demir-Lira, O.E., Rowe, M., Levine, S., & Goldin-Meadow, S. Children’s Early Decontextualized Talk Predicts Academic Language Proficiency in Midadolescence. Child Development, 2018, doi: 10.1111/cdev.13034, PDF

This study examines whether children’s decontextualized talk—talk about nonpresent events, explanations, or pretend—at 30 months predicts seventh-grade academic language proficiency (age 12). Academic language (AL) refers to the language of school texts. AL proficiency has been identified as an important predictor of adolescent text comprehension. Yet research on precursors to AL proficiency is scarce. Child decontextualized talk is known to be a predictor of early discourse development, but its relation to later language outcomes remains unclear. Forty-two children and their caregivers participated in this study. The proportion of child talk that was decontextualized emerged as a significant predictor of seventh-grade AL proficiency, even after controlling for socioeconomic status, parent decontextualized talk, child total words, child vocabulary, and child syntactic comprehension.

2017

Demir-Lira, O.E., Asaridou, S., Beharelle, A.R., Holt, A., Goldin-Meadow, S., & Small, S. Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing. Developmental Science, doi: 10.1111/desc.12648, PDF

Gesture is an integral part of children’s communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture–speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., “pet” + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., “bird” + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture–speech integration in children overlaps with—but is broader than—the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children’s increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.


Brentari, D., & Goldin-Meadow, S. Gesture, sign, and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences (2017), doi: 10.1017/S0140525X1600039X, PDF

How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.


Cartmill, E., Rissman, L., Novack, M., & Goldin-Meadow, S. The development of iconicity in children’s co-speech gesture and homesign. Language, Interaction and Acquisition, 8:1 (2017), doi: 10.1075/lia.8.1.03car, PDF

Gesture can illustrate objects and events in the world by iconically reproducing elements of those objects and events. Children do not begin to express ideas iconically, however, until after they have begun to use conventional forms. In this paper, we investigate how children’s use of iconic resources in gesture relates to the developing structure of their communicative systems. Using longitudinal video corpora, we compare the emergence of manual iconicity in hearing children who are learning a spoken language (co-speech gesture) to the emergence of manual iconicity in a deaf child who is creating a manual system of communication (homesign). We focus on one particular element of iconic gesture – the shape of the hand (handshape). We ask how handshape is used as an iconic resource in 1–5-year-olds, and how it relates to the semantic content of children’s communicative acts. We find that patterns of handshape development are broadly similar between co-speech gesture and homesign, suggesting that the building blocks underlying children’s ability to iconically map manual forms to meaning are shared across different communicative systems: those where gesture is produced alongside speech, and those where gesture is the primary mode of communication.


Brentari, D., & Goldin-Meadow, S. Language emergence. Annual Review of Linguistics, PDF

Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.


Cooperrider, K., & Goldin-Meadow, S. When gesture becomes analogy. Topics in Cognitive Science, doi: 10.1111/tops.12276, Abstract, PDF

Analogy researchers do not often examine gesture, and gesture researchers do not often borrow ideas from the study of analogy. One borrowable idea from the world of analogy is the importance of distinguishing between attributes and relations. Gentner (1983, 1988) observed that some metaphors highlight attributes and others highlight relations, and called the latter analogies. Mirroring this logic, we observe that some metaphoric gestures represent attributes and others represent relations, and propose to call the latter analogical gestures. We provide examples of such analogical gestures and show how they relate to the categories of iconic and metaphoric gestures described previously. Analogical gestures represent different types of relations and different degrees of relational complexity, and sometimes cohere into larger analogical models. Treating analogical gestures as a distinct phenomenon prompts new questions and predictions, and illustrates one way that the study of gesture and the study of analogy can be mutually informative.


Ozcaliskan, S., Lucero, C., & Goldin-Meadow, S. Blind speakers show language-specific patterns in co-speech gesture but not silent gesture. Cognitive Science, doi: 10.1111/cogs.12502, Abstract, PDF

Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.


Brookshire, G., Lu, J., Nusbaum, H., Goldin-Meadow, S., & Casasanto, D.  Visual cortex entrains to sign language.  PNAS, doi: 10.1073/pnas.1620350114, PDF

Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language <5 Hz, peaking at ∼1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to rhythmic information regardless of modality.


Wakefield, E.M., Novack, M., & Goldin-Meadow, S. Unpacking the ontogeny of gesture understanding: How movement becomes meaningful across development. Child Development, doi: 10.1111/cdev.12817, PDF

Gestures, hand movements that accompany speech, affect children’s learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects (but not touching them), or (c) an actor moving her hands in the absence of objects. Participants across all ages were equally able to identify actions on objects as goal directed, but the ability to identify empty-handed movements as representational actions (i.e., as gestures) increased with age and was influenced by the presence of objects, especially in older children.


Congdon, E.L., Novack, M.A., Brooks, N., Hemani-Lopez, N., O’Keefe, L., & Goldin-Meadow, S. Better together: Simultaneous presentation of speech and gesture in math instruction supports generalization and retention. Learning and Instruction, 2017. doi: 10.1016/j.learninstruc.2017.03.005, PDF

When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.


Rissman, L., & Goldin-Meadow, S. The development of causal structure without a language model. Language Learning and Development, doi: 10.1080/15475441.2016.1254633, PDF

Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not exposed to structured linguistic input could create linguistic devices for encoding causation and, if so, whether the emergence of this causal language would follow a trajectory similar to the one observed for children learning language from linguistic input. We show that the child in our study did develop causation-encoding morphology, but only after initially using verbs that lacked internal causal structure. These results suggest that the ability to encode causation linguistically can emerge in the absence of a language model, and that exposure to linguistic input is not the only factor guiding children from one stage to the next in their production of causal language.


Goldin-Meadow, S. Using our hands to change our minds. WIREs Cognitive Science, doi: 10.1002/wcs.1368. PDF

Jean Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how children understand the task at each point, but also about how they progress from one point to the next. This article examines a routine behavior that Piaget overlooked—the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. Gesture can do more than reflect ideas—it can also change them. Observing the gestures that others produce can change a learner’s ideas, as can producing one’s own gestures. In this sense, gesture behaves like any other action. But gesture differs from many other actions in that it also promotes generalization of new ideas. Gesture represents the world rather than directly manipulating the world (gesture does not move objects around) and is thus a special kind of action. As a result, the mechanisms by which gesture and action promote learning may differ. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.


Goldin-Meadow, S. What the hands can tell us about language emergence. Psychonomic Bulletin & Review, 2017, 24(1), 213-218, doi:10.3758/s13423-016-1074-x, PDF

Why, in all cultures in which hearing is possible, has language become the province of speech and the oral modality? I address this question by widening the lens with which we look at language to include the manual modality. I suggest that human communication is most effective when it makes use of two types of formats––a discrete and segmented code, produced simultaneously along with an analog and mimetic code. The segmented code is supported by both the oral and the manual modalities. However, the mimetic code is more easily handled by the manual modality. We might then expect mimetic encoding to be done preferentially in the manual modality (gesture), leaving segmented encoding to the oral modality (speech). This argument rests on two assumptions: (1) The manual modality is as good at segmented encoding as the oral modality; sign languages, established and idiosyncratic, provide evidence for this assumption. (2) Mimetic encoding is important to human communication and best handled by the manual modality; co-speech gesture provides evidence for this assumption. By including the manual modality in two contexts––when it takes on the primary function of communication (sign language), and when it takes on a complementary communicative function (gesture)––in our analysis of language, we gain new perspectives on the origins and continuing development of language.


Goldin-Meadow, S., & Yang, C. Statistical evidence that a child can create a combinatorial linguistic system without external linguistic input: Implications for language evolution. Neuroscience & Biobehavioral Reviews, doi: 10.1016/j.neubiorev.2016.12.016, PDF

Can a child who is not exposed to a model for language nevertheless construct a communication system characterized by combinatorial structure? We know that deaf children whose hearing losses prevent them from acquiring spoken language, and whose hearing parents have not exposed them to sign language, use gestures, called homesigns, to communicate. In this study, we call upon a new formal analysis that characterizes the statistical profile of grammatical rules and, when applied to child language data, finds that young children’s language is consistent with a productive grammar rather than rote memorization of specific word combinations in caregiver speech. We apply this formal analysis to homesign, and find that homesign can also be characterized as having productive grammar. Our findings thus provide evidence that a child can create a combinatorial linguistic system without external linguistic input, and offer unique insight into how the capacity of language evolved as part of human biology.

2016

Cooperrider, K., Gentner, D. & Goldin-Meadow, S. Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures. Cognitive Research: Principles and Implications. doi: 10.1186/s41235-016-0024-5. PDF

How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase) and sometimes more complex relational structures (e.g., negative feedback). Moreover, over the course of their explanations, participants’ gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.


Wakefield, E. M., Hall, C., James, K. H., & Goldin-Meadow, S. Representational gesture as a tool for promoting word learning in young children. In Proceedings of the 41st Annual Boston University Conference on Language Development, Boston, MA, 2016.

The movements we produce or observe others produce can help us learn. Two forms of movement that are commonplace in our daily lives are actions, hand movements that directly manipulate our environment, and gestures, hand movements that accompany speech and represent ideas but do not lead to physical changes in the environment. Both action and gesture have been found to influence cognition, facilitating our ability to learn and remember new information (e.g., Calvo-Merino, Glaser, Grezes, Passingham, & Haggard, 2005; Casile & Giese, 2006; Chao & Martin, 2000; Cook, Mitchell, & Goldin-Meadow, 2008; Goldin-Meadow, Cook, & Mitchell, 2009; Goldin-Meadow et al., 2012; James, 2010; James & Atwood, 2009; James & Gauthier, 2006; James & Maouene, 2009; James & Swain, 2011; Longcamp, Anton, Roth, & Velay, 2003; Longcamp, Tanskanen, & Hari, 2006; Pulvermüller, 2001; Wakefield & James, 2015). However, the two types of movement may affect learning in different ways. In previous work, the effects of action and gesture on learning have been considered separately (but see Novack, Congdon, Hemani-Lopez, & Goldin-Meadow, 2014). Our goal here is to directly compare children’s ability to learn from actions on objects versus gestures off objects. We consider this question in the realm of word learning, specifically, teaching children verbs for actions that are performed on objects. We also ask whether learning through these movements unfolds differently when movements are produced versus observed by a child. More broadly, our study is a first step in understanding how information is learned, generalized, and retained based on whether it is expressed through action or gesture.

Novack, M., & Goldin-Meadow, S. Gesture as representational action: A paper about function. Psychonomic Bulletin and Review, doi:10.3758/s13423-016-1145-z. PDF

A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal––that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495–514, 2008)––has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon and that is to understand its function. A phenomenon’s function is its purpose rather than its precipitating cause––the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that whether or not gesture is simulated action in terms of its mechanism––it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.


Goldin-Meadow, S., & Brentari, D. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences, doi:10.1017/S0140525X15001247. PDF

Characterizations of sign language have swung from the view that sign is nothing more than a language of pictorial gestures with no linguistic structure, to the view that sign is no different from spoken language and has the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign, gesture, and speech. We conclude that signers gesture just as speakers do–both produce imagistic gestures along with categorical signs/words, and we call for new technology to help us better calibrate the borders between sign and gesture.


Andric, M., Goldin-Meadow, S., Small, S. & Hasson, U. Repeated movie viewings produce similar local activity patterns but different network configurations. Neuroimage, 2016, doi: 10.1016/j.neuroimage.2016.07.061. PDF

People seek novelty in everyday life, but they also enjoy viewing the same movies or reading the same novels a second time. What changes and what stays the same when re-experiencing a narrative? In examining this question with functional neuroimaging, we found that brain activity reorganizes in a hybrid, scale-dependent manner when individuals processed the same audiovisual narrative a second time. At the most local level, sensory systems (occipital and temporal cortices) maintained a similar temporal activation profile during the two viewings. Nonetheless, functional connectivity between these same lateral temporal regions and other brain regions was stronger during the second viewing. Furthermore, at the level of whole-brain connectivity, we found a significant rearrangement of network partition structure: lateral temporal and inferior frontal regions clustered together during the first viewing but merged within a fronto-parietal cluster in the second. Our findings show that repetition maintains local activity profiles. However, at the same time, it is associated with multiple network-level connectivity changes on larger scales, with these changes strongly involving regions considered core to language processing.


Asaridou, S., Demir-Lira, O.E., Goldin-Meadow, S., & Small, S.L. The pace of vocabulary growth during preschool predicts cortical structure at school age. Neuropsychologia, 2016, doi: 10.1016/j.neuropsychologia.2016.05.018. PDF

Children vary greatly in their vocabulary development during preschool years. Importantly, the pace of this early vocabulary growth predicts vocabulary size at school entrance. Despite its importance for later academic success, not much is known about the relation between individual differences in early vocabulary development and later brain structure and function. Here we examined the association between vocabulary growth in children, as estimated from longitudinal measurements from 14 to 58 months, and individual differences in brain structure measured in 3rd and 4th grade (8–10 years old). Our results show that the pace of vocabulary growth uniquely predicts cortical thickness in the left supramarginal gyrus. Probabilistic tractography revealed that this region is directly connected to the inferior frontal gyrus (pars opercularis) and the ventral premotor cortex, via what is most probably the superior longitudinal fasciculus III. Our findings demonstrate, for the first time, the relation between the pace of vocabulary learning in children and a specific change in the structure of the cerebral cortex, specifically, cortical thickness in the left supramarginal gyrus. They also highlight the fact that differences in the pace of vocabulary growth are associated with the dorsal language stream, which is thought to support speech perception and articulation.


Cooperrider, K., Gentner, D., & Goldin-Meadow, S. Gesture reveals spatial analogies during complex relational reasoning. Proceedings of the 38th Annual Meeting of the Cognitive Science Society (pp. 692-697). Austin, TX: Cognitive Science Society, 2016. PDF

How do people think about complex relational phenomena like the behavior of the stock market? Here we hypothesize that people reason about such phenomena in part by creating spatial analogies, and we explore this possibility by examining people’s spontaneous gestures. Participants read a written lesson describing positive and negative feedback systems and then explained the key differences between them. Though the lesson was highly abstract and free of concrete imagery, participants produced spatial gestures in abundance during their explanations. These spatial gestures, despite being fundamentally abstract, showed clear regularities and often built off of each other to form larger spatial models of relational structure—that is, spatial analogies. Importantly, the spatial richness and systematicity revealed in participants’ gestures was largely divorced from spatial language. These results provide evidence for the spontaneous use of spatial analogy during complex relational reasoning.


Novack, M.A., Wakefield, E.M., Congdon, E.L., Franconeri, S., & Goldin-Meadow, S. There is more to gesture than meets the eye: Visual attention to gesture’s referents cannot account for its facilitative effects during math instruction. Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2141-2146). Austin, TX: Cognitive Science Society, 2016. PDF

Teaching a new concept with gestures – hand movements that accompany speech – facilitates learning above-and-beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still being explored. Here, we use eyetracking to explore one mechanism – gesture’s ability to direct visual attention. We examine how children allocate their visual attention during a mathematical equivalence lesson that either contains gesture or does not. We show that gesture instruction improves posttest performance, and additionally that gesture does change how children visually attend to instruction: children look more to the problem being explained, and less to the instructor. However, looking patterns alone cannot explain gesture’s effect, as posttest performance is not predicted by any of our looking-time measures. These findings suggest that gesture does guide visual attention, but that attention alone cannot account for its facilitative learning effects.


Ozcaliskan, S., Lucero, C., & Goldin-Meadow, S. Is seeing gesture necessary to gesture like a native speaker? Psychological Science, doi:10.1177/0956797616629931. PDF

Speakers of all languages gesture, but there are differences in the gestures that they produce. Do speakers learn language-specific gestures by watching others gesture or by learning to speak a particular language? We examined this question by studying the speech and gestures produced by 40 congenitally blind adult native speakers of English and Turkish (n = 20/language), and comparing them with the speech and gestures of 40 sighted adult speakers in each language (20 wearing blindfolds, 20 not wearing blindfolds). We focused on speakers’ descriptions of physical motion, which display strong cross-linguistic differences in patterns of speech and gesture use. Congenitally blind speakers of English and Turkish produced speech that resembled the speech produced by sighted speakers of their native language. More important, blind speakers of each language used gestures that resembled the gestures of sighted speakers of that language. Our results suggest that hearing a particular language is sufficient to gesture like a native speaker of that language.


Ozcaliskan, S., Lucero, C. & Goldin-Meadow, S. Does language shape silent gesture? Cognition, 2016, 148, 10-18, doi: 10.1016/j.cognition.2015.12.001. PDF

Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (N = 20/language) asked to describe physical motion events (e.g., running down a path)—a domain known to elicit distinct patterns of speech and co-speech gesture in English- and Turkish-speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech—co-speech gestures produced by English-speakers differed from co-speech gestures produced by Turkish-speakers. However, we found no effect of language on gesture when it was produced on its own—silent gestures produced by English-speakers were identical to silent gestures produced by Turkish-speakers in how motion elements were packaged and ordered. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language.


Trueswell, J., Lin, Y., Armstrong III, B., Cartmill, E., Goldin-Meadow, S. & Gleitman, L. Perceiving referential intent: Dynamics of reference in natural parent–child interactions. Cognition, 2016, 148, 117-135, doi:10.1016/j.cognition.2015.11.002. PDF

Two studies are presented which examined the temporal dynamics of the social-attentive behaviors that co-occur with referent identification during natural parent–child interactions in the home. Study 1 focused on 6.2 h of videos of 56 parents interacting during everyday activities with their 14–18 month-olds, during which parents uttered common nouns as parts of spontaneously occurring utterances. Trained coders recorded, on a second-by-second basis, parent and child attentional behaviors relevant to reference in the period (40 s) immediately surrounding parental naming. The referential transparency of each interaction was independently assessed by having naïve adult participants guess what word the parent had uttered in these video segments, but with the audio turned off, forcing them to use only non-linguistic evidence available in the ongoing stream of events. We found a great deal of ambiguity in the input along with a few potent moments of word-referent transparency; these transparent moments have a particular temporal signature with respect to parent and child attentive behavior: it was the object’s appearance and/or the fact that it captured parent/child attention at the moment the word was uttered, not the presence of the object throughout the video, that predicted observers’ accuracy. Study 2 experimentally investigated the precision of the timing relation, and whether it has an effect on observer accuracy, by disrupting the timing between when the word was uttered and the behaviors present in the videos as they were originally recorded. Disrupting timing by only ±1 to 2 s reduced participant confidence and significantly decreased their accuracy in word identification. The results enhance an expanding literature on how dyadic attentional factors can influence early vocabulary growth. By hypothesis, this kind of time-sensitive data-selection process operates as a filter on input, removing many extraneous and ill-supported word-meaning hypotheses from consideration during children’s early vocabulary learning.


Novack, M., Wakefield, E. & Goldin-Meadow, S. What makes a movement a gesture? Cognition, 2016, 146, 339-348, doi:10.1016/j.cognition.2015.10.014. PDF

Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement—movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features—the form of an actor’s hands and the presence of speech-like sounds—to test the effect of context on observers’ classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation.

2015

Abner, N., Cooperrider, K., & Goldin-Meadow, S.  Gesture for linguists: A handy primer. Language and Linguistics Compass, 2015, 9/11, 437-449, doi:10.1111/lnc3.12168. PDF

Humans communicate using language, but they also communicate using gesture – spontaneous movements of the hands and body that universally accompany speech. Gestures can be distinguished from other movements, segmented, and assigned meaning based on their forms and functions. Moreover, gestures systematically integrate with language at all levels of linguistic structure, as evidenced in both production and perception. Viewed typologically, gesture is universal, but nevertheless exhibits constrained variation across language communities (as does language itself). Finally, gesture has rich cognitive dimensions in addition to its communicative dimensions. In overviewing these and other topics, we show that the study of language is incomplete without the study of its communicative partner, gesture.


Horton, L., Goldin-Meadow, S., Coppola, M., Senghas, A., & Brentari, D. Forging a morphological system out of two dimensions: Agentivity and number. Open Linguistics, 2015, 1, 596-613, doi: 10.1515/opli-2015-0021. PDF

Languages have diverse strategies for marking agentivity and number. These strategies are negotiated to create combinatorial systems. We consider the emergence of these strategies by studying features of movement in a young sign language in Nicaragua (NSL). We compare two age cohorts of Nicaraguan signers (NSL1 and NSL2), adult homesigners in Nicaragua (deaf individuals creating a gestural system without linguistic input), signers of American and Italian Sign Languages (ASL and LIS), and hearing individuals asked to gesture silently. We find that all groups use movement axis and repetition to encode agentivity and number, suggesting that these properties are grounded in action experiences common to all participants. We find another feature – unpunctuated repetition – in the sign systems (ASL, LIS, NSL, Homesign) but not in silent gesture. Homesigners and NSL1 signers use the unpunctuated form, but limit its use to No-Agent contexts; NSL2 signers use the form across No-Agent and Agent contexts. A single individual can thus construct a marker for number without benefit of a linguistic community (homesign), but generalizing this form across agentive conditions requires an additional step. This step does not appear to be achieved when a linguistic community is first formed (NSL1), but requires transmission across generations of learners (NSL2).


Brooks, N., & Goldin-Meadow, S. Moving to learn: How guiding the hands can set the stage for learning. Cognitive Science, 2015, doi: 10.1111/cogs.12292. PDF

Previous work has found that guiding problem-solvers’ movements can have an immediate effect on their ability to solve a problem. Here we explore these processes in a learning paradigm. We ask whether guiding a learner’s movements can have a delayed effect on learning, setting the stage for change that comes about only after instruction. Children were taught movements that were either relevant or irrelevant to solving mathematical equivalence problems and were told to produce the movements on a series of problems before they received instruction in mathematical equivalence. Children in the relevant movement condition improved after instruction significantly more than children in the irrelevant movement condition, despite the fact that the children showed no improvement in their understanding of mathematical equivalence on a ratings task or on a paper-and-pencil test taken immediately after the movements but before instruction. Movements of the body can thus be used to sow the seeds of conceptual change. But those seeds do not necessarily come to fruition until after the learner has received explicit instruction in the concept, suggesting a “sleeper effect” of gesture on learning.


Novack, M., Goldin-Meadow, S., & Woodward, A. Learning from gesture: How early does it happen? Cognition, 2015, 142, 138-147. doi: 10.1016/j.cognition.2015.05.018. PDF

Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form—a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter’s gesture as it was performed). Study 2 compared 2-year-olds’ performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner’s attention, it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation.


Novack, M., & Goldin-Meadow, S. Learning from gesture: How our hands change our minds. Educational Psychology Review, 2015, 27(3), 405-412, doi: 10.1007/s10648-015-9325-3. PDF

When people talk, they gesture, and those gestures often reveal information that cannot be found in speech. Learners are no exception. A learner’s gestures can index moments of conceptual instability, and teachers can make use of those gestures to gain access into a student’s thinking. Learners can also discover novel ideas from the gestures they produce during a lesson or from the gestures they see their teachers produce. Gesture thus has the power not only to reflect a learner’s understanding of a problem but also to change that understanding. This review explores how gesture supports learning across development and ends by offering suggestions for ways in which gesture can be recruited in educational settings.


Goldin-Meadow, S., From Action to Abstraction: Gesture as a mechanism of change. Developmental Review, 2015, doi: 10.1016/j.dr.2015.07.007. PDF

Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked – the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. But gesture can do more than reflect ideas – it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ – gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.


Gunderson, E., Spaepen, E., Gibson, D., Goldin-Meadow, S., Levine, S. Gesture as a window onto children’s number knowledge. Cognition, 2015, 144, 14-28, doi:10.1016/j.cognition.2015.07.008. PDF

Before learning the cardinal principle (knowing that the last word reached when counting a set represents the size of the whole set), children do not use number words accurately to label most set sizes. However, it remains unclear whether this difficulty reflects a general inability to conceptualize and communicate about number, or a specific problem with number words. We hypothesized that children’s gestures might reflect knowledge of number concepts that they cannot yet express in speech, particularly for numbers they do not use accurately in speech (numbers above their knower-level). Number gestures are iconic in the sense that they are item-based (i.e., each finger maps onto one item in a set) and therefore may be easier to map onto sets of objects than number words, whose forms do not map transparently onto the number of items in a set and, in this sense, are arbitrary. In addition, learners in transition with respect to a concept often produce gestures that convey different information than the accompanying speech. We examined the number words and gestures 3- to 5-year-olds used to label small set sizes exactly (1–4) and larger set sizes approximately (5–10). Children who had not yet learned the cardinal principle were more than twice as accurate when labeling sets of 2 and 3 items with gestures than with words, particularly if the values were above their knower-level. They were also better at approximating set sizes 5–10 with gestures than with words. Further, gesture was more accurate when it differed from the accompanying speech (i.e., a gesture–speech mismatch). These results show that children convey numerical information in gesture that they cannot yet convey in speech, and raise the possibility that number gestures play a functional role in children’s development of number concepts.


Suskind, D., Leffel, K. R., Leininger, L., Gunderson, E. A., Sapolich, S. G., Suskind, E., Hernandez, M.W., Goldin-Meadow, S., Graf, E. & Levine, S. A Parent-Directed Language Intervention for Children of Low Socioeconomic Status: A Randomized Controlled Pilot Study. Journal of Child Language, Available on CJO 2015. doi:10.1017/S0305000915000033, PDF

We designed a parent-directed home-visiting intervention targeting socioeconomic status (SES) disparities in children’s early language environments. A randomized controlled trial was used to evaluate whether the intervention improved parents’ knowledge of child language development and increased the amount and diversity of parent talk. Twenty-three mother–child dyads (12 experimental, 11 control, aged 1;5–3;0) participated in eight weekly hour-long home-visits. In the experimental group, but not the control group, parent knowledge of language development increased significantly one week and four months after the intervention. In lab-based observations, parent word types and tokens and child word types increased significantly one week, but not four months, post-intervention. In home-based observations, adult word tokens, conversational turn counts, and child vocalization counts increased significantly during the intervention, but not post-intervention. The results demonstrate the malleability of child-directed language behaviors and knowledge of child language development among low-SES parents.


Goldin-Meadow, S., Brentari, D., Coppola, M., Horton, L., Senghas, A. Watching language grow in the manual modality: Nominals, predicates, and handshapes. Cognition, 2015, 135, 381-395. PDF

All languages, both spoken and signed, make a formal distinction between two types of terms in a proposition – terms that identify what is to be talked about (nominals) and terms that say something about this topic (predicates). Here we explore conditions that could lead to this property by charting its development in a newly emerging language – Nicaraguan Sign Language (NSL). We examine how handshape is used in nominals vs. predicates in three Nicaraguan groups: (1) homesigners who are not part of the Deaf community and use their own gestures, called homesigns, to communicate; (2) NSL cohort 1 signers who fashioned the first stage of NSL; (3) NSL cohort 2 signers who learned NSL from cohort 1. We compare these three groups to a fourth: (4) native signers of American Sign Language (ASL), an established sign language. We focus on handshape in predicates that are part of a productive classifier system in ASL; handshape in these predicates varies systematically across agent vs. no-agent contexts, unlike handshape in the nominals we study, which does not vary across these contexts. We found that all four groups, including homesigners, used handshape differently in nominals vs. predicates – they displayed variability in handshape form across agent vs. no-agent contexts in predicates, but not in nominals. Variability thus differed in predicates and nominals: (1) In predicates, the variability across grammatical contexts (agent vs. no-agent) was systematic in all four groups, suggesting that handshape functioned as a productive morphological marker on predicate signs, even in homesign. This grammatical use of handshape can thus appear in the earliest stages of an emerging language. (2) In nominals, there was no variability across grammatical contexts (agent vs. no-agent), but there was variability within- and across-individuals in the handshape used in the nominal for a particular object. This variability was striking in homesigners (an individual homesigner did not necessarily use the same handshape in every nominal he produced for a particular object), but decreased in the first cohort of NSL and remained relatively constant in the second cohort. Stability in the lexical use of handshape in nominals thus does not seem to emerge unless there is pressure from a peer linguistic community. Taken together, our findings argue that a community of users is essential to arrive at a stable nominal lexicon, but not to establish a productive morphological marker in predicates. Examining the steps a manual communication system takes as it moves toward becoming a fully-fledged language offers a unique window onto factors that have made human language what it is.


Goldin-Meadow, S. Gesture as a window onto communicative abilities:  Implications for diagnosis and intervention. SIG 1 Perspectives on Language Learning and Education, 2015, 22, 50-60. doi:10.1044/lle22.2.50.

Speakers around the globe gesture when they talk, and young children are no exception. In fact, children’s first foray into communication tends to be through their hands rather than their mouths. There is now good evidence that children typically express ideas in gesture before they express the same ideas in speech. Moreover, the age at which these ideas are expressed in gesture predicts the age at which the same ideas are first expressed in speech. Gesture thus not only precedes, but also predicts, the onset of linguistic milestones. These facts set the stage for using gesture in two ways in children who are at risk for language delay. First, gesture can be used to identify individuals who are not producing gesture in a timely fashion, and can thus serve as a diagnostic tool for pinpointing subsequent difficulties with spoken language. Second, gesture can facilitate learning, including word learning, and can thus serve as a tool for intervention, one that can be implemented even before a delay in spoken language is detected.


Demir, O.E., Rowe, M., Heller, G., Goldin-Meadow, S., & Levine, S.C.  Vocabulary, syntax, and narrative development in typically developing children and children with early unilateral brain injury:  Early parental talk about the there-and-then matters. Developmental Psychology, 2015, 51(2), 161-175. doi: 10.1037/a0038476. PDF.

This study examines the role of a particular kind of linguistic input—talk about the past and future, pretend, and explanations, that is, talk that is decontextualized—in the development of vocabulary, syntax, and narrative skill in typically developing (TD) children and children with pre- or perinatal brain injury (BI). Decontextualized talk has been shown to be particularly effective in predicting children’s language skills, but it is not clear why. We first explored the nature of parent decontextualized talk and found it to be linguistically richer than contextualized talk in parents of both TD and BI children. We then found, again for both groups, that parent decontextualized talk at child age 30 months was a significant predictor of child vocabulary, syntax, and narrative performance at kindergarten, above and beyond the child’s own early language skills, parent contextualized talk and demographic factors. Decontextualized talk played a larger role in predicting kindergarten syntax and narrative outcomes for children with lower syntax and narrative skill at age 30 months, and also a larger role in predicting kindergarten narrative outcomes for children with BI than for TD children. The difference between the 2 groups stemmed primarily from the fact that children with BI had lower narrative (but not vocabulary or syntax) scores than TD children. When the 2 groups were matched in terms of narrative skill at kindergarten, the impact that decontextualized talk had on narrative skill did not differ for children with BI and for TD children. Decontextualized talk is thus a strong predictor of later language skill for all children, but may be particularly potent for children at the lower-end of the distribution for language skill. The findings also suggest that variability in the language development of children with BI is influenced not only by the biological characteristics of their lesions, but also by the language input they receive.


Goldin-Meadow, S. Studying the mechanisms of language learning by varying the learning environment and the learner. Language, Cognition & Neuroscience. doi:10.1080/23273798.2015.1016978. PDF

Language learning is a resilient process, and many linguistic properties can be developed under a wide range of learning environments and learners. The first goal of this review is to describe properties of language that can be developed without exposure to a language model – the resilient properties of language – and to explore conditions under which more fragile properties emerge. But even if a linguistic property is resilient, the developmental course that the property follows is likely to vary as a function of learning environment and learner, that is, there are likely to be individual differences in the learning trajectories children follow. The second goal is to consider how the resilient properties are brought to bear on language learning when a child is exposed to a language model. The review ends by considering the implications of both sets of findings for mechanisms, focusing on the role that the body and linguistic input play in language learning.


Demir, O.E., Levine, S., & Goldin-Meadow, S.  A tale of two hands: Children’s gesture use in narrative production predicts later narrative structure in speech. Journal of Child Language, 2015, 42(3), 662-681. doi:10.1017/S0305000914000415, PDF

Speakers of all ages spontaneously gesture as they talk. These gestures predict children’s milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age five and then again at six, seven, and eight. Children’s narrative structure in speech improved across these ages. At age five, many of the children expressed a character’s viewpoint in gesture, and these children were more likely to tell better-structured stories at the later ages than children who did not produce character-viewpoint gestures at age five. In contrast, framing narratives from a character’s perspective in speech at age five did not predict later narrative structure in speech. Gesture thus continues to act as a harbinger of change even as it assumes new roles in relation to discourse.


Ozyurek, A., Furman, R., & Goldin-Meadow, S.  On the way to language:  event segmentation in homesign and gesture.  Journal of Child Language, 2015, 42(1), 64-94.  doi:10.1017/S0305000913000512. PDF

Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. The mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages.


**additional years are available upon request.