The Resilience of Language: Spontaneously Created Gesture Systems

It is commonly asked whether language is learned or innate. In my research, I recast the question so that it is amenable to investigation. I ask which aspects of language development are more (or less) sensitive to linguistic and environmental input. Specifically, I have been engaged in a research program to identify the properties of language whose development can withstand wide variations in learning conditions – the “resilient” properties of language.

My students and I have observed children who have not been exposed to conventional linguistic input in order to determine which properties of language can be developed under one set of severely degraded input conditions. The children we study are deaf with hearing losses so extensive that they cannot naturally acquire oral language. In addition, they are born to hearing parents who have not yet exposed them to sign language. Under such inopportune conditions, we might expect no symbolic communication at all or, at the least, communication that is unlike conventional language. This turns out not to be the case. The children use their hands to communicate – they gesture (one Chinese deaf child, for example, created a “fish” gesture, and an American deaf child gestured about snow shovels). Even more striking, the gestures that the children create are structured like the early communication systems that children develop when exposed to conventional language, either spoken or signed. My book, The Resilience of Language, summarizes these findings and considers what they tell us about how all children learn language.

My current work asks whether deaf children across the globe who lack conventional language will develop structured gesture systems. In this work, we focus on the resilience of various properties of language in the face of wide cultural variation. We are currently studying the gesture systems invented by deaf children of hearing parents in four cultures – Spanish, Turkish, Chinese, and American. We have chosen these cultures because the gestures that hearing speakers produce in Spanish and Turkish cultures look different from those produced by hearing speakers in American and Chinese cultures. As a result, the gestures that deaf children born to hearing parents see in Spain and Turkey may differ from those seen in China and America. For example, the gestures Spanish and Turkish speakers produce when they talk seem to be richer (conveying a wider range of semantic elements), but also more variable, than the gestures produced by English and Mandarin speakers. This variability might provide the deaf children we study with a stepping stone to a more complex linguistic system. Alternatively, variability could make it more difficult to abstract the essential elements of a semantic relation and thus result in a less language-like system.

By comparing the different gesture models that speakers of Spanish and Turkish vs. English and Mandarin present to the deaf child, we will have a paradigm within which to observe the relation between adult input and child output – and a unique opportunity to observe the child’s skills as a language-maker.

Hearing Gesture: The Gestures We Produce When We Talk

Another facet of my work explores the spontaneous gestures that hearing adults and children produce as they speak. Picture, for example, an adult thinking through a moral dilemma and gesturing as he does so. It is often assumed that gesture is nothing more than hand-waving. In fact, the gestures that we produce when we talk often convey substantive, task-related information. Consider a child asked to explain how he solved the math problem 6 + 3 + 4 = __ + 4. He put 13 in the blank and said “6 plus 3 is 9, 9 plus 4 equals 13.” At the same time, he pointed at the 6, the 3, and the 4 on the left side of the equation, and at the 13 in the blank. The child conveyed an “add-to-equal-sign” strategy in speech and the same “add-to-equal-sign” strategy in gesture – he produced a gesture-speech match.

Interestingly, there are times when the information conveyed in gesture differs from the information conveyed in the words that accompany those gestures. For example, a child solved the problem 7 + 6 + 5 = __ + 5 by putting 18 in the blank. When asked to explain her solution, she said “7 plus 6 is 13 plus 5 more is 18 and that’s all I did” – she gave an “add-to-equal-sign” strategy in speech, just like the child in the first example. However, this second child conveyed a different strategy in gesture – she pointed at all four numbers in the problem (the 7, the 6, the left 5, and the right 5). She thus conveyed an “add-all-numbers” strategy in gesture and produced a gesture-speech mismatch.
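
To make the two strategies and the match/mismatch coding concrete, here is a minimal sketch in Python. It is our illustration, not the coding scheme used in the research; the function names and the representation of a problem as two tuples of numbers are assumptions made for this example.

```python
# Illustrative sketch of the two strategies described above, applied to
# problems of the form a + b + c = __ + c. Not the lab's actual coding scheme.

def add_to_equal_sign(left_side, right_side):
    # Sum every number to the left of the equal sign.
    return sum(left_side)

def add_all_numbers(left_side, right_side):
    # Sum every number in the problem, ignoring where the equal sign falls.
    return sum(left_side) + sum(right_side)

def classify(speech_strategy, gesture_strategy):
    # A response is a "match" when speech and gesture convey the same
    # strategy, and a "mismatch" otherwise.
    return "match" if speech_strategy == gesture_strategy else "mismatch"

# First child: 6 + 3 + 4 = __ + 4, answered 13.
print(add_to_equal_sign((6, 3, 4), (4,)))                  # 13, his answer
print(classify("add-to-equal-sign", "add-to-equal-sign"))  # match

# Second child: 7 + 6 + 5 = __ + 5, answered 18.
print(add_to_equal_sign((7, 6, 5), (5,)))                  # 18, her answer in speech
print(add_all_numbers((7, 6, 5), (5,)))                    # 23, the strategy her gesture conveyed
print(classify("add-to-equal-sign", "add-all-numbers"))    # mismatch

# The correct answer makes both sides equal: sum(left_side) - sum(right_side),
# i.e., 9 for the first problem and 13 for the second, so both strategies
# above yield incorrect answers.
```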

We have discovered that speakers who produce mismatches on a task are particularly ready to learn that task (even if, as in the above example, both strategies in the mismatch lead to incorrect answers). Mismatch can thus tell us who is ready to learn. In addition, mismatch offers us insight into the mental processes that characterize the learner when in this transitional state. In a mismatch, two beliefs are simultaneously expressed on the same problem – one in gesture and another in speech. It is the simultaneous activation of multiple beliefs that appears to characterize the transitional knowledge state and create gesture-speech mismatch.

Thus, the gestures we produce reflect our thoughts, and those thoughts are often not revealed in our words. In my current research, I am exploring two ways in which gesture might not only reflect cognitive change but also help to create it.

First, gesture might play a role in the learning process by displaying, for all to see, the learner’s newest, and perhaps undigested, thoughts. Parents, teachers, and peers would then have the opportunity to react to those unspoken thoughts and provide the learner with the input necessary for future steps. Gesture, by influencing the input learners receive from others, would then be part of the process of change itself. We have found evidence consistent with this hypothesis in teachers instructing school-aged children in mathematical equivalence problems.

Second, gesture could play a role in the learning process more directly by influencing learners themselves. Gesture externalizes ideas differently and therefore may draw on different resources than does speech. Conveying an idea across modalities may, in the end, require less effort than conveying the idea within speech alone. In other words, gesture may serve as a “cognitive prop,” freeing up cognitive effort that can be used on other tasks. If so, using gesture may actually ease the learner’s processing burden and, in this way, function as part of the mechanism of change.

Gesture thus has the potential to contribute to cognitive change in at least two ways – directly, by influencing the learner, and indirectly, by influencing the learning environment. Our current work suggests that gesture fulfills this potential. My book, Hearing Gesture, makes it clear that the spontaneous gestures we produce play a role in how we think and talk.