Grammatical Subjects in home sign: Abstract linguistic structure in adult primary gesture systems without linguistic input.
ABSTRACT: Language ordinarily emerges in young children as a consequence of both linguistic experience (for example, exposure to a spoken or signed language) and innate abilities (for example, the ability to acquire certain types of language patterns). One way to discern which aspects of language acquisition are controlled by experience and which arise from innate factors is to remove or manipulate linguistic input. However, experimental manipulations that involve depriving a child of language input are impossible. The present work examines the communication systems resulting from natural situations of language deprivation and thus explores the inherent tendency of humans to build communication systems of particular kinds, without any conventional linguistic input. We examined the gesture systems that three isolated deaf Nicaraguans (ages 14-23 years) have developed for use with their hearing families. These deaf individuals have had no contact with any conventional language, spoken or signed. To communicate with their families, they have each developed a gestural communication system within the home called "home sign." Our analysis focused on whether these systems show evidence of the grammatical category of Subject. Subjects are widely considered to be universal to human languages. Using specially designed elicitation tasks, we show that home signers also demonstrate the universal characteristics of Subjects in their gesture productions, despite the fact that their communicative systems have developed without exposure to a conventional language. These findings indicate that abstract linguistic structure, particularly the grammatical category of Subject, can emerge in the gestural modality without linguistic input.
Project description: An eye-tracking experiment explored the gaze behavior of deaf individuals when perceiving language in spoken language only, in sign language only, and in sign-supported speech (SSS). Participants were deaf (n = 25) and hearing (n = 25) Spanish adolescents. The deaf students were either prelingually, profoundly deaf individuals with cochlear implants (CIs) in use by age 5 or earlier, or prelingually, profoundly deaf native signers with deaf parents. The effectiveness of SSS has rarely been tested for discourse-level comprehension within the same group of children. Here, video-recorded texts, including spatial descriptions, were transmitted alternately in spoken language, sign language, and SSS. We tested whether these communicative systems could raise comprehension in deaf participants to the level that hearing participants achieved with spoken language. Within-group analyses of the deaf participants tested whether the bimodal linguistic input of SSS favored discourse comprehension compared with the unimodal languages. Deaf participants with CIs achieved comprehension equal to hearing controls in all communicative systems, while deaf native signers without CIs achieved comprehension equal to hearing participants when tested in their native sign language. Comprehension of SSS was not better than comprehension of spoken language, even when spatial information was communicated. Eye movements of deaf and hearing participants were tracked, and dwell times spent looking at the face or body area of the sign model were analyzed. Within-group analyses focused on differences between native and non-native signers. Dwell times of hearing participants were equally distributed across the upper and lower areas of the face, whereas deaf participants looked mainly at the mouth area; this could enable information to be obtained from mouthings in sign language and from lip-reading in SSS and spoken language. Few fixations were directed toward the signs themselves, although these were more frequent when spatial language was transmitted. Both native and non-native signers looked mainly at the face when perceiving sign language, although non-native signers looked significantly more at the body than native signers did. This distribution of gaze fixations suggested that deaf individuals, particularly native signers, mainly perceived signs through peripheral vision.
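The dwell-time analysis lends itself to a simple worked example. The sketch below is illustrative only: the column names, area-of-interest (AOI) labels, and numbers are assumptions, not the study's data or pipeline. It shows one way to compute each participant's share of total dwell time per AOI:

```python
# Minimal sketch (assumed data, not the study's): share of total dwell time
# spent on each area of interest (AOI), computed per participant.
import pandas as pd

fixations = pd.DataFrame({
    "participant": ["d01", "d01", "d01", "h01", "h01", "h01"],
    "aoi":         ["mouth", "eyes", "body", "mouth", "eyes", "body"],
    "dwell_ms":    [5200, 900, 400, 2600, 2900, 500],
})

# Sum dwell time per participant and AOI, then normalize within participant.
totals = fixations.groupby(["participant", "aoi"])["dwell_ms"].sum()
proportions = totals / totals.groupby("participant").transform("sum")
print(proportions.unstack("aoi").round(2))
```

With these made-up numbers, the deaf participant's dwell time concentrates on the mouth AOI while the hearing participant's is split across the face, mirroring the pattern the study reports.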
Project description: All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners' hearing communication partners displayed some, but not all, of the homesigners' linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners' gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners' linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input.
Project description: Studies testing linguistic laws outside language have provided important insights into the organization of biological systems. For example, patterns consistent with Zipf's law of abbreviation (which predicts a negative relationship between word length and frequency of use) have been found in the vocal and non-vocal behaviour of a range of animals, and patterns consistent with Menzerath's law (according to which longer sequences are made up of shorter constituents) have been found in primate vocal sequences, and in genes, proteins, and genomes. Both laws have been linked to compression, the information-theoretic principle of minimizing code length. Here, we present the first test of these laws in animal gestural communication. Across the repertoire as a whole, we did not find the negative relationship between gesture duration and frequency of use predicted by Zipf's law of abbreviation, but this relationship was seen in specific subsets of the repertoire. Furthermore, one subset of gestures, whole-body signals, showed the opposite pattern to the one predicted. We found a negative correlation between the number and mean duration of gestures in sequences, in line with Menzerath's law. These results provide the first evidence that compression underpins animal gestural communication, and they highlight an important commonality between primate gesturing and language.
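Both laws reduce to a sign prediction on a simple correlation, which the following sketch illustrates. The numbers and variable names are hypothetical, not the study's data; this is one plausible way to run the tests, not the authors' analysis:

```python
# Illustrative sketch (made-up numbers) of the two correlation tests.
# Zipf's law of abbreviation predicts a negative rank correlation between how
# often a gesture type is used and how long it lasts; Menzerath's law predicts
# a negative correlation between sequence length (number of gestures) and the
# mean duration of the gestures in that sequence.
from scipy.stats import spearmanr

# One entry per gesture type: frequency of use and mean duration (seconds).
freq     = [120, 85, 60, 33, 12, 5]
duration = [0.8, 1.1, 0.9, 1.6, 2.0, 2.4]
rho, p = spearmanr(freq, duration)
print(f"Zipf's law of abbreviation: rho = {rho:.2f}, p = {p:.3f}")  # expect rho < 0

# One entry per sequence: number of gestures and mean gesture duration (seconds).
seq_len       = [1, 2, 3, 4, 5, 6]
mean_duration = [2.1, 1.8, 1.5, 1.4, 1.2, 1.1]
rho, p = spearmanr(seq_len, mean_duration)
print(f"Menzerath's law: rho = {rho:.2f}, p = {p:.3f}")  # expect rho < 0
```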
Project description: Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by the observation of meaningless oral and manual actions. In this study, we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic knowledge, irrespective of differences in language form. We used a novel cross-linguistic approach in two groups of volunteers who differed in their language experience. Using fMRI, we compared deaf native signers of British Sign Language (BSL) who were also proficient speechreaders of English (i.e., two languages) with hearing people who could speechread English but knew no BSL (i.e., one language). Both groups were presented with BSL signs and silently spoken English words, and were required to respond to a signed or spoken target. The interaction of group and condition revealed bilateral activation in the superior temporal cortex, focused on the posterior superior temporal gyri (pSTG, BA 42/22). In hearing people, these regions were activated more by speech than by sign, but in deaf respondents they showed similar levels of activation for both language forms, suggesting that posterior superior temporal regions are highly sensitive to language knowledge irrespective of the mode of delivery of the stimulus material.
Project description: During everyday social interaction, gestures are a fundamental part of human communication. The communicative-pragmatic role of hand gestures and their interaction with spoken language has been documented from the earliest stage of language development, in which two types of indexical gesture are most prominent: the pointing gesture for directing attention to objects and the give-me gesture for making requests. Here we study, in adult human participants, the neurophysiological signatures of gestural-linguistic acts communicating the pragmatic intentions of naming and requesting, by simultaneously presenting written words and gestures. Already at ~150 ms, brain responses diverged between naming and request actions expressed by word-gesture combinations, whereas the same gestures presented in isolation elicited their earliest neurophysiological dissociations significantly later (at ~210 ms). Request-evoked brain activity was enhanced early compared with naming, driven by sources in the frontocentral cortex, consistent with access to action knowledge in request understanding. In addition, an enhanced N400-like response indicated late semantic integration of the gesture-language interaction. The present study demonstrates that word-gesture combinations used to express communicative-pragmatic intentions speed up the brain correlates of comprehension processes, compared with gesture-only understanding, thereby calling into question current serial linguistic models that place pragmatic function decoding at the end of a language comprehension cascade. Instead, information about the social-interactive role of communicative acts is processed instantaneously.
Project description: Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.
Project description: A universally acknowledged, core property of language is its complexity at each level of structure: sounds, words, phrases, clauses, utterances, and higher levels of discourse. How does this complexity originate and develop in a language? We cannot fully answer this question from spoken languages, since they are all thousands of years old or descended from old languages. However, sign languages of deaf communities can arise at any time and provide empirical data for testing hypotheses related to the emergence of language complexity. An added advantage of the signed modality is the correspondence between visible physical articulations and linguistic structures, which provides a more transparent view of linguistic complexity and its emergence (Sandler, 2012). These essential characteristics of sign languages allow us to address the issue of emerging complexity by documenting the use of the body for linguistic purposes. We look at three types of discourse relations of increasing complexity, motivated by research on spoken languages: additive, symmetric, and asymmetric (Mann and Thompson, 1988; Sanders et al., 1992). Each relation type can connect units at two different levels: within propositions (simpler) and across propositions (more complex). We hypothesized that these relations provide a measure for charting the time course of the emergence of complexity, from simplest to most complex, in a new sign language. We test this hypothesis on Israeli Sign Language (ISL), a young language some of whose earliest users are still available for recording. Taking advantage of the unique relation in sign languages between bodily articulations and linguistic form, we study fifteen ISL signers from three generations and demonstrate that the predictions indeed hold. We also find that younger signers tend to converge on more systematic marking of relations, that they use fewer articulators for a given linguistic function than older signers do, and that the form of articulations becomes reduced as the language matures. Mapping discourse relations to the bodily expression of linguistic components across age groups reveals how simpler, less constrained, and more gesture-like expressions become language.
Project description: Theories of the early learning of nouns in children's vocabularies divide into those that emphasize input (linguistic and non-linguistic aspects) and those that emphasize the child's conceptualisation. Most data, though, come from production alone, on the assumption that learning a word equals speaking it. Methodological issues can make production and comprehension data incomparable within or across input languages. Early vocabulary production and comprehension were examined in children hearing two Eastern Bantu languages whose grammatical features may encourage early verb knowledge. Parents of 208 infants aged 8-20 months were interviewed using Communicative Development Inventories (CDIs) that assess infants' first spoken and comprehended words. Raw totals, and proportions of the chances to know a word, were compared to data from other languages. First spoken words were mainly nouns (75-95% were nouns versus less than 10% verbs), but first comprehended words included more verbs (15% were verbs) than spoken words did. The proportion of children's spoken words that were verbs increased with vocabulary size, but the proportion of comprehended words did not. Significant differences were found between children's comprehension and production but not between the languages. This may be for pragmatic reasons, rather than because of the concepts with which children approach language learning or a direct effect of the input language.
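The "proportion of chances" measure corrects for the fact that a CDI checklist offers far more nouns than verbs. A toy calculation with assumed counts (not the study's data) makes the difference from a raw share concrete:

```python
# Toy illustration (assumed counts): normalizing known words by the number of
# items of each class the CDI checklist offers, rather than comparing raw shares.
nouns_on_form, verbs_on_form = 300, 100   # items available on the checklist
nouns_known, verbs_known = 45, 6          # items the parent checked as known

print(f"raw share of verbs: {verbs_known / (nouns_known + verbs_known):.0%}")  # 12%
print(f"proportion of noun chances: {nouns_known / nouns_on_form:.0%}")        # 15%
print(f"proportion of verb chances: {verbs_known / verbs_on_form:.0%}")        # 6%
```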
Project description: Gestures are an important part of interpersonal communication, for example by illustrating physical properties of speech content (e.g., "the ball is round"). The meaning of these so-called iconic gestures is strongly intertwined with speech. We investigated the neural correlates of the semantic integration of verbal and gestural information. Participants watched short videos of five speech-and-gesture conditions performed by an actor, including variation of language (familiar German vs. unfamiliar Russian) and variation of gesture (iconic vs. unrelated), as well as isolated familiar language, while brain activation was measured using functional magnetic resonance imaging. For familiar speech with either gesture type contrasted against Russian speech-gesture pairs, activation increases were observed at the left temporo-occipital junction. Apart from this shared location, speech with iconic gestures exclusively engaged left occipital areas, whereas speech with unrelated gestures activated bilateral parietal and posterior temporal regions. Our results demonstrate that the processing of speech with speech-related versus speech-unrelated gestures occurs in two distinct but partly overlapping networks. The distinct processing streams (visual versus linguistic/spatial) are interpreted in terms of "auxiliary systems" allowing the integration of speech and gesture in the left temporo-occipital region.
Project description: Speakers around the globe gesture when they talk, and young children are no exception. In fact, children's first foray into communication tends to be through their hands rather than their mouths. There is now good evidence that children typically express ideas in gesture before they express the same ideas in speech. Moreover, the age at which these ideas are expressed in gesture predicts the age at which the same ideas are first expressed in speech. Gesture thus not only precedes, but also predicts, the onset of linguistic milestones. These facts set the stage for using gesture in two ways in children who are at risk for language delay. First, gesture can be used to identify individuals who are not producing gesture in a timely fashion, and can thus serve as a diagnostic tool for pinpointing subsequent difficulties with spoken language. Second, gesture can facilitate learning, including word learning, and can thus serve as a tool for intervention, one that can be implemented even before a delay in spoken language is detected.