Functional near-infrared spectroscopy in toddlers: Neural differentiation of communicative cues and relation to future language abilities.
ABSTRACT: The toddler and preschool years are a time of significant development in both expressive and receptive communication abilities. However, little is known about the neurobiological underpinnings of language development during this period, likely due to the difficulty of acquiring functional neuroimaging data. Functional near-infrared spectroscopy (fNIRS) is a motion-tolerant neuroimaging technique that assesses cortical brain activity and can be used in very young children. Here, we use fNIRS during perception of communicative and noncommunicative speech and gestures in typically developing 2- and 3-year-olds (Study 1, n = 15 and n = 12, respectively) and in a sample of 2-year-olds with both fNIRS data collected at age 2 and language outcome data at age 3 (Study 2, n = 18). In Study 1, 2- and 3-year-olds differentiated between communicative and noncommunicative stimuli, as well as between speech and gestures, in the left lateral frontal region. However, 2-year-olds showed different patterns of activation from 3-year-olds in right medial frontal regions. In Study 2, which included two toddlers identified with early language delays along with 16 typically developing toddlers, neural differentiation of communicative stimuli in the right medial frontal region at age 2 predicted receptive language at age 3. Specifically, after accounting for variance related to verbal ability at age 2, increased neural activation for communicative gestures (vs. both communicative speech and noncommunicative gestures) at age 2 predicted higher receptive language scores at age 3. These results are discussed in the context of the underlying mechanisms of toddler language development and the use of fNIRS to predict language outcomes.
Project description: OBJECTIVE: We assessed various aspects of the speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. METHODS: We drew on the following data sources and methods to assess speech-language and communicative functions during pre-, peri-, and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests of vocabulary and grammar, spontaneous speech samples, and picture stories used to elicit narrative competence. RESULTS: Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT-typical regression) affecting all linguistic and socio-communicative sub-domains in both the receptive and the expressive modality. CONCLUSION: Future research should take into account the potentially considerable discordance between formal and functional language use and should interpret communicative acts with caution.
Project description: Vocabulary is a critical early marker of language development. The MacArthur-Bates Communicative Development Inventory has been adapted to dozens of languages and provides a bird's-eye view of children's early vocabularies that can be informative for both research and clinical purposes. We present an update to the American Sign Language Communicative Development Inventory (the ASL-CDI 2.0, https://www.aslcdi.org), a normed assessment of early ASL vocabulary that can be widely administered online by individuals with no formal training in sign language linguistics. The ASL-CDI 2.0 includes receptive and expressive vocabulary and a Gestures and Phrases section; it also introduces an online interface that presents ASL signs as videos. We validated the ASL-CDI 2.0 with expressive and receptive in-person tasks administered to a subset of participants. The norming sample presented here consists of 120 deaf children (ages 9 to 73 months) with deaf parents. We present an analysis of the measurement properties of the ASL-CDI 2.0. Vocabulary increases with age, as expected. We see an early noun bias that shifts with age, and a lag between receptive and expressive vocabulary. We present these findings with indications of how the ASL-CDI 2.0 may be used in a range of clinical and research settings.
Project description: Although the linguistic structure of speech provides valuable communicative information, nonverbal behaviors can offer additional, often disambiguating cues. In particular, being able to see the face and hand movements of a speaker facilitates language comprehension. But how does the brain derive meaningful information from these movements? Mouth movements provide information about phonological aspects of speech [2-3]. In contrast, cospeech gestures display semantic information relevant to the intended message [4-6]. We show that when language comprehension is accompanied by observable face movements, there is strong functional connectivity between areas of cortex involved in motor planning and production and posterior areas thought to mediate phonological aspects of speech perception. In contrast, language comprehension accompanied by cospeech gestures is associated with tuning of, and strong functional connectivity between, motor planning and production areas and anterior areas thought to mediate semantic aspects of language comprehension. These areas are not tuned to hand and arm movements that are not meaningful. The results suggest that when gestures accompany speech, the motor system works with language comprehension areas to determine the meaning of those gestures. They also suggest that the cortical networks underlying language comprehension, rather than being fixed, are dynamically organized by the type of contextual information available to listeners during face-to-face communication.
Project description: To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor-patient-act, is analogous to the subject-object-verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and that we exploit when constructing language anew.
Project description: We investigated gesture production in infants at high and low risk for autism spectrum disorder (ASD), along with caregiver responsiveness, between 12 and 24 months of age, and assessed the extent to which early gesture predicts later language and ASD outcomes. Participants included 55 high-risk infants (21 of whom later met criteria for ASD), 34 low-risk infants, and their caregivers. Results indicated that (a) infants with ASD outcomes used fewer gestures and a lower proportion of developmentally advanced gesture-speech combinations; (b) caregivers of all the infants provided similar rates of contingent responses to their infants' gestures; and (c) gesture production at 12 months predicted subsequent receptive language and ASD outcomes within the high-risk group.
Project description: Speech allows humans to communicate and to navigate the social world. By 12 months, infants recognize that speech elicits appropriate responses from others. However, it is unclear how infants process dynamic communicative scenes and how their processing abilities compare with those of adults. Do infants, like adults, process communicative events while the event is occurring, or only after being presented with the outcome? We examined 12-month-olds' and adults' eye movements as they watched a Communicator grasp one of two objects (the target). During the test event, the Communicator could no longer reach the objects, so she spoke or coughed to a Listener, who selected either object. Infants' and adults' patterns of looking to the actors and objects revealed that both groups immediately evaluated the Communicator's speech, but not her cough, as communicative and recognized that the Listener should select the target object only when the Communicator spoke. Furthermore, infants and adults shifted their attention between the actors and the objects in very similar ways. This suggests that 12-month-olds can quickly process communicative events as they occur, with adult-like accuracy. However, differences in looking reveal that 12-month-olds process these events more slowly than adults do. This early-developing processing ability may allow infants to learn language and acquire knowledge from communicative interactions.
Project description: The body orientation of a gesturer conveys social-communicative intention and may thus influence how gestures are perceived and comprehended together with auditory speech during face-to-face communication. To date, despite the emergence of neuroscientific literature on the role of body orientation in hand-action perception, few studies have directly investigated the role of body orientation in the interaction between gesture and language. To address this question, we carried out an electroencephalography (EEG) experiment presenting participants (n = 21) with 5-s videos of frontal and lateral communicative hand gestures (e.g., raising a hand), followed by visually presented sentences that were either congruent or incongruent with the gesture (e.g., "the mountain is high/low…"). Participants performed a semantic probe task, judging whether a target word was related or unrelated to the gesture-sentence event. The EEG results suggest that, during the perception phase of the hand gestures, both frontal and lateral gestures elicited a power decrease in the alpha (8-12 Hz) and beta (16-24 Hz) bands, but lateral gestures elicited a smaller power decrease in the beta band than frontal gestures, source-localized to the medial prefrontal cortex. For sentence comprehension, at the critical word whose meaning was congruent or incongruent with the gesture prime, frontal gestures elicited an N400 effect for gesture-sentence incongruency. More importantly, this incongruency effect was significantly reduced for lateral gestures. These findings suggest that body orientation plays an important role in gesture perception and that the social-communicative intention it conveys may influence gesture-language interaction at the semantic level.
Project description: Adult humans process communicative interactions by recognizing that information is being communicated through speech (linguistic ability) while simultaneously evaluating how to respond appropriately (social-pragmatic ability). These abilities may originate in infancy. Infants understand how speech communicates in social interactions, which helps them learn language and how to interact with others. Infants later diagnosed with autism spectrum disorder (ASD), who show deficits in social-pragmatic abilities, differ in how they attend to the linguistic and social-pragmatic information in their environment. Despite their interdependence, experimental measures of language and social-pragmatic attention are often studied in isolation in infancy. Thus, the extent to which language and social-pragmatic abilities are related constructs remains unknown. Understanding how related or separable language and social-pragmatic abilities are in infancy may reveal whether these abilities are supported by distinguishable developmental mechanisms. This study uses a single communicative scene to examine whether real-time linguistic and social-pragmatic attention are separable in neurotypical infants and infants later diagnosed with ASD, and whether attending to linguistic and social-pragmatic information separately predicts language and social-pragmatic abilities 1 year later. For neurotypical 12-month-olds and 12-month-olds later diagnosed with ASD, linguistic attention was not correlated with concurrent social-pragmatic attention. Furthermore, infants' real-time attention to the linguistic and social-pragmatic aspects of the scene at 12 months predicted and distinguished language and social-pragmatic abilities at 24 months. Language and social-pragmatic attention during communication are thus separable in infancy and may follow distinguishable developmental trajectories.
Project description: In everyday conversation, we are often challenged to communicate in non-ideal settings, such as in noise. In constrained settings, speakers overcome noise with increased speech intensity and larger mouth movements (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones playing varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation occurs only when gestures are not present, whereas kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters, the Lombard effect is not constrained to speech but is a multimodal phenomenon in which the visual channel carries most of the communicative burden.