Establishing a mental lexicon with cochlear implants: an ERP study with young children.
ABSTRACT: In the present study we explore the implications of acquiring language when relying mainly or exclusively on input from a cochlear implant (CI), a device providing auditory input to otherwise deaf individuals. We focus on the time course of semantic learning in children during their second year of implant use, a period corresponding to the auditory age at which the vocabulary of normal-hearing children emerges and expands dramatically. Thirty-two young bilaterally implanted children saw pictures paired with either matching or non-matching auditory words. Their electroencephalographic responses were recorded after 12, 18, and 24 months of implant use, revealing a clear dichotomy: some children failed to show semantic processing throughout their second year of CI use, in line with their poor language outcomes. The majority of children, however, demonstrated semantic processing in the form of the so-called N400 effect after as little as 12 months of implant use, even when their language experience relied exclusively on the implant. This is slightly earlier than observed for normal-hearing children of the same auditory age, suggesting that more mature cognitive faculties at the onset of language acquisition lead to faster semantic learning.
Project description:Difficulties in auditory and phonological processing affect semantic processing in speech comprehension for deaf and hard-of-hearing (DHH) children. However, little is known about brain responses related to semantic processing in this group. We investigated event-related potentials (ERPs) in DHH children with cochlear implants (CIs) and/or hearing aids (HAs), and in normally hearing controls (NH). We used a semantic priming task with spoken word primes followed by picture targets. In both DHH children and controls, cortical response differences between matching and mismatching targets revealed a typical N400 effect associated with semantic processing. Children with CI had the largest mismatch response despite poor semantic abilities overall. They also had the largest ERP differentiation between mismatch types, with small effects in within-category mismatch trials (target from the same category as the prime) and large effects in between-category mismatch trials (target from a different category than the prime), compared to matching trials. Children with NH and HA had similar responses to both mismatch types. While the large and differentiated ERP responses in the CI group were unexpected and should be interpreted with caution, the results could reflect less precision in semantic processing among children with CI, or a stronger reliance on predictive processing.
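The N400 effect reported in both studies above is computed as a difference wave: the per-condition trial-averaged ERP for mismatching targets minus the ERP for matching targets, typically quantified in a 300-500 ms post-stimulus window. The following sketch illustrates that computation on simulated single-channel data; the sampling rate, trial counts, effect amplitude, and noise level are illustrative assumptions, not parameters from either study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG epochs (trials x samples) for one centro-parietal channel,
# sampled at 250 Hz over the first second after target onset.
sfreq, n_samples = 250, 250
times = np.arange(n_samples) / sfreq  # seconds after target onset

def simulate_epochs(n_trials, n400_amp):
    """Noisy trials plus a deflection peaking near 400 ms (amplitude in uV)."""
    n400 = n400_amp * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0.0, 5.0, size=(n_trials, n_samples))
    return noise + n400

match_epochs = simulate_epochs(60, n400_amp=0.0)      # semantically matching targets
mismatch_epochs = simulate_epochs(60, n400_amp=-8.0)  # mismatching targets elicit an N400

# ERPs are the per-condition trial averages; the N400 effect is the
# mismatch-minus-match difference wave in the 300-500 ms window.
erp_match = match_epochs.mean(axis=0)
erp_mismatch = mismatch_epochs.mean(axis=0)
difference_wave = erp_mismatch - erp_match

window = (times >= 0.3) & (times <= 0.5)
n400_effect = difference_wave[window].mean()
print(f"mean N400 effect (300-500 ms): {n400_effect:.1f} uV")
```

With these simulated parameters the difference wave shows the expected negativity in the analysis window; on real data the same subtraction would be applied per subject after artifact rejection and baseline correction.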
Project description:Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, the Parents' Evaluation of Aural/Oral Performance of Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed.
Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and may also enhance these skills by improving language abilities in children who are hard of hearing. A greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.
Project description:Central auditory pathway maturation in children depends on auditory sensory stimulation. The objective of the present study was to monitor the cortical maturation of children with cochlear implants using electrophysiological and auditory skills measurements. The study was longitudinal and comprised 30 subjects: 15 (8 girls and 7 boys) had a cochlear implant, with a mean age at activation of 36.4 months (minimum, 17 months; maximum, 66 months), and 15 were normal-hearing children matched for gender and chronological age. The auditory and speech skills of the children with cochlear implants were evaluated using the GASP, IT-MAIS and MUSS measures. Both groups underwent electrophysiological evaluation using long-latency auditory evoked potentials. Each child was evaluated at three and nine months after cochlear implant activation, with the same time interval adopted for the hearing children. The results showed improvements in auditory and speech skills as measured by the IT-MAIS and MUSS. Similarly, the long-latency auditory evoked potential evaluation revealed a decrease in P1 component latency; however, the latency remained significantly longer than that of the hearing children, even after nine months of cochlear implant use. A shorter P1 latency corresponded to more evident development of auditory skills. Regarding auditory behavior, children who mastered the auditory skill of discrimination showed better results in the other evaluations, both behavioral and electrophysiological, than those who had mastered only speech detection. Therefore, cochlear implant auditory stimulation facilitated auditory pathway maturation, which decreased the latency of the P1 component and advanced the development of auditory and speech skills.
The analysis of the long-latency auditory evoked potentials revealed that the P1 component was an important biomarker of auditory development during the rehabilitation process.
Project description:Bilateral hearing in early development protects auditory cortices from reorganizing to prefer the better ear. Yet, such protection could be disrupted by mismatched bilateral input in children with asymmetric hearing who require electric stimulation of the auditory nerve from a cochlear implant in their deaf ear and amplified acoustic sound from a hearing aid in their better ear (bimodal hearing). Cortical responses to bimodal stimulation were measured by electroencephalography in 34 bimodal users and 16 age-matched peers with normal hearing, and compared with the same measures previously reported for 28 age-matched bilateral implant users. Both auditory cortices increasingly favoured the better ear with delay to implanting the deaf ear; the time course mirrored that occurring with delay to bilateral implantation in unilateral implant users. Preference for the implanted ear tended to occur with ongoing implant use when hearing was poor in the non-implanted ear. Speech perception deteriorated with longer deprivation and poorer access to high-frequencies. Thus, cortical preference develops in children with asymmetric hearing but can be avoided by early provision of balanced bimodal stimulation. Although electric and acoustic stimulation differ, these inputs can work sympathetically when used bilaterally given sufficient hearing in the non-implanted ear.
Project description:Children with severe hearing loss most likely receive the greatest benefit from a cochlear implant (CI) when implanted at less than 2 years of age. Children with a hearing loss may also benefit more from binaural sensory stimulation. Four children who received their first CI under 12 months of age were included in this study. Effects on auditory development were determined using the German LittlEARS Auditory Questionnaire, closed- and open-set monosyllabic word tests in the aided free field, the Mainzer and Göttinger speech discrimination tests, the Monosyllabic-Trochee-Polysyllabic (MTP) test, and the Listening Progress Profile (LiP). Speech production and grammar development were evaluated using a German speech development test (SETK), a reception of grammar test (TROG-D) and an active vocabulary test (AWST-R). The data showed that children implanted under 12 months of age reached open-set monosyllabic word discrimination at an age of 24 months. LiP results improved over time, and children recognized 100% of words in the MTP test after 12 months. All children performed as well as or better than their hearing peers in speech production and grammar development. The SETK showed that the speech development of these children was in general age appropriate. The data suggest that early hearing loss intervention benefits speech and language development and support the trend towards early cochlear implantation. Furthermore, the data emphasize the potential benefits associated with bilateral implantation.
Project description:Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. 
The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology.
Project description:We identified studies that described use of any patient-reported outcome scale for hearing loss or tinnitus among children and adolescents and young adults (AYAs) with cancer or hematopoietic stem cell transplantation (HSCT) recipients. In this systematic review, we performed electronic searches of OvidSP MEDLINE, EMBASE, and PsycINFO to August 2015. We included studies if they used any patient-reported scale of hearing loss or tinnitus among children and AYAs with cancer or HSCT recipients. Only English language publications were included. Two reviewers identified studies and abstracted data. There were 953 studies screened; 6 met eligibility criteria. All studies administered hearing patient-reported outcomes only once, after therapy completion. None of the studies described the psychometric properties of the hearing-specific component. Three instruments (among 6 studies) were used: the Health Utilities Index (Barr et al., 2000; Fu et al., 2006; Kennedy et al., 2014), the Hearing Measurement Scales (Einar-Jon et al., 2011; Einarsson et al., 2011), and the Tinnitus Questionnaire for Auditory Brainstem Implant (Soussi & Otto, 1994). All had limitations, precluding routine use for hearing assessment in this population. We identified few studies that included hearing patient-reported measures for children and AYA cancer and HSCT patients. None are ideal to take forward into future studies. Future work should focus on the creation of a new psychometrically sound instrument for hearing outcomes in this population.
Project description:Congenital deafness causes large changes in auditory cortex structure and function, such that without early childhood cochlear implantation, profoundly deaf children do not develop intact, high-level auditory functions. But how is auditory cortex organization affected by congenital, prelingual, and long-standing deafness? Does the large-scale topographical organization of the auditory cortex develop in people deaf from birth? And is it retained despite cross-modal plasticity? Using fMRI, we identified a topographic, tonotopy-based functional connectivity (FC) structure in humans in the core auditory cortex, in its extending tonotopic gradients in the belt, and even beyond. These regions show a similar FC structure in the congenitally deaf throughout the auditory cortex, including in the language areas. The topographic FC pattern can be identified reliably in the vast majority of the deaf, at the single-subject level, despite the absence of hearing-aid use and poor oral language skills. These findings suggest that large-scale tonotopy-based FC does not require sensory experience to develop, and is retained despite life-long auditory deprivation and cross-modal plasticity. Furthermore, as the topographic FC is retained to varying degrees among deaf subjects, it may serve to predict the potential for auditory rehabilitation using cochlear implants in individual subjects.
Project description:BACKGROUND:Cochlear implants can provide auditory perception to many people with hearing impairment who derive insufficient benefits from hearing aid use. For optimal speech perception with a cochlear implant, postoperative auditory training is necessary to adapt the brain to the new sound transmitted by the implant. Currently, this training is usually conducted via face-to-face sessions in rehabilitation centers. With the aging of society, the prevalence of age-related hearing loss and the number of adults with cochlear implants are expected to increase. Therefore, augmenting face-to-face rehabilitation with alternative forms of auditory training may be highly valuable. OBJECTIVE:The purpose of this multidisciplinary study was to evaluate the newly developed internet-based teletherapeutic multimodal system Train2hear, which enables adult cochlear implant users to perform well-structured and therapist-guided hearing rehabilitation sessions on their own. METHODS:The study was conducted in 3 phases: (1) we searched databases from January 2005 to October 2018 for auditory training programs suitable for adult cochlear implant users; (2) we developed a prototype of Train2hear based on speech and language development theories; and (3) 18 cochlear implant users (mean age 61, SD 15.4 years) and 10 speech and language therapists (mean age 34, SD 10.9 years) assessed the usability and feasibility of the prototype. This was achieved via questionnaires, including the System Usability Scale (SUS) and a short version of the Intrinsic Motivation Inventory (KIM). RESULTS:The key components of the Train2hear training program are an initial analysis according to the International Classification of Functioning, Disability and Health; a range of different hierarchically based exercises; and an automatic and dynamic adaptation of the different tasks according to the cochlear implant user's progress.
In addition to motivational mechanisms (such as supportive feedback), the cochlear implant user and therapist receive feedback in the form of comprehensive statistical analysis. In general, cochlear implant users enjoyed their training as assessed by KIM scores (mean 19, SD 2.9, maximum 21). In terms of usability (scale 0-100), the majority of users rated the Train2hear program as excellent (mean 88, SD 10.5). Age (P=.007) and sex (P=.01) had a significant impact on the SUS score with regard to usability of the program. The therapists (SUS score mean 93, SD 9.2) provided slightly more positive feedback than the cochlear implant users (mean 85, SD 10.3). CONCLUSIONS:Based on this first evaluation, Train2hear was well accepted by both cochlear implant users and therapists. Computer-based auditory training might be a promising cost-effective option that can provide a highly personalized rehabilitation program suited to individual cochlear implant user characteristics.
Project description:The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative role of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included-deaf signers, oral deaf and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex re-organization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups is discussed.