Rhyme Awareness in Children With Normal Hearing and Children With Cochlear Implants: An Exploratory Study.
ABSTRACT: Phonological awareness is a critical component of phonological processing that predicts children's literacy outcomes. Phonological awareness skills enable children to think about the sound structure of words and facilitate decoding and the analysis of words during spelling. Past research has shown that children's vocabulary knowledge and working memory capacity are associated with their phonological awareness skills. Linguistic characteristics of words, such as phonological neighborhood density and orthographic congruency, have also been found to influence children's performance in phonological awareness tasks. Literacy is a difficult area for deaf and hard of hearing children, who often have poor phonological awareness skills. Although cochlear implants (CIs) have been found to improve these children's speech and language outcomes, limited research has investigated phonological awareness in children with CIs. Rhyme awareness is the first level of phonological awareness to develop in children with normal hearing (NH). The current study investigates whether rhyme awareness in children with NH (n = 15, median age = 5;5 [years;months], IQR = 11 months) and a small group of children with CIs (n = 6, median age = 6;11.5, IQR = 3.75 months) is associated with individual differences in vocabulary and working memory. Using a rhyme oddity task well controlled for perceptual similarity, we also explored whether children's performance was associated with linguistic characteristics of the task items (e.g., rhyme neighborhood density, orthographic congruency). Results indicate that vocabulary and working memory are both associated with performance on a rhyme awareness task in NH children. Only working memory was correlated with rhyme awareness performance in CI children. Linguistic characteristics of the task items, on the other hand, were not found to be associated with success. Implications of the results and future directions are discussed.
Project description:Purpose Better auditory prostheses and earlier interventions have led to remarkable improvements in spoken language abilities for children with hearing loss (HL), but these children often still struggle academically. This study tested a hypothesis for why this may be, proposing that the language of school becomes increasingly disconnected from everyday discourse, requiring greater reliance on bottom-up phonological structure, and children with HL have difficulty recovering that structure from the speech signal. Participants One hundred nineteen fourth graders participated: 48 with normal hearing (NH), 19 with moderate losses who used hearing aids (HAs), and 52 with severe-to-profound losses who used cochlear implants (CIs). Method Three analyses were conducted. <b>#1:</b> Sentences with malapropisms were created, and children's abilities to recognize them were assessed. <b>#2:</b> Factors contributing to those abilities were evaluated, including phonological awareness, phonological processing, vocabulary, verbal working memory, and oral narratives. <b>#3:</b> Teachers' ratings of students' academic competence were obtained, and factors accounting for those ratings were evaluated, including the five listed above, along with word reading and reading comprehension. Results <b>#1:</b> Children with HAs and CIs performed more poorly on malapropism recognition than children with NH, but similarly to each other. <b>#2:</b> All children with HL demonstrated large phonological deficits, but they were especially large for children with CIs. Phonological awareness explained the most variance in malapropism recognition for children with CIs. Vocabulary knowledge explained malapropism recognition for children with NH or HAs, but other factors also contributed. 
<b>#3:</b> Teachers rated academic competence for children with CIs more poorly than for children with NH or HAs, and variance in those ratings for children with CIs was primarily explained by malapropism scores. Conclusion Children with HL have difficulty recognizing acoustic-phonetic detail in the speech signal, and that constrains their abilities to follow conversations in academic settings, especially if HL is severe enough to require CIs. Supplemental Material https://doi.org/10.23641/asha.13133018.
Project description:This study investigated the etiology of individual differences in Chinese language and reading skills in 312 typically developing Chinese twin pairs aged from 3 to 11 years (228 pairs of monozygotic twins and 84 pairs of dizygotic twins; 166 male pairs and 146 female pairs). Children were individually given tasks of Chinese word reading, receptive vocabulary, phonological memory, tone awareness, syllable and rhyme awareness, rapid automatized naming, morphological awareness and orthographic skills, and Raven's Coloured Progressive Matrices. All analyses controlled for the effects of age. There were moderate to substantial genetic influences on word reading, tone awareness, phonological memory, morphological awareness and rapid automatized naming (estimates ranged from .42 to .73), while shared environment exerted moderate to strong effects on receptive vocabulary, syllable and rhyme awareness and orthographic skills (estimates ranged from .35 to .63). Results were largely unchanged when scores were adjusted for nonverbal reasoning as well as age. Findings of this study are mostly similar to those found for English, a language with very different characteristics, and suggest the universality of genetic and environmental influences across languages.
Project description:Purpose:We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs. Method:Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5-12 years. Groups were matched on age, expressive and receptive vocabulary, articulation, and nonverbal working memory. Participants repeated monosyllabic words and nonwords in noise. Stimuli varied on age of acquisition, lexical frequency, and phonotactic probability. Performance in each condition was measured by the signal-to-noise ratio at which the child could accurately repeat 50% of the stimuli. Results:Children from both groups with larger vocabularies showed better performance than children with smaller vocabularies on nonwords and late-acquired words but not early-acquired words. Overall, children with HAs showed poorer performance than children with NH. Auditory access was not associated with speech perception for the children with HAs. Conclusions:Children with HAs show deficits in sensitivity to phonological structure but appear to take advantage of vocabulary skills to support speech perception in the same way as children with NH. Further investigation is needed to understand the causes of the gap that exists between the overall speech perception abilities of children with HAs and children with NH.
Project description:OBJECTIVES:Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings of the original study, which investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. DESIGN:Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary Test-4th Edition and Expressive Vocabulary Test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). RESULTS:Consistent with the findings reported in the original study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary Test-4th Edition using language quotients to control for age effects.
However, children who scored higher on the Expressive Vocabulary Test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. CONCLUSIONS:First, we successfully replicated the major findings from the original study. Because familiarity, phonological distinctiveness, and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.
Project description:Speech-language input from adult caregivers is a strong predictor of children's developmental outcomes. But the properties of this child-directed speech are not static over the first months or years of a child's life. This study assesses a large cohort of children and caregivers (<i>n</i> = 84) at 7, 10, 18, and 24 months to document (1) how a battery of phonetic, phonological, and lexical characteristics of child-directed speech changes in the first 2 years of life and (2) how input at these different stages predicts toddlers' phonological processing and vocabulary size at 2 years. Results show that most measures of child-directed speech do change as children age, and certain characteristics, like hyperarticulation, actually peak at 24 months. For language outcomes, children's phonological processing benefited from exposure to longer (in phonemes) words, more diverse word types, and enhanced coarticulation in their input. It is proposed that longer words in the input may stimulate children's phonological working memory development, while heightened coarticulation simultaneously introduces important sublexical cues and exposes them to challenging, naturalistic speech, leading to overall stronger phonological processing outcomes.
Project description:Purpose This study examined how lexical representations and intervention intensity affect phonological acquisition and generalization in children with speech sound disorders. Method Using a single-subject multiple baseline design, 24 children with speech sound disorders (3;6 to 6;10 [years;months]) were assigned across 3 word lexicality conditions targeting word-initial complex singleton phonemes: /? l ? ?/. Specifically, academic vocabulary words, nonwords (NWs), and high-frequency (HF) words were contrasted. Intervention intensity was examined by comparing the performance of 12 children who completed eleven 50-min sessions (4 children/word type) to the performance of 12 who completed 19 sessions (4 children/word type). Children's production accuracy of their treated phonemes and overall percent consonants correct values were used to measure phonological generalization via percentage accuracy scores and d scores. Results All word lexicality conditions elicited phonological change, suggesting that academic vocabulary words, NWs, and HF words are viable intervention targets. Group means were similarly high for the NWs and HF words, although children in the NW condition demonstrated more consistent phonological gains. Children who received 19 intervention sessions achieved 6 times greater gains in treated sound accuracy than did children who received 11 sessions. Conclusions Word lexicality did not significantly influence children's intervention outcomes. More intensive intervention, as characterized by the number of sessions, resulted in greater phonological change than did a shorter intervention program. Intervention intensity outcomes should be considered when establishing best practices for speech intervention scheduling. Supplemental Material https://doi.org/10.23641/asha.7336055.
Project description:Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
Project description:The present study evaluated the efficacy of a new preschool early literacy intervention created specifically for deaf and hard-of-hearing (DHH) children with functional hearing. Teachers implemented Foundations for Literacy with 25 DHH children in 2 schools (intervention group). One school used only spoken language, and the other used sign with and without spoken language. A "business as usual" comparison group included 33 DHH children who were matched on key characteristics with the intervention children but attended schools that did not implement Foundations for Literacy. Children's hearing losses ranged from moderate to profound. Approximately half of the children had cochlear implants. All children had sufficient speech perception skills to identify referents of spoken words from closed sets of items. Teachers taught small groups of intervention children an hour a day, 4 days a week for the school year. From fall to spring, intervention children made significantly greater gains on tests of phonological awareness, letter-sound knowledge, and expressive vocabulary than did comparison children. In addition, intervention children showed significant increases in standard scores (based on hearing norms) on phonological awareness and vocabulary tests. This quasi-experimental study suggests that the intervention shows promise for improving early literacy skills of DHH children with functional hearing.
Project description:Purpose This study examines the influence of lexical and phonological factors on expressive lexicon size in 40 French-speaking children tested longitudinally from 22 to 48 months. The factors include those based on the lexical and phonological properties of words in the children's lexicons (phonetic complexity, word length, neighborhood density [ND], and word frequency [WF]) as well as variables measuring phonological production (percent consonants correct and phonetic inventory size). Specifically, we investigate the relative influence of these factors at individual ages, namely, 22, 29, 36, and 48 months, and which factors measured at 22 and 29 months influence lexicon size at 36 and 48 months. Method Children were selected based on parent-reported vocabulary size. We included children with low, medium, and high vocabulary scores. The children's lexicons were coded in terms of phonetic complexity, word length, ND, and WF, and their phonological production skills were based on measures of percent consonants correct and phonetic inventory size extracted from spontaneous speech samples at 29, 36, and 48 months. In the case of ND and WF, we focused on one- and two-syllable nouns. Results Across the age range, the most important factor that explained variance in lexicon size was the WF of nouns. Children who selected low-frequency nouns had larger vocabularies across all ages (22-48 months). The WF of two-syllable nouns and phonological production measured at 29 months influenced lexicon size at 36 months, whereas the WF (of one- and two-syllable words) influenced lexicon size at 48 months. Conclusions The findings support the role of WF and phonological production in explaining expressive vocabulary development. Children enlarge their vocabularies by adding nouns of increasingly lower frequency. Phonological production plays a role in accounting for vocabulary size up until the age of 36 months. 
Supplemental Material https://doi.org/10.23641/asha.12291074.
Project description:Better understanding the mechanisms underlying developing literacy has promoted the development of more effective reading interventions for typically developing children. Such knowledge may facilitate effective instruction of deaf and hard-of-hearing (DHH) children. Hence, the current study examined the multivariate associations among phonological awareness, alphabetic knowledge, word reading, and vocabulary skills in DHH children who have auditory access to speech. One hundred and sixty-seven DHH children (M age = 60.43 months) were assessed with a battery of early literacy measures. Forty-six percent used at least 1 cochlear implant; 54% were fitted with hearing aids. About a fourth of the sample was acquiring both spoken English and sign. Scores on standardized tests of phonological awareness and vocabulary averaged at least 1 standard deviation (SD) below the mean of the hearing norming sample. Confirmatory factor analyses showed that DHH children's early literacy skills were best characterized by a complex 3-factor model in which phonological awareness, alphabetic knowledge, and vocabulary formed 3 separate, but highly correlated constructs, with letter-sound knowledge and word reading skills relating to both phonological awareness and alphabetic knowledge. This supports the hypothesis that early reading of DHH children with functional hearing is qualitatively similar to that of hearing children.