Project description: The term camouflaging describes behaviors that cover up neurodivergent difficulties. While researched in autism, camouflaging has received no systematic study in other conditions affecting communication, including developmental language disorder (DLD). This study explored camouflaging in DLD, drawing on the experience and expertise of speech and language pathologists and parents of children with DLD. Using a qualitative descriptive design, we interviewed six speech and language pathologists and six parents of children with DLD. The inductive thematic analysis considered three broad topic areas: the camouflaging behaviors children with DLD use, the impacts of camouflaging, and the factors associated with camouflaging. Camouflaging took a range of forms, with eight common presentations identified. Camouflaging reportedly delayed recognition of children's language needs and affected interventions. Camouflaging also reportedly affected children's exhaustion, mental health, self-esteem, personality, friendships, and how others view them. Research characterizing camouflaging in DLD could help reduce the underdetection of children's language needs.
Project description: Automated speech and language analysis (ASLA) is a promising approach for capturing early markers of neurodegenerative diseases. However, its potential remains underexploited in research and translational settings, partly due to the lack of a unified tool for data collection, encryption, processing, download, and visualization. Here we introduce the Toolkit to Examine Lifelike Language (TELL) v.1.0.0, a web-based app designed to bridge this gap. First, we outline general aspects of its development. Second, we list the steps to access and use the app. Third, we specify its data collection protocol, including a linguistic profile survey and 11 audio recording tasks. Fourth, we describe the outputs the app generates for researchers (downloadable files) and for clinicians (real-time metrics). Fifth, we survey published findings obtained through its tasks and metrics. Sixth, we refer to TELL's current limitations and prospects for expansion. Overall, with its current and planned features, TELL aims to facilitate ASLA for research and clinical aims in the neurodegeneration arena. A demo version can be accessed here: https://demo.sci.tellapp.org/
Project description: Disorders of speech and language arise out of a complex interaction of genetic, environmental, and neural factors. Little is understood about the neural bases of these disorders. Here we systematically reviewed neuroimaging findings in speech disorders (SD) and language disorders (LD) over the last five years (2008-2013; 10 articles). In participants with SD, structural and functional anomalies in the left supramarginal gyrus suggest a possible deficit in sensory feedback or integration. In LD, cortical and subcortical anomalies were reported in a widespread language network, with little consistency across studies except in the superior temporal gyri. In summary, both functional and structural anomalies are associated with LD and SD, including greater activity and volumes relative to controls. The variability in neuroimaging approach and heterogeneity within and across participant samples restrict our full understanding of the neurobiology of these conditions, reducing the potential for devising novel interventions targeted at the underlying pathology.
Project description: Objective: Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise underlying mechanisms behind this connection remain to be elucidated. Method: In this theoretical review article, we synthesize previous research and present a framework of potentially shared neural mechanisms for music and speech rhythm processing. We outline structural similarities of rhythmic signals in music and speech, synthesize prominent music and speech rhythm theories, discuss impaired timing in developmental speech and language disorders, and discuss music rhythm training as an additional, potentially effective therapeutic tool to enhance speech/language processing in these disorders. Results: We propose the processing rhythm in speech and music (PRISM) framework, which outlines three underlying mechanisms that appear to be shared across music and speech/language processing: precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The goal of this framework is to inform directions for future research that integrate cognitive and biological evidence for relationships between rhythm processing in music and speech. Conclusion: The current framework can be used as a basis to investigate potential links between observed timing deficits in developmental disorders, impairments in the proposed mechanisms, and pathology-specific deficits which can be targeted in treatment and training supporting speech therapy outcomes. On these grounds, we propose future research directions and discuss implications of our framework.
Project description: Although a growing literature points to substantial variation in speech/language abilities related to individual differences in musical abilities, mainstream models of communication sciences and disorders have not yet incorporated these individual differences into childhood speech/language development. This article reviews a comprehensive body of research aligning with three main themes: (a) associations between musical rhythm and speech/language processing, (b) musical rhythm in children with developmental speech/language disorders and common comorbid attentional and motor disorders, and (c) individual differences in mechanisms underlying rhythm processing in infants and their relationship with later speech/language development. In light of converging evidence on associations between musical rhythm and speech/language processing, we propose the Atypical Rhythm Risk Hypothesis, which posits that individuals with atypical rhythm are at higher risk for developmental speech/language disorders. The hypothesis is framed within the larger epidemiological literature in which recent methodological advances allow for large-scale testing of shared underlying biology across clinically distinct disorders. A series of predictions for future work testing the Atypical Rhythm Risk Hypothesis are outlined. We suggest that if a significant body of evidence is found to support this hypothesis, we can envision new risk factor models that incorporate atypical rhythm to predict the risk of developing speech/language disorders. Given the high prevalence of speech/language disorders in the population and the negative long-term social and economic consequences of gaps in identifying at-risk children, these new lines of research could potentially positively impact access to early identification and treatment.
This article is categorized under: Linguistics > Language in Mind and Brain; Neuroscience > Development; Linguistics > Language Acquisition.
Project description: Objective: We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. Methods: For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Results: Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT-typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Conclusion: Future research should take into account a potentially considerable discordance between formal and functional language use and interpret communicative acts more cautiously.
Project description: FOXP2, the first gene to have been implicated in a developmental communication disorder, offers a unique entry point into neuromolecular mechanisms influencing human speech and language acquisition. In multiple members of the well-studied KE family, a heterozygous missense mutation in FOXP2 causes problems in sequencing muscle movements required for articulating speech (developmental verbal dyspraxia), accompanied by wider deficits in linguistic and grammatical processing. Chromosomal rearrangements involving this locus have also been identified. Analyses of FOXP2 coding sequence in typical forms of specific language impairment (SLI), autism, and dyslexia have not uncovered any etiological variants. However, no previous study has performed mutation screening of children with a primary diagnosis of verbal dyspraxia, the most overt feature of the disorder in affected members of the KE family. Here, we report investigations of the entire coding region of FOXP2, including alternatively spliced exons, in 49 probands affected with verbal dyspraxia. We detected variants that alter FOXP2 protein sequence in three probands. One such variant is a heterozygous nonsense mutation that yields a dramatically truncated protein product and cosegregates with speech and language difficulties in the proband, his affected sibling, and their mother. Our discovery of the first nonsense mutation in FOXP2 now opens the door for detailed investigations of neurodevelopment in people carrying different etiological variants of the gene. This endeavor will be crucial for gaining insight into the role of FOXP2 in human cognition.
Project description: Here we use two filtered speech tasks to investigate children's processing of slow (<4 Hz) versus faster (∼33 Hz) temporal modulations in speech. We compare groups of children with either developmental dyslexia (Experiment 1) or speech and language impairments (SLIs, Experiment 2) to groups of typically-developing (TD) children age-matched to each disorder group. Ten nursery rhymes were filtered so that their modulation frequencies were either low-pass filtered (<4 Hz) or band-pass filtered (22-40 Hz). Recognition of the filtered nursery rhymes was tested in a picture recognition multiple-choice paradigm. Children with dyslexia aged 10 years showed equivalent recognition overall to TD controls for both the low-pass and band-pass filtered stimuli, but showed significantly impaired acoustic learning during the experiment from low-pass filtered targets. Children with oral SLIs aged 9 years showed significantly poorer recognition of band-pass filtered targets compared to their TD controls, and showed comparable acoustic learning effects to TD children during the experiment. The SLI samples were also divided into children with and without phonological difficulties. The children with both SLI and phonological difficulties were impaired in recognizing both kinds of filtered speech. These data are suggestive of impaired temporal sampling of the speech signal at different modulation rates by children with different kinds of developmental language disorder. Both SLI and dyslexic samples showed impaired discrimination of amplitude rise times. Implications of these findings for a temporal sampling framework for understanding developmental language disorders are discussed.
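The modulation-filtering idea above (keeping only slow, <4 Hz, or faster, 22-40 Hz, amplitude modulations) can be sketched with SciPy. This is a minimal illustration on a synthetic amplitude-modulated signal, not the stimulus-construction pipeline used in the study; the Hilbert-envelope approach, filter order, and sampling rate here are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt


def envelope(x):
    # Amplitude envelope via the analytic signal (Hilbert transform)
    return np.abs(hilbert(x))


def filter_modulations(env, fs, band):
    # band = (None, 4) -> low-pass <4 Hz; band = (22, 40) -> band-pass 22-40 Hz
    lo, hi = band
    if lo is None:
        b, a = butter(4, hi, btype="low", fs=fs)
    else:
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
    # Zero-phase filtering so the envelope is not delayed
    return filtfilt(b, a, env)


fs = 1000  # Hz, hypothetical sampling rate
t = np.arange(0, 2.0, 1 / fs)
# 200 Hz carrier amplitude-modulated at 2 Hz (slow) and 30 Hz (fast)
x = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)
       + 0.5 * np.sin(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 200 * t)

env = envelope(x)
slow = filter_modulations(env, fs, (None, 4))  # retains the 2 Hz modulation
fast = filter_modulations(env, fs, (22, 40))   # retains the 30 Hz modulation
```

In the modulation spectrum of `slow`, the 2 Hz component dominates, while `fast` keeps the 30 Hz component; the two bands mirror the syllable-rate versus faster modulations contrasted in the study.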
Project description: Three groups of native English speakers named words aloud in Spanish, their second language (L2). Intermediate-proficiency learners in a classroom setting (Experiment 1) and in a domestic immersion program (Experiment 2) were compared to a group of highly proficient English-Spanish speakers. All three groups named cognate words more quickly and accurately than matched noncognates, indicating that all speakers experienced cross-language activation during speech planning. However, only the classroom learners exhibited effects of cross-language activation in their articulation: cognate words were named with shorter overall durations, but longer (more English-like) voice onset times. Inhibition of the first language during L2 speech planning appears to impact the stages of speech production at which cross-language activation patterns can be observed.
Project description: To analyze the association between stable asymptomatic white matter lesions (WMLs) and the cochlear implantation (CI) effect in congenitally deaf children, 43 CI children with stable asymptomatic WMLs determined via preoperative assessments and 86 peers with normal white matter were included. Outcome measurements included closed-set Mandarin Chinese (tone, disyllable, and sentence) recognition tests, Categories of Auditory Performance (CAP) scores, and Speech Intelligibility Rating (SIR) scales at 1, 12, and 24 months post-CI. Generalized estimating equation (GEE) models were used to analyze the association between WMLs and outcomes. In the WML group (control group), median CAP and SIR scores were 5 (5) and 4 (4), with mean rates of tone, disyllable, and sentence recognition of 84.8% (89.0%), 87.9% (89.7%), and 85.8% (88.0%) at 24 months post-CI, respectively. Auditory and speech performance improved significantly with implant use. Compared to their peers in the control group, the participants with stable asymptomatic WMLs did not differ significantly in auditory and speech abilities (p > 0.05). Stable asymptomatic WMLs might not be associated with poor auditory and speech intelligibility post-CI, which indicates that it is feasible to use comprehensive assessments to screen suitable candidates with WMLs who are likely to present with a good prognosis.