Neural systems for reading aloud: a multiparametric approach.
ABSTRACT: Reading aloud involves computing the sound of a word from its visual form. This may be accomplished 1) by direct associations between spellings and phonology and 2) by computation from orthography to meaning to phonology. These components have been studied in behavioral experiments examining lexical properties such as word frequency; length in letters or phonemes; spelling-sound consistency; semantic factors such as imageability; measures of orthographic or phonological complexity; and others. Effects of these lexical properties on specific neural systems, however, are poorly understood, partly because high intercorrelations among lexical factors make it difficult to determine whether they have independent effects. We addressed this problem by decorrelating several important lexical properties through careful stimulus selection. Functional magnetic resonance imaging data revealed distributed neural systems for mapping orthography directly to phonology, involving left supramarginal, posterior middle temporal, and fusiform gyri. Distinct from these were areas reflecting semantic processing, including left middle temporal gyrus/inferior-temporal sulcus, bilateral angular gyrus, and precuneus/posterior cingulate. Left inferior frontal regions generally showed increased activation with greater task load, suggesting a more general role in attention, working memory, and executive processes. These data offer the first clear evidence, in a single study, for the separate neural correlates of orthography-phonology mapping and semantic access during reading aloud.
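Decorrelating lexical properties through stimulus selection can be sketched as a simple optimization over a candidate word pool. The property values, pool size, and swap heuristic below are illustrative assumptions, not the study's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate pool: rows = words, columns = lexical properties
# (e.g. log frequency, length, spelling-sound consistency, imageability).
# In real stimulus sets these columns are strongly intercorrelated, so we
# induce one such correlation for the demonstration.
props = rng.normal(size=(500, 4))
props[:, 1] += 0.8 * props[:, 0]          # frequency-length correlation

def max_abs_corr(x):
    """Largest absolute pairwise correlation among property columns."""
    r = np.corrcoef(x, rowvar=False)
    np.fill_diagonal(r, 0.0)
    return np.abs(r).max()

def decorrelate(x, n_keep, n_iter=2000):
    """Greedy stimulus selection: randomly swap items in and out of the
    selected set, keeping swaps that lower the max pairwise correlation."""
    idx = rng.choice(len(x), size=n_keep, replace=False)
    best = max_abs_corr(x[idx])
    pool = np.setdiff1d(np.arange(len(x)), idx)
    for _ in range(n_iter):
        i = rng.integers(n_keep)
        j = rng.integers(len(pool))
        trial = idx.copy()
        trial[i], candidate = pool[j], idx[i]
        score = max_abs_corr(x[trial])
        if score < best:
            idx, best = trial, score
            pool[j] = candidate
    return idx, best

selected, r_max = decorrelate(props, n_keep=100)
print(f"pool max |r| = {max_abs_corr(props):.2f}, "
      f"selected max |r| = {r_max:.2f}")
```

The same idea underlies the multiparametric design: once the selected set's properties are near-orthogonal, their effects on the imaging data can be estimated independently.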
Project description: Although much is known about the cognitive and neural basis of establishing letter-sound mappings when learning word forms, relatively little is known about what makes for the most effective feedback during this process. We sought to determine the neural basis by which elaborative feedback (EF), which contains both reward-related and content-specific information, may be more helpful than feedback containing only one kind of information (simple positive feedback, PF) or the other (content feedback, CF) in learning orthography-phonology (spelling-sound) mappings for novel letter strings. Compared to CF, EF activated the ventromedial prefrontal cortex, implicated in reward processing. Compared to PF, EF activated the posterior middle temporal, superior temporal, and supramarginal gyri, regions implicated in orthography-phonology conversion. In the same comparison, EF also activated the left fusiform gyrus/visual word form area, implicated in orthographic processing. EF, but not CF or PF, also modulated activity in the caudate nucleus. In a postscan questionnaire, EF and PF were rated as more pleasant than CF, suggesting that modulation of the caudate for EF may be due to the coupling of reward and skill content. These findings suggest that the enhanced effectiveness of EF may be due to concurrent activation of reward-related and task-relevant brain regions.
Project description: Research on Japanese reading has generally indicated that processing of the logographic script Kanji primarily involves whole-word lexical processing and follows a semantics-to-phonology route, while the two phonological scripts Hiragana and Katakana (collectively called Kana) are processed via a sub-lexical route, in a more phonology-to-semantics manner. Switching between the two scripts therefore often involves switching between two reading processes, which results in a delayed response to the second script (a script switch cost). In the present study, participants responded to pairs of words written either in the same orthography (within-script) or in two different Japanese orthographies (cross-script), switching either between Kanji and Hiragana or between Katakana and Hiragana. They were asked to read the words aloud (Experiments 1 and 3) and to make a semantic decision about them (Experiments 2 and 4). In contrast to initial predictions, a clear switch cost was observed when participants switched between the two Kana scripts, while script switch costs were less consistent when participants switched between Kanji and Hiragana. This indicates that distinct processes are involved in reading the two types of Kana, with Hiragana reading bearing some similarities to Kanji processing. The role of semantic processing in Hiragana (but not Katakana) reading thus appears more prominent than previously thought, and Hiragana is not likely to be processed strictly phonologically.
Project description: There has been an enduring fascination with the possibility of gender differences in the brain basis of language, yet the evidence has been largely equivocal. Evidence does exist, however, that women are at greater risk than men for developing psychomotor slowing and even Alzheimer disease with advancing age, although this may be due, at least in part, to women living longer. We examined whether gender, age, or their interaction influenced language-related or more general processes in reading. Reading consists of elements related to language, such as the processing of word sound patterns (phonology) and meanings (semantics), along with the lead-in processes of visual perception and orthographic (visual word form) processing that are specific to reading. To test for any influence of gender and age on either semantic processing or orthography-phonology mapping, we tested for an interaction of these factors on differences between meaningful words and meaningless but pronounceable non-words. We also tested for effects of gender and age on how the number of letters in a word modulates neural activity for reading; this lead-in process presumably relates most to orthography. Behaviorally, reading accuracy declined with age for both men and women, but the decline was steeper for men. Neurally, interactions between gender and age were found exclusively in medial orbitofrontal cortex (mOFC). These factors influenced the word-non-word contrast, but not the parametric effect of number of letters. Men showed increasing activation with age for non-words compared to words. Women showed only slightly decreasing activation with age for novel letter strings. Overall, we found interactive effects of gender and age in the left mOFC primarily for novel letter strings, but no such interaction for a contrast that emphasized visual form processing.
Thus the interaction of gender with age in the mOFC may relate most to orthography-phonology conversion for unfamiliar letter strings. More generally, this suggests that efforts to investigate effects of gender on language-related tasks may benefit from taking into account age and the type of cognitive process being highlighted.
Project description: Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language that has relatively simple orthography-phonology relationships but considerable complexity at the suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping at both the segmental and supra-segmental levels.
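A "simple linear learning mechanism" of this kind can be illustrated with a toy delta-rule network that maps slot-coded letters to slot-coded phonemes. The alphabet, words, and coding scheme below are invented for the example and are far simpler than CDP++'s actual graphemic buffer:

```python
import numpy as np

# Toy letter and phoneme inventories; each letter maps transparently to
# one phoneme (a deliberately simple, consistent orthography).
letters = "abdint"
phonemes = ["A", "B", "D", "I", "N", "T"]
words = ["bat", "tan", "din", "nab", "bid", "ant"]   # invented training set

n_slots = 3                                  # fixed-length slot coding

def encode(word, symbols, n_units):
    """One-hot encode each position of `word` into its slot."""
    v = np.zeros(n_slots * n_units)
    for slot, ch in enumerate(word):
        v[slot * n_units + symbols.index(ch)] = 1.0
    return v

X = np.array([encode(w, list(letters), len(letters)) for w in words])
Y = np.array([encode(w.upper(), phonemes, len(phonemes)) for w in words])

# Batch delta rule: W += lr * X^T (Y - X W), a purely linear learner.
W = np.zeros((X.shape[1], Y.shape[1]))
lr = 0.1
for _ in range(200):
    W += lr * X.T @ (Y - X @ W)

def read_aloud(word):
    """Decode the most active phoneme in each output slot."""
    out = encode(word, list(letters), len(letters)) @ W
    k = len(phonemes)
    return "".join(phonemes[int(np.argmax(out[s * k:(s + 1) * k]))]
                   for s in range(n_slots))

print(read_aloud("bat"))   # trained item
print(read_aloud("tab"))   # novel string built from trained letter-slot units
```

Because the mapping here is consistent, the linear weights suffice; the modeling question in the study is whether such a mechanism also captures less transparent regularities like Italian stress assignment.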
Project description: Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic 'regularization' errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as 'played'), indicating over-reliance on sublexical grapheme-phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion-symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology.
Project description: Here we describe the public neuroimaging and behavioral dataset entitled "Cross-Sectional Multidomain Lexical Processing" available on the OpenNeuro project (https://openneuro.org). This dataset explores the neural mechanisms and development of lexical processing through task-based functional magnetic resonance imaging (fMRI) of rhyming, spelling, and semantic judgement tasks in both the auditory and visual modalities. Each task employed varying degrees of trial difficulty, including conflicting versus non-conflicting orthography-phonology pairs (e.g. harm - warm, wall - tall) in the rhyming and spelling tasks, as well as high versus low word-pair association in the semantic tasks (e.g. dog - cat, dish - plate). In addition, the dataset contains scores from a battery of standardized psychoeducational assessments, allowing for future analyses of brain-behavior relations. Data were collected from a cross-sectional sample of 91 typically developing children aged 8.7 to 15.5 years. The cross-sectional design, together with the inclusion of multiple measures of lexical processing across difficulties and modalities, allows for multiple avenues of future research on reading development.
Project description: While numerous studies have explored single-word naming, few have evaluated the behavioral and neural correlates of more naturalistic language, like the connected speech we produce every day. Here, in a retrospective analysis of 120 participants at least six months following left hemisphere stroke, we evaluated the distribution of word errors (paraphasias) and associated brain damage during connected speech (picture description) and object naming. While paraphasias in connected speech and naming shared underlying neural substrates, analysis of the distribution of paraphasias suggested that lexical-semantic load is likely reduced during connected speech. Using voxelwise lesion-symptom mapping (VLSM), we demonstrated that verbal (real word: semantically related and unrelated) and sound (phonemic and neologistic) paraphasias during both connected speech and naming loaded onto the left hemisphere ventral and dorsal streams of language, respectively. Furthermore, for the first time using both connected speech and naming data, we localized semantically related paraphasias to more anterior left hemisphere temporal cortex and unrelated paraphasias to more posterior left temporal and temporoparietal cortex. The connected speech results, in particular, highlight a gradient of specificity as visual recognition proceeds from left temporo-occipital cortex to posterior and then anterior temporal cortex. The robustness of VLSM results for sound paraphasias derived from connected speech was notable: analyses performed on sound paraphasias from the connected speech task, but not the naming task, remained significant after removal of lesion volume variance and related apraxia of speech variance. Connected speech may therefore be a particularly sensitive task for further evaluating lexical-phonological processing in the brain.
The results presented here demonstrate the related though distinct distributions of paraphasias during connected speech and naming, confirm that paraphasias arising in connected speech and single-word naming likely share neural origins, and underscore the need for continued evaluation of the neural substrates of connected speech processes.
Project description: We explored the impact of task context on subliminal neural priming using functional magnetic resonance imaging. The repetition of words during semantic categorization produced activation reduction in the left middle temporal gyrus, previously associated with semantic-level representation, and in dorsal premotor cortex. By contrast, reading aloud produced repetition enhancement in the left inferior parietal lobe, associated with print-to-sound conversion, and in ventral premotor cortex. Analyses of effective connectivity revealed that the task set for reading generated reciprocal excitatory connections between the left inferior parietal and superior temporal regions, reflecting the audiovisual integration required for vocalization, whereas categorization did not produce such backward projection to posterior regions. Thus, masked repetition priming involves two distinct components in the task-specific neural streams: one in parietotemporal cortex for task-specific word processing and the other in premotor cortex for behavioral response preparation. The top-down influence of task sets further changes the direction of unconscious priming across the entire cerebral circuitry for reading.
Project description: According to the competition account of lexical selection in word production, conceptually driven word retrieval involves the activation of a set of candidate words in left temporal cortex and competitive selection of the intended word from this set, regulated by frontal cortical mechanisms. However, the relative contribution of these brain regions to competitive lexical selection is uncertain. In the present study, five patients with left prefrontal cortex lesions (overlapping in ventral and dorsal lateral cortex), eight patients with left lateral temporal cortex lesions (overlapping in middle temporal gyrus), and 13 matched controls performed a picture-word interference task. Distractor words were semantically related or unrelated to the picture, or were the name of the picture (congruent condition). Semantic interference (related vs. unrelated), which taps into competitive lexical selection, was examined. An overall semantic interference effect was observed for the control and left-temporal groups separately. The left-frontal patients did not show a reliable semantic interference effect as a group. The left-temporal patients had increased semantic interference in the error rates relative to controls. Error distribution analyses indicated that these patients had more hesitant responses for the related than for the unrelated condition. We propose that left middle temporal lesions affect the lexical activation component, making lexical selection more susceptible to errors.
Project description: Background: In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows rapid and automatic activation of phonological information in visual word recognition. Unlike most alphabetic languages, in which there is a natural correspondence between visual and phonological forms, in logographic Chinese the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. Whether the brain rapidly and automatically extracts the phonological information in Chinese characters has not yet been thoroughly addressed. Methodology/principal findings: We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophones of Chinese characters: the phonological features embedded in these visual characters were the same, including consonants, vowels, and the lexical tone. Occasionally, this phonological rule was randomly violated by characters whose phonological features differed in lexical tone. Conclusions/significance: We showed that the violation of lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating rapid extraction of the phonological information embedded in Chinese characters. Source analysis revealed that the vMMN arose from neural activation of the visual cortex, suggesting that visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage.