Reading Words or Pictures: Eye Movement Patterns in Adults and Children Differ by Age Group and Receptive Language Ability.
ABSTRACT: This study explored how much attention children and adults pay to Chinese print versus pictures when reading picture books with and without Chinese words. We used a SensoMotoric Instruments eye tracker to record the participants' visual fixations. The adults paid more attention to the Chinese print, and fixated it sooner, than the children did. The stronger a child's receptive language ability, the less time that child spent viewing the pictures. All participants spent the same amount of time looking at the pictures whether Chinese words were present or absent.
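The measures reported above (how soon and how long readers look at print versus pictures) are standard area-of-interest statistics. As an illustrative sketch only, the following computes time to first fixation and total dwell time per area of interest (AOI) from a list of fixation records; the record format and sample values are invented for demonstration, not taken from the study.

```python
# Hypothetical computation of two common eye-tracking measures:
# time to first fixation and total dwell time per area of interest (AOI).

def first_fixation_time(fixations, aoi):
    """Onset of the earliest fixation landing in the given AOI, or None."""
    times = [f["onset_ms"] for f in fixations if f["aoi"] == aoi]
    return min(times) if times else None

def dwell_time(fixations, aoi):
    """Summed duration of all fixations falling in the given AOI."""
    return sum(f["duration_ms"] for f in fixations if f["aoi"] == aoi)

# Invented sample data: three fixations on a picture-book page.
fixations = [
    {"aoi": "picture", "onset_ms": 120, "duration_ms": 240},
    {"aoi": "print",   "onset_ms": 380, "duration_ms": 180},
    {"aoi": "picture", "onset_ms": 600, "duration_ms": 300},
]

print(first_fixation_time(fixations, "print"))   # 380
print(dwell_time(fixations, "picture"))          # 540
```

An earlier first-fixation time on the print AOI corresponds to "looking at the print sooner," and dwell time corresponds to the total viewing time compared across groups.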
Project description: The meaning of a picture can be extracted rapidly, but the form-to-meaning relationship is less obvious for printed words. In contrast to English words, which follow grapheme-to-phoneme correspondence rules, the iconic nature of Chinese words might predispose them to activate their semantic representations more directly from their orthographies. Using the repetition blindness (RB) paradigm, which taps into an early level of word processing, we examined whether Chinese words activate their semantic representations as directly as pictures do. RB refers to the failure to detect the second occurrence of an item when it is presented twice in temporal proximity. Previous studies showed RB for semantically related pictures, suggesting that pictures activate their semantic representations directly from their shapes, so that two semantically related pictures are represented as repeated. This does not hold for English words, however, since no RB was found for English synonyms. In this study, we replicated the semantic RB effect for pictures and further showed the absence of semantic RB for Chinese synonyms. Based on these findings, we suggest that Chinese words are processed like English words, in that neither activates its semantic representations as directly as pictures do.
Project description: Comparisons of word and picture processing using event-related potentials (ERPs) are contaminated by gross physical differences between the two types of stimuli. In the present study, we tackle this problem by comparing picture processing with word processing in an alphabetic and a logographic script, two script types that are themselves characterized by gross physical differences. Native Mandarin Chinese speakers viewed pictures (line drawings) and Chinese characters (Experiment 1), native English speakers viewed pictures and English words (Experiment 2), and naïve Chinese readers (native English speakers) viewed pictures and Chinese characters (Experiment 3) in a semantic categorization task. The varying pattern of differences in the ERPs elicited by pictures and words across the three experiments provided evidence for (i) script-specific processing arising between 150 and 200 ms post-stimulus onset, (ii) domain-specific but script-independent processing arising between 200 and 300 ms post-stimulus onset, and (iii) processing that depended on stimulus meaningfulness in the N400 time window. The results are interpreted in terms of differences in the way visual features are mapped onto higher-level representations for pictures and words in alphabetic and logographic writing systems.
Project description: The present study used functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures, and the picture-word compound stimuli were presented for 1500 ms in a rapid event-related design. The results revealed pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions, whereas the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid on an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal cortex, and dorsolateral and medial prefrontal cortices: during the word categorization task, activation in these regions increased when the words were overlaid on erotic as compared to romantic pictures; during the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.
Project description: Little is known about the language and behaviors that typically occur when adults read electronic books with infants and toddlers, or about which of these support learning. In this study, we report differences in parent and child behavior and language when reading print versus electronic versions of the same books, and investigate links between behavior and vocabulary learning. Parents of 102 toddlers aged 17-26 months were randomly assigned to read either two commercially available electronic books or two print-format books with identical content with their toddler. After reading, children were asked to identify an animal labeled in one of the books in both two-dimensional (pictures) and three-dimensional (replica objects) formats. Toddlers who were read the electronic books paid more attention, made themselves more available for reading, displayed more positive affect, participated in more page turns, and produced more content-related comments during reading than those who were read the print versions of the books. Toddlers also correctly identified a novel animal labeled in the book more often when they had read the electronic rather than the traditional print books. Availability for reading and attention to the book acted as mediators in predicting children's animal choice at test, suggesting that electronic books supported children's learning by increasing their engagement and attention. In contrast to prior studies conducted with older children, there was no difference between conditions in behavioral or off-topic talk for either parents or children. More research is needed to determine the potential hazards and benefits of new media formats for very young children.
Project description: Controlled semantic retrieval to words elicits co-activation of inferior frontal (IFG) and left posterior temporal cortex (pMTG), but research has not yet established (i) the distinct contributions of these regions or (ii) whether the same processes are recruited for non-verbal stimuli. Words have relatively flexible meanings; as a consequence, identifying the context that links two specific words is relatively demanding. In contrast, pictures are richer stimuli whose precise meaning is better specified by their visible features; however, not all of these features will be relevant to uncovering a given association, tapping selection/inhibition processes. To explore potential differences across modalities, we took a commonly used manipulation of controlled retrieval demands, namely the identification of weak vs. strong associations, and compared word and picture versions. There were 4 key findings: (1) Regions of interest (ROIs) in posterior IFG (BA44) showed graded effects of modality (e.g., words>pictures in left BA44; pictures>words in right BA44). (2) An equivalent response was observed in left mid-IFG (BA45) across modalities, consistent with the multimodal semantic control deficits that typically follow LIFG lesions. (3) The anterior IFG (BA47) ROI showed a stronger response to verbal than pictorial associations, potentially reflecting a role for this region in establishing a meaningful context that can be used to direct semantic retrieval. (4) The left pMTG ROI also responded to difficulty across modalities, yet showed a stronger response overall to verbal stimuli, helping to reconcile two distinct literatures that have implicated this site in semantic control and lexical-semantic access respectively. We propose that left anterior IFG and pMTG work together to maintain a meaningful context that shapes ongoing semantic processing, and that this process is more strongly taxed by word than picture associations.
Project description: The experiment reported in this article examined item and associative recognition for pictures and words in college-age young adults and 60- to 75-year-old adults. The diffusion model (Ratcliff & McKoon, 2008) was used to extract estimates of processing components from the empirical accuracy values and the correct and error response time distributions. The model fit the empirical data well for both picture and word stimuli. Results showed that boundary separation was larger and nondecision time was longer for older relative to young adults. Drift rates were not lower for older adults in item recognition, but they were in associative recognition, indicating that the richer structure of pictures did not provide an enhanced ability to form associations for the older adults. There were also significant correlations among the components of processing across the tasks of the experiment, suggesting common factors, but participants' accuracy and response times did not significantly correlate within or across the tasks.
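The diffusion model described above treats a decision as noisy evidence accumulation between two boundaries, with drift rate (evidence quality), boundary separation (response caution), and nondecision time (encoding/motor time) as its core parameters. The following is a minimal illustrative simulation of a single trial, not the fitting procedure the study used (parameter estimation from RT distributions is considerably more involved); all parameter values are invented for demonstration.

```python
import random

def simulate_diffusion(drift, boundary, ndt_ms, dt_ms=1.0, noise=0.1):
    """Simulate one trial of a simple symmetric diffusion (random-walk) model.

    Evidence starts at 0 and accumulates until it crosses +boundary/2
    ('upper' response) or -boundary/2 ('lower' response); nondecision
    time is then added to the accumulation time.
    Returns (response, rt_ms).
    """
    x, t = 0.0, 0.0
    half = boundary / 2.0
    while abs(x) < half:
        x += drift * (dt_ms / 1000.0) + random.gauss(0.0, noise)
        t += dt_ms
    return ("upper" if x >= half else "lower", t + ndt_ms)

random.seed(1)
# Wider boundary separation (the more cautious criterion the study found
# in older adults) lengthens simulated response times on average.
rts_narrow = [simulate_diffusion(1.0, 0.8, 300)[1] for _ in range(500)]
rts_wide = [simulate_diffusion(1.0, 1.6, 300)[1] for _ in range(500)]
print(sum(rts_narrow) / 500 < sum(rts_wide) / 500)
```

The comparison at the end illustrates the qualitative effect of boundary separation: holding drift and nondecision time fixed, more widely separated boundaries produce slower (but more cautious) responses.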
Project description: To understand the neural correlates of the memorial power of pictures, pictures and words were systematically varied at study and test within subjects, and high-density event-related potentials (ERPs) were recorded at retrieval. Using both conventional and novel methods, the authors presented the results as ERP waveforms, 50-ms scalp topographies, and video clips, and analyzed them using t-statistic topographic maps and nonparametric p-value maps. The authors found that a parietally based ERP component was enhanced when pictures were presented at study or test, compared to when words were presented. An early frontally based component was enhanced when words were presented at study compared to pictures. From these data the authors speculate that the memorial power of pictures is related to their ability to enhance recollection. Familiarity, by contrast, was enhanced when words were presented at study compared to pictures. From these results, and the dynamic view of memory afforded by viewing the data as video clips, the authors propose an ERP model of recognition memory.
Project description: Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word, so it was not clear whether children can learn to read words by extracting visual word-form information from environmental print. To exclude this phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed, each transformed into four versions in which the contextual cues (i.e., features apart from the words themselves, such as color, logo, and font type) were gradually minimized. Children aged 3 to 5 were tested. Children of all ages performed better when words were presented in highly familiar logos than when they were presented plainly, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of the various cues in learning words changed with age: the color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrate that young children did not easily learn words by extracting their visual form information, even from familiar environmental print. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4 did.
Project description: Previous studies have demonstrated the automatic vigilance effect for faces and pictures and have attributed it to the brain's prioritized unconscious evaluation of evolutionarily early stimuli that are critical to survival. Whether this effect exists for evolutionarily more recent stimuli, such as written words, has become the center of much debate. Apparently contradictory results have been reported in different languages, such as Hebrew, English, and Traditional Chinese (TC), with regard to the unconscious processing of emotional words in breaking continuous flash suppression (b-CFS). Our current study used two experiments (with two-character words or single-character words) to test whether the emotional valence or the length of Simplified Chinese (SC) words would modulate conscious access in b-CFS. We failed to replicate the findings reported in Yang and Yeh (2011) using TC, and instead found that complex high-level emotional information could not be extracted from interocularly suppressed words regardless of their length. Our findings comply with the distinction between subliminal and preconscious states in Global Neuronal Workspace Theory and support the current notion that preconsciousness, or partial awareness, may be indispensable for high-level cognitive tasks such as reading comprehension.
Project description: A surprisingly small portion of reading research has been dedicated to how the visual word recognition process is influenced by embedded words (e.g., 'arm' in 'charm'), and no research has yet investigated embedded words in a natural reading setting. Addressing this gap, the present work reports analyses of eye-tracking data from the GECO bilingual book-reading corpus. Word viewing times were analyzed as a function of the number, frequency, and proportional length of embedded words. We anticipated two scenarios: embedded words would either facilitate processing due to increased word-letter feedback, or inhibit processing due to increased lexical competition. A main facilitatory effect of embedded words on the recognition process was established, with an increasing number of embedded words resulting in shorter word viewing times and fewer fixations. This pattern was observed in readers of Dutch as well as readers of English. Long, high-frequency embedded words were an exception, however, as these led to inhibition (Dutch participants) or a null effect (English participants). The present results indicate that both scenarios outlined above are at play, but with a theoretical constraint on the role of word-to-word inhibitory connections. Specifically, such connections may predominantly exist among words of similar length. Hence, embedded words generally facilitate processing through word-letter feedback, but this facilitatory effect is countered by word-to-word inhibition when the embedded word's length approximates that of its superset.
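The predictor central to this analysis, the number of embedded words in a target word, can be computed by checking every contiguous substring of the word against a lexicon. The following is a minimal sketch under invented assumptions (a tiny hand-made lexicon and a minimum length of two letters), not the corpus pipeline the study used.

```python
# Hypothetical sketch: find lexicon words embedded as contiguous
# substrings of a target word, as in 'arm' embedded in 'charm'.

def embedded_words(word, lexicon, min_len=2):
    """Return sorted lexicon entries embedded in `word` (excluding `word` itself)."""
    found = set()
    for i in range(len(word)):
        for j in range(i + min_len, len(word) + 1):
            sub = word[i:j]
            if sub != word and sub in lexicon:
                found.add(sub)
    return sorted(found)

# Invented mini-lexicon for demonstration.
lexicon = {"arm", "harm", "charm", "car", "hat"}
print(embedded_words("charm", lexicon))  # ['arm', 'harm']
```

From this count, the other predictors follow naturally: each embedded word's corpus frequency can be looked up in the lexicon, and its proportional length is simply `len(embedded) / len(word)`.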