Visual appearance interacts with conceptual knowledge in object recognition.
ABSTRACT: Objects contain rich visual and conceptual information, but do these two types of information interact? Here, we examine whether visual and conceptual information interact when observers see novel objects for the first time. We then address how this interaction influences the acquisition of perceptual expertise. We used two types of novel objects (Greebles), designed to resemble either animals or tools, and two lists of words, which described non-visual attributes of people or man-made objects. Participants first judged whether a word was more suitable for describing people or objects while ignoring a task-irrelevant image, and showed faster responses when the words and the unfamiliar objects were congruent in animacy (e.g., animal-like objects paired with words that described humans). Participants then learned to associate objects and words that were either congruent or incongruent in animacy, before receiving expertise training to rapidly individuate the objects. Congruent pairing of visual and conceptual information facilitated observers' ability to become perceptual experts, as revealed in a matching task that required visual identification at the basic or subordinate level. Taken together, these findings show that visual and conceptual information interact at multiple levels in object recognition.
Project description: Much of the richness of perception is conveyed by implicit, rather than image- or feature-level, information. The perception of animacy or lifelikeness of objects, for example, cannot be predicted from image-level properties alone. Instead, perceiving lifelikeness seems to be an inferential process, and one might expect it to be cognitively demanding and serial rather than fast and automatic. If perceptual mechanisms exist to represent lifelikeness, then observers should be able to perceive this information quickly and reliably, and should be able to perceive the lifelikeness of crowds of objects. Here, we report that observers are highly sensitive to the lifelikeness of random objects and even groups of objects. Observers' percepts of crowd lifelikeness are well predicted by independent observers' lifelikeness judgements of the individual objects comprising that crowd. We demonstrate that visual impressions of abstract dimensions can be achieved with summary statistical representations, which underlie our rich perceptual experience.
Project description: In everyday life, we often react manually to verbal requests or instructions, but the functional interrelations of motor control and language, especially their neurophysiological basis, are not yet fully understood. Here, we investigated whether specific motor representations for grip types interact neurophysiologically with conceptual information, that is, when reading nouns. Participants performed lexical decisions and, for words, executed a grasp-and-lift task on objects of different sizes involving precision or power grips while the electroencephalogram was recorded. Nouns could denote objects that require either a precision or a power grip and could, thus, be (in)congruent with the performed grasp. In a control block, participants pointed at the objects instead of grasping them. The main result revealed an event-related potential (ERP) interaction of grip type and conceptual information which was not present for pointing. Incongruent compared to congruent conditions elicited an increased positivity (100-200 ms after noun onset). Grip type effects were obtained in response-locked analyses of the grasping ERPs (100-300 ms at left anterior electrodes). These findings attest that grip type and conceptual information are functionally related when planning a grasping action, but such an interaction could not be detected for pointing. Generally, the results suggest that control of behaviour can be modulated by task demands; conceptual noun information (i.e., associated action knowledge) may gain processing priority if the task requires a complex motor response.
Project description: In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory, or visual-visual.
Project description: A major part of learning a language is learning to map spoken words onto objects in the environment. An open question is: what are the consequences of this learning for cognition and perception? Here, we present a series of experiments that examine effects of verbal labels on the activation of conceptual information as measured through picture verification tasks. We find that verbal cues, such as the word "cat," lead to faster and more accurate verification of congruent objects and rejection of incongruent objects than do either nonverbal cues, such as the sound of a cat meowing, or words that do not directly refer to the object, such as the word "meowing." This label advantage does not arise from verbal labels being more familiar or easier to process than other cues, and it extends to newly learned labels and sounds. Despite having equivalent facility in learning associations between novel objects and labels or sounds, conceptual information is activated more effectively through verbal means than through nonverbal means. Thus, rather than simply accessing nonverbal concepts, language activates aspects of a conceptual representation in a particularly effective way. We offer preliminary support that representations activated via verbal means are more categorical and show greater consistency between subjects. These results inform the understanding of how human cognition is shaped by language and hint at effects that different patterns of naming can have on conceptual structure.
Project description: Known words can guide visual attention, affecting how information is sampled. How do novel words, those that do not provide any top-down information, affect preschoolers' visual sampling in a conceptual task? We proposed that novel names can also change visual sampling by influencing how long children look. We investigated this possibility by analyzing how children sample visual information when they hear a sentence with a novel name versus without a novel name. Children completed a match-to-sample task while their moment-to-moment eye movements were recorded using eye-tracking technology. Our analyses were designed to provide specific information on the properties of visual sampling that novel names may change. Overall, we found that novel words prolonged the duration of each sampling event but did not affect sampling allocation (which objects children looked at) or sampling organization (how children transitioned from one object to the next). These results demonstrate that novel words change one important dynamic property of gaze: novel words can entrain the cognitive system toward longer periods of sustained attention early in development.
Project description: The neurodevelopmental consequences of deafness for the functional neuroarchitecture of the conceptual system have not been intensively investigated so far. Using functional magnetic resonance imaging (fMRI), we therefore identified brain areas involved in conceptual processing in deaf and hearing participants. Conceptual processing was probed by a pictorial animacy decision task. Furthermore, brain areas sensitive to observing verbal signs and to observing non-verbal visual hand actions were identified in deaf participants. In hearing participants, brain areas responsive to environmental sounds and the observation of visual hand actions were determined. We found a stronger recruitment of superior and middle temporal cortex in deaf compared to hearing participants during animacy decisions. This region, which forms auditory cortex in hearing people according to the sound listening task, was also activated in deaf participants when they observed sign language, but not when they observed non-verbal hand actions. These results indicate that conceptual processing in deaf people depends more strongly on language representations than in hearing people. Furthermore, additionally enhanced activation in visual and motor areas of deaf versus hearing participants during animacy decisions, together with a more frequent report of visual and motor features in a property listing task, suggests that the loss of the auditory channel is partially compensated by an increased importance of visual and motor information for constituting object knowledge. Hence, our results indicate that conceptual processing in deaf compared to hearing people is more strongly based on the language system, complemented by an enhanced contribution of the visuo-motor system.
Project description: Identifying an object's material properties supports recognition and action planning: we grasp objects according to how heavy, hard, or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (more shiny). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced only a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction: slippery objects were rated as glossier, and vice versa. The sensory system treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception, a sensible strategy given the ambiguity of visual cues to gloss.
Project description: Previous studies have shown that language can modulate visual perception by biasing and/or enhancing perceptual performance. However, it is still debated where in the brain visual and linguistic information are integrated, and whether the effects of language on perception are automatic and persist even in the absence of awareness of the linguistic material. Here, we aimed to explore the automaticity of language-perception interactions and the neural loci of these interactions in an fMRI study. Participants engaged in a visual motion discrimination task (upward or downward moving dots). Before each trial, a word prime was briefly presented that implied upward or downward motion (e.g., "rise", "fall"). These word primes strongly influenced behavior: congruent motion words sped up reaction times and improved performance relative to incongruent motion words. Neural congruency effects were only observed in the left middle temporal gyrus, showing higher activity for congruent compared to incongruent conditions. This suggests that higher-level conceptual areas rather than sensory areas are the locus of language-perception interactions. When motion words were rendered unaware by means of masking, they still affected visual motion perception, suggesting that language-perception interactions may rely on automatic feed-forward integration of perceptual and semantic material in language areas of the brain.
Project description: Expertise effects for nonface objects in face-selective brain areas may reflect stable aspects of neuronal selectivity that determine how observers perceive objects. However, bottom-up (e.g., clutter from irrelevant objects) and top-down manipulations (e.g., attentional selection) can influence activity, affecting the link between category selectivity and individual performance. We test the prediction that individual differences expressed as neural expertise effects for cars in face-selective areas are sufficiently stable to survive clutter and manipulations of attention. Additionally, behavioral work and work using event-related potentials suggest that expertise effects may not survive competition; we investigate this using functional magnetic resonance imaging. Subjects varying in expertise with cars made 1-back decisions about cars, faces, and objects in displays containing one or two objects, with only one category attended. Univariate analyses suggest car expertise effects are robust to clutter and dampened by reducing attention to cars, but nonetheless more robust to manipulations of attention than to competition. While univariate expertise effects are largely abolished by competition between cars and faces, multivariate analyses reveal new information related to car expertise. These results demonstrate that signals in face-selective areas predict expertise effects for nonface objects in a variety of conditions, although individual differences may be expressed in different dependent measures depending on task and instructions.
Project description: As the stream of experience unfolds, our memory system rapidly transforms current inputs into long-lasting meaningful memories. A putative neural mechanism that strongly influences how input elements are transformed into meaningful memory codes relies on the ability to integrate them with existing structures of knowledge, or schemas. However, it is not yet clear whether schema-related integration mechanisms operate during online encoding. In the current investigation, we examined the encoding-dependent nature of this phenomenon in humans. We showed that actively integrating words with congruent semantic information provided by a category cue enhances memory for words and increases false recall. The memory effect of such active integration with congruent information was robust, even with an interference task occurring right after each encoding word list. In addition, via electroencephalography, we show in two separate studies that the onset of the neural signals of successful encoding appeared early (~400 ms) during the encoding of congruent words. That the neural signals of successful encoding of congruent and incongruent information followed similarly ~200 ms later suggests that this earlier neural response contributed to memory formation. We propose that the encoding of events that are congruent with readily available contextual semantics can trigger an accelerated onset of the neural mechanisms supporting the integration of semantic information with the event input. This faster onset would result in a long-lasting and meaningful memory trace for the event but, at the same time, make it difficult to distinguish from plausible but never encoded events (i.e., related false memories). SIGNIFICANCE STATEMENT: Conceptual or schema congruence has a strong influence on long-term memory. However, the question of whether schema-related integration mechanisms operate during online encoding has yet to be clarified.
We investigated the neural mechanisms reflecting how the active integration of words with congruent semantic categories enhances memory for words and increases false recall of semantically related words. We analyzed event-related potentials during encoding and showed that the onset of the neural signals of successful encoding appeared early (~400 ms) during the encoding of congruent words. Our findings indicate that congruent events can trigger an accelerated onset of neural encoding mechanisms supporting the integration of semantic information with the event input.