Dimensional attention as a mechanism of executive function: Integrating flexibility, selectivity, and stability.
ABSTRACT: In this report, we present a neural process model that explains visual dimensional attention and changes in visual dimensional attention over development. The model is composed of an object representation system that binds visual features such as shape and color to spatial locations and a label learning system that associates labels such as "color" or "shape" with visual features. We have previously demonstrated that this model explains the development of flexible dimensional attention in a task that requires children to switch between shape and color rules for sorting cards. In the model, the development of flexible dimensional attention is a product of strengthening associations between labels and features. In this report, we generalize this model to also explain the development of stable and selective dimensional attention. Specifically, we use the model to explain a previously reported developmental association between flexible dimensional attention and stable dimensional attention. Moreover, we generate predictions regarding developmental associations between flexible and selective dimensional attention. Results from an experiment with 3- and 4-year-olds supported the model's predictions: children who demonstrated flexibility also demonstrated higher levels of selectivity. Thus, the model provides a framework that integrates various functions of dimensional attention, including implicit and explicit functions, over development. This model also opens new avenues of research aimed at uncovering how cognitive functions such as dimensional attention emerge from the interaction between neural dynamics and task structure, as well as at understanding how learning dimensional labels creates changes in dimensional attention, brain activation, and neural connectivity.
Project description: Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues, and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection.
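Near-optimal cue integration of this kind is usually modeled as reliability-weighted averaging, in which each cue's estimate is weighted by its inverse variance and the combined estimate is more reliable than any single cue. A minimal sketch under that standard assumption (the function name and toy numbers are illustrative, not taken from the study):

```python
import numpy as np

def optimal_combination(estimates, sigmas):
    """Reliability-weighted (near-optimal) cue integration.

    Each cue's estimate is weighted by its inverse variance (1/sigma^2);
    the combined estimate has lower variance than any single cue.
    """
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas ** 2
    w /= w.sum()                                   # normalized reliability weights
    combined = float(np.dot(w, estimates))
    combined_sigma = float(np.sqrt(1.0 / np.sum(1.0 / sigmas ** 2)))
    return combined, combined_sigma

# two equally reliable cues: the combined estimate is their average,
# with lower uncertainty than either cue alone
combined, sigma = optimal_combination([1.0, 2.0], [1.0, 1.0])
```

With equal reliabilities this reduces to a simple average; with unequal reliabilities the more reliable cue dominates, which is the signature of near-optimal integration.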
Project description: The mechanisms of attention prioritize sensory input for efficient perceptual processing. Influential theories suggest that attentional biases are mediated via preparatory activation of task-relevant perceptual representations in visual cortex, but the neural evidence for a preparatory coding model of attention remains incomplete. In this experiment, we tested core assumptions underlying a preparatory coding model for attentional bias. Exploiting multivoxel pattern analysis of functional neuroimaging data obtained during a non-spatial attention task, we examined the locus, time-course, and functional significance of shape-specific preparatory attention in the human brain. Following an attentional cue, yet before the onset of a visual target, we observed selective activation of target-specific neural subpopulations within shape-processing visual cortex (lateral occipital complex). Target-specific modulation of baseline activity was sustained throughout the duration of the attention trial, and the degree of target specificity that characterized preparatory activation patterns correlated with perceptual performance. We conclude that top-down attention selectively activates target-specific neural codes, providing a competitive bias favoring task-relevant representations over competing representations distributed within the same subregion of visual cortex.
Project description: When we search for a target in a crowded visual scene, we often use the distinguishing features of the target, such as color or shape, to guide our attention and eye movements. To investigate the neural mechanisms of feature-based attention, we simultaneously recorded neural responses in the frontal eye field (FEF) and area V4 while monkeys performed a visual search task. The responses of cells in both areas were modulated by feature attention, independent of spatial attention, and the magnitude of response enhancement was inversely correlated with the number of saccades needed to find the target. However, an analysis of the latency of sensory and attentional influences on responses suggested that V4 provides bottom-up sensory information about stimulus features, whereas the FEF provides a top-down attentional bias toward target features that modulates sensory processing in V4 and that could be used to guide the eyes to a searched-for target.
Project description: Visual attention is a mechanism of the visual system that selects relevant objects from a scene. Interactions among neurons in multiple cortical areas are thought to be involved in attentional allocation, but the features encoded in these attention-related cortices, and the neuron responses they evoke, remain unclear. This study therefore investigates whether unusual regions, which tend to attract more attention, elicit distinctive neuron responses. We hypothesize that visual saliency can be computed from neuron responses to context in natural scenes. To test this hypothesis, we propose a bottom-up visual attention model based on the self-information of neuron responses. Four different color spaces are adopted, and a novel entropy-based combination scheme is designed to make full use of color information. In the saliency maps produced by the proposed model, valuable regions are highlighted while redundant backgrounds are suppressed. Comparative results show that the proposed model outperforms several state-of-the-art models. This study provides insight into saliency detection based on neuron responses and may help clarify how early visual cortices support bottom-up visual attention.
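The core computation here, saliency as the self-information of a response, can be sketched in one dimension. This is a toy version, not the published model: the histogram estimator, bin count, and example values are illustrative assumptions.

```python
import numpy as np

def self_information_saliency(responses, bins=32):
    """Assign each response its self-information, -log2 p(response).

    Rare (low-probability) responses get high saliency, capturing the idea
    that unusual regions in a scene attract bottom-up attention.
    """
    hist, edges = np.histogram(responses, bins=bins)
    p = hist / hist.sum()                          # empirical response probabilities
    info = -np.log2(np.where(p > 0, p, 1.0))       # empty bins contribute 0
    idx = np.clip(np.digitize(responses, edges[1:-1]), 0, bins - 1)
    return info[idx]

# toy scene: one rare response among a uniform background
responses = np.array([0.1] * 99 + [0.9])
saliency = self_information_saliency(responses)
```

In the toy example the single rare response receives far higher saliency than the 99 common ones, which is the behavior the model relies on to highlight unusual regions.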
Project description: A host of research has now shown that our explicit goals and intentions can, in large part, overcome the capture of visual attention by objects that differ from their surroundings in terms of size, shape, or color. Surprisingly however, there is little evidence for the role of implicit learning in mitigating capture effects despite the fact that such learning has been shown to strongly affect behavior in a host of other performance domains. Here, we employ a modified attention capture paradigm, based on the work of Theeuwes (1991, 1992), in which participants must search for an odd-shaped target amongst homogeneous distracters. On each trial, there is also a salient, but irrelevant odd-colored distracter. Across the experiments reported, we intermix two search contexts: for one set of distracters (e.g., squares) the shape singleton and color singleton coincide on a majority of trials (high proportion congruent condition), whereas for the other set of distracters (e.g., circles) the shape and color singletons are highly unlikely to coincide (low proportion congruent condition). Crucially, we find that observers learn to allow the capture of attention by the salient distracter to a greater extent in the high, compared to the low proportion congruent condition, albeit only when search is sufficiently difficult. Moreover, this effect of prior experience on search behavior occurs in the absence of awareness of our proportion manipulation. We argue that low-level properties of the search displays recruit representations of prior experience in a rapid, flexible, and implicit manner.
Project description: We investigated how attention to a visual feature modulates representations of other features. The feature-similarity gain model predicts a graded modulation, whereas an alternative model asserts an inhibitory surround in feature space. Although evidence for both types of modulation can be found, a consensus has not emerged in the literature. Here, we aimed to reconcile these different views by systematically measuring how attention modulates color perception. Based on previous literature, we also predicted that color categories would impact attentional modulation. Our results showed that both surround suppression and feature-similarity gain modulate perception of colors, but they operate on different similarity scales. Furthermore, the region of the suppressive surround coincided with the color category boundary, suggesting a categorical sharpening effect. We implemented a neural population coding model to explain the observed behavioral effects, which revealed a hitherto unknown connection between neural tuning shift and surround suppression.
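The two competing profiles can be written down directly: feature-similarity gain as a gain that falls off monotonically with feature distance, and surround suppression as a Mexican-hat (difference-of-Gaussians) profile that boosts the attended value but dips below baseline nearby. A minimal sketch over a hypothetical hue axis; all widths and amplitudes are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# hypothetical 1-D feature space (e.g., hue angle in degrees)
prefs = np.linspace(0, 180, 64, endpoint=False)   # neurons' preferred colors

def similarity_gain(attended, width=30.0, amp=0.5):
    """Feature-similarity gain: graded boost that decays with distance,
    never dropping below baseline (gain >= 1 everywhere)."""
    return 1.0 + amp * np.exp(-0.5 * ((prefs - attended) / width) ** 2)

def surround_suppression(attended, width=30.0, amp=0.5, k=1.3):
    """Mexican-hat profile: boost at the attended value, suppression
    (gain < 1) for moderately different features in the surround."""
    d = (prefs - attended) / width
    return 1.0 + amp * (np.exp(-0.5 * d ** 2)
                        - 0.6 * np.exp(-0.5 * (d / k) ** 2))

g_sim = similarity_gain(90.0)
g_sur = surround_suppression(90.0)
```

Both profiles boost neurons tuned to the attended color; they differ only in the surround, where the Mexican-hat profile pushes gain below baseline, which is the behavioral signature the study uses to distinguish the two accounts.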
Project description: Objects in a scene can be distinct from one another along a multitude of visual attributes, such as color and shape, and the more distinct an object is from its surroundings, the easier it is to find. However, exactly how this distinctiveness advantage arises in vision is not well understood. Here we studied whether and how visual distinctiveness along different visual attributes (color and shape, assessed in four experiments) combine to determine an object's overall distinctiveness in a scene. Unidimensional distinctiveness scores were used to predict performance in six separate experiments where a target object differed from distractor objects along both color and shape. Results showed that a simple mathematical law determines overall distinctiveness: it is the sum of the distinctiveness scores along each visual attribute. Thus, the brain must compute distinctiveness scores independently for each visual attribute before summing them into the overall score that directs human attention.
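The additive law can be illustrated with hypothetical unidimensional scores; the numbers and object names below are made up for illustration, not data from the experiments:

```python
def overall_distinctiveness(scores):
    """Additive law: an object's overall distinctiveness in a scene is
    the simple sum of its distinctiveness along each visual attribute."""
    return sum(scores.values())

# hypothetical scores from single-attribute (color-only, shape-only) experiments
targets = {
    "red_circle":  {"color": 0.8, "shape": 0.1},
    "blue_square": {"color": 0.2, "shape": 0.5},
}

# rank targets by predicted ease of finding them among distractors
ranked = sorted(targets,
                key=lambda t: overall_distinctiveness(targets[t]),
                reverse=True)
```

Under the additive law, the red circle (0.8 + 0.1 = 0.9) is predicted to be easier to find than the blue square (0.2 + 0.5 = 0.7), even though the latter is more distinctive in shape.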
Project description: Feature-based attention has a spatially global effect, i.e., responses to stimuli that share features with an attended stimulus are enhanced not only at the attended location but throughout the visual field. However, how feature-based attention modulates cortical neural responses at unattended locations remains unclear. Here we used functional magnetic resonance imaging (fMRI) to examine this issue as human participants performed motion- (Experiment 1) and color- (Experiment 2) based attention tasks. Results indicated that, in both experiments, the respective visual processing areas (middle temporal area [MT+] for motion and V4 for color) as well as early visual, parietal, and prefrontal areas all showed the classic feature-based attention effect, with neural responses to the unattended stimulus significantly elevated when it shared the same feature with the attended stimulus. Effective connectivity analysis using dynamic causal modeling (DCM) showed that this spatially global effect in the respective visual processing areas (MT+ for motion and V4 for color), intraparietal sulcus (IPS), frontal eye field (FEF), medial frontal gyrus (mFG), and primary visual cortex (V1) was driven by feedback from the inferior frontal junction (IFJ). Complementary effective connectivity analysis using Granger causality modeling (GCM) confirmed that, in both experiments, the node with the highest outflow and netflow degree was IFJ, which was thus considered to be the source of the network. These results indicate a source for the spatially global effect of feature-based attention in the human prefrontal cortex.
Project description: Somewhere along the cortical hierarchy, behaviorally relevant information is distilled from raw sensory inputs. We examined how this transformation progresses along multiple levels of the hierarchy by comparing neural representations in visual, temporal, parietal, and frontal cortices in monkeys categorizing across three visual domains (shape, motion direction, and color). Representations in visual areas middle temporal (MT) and V4 were tightly linked to external sensory inputs. In contrast, lateral prefrontal cortex (PFC) largely represented the abstracted behavioral relevance of stimuli (task rule, motion category, and color category). Intermediate-level areas, including posterior inferotemporal (PIT), lateral intraparietal (LIP), and frontal eye fields (FEF), exhibited mixed representations. While the distribution of sensory information across areas aligned well with classical functional divisions (MT carried stronger motion information, and V4 and PIT carried stronger color and shape information), categorical abstraction did not, suggesting these areas may participate in different networks for stimulus-driven and cognitive functions. Paralleling these representational differences, the dimensionality of neural population activity decreased progressively from sensory to intermediate to frontal cortex. This shows how raw sensory representations are transformed into behaviorally relevant abstractions and suggests that the dimensionality of neural activity in higher cortical regions may be specific to their current task.
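Population dimensionality of this kind is commonly quantified with the participation ratio of the covariance eigenspectrum; this is a standard measure, sketched here as one plausible choice, and the study's exact estimator may differ:

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of neural population activity.

    activity: (trials, neurons) response matrix. Returns
    PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues) of the
    neuron-by-neuron covariance; ranges from 1 (one dominant axis)
    up to the number of neurons (isotropic activity).
    """
    lam = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
    lam = np.clip(lam, 0.0, None)        # guard against tiny negative eigenvalues
    return float(lam.sum() ** 2 / np.sum(lam ** 2))

rng = np.random.default_rng(0)
iso = rng.standard_normal((2000, 3))                   # ~3-dimensional activity
rank1 = np.outer(rng.standard_normal(2000), np.ones(3))  # 1-dimensional activity
```

A population whose activity fills all axes equally has PR near the neuron count, while activity confined to a single axis has PR near 1, so a falling PR from sensory to frontal cortex captures the progressive compression the study reports.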
Project description: The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI and MEG. As different object dimensions co-occur and partly correlate, to understand the relationship between perceived similarity and brain activity it is necessary to assess the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership, nor by a deep neural network trained on object categorization. Together, our results comprehensively characterize the relationship between perceived similarity of key object dimensions and neural activity.
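At its core, representational similarity analysis reduces to correlating the off-diagonal entries of two representational dissimilarity matrices (RDMs), one behavioral and one neural. A minimal sketch using rank correlation on the upper triangles; the helper names are ours, and ties in the dissimilarities are assumed away for simplicity:

```python
import numpy as np

def upper_tri(rdm):
    """Vectorize the upper triangle of a representational dissimilarity matrix."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def spearman(a, b):
    """Spearman correlation computed as Pearson on ranks (no ties assumed)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

def rsa(behavioral_rdm, neural_rdm):
    """Relate perceived-similarity judgments to brain-activity patterns:
    rank-correlate the two RDMs' pairwise dissimilarities."""
    return spearman(upper_tri(behavioral_rdm), upper_tri(neural_rdm))
```

Because the comparison is rank-based, any monotone relationship between behavioral and neural dissimilarities yields a perfect score, which makes RSA robust to differences in measurement scale between judgments and brain signals.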