Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas.
ABSTRACT: Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), the Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three encoding models, one instantiating each of the hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models was largely shared, and the individual models explained little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.
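The analysis code itself is not part of this record; the sketch below (Python, using scikit-learn) only illustrates the voxel-wise modeling recipe described above: for each feature space, fit one linear model per voxel and score it by the variance it predicts in a withheld portion of the data. The feature matrices, their dimensionalities, the train/test split, and the use of ridge-regularized regression are assumptions made for illustration, not the study's actual features, data, or fitting procedure.

```python
# Minimal sketch of voxel-wise encoding-model fitting and evaluation.
# Feature matrices, dimensionalities, the train/test split, and the use of
# ridge regression are illustrative assumptions; all data are random
# placeholders, not the study's stimuli or responses.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_train, n_test, n_voxels = 1200, 186, 500  # 1386 images total, arbitrary split

# One row per image, one column per model feature (placeholder dimensions).
features = {
    "fourier_power": rng.standard_normal((n_train + n_test, 64)),
    "subjective_distance": rng.standard_normal((n_train + n_test, 10)),
    "object_categories": rng.standard_normal((n_train + n_test, 19)),
}
bold = rng.standard_normal((n_train + n_test, n_voxels))  # voxel responses

train, test = slice(0, n_train), slice(n_train, None)
for name, X in features.items():
    # Fit one linear model per voxel; RidgeCV handles all voxels at once
    # because the target is a 2-D (images x voxels) array.
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], bold[train])
    # Variance explained in the withheld images, separately for each voxel.
    r2 = r2_score(bold[test], model.predict(X[test]), multioutput="raw_values")
    print(f"{name}: median held-out R^2 = {np.median(r2):.3f}")
```

In the same framework, the shared versus unique variance discussed above can be estimated by comparing the held-out R^2 of a model built from all three feature spaces concatenated with that of reduced models omitting one feature space at a time.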
Project description: Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied and more posterior region, the OPA. Here we hypothesized that OPA represents local elements of scenes within two independent yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and to these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than to single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements, in both spatial boundary and scene content representation, while PPA and RSC represent global scene properties.
Project description: Behavioral studies in many species and studies in robotics have demonstrated two sources of information critical for visually-guided navigation: sense (left-right) information and egocentric distance (proximal-distal) information. A recent fMRI study found sensitivity to sense information in two scene-selective cortical regions, the retrosplenial complex (RSC) and the occipital place area (OPA), consistent with hypotheses that these regions play a role in human navigation. Surprisingly, however, another scene-selective region, the parahippocampal place area (PPA), was not sensitive to sense information, challenging hypotheses that this region is directly involved in navigation. Here we examined how these regions encode egocentric distance information (e.g., a house seen from close up versus far away), another type of information crucial for navigation. Using fMRI adaptation and a regions-of-interest analysis approach in human adults, we found sensitivity to egocentric distance information in RSC and OPA, while PPA was not sensitive to such information. These findings further support the view that RSC and OPA are directly involved in navigation, while PPA is not, consistent with the hypothesis that scenes may be processed by distinct systems guiding navigation and recognition.
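For illustration only, a minimal sketch of the kind of region-of-interest fMRI-adaptation comparison described above might look as follows; the subject count, response values, and effect sizes are simulated placeholders rather than the study's data or analysis code.

```python
# Sketch of a region-of-interest fMRI-adaptation comparison: responses to
# repeated vs. novel egocentric distances in each ROI. Subject counts and
# percent-signal-change values are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 16
rois = ["RSC", "OPA", "PPA"]

# Mean response per subject, per ROI, for each condition (simulated).
same_distance = {roi: rng.normal(0.8, 0.2, n_subjects) for roi in rois}
diff_distance = {roi: rng.normal(1.0, 0.2, n_subjects) for roi in rois}

for roi in rois:
    # Adaptation = reduced response when the egocentric distance repeats.
    adaptation = diff_distance[roi] - same_distance[roi]
    t, p = stats.ttest_1samp(adaptation, 0.0)
    print(f"{roi}: mean adaptation = {adaptation.mean():.2f}, "
          f"t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```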
Project description: Neuroimaging studies have identified multiple scene-selective regions in human cortex, but the precise role each region plays in scene processing is not yet clear. It was recently hypothesized that two regions, the occipital place area (OPA) and the retrosplenial complex (RSC), play a direct role in navigation, while a third region, the parahippocampal place area (PPA), does not. Some evidence suggests a further division of labor even among regions involved in navigation: while RSC is thought to support navigation through the broader environment, OPA may be involved in navigation through the immediately visible environment, although this role for OPA has never been tested. Here we predicted that OPA represents first-person perspective motion information through scenes, a critical cue for such "visually-guided navigation", consistent with the hypothesized role for OPA. Response magnitudes were measured in OPA (as well as RSC and PPA) to i) video clips of first-person perspective motion through scenes ("Dynamic Scenes"), and ii) static images taken from these same videos, rearranged such that first-person perspective motion could not be inferred ("Static Scenes"). As predicted, OPA responded significantly more to the Dynamic than to the Static Scenes, relative to both RSC and PPA. The selective response in OPA to Dynamic Scenes was not due to domain-general motion sensitivity or to low-level information inherited from early visual regions. Taken together, these findings suggest the novel hypothesis that OPA may support visually-guided navigation, insofar as first-person perspective motion information is useful for such navigation, while RSC and PPA support other aspects of navigation and scene recognition.
Project description: Human observers can recognize real-world visual scenes with great efficiency. Cortical regions such as the parahippocampal place area (PPA) and retrosplenial complex (RSC) have been implicated in scene recognition, but the specific representations supported by these regions are largely unknown. We used functional magnetic resonance imaging adaptation (fMRIa) and multi-voxel pattern analysis (MVPA) to explore this issue, focusing on whether the PPA and RSC represent scenes in terms of general categories or as specific scene exemplars. Subjects were scanned while viewing images drawn from 10 outdoor scene categories in two scan runs and images of 10 familiar landmarks from their home college campus in two other scan runs. Analyses of multi-voxel patterns revealed that the PPA and RSC encoded both category and landmark information, with a slight advantage for landmark coding in RSC. fMRIa, on the other hand, revealed a very different picture: both PPA and RSC adapted when landmark information was repeated, but category adaptation was observed only in a small subregion of the left PPA. These inconsistencies between the MVPA and fMRIa data suggest that these two techniques interrogate different aspects of the neuronal code. We propose three hypotheses about the mechanisms that might underlie adaptation and multi-voxel signals.
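As a rough illustration of the MVPA side of such an analysis (not the authors' code), a cross-validated linear classifier can be trained on multi-voxel patterns to test whether an ROI carries category (or landmark) information; the pattern matrix, labels, and dimensions below are simulated placeholders.

```python
# Sketch of cross-validated MVPA decoding of scene category from ROI voxel
# patterns. The pattern matrix, labels, and dimensions are simulated
# placeholders, not the study's data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(2)
n_categories, trials_per_category, n_voxels = 10, 20, 300

patterns = rng.standard_normal((n_categories * trials_per_category, n_voxels))
labels = np.repeat(np.arange(n_categories), trials_per_category)

# Accuracy above chance (1 / n_categories) would indicate that the ROI's
# multi-voxel patterns carry category information.
clf = LinearSVC(C=1.0, max_iter=5000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(clf, patterns, labels, cv=cv).mean()
print(f"decoding accuracy = {accuracy:.2f} (chance = {1 / n_categories:.2f})")
```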
Project description: We internally represent the structure of our surroundings even when there is little layout information available in the visual image, such as when walking through fog or darkness. One way in which we disambiguate such scenes is through object cues; for example, seeing a boat supports the inference that the foggy scene is a lake. Recent studies have investigated the neural mechanisms by which object and scene processing interact to support object perception. The current study examines the reverse interaction, by which objects facilitate the neural representation of scene layout. Photographs of indoor (closed) and outdoor (open) real-world scenes were blurred such that they were difficult to categorize on their own but easily disambiguated by the inclusion of an object. fMRI decoding was used to measure scene representations in the scene-selective parahippocampal place area (PPA) and occipital place area (OPA). Classifiers were trained to distinguish response patterns to fully visible indoor and outdoor scenes, presented in an independent experiment. Testing these classifiers on blurred scenes revealed a strong improvement in classification in left PPA and OPA when objects were present, despite the reduced low-level visual feature overlap with the training set in this condition. These findings were specific to left PPA/OPA, with no evidence for object-driven facilitation in right PPA/OPA, object-selective areas, or early visual cortex. These findings demonstrate separate roles for left and right scene-selective cortex in scene representation, whereby left PPA/OPA represents inferred scene layout, influenced by contextual object cues, and right PPA/OPA represents a scene's visual features.
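A minimal sketch of this cross-decoding logic, with all arrays simulated rather than taken from the study, would train a classifier on patterns evoked by fully visible indoor versus outdoor scenes and test its generalization to blurred scenes with and without objects:

```python
# Sketch of the cross-decoding logic: train on patterns evoked by fully
# visible indoor vs. outdoor scenes, test on blurred scenes with and without
# a disambiguating object. All arrays are simulated placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_voxels = 250

# Training set: fully visible scenes from an independent experiment.
train_patterns = rng.standard_normal((120, n_voxels))
train_labels = np.tile([0, 1], 60)   # 0 = indoor (closed), 1 = outdoor (open)

# Test sets: blurred scenes, with or without an object present.
blurred_with_object = rng.standard_normal((60, n_voxels))
blurred_no_object = rng.standard_normal((60, n_voxels))
test_labels = np.tile([0, 1], 30)

clf = LinearSVC(C=1.0, max_iter=5000).fit(train_patterns, train_labels)

# Generalization from clear to blurred scenes, split by object condition.
acc_with = (clf.predict(blurred_with_object) == test_labels).mean()
acc_without = (clf.predict(blurred_no_object) == test_labels).mean()
print(f"with object: {acc_with:.2f}, without object: {acc_without:.2f}")
```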
Project description: Thirty years of research suggests that environmental boundaries (e.g., the walls of an experimental chamber or room) exert a powerful influence on navigational behavior, often to the exclusion of other cues [1-9]. Consistent with this behavioral work, neurons in brain structures that instantiate spatial memory often exhibit firing fields that are strongly controlled by environmental boundaries [10-15]. Despite the clear importance of environmental boundaries for spatial coding, however, a brain region that mediates the perception of boundary information has not yet been identified. We hypothesized that the occipital place area (OPA), a scene-selective region located near the transverse occipital sulcus, might provide this perceptual source by extracting boundary information from visual scenes during navigation. To test this idea, we used transcranial magnetic stimulation (TMS) to interrupt processing in the OPA while subjects performed a virtual-reality memory task that required them to learn the spatial locations of test objects that were either fixed in place relative to the boundary of the environment or moved in tandem with a landmark object. Consistent with our prediction, we found that TMS to the right OPA impaired spatial memory for boundary-tethered, but not landmark-tethered, objects. Moreover, this effect was found when the boundary was defined by a wall, but not when it was defined by a marking on the ground. These results show that the OPA is causally involved in boundary-based spatial navigation and suggest that the OPA is the perceptual source of the boundary information that controls navigational behavior.
Project description: The human brain contains areas that respond selectively to faces, bodies and scenes. Neuroimaging studies have shown that a subset of these areas preferentially respond more to moving than static stimuli, but the reasons for this functional dissociation remain unclear. In the present study, we simultaneously mapped the responses to motion in face-, body- and scene-selective areas in the right hemisphere using moving and static stimuli. Participants (N = 22) were scanned using functional magnetic resonance imaging (fMRI) while viewing videos containing bodies, faces, objects, scenes or scrambled objects, and static pictures from the beginning, middle and end of each video. Results demonstrated that lateral areas, including face-selective areas in the posterior and anterior superior temporal sulcus (STS), the extrastriate body area (EBA) and the occipital place area (OPA) responded more to moving than static stimuli. By contrast, there was no difference between the response to moving and static stimuli in ventral and medial category-selective areas, including the fusiform face area (FFA), occipital face area (OFA), amygdala, fusiform body area (FBA), retrosplenial complex (RSC) and parahippocampal place area (PPA). This functional dissociation between lateral and ventral/medial brain areas that respond selectively to different visual categories suggests that face-, body- and scene-selective networks may be functionally organized along a common dimension.
Project description: Functional imaging studies in humans reliably identify a trio of scene-selective regions, one on each of the lateral [occipital place area (OPA)], ventral [parahippocampal place area (PPA)], and medial [retrosplenial complex (RSC)] cortical surfaces. Recently, we demonstrated differential retinotopic biases for the contralateral lower and upper visual fields within OPA and PPA, respectively. Here, using functional magnetic resonance imaging, we combine detailed mapping of both population receptive fields (pRF) and category selectivity with independently acquired resting-state functional connectivity analyses to examine scene and retinotopic processing within medial parietal cortex. We identified a medial scene-selective region, which was contained largely within the posterior and ventral bank of the parieto-occipital sulcus (POS). While this region is typically referred to as RSC, the spatial extent of our scene-selective region typically did not extend into retrosplenial cortex, and thus we adopt the term medial place area (MPA) to refer to this visually defined scene-selective region. Intriguingly, MPA co-localized with a region identified solely on the basis of retinotopic sensitivity using pRF analyses. We found that MPA demonstrates a significant contralateral visual field bias, coupled with large pRF sizes. Unlike OPA and PPA, MPA did not show a consistent bias toward a single visual quadrant. MPA also co-localized with a region identified by strong differential functional connectivity with PPA and the human face-selective fusiform face area (FFA), commensurate with its functional selectivity. Functional connectivity with OPA was much weaker than with PPA, and similar to that with the face-selective occipital face area (OFA), suggesting a closer link with ventral than with lateral cortex. Consistent with prior research, we also observed differential functional connectivity in medial parietal cortex for anterior over posterior PPA, as well as in a region on the lateral surface, the caudal inferior parietal lobule (cIPL). However, the differential connectivity in medial parietal cortex was found principally anterior to MPA. We suggest that there is a posterior-anterior gradient within medial parietal cortex, with posterior regions in the POS showing retinotopically based scene selectivity and more anterior regions showing connectivity that may be more reflective of abstract, navigationally pertinent, and possibly mnemonic representations.
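As a simplified illustration of the resting-state functional-connectivity comparison described above (not the study's pipeline), seed-to-target connectivity can be summarized as the correlation between mean ROI time courses; the time courses below are simulated placeholders.

```python
# Sketch of a seed-based resting-state functional-connectivity comparison:
# correlate a seed ROI's mean time course (here MPA) with other ROIs' time
# courses. The time courses are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_timepoints = 300
rois = ["MPA", "PPA", "OPA", "FFA", "OFA"]

# Mean resting-state time course per ROI (simulated).
timecourses = {roi: rng.standard_normal(n_timepoints) for roi in rois}

seed = timecourses["MPA"]
for roi in rois[1:]:
    # Pearson correlation between the seed and each target ROI.
    r = np.corrcoef(seed, timecourses[roi])[0, 1]
    print(f"MPA - {roi}: r = {r:.2f}")
```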
Project description: We used a multi-voxel classification analysis of functional magnetic resonance imaging (fMRI) data to determine to what extent item-specific information about complex natural scenes is represented in several category-selective areas of human extrastriate visual cortex during visual perception and visual mental imagery. Participants in the scanner either viewed or were instructed to visualize previously memorized natural scene exemplars, and the neuroimaging data were subsequently subjected to a multi-voxel pattern analysis (MVPA) using a support vector machine (SVM) classifier. We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS). Furthermore, item-specific information from perceived scenes was re-instantiated during mental imagery of the same scenes. These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex). Taken together, such findings support models suggesting that reflective mental processes are subserved by the re-instantiation of perceptual information in high-level visual cortex. We also examined activity in the fusiform face area (FFA) and found that it, too, contained significant item-specific scene information during perception, but not during mental imagery. This suggests that although decodable scene-relevant activity occurs in FFA during perception, FFA activity may not be a necessary (or even relevant) component of one's mental representation of visual scenes.
Project description: Humans, like other animals, rely on accurate knowledge of their spatial position and facing direction to stay oriented in the surrounding space. Although previous neuroimaging studies demonstrated that scene-selective regions (the parahippocampal place area or PPA, the occipital place area or OPA, and the retrosplenial complex or RSC) and the hippocampus (HC) are implicated in coding position and facing direction within small (room-sized) and large-scale navigational environments, little is known about how these regions represent these spatial quantities in a large open-field environment. Here we used functional magnetic resonance imaging (fMRI) in humans to explore the neural codes of this navigationally relevant information while participants viewed images that varied in position and facing direction within a familiar, real-world circular square. We observed neural adaptation for repeated directions in the HC, even though no navigational task was required. Further, we found that the amount of knowledge of the environment interacts with PPA selectivity in encoding positions: individuals who needed more time to memorize positions in the square during a preliminary training task showed less neural attenuation in this scene-selective region. We also observed adaptation effects that reflect the real distances between consecutive positions in scene-selective regions, but not in the HC. When examining the multi-voxel patterns of activity, we observed that scene-responsive regions and the HC encoded both types of spatial information, and that RSC classification accuracy for positions was higher in individuals scoring higher on a self-report questionnaire of spatial abilities. Our findings provide new insight into how the human brain represents a real, large-scale "vista" space, demonstrating the presence of neural codes for position and direction in both scene-selective and hippocampal regions, and revealing the existence, in the former regions, of a map-like spatial representation reflecting real-world distances between consecutive positions.
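A minimal sketch of the distance-related adaptation analysis described above, using simulated values rather than the study's data, would regress each trial's ROI response on the real-world distance between the current and preceding positions:

```python
# Sketch of a distance-related adaptation analysis: regress each trial's ROI
# response on the real-world distance between the current position and the
# position shown on the preceding trial. Values are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_trials = 180

# Distance (in meters) between consecutive positions in the square.
distance_to_previous = rng.uniform(0, 50, n_trials)
# Simulated ROI responses: adaptation predicts weaker responses after nearby
# positions, i.e., a positive slope of response against distance.
roi_response = 0.01 * distance_to_previous + rng.normal(0, 0.5, n_trials)

slope, intercept, r, p, stderr = stats.linregress(distance_to_previous, roi_response)
print(f"slope = {slope:.3f} per meter, r = {r:.2f}, p = {p:.4f}")
```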