Short-term plasticity of visuo-haptic object recognition.
ABSTRACT: Functional magnetic resonance imaging (fMRI) studies have provided ample evidence for the involvement of the lateral occipital cortex (LO), fusiform gyrus (FG), and intraparietal sulcus (IPS) in visuo-haptic object integration. Here we applied 30 min of sham (non-effective) or real offline 1 Hz repetitive transcranial magnetic stimulation (rTMS) to perturb neural processing in left LO immediately before subjects performed a visuo-haptic delayed-match-to-sample task during fMRI. In this task, subjects had to match sample (S1) and target (S2) objects presented sequentially within or across vision and/or haptics in both directions (visual-haptic or haptic-visual) and decide whether or not S1 and S2 were the same objects. Real rTMS transiently decreased activity at the site of stimulation and remote regions such as the right LO and bilateral FG during haptic S1 processing. Without affecting behavior, the same stimulation gave rise to relative increases in activation during S2 processing in the right LO, left FG, bilateral IPS, and other regions previously associated with object recognition. Critically, the modality of S2 determined which regions were recruited after rTMS. Relative to sham rTMS, real rTMS induced increased activations during crossmodal congruent matching in the left FG for haptic S2 and the temporal pole for visual S2. In addition, we found stronger activations for incongruent than congruent matching in the right anterior parahippocampus and middle frontal gyrus for crossmodal matching of haptic S2 and in the left FG and bilateral IPS for unimodal matching of visual S2, only after real but not sham rTMS. The results imply that a focal perturbation of the left LO triggers modality-specific interactions between the stimulated left LO and other key regions of object processing possibly to maintain unimpaired object recognition. 
This suggests that visual and haptic processing engage partially distinct brain networks during visuo-haptic object matching.
Project description: The parietal operculum (OP) holds a haptic memory of object geometry that is readily transferable to the motor cortex, but a causal role of the OP in memory-guided grasping has remained speculative. We explored this issue using online high-frequency repetitive transcranial magnetic stimulation (rTMS). The experimental task was performed by blindfolded participants acting on objects of variable size. Trials consisted of three phases: haptic exploration of an object, a delay, and a reach-to-grasp movement onto the explored object. Motor performance was evaluated by the kinematics of finger aperture. Online rTMS was applied to the left OP region separately in each of the three phases of the task. The results showed that rTMS altered grip aperture only when applied to the OP during the delay phase. In a second experiment, a haptic discrimination (match-to-sample) task was carried out on objects similar to those used in the first experiment. Online rTMS was again applied to the left OP and induced no psychophysical effects on the explicit detection of haptic object size. We conclude that neural activity in the OP region is necessary for proficient memory-guided haptic grasping. The function of the OP seems to be critical while the haptic memory trace is maintained, and less so while it is encoded or retrieved.
Project description: When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the "weight" given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different "gains" between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known.
These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices.
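The reliability-weighted combination described in this project is the standard maximum-likelihood cue-integration model. The sketch below is illustrative only, not the authors' code: it assumes haptic noise arises in the sensed hand opening and is simply scaled by the tool gain, a deliberate simplification of the Weber's-law violations the study actually reports.

```python
def mle_combine(s_v, sigma_v, s_h, sigma_h):
    """Reliability-weighted (maximum-likelihood) combination of a visual
    and a haptic size estimate, where reliability = 1 / variance."""
    r_v, r_h = 1.0 / sigma_v**2, 1.0 / sigma_h**2
    w_v = r_v / (r_v + r_h)               # weight given to vision
    s_hat = w_v * s_v + (1.0 - w_v) * s_h # combined size estimate
    sigma_hat = (1.0 / (r_v + r_h)) ** 0.5  # combined SD, never larger
    return s_hat, w_v, sigma_hat          # than either single-cue SD

def haptic_sd_with_tool(sigma_opening, gain):
    """Simplifying assumption: with a pliers-like tool, object size =
    hand opening * gain, so hand-opening noise scales with the gain."""
    return gain * sigma_opening

# Equal reliabilities -> equal weights; combined SD shrinks by sqrt(2).
s_hat, w_v, sd = mle_combine(s_v=10.0, sigma_v=1.0, s_h=12.0, sigma_h=1.0)
```

Under this simplification, doubling the tool gain doubles the haptic SD, quartering its reliability and shifting weight toward vision, which is the qualitative pattern of reweighting the study tested against observers' behavior.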
Project description: Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. Such studies implement crossmodal matching paradigms to examine how sensory systems use information from different modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing suggests that communication within sensory systems develops earlier than communication across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional magnetic resonance imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7-8.5 years and in adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. The two groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv and was consistent with a medial-to-lateral organization that transitioned from a visual to a haptic bias within the LOC.
These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and children.
Project description: The concept of objects is fundamental to cognition and is defined by a consistent set of sensory properties and physical affordances. Although it is unknown how the abstract concept of an object emerges, most accounts assume that visual or haptic boundaries are crucial in this process. Here, we tested an alternative hypothesis that boundaries are not essential but simply reflect a more fundamental principle: consistent visual or haptic statistical properties. Using a novel visuo-haptic statistical learning paradigm, we familiarised participants with objects defined solely by across-scene statistics provided either visually or through physical interactions. We then tested them on both a visual familiarity and a haptic pulling task, thus measuring both within-modality learning and across-modality generalisation. Participants showed strong within-modality learning and 'zero-shot' across-modality generalisation which were highly correlated. Our results demonstrate that humans can segment scenes into objects, without any explicit boundary cues, using purely statistical information.
Project description: Visuo-haptic biases are observed when bringing your unseen hand to a visual target. These biases differ between participants but are consistent within them. We investigated the usefulness of adjusting haptic guidance to these user-specific biases in order to align haptic and visual perception. By adjusting haptic guidance according to the biases, we aimed to reduce the conflict between the modalities. We first measured the biases using an adaptive procedure. Next, we measured performance in a pointing task under three conditions: 1) visual images adjusted to user-specific biases, without haptic guidance, 2) veridical visual images combined with haptic guidance, and 3) shifted visual images combined with haptic guidance. Adding haptic guidance increased precision. Combining haptic guidance with user-specific visual information yielded the highest accuracy and the lowest level of conflict with the guidance at the end point. These results show the potential of correcting for user-specific perceptual biases when designing haptic guidance.
Project description: It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly selected solid object (bell pepper or randomly shaped "glaven") for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object's shape was defined either by boundary contours alone (i.e., presented as a silhouette), by specular highlights alone, by specular highlights combined with boundary contours, or by texture. In addition, there was a haptic condition in which the participants haptically explored the initial single object with both hands (but could not see it) for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and absent in the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions; for example, participants' performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions than in the static conditions. The current results demonstrate that deforming specular highlights or boundary contours facilitates 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.
Project description: Presenting different images to each eye triggers 'binocular rivalry' in which one image is visible and the other suppressed, with the visible image alternating every second or so. We previously showed that binocular rivalry between cross-oriented gratings is altered when the fingertip explores a grooved stimulus aligned with one of the rivaling gratings: the matching visual grating's dominance duration was lengthened and its suppression duration shortened. In a more robust test, we here measure visual contrast sensitivity during rivalry dominance and suppression, with and without exploration of the grooved surface, to determine if rivalry suppression strength is modulated by touch. We find that a visual grating undergoes 45% less suppression when observers touch an aligned grating, compared to a cross-oriented one. Touching an aligned grating also improved visual detection thresholds for the 'invisible' suppressed grating by 2.4 dB, relative to a vision-only condition. These results show that congruent haptic stimulation prevents a visual stimulus from becoming deeply suppressed in binocular rivalry. Moreover, because congruent touch acted on the phenomenally invisible grating, this visuo-haptic interaction must precede awareness and likely occurs early in visual processing.
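To give the 2.4 dB figure an intuitive scale: contrast sensitivity changes in psychophysics are commonly expressed in decibels as 20 times the base-10 log of the contrast ratio. Assuming that convention (the study may use another), the reported gain corresponds to roughly a quarter reduction in the detection threshold:

```python
# Assuming the common psychophysics convention dB = 20 * log10(ratio),
# convert the reported 2.4 dB sensitivity improvement into a ratio of
# new to old contrast detection thresholds.
improvement_db = 2.4
threshold_ratio = 10 ** (-improvement_db / 20)
# threshold_ratio is about 0.76: the suppressed grating is detected at
# roughly a 24% lower contrast when a congruent grating is touched.
```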
Project description: Although visual cortical engagement in haptic shape perception is well established, its relationship with visual imagery remains controversial. We addressed this using functional magnetic resonance imaging during separate visual object imagery and haptic shape perception tasks. Two experiments were conducted. In the first experiment, the haptic shape task employed unfamiliar, meaningless objects, whereas familiar objects were used in the second experiment. The activations evoked by visual object imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. In the companion paper (Deshpande et al., this issue), we used task-specific functional and effective connectivity analyses to provide convergent evidence: these analyses showed that the neural networks underlying visual imagery were similar to those underlying haptic shape perception of familiar, but not unfamiliar, objects. We conclude that visual object imagery is more closely linked to haptic shape perception when objects are familiar, compared to when they are unfamiliar.
Project description: Segregation of information flow along a dorsally directed pathway for processing object location and a ventrally directed pathway for processing object identity is well established in the visual and auditory systems, but is less clear in the somatosensory system. We hypothesized that segregation of location vs. identity information in touch would be evident if texture is the relevant property for stimulus identity, given the salience of texture for touch. Here, we used functional magnetic resonance imaging (fMRI) to investigate whether the pathways for haptic and visual processing of location and texture are segregated, and the extent of bisensory convergence. Haptic texture-selectivity was found in the parietal operculum and posterior visual cortex bilaterally, and in parts of left inferior frontal cortex. There was bisensory texture-selectivity at some of these sites in posterior visual and left inferior frontal cortex. Connectivity analyses demonstrated, in each modality, flow of information from unisensory non-selective areas to modality-specific texture-selective areas and further to bisensory texture-selective areas. Location-selectivity was mostly bisensory, occurring in dorsal areas, including the frontal eye fields and multiple regions around the intraparietal sulcus bilaterally. Many of these regions received input from unisensory areas in both modalities. Together with earlier studies, the activation and connectivity analyses of the present study establish that somatosensory processing flows into segregated pathways for location and object identity information. The location-selective somatosensory pathway converges with its visual counterpart in dorsal frontoparietal cortex, while the texture-selective somatosensory pathway runs through the parietal operculum before converging with its visual counterpart in visual and frontal cortex.
Both segregation of sensory processing according to object property and multisensory convergence appear to be universal organizing principles.
Project description: In perceptual psychology, estimations of visual depth and size under different spatial layouts have been extensively studied, but evidence from virtual environments (VEs) is relatively scarce. The emergence of human-computer interaction (HCI) and virtual reality (VR) has raised the question of how human operators perform actions based on estimates of visual properties in VR, especially when the sensory cues associated with the same object conflict. We report on an experiment in which participants compared the size of a visual sphere to that of a haptic sphere belonging to the same object in a VE. The sizes from the visual and haptic modalities were either identical or conflicting (with the visual size larger than the haptic size, or vice versa). We used three standard haptic references (small, medium, and large) and asked participants to compare visual sizes with the given reference using the method of constant stimuli. Results show a dominant functional priority of visual size perception. Moreover, observers demonstrated a central tendency effect: over-estimation for smaller haptic sizes but under-estimation for larger haptic sizes. These results are in line with previous studies in real environments (REs). We discuss the current findings within the framework of adaptation level theory for the haptic size reference. This work provides important implications for the optimal design of human-computer interactions that integrate 3D visual-haptic information in a VE.
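In a constant-stimuli design like this one, each haptic reference yields a psychometric function whose point of subjective equality (PSE) quantifies the visual-haptic bias. A minimal sketch with hypothetical numbers (not the study's data; sizes in mm and the logistic form are assumptions for illustration), where a fitted PSE below a large haptic reference would correspond to the under-estimation reported for larger sizes:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: P('visual looks larger') vs. visual size."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Hypothetical constant-stimuli data for one (large, 60 mm) haptic
# reference: visual comparison sizes and proportions of "larger" responses,
# generated here from an assumed underlying psychometric function.
vis_sizes = np.array([40.0, 45.0, 50.0, 55.0, 60.0, 65.0, 70.0])
true_pse, true_slope = 55.0, 3.0  # assumed values for the demo
p_larger = logistic(vis_sizes, true_pse, true_slope)

# Fit the psychometric function; the fitted PSE is the visual size that
# feels equal to the haptic reference. A PSE below the 60 mm reference,
# as in this demo, means the haptic sphere is perceived as smaller than
# it is - the under-estimation the study reports for larger references.
(pse, slope), _ = curve_fit(logistic, vis_sizes, p_larger, p0=[60.0, 1.0])
```

With real response counts one would fit by maximum likelihood (binomial) rather than least squares, but the PSE-based logic of the comparison is the same.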