ABSTRACT: Information integration across the senses is fundamental for effective interactions with our environment. The extent to which signals from different senses can interact in the absence of awareness is controversial. Combining the spatial ventriloquist illusion and dynamic continuous flash suppression (dCFS), we investigated in two experiments whether visual signals that observers do not consciously perceive can influence spatial perception of sounds. Importantly, dCFS obliterated visual awareness only on a fraction of trials, allowing us to compare spatial ventriloquism for physically identical flashes that were judged as visible or invisible. Our results show a stronger ventriloquist effect for visible than for invisible flashes. Critically, a robust ventriloquist effect emerged also for invisible flashes, even when participants were at chance at locating the flash. Collectively, our findings demonstrate that signals that we are not aware of in one sensory modality can alter spatial perception of signals in another sensory modality.
Project description:Many events from daily life are audiovisual (AV). Handclaps produce both visual and acoustic signals that are transmitted in air and processed by our sensory systems at different speeds, reaching the brain's multisensory integration areas at different moments. Signals must somehow be associated in time to correctly perceive synchrony. This project aims to quantify the mutual temporal attraction between the senses and to characterize the different interaction modes depending on the offset. On every trial, participants saw four beep-flash pairs regularly spaced in time, followed after a variable delay by a fifth event in the test modality (auditory or visual). A large range of AV offsets was tested. The task was to judge whether the last event came before or after what was expected given the perceived rhythm, while attending only to the test modality. Flashes were perceptually shifted in time toward beeps, the attraction being stronger for lagging than for leading beeps. Conversely, beeps were not shifted toward flashes, indicating a nearly total auditory capture. The subjective timing of the visual component resulting from the AV interaction could easily be shifted forward but not backward in time, an intuitive constraint stemming from minimum visual processing delays. Finally, matching auditory and visual time-sensitivity with beeps embedded in pink noise produced very similar mutual attractions of beeps and flashes. Breaking the natural auditory preference for timing thus allowed vision to take over as well, showing that this preference is not hardwired.
Project description:At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially-aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
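For readers unfamiliar with the modeling framework referred to above, the following is a minimal sketch of a standard Bayesian causal-inference observer in the style of Koerding et al. (2007); the parameter values, variable names, and the use of model averaging are illustrative assumptions, not the fitted model from this project.

```python
import numpy as np

def causal_inference_estimate(x_v, x_t, sigma_v, sigma_t, sigma_p, p_common):
    """Standard causal-inference observer (Koerding et al., 2007 style).

    x_v, x_t  : noisy visual / tactile position measurements (deg)
    sigma_v/t : visual / tactile sensory noise SDs
    sigma_p   : SD of the zero-centred spatial prior
    p_common  : prior probability that both signals share one source
    Returns the model-averaged estimate of the tactile location.
    """
    # Likelihood of both measurements under a single common source (C = 1)
    var_sum = (sigma_v**2 * sigma_t**2 + sigma_v**2 * sigma_p**2
               + sigma_t**2 * sigma_p**2)
    like_c1 = (np.exp(-((x_v - x_t)**2 * sigma_p**2 + x_v**2 * sigma_t**2
                        + x_t**2 * sigma_v**2) / (2 * var_sum))
               / (2 * np.pi * np.sqrt(var_sum)))

    # Likelihood under two independent sources (C = 2)
    like_c2 = (np.exp(-x_v**2 / (2 * (sigma_v**2 + sigma_p**2)))
               / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_p**2))
               * np.exp(-x_t**2 / (2 * (sigma_t**2 + sigma_p**2)))
               / np.sqrt(2 * np.pi * (sigma_t**2 + sigma_p**2)))

    # Posterior probability of a common source (Bayes' rule)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Reliability-weighted location estimates under each causal structure
    s_hat_c1 = ((x_v / sigma_v**2 + x_t / sigma_t**2)
                / (1 / sigma_v**2 + 1 / sigma_t**2 + 1 / sigma_p**2))
    s_hat_c2 = (x_t / sigma_t**2) / (1 / sigma_t**2 + 1 / sigma_p**2)

    # Model averaging: mix the two estimates by the causal posterior
    return post_c1 * s_hat_c1 + (1 - post_c1) * s_hat_c2

# Hypothetical trial: a flash 10 deg to the right of the touch biases the
# tactile estimate rightward; raising p_common strengthens the bias, which is
# the mechanism invoked above for the attentional enhancement.
print(causal_inference_estimate(10.0, 0.0, sigma_v=2.0, sigma_t=5.0,
                                sigma_p=20.0, p_common=0.5))
```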
Project description:Optical lightning measurements from the Lightning Imaging Sensor (LIS) are used to map the lateral development of lightning flashes and produce statistics that describe their motion through the electrified cloud. This is accomplished by monitoring the frame-by-frame (group-level) evolution of the optical signals produced during each flash. While the optical flash properties recorded by LIS gravitate towards the most exceptional optical signals produced during the flash, group-level data describe the evolution and lateral development of the flash resulting from physical lightning processes that emit enough light out of the top of the cloud to be detected from orbit. The groups that comprise LIS flashes constitute examples of complex lateral flash structure that can extend 80 km in length with dozens to hundreds of visible branches. The lateral development of individual flashes is described in terms of its speed and direction of motion, whether the development extends the overall length of the flash or reilluminates an existing segment, and whether it is directed inbound or outbound with respect to the origin. Sixty-five percent of propagating groups are directed outbound from the origin, 22% extend the length of the flash, and 3-5% reilluminate an existing branch. LIS flashes are commonly oriented from east to west and develop at speeds ranging from 10^4 to 10^6 m/s, consistent with large-scale leader development. These results provide evidence that lightning imagers may be used in conjunction with Lightning Mapping Array systems to document physical lightning phenomena across global domains.
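As a rough illustration of how such motion statistics can be derived, the sketch below computes group-to-group propagation speed from consecutive group centroids and their timestamps; the centroid coordinates, timing, and function names are hypothetical and are not drawn from LIS data.

```python
import numpy as np

EARTH_RADIUS_M = 6.371e6

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(lon2 - lon1)
    a = np.sin(dlat / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2)**2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def group_propagation_speeds(lats, lons, times_s):
    """Speed (m/s) between consecutive group centroids within one flash."""
    lats, lons, times_s = map(np.asarray, (lats, lons, times_s))
    dist = haversine_m(lats[:-1], lons[:-1], lats[1:], lons[1:])
    return dist / np.diff(times_s)

# Hypothetical flash that develops laterally over ~45 ms: the resulting speeds
# fall in the 10^4-10^6 m/s range quoted above for large-scale leader development.
print(group_propagation_speeds([34.70, 34.72, 34.75],
                               [-86.60, -86.58, -86.55],
                               [0.000, 0.020, 0.045]))
```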
Project description:Signals in one sensory modality can influence perception of another, for example the bias of visual timing by audition: temporal ventriloquism. Strong accounts of temporal ventriloquism hold that the sensory representation of visual signal timing changes to that of the nearby sound. Alternatively, underlying sensory representations do not change. Rather, perceptual grouping processes based on spatial, temporal, and featural information produce best estimates of global event properties. In support of this interpretation, when feature-based perceptual grouping conflicts with temporal information-based grouping in scenarios that reveal temporal ventriloquism, the effect is abolished. However, previous demonstrations of this disruption used long-range visual apparent-motion stimuli. We investigated whether similar manipulations of feature grouping could also disrupt the classical temporal ventriloquism demonstration, which occurs over a short temporal range. We estimated the precision of participants' reports of which of two visual bars occurred first. The bars were accompanied by different cross-modal signals that onset synchronously or asynchronously with each bar. Participants' performance improved with asynchronous relative to synchronous presentation - temporal ventriloquism. However, unlike in the long-range apparent-motion paradigm, this improvement was unaffected by different combinations of cross-modal features, suggesting that featural similarity of cross-modal signals may not modulate cross-modal temporal influences at short time scales.
Project description:Neuroscience investigations are most often focused on the prediction of future perception or decisions based on prior brain states or stimulus presentations. However, the brain can also process information retroactively, such that later stimuli impact conscious percepts of the stimuli that have already occurred (called "postdiction"). Postdictive effects have thus far been mostly unimodal (such as apparent motion), and the models for postdiction have accordingly been limited to early sensory regions of one modality. We have discovered two related multimodal illusions in which audition instigates postdictive changes in visual perception. In the first illusion (called the "Illusory Audiovisual Rabbit"), the location of an illusory flash is influenced by an auditory beep-flash pair that follows the perceived illusory flash. In the second illusion (called the "Invisible Audiovisual Rabbit"), a beep-flash pair following a real flash suppresses the perception of the earlier flash. Thus, we showed experimentally that these two effects are influenced significantly by postdiction. The audiovisual rabbit illusions indicate that postdiction can bridge the senses, uncovering a relatively-neglected yet critical type of neural processing underlying perceptual awareness. Furthermore, these two new illusions broaden the Double Flash Illusion, in which a single real flash is doubled by two sounds. Whereas the double flash indicated that audition can create an illusory flash, these rabbit illusions expand audition's influence on vision to the suppression of a real flash and the relocation of an illusory flash. These new additions to auditory-visual interactions indicate a spatio-temporally fine-tuned coupling of the senses to generate perception.
Project description:Integration of sensory information across multiple senses is most likely to occur when signals are spatiotemporally coupled. Yet, recent research on audiovisual rate discrimination indicates that random sequences of light flashes and auditory clicks are integrated optimally regardless of temporal correlation. This may be due to 1) temporal averaging rendering temporal cues less effective; 2) difficulty extracting causal-inference cues from rapidly presented stimuli; or 3) task demands prompting integration without concern for the spatiotemporal relationship between the signals. We conducted a rate-discrimination task (Exp 1), using slower, more random sequences than previous studies, and a separate causal-judgement task (Exp 2). Unisensory and multisensory rate-discrimination thresholds were measured in Exp 1 to assess the effects of temporal correlation and spatial congruence on integration. The performance of most subjects was indistinguishable from optimal for spatiotemporally coupled stimuli, and generally sub-optimal in other conditions, suggesting observers used a multisensory mechanism that is sensitive to both temporal and spatial causal-inference cues. In Exp 2, subjects reported whether temporally uncorrelated (but spatially co-located) sequences were perceived as sharing a common source. A unified percept was affected by click-flash pattern similarity and the maximum temporal offset between individual clicks and flashes, but not by the proportion of synchronous click-flash pairs. A simulation analysis revealed that the stimulus-generation algorithms of previous studies are likely responsible for the observed integration of temporally independent sequences. By combining results from Exps 1 and 2, we found better rate-discrimination performance for sequences that are more likely to be integrated than for those that are not. Our results support the principle that multisensory stimuli are optimally integrated when spatiotemporally coupled, and provide insight into the temporal features used for coupling in causal inference.
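For context, "optimal" here refers to the standard maximum-likelihood benchmark, under which the predicted bimodal discrimination threshold follows directly from the two unisensory thresholds; the sketch below shows that prediction with hypothetical threshold values, not data from this study.

```python
import numpy as np

def optimal_bimodal_threshold(sigma_a, sigma_v):
    """Maximum-likelihood prediction for the audiovisual rate-discrimination
    threshold, given auditory and visual unisensory thresholds (same units)."""
    return np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

# Hypothetical unisensory thresholds (events/s): the optimal audiovisual
# threshold is never worse than the better single modality.
print(optimal_bimodal_threshold(sigma_a=1.2, sigma_v=1.8))   # ~1.0
```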
Project description:In temporal ventriloquism, auditory events can illusorily attract the perceived timing of a visual onset [1-3]. We investigated whether the timing of a static sound can also influence spatio-temporal processing of visual apparent motion, induced here by visual bars alternating between opposite hemifields. Perceived direction typically depends on the relative interval in timing between visual left-right and right-left flashes (e.g., rightwards motion dominating when left-to-right interflash intervals are shortest). In our new multisensory condition, interflash intervals were equal, but auditory beeps could slightly lag the right flash yet slightly lead the left flash, or vice versa. This auditory timing strongly influenced perceived visual motion direction, despite providing no spatial auditory motion signal whatsoever. Moreover, prolonged adaptation to such auditorily driven apparent motion produced a robust visual motion aftereffect in the opposite direction, when measured in subsequent silence. Control experiments argued against accounts in terms of possible auditory grouping or possible attention capture. We suggest that the motion arises because the sounds change perceived visual timing, as we separately confirmed. Our results provide a new demonstration of multisensory influences on sensory-specific perception, with the timing of a static sound influencing spatio-temporal processing of visual motion direction.
Project description:Neuroimaging studies combining functional magnetic resonance imaging (fMRI) and electrophysiology provide the link between neural activity and the blood oxygenation level-dependent (BOLD) response. Here, BOLD responses to light flashes were imaged at 11.7T and compared with neural recordings from the superior colliculus (SC) and primary visual cortex (V1) in rat brain, regions with different basal blood flow and energy demand. Our goal was to assess neurovascular coupling in V1 and SC as reflected by the temporal/spatial variance of impulse response functions (IRFs) and to assess any implications for general linear modeling (GLM) of BOLD responses. Light flashes induced high-magnitude neural/BOLD responses reproducibly in both regions. However, neural/BOLD responses from SC and V1 were markedly different. SC signals followed the boxcar shape of the stimulation paradigm at all flash rates, whereas V1 signals were characterized by onset/offset transients that exhibited different flash-rate dependencies. We find that IRF(SC) is generally time-invariant across a wider range of flash rates than IRF(V1), whereas IRF(SC) and IRF(V1) are both space-invariant. These results illustrate the importance of measured neural signals for the interpretation of fMRI by showing that GLM of BOLD responses may lead to misinterpretation of neural activity in some cases.
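To make the GLM assumption at issue concrete, the sketch below builds a regressor by convolving a boxcar stimulation paradigm with one fixed impulse response, which is precisely the time- and space-invariance being tested; the double-gamma IRF shape, timing, and sampling rate are generic illustrative choices, not the parameters of the 11.7T experiments.

```python
import numpy as np
from scipy.stats import gamma

def canonical_irf(t, peak=6.0, undershoot=16.0, ratio=6.0):
    """Generic double-gamma haemodynamic impulse response (arbitrary units)."""
    return gamma.pdf(t, peak) - gamma.pdf(t, undershoot) / ratio

# Boxcar paradigm: 30 s of flashes on, 30 s off, sampled at 1 s resolution
t = np.arange(0, 300, 1.0)
boxcar = ((t % 60) < 30).astype(float)

# A GLM assumes one time- and space-invariant IRF for every voxel/region
regressor = np.convolve(boxcar, canonical_irf(np.arange(0, 32, 1.0)))[:len(t)]

def glm_beta(bold_timecourse, regressor):
    """Least-squares amplitude of the modelled response (plus intercept)."""
    X = np.column_stack([regressor, np.ones_like(regressor)])
    beta, *_ = np.linalg.lstsq(X, bold_timecourse, rcond=None)
    return beta[0]

# If the true IRF changes with flash rate (as reported for V1), this single
# fixed regressor will systematically mis-estimate the underlying neural activity.
```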
Project description:The optical energy emitted by lightning flashes interacts with the surrounding cloud medium through scattering and absorption. The optical signals recorded by space-based lightning imagers describe a convolution of lightning flash energetics and radiative transfer effects in the intervening cloud layer. A thundercloud imaging technique is presented that characterizes cloud regions based on how they are illuminated by lightning. This technique models the spatial distribution of optical energy in radiant lightning pulses to determine whether and to what extent each illuminated cloud pixel behaves like a homogeneous planar cloud layer. A gridded product is constructed that differentiates flashes that illuminate convective cells from stratiform flashes with long horizontal channels and anvil flashes whose optical emissions reflect off of nearby cloud surfaces. Producing this imagery with a rolling 15-min window allows us to visualize changes in convection with a rapid (20 s) update cycle.
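One way to realize the rolling-window part of such a gridded product is sketched below: per-update grids of flash-classification counts are kept in a fixed-length queue spanning 15 minutes and summed at each 20-s step; the grid size, category, and data structures are assumptions for illustration, not the actual product definition.

```python
from collections import deque
import numpy as np

GRID_SHAPE = (500, 500)          # hypothetical analysis grid
WINDOW_S, STEP_S = 15 * 60, 20   # 15-min rolling window, 20-s update cycle

frames = deque(maxlen=WINDOW_S // STEP_S)   # one count grid per 20-s step

def update(convective_pixels):
    """Ingest cells illuminated by convective-type flashes during the latest
    20-s step and return the rolling 15-min count per grid cell."""
    frame = np.zeros(GRID_SHAPE, dtype=np.int32)
    for row, col in convective_pixels:
        frame[row, col] += 1
    frames.append(frame)
    return np.sum(np.stack(frames), axis=0)   # counts over (up to) the last 15 min
```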
Project description:Optical lightning observations from space reveal a wide range of flash structure. Lightning imagers such as the Geostationary Lightning Mapper and Lightning Imaging Sensor measure flash appearance by recording transient changes in cloud top illumination. The spatial and temporal optical energy distributions reported by these instruments depend on the physical structure of the flash and the distribution of hydrometeors within the thundercloud that scatter and absorb the optical emissions. This study explores how flash appearance changes according to the scale and organization of the parent thunderstorms with a focus on mesoscale convective systems. Clouds near the storm edge are frequently illuminated by large optical flashes that remain stationary between groups. These flashes appear large because their emissions can reflect off the exposed surfaces of nearby clouds to reach the satellite. Large stationary flashes also occur in small isolated thunderstorms. Optical flashes that propagate horizontally, meanwhile, are most frequently observed in electrified stratiform regions where extensive layered charge structures promote lateral development. Highly radiant "superbolts" occur in two scenarios: embedded within raining stratiform regions or in nonraining boundary/anvil clouds where optical emissions can take a relatively clear path to the satellite.