Project description: Purpose: Stereopsis, the human ability to perceive depth from the distinct visual stimuli received by each eye, is foundational to autostereoscopic technology in computing. However, ensuring a stable head position during assessments has been challenging. This study evaluated the utility of artificial intelligence (AI)-enhanced face-tracking technology in overcoming this challenge by ensuring that each eye consistently receives its intended image. Methods: The Lume Pad 2, an autostereoscopic tablet with AI-enhanced face tracking, was used to simulate the quantitative parts of the Stereo Fly and TNO Stereotests for contour and random-dot stereopsis. The study recruited 30 children (14 males, 16 females; mean age 9.2 ± 0.3 years; range 6-12 years) and 30 adults (10 males, 20 females; mean age 29.4 ± 1.0 years; range 21-42 years) to assess the tablet's inter-session reliability. Agreement between the conventional and the tablet-simulated stereotests was tested in a larger group of 181 children (91 males, 90 females; mean age 9.1 ± 0.4 years; range 6-12 years) and 160 adults (69 males, 91 females; mean age 38.6 ± 2.1 years; range 21-65 years). Inter-session reliability and agreement were analyzed using the weighted Kappa coefficient and non-parametric Bland-Altman analysis. Results: The autostereoscopic tablet demonstrated high inter-session reliability (all κ > 0.80), except for the simulated TNO Stereotest in adults, which showed moderate inter-session reliability (κ = 0.571). Non-parametric Bland-Altman analysis revealed zero median differences, confirming consistent inter-session reliability. Similar patterns were observed when comparing the AI-based and conventional methods, with both the weighted Kappa coefficient (all κ > 0.80) and non-parametric Bland-Altman analysis indicating substantial agreement. Agreement between the methodologies was further confirmed by permissible differences smaller than the minimum step range. Conclusion: The integration of AI-based autostereoscopic technology with sub-pixel precision demonstrates significant potential for clinical stereopsis measurements.
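The weighted Kappa coefficient used in this study scores agreement between two ordinal ratings (e.g., stereoacuity levels from two sessions, or from conventional vs. tablet tests) while penalizing larger disagreements more heavily. A minimal sketch of the computation follows; the function and variable names, and the choice of quadratic weights, are illustrative assumptions rather than details taken from the study:

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, n_levels, weights="quadratic"):
    """Cohen's weighted kappa for two sets of ordinal ratings.

    ratings_a, ratings_b: equal-length sequences of category indices
    (0 .. n_levels-1). Quadratic weights penalize larger disagreements
    more heavily than linear weights.
    """
    obs = np.zeros((n_levels, n_levels))
    for a, b in zip(ratings_a, ratings_b):
        obs[a, b] += 1
    obs /= obs.sum()                                   # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance expectation
    i, j = np.indices((n_levels, n_levels))
    if weights == "quadratic":
        w = (i - j) ** 2 / (n_levels - 1) ** 2         # disagreement weights
    else:
        w = np.abs(i - j) / (n_levels - 1)             # linear weights
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement yields κ = 1; on the commonly used Landis and Koch benchmarks, values above 0.80, as reported here, indicate near-perfect reliability.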
Project description: Rbfox1 is a splicing regulator that has been associated with various neurological conditions such as autism spectrum disorder, mental retardation, epilepsy, attention-deficit/hyperactivity disorder, and schizophrenia. We show that in adult rodent retinas, Rbfox1 is expressed in all types of retinal ganglion cells (RGCs) as well as in certain subsets of amacrine cells (ACs) within the inner nuclear and ganglion cell layers. In developing retinas, Rbfox1 can be detected as early as E12. At that age, Rbfox1 is localized in the cytoplasm of differentiated RGCs. Between P0 and P5, strong expression of Rbfox1 was observed in the inner plexiform layer, coinciding with a switch of Rbfox1 localization in RGC somas from predominantly cytoplasmic to predominantly nuclear. Dynamic changes in Rbfox1 expression during the first 10 postnatal days correlate with the stage II spontaneous retinal waves of excitation, which in mice begin around the time of birth and continue for as long as two weeks. By P10, dendritic staining of Rbfox1 was dramatically reduced and remained so in the fully developed retina. In Rbfox1 knockout (KO) animals, no detectable changes in gross retinal morphology were observed two months after Rbfox1 downregulation. However, the visual cliff test revealed marked abnormalities of depth perception in these animals. Retinal transcriptome analysis of Rbfox1 KO mice identified a number of genes involved in establishing neural circuits and synaptic transmission, including Vamp1, Vamp2, Snap25, Trak2, and Slc1A7, suggesting a role for Rbfox1 in the regulation of genes that facilitate AC and RGC synaptic communication.
Project description: Today, we are confronted with high-quality virtual worlds of a completely new nature. For example, digital displays now offer resolutions high enough that their images cannot be distinguished from the real world. However, little is known about how such high-quality representation contributes to the sense of realness, and especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. We found that a higher-resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding runs counter to the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information, such as depth. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal complex (MT+) plays a significant role in monocular depth perception. These results may provide not only new insight into the neural mechanisms of depth perception but also a view of how our visual system will be engaged by state-of-the-art display technologies.
Project description: Both humans and computational methods struggle to discriminate the depths of objects hidden beneath foliage. However, such discrimination becomes feasible when we combine computational optical synthetic aperture sensing with the human ability to fuse stereoscopic images. For object identification tasks, as required in search and rescue, wildlife observation, surveillance, and early wildfire detection, depth assists in differentiating true from false findings, such as people, animals, or vehicles vs. sun-heated patches at ground level or in the tree crowns, or ground fires vs. tree trunks. We used video captured by a drone above dense woodland to test users' ability to discriminate depth. We found that this is impossible when viewing monoscopic video and relying on motion parallax. The same was true with stereoscopic video because of the occlusions caused by foliage. However, when synthetic aperture sensing was used to reduce occlusions and disparity-scaled stereoscopic video was presented, human observers successfully discriminated depth, whereas computational (stereoscopic-matching) methods remained unsuccessful. This shows the potential of systems that exploit the synergy between computational methods and human vision to perform tasks that neither can perform alone.
Project description: The three-dimensional arrangement of atoms in molecules is essential for understanding their properties and behavior. Traditional 2D representations and digital 3D models presented on 2D media often fall short in conveying the complexity of molecular structures. Autostereoscopic displays, often marketed as "holographic" displays, offer a potential solution to this challenge. These displays, with their multi-view and single-view configurations, promise to advance chemistry education and research by offering accurate 3D representations with depth and parallax. In this perspective, I delve into the possibilities and limitations of autostereoscopic displays in chemistry, discussing the underlying technology and potential applications, from research to teaching and science communication. Multi-view autostereoscopic displays excel in facilitating collaborative work by enabling multiple viewers to simultaneously perceive the same 3D structure from different angles. However, they currently suffer from low resolution and high cost, which could limit their immediate widespread adoption. Conversely, single-view autostereoscopic displays with eye tracking, while limited to one viewer at a time, provide higher resolution at lower cost, suggesting that their balance of price to performance might make them the technology of the future. Despite current limitations, autostereoscopic displays possess undeniable potential for shaping the future of chemistry education and research.
Project description: Dynamic environments often contain features that change at slightly different times. Here we investigated how sensitivity to these slight timing differences depends on spatial relationships among stimuli. Stimuli comprised bilaterally presented plaid pairs that rotated, or radially expanded and contracted to simulate depth movement. Left- and right-hemifield stimuli initially moved in the same or opposite directions, then reversed direction at various asynchronies. College students judged whether the direction reversed first on the left or the right (a temporal order judgment, TOJ). TOJ thresholds remained similar across conditions that required tracking only one depth plane or bilaterally synchronized depth planes. However, when stimuli required simultaneously tracking multiple depth planes, counter-phased across hemifields, TOJ thresholds doubled or tripled. This effect depended on perceptual set: increasing the certainty with which participants simultaneously tracked multiple depth planes reduced TOJ thresholds by 45 percent. Even complete certainty, though, failed to reduce multiple-depth-plane TOJ thresholds to the levels obtained with single or bilaterally synchronized depth planes. Overall, the results demonstrate that global depth perception can alter local timing sensitivity. More broadly, the findings reflect a coarse-to-fine spatial influence on how we sense time.
Project description: Stereoscopic capacities vary widely across the normal population. It has become increasingly apparent, however, that the mechanisms underlying stereoscopic depth perception retain a considerable degree of plasticity through adulthood. Here, we contrasted the capacity of neurostimulation, in the form of continuous theta-burst stimulation (cTBS) over strategically chosen sites in the visual cortex, to bring about improvements in stereoscopic depth perception. cTBS was delivered to occipital cortex (V1/V2), the lateral occipital complex (LOC), and a control site (Cz). We measured performance on depth and luminance discrimination tasks before and after stimulation. We found a significant improvement in depth (but not luminance) discrimination performance following cTBS over LOC. By contrast, cTBS over occipital cortex and Cz did not affect performance on either task. These findings suggest that ventral (lateral-occipital) cortex is a key node governing the plasticity of stereoscopic vision in visually normal human observers. We speculate that cTBS exerts inhibitory influences that may suppress internal noise within the nervous system, leading to an improved read-out of depth features.
Project description: Background: Augmented reality (AR)-based interventions are applied in neurorehabilitation with increasing frequency. Depth perception is required for the intended interaction within AR environments. To date, however, it has been unclear whether patients after stroke with impaired visuospatial perception (VSP) are able to perceive depth in an AR environment. Methods: Different aspects of VSP (stereovision and spatial localization/visuoconstruction) were assessed with clinical tests in 20 patients after stroke (mean age: 64 ± 14 years) and 20 healthy subjects (HS, mean age: 28 ± 8 years). The group of HS was recruited to assess the validity of the developed AR tasks in testing stereovision. To measure perception of holographic objects, three distance-judgment tasks and one three-dimensionality task were designed. The effect of impaired stereovision on performance in each AR task was analyzed, and AR task performance was modeled from aspects of VSP using separate regression analyses for HS and for patients. Results: In HS, stereovision had a significant effect on performance in all AR distance-judgment tasks (p = 0.021, p = 0.002, p = 0.046) and in the three-dimensionality task (p = 0.003). Individual quality of stereovision significantly predicted accuracy in each distance-judgment task and was highly related to the ability to perceive holograms as three-dimensional (p = 0.001). In stroke survivors, impaired stereovision had a specific deteriorating effect on only one distance-judgment task (p = 0.042), whereas the three-dimensionality task was unaffected (p = 0.317). Regression analyses confirmed the lack of impact of patients' quality of stereovision on AR task performance, while spatial localization/visuoconstruction significantly predicted accuracy in distance estimation of geometric objects in two AR tasks. Conclusion: Impairments in VSP reduce the ability to estimate distance and to perceive three-dimensionality in an AR environment. While stereovision is key for task performance in HS, spatial localization/visuoconstruction is predominant in patients. Since impairments in VSP are present after stroke, these findings may be crucial when AR is applied in neurorehabilitative treatment. To maximize therapy outcomes, the design of AR games should be adapted to patients' impaired VSP. Trial registration: The trial was not registered, as it was an observational study.
Project description: Flying insects, such as flies or bees, rely on consistent information about the depth structure of the environment when performing flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets, and navigating through space. Insects are thought to obtain depth information visually from the retinal image displacements ("optic flow") during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it remains an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially considering the vast range of light intensities encountered in natural environments. We address this question by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependency of EMDs, effectively enhancing the representation of the nearness of objects and, especially, of their contours.
We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions.
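A correlation-type EMD of the kind referenced above is commonly modeled as a Hassenstein-Reichardt correlator: each arm delays one photoreceptor signal with a low-pass filter and multiplies it with the undelayed signal from the neighboring photoreceptor, and the two mirror-symmetric arms are subtracted to give a direction-selective output. The following sketch illustrates only this core correlator; the first-order filter and all parameter values are illustrative assumptions, not details of the study's model, which adds further peripheral adaptive stages:

```python
import numpy as np

def hassenstein_reichardt(left, right, tau, dt=1.0):
    """Correlation-type elementary motion detector (EMD) sketch.

    `left` and `right` are time series from two neighboring
    photoreceptors. Each arm low-pass filters (delays) one input and
    multiplies it with the undelayed neighbor; subtracting the
    mirror-symmetric arms yields a direction-selective output
    (positive for left-to-right motion in this convention).
    """
    alpha = dt / (tau + dt)  # first-order low-pass smoothing coefficient

    def lowpass(x):
        y = np.zeros(len(x))
        for t in range(1, len(x)):
            y[t] = y[t - 1] + alpha * (x[t] - y[t - 1])
        return y

    return lowpass(left) * right - lowpass(right) * left
```

Feeding the detector a sinusoidal pattern drifting from left to right (the right signal lagging the left) produces a positive mean response, and the reverse direction a negative one; the output magnitude also depends strongly on pattern contrast and spatial structure, which is exactly the dependency the study's peripheral processing stages are shown to mitigate.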
Project description: For over 50 years, ocean scientists have oddly represented ocean oxygen consumption rates as a function of depth, but not temperature, in most biogeochemical models. This tradition inhibits useful discussion of climate-change impacts, for which specific and fundamental temperature-dependent terms are required. Tracer-based determinations of oxygen consumption rates in the deep sea are nearly universally reported as a function of depth in spite of their well-known microbial basis. In recent work, we showed that a carefully determined profile of oxygen consumption rates in the Sargasso Sea can be well represented by a classical Arrhenius function with an activation energy of 86.5 kJ mol-1, leading to a Q10 of 3.63. This indicates that 2°C of warming will produce a 29% increase in ocean oxygen consumption rates, and 3°C of warming a 47% increase, potentially leading to large-scale ocean hypoxia should a sufficient amount of organic matter be available to microbes. Here, we show that the same principles apply to a worldwide collation of tracer-based oxygen consumption rate data and that some 95% of ocean oxygen consumption is driven by temperature, not depth, and thus will have a strong climate dependence. The Arrhenius/Eyring equations are no simple panacea, however: they require a non-equilibrium steady state to exist. Where transient events are in progress, this stricture is not obeyed, and we show one such possible example. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.
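The quoted percentage increases follow directly from the reported Q10: warming by dT °C scales an Arrhenius rate by Q10**(dT/10), and the Q10 itself follows from the activation energy at a given reference temperature. A quick numerical check (the 280 K reference temperature is an assumed deep-ocean value, not a figure from the article):

```python
import math

R = 8.314    # gas constant, J mol^-1 K^-1
Ea = 86.5e3  # activation energy from the abstract, J mol^-1

def q10(T):
    """Q10 implied by an Arrhenius rate with activation energy Ea
    at reference temperature T (kelvin)."""
    return math.exp(Ea / R * (1.0 / T - 1.0 / (T + 10.0)))

def increase_pct(Q10, dT):
    """Percent rate increase for dT degrees C of warming."""
    return (Q10 ** (dT / 10.0) - 1.0) * 100.0

print(round(q10(280.0), 2))          # 3.6, near the reported Q10 of 3.63
print(round(increase_pct(3.63, 2)))  # 29, matching the quoted 29% for 2 degC
print(round(increase_pct(3.63, 3)))  # 47, matching the quoted 47% for 3 degC
```

The slight gap between the computed 3.6 and the reported 3.63 reflects the assumed reference temperature; the percentage increases depend only on the reported Q10.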