Project description: When a highly salient distractor is present in a search array, it speeds target-absent visual search and increases errors during target-present visual search, suggesting lowered quitting thresholds (Moher in Psychol Sci 31(1):31-42, 2020). Missing a critical target in the presence of a highly salient distractor can have dire consequences in real-world search tasks where accurate target detection is crucial, such as baggage screening. The current study therefore examined whether emphasizing either accuracy or speed would eliminate the distractor-generated quitting threshold effect (QTE). Participants completed three blocks of a target-detection search task in which a highly salient distractor appeared on half of all trials. In one block, participants received no instructions or feedback regarding performance. In the remaining two blocks, they received instructions and trial-by-trial feedback that emphasized either response speed or response accuracy. Overall, the distractor lowered quitting thresholds regardless of whether response speed or response accuracy was emphasized in a block of trials. However, the effect of the distractor on target misses was smaller when accuracy was emphasized. It therefore appears that, while the distractor QTE is not easily eradicated by explicit instructions and feedback, it can be shifted. Future research should examine the applicability of these and similar strategies in real-world search scenarios.
Project description: Background: Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one; these improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summed to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate; this form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Principal findings: Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements relative to decisions based on a single sensory modality. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or probability summation. Conclusion: Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
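To make the benchmark predictions above concrete, here is a minimal sketch of variance-weighted (maximum-likelihood) cue combination; the noise values and the conversion to sensitivity are illustrative assumptions, not data or code from the study.

```python
import numpy as np

# Illustrative (assumed) single-modality noise levels; smaller sigma = more precise.
sigma_a = 2.0  # auditory-only estimate
sigma_v = 3.0  # visual-only estimate

# Maximum-likelihood (inverse-variance-weighted) integration: each cue is weighted
# by its reliability 1/sigma^2, and the combined variance is
# sigma_av^2 = (sigma_a^2 * sigma_v^2) / (sigma_a^2 + sigma_v^2).
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
w_v = 1 - w_a
sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))

# With sensitivity (d') inversely proportional to sigma for a fixed signal level,
# MLE integration predicts d'_av = sqrt(d'_a^2 + d'_v^2); probability summation
# predicts a smaller gain, and the study reports gains exceeding even the MLE bound.
d_a, d_v = 1 / sigma_a, 1 / sigma_v
d_mle = np.sqrt(d_a**2 + d_v**2)

print(f"weights: audio={w_a:.2f}, visual={w_v:.2f}")
print(f"combined sigma={sigma_av:.2f} vs single-cue {sigma_a:.1f} and {sigma_v:.1f}")
print(f"MLE-predicted d'={d_mle:.2f} vs single-cue {d_a:.2f} and {d_v:.2f}")
```

The key design point is that the combined variance is always smaller than the smaller single-cue variance, which fixes the ceiling on precision gains that independent-then-combined estimates can achieve.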
Project description: This study investigated how humans process probabilistically associated information when encountering varying levels of uncertainty during implicit visual statistical learning. A novel probabilistic cueing validation paradigm was developed to probe the representation of cues with high (75%), medium (50%), low (25%), or zero predictiveness, following targets that appeared with high (75%), medium (50%), or low (25%) transitional probabilities (TPs). Experiments 1 and 2 demonstrated a significant negative association between cue probe identification accuracy and cue predictiveness when these cues appeared after high-TP but not medium-TP or low-TP targets, establishing exploration-like cue processing triggered by lower-uncertainty rather than higher-uncertainty inputs. Experiment 3 ruled out the confounding factor of probe repetition and extended this finding by demonstrating (1) enhanced representation of low-predictive and zero-predictive but not high-predictive cues across blocks after high-TP targets and (2) enhanced representation of high-predictive but not low-predictive and zero-predictive cues across blocks after low-TP targets for learners who exhibited above-chance awareness of cue-target transitions. These results suggest that during implicit statistical learning, input characteristics alter cue-processing mechanisms, such that exploration-like and exploitation-like mechanisms are triggered by lower-uncertainty and higher-uncertainty cue-target sequences, respectively.
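As a concrete illustration of stimuli governed by transitional probabilities, here is a minimal sketch of a cue-target stream in which each cue is followed by its paired target with a specified TP; the item labels and pairings are hypothetical, not the study's actual design files.

```python
import random

# Hypothetical cue-target pairs and predictiveness levels mirroring the abstract's
# 75% / 50% / 25% / 0% conditions (labels are illustrative).
targets = {"A": "w", "B": "x", "C": "y", "D": "z"}
tp = {"A": 0.75, "B": 0.50, "C": 0.25, "D": 0.0}

def next_item(cue: str) -> str:
    """Return the cue's paired target with probability TP, else a random other target."""
    if random.random() < tp[cue]:
        return targets[cue]
    return random.choice([t for c, t in targets.items() if c != cue])

# Generate a 200-pair stream; over many pairs, the empirical transition rates
# approximate the nominal TPs, which is what an implicit learner is exposed to.
stream = []
for _ in range(200):
    cue = random.choice(list(tp))
    stream.append((cue, next_item(cue)))
```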
Project description: Sensory reweighting is a characteristic of postural control that accommodates environmental changes. Using monocular or binocular cues reduces or increases the influence of a moving room on postural sway, suggesting visual reweighting based on the quality of the available sensory cues. Because in our previous study visual conditions were set before each trial, participants could adjust the weighting of the different sensory systems in an anticipatory manner based upon the reduced quality of the visual information. Nevertheless, in daily situations this adjustment is a dynamic process that occurs during ongoing movement. The purpose of this study was to examine the effect of visual transitions on the coupling between visual information and body sway at two different distances from the front wall of a moving room. Eleven young adults stood upright inside a moving room at two distances (75 and 150 cm) wearing liquid-crystal goggles, whose lenses can switch individually from opaque to transparent and vice versa. Participants stood still for five minutes in each trial, and the lens status changed every minute (no vision to binocular vision, no vision to monocular vision, binocular vision to monocular vision, and vice versa). Results showed that the farther distance and monocular vision reduced the effect of the visual manipulation on postural sway. The effect of a visual transition was condition dependent, with a stronger effect when the transition involved binocular vision than monocular vision. Based upon these results, we conclude that increased distance from the front wall of the room reduced the effect of visual manipulation on postural sway and that sensory reweighting depends on stimulus quality, with binocular vision producing much stronger down- and up-weighting than monocular vision.
Project description: The amnesic symptoms that accompany vestibular dysfunction point to a functional relationship between the vestibular and visual memory systems. However, little is known about the underpinning cognitive processes. As a starting point, we sought evidence for a type of cross-modal interaction commonly observed between other sensory modalities in which the identification of a target (in this case, visual) is facilitated if earlier coupled to a unique, temporally coincident stimulus from another sensory domain (in this case, vestibular). Participants first performed a visual detection task in which stimuli appeared at random locations within a computerised grid. Unknown to participants, the onset of one particular stimulus was accompanied by a brief, sub-sensory pulse of galvanic vestibular stimulation (GVS). Across two visual search experiments, both old and new targets were identified faster when presented in the grid location at which the GVS-paired visual stimulus had appeared in the earlier detection task. This location advantage appeared to be based on relative rather than absolute spatial co-ordinates since the effect held when the search grid was rotated 90°. Together these findings indicate that when individuals return to a familiar visual scene (here, a 2D grid), visual judgements are facilitated when targets appear at a location previously associated with a unique, task-irrelevant vestibular cue. This novel case of multisensory interplay has broader implications for understanding how vestibular signals inform cognitive processes and helps constrain the growing therapeutic application of GVS.
Project description: Eye-tracking studies using arrays of objects have demonstrated that some high-level processing of object semantics can occur in extra-foveal vision, but its role in the allocation of early overt attention is still unclear. This eye-tracking visual search study contributes novel findings by examining the role of object-to-object semantic relatedness and visual saliency on search responses and eye-movement behaviour across arrays of increasing size (3, 5, or 7 objects). Our data show that a critical object was looked at earlier and for longer when it was semantically unrelated, rather than related, to the other objects in the display, both when it was the search target (target-present trials) and when it was a target's semantically related competitor (target-absent trials). Semantic relatedness effects were already manifest during the very first fixation after array onset, were consistently found across increasing set sizes, and were independent of low-level visual saliency, which did not play any role. We conclude that object semantics can be extracted early in extra-foveal vision and can capture overt attention from the very first fixation. These findings pose a challenge to models of visual attention that assume overt attention is guided by the visual appearance of stimuli rather than by their semantics.
Project description: Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye movements. In each task, we found that participants' search performance was impacted by increases in set size (the number of items in the visual display). Critically, we observed that participants' efficiency in the classic search task (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
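For readers unfamiliar with the efficiency measure referred to above, the sketch below shows how a set-size slope is typically estimated from response times; the set sizes and RT values are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical mean response times (ms) at four set sizes.
set_sizes = np.array([4, 8, 16, 32])
mean_rt = np.array([620, 710, 930, 1340])

# Search efficiency is the slope of the RT-by-set-size function (ms per item);
# a shallower slope indicates a more efficient search.
slope, intercept = np.polyfit(set_sizes, mean_rt, 1)
print(f"search slope: {slope:.1f} ms/item (intercept {intercept:.0f} ms)")
```

Computing this slope per participant in each task yields the two per-individual efficiency scores whose correlation is the study's critical test.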
Project description: PubMed is a free search engine for biomedical literature accessed by millions of users from around the world each day. With the rapid growth of the biomedical literature (about two articles are added every minute on average), finding and retrieving the most relevant papers for a given query is increasingly challenging. We present Best Match, a new relevance search algorithm for PubMed that leverages the intelligence of our users and cutting-edge machine-learning technology as an alternative to the traditional date sort order. The Best Match algorithm is trained on past user searches with dozens of relevance-ranking signals (factors), the most important being the past usage of an article, publication date, relevance score, and article type. This new algorithm demonstrates state-of-the-art retrieval performance in benchmarking experiments as well as an improved user experience in real-world testing (over a 20% increase in user click-through rate). Since its deployment in June 2017, we have observed a significant (60%) increase in PubMed searches with relevance sort order; it now assists millions of PubMed searches each week. In this work, we hope to increase the awareness and transparency of this new relevance sort option for PubMed users, enabling them to retrieve information more effectively.
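The abstract describes re-ranking candidate articles by a learned combination of signals. Below is a generic, minimal sketch of that pattern; the signal names, weights, and linear scoring form are illustrative assumptions, not PubMed's actual Best Match model, which is trained with machine learning on past user searches.

```python
from dataclasses import dataclass

@dataclass
class Article:
    pmid: str
    term_score: float  # stage-1 text-match score for the query (e.g., BM25-like)
    past_usage: float  # normalized historical usage signal
    recency: float     # normalized publication-date signal
    type_boost: float  # article-type signal (e.g., review vs. letter)

# Stage 2: a hand-weighted linear stand-in for the learned ranking model.
WEIGHTS = {"term_score": 0.5, "past_usage": 0.3, "recency": 0.15, "type_boost": 0.05}

def rerank(candidates: list[Article]) -> list[Article]:
    """Sort candidates by a weighted combination of relevance signals, best first."""
    def score(a: Article) -> float:
        return sum(w * getattr(a, name) for name, w in WEIGHTS.items())
    return sorted(candidates, key=score, reverse=True)

results = rerank([
    Article("111", 0.9, 0.2, 0.8, 0.1),
    Article("222", 0.7, 0.9, 0.4, 0.6),
])
print([a.pmid for a in results])
```

In a trained system, the hand-set weights would be replaced by a model fit to click-through data, but the two-stage retrieve-then-rerank structure is the same.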
Project description: The onset of vision occurs when neural circuits in the visual cortex are immature, lacking both the full complement of connections and the response selectivity that defines functional maturity. Direction-selective responses are particularly vulnerable to the effects of early visual deprivation, but it remains unclear how stimulus-driven neural activity guides the emergence of cortical direction selectivity. Here we report observations from a motion training protocol that allowed us to monitor the impact of experience on the development of direction-selective responses in visually naive ferrets. Using intrinsic signal imaging techniques, we found that training with a single axis of motion induced the rapid emergence of direction columns that were confined to cortical regions preferentially activated by the training stimulus. Using two-photon calcium imaging techniques, we found that single neurons in visually naive animals exhibited weak directional biases and lacked the strong local coherence in the spatial organization of direction preference that was evident in mature animals. Training with a moving stimulus, but not with a flashed stimulus, strengthened the direction-selective responses of individual neurons and preferentially reversed the direction biases of neurons that deviated from their neighbours. Both effects contributed to an increase in local coherence. We conclude that early experience with moving visual stimuli drives the rapid emergence of direction-selective responses in the visual cortex.
Project description: The ability to remember and navigate to safe places is necessary for survival. Place navigation is known to involve medial entorhinal cortex (MEC)-hippocampal connections. However, learning-dependent changes in neuronal activity in these distinct circuits remain unknown. Here, using optic fiber photometry in freely behaving mice, we discovered the experience-dependent induction of persistent task-associated (PTA) activity. This PTA activity critically depends on learned visual cues and builds up selectively in the MEC layer II-to-dentate gyrus pathway, but not in the MEC layer III-to-CA1 pathway, and its optogenetic suppression disrupts navigation to the target location. The findings suggest that the visual system, MEC layer II, and the dentate gyrus are essential hubs of a memory circuit for visually guided navigation.