Attention Distribution While Detecting Conflicts between Converging Objects: An Eye-Tracking Study.
ABSTRACT: In many domains, including air traffic control, observers have to detect conflicts between moving objects. However, it is unclear what the effect of conflict angle is on observers' conflict detection performance. In addition, it has been speculated that observers use specific viewing techniques while performing a conflict detection task, but evidence for this is lacking. In this study, participants (N = 35) observed two converging objects while their eyes were recorded. They were tasked to continuously indicate whether a conflict between the two objects was present. Independent variables were conflict angle (30, 100, 150 deg), update rate (discrete, continuous), and conflict occurrence. Results showed that 30 deg conflict angles yielded the best performance, and 100 deg conflict angles the worst. For 30 deg conflict angles, participants applied smooth pursuit while attending to the objects. In comparison, for 100 and especially 150 deg conflict angles, participants showed a high fixation rate and glances towards the conflict point. Finally, the continuous update rate was found to yield shorter fixation durations and better performance than the discrete update rate. In conclusion, shallow conflict angles yield the best performance, an effect that can be explained using basic perceptual heuristics, such as the 'closer is first' strategy. Displays should provide continuous rather than discrete update rates.
Project description: Sample entropy (SE) has relative consistency when computed from biologically derived, discrete data with more than 500 data points. For certain populations, collecting this quantity of data is not feasible, and continuous data have been used instead. The effect of using continuous versus discrete data on SE is unknown, as are the relative effects of sampling rate and the input parameters m (comparison vector length) and r (tolerance). Eleven subjects walked for 10 minutes, and continuous joint angles (480 Hz) were calculated for each lower-extremity joint. Data were downsampled (240, 120, 60 Hz), and discrete range of motion was calculated. SE was quantified for joint angles and range of motion at all sampling rates and multiple combinations of parameters. A differential relationship between joints was observed between range of motion and joint angles: range-of-motion SE showed no difference across joints, whereas joint-angle SE significantly decreased from ankle to knee to hip. To confirm findings from the biological data, continuous signals with manipulations to frequency, amplitude, or both were generated and underwent an analysis similar to that of the biological data. In general, changes to m, r, and sampling rate had a greater effect on continuous than on discrete data. Discrete data were robust to sampling rate and m. It is recommended that different data types not be compared and that discrete data be used for SE.
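The SE computation referred to above can be sketched as follows. This is a generic Richman-Moorman-style implementation; the template-counting variant, the convention of expressing r as a fraction of the series standard deviation, and all names here are assumptions for illustration, not details taken from this project:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) of a 1-D series.

    r is interpreted as a fraction of the series standard deviation
    (a common convention, assumed here). Returns -ln(A/B), where B and
    A count template pairs of length m and m+1, respectively, that
    match within the tolerance (Chebyshev distance, no self-matches).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def count_matches(mm):
        # All overlapping templates of length mm.
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to every later template only,
            # so each unordered pair is counted once.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < tol)
        return count

    b = count_matches(m)      # matching pairs of length m
    a = count_matches(m + 1)  # matching pairs of length m + 1
    return -np.log(a / b)
```

Since every matching (m+1)-length pair implies a matching m-length prefix pair, A <= B and the result is non-negative; a regular signal (many long matches) yields a value near zero, while an irregular one yields a larger value.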
Project description: Although we perceive a richly detailed visual world, our ability to identify individual objects is severely limited in clutter, particularly in peripheral vision. Models of such "crowding" have generally been driven by the phenomenological misidentifications of crowded targets: with stimuli that do not easily combine to form a unique symbol (e.g. letters or objects), observers typically confuse the source of objects and report either the target or a distractor, but when continuous features are used (e.g. oriented gratings or line positions), observers report a feature somewhere between the target and distractor. To reconcile these accounts, we develop a hybrid method of adjustment that allows detailed analysis of these multiple error categories. Observers reported the orientation of a target, under several distractor conditions, by adjusting an identical foveal target. We apply new modelling to quantify whether perceptual reports show evidence of positional uncertainty, source confusion, and featural averaging on a trial-by-trial basis. Our results show that observers make a large proportion of source-confusion errors. However, our study also reveals the distribution of perceptual reports that underlies performance in this crowding task more generally: aggregate errors cannot be neatly labelled, because they are heterogeneous and their structure depends on target-distractor distance.
Project description: Many daily situations require us to track multiple objects and people. This ability has traditionally been investigated with observers tracking objects in a plane. This simplification of reality does not address how observers track objects when targets move in three dimensions. Here, we study how observers track multiple objects in 2D and 3D while manipulating the average speed of the objects and the average distance between them. We show that performance declines as speed increases and distance decreases, and that overall tracking accuracy is always higher in 3D than in 2D. The effects of distance and dimensionality interact to produce a more-than-additive improvement in performance during tracking in 3D compared to 2D. We propose an ideal-observer model that uses the object dynamics and noisy observations to track the objects. This model provides a good fit to the data and explains the key findings of our experiment as originating from improved inference of object identity afforded by the added depth dimension.
Project description: The aim of this study was to investigate and quantify the contributions of kinetic energy and viscous dissipation to airway resistance during inspiration and expiration at various flow rates in airway models of different bifurcation angles. We employed symmetric airway models up to the 20th generation with five different bifurcation angles (15 deg, 25 deg, 35 deg, 45 deg, and 55 deg) at a tracheal flow rate of 20 L/min; thus, a total of ten computational fluid dynamics (CFD) simulations for both inspiration and expiration were conducted. Furthermore, we performed four additional simulations with tracheal flow rates of 10 and 40 L/min for a bifurcation angle of 35 deg to study the effect of flow rate on inspiration and expiration. Using an energy balance equation, we quantified the contributions to the pressure drop associated with kinetic energy and viscous dissipation. Kinetic energy was found to be a key variable that explained the differences in airway resistance between inspiration and expiration. The total pressure drop and airway resistance were larger during expiration than inspiration, whereas wall shear stress and viscous dissipation were larger during inspiration than expiration. Dimensional analysis demonstrated that the coefficients of kinetic energy and viscous dissipation were strongly correlated with generation number. In addition, the viscous dissipation coefficient was significantly correlated with bifurcation angle and tracheal flow rate. We performed multiple linear regressions to determine the coefficients of kinetic energy and viscous dissipation, which could be utilized to better estimate the pressure drop across broader ranges of successive bifurcation structures.
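The energy-balance bookkeeping described above can be sketched in simplified form. The two-term decomposition and all function names are assumptions for illustration; the study's full CFD balance involves additional terms and per-generation coefficients:

```python
def total_pressure_drop(delta_ke_rate, viscous_dissipation_rate, flow_rate):
    """Simplified energy balance: the total pressure drop equals the
    net rate of kinetic-energy change plus the viscous dissipation
    rate, divided by the volumetric flow rate.

    With SI units, W / (m^3/s) gives Pa.
    """
    return (delta_ke_rate + viscous_dissipation_rate) / flow_rate

def airway_resistance(delta_p, flow_rate):
    # Resistance is pressure drop per unit volumetric flow rate.
    return delta_p / flow_rate
```

Under this bookkeeping, a larger kinetic-energy term at the same flow rate directly raises the computed resistance, which is the sense in which kinetic energy can explain inspiration/expiration differences.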
Project description: Local solid shape applies to the surface curvature of small surface patches (essentially regions of approximately constant curvature) of volumetric objects, that is, smooth volumetric regions in Euclidean 3-space. This should be distinguished from local shape in pictorial space; the difference is categorical. Although local solid shape has naturally been explored in haptics, results in vision are not forthcoming. We describe a simple experiment in which observers judge the shape quality and magnitude of cinematographic presentations. Without prior training, observers readily use continuous shape-index and Casorati-curvature scales with reasonable resolution.
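The two scales mentioned are Koenderink's standard definitions in terms of the principal curvatures k1 >= k2; a minimal sketch (the function names are illustrative):

```python
import math

def shape_index(k1, k2):
    """Koenderink's shape index S in [-1, 1]: the quality of local
    shape, from concave umbilic (-1) through saddle (0) to convex
    umbilic (+1). Assumes k1 >= k2."""
    return (2.0 / math.pi) * math.atan2(k1 + k2, k1 - k2)

def casorati_curvature(k1, k2):
    """Casorati curvature C >= 0: the magnitude ('curvedness') of
    local shape, independent of its quality."""
    return math.sqrt((k1 * k1 + k2 * k2) / 2.0)
```

For example, a unit sphere (k1 = k2 = 1) gives S = 1 and C = 1, while a symmetric saddle (k1 = 1, k2 = -1) gives S = 0 and C = 1: same curvedness, different shape quality.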
Project description: Visual crowding is a breakdown in object identification that occurs in cluttered scenes, a process that represents the principal restriction on visual performance in the periphery. When crowded objects are presented experimentally, a key finding is that observers frequently report nearby flanking items instead of the target. This observation has led to the proposal that crowding reflects increased noise in the positional code for objects, although how the presence of nearby objects might disrupt positional encoding remains unclear. We quantified this disruption using cross-like stimuli, where observers judged whether the horizontal target line was positioned above or below the stimulus midpoint. Overall, observers were poorer at judging position in the presence of crowding flankers. However, offsetting the horizontal lines in the flankers also led observers to report that the horizontal line in the target was shifted in the same direction, an effect that held for subthreshold flanker offsets. In short, crowding induced both random and systematic errors in observers' judgments of position, with or without detection of the flanker structure. Computational modeling reveals that perceived position in the presence of flankers follows a weighted average of noisy target- and flanker-line positions, rather than a substitution of flanker features into the target, as has been proposed previously. Together, our results suggest that crowding is a preattentive process that uses averaging to regularize the noisy representation of position in the periphery.
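The weighted-average account can be sketched as a small simulation. The weight, noise level, and names below are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def perceived_position(target_pos, flanker_pos, w_target=0.7,
                       noise_sd=0.1, n_trials=10_000):
    """Perceived target position as a weighted average of noisy
    target- and flanker-position signals (illustrative model).

    The noise yields random trial-to-trial error; the flanker weight
    yields a systematic bias toward the flanker offset.
    """
    t = rng.normal(target_pos, noise_sd, n_trials)
    f = rng.normal(flanker_pos, noise_sd, n_trials)
    return w_target * t + (1 - w_target) * f

# Flankers offset upward (positive) pull the mean report upward,
# reproducing both error types the abstract describes.
reports = perceived_position(target_pos=0.0, flanker_pos=0.5)
```

Unlike a substitution model, which would predict a bimodal mixture of target and flanker positions, this averaging model predicts a single shifted distribution of reports.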
Project description: We show that these R-loop objects impose specific physical constraints on the DNA, as revealed by the presence of stereotypical angles in the surrounding DNA. Biochemical probing and mutagenesis experiments revealed that the formation of R-loop objects at Airn is dictated by the extruded non-template strand, suggesting that R-loops possess intrinsic sequence-driven properties. Consistent with this, we show that R-loops formed at the fission yeast gene sum3 do not form detectable R-loop objects. Our results reveal that R-loops differ in their architectures and that the organization of the non-template strand is a fundamental characteristic of R-loops, which could explain why only a subset of R-loops is associated with replication-dependent DNA breaks. Overall design: Characterization of R-loops at Airn using Atomic Force Microscopy (AFM) and Single-Molecule R-loop Footprinting sequencing (SMRF-seq).
Project description: Limits on the storage capacity of working memory significantly affect cognitive abilities in a wide range of domains, but the nature of these capacity limits has been elusive. Some researchers have proposed that working memory stores a limited set of discrete, fixed-resolution representations, whereas others have proposed that working memory consists of a pool of resources that can be allocated flexibly to provide either a small number of high-resolution representations or a large number of low-resolution representations. Here we resolve this controversy by providing independent measures of capacity and resolution. We show that, when presented with more than a few simple objects, human observers store a high-resolution representation of a subset of the objects and retain no information about the others. Memory resolution varied over a narrow range that cannot be explained in terms of a general resource pool but can be well explained by a small set of discrete, fixed-resolution representations.
Project description: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task that can allow us either to intercept moving objects or to avoid them if they pose a threat. Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of the motion path, and object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a 'scale invariant' model in which any two stimuli that possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different: irrespective of the path length, object size, or velocity of motion, path-deviation thresholds depended simply upon the duration of the motion path in seconds. Human vision has long been known to integrate information across space in order to solve spatial tasks such as the judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects.
Project description: Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions about how summary statistics, such as an average, are computed remain unanswered. This study investigated the sampling properties of the visual information used by human observers to extract two types of summary statistics of item sets: average and variance. We presented three ideal-observer models for extracting the summary statistics: a global sampling model without sampling noise, a global sampling model with sampling noise, and a limited sampling model. We compared the performance of an ideal observer under each model with that of human observers using statistical efficiency analysis. The results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited-sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when sets are larger than four items.
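The contrast between a global sampler and a limited sampler can be illustrated with a small simulation. The set size, sample count k, and noise-free averaging of the sampled items are illustrative assumptions, not the study's actual models or efficiency analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def average_estimates(set_size=8, k=2, item_sd=1.0, n_trials=5_000):
    """Variance of set-average estimates for a global sampler (uses
    all items) versus a limited sampler (uses k randomly chosen
    items without replacement). Illustrative sketch only."""
    items = rng.normal(0.0, item_sd, (n_trials, set_size))
    global_est = items.mean(axis=1)
    # k distinct random item indices per trial.
    idx = np.argsort(rng.random((n_trials, set_size)), axis=1)[:, :k]
    limited_est = np.take_along_axis(items, idx, axis=1).mean(axis=1)
    return global_est.var(), limited_est.var()
```

With independent items, the estimate variance is item variance divided by the number of items averaged (1/8 vs. 1/2 here), so a limited sampler is markedly noisier: the kind of signature that statistical efficiency analysis can test against human performance.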