Project description: Introduction: Attention-deficit/hyperactivity disorder (ADHD) affects a significant proportion of the pediatric population, making early detection crucial for effective intervention. Eye movements are controlled by brain regions associated with neuropsychological functions such as selective attention, response inhibition, and working memory, and deficits in these functions are related to the core characteristics of ADHD. Here, we aimed to develop a screening model for ADHD using machine learning (ML) and eye-tracking features from tasks that reflect neuropsychological deficits in ADHD. Methods: Fifty-six children (mean age 8.38 ± 1.58 years, 45 males) diagnosed with ADHD based on the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, were recruited along with seventy-nine typically developing children (TDC; mean age 8.80 ± 1.82 years, 33 males). Eye-tracking data were collected using a digital device while participants performed five behavioral tasks measuring selective attention, working memory, and response inhibition (pro-saccade, anti-saccade, memory-guided saccade, change detection, and Stroop tasks). ML was employed to select eye-tracking features relevant to ADHD and to subsequently construct an optimal model classifying ADHD versus TDC. Results: We identified 33 eye-tracking features across the five tasks with the potential to distinguish children with ADHD from TDC. Participants with ADHD showed increased saccade latency and degree, and shorter fixation time, in the eye-tracking tasks. A soft-voting model integrating extra-trees and random-forest classifiers demonstrated high accuracy (76.3%) at identifying ADHD using eye-tracking features alone. Comparing the model using only eye-tracking features with models using the Advanced Test of Attention or the Stroop test showed no significant difference in the area under the curve (AUC) (p = 0.419 and p = 0.235, respectively). Combining demographic, behavioral, and clinical data with eye-tracking features improved accuracy but did not significantly alter the AUC (p = 0.208). Discussion: Our study suggests that eye-tracking features hold promise as ADHD screening tools, even when obtained using a simple digital device. The current findings emphasize that eye-tracking features could be reliable indicators of impaired neurobiological functioning in individuals with ADHD. To enhance utility as a screening tool, future research should be conducted with a larger sample of participants and a more balanced gender ratio.
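To make the classification approach concrete, below is a minimal sketch of a soft-voting ensemble of extra-trees and random-forest classifiers in scikit-learn, the kind of model this abstract describes. The feature matrix, labels, and hyperparameters are illustrative placeholders, not the authors' actual data or settings.

```python
# Minimal sketch of a soft-voting ensemble of extra-trees and random-forest
# classifiers (scikit-learn). X and y are simulated placeholders standing in
# for the 33 eye-tracking features and the ADHD/TDC labels.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(135, 33))      # 56 ADHD + 79 TDC participants, 33 features
y = np.array([1] * 56 + [0] * 79)   # 1 = ADHD, 0 = typically developing

ensemble = VotingClassifier(
    estimators=[
        ("extra_trees", ExtraTreesClassifier(n_estimators=200, random_state=0)),
        ("random_forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities across the two models
)
acc = cross_val_score(ensemble, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")
```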
Project description: A growing number of virtual reality devices now include eye tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases like foveated rendering. These applications place different demands on tracking performance, typically quantified as spatial accuracy and precision. While manufacturers report data quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured the spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity in both axes. Calibration was successful in all participants, including those wearing contact lenses or glasses, but glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants' inter-pupillary distance. Our metrics suggest high calibration reliability and can serve as a baseline for expected eye tracking performance in VR experiments.
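For readers unfamiliar with the two data-quality metrics, the sketch below shows one common way to compute them from fixation samples: accuracy as the mean angular offset from a known target, and precision as the RMS of sample-to-sample differences. The planar small-angle approximation and all values are illustrative assumptions, not this study's exact pipeline.

```python
# Sketch of the two data-quality metrics discussed above. Gaze and target
# coordinates are assumed to already be in degrees of visual angle; a real
# pipeline would first convert the headset's gaze vectors.
import numpy as np

def accuracy_deg(gaze_xy: np.ndarray, target_xy: np.ndarray) -> float:
    """Mean Euclidean offset (deg) between gaze samples and the fixation target."""
    return float(np.mean(np.linalg.norm(gaze_xy - target_xy, axis=1)))

def precision_rms_deg(gaze_xy: np.ndarray) -> float:
    """RMS of successive inter-sample distances (deg) during a fixation."""
    diffs = np.diff(gaze_xy, axis=0)
    return float(np.sqrt(np.mean(np.sum(diffs**2, axis=1))))

# Hypothetical fixation at 10 deg eccentricity with small Gaussian noise:
rng = np.random.default_rng(1)
gaze = np.array([10.0, 0.0]) + rng.normal(scale=0.3, size=(120, 2))
print(accuracy_deg(gaze, np.array([10.0, 0.0])), precision_rms_deg(gaze))
```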
Project description: Objectives: Although vestibular rehabilitation therapy (VRT) using a head-mounted display (HMD) has recently been highlighted as a popular virtual reality platform, an HMD by itself does not provide an interactive environment for VRT. This study aimed to test the feasibility of interactive components using an eye-tracking-assisted strategy through neurophysiological evidence. Methods: An HMD implemented with an infrared-based eye tracker was used to generate a virtual environment for VRT. Eighteen healthy subjects participated in our experiment, wherein they performed a saccadic eye exercise (SEE) under two conditions: feedback-on (F-on, visualization of eye position) and feedback-off (F-off, no visualization of eye position). Eye position was continuously monitored in real time under both conditions, but this information was not provided to the participants. Electroencephalogram recordings were used to estimate neural dynamics and attention during SEE, and only valid trials (correct responses) were included in the electroencephalogram analysis. Results: SEE accuracy was higher in the F-on than in the F-off condition (P = 0.039). The power spectral density of the beta band was higher in the F-on condition over the frontal (P = 0.047), central (P = 0.042), and occipital areas (P = 0.045). Beta event-related desynchronization was significantly more pronounced in the F-on condition (-0.19 on frontal and -0.22 on central clusters) than in the F-off condition (0.23 on frontal and 0.05 on central) during the preparatory phase (P = 0.005 for frontal and P = 0.024 for central). In addition, richer functional connectivity was revealed under the F-on condition. Conclusion: Considering that substantial gains may come from goal-directed attention and activation of brain networks while performing VRT, our preclinical SEE study suggests that eye-tracking algorithms may work efficiently in vestibular rehabilitation using an HMD.
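As a rough illustration of the beta-band measures reported here, the sketch below computes power spectral density with Welch's method and a simple event-related desynchronization (ERD) index relative to a baseline epoch. The sampling rate, epoch lengths, and simulated signals are assumptions; a real analysis would use epoched multichannel EEG, typically via a toolbox such as MNE.

```python
# Sketch of beta-band PSD (Welch) and a relative ERD index. FS and the
# simulated epochs are illustrative assumptions, not the study's data.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate (Hz)

def beta_power(epoch: np.ndarray) -> float:
    """Mean PSD in the beta band (13-30 Hz) for a 1-D EEG epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    band = (freqs >= 13) & (freqs <= 30)
    return float(psd[band].mean())

def beta_erd(task_epoch: np.ndarray, baseline_epoch: np.ndarray) -> float:
    """Relative power change; negative values indicate desynchronization."""
    base = beta_power(baseline_epoch)
    return (beta_power(task_epoch) - base) / base

rng = np.random.default_rng(2)
baseline = rng.normal(size=FS * 2)         # 2 s of simulated baseline EEG
task = rng.normal(scale=0.9, size=FS * 2)  # slightly lower power during task
print(f"beta ERD: {beta_erd(task, baseline):+.2f}")
```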
Project description: Introduction: Depression is a prevalent mental illness that is primarily diagnosed using psychological and behavioral assessments. However, these assessments lack objective and quantitative indices, making rapid and objective detection challenging. In this study, we propose a novel method for depression detection based on eye movement data captured in response to virtual reality (VR). Methods: Eye movement data were collected and used to establish high-performance classification and prediction models. Four machine learning algorithms were employed: eXtreme Gradient Boosting (XGBoost), multilayer perceptron (MLP), Support Vector Machine (SVM), and Random Forest. The models were evaluated using five-fold cross-validation, and performance metrics including accuracy, precision, recall, area under the curve (AUC), and F1-score were assessed. The prediction error for the Patient Health Questionnaire-9 (PHQ-9) score was also determined. Results: The XGBoost model achieved a mean accuracy of 76%, precision of 94%, recall of 73%, and AUC of 82%, with an F1-score of 78%. The MLP model achieved a classification accuracy of 86%, precision of 96%, recall of 91%, and AUC of 86%, with an F1-score of 92%. The prediction error for the PHQ-9 score ranged from -0.6 to 0.6. To investigate the role of computerized cognitive behavioral therapy (CCBT) in treating depression, participants were divided into intervention and control groups. The intervention group received CCBT, while the control group received no treatment. After five CCBT sessions, significant changes were observed in the fixation and saccade eye-movement indices, as well as in the PHQ-9 scores. These two indices played significant roles in the predictive model, indicating their potential as biomarkers for detecting depression symptoms. Discussion: The results suggest that eye movement indices obtained using a VR eye tracker can serve as useful biomarkers for detecting depression symptoms. Specifically, the fixation and saccade indices showed promise in predicting depression. Furthermore, CCBT demonstrated effectiveness in treating depression, as evidenced by the observed changes in eye movement indices and PHQ-9 scores. In conclusion, this study presents a novel approach for depression detection using eye movement data captured in VR. The findings highlight the potential of eye movement indices as biomarkers and underscore the effectiveness of CCBT in treating depression.
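The evaluation protocol described here, several model families scored with five-fold cross-validation on the same metric set, might look like the following scikit-learn/xgboost sketch. The feature matrix and labels are simulated stand-ins, and the hyperparameters are defaults rather than the authors' tuned settings.

```python
# Sketch of a multi-model, five-fold cross-validation comparison on the
# metrics named above. Requires scikit-learn and the xgboost package.
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 12))     # hypothetical eye-movement feature matrix
y = rng.integers(0, 2, size=200)   # placeholder labels (1 = depressed)

models = {
    "XGBoost": XGBClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
}
scoring = ["accuracy", "precision", "recall", "roc_auc", "f1"]
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(name, {m: round(scores[f"test_{m}"].mean(), 2) for m in scoring})
```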
Project description: We present GazeBaseVR, a large-scale, longitudinal, binocular eye-tracking (ET) dataset collected at 250 Hz with an ET-enabled virtual-reality (VR) headset. GazeBaseVR comprises 5,020 binocular recordings from a diverse population of 407 college-aged participants. Participants were recorded up to six times each over a 26-month period, each time performing a series of five different ET tasks: (1) a vergence task, (2) a horizontal smooth pursuit task, (3) a video-viewing task, (4) a self-paced reading task, and (5) a random oblique saccade task. Many of these participants have also been recorded for two previously published datasets with different ET devices, and 11 participants were recorded before and after COVID-19 infection and recovery. GazeBaseVR is suitable for a wide range of research on ET data in VR devices, especially eye movement biometrics due to its large population and longitudinal nature. In addition to ET data, additional participant details are provided to enable further research on topics such as fairness.
Project description: Objective: Distractions inordinately impair attention in children with Attention-Deficit/Hyperactivity Disorder (ADHD), but examining this behavior under real-life conditions poses a challenge for researchers and clinicians. Virtual reality (VR) technologies may mitigate the limitations of traditional laboratory methods by providing a more ecologically relevant experience. The use of eye-tracking measures to assess attentional functioning in a VR context in ADHD is novel. In this proof-of-principle project, we evaluated the temporal dynamics of distraction via eye-tracking measures in a VR classroom setting with 20 children diagnosed with ADHD between 8 and 12 years of age. Method: We recorded continuous eye movements while participants performed math, Stroop, and continuous performance test (CPT) tasks with a series of "real-world" classroom distractors presented. We analyzed the impact of the distractors on rates of on-task performance and on-task eye-gaze (i.e., looking at a classroom whiteboard) versus off-task eye-gaze (i.e., looking away from the whiteboard). Results: We found that while children did not always look at distractors themselves for long periods of time, the presence of a distractor disrupted on-task gaze at task-relevant whiteboard stimuli and lowered rates of task performance. This suggests that children with attention deficits may have a hard time returning to tasks once those tasks are interrupted, even if the distractor itself does not hold attention. Eye-tracking measures within the VR context can reveal rich information about attentional disruption. Conclusions: Leveraging virtual reality technology in combination with eye-tracking measures is well-suited to advancing the understanding of mechanisms underlying attentional impairment in naturalistic settings. Assessment within these immersive and well-controlled simulated environments provides new options for increasing our understanding of distractibility and its potential impact on the development of interventions for children with ADHD.
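A simplified version of the gaze analysis described here is sketched below: each gaze sample is labeled on-task (inside the whiteboard's bounds) or off-task, and on-task proportions are compared before and after a distractor onset. The whiteboard rectangle, timestamps, and gaze samples are all hypothetical.

```python
# Sketch of on-task vs. off-task gaze labeling around a distractor onset.
# The whiteboard region and simulated data are illustrative assumptions.
import numpy as np

WHITEBOARD = (-0.5, 0.5, 0.8, 1.6)  # hypothetical x_min, x_max, y_min, y_max (m)

def on_task(gaze_xy: np.ndarray) -> np.ndarray:
    """Boolean mask: True where gaze falls inside the whiteboard region."""
    x, y = gaze_xy[:, 0], gaze_xy[:, 1]
    return (WHITEBOARD[0] <= x) & (x <= WHITEBOARD[1]) & \
           (WHITEBOARD[2] <= y) & (y <= WHITEBOARD[3])

def on_task_rate(gaze_xy, t, window):
    """Fraction of on-task samples whose timestamps fall in [start, end)."""
    mask = (t >= window[0]) & (t < window[1])
    return float(on_task(gaze_xy[mask]).mean())

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 600)  # 10 s of gaze at 60 Hz
gaze = rng.normal(loc=[0.0, 1.2], scale=0.4, size=(600, 2))
onset = 5.0  # hypothetical distractor onset time (s)
print("before:", on_task_rate(gaze, t, (onset - 2, onset)))
print("after: ", on_task_rate(gaze, t, (onset, onset + 2)))
```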
Project description: Patients undergoing Magnetic Resonance Imaging (MRI) often experience anxiety, and sometimes distress, prior to and during scanning. Here, a fully MRI-compatible virtual reality (VR) system is described and tested with the aim of creating a radically different experience. Potential benefits could accrue from the strong sense of immersion that VR can create, which could be used to design sensory experiences that avoid the perception of being enclosed and could also provide new modes of diversion and interaction, making even lengthy MRI examinations much less challenging. Most current VR systems rely on head-mounted displays combined with head-motion tracking to achieve and maintain a visceral sense of a tangible virtual world, but this technology and approach encourage physical motion, which would be unacceptable and physically incompatible with MRI. The proposed VR system instead uses gaze tracking to control and interact with the virtual world. MRI-compatible cameras allow real-time eye tracking, and robust gaze tracking is achieved through an adaptive calibration strategy in which each successive VR interaction initiated by the subject updates the gaze-estimation model. A dedicated VR framework has been developed, including a rich virtual world and gaze-controlled game content. To aid in achieving an immersive experience, physical sensations, including noise, vibration, and the proprioception associated with patient table movements, have been made congruent with the presented virtual scene. A live video link allows subject-carer interaction, projecting a supportive presence into the virtual world.
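The adaptive calibration idea, folding each new gaze-initiated interaction into the gaze-estimation model, can be illustrated with a recursive-least-squares update of a linear (affine) mapping from pupil features to screen coordinates, as sketched below. This is a generic formulation under assumed 2-D pupil features, not the authors' actual model.

```python
# Sketch of adaptive gaze calibration via recursive least squares (RLS):
# each confirmed interaction supplies a (pupil feature, known target) pair
# that refines the affine feature-to-screen mapping. All values hypothetical.
import numpy as np

class AdaptiveGazeModel:
    def __init__(self, dim=3, lam=100.0):
        self.P = np.eye(dim) * lam   # inverse-covariance estimate
        self.W = np.zeros((dim, 2))  # weights mapping features -> (x, y)

    def update(self, feature_xy, target_xy):
        """Fold one calibration pair into the model (standard RLS step)."""
        phi = np.append(feature_xy, 1.0)  # append bias for the affine term
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)
        err = np.asarray(target_xy) - phi @ self.W
        self.W += np.outer(k, err)
        self.P -= np.outer(k, phi @ self.P)

    def predict(self, feature_xy):
        return np.append(feature_xy, 1.0) @ self.W

model = AdaptiveGazeModel()
# Each gaze-initiated VR interaction yields a known target location:
for feat, target in [((0.1, 0.2), (5.0, 3.0)), ((0.4, 0.1), (12.0, 2.0))]:
    model.update(feat, target)
print(model.predict((0.1, 0.2)))  # estimated gaze point for a new sample
```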
Project description: Purpose: Virtual reality (VR) and eye tracking may provide detailed insights into spatial cognition. We hypothesized that VR and eye tracking can be used to assess sub-types of spatial neglect in stroke patients that are not readily captured by conventional assessments. Method: Eighteen stroke patients with spatial neglect and 16 age- and gender-matched healthy subjects wearing VR headsets were asked to look around freely in a symmetric 3D museum scene containing three pictures. Asymmetry of performance was analyzed to reveal group-level differences and possible neglect sub-types on an individual level. Results: Four out of six VR and eye-tracking measures revealed significant differences between patients and controls in this free-viewing task. Between-picture gaze asymmetry (including fixation time and count) and head orientation were most sensitive to spatial neglect behavior in the group-level analysis. Gaze asymmetry and head orientation each identified 10 out of 18 patients (56%), compared to 12 out of 18 (67%) for the best conventional test. Two neglect patients without deviant performance on conventional measures were captured by the VR and eye-tracking measures. On the individual level, five stroke patients showed deviant within-picture gaze asymmetry and six patients showed deviant eye orientation in either direction that were not captured by the group-level analysis. Conclusion: This study is a first step in using VR in combination with eye-tracking measures for individual differential neglect sub-type diagnostics. This may pave the way for more sensitive and elaborate sub-type diagnostics of spatial neglect that may respond differently to various treatment approaches.
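One simple form the between-picture gaze-asymmetry measure could take is a lateralization index contrasting fixation time on the left-side versus right-side picture, as in the sketch below; the region assignment and durations are assumptions for illustration.

```python
# Sketch of a between-picture gaze-asymmetry (lateralization) index.
# Fixation durations are hypothetical stand-ins for per-picture dwell times.
def asymmetry_index(fix_time_left: float, fix_time_right: float) -> float:
    """(R - L) / (R + L): 0 = symmetric viewing, positive = rightward bias."""
    total = fix_time_left + fix_time_right
    return (fix_time_right - fix_time_left) / total if total else 0.0

# A strong rightward bias, as might be seen with left-sided neglect:
print(asymmetry_index(fix_time_left=2.1, fix_time_right=7.9))
```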
Project description: Previous studies have confirmed the significant effects of single forest stand attributes, such as forest type (FT), understory vegetation cover (UVC), and understory vegetation height (UVH), on visitors' visual perception. However, few studies have clearly determined the relationship between vegetation permeability and visual perception, even though the former is formed by the interaction of multiple forest stand attributes (i.e., FT, UVC, UVH). Based on a mixed-factor matrix of FT (coniferous and broadleaf forests), UVC level (10, 60, and 100%), and UVH level (0.1, 1, and 3 m), the study created 18 immersive virtual forest videos with different stand attributes. Virtual reality eye-tracking technology and questionnaires were used to collect visual perception data while participants viewed the virtual forest videos. The study finds that vegetation permeability, which is formed by the interaction of canopy density (i.e., FT) and understory density (i.e., UVC, UVH), significantly affects participants' visual perception. In terms of visual physiological characteristics, pupil size is significantly negatively correlated with vegetation permeability when participants view the virtual reality forest. In terms of visual psychological characteristics, the understory density formed by the interaction of UVC and UVH has a significant impact on visual attractiveness and perceived safety, with understory density being significantly negatively correlated with perceived safety. In addition, the study finds a significant negative correlation between average pupil diameter and perceived safety when participants view the virtual reality forests. The findings may benefit the maintenance and management of forest parks and provide insights for similar studies exploring urban public green spaces.
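The correlation analysis mentioned here could be run with a rank-based test, as in the scipy sketch below; the pupil diameters and safety ratings are simulated stand-ins for the study's data.

```python
# Sketch of a rank-based correlation between average pupil diameter and
# perceived-safety ratings. All values are simulated for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
pupil_mm = rng.uniform(3.0, 6.0, size=50)               # average pupil diameter
safety = 8 - pupil_mm + rng.normal(scale=0.5, size=50)  # assumed inverse relation
rho, p = spearmanr(pupil_mm, safety)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```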
Project description: Introduction: Early detection of cognitive impairment enables interventions that slow cognitive decline. Existing neuropsychological paper-and-pencil tests may not adequately assess cognition in real-life environments. A fully immersive and automated virtual reality (VR) system, Cognitive Assessment using VIrtual REality (CAVIRE), was developed to assess all six cognitive domains. This case-control study aims to evaluate the ability of CAVIRE to differentiate cognitively healthy individuals from those with cognitive impairment. Methods: One hundred nine Asian individuals 65-84 years of age were recruited at a primary care setting in Singapore. Based on the Montreal Cognitive Assessment (MoCA), participants were grouped as either Cognitively Healthy (MoCA ≥26, n = 60) or Cognitively Impaired (MoCA <26, n = 49). Subsequently, all participants completed the CAVIRE assessment. Results: Cognitively healthy participants achieved higher VR scores and required shorter completion times across all six cognitive domains (all p < 0.005). Receiver-operating-characteristic curve analysis showed an area under the curve of 0.7267. Discussion: The results demonstrate the potential of CAVIRE as a cognitive screening tool in primary care. Highlights: CAVIRE is a virtual reality (VR) system that assesses the six cognitive domains. CAVIRE can distinguish healthy individuals from individuals with cognitive impairment. It has potential as a cognitive screening tool for older people in primary care.
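The receiver-operating-characteristic analysis reported here, an AUC for a VR score against the MoCA-based grouping, could be computed as in the sketch below; the scores are simulated placeholders, with only the group sizes taken from the abstract.

```python
# Sketch of the ROC/AUC analysis: a composite VR score evaluated against
# MoCA-defined groups. Scores are simulated; group sizes match the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
# 60 cognitively-healthy (label 1) vs. 49 cognitively-impaired (label 0):
y = np.array([1] * 60 + [0] * 49)
vr_score = np.concatenate([rng.normal(75, 10, 60), rng.normal(65, 10, 49)])
print(f"AUC = {roc_auc_score(y, vr_score):.4f}")
```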