Project description: The Stroop test evaluates the ability to inhibit cognitive interference, which occurs when the processing of one stimulus characteristic affects the simultaneous processing of another attribute of the same stimulus. Eye movements are an indicator of the individual attention load required to inhibit cognitive interference. We used an eye tracker to collect eye movement data from more than 60 subjects, each performing four different but similar tasks (some with cognitive interference and some without). After extracting features related to fixations, saccades, and gaze trajectory, we trained several machine learning models to recognize the condition under which each task was performed (i.e., with or without interference). The models achieved good classification performance in distinguishing between similar tasks performed with or without cognitive interference. This suggests the presence of characterizing patterns common among subjects, which machine learning algorithms can capture despite the individual variability of visual behavior.
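As an illustration of the kind of pipeline described above, here is a minimal sketch of condition classification from fixation and saccade features with scikit-learn; the file name, feature names, and model choice are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch: classify interference vs. no-interference trials from
# eye-movement features (fixations, saccades, trajectory). Feature names
# and the CSV layout are hypothetical placeholders, not the study's data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("stroop_eye_features.csv")  # hypothetical file
features = ["fixation_count", "mean_fixation_duration",
            "saccade_count", "mean_saccade_amplitude", "scanpath_length"]
X, y = df[features], df["interference"]  # y: 1 = interference, 0 = none

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```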
Project description: Autism spectrum disorder (ASD) is a group of disorders marked by difficulties with social skills, repetitive activities, speech, and nonverbal communication. Deficits in attending to and processing social stimuli are common in children with autism spectrum disorders. It remains uncertain whether eye-tracking technologies can help establish an early biomarker of autism based on children's atypical visual preference patterns. In this study, we used machine learning methods to test the applicability of eye-tracking data from children to early autism screening. We compared the effectiveness of various machine learning techniques to find the best model for predicting autism from visualized eye-tracking scan-path images. We adopted three traditional machine learning models and a deep neural network classifier for the experimental trials. The study employed a publicly available dataset of 547 graphical eye-tracking scan paths from 328 typically developing and 219 autistic children. We used image augmentation to enlarge the dataset and prevent the models from overfitting. The deep neural network model outperformed the traditional machine learning approaches on the augmented dataset, with 97% AUC, 93.28% sensitivity, 91.38% specificity, 94.46% NPV, and 90.06% PPV (fivefold cross-validated). The findings strongly suggest that eye-tracking data can help clinicians perform quick and reliable autism screening.
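The augmentation-plus-classifier pipeline could look like the following minimal PyTorch sketch; the directory layout, backbone, and hyperparameters are illustrative assumptions rather than the study's configuration.

```python
# Minimal sketch: image augmentation plus a small CNN for classifying
# scan-path images as ASD vs. typically developing. Directory layout and
# hyperparameters are illustrative assumptions, not the study's setup.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])
# Expects scanpaths/{asd,td}/*.png -- a hypothetical layout.
train = datasets.ImageFolder("scanpaths", transform=augment)
loader = DataLoader(train, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # small stand-in backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: ASD vs. TD
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```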
Project description: Objective: To explore the potential of using artificial intelligence (AI)-based eye-tracking technology on a tablet to screen for attention-deficit/hyperactivity disorder (ADHD) symptoms in children. Methods: We recruited 112 children diagnosed with ADHD (ADHD group; mean age: 9.40 ± 1.70 years) and 325 typically developing children (TD group; mean age: 9.45 ± 1.59 years). We designed a data-driven, end-to-end convolutional neural network appearance-based model to predict eye gaze, permitting eye tracking at low resolution and sampling rates. Participants then completed an eye-tracking task on a tablet, which consisted of a simple fixation task as well as 14 prosaccade (looking toward the target) and 14 antisaccade (looking away from the target) trials, measuring attention and inhibition, respectively. Results: Two-way MANOVA analyses demonstrated that diagnosis and age had significant effects on performance on the fixation task [diagnosis: F(2, 432) = 8.231, ***p < 0.001, Wilks' Λ = 0.963; age: F(2, 432) = 3.999, *p = 0.019, Wilks' Λ = 0.982], the prosaccade task [age: F(16, 418) = 3.847, ***p < 0.001, Wilks' Λ = 0.872], and the antisaccade task [diagnosis: F(16, 418) = 1.738, *p = 0.038, Wilks' Λ = 0.938; age: F(16, 418) = 4.508, ***p < 0.001, Wilks' Λ = 0.853]. Correlational analyses revealed that participants with higher SNAP-IV scores were more likely to have shorter fixation durations and more fixation intervals (r = -0.160, 95% CI [-0.250, -0.067], ***p < 0.001) and poorer prosaccade and antisaccade accuracy (accuracy: r = -0.105, 95% CI [-0.197, -0.011], *p = 0.029; adjusted accuracy: r = -0.108, 95% CI [-0.200, -0.015], *p = 0.024). Conclusion: Our AI-based eye-tracking technology implemented on a tablet could reliably discriminate the eye movements of the TD group and the ADHD group, providing a potential solution for ADHD screening outside of clinical settings.
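For orientation, here is a minimal sketch of what an appearance-based gaze model looks like, i.e., a CNN mapping a camera frame directly to a 2-D on-screen gaze point; the architecture and input size are assumptions for illustration, not the study's network.

```python
# Minimal sketch: an appearance-based gaze model, i.e., a CNN that maps a
# face/eye image straight to a 2-D gaze point on the tablet screen. The
# architecture is an illustrative assumption, not the study's network.
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # (x, y) gaze coordinates

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = GazeNet()
frame = torch.randn(1, 3, 128, 128)  # one camera frame (dummy data)
print(model(frame))                  # predicted on-screen gaze point
# Training would regress predictions onto calibration targets, e.g. with
# nn.MSELoss() over frames where the child fixates known screen locations.
```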
Project description: Introduction: Based on physiological data (pupillometry) collected in an eye-tracking experiment, this study further confirms the effect of directionality on cognitive load during L1 and L2 textual translation by novice translators, a phenomenon called "translation asymmetry" suggested by the Inhibitory Control Model, while showing that machine learning approaches can be usefully applied to Cognitive Translation and Interpreting Studies. Methods: Directionality was the only factor guiding the eye-tracking experiment, in which 14 novice translators with the Chinese-English language combination were recruited to perform L1 and L2 translations while their pupillometric data were recorded. They also completed a Language and Translation Questionnaire, from which categorical demographic data were obtained. Results: A nonparametric related-samples Wilcoxon signed-rank test on the pupillometric data verified the effect of directionality suggested by the model during bidirectional translation, confirming "translation asymmetry" at the textual level. Further, using the pupillometric data together with the categorical information, the XGBoost machine learning algorithm yielded a model that could reliably and effectively predict translation direction. Conclusion: The study shows that the translation asymmetry suggested by the model is valid at the textual level and that machine learning approaches can be gainfully applied to Cognitive Translation and Interpreting Studies.
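The two analysis steps, the paired Wilcoxon signed-rank test and the XGBoost direction classifier, might be sketched as follows; the column names, features, and data layout are hypothetical placeholders.

```python
# Minimal sketch: a paired Wilcoxon signed-rank test on pupil size across
# translation directions, then an XGBoost classifier predicting direction
# from pupillometric and categorical features. Column names are
# hypothetical placeholders, not the study's variables.
import pandas as pd
from scipy.stats import wilcoxon
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("pupillometry.csv")  # hypothetical: one row per trial

# 1) Paired test: mean pupil diameter, L1 vs. L2 translation per subject.
wide = df.pivot_table(index="subject", columns="direction",
                      values="mean_pupil_diameter")
stat, p = wilcoxon(wide["L1"], wide["L2"])
print(f"Wilcoxon W={stat:.1f}, p={p:.4f}")

# 2) Predict translation direction from pupillometric + demographic data.
X = pd.get_dummies(df[["mean_pupil_diameter", "peak_pupil_diameter",
                       "gender", "translation_experience"]])
y = (df["direction"] == "L2").astype(int)
acc = cross_val_score(XGBClassifier(), X, y, cv=5).mean()
print(f"5-fold accuracy: {acc:.3f}")
```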
Project description: The innovative Eye Movement Modelling Examples (EMMEs) method can be used in medicine as an educational training tool for the assessment and verification of students and professionals. Our work analysed the possibility of using eye tracking tools to verify the skills and training of people engaged in laboratory medicine, using parasitological diagnostics as an example. Professionally active laboratory diagnosticians working in a multi-profile (non-parasitological) laboratory (n = 16), laboratory diagnosticians no longer working in this profession (n = 10), and medical analyst students (n = 56) participated in the study. Participants analysed microscopic images of parasitological preparations made with the cellSens Dimension software (Olympus). Eye activity parameters were obtained using a stationary, video-based Tobii TX300 eye tracker with a 3-ms temporal resolution. Eye movement activity parameters were analysed along with time parameters. Our results show that eye tracking is a valuable tool for the analysis of parasitological preparations. Detailed quantitative and qualitative analysis confirmed that the EMMEs method may facilitate learning of the correct microscopic image scanning path. Our results indicate that the EMMEs method may be a valuable tool in the preparation of teaching materials in virtual microscopy. These teaching materials, generated with the use of eye tracking and prepared by experienced professionals in the field of laboratory medicine, can be used in training sessions, simulations, and courses in medical parasitology, and can contribute to verifying learning outcomes and professional skills and to eliminating errors in parasitological diagnostics.
Project description: Given the heterogeneous nature of attention-deficit/hyperactivity disorder (ADHD) and the absence of established biomarkers, accurate diagnosis and effective treatment remain a challenge in clinical practice. This study investigates the predictive utility of multimodal data, including eye tracking, EEG, actigraphy, and behavioral indices, in differentiating adults with ADHD from healthy individuals. Using a support vector machine (SVM) model, we analyzed independent training (n = 50) and test (n = 36) samples from two clinically controlled studies. In both studies, participants performed an attention task (continuous performance task) in a virtual reality seminar room while encountering virtual distractions. Task performance, head movements, gaze behavior, EEG, and current self-reported inattention, hyperactivity, and impulsivity were recorded simultaneously and used for model training. Our final model, based on the optimal number of features selected by the minimum-redundancy-maximum-relevance criterion, achieved a promising classification accuracy of 81% in the independent test set. Notably, the extracted EEG-based features did not contribute significantly to this prediction and were therefore excluded from the final model. Our results suggest the potential of applying ecologically valid virtual reality environments and integrating different data modalities to enhance the robustness of ADHD diagnosis.
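A minimal sketch of an SVM pipeline with filter-based feature selection follows; since scikit-learn does not ship an mRMR implementation, mutual-information ranking is used here as a simplified stand-in for the study's criterion, and random arrays merely mark where the multimodal features would go.

```python
# Minimal sketch: SVM classification of ADHD vs. control from multimodal
# features, with filter-based feature selection. Mutual-information
# ranking is a simplified stand-in for the study's mRMR criterion;
# random arrays stand in for the real multimodal feature matrices.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(50, 40)), rng.integers(0, 2, 50)
X_test, y_test = rng.normal(size=(36, 40)), rng.integers(0, 2, 36)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),  # keep the top-k features
    SVC(kernel="rbf", C=1.0),
)
model.fit(X_train, y_train)                  # train on study 1 ...
acc = accuracy_score(y_test, model.predict(X_test))  # ... test on study 2
print(f"Independent-test accuracy: {acc:.2f}")
```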
Project description: Analyzing the gaze accuracy characteristics of an eye tracker is a critical task, as gaze data are frequently affected by non-ideal operating conditions in many consumer eye-tracking applications. Previous research on pattern analysis of gaze data has focused on modeling human visual behaviors and cognitive processes. What remains relatively unexplored are questions related to identifying gaze error sources and quantifying and modeling their impact on eye-tracker data quality. In this study, gaze error patterns produced by a commercial eye-tracking device were studied with the help of machine learning algorithms, such as classifiers and regression models. Gaze data were collected from a group of participants under multiple conditions that commonly affect eye trackers operating on desktop and handheld platforms. These conditions (referred to here as error sources) include variations in user distance, head pose, and eye-tracker pose, and the collected gaze data were used to train the classifier and regression models. While the impacts of the different error sources on gaze data characteristics were nearly impossible to distinguish by visual inspection or from data statistics, the machine learning models successfully identified the impact of each error source and predicted the variability in gaze error levels due to these conditions. The objective of this study was to investigate the efficacy of machine learning methods for detecting and predicting gaze error patterns, enabling an in-depth understanding of the data quality and reliability of eye trackers under unconstrained operating conditions. Code for all the machine learning methods adopted in this study is included in an open repository named MLGaze so that researchers can replicate the principles presented here using data from their own eye trackers.
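The two model families mentioned above might be set up as in this minimal sketch: a classifier that identifies the error source behind a gaze sample and a regressor that predicts the gaze error magnitude; the file and column names are hypothetical placeholders, not the MLGaze repository's code.

```python
# Minimal sketch: a classifier identifying which error source (user
# distance, head pose, tracker pose) produced a gaze sample, and a
# regressor predicting the angular gaze error magnitude. File and column
# names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("gaze_error_data.csv")  # hypothetical file
X = df[["gaze_x", "gaze_y", "target_x", "target_y"]]

# 1) Which condition produced this sample?
clf_acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X, df["error_source"], cv=5).mean()

# 2) How large is the angular gaze error under that condition?
reg_r2 = cross_val_score(RandomForestRegressor(random_state=0),
                         X, df["gaze_error_deg"], cv=5).mean()
print(f"condition accuracy: {clf_acc:.2f}, error R^2: {reg_r2:.2f}")
```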
Project description: Real-time gaze tracking provides crucial input to psychophysics studies and neuromarketing applications. Many modern eye-tracking solutions are expensive, mainly because of high-end hardware specialized for processing infrared camera images. Here, we introduce a deep learning-based approach that uses the video frames of low-cost web cameras. Using DeepLabCut (DLC), an open-source toolbox for extracting points of interest from videos, we obtained facial landmarks critical to gaze location and estimated the point of gaze on a computer screen via a shallow neural network. Tested on three extreme poses, this architecture reached a median error of about one degree of visual angle. Our results contribute to the growing field of deep learning approaches to eye tracking, laying the foundation for further investigation by researchers in psychophysics and neuromarketing.
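The second stage of this pipeline, regressing on-screen gaze coordinates from facial landmarks with a shallow network, could be sketched as follows; random arrays stand in for DeepLabCut landmark outputs and calibration targets, and the network size is an assumption.

```python
# Minimal sketch: a shallow neural network regressing screen gaze
# coordinates from facial landmarks. In the study the landmarks come from
# DeepLabCut; here random arrays stand in for landmark (x, y) positions
# and calibration target locations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(1000, 2 * 12))  # 12 tracked points per frame
gaze_xy = rng.uniform(0, 1, size=(1000, 2))  # normalized screen position

X_tr, X_te, y_tr, y_te = train_test_split(landmarks, gaze_xy,
                                          random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)                          # calibrate on known targets
pred = net.predict(X_te)                     # estimated point of gaze
print("mean abs. error (normalized units):", np.abs(pred - y_te).mean())
```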
Project description: Fetal alcohol spectrum disorder (FASD) is underdiagnosed and often misdiagnosed as attention-deficit/hyperactivity disorder (ADHD). Here, we develop a screening tool for FASD in youth with ADHD symptoms. To develop the prediction model, medical record data from a German university outpatient unit are assessed, including 275 patients aged 0-19 years with FASD, with or without ADHD, and 170 patients aged 0-19 years with ADHD without FASD. We train 6 machine learning models based on 13 selected variables and evaluate their performance. Random forest models yield the best predictions, with a cross-validated AUC of 0.92 (95% confidence interval [0.84, 0.99]). Follow-up analyses indicate that a random forest model with 6 variables - body length and head circumference at birth, IQ, socially intrusive behaviour, poor memory, and sleep disturbance - yields equivalent predictive accuracy. We implement the prediction model in a web-based app called FASDetect, a user-friendly, clinically scalable FASD risk calculator that is freely available at https://fasdetect.dhc-lab.hpi.de.
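A minimal sketch of the reduced 6-variable random forest evaluated by cross-validated AUC follows; the file and column names are hypothetical placeholders for the medical record variables.

```python
# Minimal sketch: a random forest screening model evaluated by
# cross-validated AUC, mirroring the reduced 6-variable model described
# above. File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("fasd_records.csv")  # hypothetical file
features = ["birth_length", "birth_head_circumference", "iq",
            "socially_intrusive_behaviour", "poor_memory",
            "sleep_disturbance"]
X, y = df[features], df["fasd"]       # y: 1 = FASD, 0 = ADHD without FASD

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```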