Use of Eye-Tracking Technology by Medical Students Taking the Objective Structured Clinical Examination: Descriptive Study.
ABSTRACT: BACKGROUND:The objective structured clinical examination (OSCE) is a test used throughout Spain to evaluate the clinical competencies, decision making, problem solving, and other skills of sixth-year medical students. OBJECTIVE:The main goal of this study is to explore the possible applications and utility of portable eye-tracking systems in the setting of the OSCE, particularly questions associated with attention and engagement. METHODS:We used a portable Tobii Glasses 2 eye tracker, which allows real-time monitoring of where the students were looking and records the voice and ambient sounds. We then performed qualitative and quantitative analyses of the fields of vision, the gaze points attracting attention, and the visual itinerary. RESULTS:Eye-tracking technology was used in the OSCE with no major issues. This portable system was of the greatest value at the patient-simulator and mannequin stations, where interaction with the simulated patient or areas of interest in the mannequin can be quantified. This technology also proved useful for better identifying the areas of interest in the medical images provided. CONCLUSIONS:Portable eye trackers offer the opportunity to improve the objective evaluation of candidates and the self-evaluation of the stations used as well as medical simulations by examiners. We suggest that this technology has enough resolution to identify where a student is looking and could be useful for developing new approaches for evaluating specific aspects of clinical competencies.
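The quantitative analysis described above amounts to mapping each gaze sample onto predefined areas of interest (AOIs) and accumulating dwell time. A minimal sketch of that idea, assuming a simplified export of normalized (x, y) gaze coordinates and an illustrative 50 Hz sampling rate (the AOI names and data format are hypothetical, not the actual Tobii Glasses 2 export schema):

```python
SAMPLE_RATE_HZ = 50  # assumed sampling rate, for illustration only

# Each AOI is a named rectangle in normalized scene coordinates (x0, y0, x1, y1).
aois = {
    "simulated_patient_face": (0.30, 0.10, 0.70, 0.45),
    "mannequin_chest": (0.25, 0.50, 0.75, 0.90),
}

def dwell_times(gaze_samples, aois, sample_rate_hz=SAMPLE_RATE_HZ):
    """Sum the time spent gazing inside each AOI, given (x, y) gaze points."""
    dwell = {name: 0.0 for name in aois}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += 1.0 / sample_rate_hz
    return dwell

# Toy example: 100 samples (2 s) on the face, 50 samples (1 s) on the chest.
samples = [(0.5, 0.3)] * 100 + [(0.5, 0.7)] * 50
print(dwell_times(samples, aois))
```

Real analyses would additionally distinguish fixations from saccades and handle data loss, but the AOI-hit-testing core is the same.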
Project description:The Objective Structured Clinical Examination (OSCE) is increasingly used at medical schools to assess practical competencies. To compare the outcomes of students at different medical schools, we introduced standardized OSCE stations with identical checklists. We investigated examiner bias at standardized OSCE stations for knee- and shoulder-joint examinations, which were implemented into the surgical OSCE at five different medical schools. The checklists for the assessment consisted of part A for knowledge and performance of the skill and part B for communication and interaction with the patient. At each medical faculty, one reference examiner also scored independently of the local examiner. The scores from both examiners were compared and analysed for inter-rater reliability and correlation with the level of clinical experience. Possible gender bias was also evaluated. In part A of the checklist, local examiners graded students higher than the reference examiner did; in part B, no consistent trend was found. The inter-rater reliability was weak, and the scoring correlated only weakly with the examiner's level of experience. Female examiners rated generally higher, but male examiners scored significantly higher if the examinee was female. These findings of examiner effects, even in standardized situations, may influence outcomes even when students perform equally well. Examiners need to be made aware of these biases prior to examining.
Project description:BACKGROUND:Generation Z is starting to reach college age. They have adopted technology from an early age and have a deep dependence on it; therefore, they have become more drawn to the virtual world. M-learning has experienced huge growth in recent years, both in the medical context and in medical and health sciences education. Ultrasound imaging is an important diagnosis technique in physiotherapy, especially in sports pathology. M-learning systems could be useful tools for improving the comprehension of ultrasound concepts and the acquisition of professional competencies. OBJECTIVE:The purpose of this study was to evaluate the efficacy and use of an interactive platform accessible through mobile devices-Ecofisio-using ultrasound imaging for the development of professional competencies in the evaluation and diagnosis of sports pathologies. METHODS:Participants included 110 undergraduate students who were placed into one of two groups of a randomized controlled multicenter study: control group (ie, traditional learning) and experimental group (ie, Ecofisio mobile app). Participants' theoretical knowledge was assessed using a multiple-choice questionnaire (MCQ); students were also assessed by means of the Objective Structured Clinical Examination (OSCE). Moreover, a satisfaction survey was completed by the students. RESULTS:The statistical analyses revealed that Ecofisio was effective in most of the processes evaluated when compared with the traditional learning method: all OSCE stations, P<.001; MCQ, 43 versus 15 students passed in the Ecofisio and control groups, respectively, P<.001. Moreover, the results revealed that the students found the app to be attractive and useful. CONCLUSIONS:The Ecofisio mobile app may be an effective way for physiotherapy students to obtain adequate professional competencies regarding evaluation and diagnosis of sports pathologies. 
TRIAL REGISTRATION:ClinicalTrials.gov NCT04138511; https://clinicaltrials.gov/ct2/show/NCT04138511.
Project description:BACKGROUND:Patient safety (PS) receives limited attention in health professional curricula. We developed and pilot tested four Objective Structured Clinical Examination (OSCE) stations intended to reflect socio-cultural dimensions in the Canadian Patient Safety Institute's Safety Competency Framework. SETTING AND PARTICIPANTS:18 third-year undergraduate medical and nursing students at a Canadian University. METHODS:OSCE cases were developed by faculty with clinical and PS expertise with assistance from expert facilitators from the Medical Council of Canada. Stations reflect domains in the Safety Competency Framework (ie, managing safety risks, culture of safety, communication). Stations were assessed by two clinical faculty members. Inter-rater reliability was examined using weighted κ values. Additional aspects of reliability and OSCE performance are reported. RESULTS:Assessors exhibited excellent agreement (weighted κ scores ranged from 0.74 to 0.82 for the four OSCE stations). Learners' scores varied across the four stations. Nursing students scored significantly lower (p<0.05) than medical students on three stations (nursing student mean scores=1.9, 1.9 and 2.7; medical student mean scores=2.8, 2.9 and 3.5 for stations 1, 2 and 3, respectively, where 1=borderline unsatisfactory, 2=borderline satisfactory and 3=competence demonstrated). 7/18 students (39%) scored below 'borderline satisfactory' on one or more stations. CONCLUSIONS:Results show (1) four OSCE stations evaluating socio-cultural dimensions of PS achieved variation in scores and (2) performance on this OSCE can be evaluated with high reliability, suggesting a single assessor per station would be sufficient. Differences between nursing and medical student performance are interesting; however, it is unclear what factors explain these differences.
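Agreement statistics of the kind reported above are typically computed as weighted Cohen's κ, which discounts disagreements by their distance on the ordinal grade scale. A minimal sketch with linear weights, using an invented four-point grade scale purely for illustration (not the study's actual data):

```python
def weighted_kappa(rater_a, rater_b, categories):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    pa = [0.0] * k  # marginal frequencies, rater A
    pb = [0.0] * k  # marginal frequencies, rater B
    obs = 0.0       # observed weighted disagreement
    for a, b in zip(rater_a, rater_b):
        pa[idx[a]] += 1.0 / n
        pb[idx[b]] += 1.0 / n
        obs += abs(idx[a] - idx[b]) / (k - 1) / n
    # Expected weighted disagreement under independence of the raters.
    exp = sum(pa[i] * pb[j] * abs(i - j) / (k - 1)
              for i in range(k) for j in range(k))
    return 1.0 - obs / exp

a = [1, 2, 2, 3, 3, 3, 4, 4]  # invented grades, examiner A
b = [1, 2, 3, 3, 3, 2, 4, 4]  # invented grades, examiner B
print(round(weighted_kappa(a, b, [1, 2, 3, 4]), 3))  # → 0.765
```

Library implementations (e.g. `sklearn.metrics.cohen_kappa_score` with `weights="linear"`) compute the same quantity.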
Project description:Tobii eye tracking was compared with webcam-based observer scoring on an animation-viewing measure of attention (Early Childhood Vigilance Test; ECVT) to evaluate the feasibility of automating measurement and scoring. Outcomes from both scoring approaches were compared with the Mullen Scales of Early Learning (MSEL), Color-Object Association Test (COAT), and Behavior Rating Inventory of Executive Function for preschool children (BRIEF-P). A total of 44 children 44 to 65 months of age were evaluated with the ECVT, COAT, MSEL, and BRIEF-P. Tobii X2-30 portable infrared cameras were programmed to monitor pupil direction during the ECVT 6-min animation and compared with observer-based PROCODER webcam scoring. Children watched 78% of the cartoon (Tobii) compared with 67% (webcam scoring), although the 2 measures were highly correlated (r = .90, p = .001); it is possible for two such measures to be highly correlated even if one is consistently higher than the other (Bergemann et al., 2012). Both ECVT Tobii and webcam ECVT measures significantly correlated with COAT immediate recall (r = .37, p = .02 vs. r = .38, p = .01, respectively) and total recall (r = .33, p = .06 vs. r = .42, p = .005) measures. However, neither the Tobii eye tracking nor PROCODER webcam ECVT measures of attention correlated with MSEL composite cognitive performance or BRIEF-P global executive composite. ECVT scoring using Tobii eye tracking is feasible with at-risk very young African children and consistent with webcam-based scoring approaches in their correspondence to one another and other neurocognitive performance-based measures. By automating measurement and scoring, eye-tracking technologies can improve the efficiency and help better standardize ECVT testing of attention in younger children.
This holds promise for other neurodevelopmental tests where eye movements, tracking, and gaze length can provide important behavioral markers of neuropsychological and neurodevelopmental processes associated with such tests.
Project description:Core competencies have progressively gained importance in medical education. In other contexts, especially personnel selection and development, assessment centers (ACs) are used to assess competencies, but there is only a limited number of studies on competency-based ACs in medical education. To the best of our knowledge, the present study provides the first data on the criterion-related validity of a competency-based AC in medical education. We developed an AC tailored to measure core competencies relevant to medical education (social-ethical, communicative, self, and teaching) and tested its validity in n=30 first-year medical students using 3- to 4-year follow-up measures such as (a) objective structured clinical examinations (OSCE) on basic clinical skills (n=26), (b) OSCE on communication skills (n=21), and (c) peer feedback (n=18). The AC contained three elements: interview, group discussion, and role play. Additionally, a self-report questionnaire was provided as a basis for the interview. Baseline AC average score and teaching competency correlated moderately with the communication OSCE average score (r=0.41, p=0.03, and r=0.38, p=0.04, respectively). Social-ethical competency in the AC showed a very strong convergent association with the communication OSCE average score (r=0.60, p<0.01). The AC total score also showed a moderate correlation with the overall peer feedback score provided in Year 4 (r=0.38, p=0.06). In addition, communicative competency correlated strongly with the overall peer feedback (r=0.50, p=0.02). We found predominantly low and insignificant correlations between the AC and the OSCE on basic clinical skills (r=-0.33 to 0.30, all p>0.05). The results showed that competency-based ACs can be used at a very early stage of medical training to successfully predict future performance in core competencies.
Project description:This article presents data from 278 six-month-old infants who completed a visual expectation paradigm in which audiovisual stimuli were first presented randomly (random phase), and then in a spatial pattern (pattern phase). Infants' eye gaze behaviour was tracked with a 60 Hz Tobii eye-tracker in order to measure two types of looking behaviour: reactive looking (i.e., latency to shift eye gaze in reaction to the appearance of stimuli) and anticipatory looking (i.e., percentage of time spent looking at the location where the next stimulus is about to appear during the inter-stimulus interval). Data pertaining to missing data and task order effects are presented. Further analyses show that infants' reactive looking was faster in the pattern phase, compared to the random phase, and their anticipatory looking increased from random to pattern phases. Within the pattern phase, infants' reactive looking showed a quadratic trend, with reactive looking time latencies peaking in the middle portion of the phase. Similarly, within the pattern phase, infants' anticipatory looking also showed a quadratic trend, with anticipatory looking peaking during the middle portion of the phase.
Project description:Increasingly, communicative competencies are becoming a permanent feature of training and assessment in German-speaking medical schools (n=43; Germany, Austria, Switzerland - "D-A-CH"). In support of further curricular development of communicative competencies, the survey by the "Communicative and Social Competencies" (KusK) committee of the German Society for Medical Education (GMA) systematically appraises the scope of and form in which teaching and assessment take place. The iterative online questionnaire, developed in cooperation with KusK, comprises 70 questions regarding instruction (n=14), assessment (n=48), and local conditions (n=5), with three fields for further remarks. Per location, two to three individuals who were familiar with the respective institute's curriculum were invited to take part in the survey. Thirty-nine medical schools (40 degree programmes) took part in the survey. Communicative competencies are taught in all of the programmes. Ten degree programmes have a longitudinal curriculum for communicative competencies; 25 programmes offer this in part. Sixteen of the 40 programmes use the Basler Consensus Statement for orientation. In over 80% of the degree programmes, communicative competencies are taught in the second and third year of studies. Almost all of the programmes work with simulated patients (n=38) and feedback (n=37). Exams are exclusively summative (n=11), exclusively formative (n=3), or both summative and formative (n=16) and usually take place in the fifth or sixth year of studies (n=22 and n=20). Apart from written examinations (n=15) and presentations (n=9), practical examinations predominate (OSCE, n=31; workplace-based assessment [WPA], n=8), usually with self-developed scales (OSCE, n=19). With regard to the examiners' training and the manner of results-reporting to the students, there is high variance. Instruction in communicative competencies has been implemented at all 39 of the participating medical schools.
For the most part, communicative competencies instruction in the D-A-CH region takes place in small groups and is tested using the OSCE. The challenges for further curricular development lie in the expansion of feedback, the critical evaluation of appropriate assessment strategies, and the quality assurance of exams.
Project description:Background:A residency program's intern cohort comprises individuals from different medical schools that place varying levels of emphasis on Core Entrustable Professional Activities for Entering Residency (CEPAERs). Program directors have expressed concerns about the preparedness of medical school graduates. Though guiding principles for implementation of the CEPAERs have been published, studies using this framework to assess interns' baseline skills during orientation are limited. Objective:A CEPAER-based objective structured clinical examination (OSCE) was implemented with the aims to (1) assess each intern's baseline clinical skills and provide formative feedback; (2) determine an intern's readiness for resident responsibilities; (3) inform individualized education plans; and (4) address identified gaps through curricular change. Methods:During orientation, all 33 interns from internal medicine (categorical, preliminary, and medicine-psychiatry) participated in the OSCE. Six 20-minute stations evaluated 8 EPAs. Faculty completed a global assessment, and standardized patients completed a communications checklist and global assessment. All interns completed a self-assessment of baseline skills and a post-OSCE survey. Results:Stations assessing handoffs, informed consent, and the subjective, objective, assessment, and plan (SOAP) note were the lowest-performing stations. Interns performed lower in skills for which they did not report previous training. Formal instruction was incorporated into didactic sessions for the lowest-performing stations. The majority of interns indicated that the assessment was useful and that immediate feedback was beneficial. Conclusions:This OSCE during orientation offers just-in-time baseline information regarding interns' critical skills and may lead to individualized feedback as well as continuous curricular improvement.
Project description:Multiple mini-interviews (MMIs) are becoming increasingly popular for the selection of medical students. In this work, we examine the validity evidence for the Hamburg MMI. We conducted three follow-up studies for the 2014 cohort of applicants to medical school over the course of two years. We calculated Spearman's rank correlation (ρ) between MMI results and (1) emotional intelligence measured by the Trait Emotional Intelligence Questionnaire (TEIQue-SF) and the Situational Test of Emotion Management (STEM), (2) supervisors' and practice team members' evaluations of psychosocial competencies and suitability for the medical profession after a one-week 1:1 teaching in a general practice (GP), and (3) objective structured clinical examination (OSCE) scores. There were no significant correlations between MMI results and the TEIQue-SF (ρ=.07, p>.05) or the STEM (ρ=.05, p>.05). MMI results could significantly predict GP evaluations of psychosocial competencies (ρ=.32, p<.05) and suitability for the medical profession (ρ=.42, p<.01) as well as OSCE scores (ρ=.23, p<.05). The MMI remained a significant predictor of these outcomes in a robust regression model including gender and age as control variables. Our findings suggest that MMIs can measure competencies that are relevant in a practical context. However, these competencies do not seem to be related to emotional intelligence as measured by self-report or situational judgement test.
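Spearman's ρ, the statistic used throughout studies like this one, is simply the Pearson correlation computed on rank-transformed scores. A self-contained sketch with invented MMI and OSCE scores (real analyses would use a statistics package such as `scipy.stats.spearmanr`):

```python
def ranks(values):
    """1-based ranks, with tied values assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

mmi = [62, 71, 55, 80, 68]   # invented MMI scores
osce = [58, 75, 60, 82, 65]  # invented OSCE scores
print(round(spearman_rho(mmi, osce), 2))  # → 0.9
```

Because it operates on ranks, ρ is robust to monotone rescaling of either score, which is why it suits ordinal assessment data.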
Project description:BACKGROUND:Peer-assisted learning (PAL) refers to a learning activity whereby students of similar academic level teach and learn from one another. Groupe de perfectionnement des habiletés cliniques (Clinical Skills Improvement Group), a student organization at Université Laval, Canada, propelled PAL into the digital era by creating a collaborative virtual patient platform. Medical interviews can be completed in pairs (a student-patient and a student-doctor) through an interactive Web-based application, which generates a score (weighted for key questions) and automated feedback. OBJECTIVES:The aim of the study was to measure the pedagogical impact of the application on the score at medical interview stations at the summative preclerkship Objective Structured Clinical Examination (OSCE). METHODS:We measured the use of the application (cases completed, mean score) in the 2 months preceding the OSCE. We also accessed the results of medical interview stations at the preclerkship summative OSCE. We analyzed whether using the application was associated with higher scores and/or better passing grades (≥60%) at the OSCE. Finally, we produced an online form where students could comment on their appreciation of the application. RESULTS:Of the 206 students completing the preclerkship summative OSCE, 170 (82.5%) were registered users on the application, completing a total of 3133 cases (18 per active user on average, 7 minutes per case on average). The appreciation questionnaire was answered online by 45 students, who mentioned appreciating the intuitive, easy-to-use, and interactive design, the diversity of cases, and the automated feedback. Using the application was associated with reduced reported stress, improved scores (P=.04), and a trend toward improved passing rates (P=.11) at the preclerkship summative OSCE. CONCLUSIONS:This study suggests that PAL can go far beyond small-group teaching, showing students' potential to create helpful pedagogical tools for their peers.