A randomised controlled trial of feedback to improve patient satisfaction and consultation skills in medical students.
ABSTRACT: BACKGROUND:The use of feedback has been integral to medical student learning, but rigorous evidence of its educational effect is limited, especially regarding the role of patient feedback in clinical teaching and practice improvement. The aim of the Patient Teaching Associate (PTA) Feedback Study was to evaluate whether additional written consumer feedback on patient satisfaction improved consultation skills among medical students and whether multisource feedback (MSF) improved student performance. METHODS:In this single-site, double-blinded randomised controlled trial, 71 eligible medical students from two universities in their first clinical year were allocated to intervention or control and followed up for one semester. They participated in five simulated student-led consultations in a teaching clinic with patient volunteers living with chronic illness. Students in the intervention group received additional written feedback on patient satisfaction combined with guided self-reflection. The control group received the usual immediate formative multisource feedback from tutors, patients and peers. Student characteristics, baseline patient-rated satisfaction scores and tutor-rated consultation skills were measured. RESULTS:Follow-up assessments were completed for the 70 students who attended the MSF program. At the final consultation episode, both groups had improved in patient-rated rapport (P = 0.002), tutor-rated patient-centredness and tutor-rated overall consultation skills (P = 0.01). The intervention group showed significantly better tutor-rated patient-centredness (P = 0.003) compared with the control group. Distress relief, communication comfort and rapport reported by patients, and tutor-rated clinical skills, did not differ significantly between the two groups. CONCLUSIONS:The innovative multisource feedback program effectively improved consultation skills in medical students. Structured written consumer feedback combined with guided student reflection further improved patient-centred practice and effectively enhanced the benefit of an MSF model. This strategy might provide a valuable adjunct to communication skills education for medical students. TRIAL REGISTRATION:Australian New Zealand Clinical Trials Registry Number ACTRN12613001055796.
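The trial's headline result (a between-group difference in tutor-rated patient-centredness after adjusting for baseline) is the kind of comparison that an ANCOVA-style regression makes concrete. Below is a minimal sketch on synthetic data; it is not the trial's analysis code, and the score scale, effect size, and variable names are all assumptions made for illustration.

```python
# Illustrative sketch (not the trial's actual analysis): comparing final
# tutor-rated patient-centredness between intervention and control arms
# while adjusting for baseline scores. All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 70  # students with complete follow-up, as reported
df = pd.DataFrame({
    "group": rng.choice(["control", "intervention"], size=n),
    "baseline": rng.normal(3.5, 0.5, size=n),  # hypothetical rating scale
})
# Simulate a modest intervention effect on the final score.
df["final"] = (df["baseline"] * 0.6
               + (df["group"] == "intervention") * 0.3
               + rng.normal(0, 0.4, size=n))

model = smf.ols("final ~ baseline + C(group)", data=df).fit()
print(model.summary().tables[1])  # group coefficient = adjusted difference
```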
Project description:BACKGROUND:Feedback is a crucial part of medical education, and with ongoing digitalisation, video feedback has come into increasing use. Potentially shameful physician-patient interactions might particularly benefit from it, allowing a meta-perspective view of one's own performance from a distance. We therefore wanted to explore different approaches to delivering video feedback by investigating the following hypotheses: 1. Is the physical presence of a person delivering the feedback more desirable, and associated with improved learning outcomes, compared with using a checklist? 2. Are different approaches to video feedback associated with different levels of shame in students, with a simple checklist likely to be perceived as least embarrassing and receiving feedback in front of a group of fellow students as most embarrassing? METHODS:Second-year medical students had to manage a consultation with a simulated patient. Students received structured video feedback according to one randomly assigned approach: checklist (CL), group (G), student tutor (ST), or teacher (T). Shame (ESS, TOSCA, subjective rating) and effectiveness (subjective ratings, remembered feedback points) were measured. T-tests for dependent samples and ANOVAs were used for statistical analysis. RESULTS:n = 64 students were included. Video feedback was rated significantly less shameful in hindsight than beforehand. Subjectively, there was no significant difference between the four approaches regarding effectiveness or the potential to elicit shame. Objective learning success showed CL to be significantly less effective than the other approaches; additionally, T showed a trend towards being more effective than G or ST. CONCLUSIONS:No single approach was superior as such, but CL was shown to be less effective than G, ST and T. Feelings of shame were higher before watching one's video feedback than in hindsight, with no significant difference between the approaches. It does not seem to make a difference who delivers the video feedback, as long as it is a real person. This opens possibilities to adapt curricula to local standards, preferences, and resource limitations. Further studies should investigate whether the present results can be reproduced when external evaluation and long-term effects are also assessed.
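The abstract names its two analyses: a dependent-samples t-test (shame before vs. in hindsight) and ANOVAs across the four delivery approaches. A minimal sketch of both follows, on synthetic data; the group sizes (16 per arm, i.e. 64/4) and the score distributions are assumptions.

```python
# Sketch of the two reported analyses on synthetic data: a paired t-test
# for shame ratings before vs. after watching the video feedback, and a
# one-way ANOVA comparing the four delivery approaches (CL, G, ST, T).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 64  # included students, as reported

# Paired t-test: shame rated before vs. in hindsight (assumed scale).
shame_before = rng.normal(5.0, 1.2, size=n)
shame_after = shame_before - rng.normal(0.8, 1.0, size=n)  # lower in hindsight
t, p = stats.ttest_rel(shame_before, shame_after)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA: remembered feedback points per approach (assumed means).
groups = {k: rng.poisson(lam, size=16)
          for k, lam in [("CL", 3.0), ("G", 4.5), ("ST", 4.5), ("T", 5.5)]}
f, p = stats.f_oneway(*groups.values())
print(f"ANOVA F = {f:.2f}, p = {p:.4f}")
```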
Project description:Background: Imparting communication skills is given great importance in medical curricula. In addition to standardized assessments, students should communicate with real patients in actual clinical situations during workplace-based assessments and receive structured feedback on their performance. The aim of this project was to pilot a formative testing method for workplace-based assessment. Our investigation centered in particular on whether physicians view the method as feasible and how well students accept it. In addition, we assessed the reliability of the method. Method: As part of the project, 16 students each held two consultations with chronically ill patients at the medical practice where they were completing GP training. These consultations were video-recorded. The trained mentoring physician rated each student's performance and provided feedback immediately following the consultations using the Berlin Global Rating scale (BGR). Two impartial, trained raters also evaluated the videos using the BGR. For qualitative and quantitative analysis, physicians' and students' views on feasibility and acceptance were collected in writing in a partially standardized manner. To assess reliability, test-retest reliability was calculated from the two overall evaluations given by each rater, and inter-rater reliability was determined from the three evaluations of each individual consultation. Results: The formative assessment method was rated positively by both physicians and students. It is relatively easy to integrate into daily routines, and its particular value lies in the personal, structured and recurring feedback. The two overall scores for each patient consultation given by the two impartial raters correlate moderately; agreement among the three raters on individual consultations is low. Discussion: Within the scope of this pilot project, only a small sample of physicians and students could be surveyed, and only to a limited extent. There are indications that the assessment can be improved by integrating more information on medical context and student self-assessments. Despite the current limitations regarding test criteria, it is clear that workplace-based assessment of communication skills in the clinical setting is a valuable addition to the communication curricula of medical schools.
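The two reliability checks described here (test-retest across a rater's two overall scores, inter-rater across the three raters of each consultation) can be sketched with simple correlations. The following is a toy illustration on synthetic BGR-style scores; the rating model, noise levels, and the use of plain Pearson correlations are assumptions, not the project's actual procedure.

```python
# Toy sketch of the two reliability checks on synthetic overall scores.
# Test-retest: a rater's scores across the two consultations per student.
# Inter-rater: pairwise correlations among the three raters.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_students = 16  # as in the pilot

true_skill = rng.normal(4.0, 0.6, size=n_students)  # latent ability

def rate(noise):
    # Hypothetical rating = latent skill + rater-specific noise.
    return true_skill + rng.normal(0, noise, size=n_students)

# Test-retest for the mentoring physician across the two consultations.
r, p = stats.pearsonr(rate(0.5), rate(0.5))
print(f"test-retest r = {r:.2f} (p = {p:.3f})")

# Agreement among the mentor and the two impartial raters.
raters = [rate(0.5), rate(0.7), rate(0.7)]
for (i, a), (j, b) in itertools.combinations(enumerate(raters), 2):
    r, _ = stats.pearsonr(a, b)
    print(f"raters {i} vs {j}: r = {r:.2f}")
```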
Project description:BACKGROUND: Giving and receiving feedback are critical skills and should be taught early in the process of medical education, yet few studies discuss the effect of feedback curricula for first-year medical students. OBJECTIVES: To study short-term and long-term skills and attitudes of first-year medical students after a multidisciplinary feedback curriculum. DESIGN: Prospective pre- vs. post-course evaluation using mixed-methods data analysis. PARTICIPANTS: First-year students at a public university medical school. INTERVENTIONS: We collected anonymous student feedback to faculty before, immediately after, and 8 months after the curriculum and classified comments by recommendation (reinforcing/corrective) and specificity (global/specific). Students also self-rated their comfort with and quality of feedback. We assessed changes in comments (skills) and self-rated abilities (attitudes) across the three time points. MEASUREMENTS AND MAIN RESULTS: Across the three time points, students' evaluations contained more corrective specific comments per evaluation [pre-curriculum mean (SD) 0.48 (0.99); post-curriculum 1.20 (1.7); year-end 0.95 (1.5); p = 0.006]. Students reported increased skill and comfort in giving and receiving feedback and at providing constructive feedback (p < 0.001). However, the number of specific comments on year-end evaluations declined [pre 3.35 (2.0); post 3.49 (2.3); year-end 2.8 (2.1); p = 0.008], as did students' self-rated ability to give specific comments. CONCLUSION: Teaching feedback to early medical students resulted in improved skills in delivering corrective specific feedback and enhanced comfort with feedback. However, students' overall ability to deliver specific feedback decreased over time.
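The abstract reports p-values for comment counts compared across three time points but does not name the test used. As one plausible repeated-measures choice, the sketch below applies a Friedman test to synthetic per-student counts whose means mirror the reported values; the sample size and Poisson distribution are assumptions.

```python
# Illustrative comparison of corrective-specific comment counts across
# the three time points using a Friedman (repeated-measures) test.
# Synthetic data; the study's actual test is not stated in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 80  # hypothetical number of students

pre = rng.poisson(0.48, size=n)       # means mirror the reported 0.48
post = rng.poisson(1.20, size=n)      # 1.20
year_end = rng.poisson(0.95, size=n)  # 0.95

stat, p = stats.friedmanchisquare(pre, post, year_end)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")
```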
Project description:OBJECTIVES:To determine whether an app-based software system to support production and storage of assessment feedback summaries makes workplace-based assessment easier for clinical tutors and enhances the educational impact on medical students. METHODS:We monitored our workplace assessor app's usage by Year 3 to 5 medical students in 2014-15 and conducted focus groups with Year 4 medical students and interviews with clinical tutors who had used the apps. Analysis was by constant comparison using a framework based on elements of van der Vleuten's utility index. RESULTS:The app may enhance the content of feedback for students. Using a screen may be distracting if the app is used during feedback discussions. Educational impact was reduced by students' perceptions that an easy-to-produce feedback summary is less valuable than one requiring more tutor time and effort. Tutors' typing and dictation skills and their familiarity with mobile devices varied; this influenced their willingness to use the assessment and feedback mobile app rather than the equivalent web app. Electronic feedback summaries had more real and perceived uses than anticipated, both for tutors and students, including perceptions that they were for the school rather than the student. CONCLUSIONS:Electronic workplace-based assessment systems can be acceptable to tutors and can make giving detailed written feedback more practical, but can interrupt the social interaction required for the feedback conversation. Tutor training and flexible systems will be required to minimise unwanted consequences. The educational impact on both tutors and students of providing pre-formulated advice within the app is worth further study.
Project description:Multisource feedback (MSF) has potential value in learner assessment, but has not been broadly implemented nor studied in emergency medicine (EM). This study aimed to adapt existing MSF instruments for emergency department implementation, measure feasibility, and collect initial validity evidence to support score interpretation for learner assessment. Residents from eight U.S. EM residency programs completed a self-assessment and were assessed by eight physicians, eight nonphysician colleagues, and 25 patients using unique instruments. Instruments included a five-point rating scale to assess interpersonal and communication skills, professionalism, systems-based practice, practice-based learning and improvement, and patient care. MSF feasibility was measured by the percentage of residents who collected the target number of instruments. To develop internal structure validity evidence, Cronbach's alpha was calculated as a measure of internal consistency. A total of 125 residents collected a mean of 7.0 physician assessments (n = 752), 6.7 nonphysician assessments (n = 775), and 17.8 patient assessments (n = 2,100), with respective response rates of 67.2%, 75.2%, and 77.5%. Cronbach's alpha values for physicians, nonphysicians, patients, and self were 0.97, 0.97, 0.96, and 0.96, respectively. This study demonstrated that MSF implementation is feasible, although challenging. The tool and its scale demonstrated excellent internal consistency. EM educators may find the adaptation process and tools applicable to their learners.
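Cronbach's alpha, the internal-consistency measure reported above, follows the standard formula α = k/(k−1)·(1 − Σσ²ᵢ/σ²ₓ) over an items-by-respondents matrix. A self-contained sketch, with synthetic data standing in for the five-point MSF ratings:

```python
# Cronbach's alpha from a respondents-by-items score matrix.
# The data below are synthetic stand-ins for five-point MSF ratings.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = instrument items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(4)
latent = rng.normal(0, 1, size=(200, 1))             # shared trait
items = latent + rng.normal(0, 0.4, size=(200, 5))   # 5 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```

High alphas like the reported 0.96-0.97 arise when items are strongly intercorrelated, as the shared latent trait in this toy example produces.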
Project description:Scientific writing is an important communication and learning tool in neuroscience, yet it is a skill not adequately cultivated in introductory undergraduate science courses. Proficient, confident scientific writers are produced by providing specific knowledge about the writing process, combined with a clear student understanding about how to think about writing (also known as metacognition). We developed a rubric for evaluating scientific papers and assessed different methods of using the rubric in inquiry-based introductory biology classrooms. Students were either 1) given the rubric alone, 2) given the rubric, but also required to visit a biology subject tutor for paper assistance, or 3) asked to self-grade paper components using the rubric. Students who were required to use a peer tutor had more negative attitudes towards scientific writing, while students who used the rubric alone reported more confidence in their science writing skills by the conclusion of the semester. Overall, students rated the use of an example paper or grading rubric as the most effective ways of teaching scientific writing, while rating peer review as ineffective. Our paper describes a concrete, simple method of infusing scientific writing into inquiry-based science classes, and provides clear avenues to enhance communication and scientific writing skills in entry-level classes through the use of a rubric or example paper, with the goal of producing students capable of performing at a higher level in upper level neuroscience classes and independent research.
Project description:BACKGROUND: While the Accreditation Council for Graduate Medical Education recommends multisource feedback (MSF) of resident performance, there is no uniformly accepted MSF tool for emergency medicine (EM) trainees, and the process of obtaining MSF in EM residencies is untested. OBJECTIVE: To determine the feasibility of an MSF program and evaluate the intraclass and interclass correlation of a previously reported resident professionalism evaluation, the Humanism Scale (HS). METHODS: To assess 10 third-year EM residents, we distributed an anonymous 9-item modified HS (EM-HS) to emergency department nursing staff, faculty physicians, and patients. The evaluators rated resident performance on a 1 to 9 scale (needs improvement to outstanding). Residents were asked to complete a self-evaluation of performance, using the same scale. ANALYSIS: Generalizability coefficients (Eρ²) were used to assess the reliability within evaluator classes. The mean score for each of the 9 questions provided by each evaluator class was calculated for each resident. Correlation coefficients were used to evaluate correlation between rater classes for each question on the EM-HS. Eρ² and correlation values greater than 0.70 were deemed acceptable. RESULTS: EM-HS forms were obtained from 44 nurses and 12 faculty physicians. The residents had an average of 13 evaluations by emergency department patients. Reliability within faculty and nurses was acceptable, with Eρ² values of 0.79 and 0.83, respectively. Interclass reliability was good between faculty and nurses. CONCLUSIONS: An MSF program for EM residents is feasible. Intraclass reliability was acceptable for faculty and nurses. However, reliable feedback from patients requires a larger number of patient evaluations.
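The interclass check described above amounts to averaging each resident's scores within an evaluator class, then correlating those per-resident means across classes. A toy sketch on synthetic data follows; the latent-trait model and noise levels are assumptions, and the study's actual generalizability analysis (Eρ²) is more involved than this.

```python
# Sketch of interclass correlation: per-resident mean scores are computed
# within each evaluator class, then correlated across classes. Synthetic
# data; patient ratings are modelled as noisier, echoing the conclusion
# that reliable patient feedback needs more evaluations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_res = 10  # third-year residents, as reported
true_prof = rng.normal(7.0, 0.8, size=n_res)  # latent professionalism

def class_means(n_raters, noise):
    ratings = true_prof[:, None] + rng.normal(0, noise, size=(n_res, n_raters))
    return ratings.clip(1, 9).mean(axis=1)  # 1-9 scale, averaged per resident

faculty = class_means(n_raters=12, noise=0.7)
nurses = class_means(n_raters=44, noise=0.7)
patients = class_means(n_raters=13, noise=1.5)  # fewer, noisier raters

for name, other in [("nurses", nurses), ("patients", patients)]:
    r, _ = stats.pearsonr(faculty, other)
    print(f"faculty vs {name}: r = {r:.2f}")
```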
Project description:BACKGROUND:Multisource feedback (MSF) is increasingly being used to assess trainee performance, with different assessor groups fulfilling a crucial role in the utility of assessment data. However, in health professions education, research on assessor behaviors in MSF is limited. When assessing trainee performance in work settings, assessors use multidimensional conceptualizations of what constitutes effective performance, also called personal performance theories, to distinguish between various behaviors and subcompetencies. This may not only explain assessor variability in MSF, but also result in differing acceptance (and use) of assessment data for developmental purposes. The purpose of this study was to explore the performance theories of various assessor groups (residents and nurses) when assessing the performance of residents. METHODS:A constructivist, inductive qualitative research approach and semi-structured interviews following MSF were used to explore the performance theories of 14 nurses and 15 residents in the department of internal medicine at Aga Khan University (AKU). Inductive thematic content analysis of interview transcripts was used to identify and compare key dimensions in residents' and nurses' performance theories used in the evaluation of resident performance. RESULTS:Seven major themes, reflecting key dimensions of assessors' performance theories, emerged from the qualitative data: communication skills, patient care, accessibility, teamwork skills, responsibility, medical knowledge and professional attitude. There were considerable overlaps, but also meaningful differences, in the performance theories of residents and nurses, especially with respect to accessibility, teamwork and medical knowledge. CONCLUSION:Residents' and nurses' performance theories for assessing resident performance overlap to some extent, yet also show meaningful differences with respect to the performance dimensions they pay attention to or consider most important. In MSF, different assessor groups may therefore hold different performance theories, depending on their role. Our results further our understanding of assessor source effects in MSF. Implications of our findings relate to the implementation of MSF, the design of rating scales, and the interpretation and use of MSF data for selection and performance improvement.
Project description:In higher education, student ratings are often used to evaluate and improve the quality of courses and professors' instructional skills. Unfortunately, student-rating questionnaires rarely generate specific feedback for professors to improve their instructional skills, and the impact of student ratings on professors' instructional skills has proven to be low. This study concerns the psychometric properties of the Instructional Skills Questionnaire (ISQ), a new theory-based student-rating-of-teaching questionnaire with specific questions concerning lecturing skills. The ISQ is administered after a single lecture. This way, it serves as a formative feedback instrument for university professors during courses, assisting them to improve and (re-)evaluate their skills if necessary. The ISQ covers seven dimensions of professors' instructional skills and three self-perceived student learning outcomes. In this study, Dutch students in 75 courses rated three 90-minute lectures (T1, T2 and T3) of their respective professors using the ISQ. In total, 14,298 ISQ forms were used to rate 225 lectures. The teacher-level reliabilities of the seven dimensions were found to be good at each measurement occasion. In addition, confirmatory multilevel factor analysis confirmed a seven-dimensional factor structure at the teacher level at each measurement occasion. Furthermore, specific teacher-level factors significantly predicted students' self-assessed learning outcomes. These results partly supported the proposed theoretical framework on the relationship between the ISQ teaching dimensions and the student learning process, and provided evidence for the construct validity of the instrument. In sum, the ISQ is a reliable and valid instrument that can be used by professors and faculty development centers to assess and improve university teaching.
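Teacher-level reliability of a rating dimension can be gauged, in a simplified form, as the intraclass correlation of the teacher mean, ICC(1,k), from one-way ANOVA components. The sketch below uses synthetic ratings with students nested in teachers; the study itself used multilevel models, which this deliberately simplifies, and the group sizes only roughly mirror the reported scale (75 courses, ~64 forms per lecture).

```python
# Simplified teacher-level reliability: ICC(1,k) for the teacher mean,
# from one-way ANOVA components on synthetic student ratings.
import numpy as np

rng = np.random.default_rng(6)
n_teachers, n_students = 75, 60  # roughly the study's scale (assumed)

teacher_effect = rng.normal(0, 0.5, size=(n_teachers, 1))
ratings = 3.8 + teacher_effect + rng.normal(0, 0.8, size=(n_teachers, n_students))

# One-way ANOVA mean squares (balanced design).
grand = ratings.mean()
ms_between = n_students * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n_teachers - 1)
ms_within = ratings.var(axis=1, ddof=1).mean()

icc1k = (ms_between - ms_within) / ms_between  # reliability of teacher mean
print(f"ICC(1,k) = {icc1k:.2f}")
```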
Project description:Sufficient teaching and assessment of clinical skills in the undergraduate setting is becoming increasingly important. In a surgical skills-lab course at the Medical University of Innsbruck, fourth-year students were taught using DOPS (direct observation of procedural skills). We analyzed whether DOPS worked in this setting, which performance levels could be reached compared with tutor teaching (one tutor, five students), and which curricular side effects could be observed. In a prospective randomized trial in summer 2013 (April - June), four competence-level-based skills were taught in small groups during one week: surgical abdominal examination, urethral catheterization (phantom), digital rectal examination (phantom), and handling of central venous catheters. Group A was taught with DOPS, group B with a classical tutor system. Both groups underwent an OSCE (objective structured clinical examination) for assessment. 193 students were included in the study. Altogether 756 OSCEs were carried out, 209 (27.6%) in the DOPS group and 547 (72.3%) in the tutor group. Both groups reached high performance levels. In the first month there was a statistically significant difference (p < 0.05): 95% positive OSCE items in the DOPS group versus 88% in the tutor group. In the following months the performance rates no longer differed and converged to 90% in both groups. In practical skills, the analysis revealed a high correspondence between positive DOPS (92.4%) and OSCE (90.8%) results. As shown by our data, DOPS produces high performance of clinical skills and works well in the undergraduate setting. Given the high correspondence between DOPS and OSCE results, DOPS should be considered as the preferred assessment tool in a student skills-lab. The convergence of performance rates in the months after the initial superiority of DOPS could be explained by an interaction between DOPS and the tutor system: DOPS elements seem to have improved tutoring, and with it performance rates. DOPS in a student skills-lab affords structured feedback and assessment without increased personnel or financial resources compared with classic small-group training. In summary, this study shows that DOPS represents an efficient method for teaching clinical skills. Its effects on didactic culture reach beyond the positive influence on performance rates.
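The first-month comparison (95% positive OSCE items under DOPS vs. 88% under tutoring, p < 0.05) is a two-proportion comparison; a chi-squared test on a 2x2 table is one standard way to run it. The item counts below are assumed purely for illustration, since the abstract reports only percentages and the significance level.

```python
# Illustrative two-proportion comparison of positive OSCE items between
# the DOPS and tutor groups via a chi-squared test on a 2x2 table.
# Item totals are hypothetical; only the percentages come from the study.
import numpy as np
from scipy import stats

dops_items, tutor_items = 800, 2000  # assumed item totals
# rows: DOPS, tutor; columns: positive items, negative items
table = np.array([
    [round(0.95 * dops_items), round(0.05 * dops_items)],
    [round(0.88 * tutor_items), round(0.12 * tutor_items)],
])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```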