McMaster Modular Assessment Program (McMAP) Through the Years: Residents' Experience With an Evolving Feedback Culture Over a 3-year Period.
ABSTRACT: Background: Assessing resident competency in emergency department settings requires observing a substantial number of work-based skills and tasks. The McMaster Modular Assessment Program (McMAP) is a novel, workplace-based assessment (WBA) system that uses task-specific and global low-stakes assessments of resident performance. We describe the evaluation of a WBA program 3 years after implementation. Methods: We used a qualitative approach, conducting focus groups with resident physicians in all 5 postgraduate years (n = 26) who used McMAP as part of McMaster University's emergency medicine residency program. Responses were triangulated using a follow-up written survey. Data were analyzed using theory-based thematic analysis. An audit trail was reviewed to ensure that all themes were captured. Results: Findings were organized at the level of the learner (residents), faculty, and system. Residents identified elements of McMAP that they perceived as supporting or inhibiting learning. They shared their opinions on the feasibility of completing daily WBAs, their perceptions and use of rating scales, and the value of structured written and verbal feedback from faculty. Residents also commented extensively on the evolving and improving feedback culture created within our system. Conclusion: The study describes an evolving culture of feedback that promotes the process of informed self-assessment. A programmatic approach to WBAs can foster opportunities for feedback, but a professional culture change is required to implement and encourage their routine use. Barriers such as unfamiliarity with assessment system logistics, faculty discomfort with providing feedback, and the need to empower residents to ask faculty for direct observations and assessments must be addressed to fully realize the potential of a continuous, programmatic WBA system. These findings may inform future research identifying the key components of successful implementation of a programmatic workplace-based assessment system.
Project description: Background: Entrustable professional activities (EPAs) in competency-based undergraduate medical education (UME) have led to new formative workplace-based assessments (WBAs) using entrustment-supervision scales in clerkships. We conducted an observational, prospective cohort study to explore the usefulness of a WBA designed to assess core EPAs in a psychiatry clerkship. Methods: We analyzed changes in students' self-entrustment ratings and the supervisors' ratings per EPA. The timing and frequency of learner-initiated WBAs based on a prospective entrustment-supervision scale, and the resultant narrative feedback, were analyzed quantitatively and qualitatively. Predictors of indirect supervision levels were explored via regression analysis, and narrative feedback was coded using thematic content analysis. Students evaluated the WBA after each clerkship rotation. Results: EPA 1 ("Take a patient's history"), EPA 2 ("Assess physical & mental status"), and EPA 8 ("Document & present a clinical encounter") were most frequently used for learner-initiated WBAs throughout the clerkship rotations in a sample of 83 students. Clinical residents signed off on the majority of the WBAs (71%). EPAs 1, 2, and 8 showed the largest increases in self-entrustment and received most of the indirect supervision level ratings. We found a moderate, positive correlation between self-entrusted supervision levels at the end of the clerkship and the number of documented entrustment-supervision ratings per EPA (p < 0.0001). The number of entrustment ratings explained 6.5% of the variance in the supervisors' ratings for EPA 1. Narrative feedback was documented for 79% (n = 214) of the WBAs. Most narratives addressed the Medical Expert role (77%, n = 208) and used reinforcement (59%, n = 161) as a feedback strategy. Students perceived the feedback as beneficial. Conclusions: Using formative WBAs with an entrustment-supervision scale and prompts for written feedback facilitated targeted, high-quality feedback and effectively supported students' development toward self-entrusted, indirect supervision levels.
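The correlation and variance-explained analyses above lend themselves to a compact worked example. Below is a minimal sketch in Python; the data are entirely hypothetical (the study's raw ratings are not reproduced here), and `n_ratings` and `self_entrustment` are invented stand-ins for the per-student count of documented entrustment-supervision ratings and the end-of-clerkship self-entrustment level.

```python
# Minimal sketch of a correlation plus simple linear regression, as described
# in the abstract above. All data here are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical values for 83 students, matching the study's sample size only.
n_ratings = rng.integers(1, 15, size=83)                      # ratings per EPA
self_entrustment = 2 + 0.1 * n_ratings + rng.normal(0, 0.8, size=83)

# Correlation between rating volume and end-of-clerkship self-entrustment.
r, p = stats.pearsonr(n_ratings, self_entrustment)
print(f"r = {r:.2f}, p = {p:.4f}")

# Simple linear regression; R^2 is the share of variance explained by the
# number of entrustment ratings (reported as 6.5% for EPA 1 in the study).
res = stats.linregress(n_ratings, self_entrustment)
print(f"R^2 = {res.rvalue**2:.3f}")
```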
Project description: Background: The availability of reliable, valid, and feasible workplace-based assessment (WBA) tools is important to allow faculty to make important and complex judgments about resident competence. The Minicard is a WBA direct observation tool designed to provide formative feedback while supporting critical competency decisions. Objective: The purpose of this study was to collect validity and feasibility evidence for the use of the Minicard for formative assessment of internal medicine residents. Methods: We conducted a retrospective cohort analysis of Minicard observations from 2005-2011 at 1 institution to obtain validity evidence, including content (settings, observation rates, independent raters); response process (rating distributions across the scale and ratings by month in the program); consequences (qualitative assessment of action plans); and feasibility (time to collect observations). Results: Eighty faculty observers recorded 3715 observations of 73 residents in the inpatient ward (43%), clinic (39%), intensive care (15%), and emergency department (3%) settings. Internal medicine residents averaged 28 (SD = 8.4) observations per year from 9 (SD = 4.1) independent observers. Minicards had an average of 5 (SD = 5.1) discrete recorded observations per card. Rating distributions covered the entire rating scale, and ratings increased significantly over time in training. Half of the observations included action plans with action-oriented feedback, 11% had observational feedback, 9% had minimal feedback, and 30% had no recorded plan. Observations averaged 15.6 (SD = 9.5) minutes. Conclusions: Validity evidence for the Minicard direct observation tool demonstrates its ability to facilitate identification of "struggling" residents and to provide feedback, supporting its use for the formative assessment of internal medicine residents.
Project description: Background: Research suggests that workplace-based assessment (WBA) tools using entrustment anchors provide more reliable assessments than those using traditional anchors. There is a lack of evidence describing how and why entrustment anchors work. Objective: The purpose of this study is to better understand the experience of residents and faculty with respect to traditional and entrustment anchors. Methods: We used constructivist grounded theory to guide data collection and analysis (March-December 2017) and semistructured interviews to gather reflections on anchors. Phase 1 involved residents and faculty (n = 12) who had only used assessment tools with traditional anchors. Phase 2 involved participants who had used tools with entrustment anchors (n = 10). Data were analyzed iteratively. Results: Participants expressed that the pragmatic language of entrustment anchors made WBA (1) concrete and justifiable; (2) transparent, as the anchors explicitly link clinical assessment and learning progress; and (3) aligned with training outcomes, enabling better feedback. Participants with no prior experience using entrustment anchors outlined contextual concerns regarding their use; participants with experience described how they addressed these concerns. Participants also expressed that entrustment anchors leave a gap in assessment information because they do not provide normative data. Conclusions: Insights from this analysis contribute to a theoretical framework of benefits and challenges related to the adoption of entrustment anchors. This richer understanding of faculty and resident perspectives on entrustment anchors may assist WBA developers in creating more acceptable tools and inform the faculty development initiatives that must accompany the use of these new WBA tools.
Project description: Background: The Accreditation Council for Graduate Medical Education requires each residency program to have a Program Evaluation Committee (PEC) but does not specify how the PEC should be designed. We sought to develop a PEC that promotes resident leadership and provides actionable feedback. Methods: Participants were residents and faculty in the Traditional Internal Medicine residency program at Yale School of Medicine (YSM). One resident and one faculty member facilitated a 1-hour structured group discussion to obtain resident feedback on each rotation. PEC co-facilitators summarized the feedback in written form, then met with the faculty Firm Chiefs overseeing each rotation and with residency program leadership to discuss the feedback and generate action plans. This PEC process was implemented in all inpatient and outpatient rotations over a 4-year period. At the end of the second and fourth years of the PEC initiative, surveys were sent to faculty Firm Chiefs to assess their perceptions of the utility of the PEC format in comparison with other, more traditional forms of programmatic feedback. PEC residents and faculty were also surveyed about their experiences as PEC participants. Results: The PEC process identified many common themes across inpatient and ambulatory rotations. Positives included a high caliber of teaching by faculty, highly diverse and educational patient care experiences, and a strong emphasis on interdisciplinary care. Areas for improvement included educational curricula on various rotations, interactions between medical and non-medical services, technological issues, and workflow problems. In survey assessments, PEC members viewed the PEC process as a rewarding mentorship experience that provided residents with an opportunity to engage in quality improvement and improve their facilitation skills. Firm Chiefs were more likely to review and make rotation changes in response to PEC feedback than to traditional written resident evaluations, but preferred to receive both forms of feedback rather than either alone. Conclusions: The PEC process at YSM has transformed our program's approach to feedback delivery by engaging residents in the feedback process and providing them with mentored quality improvement and leadership experiences while generating actionable feedback for program-wide change. This has led to PEC groups evaluating additional aspects of residency education.
Project description: Background: The physical examination (PE) skills of residents often show little improvement after medical school. Unfortunately, how residents learn PE is not well understood, and there is a paucity of research on the factors involved and on the differences between resident and faculty perspectives. The authors sought to determine resident and faculty perceptions of the value of PE, the major barriers to learning PE, and the most effective teaching methods. Methods: Based on a rigorous process of literature review and semi-structured interviews, the authors developed an online survey that was sent to 406 internal medicine residents and 93 faculty at 3 institutions. Residents and faculty answered questions about both their own opinions and their perception of the other group's opinions. Results: In total, 283 residents (70%) and 61 faculty (66%) completed the survey. Both residents and faculty rated the importance of PE similarly. Residents rated being too busy, followed by a lack of feedback, as the most significant barriers to learning PE. Faculty rated a lack of feedback, followed by a lack of resident accountability, as the most significant barriers. Both groups rated the availability of abnormal findings as the least significant barrier. Both groups agreed that faculty demonstration at the bedside was the most effective teaching method. Conclusion: This survey can serve as a needs assessment for educational interventions to improve the PE skills of residents by focusing on areas of agreement between residents and faculty, specifically faculty demonstration at the bedside combined with feedback on residents' skills.
Project description: Background: The principle of workplace-based assessment (WBA) is to assess trainees at work, with feedback integrated into the program simultaneously. A student-driven WBA model was introduced, and this teaching method was subsequently evaluated by taking feedback from the faculty as well as the postgraduate trainees (PGs) of a residency program. Methods: A descriptive multimethod study was conducted. A WBA program for PGs in Chemical Pathology was designed on Moodle; the forms utilized were case-based discussion (CBD), direct observation of practical skills (DOPS), and evaluation of clinical events (ECE). Assessors and PGs who consented were trained on WBA through a workshop. A pretest and posttest were conducted to assess PGs' knowledge before and after WBA. Every time a WBA form was filled, the perceptions of PGs and assessors toward WBA, the time taken to conduct a single WBA, and the feedback were recorded. Qualitative feedback on faculty and PGs' perceptions of WBA was gathered via interviews. WBA tool data and qualitative feedback were used to evaluate the acceptability and feasibility of the new tools. Results: Six eligible PGs and seventeen assessors participated in this study. A total of 79 CBDs (assessors n = 7, PGs n = 6), 12 ECEs (assessors n = 6, PGs n = 5), and 20 DOPS (assessors n = 6, PGs n = 6) were documented. The PGs' average score improved from 55.6% on the pretest to 96.4% on the posttest (p < 0.05). Scores on the annual assessment before and after implementation of WBA also showed significant improvement (p = 0.039). The overall mean time taken to evaluate a PG was 12.6 ± 9.9 min, with a mean feedback time of 9.2 ± 7.4 min. Mean satisfaction with the WBA process, on a Likert scale of 1 to 10, was 8 ± 1 for assessors and 8.3 ± 0.8 for PGs. Conclusion: Both assessors and fellows were satisfied with the introduction and implementation of WBA. It gave the fellows the opportunity to interact with assessors more often and learn from their rich experience. The PGs' gain in knowledge was evident from the statistically significant improvement in their assessment scores after WBA implementation.
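The pretest/posttest comparison above is a standard paired design. The abstract does not state which test produced the p-values, so the sketch below simply shows two typical choices for a pre/post comparison with n = 6; the individual scores are hypothetical, chosen only so their means match the reported 55.6% and 96.4%.

```python
# Minimal sketch of a paired pretest/posttest comparison for six trainees.
# Scores are hypothetical; only the group means match the abstract.
from scipy import stats

pretest  = [52, 58, 50, 60, 55, 58]   # hypothetical pretest percentages
posttest = [95, 98, 94, 97, 96, 98]   # hypothetical posttest percentages

# Paired t-test on the within-trainee differences.
t_stat, p_value = stats.ttest_rel(pretest, posttest)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# With n = 6, a non-parametric Wilcoxon signed-rank test is a common
# alternative that avoids the normality assumption.
w_stat, w_p = stats.wilcoxon(pretest, posttest)
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.4f}")
```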
Project description: Background: In July 2013, emergency medicine residency programs implemented the Milestone assessment as part of the Next Accreditation System. Objective: We hypothesized that applying the Milestone framework to real-time feedback in the emergency department (ED) could affect current feedback processes and culture. We describe the development and implementation of a Milestone-based, learner-centered intervention designed to prompt real-time feedback in the ED. Methods: We developed and implemented the Milestones Passport, a feedback intervention incorporating subcompetencies, in our residency program in July 2013. Our primary outcomes were feasibility, including faculty and staff time and costs, the number of documented feedback encounters in the first 2 months of implementation, and user-reported time required to complete the intervention. We also assessed learner and faculty acceptability. Results: Development and implementation of the Milestones Passport required 10 hours of program coordinator time, 120 hours of software developer time, and 20 hours of faculty time. Twenty-eight residents and 34 faculty members generated 257 Milestones Passport feedback encounters. Most residents and faculty reported that the encounters required fewer than 5 minutes to complete, and 48% (12 of 25) of the residents and 68% (19 of 28) of the faculty reported satisfaction with the Milestones Passport intervention. Faculty satisfaction with overall feedback in the ED improved after the intervention (93% versus 54%, P = .003), whereas resident satisfaction with feedback did not change significantly. Conclusions: The Milestones Passport feedback intervention was feasible and acceptable to users; however, learner satisfaction with the Milestone assessment in the ED was modest.
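The before/after comparison of faculty satisfaction (93% versus 54%, P = .003) is a 2x2 proportion comparison. The abstract does not name the test used or give exact denominators, so the sketch below is a hypothetical reconstruction: 28 faculty respondents per period is an assumption chosen to approximate the reported percentages, and Fisher's exact test is one standard choice for counts this small.

```python
# Minimal sketch of a 2x2 before/after proportion comparison.
# Counts are hypothetical reconstructions from the reported percentages.
from scipy import stats

# Rows: after intervention, before intervention; columns: satisfied, not.
table = [[26, 2],    # 26/28 ~ 93% of faculty satisfied after
         [15, 13]]   # 15/28 ~ 54% satisfied before

odds_ratio, p_value = stats.fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```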
Project description: Introduction: Mobile apps that utilize the framework of entrustable professional activities (EPAs) to capture and deliver feedback are being implemented. If EPA apps are to be successfully incorporated into programmatic assessment, a better understanding of how they are experienced by end-users will be necessary. The authors conducted a qualitative study using the Consolidated Framework for Implementation Research (CFIR) to identify enablers of and barriers to engagement with an EPA app. Methods: Structured interviews of faculty and residents were conducted with an interview guide based on the CFIR. Transcripts were independently coded by two study authors using directed content analysis; differences were resolved via consensus. The study team then organized codes into themes relevant to the domains of the CFIR. Results: Eight faculty and 10 residents chose to participate in the study. Both faculty and residents found the app easy to use and effective in facilitating feedback immediately after the observed patient encounter. Faculty appreciated how the EPA app forced brief, distilled feedback. Both faculty and residents expressed positive attitudes and perceived the app as aligned with the department's philosophy. Barriers to engagement included faculty not understanding the EPA framework and scale, competing clinical demands, residents preferring more detailed feedback, and both faculty and residents noting that the app's feedback should be complemented by a tool that generates more systematic, nuanced, and comprehensive feedback. Residents rarely, if ever, returned to the feedback after initial receipt. Discussion: This study identified key enablers of and barriers to engagement with the EPA app. The findings provide guidance for future research and implementation efforts focused on the use of mobile platforms to capture direct observation feedback.
Project description: Burnout, depression, and suicidality among residents of all specialties have become a critical focus for the medical education community, especially among learners in graduate medical education. In 2017, the Accreditation Council for Graduate Medical Education (ACGME) updated the Common Program Requirements to focus more on resident wellbeing. To address this issue, one working group from the 2017 Resident Wellness Consensus Summit (RWCS) focused on wellness program innovations and initiatives in emergency medicine (EM) residency programs. Over a seven-month period leading up to the RWCS event, the Programmatic Initiatives workgroup convened virtually in the Wellness Think Tank, an online resident community consisting of 142 residents from 100 EM residencies in North America. A 15-person subgroup (13 residents, two faculty facilitators) met at the RWCS to develop a public, central repository of initiatives for programs, as well as tools to assist programs in identifying gaps in their overarching wellness programs. An online submission form and a central database of wellness initiatives were created and made accessible to the public; Wellness Think Tank members collected an initial 36 submissions for the database by the time of the RWCS event. Based on general workplace needs-assessment tools for employee wellbeing and Kern's model for curriculum development, a resident-based needs-assessment survey and an implementation worksheet were created to assist residency programs in wellness program development. The Programmatic Initiatives workgroup from the resident-driven RWCS event thus created tools to assist EM residency programs in identifying existing initiatives and gaps in their wellness programs to meet the ACGME's expanded focus on resident wellbeing.
Project description: Background: Residency programs are developing new methods to assess resident competence and to improve the quality of formative assessment and feedback to trainees. Simulation is a valuable tool for giving formative feedback to residents. Objective: To develop an objective structured clinical examination (OSCE) to improve formative assessment of senior pediatrics residents. Methods: We developed a multistation examination using various simulation formats to assess the skills of senior pediatrics residents in communication and acute resuscitation. We measured several logistical factors (staffing and program costs) to determine the feasibility of such a program. Results: Thirty-one residents participated in the assessment program over a 3-month period. Residents received formative feedback comparing their performance both to a standard task checklist and to their peers' performance. The program required 16 faculty members per session and cost $624 per resident. Conclusions: A concentrated assessment program using simulation can be a valuable tool to assess residents' skills in communication and acute resuscitation and to provide directed formative feedback. However, such a program requires considerable financial and staffing resources.