Project description:
Background: Studies from low- and middle-income countries (LMIC) indicate that the use of audio computer-assisted self-interviewing (ACASI) is associated with more accurate reporting of sensitive behaviors (e.g., substance use and sexual risk behaviors) compared with interviewer-administered questionnaires. There is a lack of published information on the process of designing, developing, and implementing ACASI in LMIC. In this paper we describe our experience implementing an ACASI system for use with a population of orphans and vulnerable children in Zambia.
Methods: A questionnaire of mental health, substance use, and HIV risk behaviors was converted into an ACASI system, tested in pilot and validity studies, and implemented for use in a randomized controlled trial. Successes, barriers, and challenges associated with each stage in the development and implementation of ACASI are described.
Results: We were able to convert a lengthy and complex survey into an ACASI system that was feasible for use in Zambia. Lessons learned include the importance of: (1) piloting the written and electronic versions; (2) proper and extensive training for study assessors to use ACASI and for those doing voice recordings; and (3) attention to logistics such as appropriate space, internet, and power.
Conclusions: We found that ACASI was feasible and acceptable in Zambia with proper planning, training, and supervision. Given mounting evidence indicating that ACASI provides more accurate self-report data and immediate data download compared with interviewer-administered measures, it may be an effective and economical alternative for behavioral health research studies in LMIC.
Project description:
Background: The store-and-forward camera-based evaluation of the eye, or teleophthalmology, is an effective way to identify diabetic retinopathy, the leading cause of blindness in the United States, but uptake has been slow. Understanding the barriers to and facilitators of implementing teleophthalmology programs from those actively adopting, running, and sustaining such programs is important for widespread adoption.
Objective: This study aims to understand the factors that are important in introducing teleophthalmology to improve access to diagnostic eye care for patients with diabetes in primary care clinics by using implementation science.
Methods: This qualitative study in 3 urban, low-income, largely racial and ethnic minority-serving safety-net primary care clinics in Rochester, New York, interviewed nurses and physicians on implementing a teleophthalmology program, using questions informed by the Practical, Robust Implementation and Sustainability Model and the Consolidated Framework for Implementation Research.
Results: Primary care nurses operationalizing the program in their clinics saw increased work burden and a lack of self-efficacy as barriers. Continuous training on the teleophthalmology process for nurses, physicians, and administrative staff, through in-service and peer training by champions and superusers, was identified by interviewees as a need. Facilitators included the perceived convenience for the patient and a perceived educational advantage to the program, as it gave providers an opportunity to discuss the importance of eye care with patients. Concerns about making and tracking referrals to ophthalmology because of challenges related to care coordination were highlighted. The financial aspects of the program (e.g., patient coverage and care provider reimbursement) were unclear to many staff members, influencing adoption and sustainability.
Conclusions: Streamlining processes and workflows, training and assigning adequate staff, effectively coordinating care between primary care and eye care to improve follow-ups, and ensuring financial viability can all facilitate the adoption of teleophthalmology.
Project description:
Background: Implementation science aims to accelerate the public health impact of evidence-based interventions. However, implementation science has had too little focus on the role of health policy, and its inseparable politics, polity structures, and policymakers, in the implementation and sustainment of evidence-based healthcare. Policies can serve as determinants, implementation strategies, the evidence-based "thing" to be implemented, or another variable in the causal pathway to healthcare access, quality, and patient outcomes. Research describing the roles of policy in dissemination and implementation (D&I) efforts is needed to resolve persistent knowledge gaps about policymakers' evidence use, how evidence-based policies are implemented and sustained, and methods to de-implement policies that are ineffective or cause harm. Few D&I theories, models, or frameworks (TMF) explicitly guide researchers in conceptualizing where, how, and when policy should be empirically investigated. We conducted and reflected on the results of a scoping review to identify gaps in existing Exploration, Preparation, Implementation, and Sustainment (EPIS) framework-guided policy D&I studies. We argue that rather than creating new TMF, researchers should optimize existing TMF to examine policy's role in D&I. We describe six recommendations to help researchers optimize existing D&I TMF. Recommendations are applied to EPIS as one example for advancing TMF for policy D&I.
Recommendations:
(1) Specify dimensions of a policy's function (policy goals, type, contexts, capital exchanged).
(2) Specify dimensions of a policy's form (origin, structure, dynamism, outcomes).
(3) Identify and define the nonlinear phases of policy D&I across outer and inner contexts.
(4) Describe the temporal roles that stakeholders play in policy D&I over time.
(5) Consider policy-relevant outer and inner context adaptations.
(6) Identify and describe bridging factors necessary for policy D&I success.
Conclusion: Researchers should use TMF to meaningfully conceptualize policy's role in D&I efforts to accelerate the public health impact of evidence-based policies or practices and to de-implement ineffective and harmful policies. Applying these six recommendations to existing D&I TMF advances existing theoretical knowledge, especially EPIS application, rather than introducing new models. Using these recommendations will sensitize researchers to the multifaceted roles policy can play within a causal pathway leading to D&I success.
Project description:
Rationale: The host-pathogen relationship is inherently dynamic and constantly evolving. Applying an implementation science lens to policy evaluation suggests that policy impacts are variable depending upon key implementation outcomes (feasibility, acceptability, appropriateness, costs) and conditions and contexts.
COVID-19 case study: Experiences with non-pharmaceutical interventions (NPIs), including masking, testing, and social distancing/business and school closures, during the COVID-19 pandemic response highlight the importance of considering public health policy impacts through an implementation science lens of constantly evolving contexts, conditions, evidence, and public perceptions. As implementation outcomes (feasibility, acceptability) changed, the effectiveness of these interventions changed, thereby altering public health policy impact. Sustainment of behavioral change may be a key factor determining the duration of effectiveness and ultimate impact of pandemic policy recommendations, particularly for interventions that require ongoing compliance at the level of the individual.
Practical framework for assessing and evaluating pandemic policy: Updating public health policy recommendations as more data and alternative interventions become available is the evidence-based policy approach and is grounded in principles of implementation science and dynamic sustainability. Achieving the ideal of real-time policy updates requires improvements in public health data collection and analysis infrastructure and a shift in public health messaging to incorporate uncertainty and the necessity of ongoing changes. In this review, the Dynamic Infectious Diseases Public Health Response Framework is presented as a model with a practical tool for iteratively incorporating implementation outcomes into public health policy design, with the aim of sustaining benefits and identifying when policies are no longer functioning as intended and need to be adapted or de-implemented.
Conclusions and implications: Real-time decision making requires sensitivity to conditions on the ground and adaptation of interventions at all levels. When asking about the public health effectiveness and impact of non-pharmaceutical interventions, the focus should be on when, how, and for how long they can achieve public health impact. In the future, rather than focusing on models of public health intervention effectiveness that assume static impacts, policy impacts should be considered dynamic, with ongoing re-evaluation as conditions change to meet the ongoing needs of the ultimate end-user of the intervention: the public.
Project description:
Background: Organizational readiness is a key factor for successful implementation of evidence-based interventions (EBIs), but a valid and reliable measure to assess readiness across contexts and settings is needed. The R = MC² heuristic posits that organizational readiness stems from an organization's motivation, its capacity to implement a specific innovation, and its general capacity. This paper describes a process used to examine the face and content validity of items in a readiness survey developed to assess organizational readiness (based on R = MC²) among federally qualified health centers (FQHCs) implementing colorectal cancer screening (CRCS) EBIs.
Methods: We conducted 20 cognitive interviews with FQHC staff (clinical and non-clinical) in South Carolina and Texas. Participants were provided a subset of items from the readiness survey to review. A semi-structured interview guide was developed to elicit feedback from participants using "think aloud" and probing techniques. Participants were recruited using a purposive sampling approach, and interviews were conducted virtually using Zoom and WebEx. Participants were asked (1) about the relevance of items, (2) how they interpreted the meaning of items or specific terms, (3) to identify items that were difficult to understand, and (4) how items could be improved. Interviews were transcribed verbatim and coded in ATLAS.ti. Findings were used to revise the readiness survey.
Results: Key recommendations included reducing the survey length and removing redundant or difficult-to-understand items. Additionally, participants recommended using consistent terms throughout the survey (e.g., other units/teams vs. departments) and changing pronouns (e.g., people, we) to be more specific (e.g., leadership, staff). Moreover, participants recommended specifying ambiguous terms (e.g., defining what "better" means).
Conclusion: Use of cognitive interviews allowed for an engaged process to refine an existing measure of readiness. The improved and finalized readiness survey can be used to support and improve implementation of CRCS EBIs in the clinic setting and thus reduce the cancer burden and cancer-related health disparities.
Project description:
Background: To meet the growing demand for implementation science expertise, building capacity is a priority. Various training opportunities have emerged to meet this need. To ensure rigor and achievement of specific implementation science competencies, it is critical to systematically evaluate training programs.
Methods: The Penn Implementation Science Institute (PennISI) offers 4 days (20 h) of virtual synchronous training on foundational and advanced topics in implementation science. Through a pre-post design, this study evaluated the sixth PennISI, delivered in 2022. Survey measures included 43 implementation science training evaluation competencies grouped into four thematic domains (e.g., items related to implementation science study design grouped into the "design and analysis" competency category), course-specific evaluation criteria, and open-ended questions to evaluate change in knowledge and suggestions for improving future institutes. Mean composite scores were created for each of the competency themes. Descriptive statistics and thematic analysis were completed.
Results: One hundred four (95.41% response rate) and 55 (50.46% response rate) participants completed the pre-survey and post-survey, respectively. Participants included a diverse cohort of individuals primarily affiliated with US-based academic institutions and self-reported as having novice or beginner-level knowledge of implementation science at baseline (81.73%). In the pre-survey, all mean composite scores for implementation science competencies were below one (i.e., beginner-level). Participants reported high value from the PennISI across standard course evaluation criteria (e.g., mean score of 3.77/4.00 for overall quality of course). Scores for all competency domains increased to a score between beginner-level and intermediate-level following training. In both the pre-survey and post-survey, competencies related to "definition, background, and rationale" had the highest mean composite score, whereas competencies related to "design and analysis" received the lowest score. Qualitative themes offered impressions of the PennISI, didactic content, PennISI structure, and suggestions for improvement. Prior experience with or knowledge of implementation science influenced many themes.
Conclusions: This evaluation highlights the strengths of an established implementation science institute, which can serve as a model for brief, virtual training programs. Findings provide insight for improving future program efforts to meet the needs of the heterogeneous implementation science community (e.g., different disciplines and levels of implementation science knowledge). This study contributes to ensuring rigorous implementation science capacity building through the evaluation of programs.
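As a hedged illustration of the composite scoring described above, the sketch below (Python with pandas) shows one way mean composite scores per competency domain could be computed from Likert-style self-ratings; the item names, domain groupings, and example ratings are hypothetical and are not taken from the actual PennISI instrument or data.

    import pandas as pd

    # Hypothetical self-ratings on a 0-4 scale (0 = none, 1 = beginner, 2 = intermediate, ...).
    # Item-to-domain groupings are illustrative only, not the real PennISI competency list.
    responses = pd.DataFrame({
        "participant": [1, 1, 1, 2, 2, 2],
        "item": ["def_01", "def_02", "design_01", "def_01", "def_02", "design_01"],
        "rating": [1, 2, 0, 2, 1, 1],
    })
    item_to_domain = {
        "def_01": "definition_background_rationale",
        "def_02": "definition_background_rationale",
        "design_01": "design_analysis",
    }
    responses["domain"] = responses["item"].map(item_to_domain)

    # Composite score = mean of a domain's items for each participant,
    # then summarized descriptively at the cohort level.
    composites = responses.groupby(["participant", "domain"])["rating"].mean().reset_index()
    print(composites.groupby("domain")["rating"].agg(["mean", "std", "count"]))

The same calculation would be run separately on the pre-survey and post-survey responses to describe the change in each domain.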
Project description: This study evaluated the effectiveness of Project PLUS, a 6-session Motivational Interviewing and Cognitive Behavioral intervention to reduce substance use and improve antiretroviral therapy (ART) adherence among people living with HIV (PLWH). In a quasi-experimental design, 84 participants from a network of three comprehensive care clinics in New York City received the intervention immediately post-baseline (the Immediate condition) and 90 were assigned to a Waitlist control. Viral load and CD4 data were extracted from electronic medical records (EMR) for a No-Intervention comparison cohort (n = 120). Latent growth curve analyses did not show a consistent pattern of significant between-group differences post-intervention or across time in ART adherence or substance use severity between Immediate and Waitlist participants. Additionally, Immediate intervention participants did not differ significantly from the Waitlist or No-Intervention groups on viral load or CD4 post-intervention or across time. The potential to detect intervention effects may have been limited by the use of a quasi-experimental design, the high quality of standard care at these clinics, or an inadequate intervention dose.
Trial Registration: ClinicalTrials.gov (NIH U.S. National Library of Medicine) Identifier: NCT02390908; https://clinicaltrials.gov/ct2/show/NCT02390908.
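As a rough sketch of how group-by-time trajectories such as those described above might be modeled, the code below fits a random-intercept, random-slope growth model with statsmodels on simulated data as a simplified analogue of a latent growth curve analysis; the variable names, group effect sizes, and data are assumptions made up for illustration, not the study's data or its actual model specification.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Hypothetical long-format data: an adherence score (0-100) at three time points
    # for participants in two conditions (Immediate vs. Waitlist).
    rows = []
    for group, slope in (("Immediate", 3.0), ("Waitlist", 1.0)):  # assumed slopes, illustration only
        for pid in range(40):
            base = rng.normal(70, 10)
            for t in (0, 1, 2):  # baseline, post-intervention, follow-up
                rows.append({
                    "pid": f"{group}_{pid}",
                    "group": group,
                    "time": t,
                    "adherence": base + slope * t + rng.normal(0, 5),
                })
    data = pd.DataFrame(rows)

    # Random-intercept, random-slope growth model; the time x group interaction term
    # estimates the between-group difference in trajectories over time.
    model = smf.mixedlm("adherence ~ time * group", data, groups=data["pid"], re_formula="~time")
    print(model.fit().summary())

A dedicated latent growth curve model would typically be fit with structural equation modeling software; the mixed-model form here is only meant to show where a group-by-time effect would appear.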
Project description: In implementation science (IS), conducting well-targeted and reproducible literature searches is challenging due to non-specific and varying terminology that is fragmented over multiple disciplines. A list of journals that publish IS-relevant content, for use in search strings, can support this process. We conducted a cross-sectional online survey of 56 Australian, European, and North American IS experts to identify and prioritize relevant journals that publish IS articles. Journals' relevance was assessed by providing each expert with a list of 12 journals, to which they were encouraged to add additional journal names and comments as free text. We also assessed which journals had published special IS-focused issues, identified via PubMed and Google searches, over the last 20 years. Data were analyzed descriptively. Between February 28 and March 15, 2020, a purposive sample of 34/56 experts participated in the survey (response rate: 60.7%). Implementation Science and BMC Health Services Research were perceived as relevant by 97.1% of participants; other journals' relevance varied internationally. Experts proposed 50 additional journals from various clinical fields and health science disciplines. We identified 12 calls and 53 special issues on IS published within various journals and research fields. Experts' comments confirmed the described challenges in identifying IS literature. This report presents experts' ratings of IS journals, which can be included in strategies supporting searches for IS evidence. However, challenges in identifying IS evidence remain across geographic regions and disciplines. Further investment is needed to develop reproducible search strings to capture IS evidence as an important step in improving IS research quality.
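As a purely hypothetical illustration of how a prioritized journal list could feed into a search string, the snippet below assembles a PubMed-style query from journal names; the two journals are those named above, but the topic filter terms and overall query structure are assumptions, not a validated IS search strategy.

    # Assemble a PubMed-style query from a journal list (illustrative only).
    journals = ["Implementation Science", "BMC Health Services Research"]
    topic_terms = ["implementation", "dissemination"]  # hypothetical topic filter

    journal_clause = " OR ".join(f'"{j}"[Journal]' for j in journals)
    topic_clause = " OR ".join(f"{t}[Title/Abstract]" for t in topic_terms)
    query = f"({journal_clause}) AND ({topic_clause})"
    print(query)

In practice such a clause would need to be combined with the field-specific terminology and reproducible filters that, as noted above, still require further development.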
Project description: Gaps in the implementation of effective interventions impact nearly all cancer prevention and control strategies in the US, including Massachusetts. To close these implementation gaps, evidence-based interventions must be rapidly and equitably implemented in settings serving racially, ethnically, socioeconomically, and geographically diverse populations. This paper provides a brief overview of The Implementation Science Center for Cancer Control Equity (ISCCCE) and describes how we have operationalized our commitment to a robust community-engaged center that aims to close these gaps. We describe how ISCCCE is organized and how the principles of community-engaged research are embedded across the center. Principles of community engagement have been operationalized across all components of ISCCCE. We have intentionally integrated these principles throughout all structures and processes and have developed evaluation strategies to assess whether the quality of our partnerships reflects the principles. ISCCCE is a comprehensive community-engaged infrastructure for studying efficient, pragmatic, and equity-focused implementation and adaptation strategies for cancer prevention in historically and currently disadvantaged communities, with built-in methods to evaluate the quality of community engagement. This engaged research center is designed to maximize the impact and relevance of implementation research on cancer control in community health centers.