Project description:Implementation science scholars argue that knowing 'what works' in public health is insufficient to change practices without understanding 'how', 'where' and 'why' something works. In the peer-reviewed literature on conflict-affected settings, challenges in producing research, making evidence-informed decisions and delivering services are documented, but what about the understanding of 'how', 'where' and 'why' changes occur? We explored these questions through a scoping review of peer-reviewed literature based on core dimensions of the Extended Normalization Process Theory. We selected papers that provided data on how something might work (who is involved and how?), where (in what organizational arrangements or contexts?) and why (what was done?). We searched the Global Health, Medline and Embase databases. We screened 2054 abstracts and 128 full texts. We included 22 papers (of which 15 related to mental health interventions) and analysed them thematically. The results were critically reviewed by co-authors experienced in operational research in conflict-affected settings. Using an implementation science lens, we found that: (a) implementing actors are often engaged after research is produced to discuss feasibility; (b) new interventions or delivery modalities need to be flexible; (c) disruptions affect how research findings can lead to sustained practices; (d) strong leadership and stable resources are crucial for frontline actors; (e) creating a safe learning space to discuss challenges is difficult; and (f) feasibility in such settings needs to be balanced. Lastly, communities and frontline actors need to be engaged as early as possible in the research process. We used our findings to adapt the Extended Normalization Process Theory for operational research in conflict-affected settings. Other theories used by researchers to document implementation processes need to be studied further.
Project description:Study Design: Scoping review. Objective: To study the design, clinical setting and outcome measures used in spinal cord injury (SCI) rehabilitation publications. Methods: A literature search of PubMed and Medline was conducted, focusing on articles published between 1990 and 2016 and using "traumatic SCI", "functional outcomes", "rehabilitation", "work" and "return to work" as search terms. Studies were categorized by design (intervention studies, including RCTs, vs. non-intervention studies), setting (inpatient vs. outpatient vs. transition), and outcome measures used (impairment vs. function vs. participation/integration vs. quality of life vs. symptoms). Work-related studies were categorized independently. Results: Five hundred forty-four articles met the inclusion criteria. Of these, 234 were interventional studies, including 23 RCTs. Studies were evenly divided among inpatient, outpatient and transition settings. Of the 234 interventional studies, 143 used functional evaluations. Sixty-one different functional instruments were used, with predominant use of the Functional Independence Measure (61 times) and additional use of SCI-specific measures, i.e. the Spinal Cord Independence Measure and the Craig Handicap Assessment and Reporting Technique (13 times each). Fifty-one studies measured mobility, while only three measured hand function. The work-related sub-analysis revealed 32 intervention studies (no RCTs), of which 15 used functional evaluations and only three focused on tetraplegia. Conclusion: Our study revealed a paucity of intervention trials and RCTs, indicating a dearth of the knowledge needed to establish evidence-based practice guidelines. This is particularly true for tetraplegia. While standard measures of function were frequently used, providing valuable data, there is no consensus about which outcome measure to use. The use of newer measurement techniques, for instance those based on item response theory, should be considered to enhance uniformity.
Project description:Background: ClinicalTrials.gov requires reporting of result summaries for many drug and device trials. Objective: To evaluate the consistency of reporting of trials that are registered in the ClinicalTrials.gov results database and published in the literature. Data Sources: ClinicalTrials.gov results database and matched publications identified through ClinicalTrials.gov and a manual search of 2 electronic databases. Sample: 10% random sample of phase 3 or 4 trials with results in the ClinicalTrials.gov results database, completed before 1 January 2009, with 2 or more groups. Data Extraction: One reviewer extracted data about trial design and results from the results database and matching publications. A subsample was independently verified. Results: Of 110 trials with results, most were industry-sponsored, parallel-design drug studies. The most common inconsistency was the number of secondary outcome measures reported (80%). Sixteen trials (15%) reported the primary outcome description inconsistently, and 22 (20%) reported the primary outcome value inconsistently. Thirty-eight trials inconsistently reported the number of individuals with a serious adverse event (SAE); of these, 33 (87%) reported more SAEs in ClinicalTrials.gov. Among the 84 trials that reported SAEs in ClinicalTrials.gov, 11 publications did not mention SAEs, 5 reported them as zero or not occurring, and 21 reported a different number of SAEs. Among 29 trials that reported deaths in ClinicalTrials.gov, 28% differed from the matched publication. Limitation: Small sample that included the earliest results posted to the database. Conclusion: Reporting discrepancies between the ClinicalTrials.gov results database and matching publications are common. Which source contains the more accurate account of results is unclear, although ClinicalTrials.gov may provide a more comprehensive description of adverse events than the publication. Primary Funding Source: Agency for Healthcare Research and Quality.
Project description:Importance: Clinical trial registries are intended to increase clinical research transparency by nonselectively identifying and documenting clinical trial designs and outcomes. Inconsistencies in reported data undermine the utility of such registries and have previously been noted in general medical literature. Objective: To assess whether inconsistencies in reported data exist between ophthalmic literature and clinical trial registries. Design, Setting, and Participants: In this retrospective, cross-sectional study, interventional clinical trials published from January 1, 2014, to December 31, 2014, in the American Journal of Ophthalmology, JAMA Ophthalmology, and Ophthalmology were reviewed. Observational, retrospective, uncontrolled, and post hoc reports were excluded, yielding a sample size of 106 articles. Data collection was performed from January through September 2016. Data review and adjudication continued through January 2017. Main Outcomes and Measures: If possible, articles were matched to registry entries listed in the ClinicalTrials.gov database or in 1 of 16 international registries indexed by the World Health Organization International Clinical Trials Registry Platform version 3.2 search engine. Each article-registry pair was assessed for inconsistencies in design, results, and funding (each of which was further divided into subcategories) by 2 reviewers and adjudicated by a third. Results: Of 106 trials that met the study criteria, matching registry entries were found for 68 (64.2%), whereas no matching registry entries were found for 38 (35.8%). Inconsistencies were identified in study design, study results, and funding sources, including specific interventions in 8 (11.8%), primary outcome measure (POM) designs in 32 (47.1%), and POM results in 48 (70.6%). In addition, numerous data pieces were unreported, including analysis methods in 52 (76.5%) and POM results in 38 (55.9%). Conclusions and Relevance: Clinical trial registries were underused in this sample of ophthalmology clinical trials. For studies with registry data, inconsistency rates between published and registered data were similar to those previously reported for general medical literature. In most cases, inconsistencies involved missing data, but explicit discrepancies in methods and/or data were also found. Transparency and credibility of published trials may be improved by closer attention to their registration and reporting.
Project description:Background: Health systems resilience (HSR) research is a rapidly expanding field, in which key concepts are discussed and theoretical frameworks are emerging with vibrant debate. Fragile and conflict-affected settings (FCAS) are contexts exposed to compounding stressors, for which resilience is an important characteristic. However, only limited evidence has been generated in such settings. We conducted a scoping review to: (a) identify the conceptual frameworks of HSR used in the analysis of shocks and stressors in FCAS; (b) describe the representation of different actors involved in health care governance and service provision in these settings; and (c) identify health systems operations as they relate to absorption, adaptation, and transformation in FCAS. Methods: We used standard, extensive search methods. The search captured studies published between 2006 and January 2022. We included all peer-reviewed and grey literature that adopted a HSR lens in the analysis of health responses to crises. Thematic analysis using both inductive and deductive approaches was conducted, adopting frameworks related to the resilience characteristics identified by Kruk et al. and the resilience capacities described by Blanchet et al. Results: Thirty-seven studies met our inclusion criteria. The governance-centred, capacity-oriented framework for HSR emerged as the most frequently used lens of analysis to describe the health responses to conflict and chronic violence specifically. Most studies focused on public health systems' resilience analysis, while the private health sector is only examined in complementarity with the former. Communities are minimally represented, despite their widely acknowledged role in supporting HSR. The documentation of operations enacting HSR in FCAS is focused on absorption and adaptation, while transformation is seldom described. Absorptive, adaptive, and transformative interventions are described across seven different domains: safety and security, society, health system governance, stocks and supplies, built environment, health care workforce, and health care services. Conclusions: Our review findings suggest that the governance-centred framework can be useful to better understand HSR in FCAS. Future HSR research should document adaptive and transformative strategies that advance HSR, particularly in relation to actions intended to promote the safety and security of health systems, the built environment for health, and the adoption of a social justice lens.
Project description:This article discusses the open-identity label, i.e., the practice of disclosing reviewers' names in published scholarly books, which is common in Central and Eastern European countries. This study's objective is to verify whether the open-identity label is a type of peer-review label (like those used in Finland and Flanders, i.e., the Flemish part of Belgium), and as such, whether it can be used as a delineation criterion in various systems used to evaluate scholarly publications. We conducted a two-phase sequential explanatory study. In the first phase, interviews with 20 of the 40 largest Polish publishers of scholarly books were conducted to investigate how Polish publishers control peer reviews and whether the open-identity label can be used to identify peer-reviewed books. In the second phase, two questionnaires were used to analyze perceptions of peer review and open-identity labelling among authors (n = 600) and reviewers (n = 875) of books published by these 20 publishers. Integrated results allowed us to verify publishers' claims concerning their peer-review practices. Our findings reveal that publishers actually control peer reviews by providing assessment criteria to reviewers and sending reviews to authors. Publishers rarely ask for permission to disclose reviewers' names, but it is obvious to reviewers that this disclosure is part of peer reviewing. This study also shows that only the names of reviewers who accepted manuscripts for publication are disclosed. Thus, most importantly, our analysis shows that the open-identity label that Polish publishers use is a type of peer-review label like those used in Flanders and Finland, and as such, it can be used to identify peer-reviewed scholarly books.
Project description:Children and young people are disproportionately vulnerable to harm during crises, yet child public health expertise is limited in humanitarian settings and outcome and impact data are lacking. This review characterises child public health indicators that are routinely collected, required by donors, and recommended for use in fragile, conflict-affected, and vulnerable (FCV) settings. We conducted database and grey literature searches and collected indicators from technical agencies, partnerships, donors, and nongovernmental organisations providing child public health services in FCV settings. Indicators were included if they were child-specific or disaggregated for ages ≤18 years. Indicators were coded into domains of health status, health service, social determinants, and health behaviours and analysed for trends in thematic focus and clarity. A total of 668 indicators were included. Routinely collected indicators (N = 152) focused on health status and health services. Donors required only 14 indicators. Technical bodies and academics recommended 502 indicators for routine measurement. Prioritised topics included nutrition, paediatrics, infectious diseases, mortality, and maternal-newborn care. There were notable gaps in indicators for child development and disability. Child protection indicators were not routinely collected, despite being the focus of 39% of recommended indicators. There were overlaps and duplications, age disaggregations varied, and 49% of indicators required interpretation before they could be measured. The review demonstrates that it is feasible to routinely measure child public health outcomes in FCV settings. Recommendations from technical agencies and partnerships are characterised by numerous indicators with duplication, poor definitions, and a siloed, sector-specific focus. There are gaps in the measurement of critical child public health topics. To improve the safety and effectiveness of interventions for child public health, consensus is needed on priority topics and on a shortlist of high-quality, standardised indicators that governmental and nongovernmental actors can reasonably be expected to measure. Indicators should be prioritised to support decision-making and should include proxy indicators for periods when routine measurement is hampered.
Project description:Objective: We undertook this investigation to characterize conflict of interest (COI) policies of biomedical journals with respect to authors, peer-reviewers, and editors, and to ascertain what information about COI disclosures is publicly available. Methods: We performed a cross-sectional survey of a convenience sample of 135 editors of peer-reviewed biomedical journals that publish original research. We chose an international selection of general and specialty medical journals that publish in English. Selection was based on journal impact factor and the recommendations of experts in the field. We developed and pilot tested a 3-part web-based survey. The survey included questions about the presence of specific policies for authors, peer-reviewers, and editors, specific restrictions on authors, peer-reviewers, and editors based on COI, and the public availability of these disclosures. Editors were contacted a minimum of 3 times. Results: The response rate for the survey was 91 (67%) of 135, and 85 (93%) of 91 journals reported having an author COI policy. Ten (11%) journals reported that they restrict author submissions based on COI (e.g., drug company authors' papers on their products are not accepted). While 77% reported collecting COI information on all author submissions, only 57% published all author disclosures. A minority of journals reported having a specific policy on peer-reviewer COI (46%; 42/91) or editor COI (40%; 36/91); among these, 25% and 31% of journals, respectively, stated that they require recusal of peer-reviewers and editors who report a COI. Only 3% of respondents publish COI disclosures of peer-reviewers, and 12% publish editor COI disclosures, while 11% and 24%, respectively, reported that this information is available upon request. Conclusion: Many more journals have a policy regarding COI for authors than for peer-reviewers or editors. Even author COI policies are variable, depending on the type of manuscript submitted. The COI information that is collected by journals is often not published; the extent to which such "secret disclosure" may impact the integrity of the journal or the published work is not known.
Project description:Objectives: The aim of this study was to gain a deeper understanding of the objectives and outcomes of patient-driven innovations that have been published in the scientific literature, focusing on (A) the unmet needs that patient-driven innovations address and (B) the outcomes for patients and healthcare that have been reported. Methods: We performed an inductive qualitative content analysis of scientific publications that were included in a scoping review of patient-driven innovations, previously published by our research group. The review was limited to English-language publications in peer-reviewed journals, published in the years 2008-2020. Results: In total, 83 publications covering 21 patient-driven innovations were included in the analysis. Most of the innovations were developed for use on an individual or community level without healthcare involvement. We created three categories of unmet needs that were addressed by these innovations: access to self-care support tools, open sharing of information and knowledge, and patient agency in self-care and healthcare decisions. Eighteen (22%) publications reported outcomes of patient-driven innovations. We created two categories of outcomes: impact on self-care, and impact on peer interaction and healthcare collaboration. Conclusions: The patient-driven innovations illustrated a diversity of innovative approaches to facilitate patients' and informal caregivers' daily lives, interactions with peers and collaborations with healthcare. As our findings indicate, patients and informal caregivers are central stakeholders in driving healthcare development and research forward to meet the needs that matter to them. However, only a few studies reported on outcomes of patient-driven innovations. To support wider implementation, more evaluation studies are needed, as well as research into regulatory approval processes, dissemination and governance of patient-driven innovations.
Project description:Background: It is uncertain what could be the best training methods for infection prevention and control (IPC) when an infectious disease threat is active or imminent in especially vulnerable or resource-scarce settings. Methods: A scoping review was undertaken to find and summarise relevant information about training modalities, replicability and effectiveness of IPC training programmes for clinical staff, as reported in multiple study designs. Eligible settings were conflict-affected or in countries classified as low-income or lower-middle income (World Bank 2022 classifications). Search terms for LILACS and Scopus were developed with input from an expert working group. Initially identified articles were independently dual-screened, and data were extracted, especially about infection threat, training outcomes, needs assessment and teaching modalities. Backward and forward citation searches were done to find additional studies. A narrative summary describes outcomes and aspects of the training programmes. A customised quality assessment tool was developed to describe whether each study could be informative for developing specific future training programmes in relevant vulnerable settings, based on six questions about replicability and eight questions about other biases. Findings: Twenty-nine studies were included; almost all (n = 27) used a pre-post design, and two were trials. The information within the included studies to enable replicability was low (average score 3.7/6). Nearly all studies reported significant improvement in outcomes, suggesting that the predominant study design (pre-post) is inadequate to assess improvement with low bias, that any and all such training is beneficial, or that publication bias prevented reporting of less successful interventions and thus an informative overview. Conclusion: It seems likely that many possible training formats and methods can lead to improved worker knowledge, skills and/or practice in infection prevention and control. Definitive evidence in favour of any specific training format or method is hard to demonstrate due to incomplete descriptions, lack of documentation about unsuccessful training, and few least-biased study designs (experimental trials). Our results suggest that there is a significant opportunity to design experiments that could give insights in favour of or against specific training methods. "Sleeping" protocols for randomised controlled trials could be developed and then applied quickly when relevant future events arise, with evaluation of outcomes such as knowledge, practices, skills, confidence, and awareness.