Project description: This is the protocol for a Campbell review. The aim of this study is to comprehensively assess the quality and nature of the search methods and reporting across Campbell systematic reviews. The search methods used in systematic reviews provide the foundation for establishing the body of literature from which conclusions are drawn and recommendations made. Searches should be comprehensive, and the reporting of search methods should be transparent and reproducible. Campbell Collaboration systematic reviews strive to adhere to the best methodological guidance available for this type of searching. The current work aims to provide a comprehensive assessment of the quality of the search methods and reporting in Campbell Collaboration systematic reviews. Our specific objectives are: (1) to examine how searches are currently conducted in Campbell systematic reviews; (2) to identify any machine learning or automation methods used, as well as emerging and less commonly used approaches to web searching; and (3) to examine how search strategies, search methods, and search reporting adhere to the Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) and the PRISMA guidelines. The findings will be used to identify opportunities for advancing current practices in Campbell reviews through updated guidance, peer review processes, and author training and support.
Project description: This is the protocol for a Campbell systematic review. The objectives are as follows: to identify methods used to assess the risk of outcome reporting bias (ORB) in studies included in recent Campbell systematic reviews of intervention effects. The review will answer the following questions: What proportion of recent Campbell reviews included an assessment of ORB? How did recent reviews define levels of risk of ORB (what categories, labels, and definitions did they use)? To what extent and how did these reviews use study protocols as sources of data on ORB? To what extent and how did reviews document reasons for judgments about risk of ORB? To what extent and how did reviews assess the inter-rater reliability of ORB ratings? To what extent and how were issues of ORB considered in the review's abstract, plain language summary, and conclusions?
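One of the questions above concerns the inter-rater reliability of ORB ratings. A common statistic for agreement between two raters on categorical judgments is Cohen's kappa; the following is a minimal pure-Python sketch, where the risk labels and the sample ratings are illustrative only, not drawn from any Campbell review:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels
    (e.g. 'low' / 'high' / 'unclear' risk of ORB) to the same studies.
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of studies where the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ORB risk ratings for six studies by two reviewers.
a = ["low", "low", "high", "unclear", "low", "high"]
b = ["low", "high", "high", "unclear", "low", "low"]
print(round(cohens_kappa(a, b), 3))  # → 0.455
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement when rating categories are unbalanced.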
Project description: Objective: A critical element in conducting a systematic review is the identification of studies. To date, very little empirical evidence has been reported on whether the presence of a librarian or information professional contributes to the quality of the final product. The goal of this study was to compare the reporting rigor of the literature-searching component of systematic reviews conducted with and without the help of a librarian. Method: Systematic reviews published from 2002 to 2011 in the twenty highest-impact-factor pediatrics journals were collected from MEDLINE. Corresponding authors were contacted via an email survey to determine whether a librarian was involved, the role that the librarian played, and the functions that the librarian performed. The reviews were scored independently by two reviewers using a fifteen-item checklist. Results: There were 186 reviews that met the inclusion criteria, and 44% of the authors indicated the involvement of a librarian in conducting the systematic review. With a librarian as coauthor or team member, the mean checklist score was 8.40, compared with 6.61 (p < 0.001) for reviews without a librarian. Conclusions: The findings indicate that having a librarian as a coauthor or team member correlates with a higher score on the literature-searching component of systematic reviews.
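The scoring approach described in the Method above — summing the checklist items each review reports and comparing group means — can be sketched roughly as follows. The item names and the sample data are hypothetical stand-ins; the study's actual fifteen-item checklist is not reproduced here:

```python
# Illustrative checklist items only (the real instrument had fifteen).
CHECKLIST = ["databases_named", "full_strategy_reported", "search_dates_given"]

def checklist_score(review):
    """Score one review as the number of checklist items it reports."""
    return sum(bool(review.get(item)) for item in CHECKLIST)

def group_means(reviews):
    """Mean checklist score per group, keyed on librarian involvement."""
    groups = {True: [], False: []}
    for r in reviews:
        groups[r["librarian"]].append(checklist_score(r))
    return {k: sum(v) / len(v) for k, v in groups.items() if v}

# Hypothetical sample: two reviews with a librarian, one without.
reviews = [
    {"librarian": True, "databases_named": True, "full_strategy_reported": True},
    {"librarian": True, "databases_named": True, "search_dates_given": True},
    {"librarian": False, "databases_named": True},
]
print(group_means(reviews))  # → {True: 2.0, False: 1.0}
```

A real analysis would also test whether the difference in group means is statistically significant (the study reports p < 0.001), which this sketch omits.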
Project description: BACKGROUND: A recent study by Page et al. (PLoS Med. 2016;13(5):e1002028) claimed that increasing numbers of reviews are being published and that many are poorly conducted and reported. The aim of the present study was to assess how the reporting standards of systematic reviews produced in a Health Technology Assessment (HTA) context compare with reporting in Cochrane and other 'non-Cochrane' systematic reviews from the same years (2004 and 2014), as reported by Page et al. METHODS: All relevant UK HTA programme systematic reviews published in 2004 and 2014 were identified. After piloting of the form, two reviewers each extracted relevant data on conduct and reporting from these reviews. These data were compared with the data for Cochrane and non-Cochrane systematic reviews published by Page et al. All data were tabulated and summarized. RESULTS: There were 30 UK HTA programme systematic reviews and 300 other systematic reviews, including Cochrane reviews (n = 45). The percentage of HTA reviews with required elements of conduct and reporting was frequently very similar to Cochrane reviews and much higher than all other systematic reviews, e.g. availability of protocols (90%, 98%, and 16%, respectively); specification of study design criteria (100%, 100%, 79%); reporting of outcomes (100%, 100%, 78%); quality assessment (100%, 100%, 70%); searching of trial registries for unpublished data (70%, 62%, 19%); reporting of reasons for excluding studies (91%, 91%, 70%); and reporting of authors' conflicts of interest (100%, 100%, 87%). HTA reviews compared less favourably with Cochrane and other reviews only in assessments of publication bias. CONCLUSIONS: UK HTA systematic reviews are often produced within a specific policy-making context. This context has implications for timelines, tools, and resources.
However, UK HTA systematic reviews still tend to present standards of conduct and reporting equivalent to "gold standard" Cochrane reviews and superior to systematic reviews more generally.
Project description: Objectives: To assess the methods and reporting of systematic reviews of diagnostic tests. Data sources: Systematic searches of Medline, Embase, and five other databases identified reviews of tests used in patients with cancer. Of these, 89 satisfied our inclusion criteria: reporting the accuracy of the test compared with a reference test, including an electronic search, and publication since 1990. Review methods: All reviews were assessed for methods and reporting of objectives, search strategy, participants, clinical setting, index and reference tests, study design, study results, graphs, meta-analysis, quality, bias, and procedures in the review. We assessed 25 randomly selected reviews in more detail. Results: 75% (67) of the reviews stated inclusion criteria, 49% (44) tabulated characteristics of included studies, 40% (36) reported details of study design, 17% (15) reported on the clinical setting, 17% (15) reported on the severity of disease in participants, and 49% (44) reported on whether the tumours were primary, metastatic, or recurrent. Of the 25 reviews assessed in detail, 68% (17) stated the reference standard used in the review, 36% (9) reported the definition of a positive result for the index test, and 56% (14) reported sensitivity, specificity, and sample sizes for individual studies. Of the 89 reviews, 61% (54) attempted to formally synthesise results of the studies, and 32% (29) reported formal assessments of study quality. Conclusions: The reliability and relevance of current systematic reviews of diagnostic tests are compromised by poor reporting and review methods.
Project description: Background: Many systematic reviews (SRs) have been published on the various treatments for distal radius fractures (DRF). The heterogeneity of SR results may come from the misuse of SR methods, and literature overviews have demonstrated that SRs should be considered with caution, as they may not always be synonymous with high-quality standards. Our objective was to evaluate the quality of published SRs on the treatment of DRF using the AMSTAR and PRISMA tools. Methods: The methods utilized in this review were previously published in the PROSPERO database. We considered SRs of surgical and nonsurgical interventions for acute DRF in adults. A comprehensive search strategy was performed in the MEDLINE database (inception to May 2017), and we manually searched the grey literature for non-indexed research. Data were independently extracted by two authors. We assessed SR internal validity and reporting using AMSTAR (Assessing the Methodological Quality of Systematic Reviews) and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). Scores were calculated as the sum of reported items. We also extracted article characteristics and provided Spearman's correlation measurements. Results: Forty-one articles fulfilled the eligibility criteria. The mean PRISMA score was 15.90 (95% CI 13.9-17.89) and the mean AMSTAR score was 6.48 (95% CI 5.72-7.23). SRs that considered only RCTs had better AMSTAR [7.56 (2.1) vs. 5.62 (2.3); p = 0.014] and PRISMA scores [18.61 (5.22) vs. 13.93 (6.47); p = 0.027]. The presence of a meta-analysis in the SRs altered PRISMA scores [19.17 (4.75) vs. 10.21 (4.51); p = 0.001] and AMSTAR scores [7.68 (1.9) vs. 4.39 (1.66); p = 0.001]. Journal impact factor and declaration of conflicts of interest did not change PRISMA and AMSTAR scores.
We found substantial inter-observer agreement for PRISMA (0.82, 95% CI 0.62-0.94; p = 0.01) and AMSTAR (0.65, 95% CI 0.43-0.81; p = 0.01), and moderate correlation between PRISMA and AMSTAR scores (0.83, 95% CI 0.62-0.92; p = 0.01). Conclusions: SRs of DRF that include only RCTs have better PRISMA and AMSTAR scores. These tools show substantial inter-observer agreement and moderate inter-tool correlation. We describe the current research panorama and point out some factors that can contribute to improvements on the topic.
Project description: This review of reviews aimed to evaluate the reporting quality of published systematic reviews and meta-analyses in the field of sports physical therapy using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The review included a literature search in which 2047 studies published between January 2015 and December 2020 in the top three journals related to sports physical therapy were screened. Among the 125 identified articles, 47 studies on sports physical therapy were included in the analysis (2 systematic reviews and 45 meta-analyses). There were several problem areas, including a lack of reporting of key components of the structured summary (10/47, 21.3%), protocol and registration (18/47, 38.3%), risk of bias in individual studies (28/47, 59.6%), risk of bias across studies (24/47, 51.1%), effect size and variance calculations (5/47, 10.6%), additional analyses (25/47, 53.2%), and funding (10/47, 21.3%). The quality of reporting of systematic reviews and meta-analyses in sports physical therapy was low to moderate. For better evidence-based practice in sports physical therapy, both authors and readers should examine assumptions in more detail and report valid and adequate results. The PRISMA guideline should be used more extensively to improve reporting practices in sports physical therapy.
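The per-item figures above (e.g. "10/47, 21.3%") are simple proportions of the 47 included articles that failed to report a given PRISMA item. Such a tally might be computed along the following lines; the item names and data here are hypothetical:

```python
def reporting_tally(reviews, items):
    """For each checklist item, count how many reviews failed to report
    it and format the result as 'n/N (pct%)', as in the abstract above."""
    n_total = len(reviews)
    out = {}
    for item in items:
        n_missing = sum(1 for r in reviews if item not in r["reported_items"])
        out[item] = f"{n_missing}/{n_total} ({100 * n_missing / n_total:.1f}%)"
    return out

# Hypothetical sample of three reviews and the items each one reported.
reviews = [
    {"reported_items": {"funding", "protocol"}},
    {"reported_items": {"protocol"}},
    {"reported_items": set()},
]
print(reporting_tally(reviews, ["funding", "protocol"]))
# → {'funding': '2/3 (66.7%)', 'protocol': '1/3 (33.3%)'}
```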
Project description: Background and objective: The living systematic review (LSR) approach is based on ongoing surveillance of the literature and continual updating. Most currently available guidance documents address the conduct, reporting, publishing, and appraisal of systematic reviews (SRs), but are not suitable for LSRs per se and miss additional LSR-specific considerations. In this scoping review, we aim to systematically collate methodological guidance literature on how to conduct, report, publish, and appraise the quality of LSRs, and to identify current gaps in guidance. Methods: A standard scoping review methodology was used. We searched MEDLINE (Ovid), EMBASE (Ovid), and The Cochrane Library on August 28, 2021. For gray literature, we looked for existing guidelines and handbooks on LSRs from organizations that conduct evidence syntheses. Screening was conducted by two authors independently in Rayyan, and data extraction was done in duplicate using a pilot-tested data extraction form in Excel. Data were extracted according to four pre-defined categories for (i) conducting, (ii) reporting, (iii) publishing, and (iv) appraising LSRs. We mapped the findings by visualizing overview tables created in Microsoft Word. Results: Of the 21 included papers, methodological guidance was found in 17 papers for conducting, in six papers for reporting, in 15 papers for publishing, and in two papers for appraising LSRs. Some of the identified key items for (i) conducting LSRs were identifying the rationale, screening tools, or re-evaluating inclusion criteria. Identified items for (ii) reporting from the original PRISMA checklist included the registration and protocol, title, or synthesis methods. For (iii) publishing, there was guidance available on publication type and frequency or update trigger, and for (iv) appraising, guidance on the appropriate use of bias assessment or reporting funding of included studies was found.
Our search revealed major evidence gaps, particularly in guidance on certain PRISMA items, such as reporting results, discussion, support and funding, and availability of data and material of an LSR. Conclusion: Important evidence gaps were identified in guidance on how to report LSRs and appraise their quality. Our findings were applied to inform and prepare a PRISMA 2020 extension for LSRs.