Accuracy in detecting inadequate research reporting by early career peer reviewers using an online CONSORT-based peer-review tool (COBPeer) versus the usual peer-review process: a cross-sectional diagnostic study.
ABSTRACT: BACKGROUND:The peer-review process has been questioned, as it may fail to ensure the publication of high-quality articles. This study aimed to evaluate the accuracy of early career researchers (ECRs) using an online CONSORT-based peer-review tool (COBPeer), versus the usual peer-review process, in identifying inadequate reporting in randomized controlled trial (RCT) reports. METHODS:We performed a cross-sectional diagnostic study of 119 manuscripts, from BMC series medical journals, BMJ, BMJ Open, and Annals of Emergency Medicine, reporting the results of two-arm parallel-group RCTs. One hundred and nineteen ECRs who had never reviewed an RCT manuscript were recruited from December 2017 to January 2018. Each ECR assessed one manuscript. To assess accuracy in identifying inadequate reporting, we used two tests: (1) ECRs assessing a manuscript using the COBPeer tool (after completing an online training module) and (2) the usual peer-review process. The reference standard was the assessment of the manuscript by two systematic reviewers. Inadequate reporting was defined as incomplete reporting or a switch in primary outcome and considered nine domains: the eight most important CONSORT domains and a switch in primary outcome(s). The primary outcome was the mean number of domains accurately classified (scale from 0 to 9). RESULTS:The mean (SD) number of domains (0 to 9) accurately classified per manuscript was 6.39 (1.49) for ECRs using COBPeer versus 5.03 (1.84) for the journal's usual peer-review process, with a mean difference [95% CI] of 1.36 [0.88-1.84] (p < 0.001).
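To make the primary-outcome computation concrete, the sketch below is a minimal illustration, not the study's analysis code: it simulates 0/1 domain classifications for 119 manuscripts against a reference standard, scores each manuscript from 0 to 9, and computes the paired mean difference with a 95% confidence interval. All data, and the 0.70/0.55 agreement rates, are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_manuscripts, n_domains = 119, 9

# Hypothetical 0/1 classifications: 1 = "domain judged adequately reported".
reference = rng.integers(0, 2, (n_manuscripts, n_domains))
cobpeer = np.where(rng.random((n_manuscripts, n_domains)) < 0.70, reference, 1 - reference)
usual = np.where(rng.random((n_manuscripts, n_domains)) < 0.55, reference, 1 - reference)

# Primary outcome: domains accurately classified per manuscript (0-9 scale).
score_cobpeer = (cobpeer == reference).sum(axis=1)
score_usual = (usual == reference).sum(axis=1)

# Each manuscript is assessed by both tests, so the comparison is paired.
diff = score_cobpeer - score_usual
ci = stats.t.interval(0.95, len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff))
print(f"mean difference {diff.mean():.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```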
Project description:Systematic reviews evaluating interventions to improve the quality of peer review for biomedical publications have highlighted that existing interventions are few and have little impact. This study aims to compare the accuracy of early career peer reviewers who use an innovative online tool with that of the usual peer-review process in evaluating the completeness of reporting and switched primary outcomes in completed reports. This is a cross-sectional study of individual two-arm parallel-group randomised controlled trials (RCTs) published in the BioMed Central series medical journals, BMJ, BMJ Open and Annals of Emergency Medicine and indexed with the publication type 'Randomised Controlled Trial'. First, we will develop an online tool and training module, dedicated to junior peer reviewers, based on (a) the Consolidated Standards of Reporting Trials (CONSORT) 2010 checklist and its Explanation and Elaboration document, for assessing the completeness of reporting of key items, and (b) the Centre for Evidence-Based Medicine Outcome Monitoring Project process used to identify switched outcomes in completed reports of the primary results of RCTs as initially submitted. Then, we will compare the performance of early career peer reviewers who use the online tool with the usual peer-review process in identifying inadequate reporting and switched outcomes in completed reports of RCTs at initial journal submission. The primary outcome will be the mean number of items accurately classified per manuscript. The secondary outcomes will be the mean number of CONSORT items accurately classified per manuscript and the sensitivity, specificity and likelihood ratio for detecting an item as adequately reported and for identifying a switch in outcomes. We aim to include 120 RCTs and 120 early career peer reviewers. The research protocol was approved by the ethics committee of the INSERM Institutional Review Board (21 January 2016). The study is based on voluntary participation and informed written consent. Trial registration: NCT03119376.
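The protocol's secondary outcomes (sensitivity, specificity and likelihood ratios for detecting an item as adequately reported or a switched outcome) reduce to standard 2x2 computations against the reference standard. A minimal sketch, with invented counts:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and likelihood ratios from a 2x2 table
    (reviewer classification vs. reference standard; 'adequately
    reported' counts as a positive)."""
    sensitivity = tp / (tp + fn)               # true positives detected
    specificity = tn / (tn + fp)               # true negatives detected
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return {"sensitivity": sensitivity, "specificity": specificity,
            "LR+": lr_pos, "LR-": lr_neg}

# Illustrative numbers only, not study data.
print(diagnostic_metrics(tp=70, fp=10, fn=15, tn=25))
```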
Project description:OBJECTIVE:To evaluate the impact of an editorial intervention to improve the completeness of reporting of randomised trials. DESIGN:Randomised controlled trial (RCT). SETTING:BMJ Open's quality improvement programme. PARTICIPANTS:24 manuscripts describing RCTs. INTERVENTIONS:We used an R Shiny application to randomise manuscripts (1:1 allocation ratio, blocks of 4) to the intervention (n=12) or control (n=12) group. The intervention was performed by a researcher with expertise in the content of the Consolidated Standards of Reporting Trials (CONSORT) and consisted of (1) an evaluation of the completeness of reporting of eight core CONSORT items, using the submitted checklist to locate information, and (2) the production of a report containing specific requests for authors based on the reporting issues found, provided alongside the peer-review reports. The control group underwent the usual peer review. OUTCOMES:The primary outcome is the number of adequately reported items (0-8 scale) in the revised manuscript after the first round of peer review. The main analysis was intention-to-treat (n=24), and we imputed the scores of manuscripts lost to follow-up (rejected after peer review and not resubmitted). The secondary outcome is the proportion of manuscripts in which each item was adequately reported. Two blinded reviewers assessed the outcomes independently and in duplicate and resolved disagreements by consensus. We also recorded the time needed to perform the intervention. RESULTS:Manuscripts in the intervention group (mean: 7.01; SD: 1.47) were more completely reported than those in the control group (mean: 5.68; SD: 1.43) (mean difference 1.43, 95% CI 0.31 to 2.58). We observed the main differences in items 6a (outcomes), 9 (allocation concealment mechanism), 11a (blinding) and 17a (outcomes and estimation). The mean time to perform the intervention was 87 (SD 42) min. CONCLUSIONS:We demonstrated the benefit of involving a reporting guideline expert in the editorial process. Improving the completeness of reporting of RCTs is essential to enhance their usability. TRIAL REGISTRATION NUMBER:NCT03751878.
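The trial randomised manuscripts with an R Shiny application whose code is not shown here; the sketch below reproduces the stated scheme (1:1 allocation ratio, permuted blocks of 4) in Python, purely as an illustration of the design.

```python
import random

def block_randomise(n_units: int, block_size: int = 4, seed: int = 42) -> list[str]:
    """Permuted-block randomisation with a 1:1 allocation ratio."""
    rng = random.Random(seed)
    allocations: list[str] = []
    while len(allocations) < n_units:
        # Each block holds equal numbers of each arm, shuffled in place.
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_units]

print(block_randomise(24))  # 24 manuscripts, as in the trial
```

Fixing the seed makes the list reproducible; in practice the sequence would be generated once and concealed from those handling submissions.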
Project description:INTRODUCTION:Transparent and accurate reporting is essential for readers to adequately interpret the results of a study. Journals can play a vital role in improving the reporting of published randomised controlled trials (RCTs). We describe an RCT to evaluate our hypothesis that asking peer reviewers to check whether the most important and poorly reported Consolidated Standards of Reporting Trials (CONSORT) items are adequately reported will result in higher adherence to CONSORT guidelines in published RCTs. METHODS AND ANALYSIS:Manuscripts presenting the primary results of RCTs submitted to participating journals will be randomised to either the intervention group (peer reviewers will receive a reminder and short explanation of the 10 most important and poorly reported CONSORT items and will be asked to check whether these items are reported in the submitted manuscript) or a control group (usual journal practice). The primary outcome will be the mean proportion of the 10 items that are adequately reported in the published articles. Peer reviewers and manuscript authors will not be informed of the study hypothesis, design or intervention. Outcomes will be assessed in duplicate from published articles by two data extractors (at least one blinded to the intervention). We will enrol eligible manuscripts until a minimum of 83 articles per group (166 in total) are published. ETHICS AND DISSEMINATION:This pragmatic RCT was approved by the Medical Sciences Interdivisional Research Ethics Committee of the University of Oxford (R62779/RE001). If this intervention is effective, it could be implemented by all medical journals without requiring large additional resources at journal level. Findings will be disseminated through presentations at relevant conferences and peer-reviewed publications. This trial is registered on the Open Science Framework (https://osf.io/c4hn8).
Project description:BACKGROUND:Pre-publication peer review of manuscripts should enhance the value of research publications to readers who may wish to use the findings in clinical care or health policy-making. Much published research across all medical specialties is not useful and may be misleading, wasteful or even harmful. Reporting guidelines are tools that, in addition to helping authors prepare better manuscripts, may help peer reviewers assess them. We examined journals' instructions to peer reviewers to see whether and how reviewers are encouraged to use them. METHODS:We surveyed the websites of 116 journals from the McMaster list. The main outcomes were (1) identification of online instructions to peer reviewers and (2) the presence or absence of key domains within the instructions: journal logistics, reviewer etiquette and addressing manuscript content (11 domains). FINDINGS:Only 41/116 journals (35%) provided online instructions. All 41 guided reviewers on the logistics of their review processes, 38 (93%) outlined expected standards of behaviour and 39 (95%) contained instruction on evaluating manuscript content. There was great variation in explicit instruction about how to evaluate manuscript content. Almost half of the online instructions (19/41, 46%) mentioned reporting guidelines, usually as general statements suggesting they may be useful or asking whether authors had followed them, rather than as clear instructions about how to use them. All 19 named CONSORT for reporting randomized trials, but there was little mention of CONSORT extensions. PRISMA, QUOROM (the forerunner of PRISMA), STARD, STROBE and MOOSE were mentioned by several journals. No other reporting guideline was mentioned by more than two journals. CONCLUSIONS:Although almost half of the instructions mentioned reporting guidelines, their value in improving research publications is not being fully realised. Journals have a responsibility to support peer reviewers. We make several recommendations, including wider reference to the EQUATOR Network online library (www.equator-network.org/).
Project description:Importance:Adherence to the Consolidated Standards of Reporting Trials (CONSORT) for randomized clinical trials is associated with improved quality, because inadequate reporting in randomized clinical trials may complicate the interpretation and application of findings to clinical care. Objective:To evaluate an automated reporting checklist generation tool that uses natural language processing (NLP), called CONSORT-NLP. Design, Setting, and Participants:This study used published journal articles as training, testing, and validation sets to develop, refine, and evaluate the CONSORT-NLP tool. Articles reporting randomized clinical trials were selected from 25 high-impact-factor journals in the following categories: (1) general and internal medicine, (2) oncology, and (3) cardiac and cardiovascular systems. Main Outcomes and Measures:To evaluate the performance of the tool, an accuracy metric, defined as the number of correct assessments divided by all assessments, was calculated. Results:The CONSORT-NLP tool uses the widely used Portable Document Format (PDF) as its input file. Two to 5 articles were selected from each of the 25 journals, for a total of 158 articles: a training set of 111 articles to train CONSORT-NLP on the CONSORT reporting items, a testing set of 25 articles to refine it, and a validation set of 22 articles to assess its performance. Of the 37 CONSORT reporting items, 34 (92%) were included in the tool; of these, 30 were fully implemented. On the validation set, 28 (93%) of the fully implemented items had an accuracy of more than 90%, and the remaining 2 (7%) had an accuracy between 80% and 90%. A CONSORT-NLP graphical user interface was built using Java in 2019. The time required to complete the CONSORT checklist manually versus with the CONSORT-NLP tool was compared for 30 articles: CONSORT-NLP required a mean (SD) of 23.0 (4.1) seconds, whereas manual completion required a mean (SD) of 11.9 (2.2), 22.6 (4.6), and 57.6 (7.1) minutes for the 3 reviewers, respectively. Two case studies of randomized clinical trials are provided as illustrations of the CONSORT-NLP tool. Conclusions and Relevance:The CONSORT-NLP tool is designed to assist in the reporting of randomized clinical trials. Potential users include clinicians, researchers, and scientists who plan to publish a randomized trial in a peer-reviewed journal; the tool may save them substantial time when generating the CONSORT checklist. It may also be useful for manuscript reviewers and journal editors who review these articles.
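The abstract does not describe CONSORT-NLP's internals, so the sketch below is not its method; it is a toy keyword screen for a single checklist item (the function name and regex are invented), combined with the accuracy metric defined above, to show how such a tool could be scored against manual assessments.

```python
import re

def flags_sample_size(text: str) -> bool:
    """Toy check for CONSORT item 7a (sample size determination);
    a naive pattern, not CONSORT-NLP's actual logic."""
    return bool(re.search(r"sample size|power (calculation|analysis)", text, re.I))

def accuracy(predicted: list[bool], reference: list[bool]) -> float:
    """Accuracy as defined in the article: correct assessments / all assessments."""
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

# Invented article snippets and manual assessments, for illustration only.
docs = ["We performed a power calculation assuming ...",
        "Patients were allocated using sealed envelopes ..."]
ref = [True, False]
pred = [flags_sample_size(d) for d in docs]
print(accuracy(pred, ref))  # 1.0 on this toy example
```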
Project description:OBJECTIVE:To investigate the effectiveness of open peer review as a mechanism to improve the reporting of randomised trials published in biomedical journals. DESIGN:Retrospective before and after study. SETTING:BioMed Central series medical journals. SAMPLE:93 primary reports of randomised trials published in BMC-series medical journals in 2012. MAIN OUTCOME MEASURES:Changes to the reporting of methodological aspects of randomised trials in manuscripts after peer review, based on the CONSORT checklist, corresponding peer reviewer reports, the type of changes requested, and the extent to which authors adhered to these requests. RESULTS:Of the 93 trial reports, 38% (n=35) did not describe the method of random sequence generation, 54% (n=50) concealment of the allocation sequence, 50% (n=46) whether the study was blinded, 34% (n=32) the sample size calculation, 35% (n=33) specification of primary and secondary outcomes, 55% (n=51) results for the primary outcome, and 90% (n=84) details of the trial protocol. The number of changes between manuscript versions was relatively small; most involved adding new information or altering existing information. Most changes requested by peer reviewers had a positive impact on the reporting of the final manuscript, for example, adding or clarifying randomisation and blinding (n=27), sample size (n=15), primary and secondary outcomes (n=16), results for primary or secondary outcomes (n=14), and toning down conclusions to reflect the results (n=27). Some changes requested by peer reviewers, however, had a negative impact, such as adding unplanned analyses (n=15). CONCLUSION:Peer reviewers fail to detect important deficiencies in the reporting of the methods and results of randomised trials. The number of changes requested by peer reviewers was relatively small. Although most had a positive impact, some were inappropriate and could have a negative impact on reporting in the final publication.
Project description:INTRODUCTION:There is significant variation in how anaesthesia is defined and reported in clinical research. This lack of standardisation complicates the interpretation of published evidence and the planning of future clinical trials. This systematic review will assess the reporting of anaesthesia as an intervention in randomised controlled trials (RCTs) against the Consolidated Standards of Reporting Trials for Non-Pharmacological Treatments (CONSORT-NPT) framework. METHODS AND ANALYSIS:The online archives of the top six anaesthesia journals by impact factor and the top three general medicine and general surgery journals will be systematically hand-searched over a 42-month period to identify RCTs describing the use of anaesthetic interventions for any invasive procedure. All modes of anaesthesia and anaesthesia techniques will be included. All study data, including the type of anaesthetic intervention described, will be extracted in keeping with the CONSORT-NPT checklist. Descriptive statistics will be used to summarise general study details, including the types and modes of anaesthetic interventions, and the reporting standards of the trials. ETHICS AND DISSEMINATION:No ethical approval is required. The results will be used to inform a funding application to formally standardise general, local and regional anaesthesia and sedation for use in clinical research. The systematic review will be disseminated via a peer-reviewed manuscript and conferences. PROSPERO REGISTRATION NUMBER:CRD42019141670.
Project description:BACKGROUND:Although peer review is widely considered the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of adding a statistical peer reviewer, of suggesting the use of checklists such as CONSORT or STARD to clinical reviewers, or of both. METHODOLOGY AND PRINCIPAL FINDINGS:The interventions were (1) the addition of a statistical reviewer to the clinical peer-review process and (2) suggesting reporting guidelines to reviewers, with "no statistical expert" and "no checklist" as controls. The two interventions were crossed in a 2x2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers would provide 80% power to test a standardized difference of 0.55. The main outcome was the increment in paper quality as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and in the final post-peer-review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review, and 129 were randomized. Of those, 14 lost to follow-up showed no difference in initial quality from the papers that were followed up. Hence, 115 were included in the main analysis, of which 16 were rejected for publication after peer review. Of the 115 included papers, 21 (18.3%) were interventions, 46 (40.0%) longitudinal designs, 28 (24.3%) cross-sectional and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than the accepted papers (difference 15.0, 95% CI 4.6 to 24.4). Suggesting a guideline to the reviewers had no effect on the change in overall quality as measured by the Goodman scale (0.9, 95% CI -0.3 to +2.1). The estimated effect of adding a statistical reviewer was 5.5 (95% CI 4.3 to 6.7), a significant improvement in quality. CONCLUSIONS AND SIGNIFICANCE:This prospective randomized study shows the positive effect on manuscript quality of adding a statistical reviewer to the field-expert peers. We did not find a statistically significant positive effect of suggesting that reviewers use reporting guidelines.
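The stated sample-size reasoning can be checked approximately. The sketch below assumes a two-sided two-sample t-test at alpha = 0.05 with the 100 papers split 50/50 on each factorial margin; neither assumption is spelled out in the abstract.

```python
from statsmodels.stats.power import TTestIndPower

# Power for a standardized difference of 0.55 with 50 papers per group
# (two-sided test, alpha = 0.05).
power = TTestIndPower().power(effect_size=0.55, nobs1=50, alpha=0.05)
print(f"approximate power: {power:.2f}")  # roughly 0.78, close to the stated 80%
```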
Project description:INTRODUCTION:Evidence in the medical literature suggests that trial registration may not be preventing selective reporting of results. We wondered about the place of such information in the peer-review process. METHOD:We asked 1,503 corresponding authors of clinical trials and 1,733 reviewers to complete an online survey soliciting their views on the use of trial registry information during the peer-review process. RESULTS:1,136 authors (n = 713) and reviewers (n = 423) responded (37.5%); 676 (59.5%) had reviewed an article reporting a clinical trial in the past 2 years. Among these, 232 (34.3%) examined information registered on a trial registry. If one or more items (primary outcome, eligibility criteria, etc.) differed between the registry record and the manuscript, 206 (88.8%) mentioned the discrepancy in their review comments, 46 (19.8%) advised editors not to accept the manuscript, and 8 did nothing. The reviewers' reasons for not using trial registry information included a lack of a registration number in the manuscript (n = 132; 34.2%), lack of time (n = 128; 33.2%), lack of usefulness of registered information for peer review (n = 100; 25.9%), lack of awareness about registries (n = 54; 14%), and excessive complexity of the process (n = 39; 10.1%). CONCLUSION:This survey revealed that only one-third of the peer reviewers surveyed examined registered trial information and reported any discrepancies to journal editors.
Project description:OBJECTIVE:To investigate the effect on manuscript quality of an additional review based on reporting guidelines such as STROBE and CONSORT. DESIGN:Masked randomised trial. POPULATION:Original research manuscripts submitted to the Medicina Clínica journal from May 2008 to April 2009 and considered suitable for publication. INTERVENTIONS:Control group: conventional peer review alone; intervention group: conventional review plus an additional review looking for items from reporting guidelines that were missing. OUTCOMES:Manuscript quality, assessed on a 5-point Likert scale (primary: overall quality; secondary: average quality of specific items in the paper). The main analysis compared groups as allocated, after adjustment for baseline factors (analysis of covariance); a sensitivity analysis compared groups as reviewed. Adherence to reviewer suggestions was assessed with a Likert scale. RESULTS:Of 126 consecutive papers receiving conventional review, 34 were not suitable for publication. The remaining 92 papers were allocated to receive conventional reviews alone (n=41) or additional reviews (n=51). Four papers assigned to the conventional review group deviated from protocol; they received an additional review based on reporting guidelines. We saw an improvement in manuscript quality in favour of the additional review group (comparison as allocated, 0.25, 95% confidence interval -0.05 to 0.54; as reviewed, 0.33, 0.03 to 0.63). More papers with additional reviews than with conventional reviews alone improved from baseline (22 (43%) v eight (20%), difference 23.6% (3.2% to 44.0%), number needed to treat 4.2 (2.3 to 31.2), relative risk 2.21 (1.10 to 4.44)). Authors in the additional review group adhered more to suggestions from the conventional reviews than to those from the additional reviews (average increase 0.43 Likert points (0.19 to 0.67)). CONCLUSIONS:Additional reviews based on reporting guidelines improve manuscript quality, although the observed effect was smaller than hypothesised and not definitively demonstrated. Authors adhere more to suggestions from conventional reviews than to those from additional reviews, showing the difficulty of adhering to high methodological standards in the latest phases of research. To boost paper quality and impact, authors should be aware of the future requirements of reporting guidelines at the very beginning of their study. TRIAL REGISTRATION AND PROTOCOL:Although registries do not include trials of peer review, the protocol design was submitted to sponsored research projects (Instituto de Salud Carlos III, PI081903).
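The main analysis described above, groups compared as allocated after adjustment for baseline factors, is a standard analysis of covariance. A minimal sketch on simulated data (not the trial's data; the 0.25 effect and scale parameters are invented to echo the reported estimate):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 92  # papers allocated in the trial

# Simulated data: 1 = additional review group, 0 = conventional only.
df = pd.DataFrame({
    "additional": rng.integers(0, 2, n),
    "baseline": rng.normal(3.0, 0.6, n),  # baseline quality (Likert-like)
})
df["final"] = df["baseline"] + 0.25 * df["additional"] + rng.normal(0, 0.4, n)

# ANCOVA: final quality regressed on group, adjusted for baseline quality.
model = smf.ols("final ~ additional + baseline", data=df).fit()
print(model.params["additional"])                   # adjusted group effect
print(model.conf_int().loc["additional"].tolist())  # 95% CI
```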