Are CONSORT checklists submitted by authors adequately reflecting what information is actually reported in published papers?
ABSTRACT: Compulsory submission of a checklist from the relevant reporting guideline is one of the most widespread journal requirements aimed at improving completeness of reporting. However, the currently suboptimal levels of adherence to reporting guidelines observed in the literature may indicate that this journal policy is not having a significant effect. We explored whether authors provided the appropriate CONSORT checklist extension for their study and whether there were inconsistencies between what authors claimed on the submitted checklist and what was actually reported in the published paper. We randomly selected 12 randomized trials from three journals that provide the originally submitted checklist and analyzed six core CONSORT items. Only one paper used the appropriate checklist extension and had no inconsistencies between what was claimed in the submitted checklist and what was reported in the published paper. Journals should take further action to make full use of the requirement to submit completed CONSORT checklists, thus ensuring that these checklists reflect what is reported in the manuscript.
Project description: BACKGROUND: After the publication of the CONSORT 2010 statement, few studies have been conducted to assess the reporting quality of randomized clinical trials (RCTs) on the treatment of diabetes mellitus with Traditional Chinese Medicine (TCM) published in Chinese journals. OBJECTIVE: To investigate the current reporting quality of RCTs in leading medical journals in China, using the CONSORT 2010 statement as the criterion. METHODS: The China National Knowledge Infrastructure (CNKI) electronic database was searched for RCTs on the treatment of diabetes mellitus with TCM published in the Journal of Traditional Chinese Medicine, the Chinese Journal of Integrated Traditional & Western Medicine, and the China Journal of Chinese Materia Medica from January to December 2011. We excluded trials reported as "animal studies", "in vitro studies", "case studies", or "systematic reviews". Two independent raters applied the CONSORT checklist to evaluate the reporting quality of all eligible trials after thoroughly discussing and familiarizing themselves with the items. Each item in the checklist was graded as either "yes" or "no" depending on whether it had been reported by the authors. RESULTS: We identified 27 RCTs. Across the 37 items in the CONSORT checklist, the average reporting percentage was 45.0%; for the "title and abstract", "introduction", "methods", "results", "discussion" and "other information" sections it was 33.3%, 88.9%, 36.4%, 54.4%, 71.6% and 14.8%, respectively. In the Journal of Traditional Chinese Medicine, the Chinese Journal of Integrated Traditional & Western Medicine, and the China Journal of Chinese Materia Medica, the average reporting percentage was 42.2%, 56.8%, and 46.0%, respectively. CONCLUSIONS: The reporting quality of RCTs in these three journals was insufficient to allow readers to assess the validity of the trials.
We recommend that editors require authors to use the CONSORT statement when reporting their trial results as a condition of publication.
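The yes/no grading described above reduces to simple arithmetic: a reporting percentage is the share of items graded "yes". A minimal Python sketch of that scheme, using illustrative gradings rather than the study's actual data:

```python
# Sketch of the yes/no grading scheme described above: each CONSORT item is
# scored "yes" or "no", and the reporting percentage is the share of "yes".
# The section names follow CONSORT, but the gradings are illustrative.

def reporting_percentage(gradings):
    """Share of items graded 'yes', as a percentage."""
    return 100 * sum(g == "yes" for g in gradings) / len(gradings)

# Hypothetical gradings for a few items in two CONSORT sections.
sections = {
    "title and abstract": ["yes", "no", "no"],
    "methods": ["yes", "yes", "no", "no", "no", "yes", "no", "no"],
}

for name, gradings in sections.items():
    print(f"{name}: {reporting_percentage(gradings):.1f}%")
```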
Project description: BACKGROUND: The CONSORT (Consolidated Standards of Reporting Trials) Statement was developed to help biomedical researchers report randomised controlled trials (RCTs) transparently. We have developed an extension to the CONSORT 2010 Statement for social and psychological interventions (CONSORT-SPI 2018) to help behavioural and social scientists report these studies transparently. METHODS: Following a systematic review of existing reporting guidelines, we conducted an online Delphi process to prioritise the list of potential items for the CONSORT-SPI 2018 checklist identified from the systematic review. Of 384 international participants, 321 (84%) participated in both rating rounds. We then held a consensus meeting of 31 scientists, journal editors, and research funders (March 2014) to finalise the content of the CONSORT-SPI 2018 checklist and flow diagram. RESULTS: CONSORT-SPI 2018 extends 9 items (14 including sub-items) from the CONSORT 2010 checklist, adds a new item (with 3 sub-items) related to stakeholder involvement in trials, and modifies the CONSORT 2010 flow diagram. This Explanation and Elaboration (E&E) document is a user manual to enhance understanding of CONSORT-SPI 2018. It discusses the meaning and rationale for each checklist item and provides examples of complete and transparent reporting. CONCLUSIONS: The CONSORT-SPI 2018 Extension, this E&E document, and the CONSORT website (www.consort-statement.org) are helpful resources for improving the reporting of social and psychological intervention RCTs.
Project description: Systematic reviews evaluating the impact of interventions to improve the quality of peer review for biomedical publications have highlighted that such interventions are limited and have little impact. This study aims to compare the accuracy of early career peer reviewers who use an innovative online tool with the usual peer review process in evaluating the completeness of reporting and switched primary outcomes in completed reports. This is a cross-sectional study of individual two-arm parallel-group randomised controlled trials (RCTs) published in the BioMed Central series medical journals, BMJ, BMJ Open and Annals of Emergency Medicine and indexed with the publication type 'Randomised Controlled Trial'. First, we will develop an online tool and training module based on (a) the Consolidated Standards of Reporting Trials (CONSORT) 2010 checklist and the Explanation and Elaboration document, dedicated to junior peer reviewers assessing the completeness of reporting of key items, and (b) the Centre for Evidence-Based Medicine Outcome Monitoring Project process used to identify switched outcomes in completed reports of the primary results of RCTs as initially submitted. Then, we will compare the performance of early career peer reviewers who use the online tool with the usual peer review process in identifying inadequate reporting and switched outcomes in completed reports of RCTs at initial journal submission. The primary outcome will be the mean number of items accurately classified per manuscript. The secondary outcomes will be the mean number of CONSORT items accurately classified per manuscript and the sensitivity, specificity and likelihood ratio for detecting an item as adequately reported and for identifying a switch in outcomes. We aim to include 120 RCTs and 120 early career peer reviewers. The research protocol was approved by the ethics committee of the INSERM Institutional Review Board (21 January 2016).
The study is based on voluntary participation and informed written consent. Trial registration number: NCT03119376.
Project description: Importance: Adherence to the Consolidated Standards of Reporting Trials (CONSORT) for randomized clinical trials is associated with improving quality, because inadequate reporting in randomized clinical trials may complicate the interpretation and application of findings to clinical care. Objective: To evaluate an automated reporting checklist generation tool, called CONSORT-NLP, that uses natural language processing (NLP). Design, Setting, and Participants: This study used published journal articles as training, testing, and validation sets to develop, refine, and evaluate the CONSORT-NLP tool. Articles reporting randomized clinical trials were selected from 25 high-impact-factor journals in the following categories: (1) general and internal medicine, (2) oncology, and (3) cardiac and cardiovascular systems. Two to 5 articles were selected from each of these journals, for a total of 158 articles: a training set of 111 articles to train CONSORT-NLP on the CONSORT reporting items, a testing set of 25 articles to refine CONSORT-NLP, and a validation set of 22 articles to assess its performance. Main Outcomes and Measures: To evaluate the performance of the tool, an accuracy metric, defined as the number of correct assessments divided by all assessments, was calculated. Results: The CONSORT-NLP tool uses the widely used Portable Document Format as its input file format. Of the 37 CONSORT reporting items, 34 (92%) were included in the tool, of which 30 were fully implemented; 28 (93%) of the fully implemented CONSORT reporting items had an accuracy of more than 90% on the validation set, and the remaining 2 (7%) had an accuracy between 80% and 90%. A CONSORT-NLP graphical user interface was built using Java in 2019.
The time required to complete the CONSORT checklist manually vs with the CONSORT-NLP tool was compared for 30 articles, and two case studies of randomized clinical trials are provided as an illustration of the tool. For the 30 articles investigated, CONSORT-NLP required a mean (SD) of 23.0 (4.1) seconds, whereas manual completion required a mean (SD) of 11.9 (2.2), 22.6 (4.6), and 57.6 (7.1) minutes for the three reviewers, respectively. Conclusions and Relevance: The CONSORT-NLP tool is designed to assist in the reporting of randomized clinical trials. Potential users include clinicians, researchers, and scientists who plan to publish a randomized trial in a peer-reviewed journal; CONSORT-NLP may save them substantial time when generating the CONSORT checklist. The tool may also be useful for manuscript reviewers and journal editors who review these articles.
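The accuracy metric defined above is simply correct assessments divided by all assessments, computed per checklist item. A minimal Python sketch; the item names and validation-set tallies are hypothetical, not taken from the study:

```python
# Hedged sketch of the per-item accuracy metric described above:
# accuracy = correct assessments / all assessments.
# Item labels and tallies below are illustrative, not the study's data.

def item_accuracy(correct: int, total: int) -> float:
    """Fraction of assessments the tool got right for one checklist item."""
    if total == 0:
        raise ValueError("no assessments for this item")
    return correct / total

# Hypothetical validation-set tallies for two CONSORT items
# (correct assessments, total assessments).
validation = {"trial_design": (21, 22), "randomization": (20, 22)}

for item, (correct, total) in validation.items():
    print(f"{item}: {item_accuracy(correct, total):.1%}")
```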
Project description: BACKGROUND: Clear, transparent, and sufficiently detailed abstracts of conferences and journal articles related to randomized controlled trials (RCTs) are important, because readers often base their assessment of a trial solely on information in the abstract. Here, we extend the CONSORT (Consolidated Standards of Reporting Trials) Statement to develop a minimum list of essential items that authors should consider when reporting the results of an RCT in any journal or conference abstract. METHODS AND FINDINGS: We generated a list of items from existing quality assessment tools and empirical evidence. A three-round, modified Delphi process was used to select items. In all, 109 participants were invited to participate in an electronic survey; the response rate was 61%. Survey results were presented at a meeting of the CONSORT Group in Montebello, Canada, in January 2007, involving 26 participants, including clinical trialists, statisticians, epidemiologists, and biomedical editors. Checklist items were discussed for eligibility for the final checklist. The checklist was then revised to ensure that it reflected discussions held during and subsequent to the meeting. CONSORT for Abstracts recommends that abstracts relating to RCTs have a structured format. Items should include details of trial objectives; trial design (e.g., method of allocation, blinding/masking); trial participants (i.e., description, numbers randomized, and number analyzed); interventions intended for each randomized group and their impact on primary efficacy outcomes and harms; trial conclusions; trial registration name and number; and source of funding. We recommend the checklist be used in conjunction with this explanatory document, which includes examples of good reporting, rationale, and evidence, when available, for the inclusion of each item.
CONCLUSIONS:CONSORT for Abstracts aims to improve reporting of abstracts of RCTs published in journal articles and conference proceedings. It will help authors of abstracts of these trials provide the detail and clarity needed by readers wishing to assess a trial's validity and the applicability of its results.
Project description: OBJECTIVE: Poor reporting in randomized clinical trial (RCT) abstracts reduces quality and misinforms readers. Spin, a biased presentation of findings, can mislead clinicians into accepting a clinical intervention despite a non-significant primary outcome. Good reporting practices and the absence of spin therefore enhance research quality. We aimed to assess reporting quality and spin in abstracts of RCTs evaluating the effect of periodontal therapy on cardiovascular (CVD) outcomes. METHODS: PubMed, Scopus, the Cochrane Central Register of Controlled Trials (CENTRAL), and 17 trial registration platforms were searched. Cohort, non-randomized, non-English, and pediatric studies were excluded. RCT abstracts were reviewed by two authors using the CONSORT for Abstracts and spin checklists for data extraction. Cohen's kappa statistic was used to assess inter-rater agreement. Data on selected RCT publication metrics were collected. Descriptive analysis was performed with non-parametric methods, and correlation analyses between quality, spin and bibliometric parameters were conducted. RESULTS: 24 RCTs were selected for CONSORT analysis and 14 fulfilled the criteria for spin analysis. Several important CONSORT elements were neglected in the abstracts, including description of the study population (100%), an explicitly stated primary outcome (87%), methods of randomization and blinding (100%), and trial registration (87%). No RCT examined true outcomes (CVD events). A substantial fraction of the abstracts showed at least one form of spin in the results and conclusions (86%) and claimed some treatment benefit despite a non-significant primary outcome (64%). High-quality reporting had a significant positive correlation with reporting of trial registration (p = 0.04) and funding (p = 0.009). Spin showed a marginal negative correlation with reporting quality (p = 0.059).
CONCLUSION: Poor adherence to the CONSORT guidelines and high levels of data spin were found in abstracts of RCTs exploring the effects of periodontal therapy on CVD outcomes. Our findings indicate that journal editors and reviewers should require strict adherence to reporting guidelines to improve reporting quality and reduce waste.
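Cohen's kappa, used above to quantify agreement between the two raters, corrects observed agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal pure-Python sketch; the two rating vectors are made up for illustration, not the study's data:

```python
# Minimal sketch of Cohen's kappa for two raters scoring checklist items
# "yes"/"no". The rating vectors below are illustrative, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (p_o - p_e) / (1 - p_e)

a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(a, b), 3))  # → 0.5
```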
Project description: Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions, and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference.
We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits. To encourage its wide dissemination, this article is freely accessible on the BMJ and Trials journal websites. "To maximise the benefit to society, you need to not just do research but do it well" (Douglas G Altman).
Project description: Checklists have been shown to improve performance of complex, error-prone processes. Our objective was to develop a checklist with the potential to reduce the likelihood of diagnostic error for patients presenting to the Emergency Room (ER) with undiagnosed conditions. Participants included 15 staff ER physicians working in two large academic centers. A rapid-cycle design and evaluation process was used to develop a general checklist for high-risk situations vulnerable to diagnostic error. Physicians used the general checklist and a set of symptom-specific checklists for a period of 2 months. We conducted a mixed-methods evaluation that included interviews regarding user perceptions and quantitative assessment of resource utilization before and after checklist use. The general checklist was developed iteratively by obtaining feedback from users and subject matter experts, and was trialed along with a set of specific checklists in the ER. Both the general and the symptom-specific checklists were judged to be helpful, with a slight preference for the symptom-specific lists. Checklist use commonly prompted consideration of additional diagnostic possibilities, changed the working diagnosis in approximately 10% of cases, and anecdotally was thought to be helpful in avoiding diagnostic errors. Checklist use was prompted by a variety of factors, not just diagnostic uncertainty. None of the physicians used the checklists in collaboration with the patient, despite being encouraged to do so. Checklist use did not prompt large changes in test ordering or consultation. In the ER setting, checklists for diagnosis are helpful in considering additional diagnostic possibilities and thus have the potential to prevent diagnostic errors. Inconsistent usage and using the checklists privately, instead of with the patient, are factors that may detract from obtaining maximum benefit.
Further research is needed to optimize checklists for use in the ER, to determine how to increase usage, to evaluate the impact of checklist utilization on error rates and patient outcomes, to determine how checklist usage affects test ordering and consultation, and to compare checklists with other approaches to reducing diagnostic error.
Project description: Reporting randomised controlled trials is a key element in disseminating research findings, and the CONSORT statement was introduced to improve reporting quality. We assessed the adherence to the CONSORT statement of randomised controlled trials published in 2011 in the top ten ranked journals of critical care medicine (ISI Web of Knowledge 2011, Thomson Reuters, London, UK). Design. We performed a retrospective cross-sectional data analysis. Setting. This study was executed at the University Hospital of RWTH, Aachen. Participants. We selected the following top ten listed journals according to the ISI Web of Knowledge critical care medicine ranking for 2011: American Journal of Respiratory and Critical Care Medicine, Critical Care Medicine, Intensive Care Medicine, CHEST, Critical Care, Journal of Neurotrauma, Resuscitation, Pediatric Critical Care Medicine, Shock and Minerva Anestesiologica. Main outcome measures. We screened the online table of contents of each included journal to identify the randomised controlled trials and evaluated each trial's adherence to the items of the CONSORT checklist. Additionally, we correlated the citation frequency of the articles and the impact factor of the respective journal with the number of reported items per trial. We analysed 119 randomised controlled trials and found, 15 years after the implementation of the CONSORT statement, that a median of 61.1% of the checklist items were reported. Only 55.5% of the articles were identified as randomised trials in their titles. The citation frequency of the trials correlated significantly with CONSORT adherence (rs = 0.433, p < 0.001 and r = 0.331, p < 0.001). The impact factor also showed a significant correlation with CONSORT adherence (r = 0.386, p < 0.001). The reporting quality of randomised controlled trials in the field of critical care medicine remains poor and needs considerable improvement.
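The Spearman coefficient (rs) reported above is the Pearson correlation computed on rank-transformed data, which makes it sensitive to any monotone association between citation counts and CONSORT adherence. A minimal pure-Python sketch; the citation and adherence values are illustrative, not the study's data:

```python
# Minimal sketch of Spearman's rank correlation (rs), as used above to relate
# citation frequency to CONSORT adherence. Data below are illustrative.

def _ranks(values):
    """Average 1-based ranks, assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied 1-based positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

citations = [3, 15, 7, 0, 22, 9]                   # hypothetical citation counts
adherence = [0.45, 0.60, 0.55, 0.40, 0.80, 0.70]   # hypothetical CONSORT adherence
print(round(spearman(citations, adherence), 3))
```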
Project description: BACKGROUND: We investigated whether the quality of reporting for randomised controlled trials of acupuncture had improved since the publication of the STRICTA and CONSORT statements. We conducted a before-and-after study, comparing ratings of reporting quality following the publication of the STRICTA and CONSORT recommendations. METHODOLOGY AND PRINCIPAL FINDINGS: Ninety peer-reviewed journal articles reporting the results of acupuncture trials were selected at random from a wider sampling frame of 266 papers. Papers published in three distinct time periods (1994-1995, 1999-2000 and 2004-2005) were compared. Assessment criteria were developed directly from the CONSORT and STRICTA checklists. Papers were independently assessed for quality of reporting by two assessors, one of whom was blind to information that could have introduced systematic bias (e.g., date of publication). We detected a statistically significant increase in the reporting of CONSORT items across the time periods measured. We did not, however, find a difference in the number of STRICTA items reported in journal articles published before and 3 to 4 years after the introduction of the STRICTA recommendations. CONCLUSIONS AND SIGNIFICANCE: The results of this study suggest that general standards of reporting for acupuncture trials have significantly improved since the introduction of the CONSORT statement in 1996, but that the quality of reporting of details specific to acupuncture interventions has yet to change following the more recent introduction of the STRICTA recommendations. Wider targeting and revision of the guidelines are recommended.