Project description: This study aimed to analyze the content of data availability statements (DAS) and the actual sharing of raw data in preprint articles about COVID-19. The study combined a bibliometric analysis and a cross-sectional survey. We analyzed preprint articles on COVID-19 published on medRxiv and bioRxiv from January 1, 2020, to March 30, 2020. We extracted data sharing statements, tried to locate raw data when authors indicated they were available, and surveyed authors in 2020-2021. We surveyed authors whose articles did not include a DAS, who indicated that data were available on request, or whose manuscript reported that raw data were available in the manuscript although we could not locate them. Raw data collected in this study are published on the Open Science Framework (https://osf.io/6ztec/). We analyzed 897 preprint articles. There were 699 (78%) articles with a Data/Code field present on the website of the preprint server. In 234 (26%) preprints, a data/code sharing statement was reported within the manuscript. Of the 283 preprints that reported that data were accessible, we found raw data/code for 133 (47%; 15% of all analyzed preprint articles). Most commonly, authors indicated that data were available on GitHub or another clearly specified web location, on (reasonable) request, or in the manuscript or its supplementary files. In conclusion, preprint servers should require authors to provide data sharing statements that are included both on the website and in the manuscript. Education of researchers about the meaning of data sharing is needed. Supplementary information: The online version contains supplementary material available at 10.1007/s11192-022-04346-1.
Project description: Background: The concept of standard of care (SoC) treatment is commonly used in clinical trials. However, in the setting of an emergent disease such as COVID-19, for which there was no established effective treatment, it is unclear what investigators considered to be the SoC in early clinical trials. The aim of this study was to analyze and classify the SoC reported in randomized controlled trial (RCT) registrations and in RCTs published in scholarly journals and on preprint servers about treatment interventions for COVID-19. Methods: We conducted a cross-sectional study. We included RCTs registered in a trial registry, and/or published in a scholarly journal, and/or published on the preprint servers medRxiv and bioRxiv (any phase; any recruitment status; any language) that aimed to compare treatment interventions for COVID-19 against the SoC, available from January 1, 2020, to October 8, 2020. Studies were eligible for inclusion if they reported using standard, usual, conventional, or routine treatment. When we found multiple reports of the same RCT, we treated those sources as one unit of analysis. Results: Among the 737 unique trials included in the analysis, 152 (21%) reported that the SoC was proposed by an institutional or national authority. There were 129 (18%) trials that reported the component(s) of their SoC; the remaining trials simply reported that they used SoC, with no further detail. Among those 129 trials, the number of SoC components ranged from 1 to 10. The most commonly used groups of interventions in the SoC were antiparasitics (62% of the trials), antivirals (57%), antibiotics (31%), oxygen (17%), antithrombotics/anticoagulants (14%), vitamins (13%), immunomodulatory agents (13%), corticosteroids (12%), and analgesics/antipyretics (12%). Various combinations of those interventions were used in the SoC, with up to 7 different types of interventions combined. Posology, timing, and method of administration were frequently not reported for SoC components. Conclusion: Most RCTs (82%) about treatment for COVID-19 that were registered or published in the first 9 months of the pandemic did not describe the "standard of care" they used. Many of those interventions have, by now, been shown to be ineffective or even detrimental.
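To illustrate how SoC component tallies like those above can be produced, here is a minimal sketch that counts, per intervention category, the number of trials whose extracted SoC mentions it. The records and category labels below are hypothetical placeholders, not the study's extraction data.

```python
from collections import Counter

# Illustrative records: one list of extracted SoC component categories per trial
# (hypothetical data; the study's real extraction sheet is far larger).
trials = [
    ["antiparasitics", "antivirals", "oxygen"],
    ["antiparasitics", "antibiotics"],
    ["antivirals", "corticosteroids", "antithrombotics/anticoagulants"],
]

# Count in how many trials each category appears at least once.
category_counts = Counter(cat for components in trials for cat in set(components))

n = len(trials)
for category, count in category_counts.most_common():
    print(f"{category}: {count}/{n} trials ({100 * count / n:.0f}%)")

# Distribution of the number of distinct component types combined per trial.
combo_sizes = Counter(len(set(components)) for components in trials)
print("components per trial:", dict(combo_sizes))
```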
Project description: Objectives: To assess the reporting quality of abstracts of published randomized controlled trials (RCTs) of interventions for coronavirus disease 2019 (COVID-19), including the use of spin strategies and the level of spin for RCTs with statistically non-significant primary outcomes, and to explore potential predictors of reporting quality and the severity of spin. Study design and setting: PubMed was searched for RCTs that tested interventions for COVID-19, and the reporting quality of and spin in the abstracts were assessed. Linear regression analyses were used to identify potential predictors. Results: Forty RCT abstracts were included in our assessment of reporting quality; a higher word count in the abstract was significantly associated with higher reporting scores (95% CI 0.044 to 0.658, P=0.026). Multiple spin strategies were identified. Our multivariate analyses showed that geographical origin was associated with the severity of spin, with research from non-Asian regions containing fewer spin strategies (95% CI -0.760 to -0.099, P=0.013). Conclusions: The reporting quality of abstracts of RCTs of interventions for COVID-19 is far from satisfactory. A relatively high proportion of the abstracts contained spin, and the findings reported in the results and conclusion sections of these abstracts need to be interpreted with caution.
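The abstract reports a linear regression of reporting score on abstract word count. A minimal sketch of that kind of analysis using statsmodels, with randomly generated toy data and illustrative variable names (not the study's dataset):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: abstract word counts and CONSORT-based reporting scores.
word_count = rng.integers(150, 400, size=40).astype(float)
score = 3.0 + 0.01 * word_count + rng.normal(0.0, 1.0, size=40)

X = sm.add_constant(word_count)    # intercept + word-count predictor
model = sm.OLS(score, X).fit()

print(model.params)                # slope: score change per additional word
print(model.conf_int(alpha=0.05))  # 95% CI, analogous to the interval reported
print(model.pvalues)
```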
Project description: Background: Abstracts provide readers with concise and readily accessible information about trials. However, poor reporting quality and spin (misrepresentation of research findings) can lead readers to overestimate trial validity. This methodological study aimed to assess the reporting quality of, and spin among, randomized controlled trial (RCT) abstracts in pediatric dentistry. Methods: We hand-searched RCTs in five leading pediatric dental journals published between 2015 and 2021. Reporting quality in each abstract was assessed using the original 16-item CONSORT for abstracts checklist. Linear regression analyses were performed to identify factors associated with reporting quality. We evaluated the presence and characteristics of spin, according to pre-determined spin strategies, only in abstracts of parallel-group RCTs with non-significant primary outcomes. Results: One hundred eighty-two abstracts were included in the reporting quality evaluation. The mean overall quality score was 4.57 (SD, 0.103; 95% CI, 4.36-4.77; score range, 1-10). Only interventions, objective, and conclusions were adequately reported. Use of a flow diagram (P < 0.001) was the only significant factor associated with higher reporting quality. Of the 51 RCT abstracts included in the spin analysis, spin was identified in 40 (78.4%), among which 23 abstracts (45.1%) had spin in the Results section and 39 (76.5%) in the Conclusions section. Conclusions: The reporting quality of RCT abstracts in pediatric dentistry is suboptimal and the prevalence of spin is high. Joint efforts are needed to improve reporting quality and minimize spin.
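A brief sketch of how per-abstract adherence scores and per-item reporting rates can be computed from a binary CONSORT-for-abstracts checklist matrix; the matrix below is randomly generated and the column names are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical checklist matrix: rows = abstracts, columns = the 16 CONSORT
# for abstracts items; 1 = adequately reported, 0 = not reported.
items = [f"item_{i}" for i in range(1, 17)]
checklist = pd.DataFrame(rng.integers(0, 2, size=(182, 16)), columns=items)

scores = checklist.sum(axis=1)  # overall quality score per abstract
mean, sd = scores.mean(), scores.std(ddof=1)
se = sd / np.sqrt(len(scores))
print(f"mean score {mean:.2f} "
      f"(95% CI {mean - 1.96 * se:.2f} to {mean + 1.96 * se:.2f})")

# Per-item reporting rates, to see which items are adequately reported.
print(checklist.mean().sort_values(ascending=False))
```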
Project description: Introduction: Preprints have been widely cited during the COVID-19 pandemic, even in major medical journals. However, since the subsequent publication of a preprint is not always mentioned in preprint repositories, some preprints may be inappropriately cited or quoted. Our objectives were to assess the reliability of preprint citations in articles on COVID-19, to assess the rate of publication of preprints cited in these articles, and, where relevant, to compare the content of the preprints with their published versions. Methods: Articles on COVID-19 published in 2020 in The BMJ, The Lancet, JAMA, and the NEJM were manually screened to identify all articles citing at least one preprint from medRxiv. We searched PubMed, Google, and Google Scholar to assess whether and when each preprint had been published in a peer-reviewed journal. Published articles were screened to assess whether the title, data, or conclusions were identical to the preprint version. Results: Among the 205 research articles on COVID-19 published by the four major medical journals in 2020, 60 (29.3%) cited at least one medRxiv preprint. Among the 182 preprints cited, 124 were published in a peer-reviewed journal, 51 (41.1%) before the citing article was published online and 73 (58.9%) later. There were differences in the title, the data, or the conclusion between the cited preprint and the published version for nearly half of them. medRxiv did not mention the publication for 53 (42.7%) of the published preprints. Conclusions: More than a quarter of preprint citations were inappropriate, since the preprints had in fact already been published at the time of publication of the citing article, often with different content. Authors and editors should check the accuracy of citations and quotations of preprints before publishing manuscripts that cite them.
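Screening for changed titles between a preprint and its published version can be partially automated before the manual comparison described above. A small sketch using the standard library's difflib; the similarity threshold is an arbitrary assumption, and pairs scoring below it would be flagged for manual review.

```python
from difflib import SequenceMatcher

def titles_match(preprint_title: str, published_title: str,
                 threshold: float = 0.9) -> bool:
    """Crude similarity check between a preprint title and the published one;
    a ratio below the threshold flags the pair for manual review."""
    ratio = SequenceMatcher(
        None, preprint_title.lower().strip(), published_title.lower().strip()
    ).ratio()
    return ratio >= threshold

# The second title differs enough to be flagged for manual comparison.
print(titles_match(
    "Estimating the infection fatality rate of COVID-19",
    "Infection fatality rate of COVID-19: a modelling study",
))  # -> False
```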
Project description: Background: Open access (OA) journals are becoming a publication standard for health research, but it is not clear how they differ from traditional subscription journals in the quality of research reporting. We assessed the completeness of results reporting in abstracts of randomized controlled trials (RCTs) published in these journals. Methods: We used the Consolidated Standards of Reporting Trials Checklist for Abstracts (CONSORT-A) to assess the completeness of reporting in abstracts of parallel-design RCTs published in subscription journals (n = 149; New England Journal of Medicine, Journal of the American Medical Association, Annals of Internal Medicine, and Lancet) and OA journals (n = 119; BioMedCentral series, PLoS journals) in 2016 and 2017. Results: Abstracts in subscription journals completely reported 79% (95% confidence interval [CI], 77-81%) of the 16 CONSORT-A items, compared with 65% (95% CI, 63-67%) in abstracts from OA journals (P < 0.001, chi-square test). The median number of completely reported CONSORT-A items was 13 (95% CI, 12-13) in subscription journal articles and 11 (95% CI, 10-11) in OA journal articles. Subscription journal articles had significantly more complete reporting than OA journal articles for nine CONSORT-A items and did not differ for the items trial design, outcome, randomization, blinding (masking), recruitment, and conclusions. OA journals were better than subscription journals at reporting the randomized study design in the title. Conclusion: Abstracts of randomized controlled trials published in subscription medical journals have greater completeness of reporting than abstracts published in OA journals. OA journals should take appropriate measures to ensure that published articles contain adequate detail to facilitate understanding and quality appraisal of research reports about RCTs.
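The chi-square comparison reported above operates on item-level completeness. Here is a sketch that reconstructs an approximate 2x2 table from the reported group sizes and percentages (149 and 119 abstracts, 16 items each, 79% vs. 65% complete) and runs the test with SciPy; this is an illustrative approximation, not the authors' exact computation.

```python
from scipy.stats import chi2_contingency

# Item-level counts reconstructed from the reported figures:
# 149 subscription abstracts x 16 items, 79% completely reported;
# 119 OA abstracts x 16 items, 65% completely reported.
sub_total, oa_total = 149 * 16, 119 * 16
sub_complete = round(0.79 * sub_total)
oa_complete = round(0.65 * oa_total)

table = [
    [sub_complete, sub_total - sub_complete],
    [oa_complete, oa_total - oa_complete],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```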
Project description: Objective: To evaluate the reporting quality of randomized controlled trial (RCT) abstracts concerning patients with coronavirus disease 2019 (COVID-19) and to analyze the factors influencing that quality. Methods: The PubMed, Embase, Web of Science, and Cochrane Library databases were searched from inception to December 1, 2020, to collect RCTs on patients with COVID-19. The CONSORT statement for abstracts was used to evaluate the reporting quality of the RCT abstracts. Results: A total of 53 RCT abstracts were included. The average reporting rate across all items of the CONSORT statement for abstracts was 50.2%. The items with the lowest reporting quality were mainly the trial design and the details of randomization and blinding (<10%). The mean overall adherence score across all studies was 8.68 ± 2.69 (range 4-13.5). Multivariate linear regression analysis showed that higher reporting scores were associated with higher journal impact factor (P < 0.01), international collaboration (P = 0.04), and a structured abstract format (P < 0.01). Conclusions: Although many RCTs on patients with COVID-19 have been published in different journals, the overall quality of reporting in the included RCT abstracts was suboptimal, which diminishes their potential usefulness and may mislead clinical decision-making. To improve reporting quality, it is necessary to promote and actively apply the CONSORT statement for abstracts.
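A minimal sketch of the multivariate linear regression described, using the statsmodels formula API; the data frame below is randomly generated and the variable names (impact_factor, international, structured) are illustrative assumptions, not the study's coding scheme.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 53

# Hypothetical per-abstract data mirroring the predictors described above.
df = pd.DataFrame({
    "score": rng.normal(8.68, 2.69, n),      # CONSORT adherence score
    "impact_factor": rng.gamma(2.0, 5.0, n),
    "international": rng.integers(0, 2, n),  # 1 = international collaboration
    "structured": rng.integers(0, 2, n),     # 1 = structured abstract format
})

fit = smf.ols("score ~ impact_factor + international + structured", data=df).fit()
print(fit.summary().tables[1])               # coefficients, 95% CIs, p-values
```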
Project description: Background: Spin refers to reporting practices that distort interpretation and mislead readers by being more optimistic than the results justify, thereby possibly changing clinicians' perceptions and influencing their decisions. Because of the clinical importance of accurate interpretation of results and the evidence of spin in other research fields, we aimed to identify the nature and frequency of spin in published reports of tinnitus randomized controlled trials (RCTs) and to assess possible determinants and effects of spin. Methods: We systematically searched PubMed for RCTs with tinnitus-related outcomes published from 2015 to 2019. All eligible articles were assessed for actual and potential spin using prespecified criteria. Results: Our search identified 628 studies, of which 87 were eligible for evaluation. A total of 95% of the studies contained actual or potential spin. Actual spin was found mostly in the conclusions of articles, which reflected something other than the reported point estimate (or CI) of the outcome (n = 34, 39%) or which were selectively focused (n = 49, 56%). Linguistic spin ("trend," "marginally significant," or "tendency toward an effect") was found in 17% of the studies. We were not able to assess the association between study characteristics and the occurrence of spin because of the low number of trials in some categories of study characteristics. We found no effect of spin on type of journal [odds ratio (OR) -0.13, 95% CI -0.56-0.31], journal impact factor (OR 0.17, 95% CI -0.18-0.51), or number of citations (OR 1.95, CI -2.74-6.65). Conclusion: There is a large amount of spin in tinnitus RCTs. Our findings show that there is room for improvement in the reporting and interpretation of results. Awareness of the different forms of spin must be raised to improve research quality and reduce research waste.
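One plausible way to estimate associations such as those reported above (presence of spin vs. journal impact factor or citation count) is a logistic regression with exponentiated coefficients. A minimal sketch with hypothetical data and variable names; the authors' actual model may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 87

# Hypothetical per-trial data: presence of spin and journal characteristics.
df = pd.DataFrame({
    "spin": rng.integers(0, 2, n),            # 1 = spin identified
    "impact_factor": rng.gamma(2.0, 2.0, n),
    "citations": rng.poisson(15, n),
})

fit = smf.logit("spin ~ impact_factor + citations", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params)              # exponentiate log-odds coefficients
print(pd.concat([odds_ratios, np.exp(fit.conf_int())], axis=1))
```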
Project description: Importance: Numerous studies have shown that adherence to reporting guidelines is suboptimal. Objective: To evaluate whether asking peer reviewers to check whether specific reporting guideline items were adequately reported would improve adherence to reporting guidelines in published articles. Design, setting, and participants: Two parallel-group, superiority randomized trials were performed using manuscripts submitted to 7 biomedical journals (5 from the BMJ Publishing Group and 2 from the Public Library of Science) as the unit of randomization, with peer reviewers allocated to the intervention or control group. Interventions: The first trial (CONSORT-PR) focused on manuscripts that presented randomized clinical trial (RCT) results and reported following the Consolidated Standards of Reporting Trials (CONSORT) guideline, and the second trial (SPIRIT-PR) focused on manuscripts that presented RCT protocols and reported following the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. The CONSORT-PR trial included manuscripts that described RCT primary results (submitted July 2019 to July 2021). The SPIRIT-PR trial included manuscripts that contained RCT protocols (submitted June 2020 to May 2021). Manuscripts in both trials were randomized (1:1) to the intervention or control group; the control group received usual journal practice. In the intervention group of both trials, peer reviewers received an email from the journal asking them to check whether the 10 most important and most poorly reported CONSORT (for CONSORT-PR) or SPIRIT (for SPIRIT-PR) items were adequately reported in the manuscript. Peer reviewers and authors were not informed of the purpose of the study, and outcome assessors were blinded. Main outcomes and measures: The difference between the intervention and control groups in the mean proportion of the 10 CONSORT or SPIRIT items adequately reported in published articles. Results: In the CONSORT-PR trial, 510 manuscripts were randomized. Of those, 243 were published (122 in the intervention group and 121 in the control group). A mean proportion of 69.3% (95% CI, 66.0%-72.7%) of the 10 CONSORT items were adequately reported in the intervention group and 66.6% (95% CI, 62.5%-70.7%) in the control group (mean difference, 2.7%; 95% CI, -2.6% to 8.0%). In the SPIRIT-PR trial, of the 244 randomized manuscripts, 178 were published (90 in the intervention group and 88 in the control group). A mean proportion of 46.1% (95% CI, 41.8%-50.4%) of the 10 SPIRIT items were adequately reported in the intervention group and 45.6% (95% CI, 41.7%-49.4%) in the control group (mean difference, 0.5%; 95% CI, -5.2% to 6.3%). Conclusions and relevance: These 2 randomized trials found that the tested intervention was not useful for increasing reporting completeness in published articles. Other interventions should be assessed and considered in the future. Trial registration: ClinicalTrials.gov identifiers: NCT05820971 (CONSORT-PR) and NCT05820984 (SPIRIT-PR).
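The primary outcome above is a difference in mean proportions with a 95% CI. A small sketch of that computation under a normal approximation, with randomly generated per-manuscript proportions standing in for the trial data:

```python
import numpy as np

def mean_diff_ci(a, b, z=1.96):
    """Difference in group means with a normal-approximation 95% CI."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a.mean() - b.mean()
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return diff, (diff - z * se, diff + z * se)

rng = np.random.default_rng(4)
# Stand-in data: per-manuscript proportion of the 10 items adequately reported.
intervention = rng.integers(4, 11, size=122) / 10
control = rng.integers(4, 11, size=121) / 10

diff, (lo, hi) = mean_diff_ci(intervention, control)
print(f"mean difference {100 * diff:.1f}% "
      f"(95% CI {100 * lo:.1f}% to {100 * hi:.1f}%)")
```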