Project description:
Importance: Clinical trial data sharing holds promise for maximizing the value of clinical research. The International Committee of Medical Journal Editors (ICMJE) adopted a policy promoting data sharing in July 2018.
Objective: To evaluate the association of the ICMJE data sharing policy with data availability and reproducibility of main conclusions among leading surgical journals.
Design, setting, and participants: This cross-sectional study, conducted in October 2021, examined randomized clinical trials (RCTs) in 10 leading surgical journals before and after the implementation of the ICMJE data sharing policy in July 2018.
Exposure: Implementation of the ICMJE data sharing policy.
Main outcomes and measures: To demonstrate a pre-post increase in data availability from 5% to 25% (α = .05; β = 0.1), 65 RCTs published before and 65 RCTs published after the policy was issued were included, and their data were requested. The primary outcome was data availability (ie, the receipt of sufficient data to enable reanalysis of the primary outcome). When data were shared, the primary outcomes reported in the journal articles were reanalyzed to explore reproducibility, and the reproducibility features of these studies were detailed.
Results: Data were available for 2 of 65 RCTs (3.1%) published before the ICMJE policy and for 2 of 65 RCTs (3.1%) published after the policy was issued (odds ratio, 1.00; 95% CI, 0.07-14.19; P > .99). A data sharing statement was observed in 11 of 65 RCTs (16.9%) published after the policy vs none before the policy (risk ratio, 2.20; 95% CI, 1.81-2.68; P = .001). The data obtained for reanalysis (n = 4) were not from RCTs published with a data sharing statement. All 4 RCTs with available data had primary outcomes that were fully reproduced. However, discrepancies or inaccuracies not associated with the study conclusions were identified in 3 RCTs: the number of patients included in 1 RCT, the management of missing values in another, and, in the third, a mismatch between the primary outcome timing declared in the study registration and that reported in the article.
Conclusions and relevance: This cross-sectional study suggests that data sharing practices remain rare in surgical journals despite the ICMJE policy and that most RCTs published in these journals lack transparency. The results of these studies may not be reproducible by external researchers.
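The stated sample size (65 RCTs per period, targeting a 5% to 25% increase with α = .05 and β = 0.1) is consistent with the classic two-sample formula for comparing proportions. A minimal sketch; the exact formula the authors used is not stated, so this is an assumption:

```python
from math import ceil, sqrt

from scipy.stats import norm


def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Classic two-sided sample size per group for comparing two proportions."""
    z_a = norm.ppf(1 - alpha / 2)          # two-sided alpha
    z_b = norm.ppf(power)                  # power = 1 - beta
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p2 - p1) ** 2)


print(n_per_group(0.05, 0.25))  # → 65
```

The result matches the 65 RCTs per period reported in the abstract.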
Project description:
Importance: Multiple analyses in a clinical trial can increase the probability of inaccurately concluding that there is a statistically significant treatment effect. However, to date, it is unknown how many randomized clinical trials (RCTs) adjust for multiple comparisons, the lack of which could lead to erroneous findings.
Objectives: To assess the prevalence of multiplicity, and whether appropriate multiplicity adjustments were performed, among cardiovascular RCTs published in 6 medical journals with high impact factors.
Design, setting, and participants: In this cross-sectional study, cardiovascular RCTs were selected worldwide and categorized by region as North America, Western Europe, multiregional, or rest of the world. Data were collected from past issues of 3 cardiovascular journals (Circulation, European Heart Journal, and Journal of the American College of Cardiology) and 3 general medicine journals (JAMA, The Lancet, and The New England Journal of Medicine) with high impact factors published between August 1, 2015, and July 31, 2018. Supplements and trial protocols of each included RCT were also searched for multiplicity. Data were analyzed December 20 to 27, 2018.
Exposures: Data from the selected RCTs were extracted and verified independently by 2 researchers using a structured data instrument. In case of disagreement, a third reviewer helped achieve consensus. An RCT was considered to have multiple treatment groups if it had more than 2 arms; multiple outcomes were defined as having more than 1 primary outcome; and multiple analyses were defined as analysis of the same outcome variable in multiple ways. Multiplicity was examined only for the analysis of the primary end point.
Main outcomes and measures: The outcome of interest was the percentage of primary analyses that adjusted primary end points for multiplicity.
Results: Of 511 cardiovascular RCTs included in this analysis, 300 (58.7%) had some form of multiplicity; of these 300, only 85 (28.3%) adjusted for multiplicity. Intervention type and funding source had no statistically significant association with the reporting of multiplicity adjustment. Trials that assessed mortality vs nonmortality outcomes were more likely to contain a multiplicity risk in their primary analysis (66.3% [177 of 267] vs 50.4% [123 of 244]; P < .001), and larger trials vs smaller trials were less likely to make any adjustments for multiplicity (35.6% [52 of 146] vs 21.4% [33 of 154]; P = .001).
Conclusions and relevance: Findings from this study suggest that cardiovascular RCTs published in medical journals with high impact factors infrequently adjust for multiple comparisons in the primary end point. These practices may be improved by more standardized reporting.
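The simplest adjustment of the kind the abstract refers to is the Bonferroni correction, which compares each p value to α divided by the number of comparisons. A sketch with hypothetical p values, not taken from any of the reviewed trials:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: a result is significant only if p <= alpha / k."""
    k = len(p_values)
    return [p <= alpha / k for p in p_values]


# Three analyses of the same end point, each nominally significant at .05,
# but none surviving the corrected threshold of 0.05 / 3:
print(bonferroni([0.020, 0.030, 0.049]))  # → [False, False, False]
```

Less conservative alternatives such as Holm step-down or hierarchical gatekeeping procedures are often prespecified in trial protocols for the same purpose.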
Project description:
Introduction: The recent past has seen a significant increase in the number of trauma and orthopaedic randomised clinical trials published in "the big five" general medical journals. The quality of this research has, however, not yet been established.
Methods: We therefore set out to critically appraise the quality of the available literature over a 10-year period (April 2010-April 2020) through a systematic search of these 5 high-impact general medical journals (JAMA, NEJM, BMJ, Lancet and Annals). A standardised data extraction proforma was used to gather information on trial design, sample size calculation, results, study quality and pragmatism. Quality assessment was performed using the Cochrane Risk of Bias 2 tool and the modified Delphi list. Study pragmatism was assessed using the PRECIS-2 tool.
Results: A total of 25 studies were eligible for inclusion. Over half of the included trials did not meet their sample size calculation for the primary outcome, with a similar proportion of these studies at risk of type II error for their non-significant results. There was a high degree of pragmatism according to PRECIS-2. Non-significant studies had greater pragmatism than those with statistically significant results (p < 0.001). Only 56% of studies provided adequate justification for the minimum clinically important difference (MCID) in the population assessed. Overall, very few studies were deemed high quality/low risk of bias.
Conclusions: These findings highlight some important methodological concerns within the current evidence base of RCTs published in high-impact medical journals. Potential strategies that may improve future trial design are highlighted.
Level of evidence: Level 1.
Project description:
The influential claim that most published results are false raised concerns about the trustworthiness and integrity of science. Since then, there have been numerous attempts to examine the rate of false-positive results, which have failed to settle the question empirically. Here we propose a new way to estimate the false positive risk and apply the method to the results of (randomized) clinical trials in top medical journals. Contrary to claims that most published results are false, we find that the traditional significance criterion of α = .05 produces a false positive risk of 13%. Adjusting α to .01 lowers the false positive risk to less than 5%. However, our method does provide clear evidence of publication bias that leads to inflated effect size estimates. These results provide a solid empirical foundation for evaluations of the trustworthiness of medical research.
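The quantity being estimated here can be illustrated with the standard two-point-prior formulation of the false positive risk, FPR = α(1−π) / (α(1−π) + (1−β)π), where π is the prior probability that a tested effect is real. This is a textbook formulation, not necessarily the paper's estimator, and the prior and power below are hypothetical:

```python
def false_positive_risk(alpha, power, prior_true):
    """P(null is true | result is significant) under a two-point prior."""
    false_pos = alpha * (1 - prior_true)  # significant results from true nulls
    true_pos = power * prior_true         # significant results from real effects
    return false_pos / (false_pos + true_pos)


# Hypothetical: 50% of tested effects are real, studies have 80% power.
print(round(false_positive_risk(0.05, 0.80, 0.5), 3))  # → 0.059
print(round(false_positive_risk(0.01, 0.80, 0.5), 3))  # → 0.012
```

The exact values depend on the assumed prior and power, but the direction matches the abstract: tightening α from .05 to .01 substantially lowers the false positive risk.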
Project description:
Importance: Patient and family engagement in research may improve the design, conduct, and dissemination of clinical research, but little is known about whether these stakeholder groups are involved in the design and conduct of randomized clinical trials.
Objective: To characterize the involvement and role of patient and family representatives in the design and conduct of randomized clinical trials by reviewing randomized clinical trials from 3 peer-reviewed medical and surgical journals with high impact factors.
Evidence review: In this systematic review, the first 50 consecutive randomized clinical trials published from January 1, 2021, through September 30, 2021, in each of 3 medical and surgical journals with high impact factors were reviewed for patient or family involvement in trial design and/or conduct. The manuscript, supplemental data, and trial registry records were searched for trial design and governance structures. Two independent, blinded reviewers screened citations and extracted data. This study followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines.
Findings: Only 7 of 150 randomized clinical trials (5%) reported patient or family representation in their study design or conduct. Most studies with patient or family representation (n = 5) were from a single journal. Stakeholder involvement was mainly in the execution phase (n = 7), although in 2 studies stakeholders were also involved in the translation phase.
Conclusions and relevance: The findings of this systematic review suggest that patient or family involvement in the design and conduct of randomized clinical trials in publications with high impact factors is lacking. When patient or family groups were involved in research, the focus was mainly on the execution phase. There is a need to increase stakeholder involvement in the design, conduct, and translation of randomized clinical trials.
Project description:
Background: Data sharing plays a crucial role in health informatics, contributing to improved health information systems, enhanced operational efficiency, informed policy and decision-making, and advanced public health surveillance, including disease tracking. Sharing individual participant data from public, environmental, and occupational health trials can help improve public trust and support by enhancing transparent reporting and the reproducibility of research findings. The International Committee of Medical Journal Editors (ICMJE) requires all papers to include a data-sharing statement. However, it is unclear whether journals in the field of public, environmental, and occupational health adhere to this requirement.
Objective: This study aims to investigate whether public, environmental, and occupational health journals requested data-sharing statements from clinical trials submitted for publication.
Methods: In this bibliometric survey of "Public, Environmental, and Occupational Health" journals, defined by the Journal Citation Reports (as of June 2023), we included 202 journals with clinical trial reports published between 2019 and 2022. The primary outcome was a journal request for a data-sharing statement, as identified in the paper submission instructions. Multivariable logistic regression analysis was conducted to evaluate the relationship between journal characteristics and journal requests for data-sharing statements, with results presented as odds ratios (ORs) and corresponding 95% CIs. We also investigated whether the journals included a data-sharing statement in their published trial reports.
Results: Among the 202 public, environmental, and occupational health journals included, 68 (33.7%) did not request data-sharing statements. Factors significantly associated with journal requests for data-sharing statements included open access status (OR 0.43, 95% CI 0.19-0.97), high journal impact factor (OR 2.31, 95% CI 1.15-4.78), endorsement of the Consolidated Standards of Reporting Trials (OR 2.43, 95% CI 1.25-4.79), and publication in the United Kingdom (OR 7.18, 95% CI 2.61-23.4). Among the 134 journals requesting data-sharing statements, 26.9% (36/134) did not have statements in their published trial reports.
Conclusions: Over one-third of public, environmental, and occupational health journals did not request data-sharing statements in clinical trial reports. Among the journals that requested data-sharing statements on their submission guidance pages, more than one quarter published trial reports with no data-sharing statements. These results reveal an inadequate practice of requesting data-sharing statements by public, environmental, and occupational health journals, and more effort is required at the journal level to implement ICMJE recommendations on data-sharing statements.
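The adjusted ORs above come from multivariable logistic regression, where a fitted coefficient β on the log-odds scale maps to an OR by exponentiation, and a Wald 95% CI follows from exp(β ± 1.96·SE). A sketch; the β and SE values below are illustrative, not the study's fitted estimates:

```python
from math import exp


def or_with_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its SE to an OR with a Wald 95% CI."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)


# Hypothetical coefficient beta = 0.84 with SE = 0.36:
or_, lo, hi = or_with_ci(0.84, 0.36)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 2.32 1.14 4.69
```

Note how the CI is asymmetric around the OR: the interval is symmetric on the log-odds scale, not the odds scale.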
Project description:
Background: Missing outcome data are a threat to the validity of treatment effect estimates in randomized controlled trials. We aimed to evaluate the extent, handling, and sensitivity analysis of missing data and intention-to-treat (ITT) analysis of randomized controlled trials (RCTs) in top-tier medical journals, and to compare our findings with previous reviews of missing data and ITT in RCTs.
Methods: Review of RCTs published between July and December 2013 in the BMJ, JAMA, Lancet, and New England Journal of Medicine, excluding cluster randomized trials and trials whose primary outcome was survival.
Results: Of the 77 identified eligible articles, 73 (95%) reported some missing outcome data. The median percentage of participants with a missing outcome was 9% (range 0-70%). The most commonly used method to handle missing data in the primary analysis was complete case analysis (33, 45%), while 20 (27%) performed simple imputation, 15 (19%) used model-based methods, and 6 (8%) used multiple imputation. 27 (35%) trials with missing data reported a sensitivity analysis; however, most did not alter the assumptions about missing data from the primary analysis. Reports of ITT or modified ITT were found in 52 (85%) trials, with 21 (40%) of them including all randomized participants. A comparison with a review of trials reported in 2001 showed that missing data rates and approaches are similar, but use of the term ITT has increased, as has the reporting of sensitivity analyses.
Conclusions: Missing outcome data continue to be a common problem in RCTs. Definitions of the ITT approach remain inconsistent across trials. A large gap is apparent between statistical methods research related to missing data and the use of these methods in application settings, including RCTs in top medical journals.
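Why complete case analysis, the most common approach in the review above, can mislead: when the probability of missingness depends on the unobserved outcome itself (missing not at random), dropping incomplete cases biases the estimate. A minimal hypothetical simulation, not drawn from any of the reviewed trials:

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(10.0, 2.0, 100_000)  # true outcome, population mean 10

# Missingness probability rises with the outcome itself (MNAR):
p_missing = np.clip((y - 10.0) / 8.0, 0.0, 1.0)
observed = y[rng.random(y.size) >= p_missing]

# The complete-case mean falls below the true mean of 10, because
# high-outcome participants are preferentially dropped:
print(round(observed.mean(), 2))
```

Sensitivity analyses that vary the missingness assumption, rather than repeating the primary analysis's assumption, are intended to expose exactly this kind of bias.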
Project description:
Objectives: To explore the effectiveness of data sharing by randomized controlled trials (RCTs) in journals with a full data sharing policy and to describe potential difficulties encountered in the process of reanalyzing the primary outcomes.
Design: Survey of published RCTs.
Setting: PubMed/Medline.
Eligibility criteria: RCTs that had been submitted to and published by The BMJ and PLOS Medicine subsequent to the adoption of data sharing policies by these journals.
Main outcome measure: The primary outcome was data availability, defined as the eventual receipt of complete data with clear labelling. Primary outcomes were reanalyzed to assess the extent to which studies were reproduced. Difficulties encountered were described.
Results: 37 RCTs (21 from The BMJ and 16 from PLOS Medicine) published between 2013 and 2016 met the eligibility criteria. 17/37 (46%, 95% confidence interval 30% to 62%) satisfied the definition of data availability, and 14 of the 17 (82%, 59% to 94%) were fully reproduced on all their primary outcomes. Of the remaining RCTs, errors were identified in two, although the reanalyses reached conclusions similar to the original reports, and one paper did not provide enough information in the Methods section to reproduce the analyses. Difficulties identified included problems in contacting corresponding authors and a lack of resources on their part for preparing the datasets. In addition, data sharing practices varied widely across study groups.
Conclusions: Data availability was not optimal in two journals with a strong policy for data sharing. When investigators shared data, most reanalyses largely reproduced the original results. Data sharing practices need to become more widespread and streamlined to allow meaningful reanalyses and reuse of data.
Trial registration: Open Science Framework osf.io/c4zke.
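The reported availability proportion (17/37, 95% CI roughly 30% to 62%) can be approximated with the Wilson score interval for a binomial proportion. A sketch; the authors' exact interval method is not stated, so small discrepancies at the boundaries are expected:

```python
from math import sqrt


def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half


lo, hi = wilson_ci(17, 37)
print(f"{lo:.0%} to {hi:.0%}")  # close to the reported 30% to 62%
```

The Wilson interval behaves better than the naive Wald interval for small samples and proportions near the boundaries, which is why it is a common default.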
Project description:
Aim: To identify the data sharing practices of authors of randomized controlled trials (RCTs) in indexed orthodontic journals and to explore associations between published reports and several publication characteristics.
Materials and methods: RCTs from orthodontic journals indexed in major databases, namely PubMed® (Medline), Scopus®, EMBASE®, and Web of Science™, were included from January 2019 to December 2023. Data extraction was conducted for outcome and predictor variables such as reported data and statistical code sharing practices, protocol registration, funding sources, and other publication characteristics, including year of publication, journal ranking, origin of authorship, number of authors, design of the RCT, and outcome-related variables (e.g. efficacy/safety). Statistical analyses included descriptive statistics, cross-tabulations, and univariable and multivariable logistic regression.
Results: A total of 318 RCTs were included. A statement of the authors' intention to provide their data upon request was recorded in 51 of 318 RCTs (16.0%), while 6 of 318 (1.9%) openly provided their data in repositories. No RCT provided any code or script for statistical analysis. A significant association was found between data sharing practices and year of publication, with the odds of data sharing increasing by 1.56 times per year (odds ratio [OR]: 1.56; 95% confidence interval [CI]: 1.22, 2.01; P < .001). RCTs reporting on safety outcomes had 62% lower odds of including a positive data sharing statement compared with efficacy outcomes (OR: 0.38; 95% CI: 0.17, 0.88). There was evidence that funded RCTs were more likely to report on data sharing than non-funded RCTs (P = .02).
Conclusions: Although progress has been made towards credibility and transparency in the presentation of findings from RCTs in orthodontics, less than 20% of published orthodontic trials include a positive data sharing statement, while less than 2% openly provide their data at publication.
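A per-year OR of 1.56 compounds multiplicatively on the odds scale, so the fitted trend implies the odds of a data sharing statement grew several-fold across the 2019 to 2023 window. This sketch extrapolates the log-linear trend assumed by the logistic model, not an additional result reported by the study:

```python
# Reported per-year odds ratio for data sharing statements (2019-2023):
OR_PER_YEAR = 1.56

# Implied multiplier on the odds over the 4-year span from 2019 to 2023:
multiplier = OR_PER_YEAR ** 4
print(round(multiplier, 2))  # → 5.92
```

Note that odds ratios overstate the corresponding change in probability when the baseline proportion is not small, so the roughly 6-fold change applies to the odds, not directly to the 16.0% prevalence.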