Hiding negative trials by pooling them: a secondary analysis of pooled-trials publication bias in FDA-registered antidepressant trials.
ABSTRACT: BACKGROUND: Previous studies on reporting bias have generally examined whether trials were published in stand-alone publications. In this study, we investigated whether pooled-trials publications constitute a specific form of reporting bias. We assessed whether negative trials were more likely than positive trials to be published exclusively in pooled-trials publications and examined the research questions, individual trial results, and conclusions presented in these articles. METHODS: Data from a cohort of 105 randomized controlled trials of 16 antidepressants were extracted from earlier publications and the corresponding Food and Drug Administration (FDA) reviews. A systematic literature search was conducted to identify pooled-trials publications. RESULTS: We found 107 pooled-trials publications that reported results for 23 (72%) of the 32 trials not published in stand-alone publications. Only two (3.8%) of 54 positive trials were published exclusively in pooled-trials publications, compared with 21 (41.1%) of 51 negative trials (p < 0.001). Thirteen (12%) of the 107 publications had as their primary aim presenting data on the trial's primary research question (drug efficacy compared with placebo). Only four of these publications, reporting on five (22%) trials, presented individual efficacy data for the primary research question. Additionally, only five (5%) of the 107 pooled-trials publications had a negative conclusion. CONCLUSIONS: Compared with positive trials, negative trials of antidepressants for depression were much more likely to be reported exclusively in pooled-trials publications. Pooled-trials publications flood the evidence base with often-redundant articles that, instead of addressing the original primary research question, present (positive) results on secondary questions. Therefore, pooled-trials publications distort the apparent risk-benefit profile of antidepressants.
Project description: BACKGROUND: Indirect comparisons of competing treatments by network meta-analysis (NMA) are increasingly in use. Reporting bias has received little attention in this context. We aimed to assess the impact of such bias in NMAs. METHODS: We used data from 74 FDA-registered placebo-controlled trials of 12 antidepressants and their 51 matching publications. For each dataset, NMA was used to estimate the effect sizes for the 66 possible pair-wise comparisons of these drugs, the probability of each drug being the best, and the ranking of the drugs. To assess the impact of reporting bias, we compared the NMA results for the 51 published trials with those for the 74 FDA-registered trials. To assess how reporting bias affecting only one drug may affect the ranking of all drugs, we performed 12 different NMAs as hypothetical analyses. For each of these NMAs, we used published data for one drug and FDA data for the 11 other drugs. FINDINGS: Pair-wise effect sizes derived from the NMA of published data and those from the NMA of FDA data differed in absolute value by at least 100% in 30 of the 66 pair-wise comparisons (45%). Depending on the dataset used, the top three agents differed in composition and order. When reporting bias hypothetically affected only one drug, the affected drug ranked first in 5 of the 12 NMAs but second (n = 2), fourth (n = 1), or eighth (n = 2) in the NMA of the complete FDA network. CONCLUSIONS: In this particular network, reporting bias distorted NMA-based estimates of treatment efficacy and modified the ranking. The effect of reporting bias in NMAs may differ from that in classical meta-analyses in that bias affecting only one drug may affect the ranking of all drugs.
Project description: Evidence from randomized controlled trials (RCTs) is required to guide the treatment of critically ill children, but the number of RCTs available is limited and the publications are often difficult to find. The objectives of this review were to systematically identify RCTs in pediatric critical care and describe their methods and reporting. We searched MEDLINE, EMBASE, LILACS and CENTRAL (from inception to April 16, 2013) and the reference lists of included RCTs and relevant systematic reviews. We included published RCTs administering any intervention to children in a pediatric ICU. We excluded trials conducted in neonatal ICUs, those enrolling exclusively preterm infants, and individual patient crossover trials. Pairs of reviewers independently screened studies for eligibility, assessed risk of bias, and abstracted data. Discrepancies were resolved by consensus. We included 248 RCTs: 45 (18%) were multicentered and 14 (6%) were multinational. Trials most frequently enrolled both medical and surgical patients (43%), but postoperative cardiac surgery was the single largest population studied (19%). The most frequently evaluated types of intervention were medications (63%), devices (11%) and nutrition (8%). Laboratory or physiological measurements were the most frequent type of primary outcome (18%). Half of the trials (50%) reported blinding. Of the 107 (43%) trials that reported an a priori sample size, 34 (32%) were stopped early. The median number of children randomized per trial was 49 and ranged from 6 to 4,947. The frequency of RCT publications increased at a mean rate of 0.7 RCTs per year (P < 0.001), from 1 to 20 trials per year. This scoping review identified the available RCTs in pediatric critical care and made them accessible to clinicians and researchers (http://epicc.mcmaster.ca). Most focused on medications and intermediate or surrogate outcomes, were single-centered, and were conducted in North America and Western Europe.
The results of this review underscore the need for trials with rigorous methodology, appropriate outcome measures, and improved quality of reporting to ensure that high quality evidence exists to support clinical decision-making in this vulnerable population.
Project description: BACKGROUND: The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review. METHODS: We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated, stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications. RESULTS: We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th-90th percentile range 5%-67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%-100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had higher odds of being fully reported than nonsignificant efficacy outcomes (odds ratio 2.7; 95% confidence interval 1.5-5.0). Primary outcomes differed between protocols and publications for 40% of the trials. INTERPRETATION: Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.
Project description: Cognitive dysfunction is often present in major depressive disorder (MDD). Several clinical trials have noted a pro-cognitive effect of antidepressants in MDD. The objective of the current systematic review and meta-analysis was to assess the pooled efficacy of antidepressants on various domains of cognition in MDD. Trials published prior to April 15, 2015, were identified by searching the Cochrane Central Register of Controlled Trials, PubMed, Embase, PsycINFO, Clinicaltrials.gov, and relevant review articles. Data from randomized clinical trials assessing the cognitive effects of antidepressants were pooled to determine standardized mean differences (SMD) using a random-effects model. Nine placebo-controlled randomized trials (2,550 participants) evaluating the cognitive effects of vortioxetine (n = 728), duloxetine (n = 714), paroxetine (n = 23), citalopram (n = 84), phenelzine (n = 28), nortriptyline (n = 32), and sertraline (n = 49) were identified. Antidepressants had a positive effect on psychomotor speed (SMD 0.16; 95% confidence interval [CI] 0.05-0.27; I² = 46%) and delayed recall (SMD 0.24; 95% CI 0.15-0.34; I² = 0%). The effect on cognitive control and executive function did not reach statistical significance. Of note, after removal of vortioxetine from the analysis, statistical significance was lost for psychomotor speed. Eight head-to-head randomized trials comparing the effects of selective serotonin reuptake inhibitors (SSRIs; n = 371), selective serotonin and norepinephrine reuptake inhibitors (SNRIs; n = 25), tricyclic antidepressants (TCAs; n = 138), and norepinephrine and dopamine reuptake inhibitors (NDRIs; n = 46) were identified. No statistically significant difference in cognitive effects was found when pooling results from head-to-head trials of SSRIs, SNRIs, TCAs, and NDRIs.
Significant limitations were the heterogeneity of results, the limited number of studies, and the small sample sizes. Available evidence suggests that antidepressants have a significant positive effect on psychomotor speed and delayed recall.
Project description: Background: Since the early 2000s, a number of publications in the medical literature have highlighted inadequacies in the design, conduct and reporting of pilot trials. This work led to two notable publications in 2016: a conceptual framework for defining feasibility studies and an extension to the CONSORT 2010 statement to include pilot trials. It was hoped that these publications would educate researchers, leading to better use of pilot trials and thus more rigorously planned and informed randomised controlled trials. The aim of the present work is to evaluate the impact of these publications in the field of physical activity by reviewing the literature pre- and post-2016. This first article presents the pre-2016 review of the reporting and the current editorial policy applied to pilot trials published in physical activity journals. Methods: Fourteen physical activity journals were screened for pilot and feasibility studies published between 2012 and 2015. The CONSORT 2010 extension to pilot and feasibility studies was used as a framework to assess the reporting quality of the studies. Editors of the eligible physical activity journals were canvassed regarding their editorial policy for pilot and feasibility studies. Results: Thirty-one articles across five journals met the eligibility criteria. These articles fell into three distinct categories: trials that were carried out in preparation for a future definitive trial (23%), trials that evaluated the feasibility of a novel intervention but did not explicitly address a future definitive trial (23%) and trials that did not have any clear objectives to address feasibility (55%). Editors from all five journals stated that they generally do not accept pilot trials, and none gave reference to the CONSORT 2010 extension as a guideline for submissions.
Conclusion: The finding that over half of the studies had no feasibility objectives is in line with previous research, suggesting that this guidance is not being disseminated effectively to researchers in the field of physical activity. The low standard of reporting across most reviewed articles and the neglect of the extended CONSORT 2010 statement by the journal editors highlight the need to actively disseminate these guidelines to ensure their impact.
Project description: OBJECTIVE: To determine policy-associated changes over time in 1) the enrollment of women and minorities in National Institute of Neurological Disorders and Stroke (NINDS)-funded clinical trials and 2) the reporting of race/ethnicity and gender in trial publications. METHODS: All NINDS-funded phase III trials published between 1985 and 2008 were identified. The percentage of African Americans, Hispanic Americans, and women enrolled in the trials was calculated for those trials with available data. Z tests were used to compare reporting and enrollment data from before (period 1) and after (period 2) 1995, when NIH enacted its policies regarding race, ethnicity, and gender. The percentage of main trial publications reporting enrollment of African Americans, Hispanic Americans, and women was also calculated. RESULTS: Of the 56 trials identified, 100%, 48%, and 25% reported enrollment by gender, race, and ethnicity, respectively. Women constituted 42.1% of the trial population. Enrollment of women increased over time (36.9% period 1; 49.0% period 2, p < 0.001). African Americans constituted 19.8% of the enrollees in trials with available data, and enrollment increased over time (11.6% period 1; 30.7% period 2, p < 0.001). Hispanic Americans constituted 5.8% of subjects in trials with available data, and enrollment decreased over time (7.4% period 1; 5.0% period 2, p < 0.001). CONCLUSIONS: Improvements in the reporting of race/ethnicity in publications and in the enrollment of Hispanics in NINDS trials are needed. While African American representation is above population levels, Hispanic Americans are underrepresented in NINDS trials and their representation is declining despite Hispanics' increasing representation in the US population.
Project description: Research on stem cells (SC) is growing rapidly in neurology, but clinical applications of SC for neurological disorders remain to be proven effective and safe. Human clinical trials need to be registered in public registries in order to reduce publication bias and selective reporting. We searched three databases (clinicaltrials.gov, the Clinical Research Information System (CRIS), and PubMed) for neurologically relevant SC-based human trials and articles in Korea. The registration of trials, the posting and publication of results, and the registration of published SC articles were examined. There were 17 completed trials registered at clinicaltrials.gov and the CRIS website, with results articles published for 5 of them. Our study found 16 publications, of which 1 was a review article, 1 was a protocol article, and 8 contained registered trial information. Many registered SC trials related to neurological disorders are not reported, while many SC-related publications are not registered in a public registry. These results support the presence of biased reporting and publication bias in SC trials related to neurological disorders in Korea.
Project description: We wanted to investigate the frequency of undisclosed changes in the outcomes of randomized controlled trials (RCTs) between trial registration and publication. Using a retrospective, nonrandom, cross-sectional study design, we investigated RCTs published in consecutive issues of 5 major medical journals during a 6-month period and their associated trial registry entries. Articles were excluded if they did not have an available trial registry entry, did not have analyzable outcomes, or were secondary publications. The primary outcome was the proportion of publications in which the primary outcome of the trial was, without disclosure, changed between that recorded in the trial registry and that reported in the final publication. The secondary outcome was the proportion of publications in which a secondary outcome was changed without disclosure. We reviewed 158 reports of RCTs and included 110 in the analysis. In 34 (31%), a primary outcome had been changed, and in 77 (70%), a secondary outcome had been changed. There are substantial and important undisclosed changes made to the outcomes of published RCTs between trial registration and publication. This finding has important implications for the interpretation of trial results. Disclosure and discussion of changes would improve transparency in the performance and reporting of trials.
Project description: Objectives: To evaluate the adequacy of reporting of protocols for randomised trials on diseases of the digestive system registered in http://ClinicalTrials.gov and the consistency between the primary outcomes, secondary outcomes and sample size specified in http://ClinicalTrials.gov and in the published trials. Methods: Randomised phase III trials on adult patients with gastrointestinal diseases registered before January 2009 in http://ClinicalTrials.gov were eligible for inclusion. From http://ClinicalTrials.gov, all data elements in the database required by the International Committee of Medical Journal Editors (ICMJE) member journals were extracted. The subsequent publications for registered trials were identified. For published trials, data concerning publication date, primary and secondary endpoints, sample size, and whether the journal adhered to ICMJE principles were extracted. Differences in primary and secondary outcomes, sample size and sample size calculation data between http://ClinicalTrials.gov and the published paper were recorded. Results: 105 trials were evaluated, of which 66 (63%) were published. 30% of trials were incorrectly registered after their completion date. Several data elements of the required ICMJE data list were not filled in, with data missing for the primary outcome measure and the sample size in 22% and 11% of cases, respectively. In 26% of the published papers, data on sample size calculations were missing, and discrepancies existed between the sample sizes reported in http://ClinicalTrials.gov and in the published trials. Conclusion: The quality of registration of randomised controlled trials still needs improvement.
Project description: BACKGROUND AND OBJECTIVE: Trial registration is widely endorsed, as it is considered not only to enhance the transparency and quality of reporting but also to help safeguard against outcome reporting bias and possibly spin, that is, reporting that could distort the interpretation of results and thus mislead readers. We planned to investigate the current registration status of recently published randomized controlled trials (RCTs) of acupuncture, outcome reporting bias in the prospectively registered trials, and the association between trial registration and the presence of spin and methodological factors in acupuncture RCTs. METHODS: Acupuncture RCTs published in English in the past 5 years (January 2013 to December 2017) were searched in PubMed, the Cochrane Central Register of Controlled Trials, and EMBASE. Trial registration records identified in the publications and trial registries were classified as prospectively registered, retrospectively registered, or unregistered. Primary outcomes were identified, and the direction of the results was judged as statistically significant (positive) or statistically nonsignificant (negative). We compared registered and published primary outcomes to assess outcome reporting bias and assessed whether discrepancies favored statistically significant outcomes. The frequency and strategies of spin in published reports with statistically nonsignificant results for primary outcomes were then identified. We also analyzed whether trial registration status was associated with spin and with the quality of methodological factors. RESULTS: Of the 322 included RCTs, 41.9% (n = 135) were prospectively registered. Among the 64 studies that were prospectively registered and specified primary outcomes, 25 trials had discrepancies between the registered and published primary outcomes, and 60% of these (15 trials) favored the statistically significant findings.
Among the 169 studies that specified primary outcomes, trial registration status was not associated with the direction of the results, i.e., statistically significant or not. Spin was identified in 56.4% of the 78 studies with statistically nonsignificant primary outcomes, and claiming efficacy without consideration of the statistically nonsignificant primary outcomes was the most common spin strategy. Trial registration status did not differ statistically between studies with and without spin. CONCLUSION: While trial registration seemed to have improved over time, primary outcomes in registered records and publications were often inconsistent, tending to favor statistically significant findings, and spin was common in studies with statistically nonsignificant primary outcomes. Journal editors and researchers in this field should be alert to the still-prevalent reporting bias and spin.