Underregistration and Underreporting of Stem Cell Clinical Trials in Neurological Disorders.
ABSTRACT: Research on stem cells (SC) is growing rapidly in neurology, but clinical applications of SC for neurological disorders have yet to be proven effective and safe. Human clinical trials must be registered in public registries in order to reduce publication bias and selective reporting. We searched three databases (ClinicalTrials.gov, the Clinical Research Information Service (CRIS), and PubMed) for neurologically relevant SC-based human trials and articles in Korea. We examined the registration of trials, the posting and publication of results, and the registration status of published SC articles. There were 17 completed trials registered at ClinicalTrials.gov and the CRIS website, and results articles had been published for 5 of them. Our search found 16 publications, of which 1 was a review article, 1 was a protocol article, and 8 contained registered trial information. Many registered SC trials related to neurological disorders go unreported, while many SC-related publications are not registered in a public registry. These results support the presence of selective reporting and publication bias in SC trials related to neurological disorders in Korea.
Project description:OBJECTIVE:Many people dealing with Parkinson's disease (PD) turn to complementary and alternative medicine when searching for a cure or relief from symptoms. Acupuncture is widely used in the Korean PD population to alleviate symptoms and in hopes of curing the illness. However, acupuncture for PD has only recently begun to be studied scientifically and is still considered an unproven treatment. There is therefore an urgent need for acupuncture to be studied and validated before it is used for PD. Our study's aim was to examine how many acupuncture studies in PD are registered and reported in Korea. METHODS: The registries ClinicalTrials.gov and the Clinical Research Information Service (CRIS) and the search engine PubMed were searched for relevant human clinical studies of acupuncture therapy in PD patients. We examined the registration of trials, the posting and publication of results, and whether published articles were registered. RESULTS: In ClinicalTrials.gov, one completed trial was found, with published results; likewise, in CRIS, one completed trial was found, with published results. A total of 6 publications were found: 2 articles were registered, but only 1 listed the registered trial number in the article. CONCLUSION: Acupuncture is popular among the PD population in Korea despite its unproven safety and efficacy. Despite the pressing need for clinical trials, the number of studies listed in the registries was small, and only a few publications were registered. More effort and rigor are needed to validate the efficacy and safety of acupuncture for PD.
Project description:OBJECTIVES:To examine 1) the publication rate of registered otology trials in ClinicalTrials.gov, 2) the public availability of the results, 3) the study characteristics associated with publication, and 4) the time to publication after trial completion. BACKGROUND:Publication bias, the publication or non-publication of research findings depending on the nature and direction of the results, can lead to wrong treatment decisions. The extent of publication bias in otology trials has not been evaluated. METHODS:All registered otology trials with completion dates up to December 2015 were extracted from ClinicalTrials.gov. A search strategy was used to identify corresponding publications up to June 2017, allowing at least 18 months to publish the results after trial completion. Characteristics were obtained from ClinicalTrials.gov and the corresponding publications. Regression models were used to examine study characteristics associated with publication or non-publication. RESULTS:Of the 419 trials identified on ClinicalTrials.gov, 225 (53.7%) corresponding publications were found in PubMed. Among these, 109 (48.4%) publications were cited on ClinicalTrials.gov and 124 (55.1%) articles reported the National Clinical Trial registry number. For 36 (8.6%) trials, results were reported only in ClinicalTrials.gov. Trials with a biological intervention were more likely to be published than studies involving drugs (odds ratio (OR) 10.41, 95% confidence interval (CI) 1.26-86.22, P = 0.030). Trials funded by industry were less likely to be published (OR 0.46, CI 0.25-0.84, P = 0.011). The median trial duration was 20 months (interquartile range (IQR) 26 months), and the median time from trial completion to publication was 24 months (IQR 22 months). CONCLUSION:In 37.7% of the registered otology trials, the results remained unpublished, even several years after trial completion.
With few citations on ClinicalTrials.gov and low reporting of the Clinical Trial registry number, accessibility is limited. Our findings show that there is room for improvement in the accuracy of trial registration and the publication of results, in order to diminish publication bias in otology studies.
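Odds ratios of the kind reported above (e.g. OR 10.41 for biological interventions, OR 0.46 for industry funding) can be derived from a 2x2 publication table. Below is a minimal sketch of a Wald odds-ratio calculation; the function name and the counts in the usage comment are illustrative, not taken from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and CI for a 2x2 table:
    a = exposed & published,   b = exposed & unpublished,
    c = unexposed & published, d = unexposed & unpublished."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, lo, hi

# Example with made-up counts: a balanced table gives OR = 1.0
orr, lo, hi = odds_ratio_ci(10, 10, 10, 10)
```

A CI that excludes 1.0 would correspond to the statistically significant associations the abstract describes.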
Project description:Background:To investigate 1) how many acupuncture clinical trials are registered with the WHO International Clinical Trials Registry Platform (ICTRP) and what patterns they demonstrate, and 2) the publication of articles from acupuncture clinical trials registered with the ICTRP. Methods:The ICTRP search strategy was: Intervention: acupuncture; Recruitment status: All; Date of registration: 1 Jan 1990 to 31 Dec 2018. We searched for indexed articles in PubMed using trial IDs on 25 Feb 2019. For published papers, we recorded the number of weeks from the date of registration with the ICTRP to the date of publication to define the time to publication. We divided the analyzed period into 6 three-year periods and, for each period, measured the proportion of trials published and the time from registration to publication using the Kaplan-Meier method. Results:Forty-three countries/areas conducted at least one acupuncture clinical trial. The total number of registrations was 1758. China, the USA, and the Republic of Korea accounted for 61% of those registrations. The proportion published was 178/1758 (10%) for fully published papers and 141/1758 (8%) for protocol papers. Conclusions:A substantial increase in registrations from China, the Republic of Korea, Iran, Brazil, and Japan was observed, which may be attributed to improved awareness of the CONSORT statement. However, the rate of fully published papers is low, at 10%. The publication of results of acupuncture clinical trials should also be rigorously mandated.
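The Kaplan-Meier approach described above treats publication as the event of interest and still-unpublished trials as censored observations. A minimal pure-Python sketch of the estimator, with an invented toy dataset (times in weeks from registration) rather than the study's data:

```python
def kaplan_meier(times, published):
    """Kaplan-Meier estimate of the probability of remaining unpublished.
    times: weeks from registration to publication or last follow-up.
    published: 1 if the trial was published (event), 0 if censored."""
    data = sorted(zip(times, published))
    at_risk = len(data)
    survival = 1.0
    curve = []  # (time, probability of still being unpublished)
    i = 0
    while i < len(data):
        t = data[i][0]
        events = removed = 0
        # Group all trials sharing this time point
        while i < len(data) and data[i][0] == t:
            events += data[i][1]
            removed += 1
            i += 1
        if events:
            survival *= 1 - events / at_risk
            curve.append((t, survival))
        at_risk -= removed
    return curve

# Toy example: 4 trials, one censored at week 10
curve = kaplan_meier([5, 10, 10, 20], [1, 1, 0, 1])
```

Comparing such curves across the six three-year periods would show whether time to publication shortened over time.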
Project description:Purpose This study aimed to determine the number of posterior cruciate ligament (PCL) publications from Asian countries and to identify factors associated with research output in this region. Materials and methods Searches of existing academic journal articles were performed using PubMed, Google Scholar, and the Cochrane Library from January 1, 2009 to December 31, 2019. Results A total of 265 articles were published over the last 10 years in Asian countries, with an increase in publications after 2010 and an average of 26 articles per year. Most (70%) of the articles were published in journals with an impact factor (IF) ≥1. The majority of the publications were cohort studies (27%), followed by case reports (16%), systematic reviews/meta-analyses (2.6%), laboratory studies (1.8%), and case-control studies (1.5%). South Korea and China had the most PCL publications, and most authors were from South Korea. Conclusion The PCL research output in Asia is low in quantity but high in quality, with the majority of publications coming from South Korea, China, and Japan, most being cohort studies and case reports. Highlights: publication trends; posterior cruciate ligament; over the past 10 years; PubMed; Asian countries.
Project description:Emergency surgical practice constitutes 50% of the workload for surgeons, but there is a lack of high-quality randomised controlled trials (RCTs) in emergency surgery. This study aims to establish the differences between the registration, completion and publication of emergency and elective surgical trials. The clinicaltrials.gov and ISRCTN.com trial registry databases were searched for RCTs registered between 12 July 2010 and 12 July 2012 using the keyword 'surgery'. Publications were systematically searched for in PubMed, MEDLINE and EMBASE. Trials with no surgical interventions were excluded, and the remaining trials were manually categorised as 'emergency' or 'elective' and 'surgical' or 'adjunct' by two reviewers. The primary outcome was the number of RCTs registered in emergency versus elective surgery; secondary outcomes were the number of RCTs published in emergency versus elective surgery, the reasons why trials remained unpublished, the funding, sponsorship and impact of published articles, and the number of adjunct trials registered in emergency and elective surgery. 2700 randomised trials were registered. 1173 trials were on a surgical population, and of these, 414 trials were studying surgery. Only 9.4% (39/414) of surgical trials were in emergency surgery. The proportion of trials successfully published did not significantly differ between emergency and elective surgery (0.46 vs 0.52; mean difference (MD) -0.06, 95% CI -0.24 to 0.12). Unpublished emergency surgical trials were equally likely to be terminated early compared with elective trials (0.33 vs 0.16; MD -0.18, 95% CI -0.06 to 0.41). Low accrual accounted for a similar majority in both groups (0.43 vs 0.46; MD -0.04, 95% CI -0.48 to 0.41). Unpublished trials in both groups were equally likely to still be planning publication (0.52 vs 0.71; MD -0.18, 95% CI -0.43 to 0.07). Fewer RCTs are registered in emergency than elective surgery; once trials are registered, both groups are equally likely to be published.
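The mean differences between proportions reported above (e.g. 0.46 vs 0.52; MD -0.06, 95% CI -0.24 to 0.12) use a normal-approximation confidence interval. A short sketch of that calculation, assuming illustrative counts rather than the trial-level data:

```python
import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference of two proportions with a Wald (normal-approximation) 95% CI.
    x1/n1: successes/total in group 1; x2/n2: same for group 2."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Hypothetical counts chosen to reproduce the 0.46 vs 0.52 comparison
d, lo, hi = prop_diff_ci(46, 100, 52, 100)
```

When the interval spans zero, as in the abstract's comparisons, the difference is not statistically significant.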
Project description:The COVID-19 pandemic has unleashed a deluge of publications. For this cross-sectional study, we compared the amount and reporting characteristics of COVID-19-related academic articles and preprints, as well as the numbers of ongoing clinical trials and systematic reviews. To do this, we searched the PubMed database of citations and abstracts from published life science journals using appropriate combinations of medical subject headings (MeSH terms), and the COVID-19 sections of the MedRxiv and BioRxiv archives, up to 20 May 2020 (21 weeks). In addition, we searched ClinicalTrials.gov, the Chinese Clinical Trial Registry, the EU Clinical Trials Register, and 15 other trial registers, as well as PROSPERO, the international prospective register of systematic reviews. The characteristics of each publication were extracted. Regression analyses and Z tests were used to detect publication trends and their relative proportions. A total of 3635 academic publications and 3805 preprints were retrieved. Only 8.6% (n = 329) of the preprints had already been published in indexed journals. The numbers of academic and preprint publications increased significantly over time (p<0.001). Case reports (6% academic vs 0.9% preprints; p<0.001) and letters (17.4% academic vs 0.5% preprints; p<0.001) accounted for a greater share of academic than preprint publications. Conversely, randomized controlled trials (0.22% vs 0.63%; p<0.001) and systematic reviews (0.08% vs 5%) made up a greater share of the preprints. The relative proportions of clinical studies registered at ClinicalTrials.gov, the Chinese Clinical Trial Registry, and the EU Clinical Trials Register were 57.9%, 49.5%, and 98.9%, respectively, most of which were still "recruiting". PROSPERO listed 962 systematic review protocols. Preprints were slightly more prevalent than academic articles, but both were increasing in number.
The void left by the lack of primary studies was filled by an outpouring of immediate opinions (i.e., letters to the editor) published in PubMed-indexed journals. In summary, preprints have gained traction as a publishing response to the demand for prompt access to empirical, albeit not peer-reviewed, findings during the present pandemic.
Project description:To measure the rate of non-publication and assess possible publication bias in clinical trials of electronic health records. We searched ClinicalTrials.gov to identify registered clinical trials of electronic health records, searched the biomedical literature, and contacted trial investigators to determine whether the results of the trials were published. Publications were judged as positive, negative, or neutral according to the primary outcome. Seventy-six percent of trials had publications describing trial results; of these, 74% were positive, 21% were neutral, and 4% were negative (harmful). Of unpublished studies for which the investigator responded, 43% were positive, 57% were neutral, and none were negative; the lower rate of positive results was significant (p<0.001). The rate of non-publication in electronic health record studies is similar to that in other biomedical studies. There appears to be a bias toward publication of positive trials in this domain.
Project description:To address the bias in the medical literature associated with selective outcome reporting, in 2005 the International Committee of Medical Journal Editors (ICMJE) introduced mandatory trial registration guidelines, and member journals required prospective registration of trials prior to patient enrolment as a condition of publication. No research has examined whether these guidelines are affecting psychiatry publications. Our objectives were to determine the extent to which articles published in psychiatry journals adhering to ICMJE guidelines were correctly prospectively registered, whether there was evidence of selective outcome reporting or changes to participant numbers, and whether there was a relationship between registration status and source of funding. Any clinical trial (as defined by the ICMJE) published between 1 January 2009 and 31 July 2013 in the top five psychiatry journals adhering to ICMJE guidelines (The American Journal of Psychiatry, Archives of General Psychiatry/JAMA Psychiatry, Biological Psychiatry, Journal of the American Academy of Child and Adolescent Psychiatry, and The Journal of Clinical Psychiatry) and conducted after July 2005 (or 2007 for two journals) was included. For each identified trial, where possible, we extracted trial registration information, changes to primary outcome measures (POMs) between the publication and the registry (to assess selective outcome reporting), changes to participant numbers, and funding type. Out of 3305 articles, 181 studies were identified as clinical trials requiring registration: 21 (11.6%) were deemed unregistered, 61 (33.7%) were retrospectively registered, 37 (20.4%) had unclear POMs either in the article or the registry, and 2 (1.1%) were registered in an inaccessible trial registry.
Only 60 (33.1%) studies were prospectively registered with clearly defined POMs; 17 of these 60 (28.3%) showed evidence of selective outcome reporting and 16 (26.7%) demonstrated a change in participant numbers of 20% or more. Only 26 (14.4%) of the 181 trials were prospectively registered and did not alter their POMs or the time frames at which they were measured. Prospective registration with no changes in POMs occurred more frequently with pharmaceutical funding. Although standards are in place to improve prospective registration and transparency in clinical trials, fewer than 15% of psychiatry trials were prospectively registered with no changes in POMs. Most trials were either not prospectively registered, changed their POMs or time frames at some point after registration, or changed participant numbers. Authors, journal editors and reviewers need to make further efforts to highlight the value of prospective trial registration.
Project description:BACKGROUND:Little is known about publication agreements between industry and academic investigators in trial protocols and the consistency of these agreements with corresponding statements in publications. We aimed to investigate (i) the existence and types of publication agreements in trial protocols, (ii) the completeness and consistency of the reporting of these agreements in subsequent publications, and (iii) the frequency of co-authorship by industry employees. METHODS AND FINDINGS:We used a retrospective cohort of randomized clinical trials (RCTs) based on archived protocols approved by six research ethics committees between 13 January 2000 and 25 November 2003. Only RCTs with industry involvement were eligible. We investigated the documentation of publication agreements in RCT protocols and statements in the corresponding journal publications. Of 647 eligible RCT protocols, 456 (70.5%) mentioned an agreement regarding publication of results. Of these 456, 393 (86.2%) documented an industry partner's right to disapprove or at least review proposed manuscripts; 39 (8.6%) agreements placed no constraints on publication. The remaining 24 (5.3%) protocols referred to separate agreement documents not accessible to us. Of the 432 protocols with an accessible publication agreement, 268 (62.0%) trials were published. Most agreements documented in the protocol were not reported in the subsequent publication (197/268 [73.5%]). Of 71 agreements reported in publications, 52 (73.2%) were concordant with those documented in the protocol. In 14 of 37 (37.8%) publications in which statements suggested unrestricted publication rights, at least one co-author was an industry employee. In 25 protocol-publication pairs, author statements in publications suggested no constraints, but 18 of the corresponding protocols documented restricting agreements. CONCLUSIONS:Publication agreements constraining academic authors' independence are common.
Journal articles seldom report on publication agreements, and when they do, the statements can be discrepant with the trial protocol.
Project description:BACKGROUND:Clinical trials are key to advancing evidence-based medical research. The medical research literature has identified the impact of publication bias in clinical trials. Selective publication of positive outcomes or nonpublication of negative results can misdirect subsequent research and skew literature reviews toward positive outcomes. Digital health trials face specific challenges, including high attrition rates, usability issues, and insufficient formative research. These challenges may contribute to nonpublication of trial results. To our knowledge, no study has thus far reported the nonpublication rates of digital health trials. OBJECTIVE:The primary research objective was to evaluate the nonpublication rate of digital health randomized clinical trials registered in ClinicalTrials.gov. Our secondary research objective was to determine whether industry funding contributes to nonpublication of digital health trials. METHODS:To identify digital health trials, a list of 47 search terms was developed through an iterative process and applied to the "Title," "Interventions," and "Outcome Measures" fields of registered trials with completion dates between April 1, 2010, and April 1, 2013. The search was based on the full dataset exported from the ClinicalTrials.gov database, with 265,657 trial entries downloaded on February 10, 2018, allowing publication of studies within 5 years of trial completion. We identified publications related to the trial results through a comprehensive approach that included an automated and a manual publication-identification process. RESULTS:In total, 6717 articles matched the a priori search terms, of which 803 trials matched our completion-date criteria. After screening, 556 trials were included in this study. We found that 150 (27%) of all included trials remained unpublished 5 years after their completion date.
In bivariate analyses, we observed statistically significant differences in trial characteristics between published and unpublished trials in terms of the intervention target condition, country, trial size, trial phase, recruitment, and prospective trial registration. In multivariate analyses, the differences remained statistically significant for the intervention target condition, country, trial size, trial phase, and recruitment; the odds of publication were significantly higher for non-US-based trials, which were 3.3 (95% CI 1.845-5.964) times more likely to be published than US-based trials. We observed a trend toward a 1.5-times-higher nonpublication rate for industry-funded trials, but this trend was not statistically significant. CONCLUSIONS:In the domain of digital health, 27% of registered clinical trial results are unpublished, which is lower than the nonpublication rates in other fields. There are substantial, though not statistically significant, differences in nonpublication rates between trials funded by industry and those funded by nonindustry sponsors. Further research is required to define the determinants of and reasons for nonpublication and, more importantly, to articulate the impact and risk of publication bias in the field of digital health trials.