Project description: Background: The Food and Drug Administration Amendments Act (FDAAA) mandates timely reporting of results of applicable clinical trials to ClinicalTrials.gov. We characterized the proportion of applicable clinical trials with publicly available results and determined independent factors associated with the reporting of results. Methods: Using an algorithm based on input from the National Library of Medicine, we identified trials that were likely to be subject to FDAAA provisions (highly likely applicable clinical trials, or HLACTs) from 2008 through 2013. We determined the proportion of HLACTs that reported results within the 12-month interval mandated by the FDAAA or at any time during the 5-year study period. We used regression models to examine characteristics associated with reporting at 12 months and throughout the 5-year study period. Results: From all the trials at ClinicalTrials.gov, we identified 13,327 HLACTs that were terminated or completed from January 1, 2008, through August 31, 2012. Of these trials, 77.4% were classified as drug trials. A total of 36.9% of the trials were phase 2 studies, and 23.4% were phase 3 studies; 65.6% were funded by industry. Only 13.4% of trials reported summary results within 12 months after trial completion, whereas 38.3% reported results at any time up to September 27, 2013. Timely reporting was independently associated with factors such as FDA oversight, a later trial phase, and industry funding. A sample review suggested that 45% of industry-funded trials were not required to report results, as compared with 6% of trials funded by the National Institutes of Health (NIH) and 9% of trials that were funded by other government or academic institutions. Conclusions: Despite ethical and legal obligations to disclose findings promptly, most HLACTs did not report results to ClinicalTrials.gov in a timely fashion during the study period. Industry-funded trials adhered to legal obligations more often than did trials funded by the NIH or other government or academic institutions. (Funded by the Clinical Trials Transformation Initiative and the NIH.)
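A minimal sketch of the 12-month reporting calculation described above, using pandas date arithmetic; the column names (completion_date, results_first_posted) and the three example records are hypothetical stand-ins, not the actual ClinicalTrials.gov field names or study data.

# Minimal sketch: fraction of trials posting results within 12 months of completion.
# Column names and NCT IDs below are illustrative, not the study's actual fields/data.
import pandas as pd

trials = pd.DataFrame({
    "nct_id": ["NCT00000001", "NCT00000002", "NCT00000003"],
    "completion_date": ["2009-03-01", "2010-06-15", "2011-01-10"],
    "results_first_posted": ["2009-11-20", None, "2013-05-02"],
})

trials["completion_date"] = pd.to_datetime(trials["completion_date"])
trials["results_first_posted"] = pd.to_datetime(trials["results_first_posted"])

# Months from trial completion to first public posting of results (NaN if never posted)
months_to_post = (trials["results_first_posted"] - trials["completion_date"]).dt.days / 30.44
reported_within_12m = months_to_post.le(12) & months_to_post.notna()

print(f"Reported within 12 months: {reported_within_12m.mean():.1%}")
print(f"Reported at any time:      {trials['results_first_posted'].notna().mean():.1%}")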
Project description: Academic degrees following author names are often included in medical research papers. However, it remains unclear how many journals choose to include academic degrees and whether this is more common in certain types of journals. We examined the 100 highest impact medical journals and found that only 24 medical journals reported academic degrees. Moreover, this was substantially more common in journals based in North America compared with Europe. Further research is required to explore the implications of listing academic degrees on readers' attitudes towards research quality.
Project description: Background: At the end of the past century there were multiple concerns regarding lack of transparency in the conduct of clinical trials, as well as ethical and scientific issues affecting trial design and reporting. In 2000, the ClinicalTrials.gov data repository was developed and deployed to serve the public and scientific communities with valid data on clinical trials. Later, in order to increase the completeness of deposited data and the transparency of medical research, a set of requirements was imposed making results deposition compulsory in many cases. Methods: We investigated the efficiency of results deposition and outcome reporting, the factors that have a positive impact on providing the information of interest and those that make it more difficult, and whether efficiency depends on the kind of institution sponsoring a trial. Data from the ClinicalTrials.gov repository were classified according to the kind of institution sponsoring each trial. Odds ratios were calculated for results and outcome reporting by sponsor class. Results: As of 01/01/2012, 118,602 clinical trial data deposits had been made to the repository, coming from 9,068 different sources. Of these, 35,344 (29.8%) are designated as FDA regulated and 25,151 (21.2%) as controlled by Section 801. Despite multiple regulatory requirements, only about 35% of trials had clinical study results deposited; the maximum, 55.56% of trials with results, was observed for trials completed in 2008. Conclusions: The imposed requirements had the most positive impact on results deposition by hospitals and clinics. Health care companies showed much higher efficiency than the other investigated classes, both in the fraction of their trials with results and in providing at least one outcome for their trials. They also deposit results more often than others when it is not strictly required, particularly in the case of non-interventional studies.
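As a worked illustration of the odds-ratio calculation mentioned in the methods, the snippet below computes an odds ratio with a Woolf-type 95% confidence interval for results deposition by one sponsor class versus the rest; the 2x2 counts are illustrative, not the study's figures.

# Minimal sketch: odds ratio (with 95% CI) for results deposition, comparing one
# sponsor class against all others. Counts are illustrative only.
import math

# 2x2 table: rows = sponsor class (company vs. other), columns = results deposited (yes/no)
a, b = 420, 580   # company-sponsored: with results, without results
c, d = 300, 900   # other sponsors:    with results, without results

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf standard error on log scale
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")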
Project description: Background: The US Food and Drug Administration Amendments Act requires results from clinical trials of Food and Drug Administration-approved drugs to be posted at ClinicalTrials.gov within 1 y after trial completion. We compared the timing and completeness of results of drug trials posted at ClinicalTrials.gov and published in journals. Methods and findings: We searched ClinicalTrials.gov on March 27, 2012, for randomized controlled trials of drugs with posted results. For a random sample of these trials, we searched PubMed for corresponding publications. Data were extracted independently from ClinicalTrials.gov and from the published articles for trials with results both posted and published. We assessed the time to first public posting or publishing of results and compared the completeness of results posted at ClinicalTrials.gov versus published in journal articles. Completeness was defined as the reporting of all key elements, according to three experts, for the flow of participants, efficacy results, adverse events, and serious adverse events (e.g., for adverse events, reporting of the number of adverse events per arm, without restriction to statistically significant differences between arms, for all randomized patients or for those who received at least one treatment dose). From the 600 trials with results posted at ClinicalTrials.gov, we randomly sampled 50% (n = 297); a subset of these had no corresponding published article. For trials with both posted and published results (n = 202), the median time between primary completion date and first results publicly posted was 19 mo (first quartile = 14, third quartile = 30 mo), and the median time between primary completion date and journal publication was 21 mo (first quartile = 14, third quartile = 28 mo). Reporting was significantly more complete at ClinicalTrials.gov than in the published article for the flow of participants (64% versus 48% of trials, p<0.001), efficacy results (79% versus 69%, p = 0.02), adverse events (73% versus 45%, p<0.001), and serious adverse events (99% versus 63%, p<0.001). The main study limitation was that we considered only the publication describing the results for the primary outcomes. Conclusions: Our results highlight the need to search ClinicalTrials.gov for both unpublished and published trials. Trial results, especially serious adverse events, are more completely reported at ClinicalTrials.gov than in the published article.
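The completeness comparison above is a paired registry-versus-publication contrast. The sketch below shows one plausible way to summarize time to posting (median and quartiles) and to test a paired binary completeness element with McNemar's test; the test choice and all numbers are assumptions for illustration, since the abstract does not restate the authors' exact statistical procedure.

# Minimal sketch: median time (with quartiles) from primary completion to first public
# posting, plus a paired comparison of one completeness element between the registry
# and the matching article. All numbers are illustrative, not study data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

months_to_posting = np.array([12, 14, 19, 22, 30, 35])  # illustrative follow-up times
q1, median, q3 = np.percentile(months_to_posting, [25, 50, 75])
print(f"Median {median:.0f} mo (Q1 {q1:.0f}, Q3 {q3:.0f})")

# Paired 2x2 table for "adverse events completely reported":
# rows = registry yes/no, columns = article yes/no (illustrative counts)
table = [[80, 67],
         [11, 44]]
result = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi2 = {result.statistic:.2f}, p = {result.pvalue:.4f}")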
Project description: Background/aims: The Food and Drug Administration Amendments Act mandates that applicable clinical trials report basic summary results to the ClinicalTrials.gov database within 1 year of trial completion or termination. We aimed to determine the proportion of pulmonary trials reporting basic summary results to ClinicalTrials.gov and assess factors associated with reporting. Methods: We identified pulmonary clinical trials subject to the Food and Drug Administration Amendments Act (called highly likely applicable clinical trials) that were completed or terminated between 2008 and 2012 and reported results by September 2013. We estimated the cumulative percentage of applicable clinical trials reporting results by pulmonary disease category. Multivariable Cox regression modeling identified characteristics independently associated with results reporting. Results: Of 1450 pulmonary highly likely applicable clinical trials, 380 (26%) examined respiratory neoplasms, 238 (16%) asthma, 175 (12%) chronic obstructive pulmonary disease, and 657 (45%) other respiratory diseases. Most (75%) were pharmaceutical highly likely applicable clinical trials, and 71% were industry-funded. Approximately 15% of highly likely applicable clinical trials reported results within 1 year of trial completion, while 55% reported results over the 5-year study period. Earlier-phase highly likely applicable clinical trials were less likely to report results than phase 4 highly likely applicable clinical trials (phases 1/2 and 2: adjusted hazard ratio 0.41 (95% confidence interval: 0.31-0.54); phases 2/3 and 3: adjusted hazard ratio 0.55 (95% confidence interval: 0.42-0.72); phase not applicable: adjusted hazard ratio 0.43 (95% confidence interval: 0.29-0.63)). Pulmonary highly likely applicable clinical trials without Food and Drug Administration oversight were less likely to report results than those with oversight (adjusted hazard ratio 0.65 (95% confidence interval: 0.51-0.83)). Conclusion: A total of 15% of pulmonary highly likely applicable clinical trials report basic summary results to ClinicalTrials.gov within 1 year of trial completion. Strategies to improve reporting are needed within the pulmonary community.
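A minimal sketch of the multivariable Cox regression used to estimate adjusted hazard ratios for results reporting, written with the lifelines package; the covariate names, the toy dataset, and the small ridge penalty are illustrative assumptions rather than the study's specification.

# Minimal sketch: Cox proportional-hazards model for time to results reporting.
# The tiny dataset and column names are hypothetical; the penalizer keeps the toy fit stable.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_to_report": [10, 24, 60, 14, 60, 36, 8, 60, 20, 48],  # follow-up in months
    "reported":         [1,  1,  0,  1,  0,  1,  1, 0,  1,  1],   # 1 = results posted
    "fda_oversight":    [1,  1,  0,  1,  1,  0,  1, 0,  0,  1],
    "phase4":           [1,  0,  0,  1,  0,  1,  1, 0,  0,  0],
    "industry_funded":  [1,  1,  0,  0,  1,  0,  1, 0,  1,  1],
})

cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="months_to_report", event_col="reported")
# Hazard ratios with 95% confidence intervals, analogous to the adjusted HRs reported above
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])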
Project description: Background: ClinicalTrials.gov requires reporting of result summaries for many drug and device trials. Objective: To evaluate the consistency of reporting of trials that are registered in the ClinicalTrials.gov results database and published in the literature. Data sources: ClinicalTrials.gov results database and matched publications identified through ClinicalTrials.gov and a manual search of 2 electronic databases. Study selection: 10% random sample of phase 3 or 4 trials with results in the ClinicalTrials.gov results database, completed before 1 January 2009, with 2 or more groups. Data extraction: One reviewer extracted data about trial design and results from the results database and matching publications. A subsample was independently verified. Results: Of 110 trials with results, most were industry-sponsored, parallel-design drug studies. The most common inconsistency was the number of secondary outcome measures reported (80%). Sixteen trials (15%) reported the primary outcome description inconsistently, and 22 (20%) reported the primary outcome value inconsistently. Thirty-eight trials inconsistently reported the number of individuals with a serious adverse event (SAE); of these, 33 (87%) reported more SAEs in ClinicalTrials.gov. Among the 84 trials that reported SAEs in ClinicalTrials.gov, 11 publications did not mention SAEs, 5 reported them as zero or not occurring, and 21 reported a different number of SAEs. Among 29 trials that reported deaths in ClinicalTrials.gov, 28% differed from the matched publication. Limitation: Small sample that included the earliest results posted to the database. Conclusion: Reporting discrepancies between the ClinicalTrials.gov results database and matching publications are common. Which source contains the more accurate account of results is unclear, although ClinicalTrials.gov may provide a more comprehensive description of adverse events than the publication. Primary funding source: Agency for Healthcare Research and Quality.
Project description: Background: Selective outcome reporting is a significant methodological concern. Comparisons between the outcomes reported in clinical trial registrations and those later published allow investigators to understand the extent of selection bias among trialists. We examined the possibility of selective outcome reporting in randomized controlled trials (RCTs) published in neurology journals. Methods: We searched PubMed for randomized controlled trials from Jan 1, 2010 to Dec 31, 2015 published in the top 3 impact factor neurology journals. These articles were screened according to specific inclusion criteria. Each author individually extracted data from trials following a standardized protocol. A second author verified each extracted element, and discrepancies were resolved. Consistency between registered and published outcomes was evaluated, and correlations between discrepancies and funding, journal, and temporal trends were examined. Results: 180 trials were included for analysis. 10 (6%) primary outcomes were demoted, 38 (21%) primary outcomes were omitted from the publication, and 61 (34%) unregistered primary outcomes were added to the published report. There were 18 (10%) cases of secondary outcomes being upgraded to primary outcomes in the publication, and there were 53 (29%) changes in timing of assessment. Of 82 (46%) major discrepancies with reported p-values, 54 (66.0%) favored publication of statistically significant results. Conclusion: Across trials, we found 180 major discrepancies. 66% of major discrepancies with a reported p-value (n = 82) favored statistically significant results. These results suggest a need within neurology to provide more consistent and timely registration of outcomes.
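The discrepancy categories above (omitted, added, demoted, upgraded) can be illustrated with a simple set comparison between registered and published outcomes; the outcome names below are hypothetical, and the actual study relied on manual, protocol-driven extraction rather than automated matching like this.

# Minimal sketch: classifying discrepancies between registered and published outcomes
# for a single trial. Outcome names are hypothetical.
registered = {
    "primary": {"change in disability score at 12 weeks"},
    "secondary": {"adverse events", "quality of life at 12 weeks"},
}
published = {
    "primary": {"quality of life at 12 weeks"},                                 # upgraded
    "secondary": {"adverse events", "change in disability score at 12 weeks"},  # demoted
}

omitted_primary = registered["primary"] - (published["primary"] | published["secondary"])
added_primary = published["primary"] - (registered["primary"] | registered["secondary"])
demoted = registered["primary"] & published["secondary"]
upgraded = registered["secondary"] & published["primary"]

print("omitted primary:", omitted_primary)
print("unregistered primary added:", added_primary)
print("demoted to secondary:", demoted)
print("upgraded to primary:", upgraded)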
Project description: Objective: Childhood obesity is one of the most severe challenges of public health in the twenty-first century and may increase the risk of various physical and psychological diseases in adulthood. The prevalence and predictors of unreported results and premature termination in pediatric obesity research are not clear. We aimed to characterize childhood obesity trials registered on ClinicalTrials.gov and identify features associated with early termination and lack of results reporting. Methods: Records were downloaded and screened for all childhood obesity trials from the inception of ClinicalTrials.gov to July 29, 2021. We performed descriptive analyses of characteristics, Cox regression for early termination, and logistic regression for lack of results reporting. Results: We identified 1,312 trials registered at ClinicalTrials.gov. Among ClinicalTrials.gov-registered childhood obesity-related intervention trials, 88.5% did not report results and 4.3% were prematurely terminated. Factors associated with a reduced risk of unreported results were registration in the United States and drug intervention trials. Factors associated with a reduced risk of early termination were National Institutes of Health (NIH) or other federal agency funding and large trial size. Conclusion: The problem of unreported results in clinical trials of childhood obesity is serious. Timely reporting of results and of reasons for termination remains an urgent aim for childhood obesity trials.
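A minimal sketch of the logistic-regression step for unreported results, using statsmodels; the predictor names and simulated data are hypothetical and only show how odds ratios and confidence intervals would be obtained from such a model.

# Minimal sketch: logistic regression for unreported results, in the spirit of the
# analysis described above. Predictors and data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "us_registered": rng.integers(0, 2, n),
    "drug_trial": rng.integers(0, 2, n),
})
# Simulate an outcome where US registration and drug interventions lower the odds
logit_p = 1.0 - 0.8 * df["us_registered"] - 0.6 * df["drug_trial"]
df["unreported"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("unreported ~ us_registered + drug_trial", data=df).fit(disp=0)
print(np.exp(model.params).round(2))      # odds ratios
print(np.exp(model.conf_int()).round(2))  # 95% confidence intervals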
Project description: Objective: We assessed how well articles in major medical and psychiatric journals followed best reporting practices in presenting results of intervention studies. Method: Standardised data collection was used to review studies in high-impact and widely read medical (JAMA, Lancet and New England Journal of Medicine) and psychiatric (American Journal of Psychiatry, JAMA Psychiatry, Journal of Clinical Psychiatry and Lancet Psychiatry) journals, published between 1 September 2018 and 31 August 2019. Two team members independently reviewed each article. Measures: The primary outcome measure was the proportion of papers reporting consensus elements required to understand and evaluate the results of the intervention. The secondary outcome measure was comparison of complete and accessible reporting in the major medical versus the major psychiatric journals. Results: One hundred twenty-seven articles were identified for inclusion. At least 90% of articles in both medical and psychiatric journals included sample size, statistical significance, randomisation method, elements of study flow, and age, sex, and illness severity by randomisation group. Selected elements less frequently reported by either journal type were confidence intervals in the abstract, reported in 93% (95% CI 84% to 97%) of medical journal articles and 58% (95% CI 45% to 69%) of psychiatric journal articles, and, in the main text, sample size method (93%, 95% CI 84% to 97% medical; 69%, 95% CI 57% to 80% psychiatric), race and ethnicity by randomisation group (51%, 95% CI 40% to 63% medical; 73%, 95% CI 60% to 83% psychiatric), and adverse events (94%, 95% CI 86% to 98% medical; 80%, 95% CI 68% to 88% psychiatric). CIs were included less often in psychiatric than medical journals (p<0.004 abstract, p=0.04 main text, after multiple-testing correction). Conclusions: Recommendations include standard inclusion of a table specifying the outcome(s) designated as primary, and the sample size, effect size(s), CI(s) and p value(s) corresponding to the primary test(s) for efficacy.
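The proportions above are reported with 95% confidence intervals and multiplicity-corrected p-values; the sketch below shows one way to produce that kind of summary with statsmodels. The Wilson interval, the two-proportion z-test, the Holm correction, and all counts are illustrative choices, since the abstract does not specify the authors' exact methods.

# Minimal sketch: 95% CI for a reporting proportion, plus multiplicity-corrected
# comparisons between journal types. All counts are illustrative.
from statsmodels.stats.proportion import proportion_confint, proportions_ztest
from statsmodels.stats.multitest import multipletests

# e.g. 58 of 100 psychiatric-journal articles reporting CIs in the abstract
low, high = proportion_confint(count=58, nobs=100, alpha=0.05, method="wilson")
print(f"58% (95% CI {low:.0%} to {high:.0%})")

# Compare medical vs psychiatric journals on several elements, then correct the p-values
comparisons = {
    "CI in abstract":  ([52, 37], [56, 64]),   # (successes, sample sizes), illustrative
    "CI in main text": ([50, 45], [56, 64]),
    "adverse events":  ([53, 51], [56, 64]),
}
pvals = [proportions_ztest(count, nobs)[1] for count, nobs in comparisons.values()]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for name, p in zip(comparisons, p_adj):
    print(f"{name}: adjusted p = {p:.3f}")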
Project description: Background: Pharmaceutical companies and other trial sponsors must submit certain trial results to ClinicalTrials.gov. The validity of these results is unclear. Purpose: To validate results posted on ClinicalTrials.gov against publicly available U.S. Food and Drug Administration (FDA) reviews on Drugs@FDA. Data sources: ClinicalTrials.gov (registry and results database) and Drugs@FDA (medical and statistical reviews). Study selection: 100 parallel-group, randomized trials for new drug approvals (January 2013 to July 2014) with results posted on ClinicalTrials.gov (15 March 2015). Data extraction: 2 assessors extracted, and another verified, the trial design, primary and secondary outcomes, adverse events, and deaths. Results: Most trials were phase 3 (90%), double-blind (92%), and placebo-controlled (73%) and involved 32 drugs from 24 companies. Of 137 primary outcomes identified from ClinicalTrials.gov, 134 (98%) had corresponding data at Drugs@FDA, 130 (95%) had concordant definitions, and 107 (78%) had concordant results. Most differences were nominal (that is, relative difference <10%). Primary outcome results in 14 trials could not be validated. Of 1927 secondary outcomes from ClinicalTrials.gov, Drugs@FDA mentioned 1061 (55%) and included results data for 367 (19%). Of 96 trials with 1 or more serious adverse events in either source, 14 could be compared and 7 had discordant numbers of persons experiencing the adverse events. Of 62 trials with 1 or more deaths in either source, 25 could be compared and 17 were discordant. Limitation: Unknown generalizability to uncontrolled or crossover trial results. Conclusion: Primary outcome definitions and results were largely concordant between ClinicalTrials.gov and Drugs@FDA. Half the secondary outcomes, as well as serious adverse events and deaths, could not be validated because Drugs@FDA includes only "key outcomes" for regulatory decision making and frequently includes only adverse event results aggregated across multiple trials. Primary funding source: National Library of Medicine.
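The validation above treats a relative difference under 10% as a nominal difference; the sketch below shows that concordance check for a few hypothetical outcome values, using one plausible definition of relative difference (the abstract does not give the exact formula the authors used).

# Minimal sketch: flagging whether a posted result and the corresponding FDA-review
# value are concordant, treating a relative difference under 10% as nominal.
# Outcome names and values are hypothetical.
def relative_difference(a: float, b: float) -> float:
    """Absolute difference relative to the mean magnitude of the two values."""
    return abs(a - b) / ((abs(a) + abs(b)) / 2)

pairs = [
    ("change in HbA1c", -0.82, -0.80),
    ("responders (%)", 43.0, 47.9),
    ("serious adverse events (n)", 12, 17),
]

for outcome, registry_value, fda_value in pairs:
    rel = relative_difference(registry_value, fda_value)
    status = "nominal difference" if rel < 0.10 else "discordant"
    print(f"{outcome}: registry={registry_value}, FDA={fda_value}, "
          f"relative diff={rel:.1%} -> {status}")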