Project description: Background: Clinical trial registries have been established as a form of public accountability. Sponsors ought to register their trials promptly and accurately, but this is not always done. Problems include non-registration of trials, registration with incomplete information, and failure to report trial results on time. In this study we quantify some quality issues relating to Principal Investigator (PI) and Responsible Party data. Methods: We analyzed interventional trials registered with ClinicalTrials.gov. Using certain selection criteria, we started with 112,013 records and then applied further filters. A trial had to (a) start between 1 January 2005 and 31 December 2014, (b) include a "drug" or "biological" in the "intervention" field, (c) be registered with an American authority, and (d) list a real person's name as investigator along with his or her role in the study. Results: We identified four categories of errors in the ClinicalTrials.gov records. First, some data were missing: the name of the investigator, or his or her role, was missing in 12% of 35,121 trials, and of 71,359 name-role pairs examined, 17% of the "names" were not those of real persons but junk text. Second, a large number of names appeared with variants; we identified 19 categories of variants and determined that 13% of the names had variants that could not be resolved programmatically. Third, some trials listed multiple PIs, even though a single person holds overall responsibility for a trial and only that person should be listed as PI. Fourth, in examining whether the PI's name was available as part of the Responsible Party tag, we found that the tag was absent in 1221 (3.5%) of 35,121 trials. Conclusions: We have outlined four categories of problems with data hosted by ClinicalTrials.gov and have quantified three of them. We also suggest how these errors could be prevented in the future. Regular audits of trial registries are important so that gaps in the records can be identified and addressed.
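As a minimal sketch, the record-selection filters described above might look like the following; the field names (start_date, intervention_types, authorities, investigator_name, investigator_role) are assumptions for illustration and do not reflect the authors' actual code or the ClinicalTrials.gov export schema.

```python
# Hypothetical sketch of the four selection filters described in the Methods above.
# All field names are assumed; adapt them to the actual registry export being used.
from datetime import date

def passes_filters(trial: dict) -> bool:
    """Return True if a trial record meets the four selection criteria."""
    # (a) start date between 1 January 2005 and 31 December 2014
    if not (date(2005, 1, 1) <= trial["start_date"] <= date(2014, 12, 31)):
        return False
    # (b) intervention includes a drug or biological
    if not {"Drug", "Biological"} & set(trial["intervention_types"]):
        return False
    # (c) registered with an American authority
    if not any("United States" in authority for authority in trial["authorities"]):
        return False
    # (d) a named investigator and his or her role are both present
    return bool(trial.get("investigator_name")) and bool(trial.get("investigator_role"))

# selected = [t for t in all_trials if passes_filters(t)]
```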
Project description: Background: Common disease-specific outcomes are vital for ensuring comparability of clinical trial data and enabling meta-analyses and interstudy comparisons. Traditionally, the process of deciding which outcomes should be recommended as common for a particular disease relied on assembling and surveying panels of subject-matter experts, which is usually time-consuming and laborious. Objective: The objectives of this work were to develop and evaluate a generalized pipeline that can automatically identify common outcomes specific to any given disease by finding, downloading, and analyzing data from previous clinical trials relevant to that disease. Methods: An automated pipeline was designed to interface with the ClinicalTrials.gov application programming interface and download the trials relevant to the input condition. The primary and secondary outcomes of those trials were parsed, grouped on the basis of text similarity, and ranked by frequency. The quality and usefulness of the pipeline's output were assessed by comparing the top outcomes it identified for chronic obstructive pulmonary disease (COPD) against a list of 80 outcomes manually abstracted from the most frequently cited and comprehensive reviews of clinical outcomes for COPD. Results: The common disease-specific outcome pipeline successfully downloaded and processed 3876 studies related to COPD. Manual verification confirmed that the pipeline downloaded and processed the same number of trials as were obtained from the self-service ClinicalTrials.gov portal. Evaluating the automatically identified outcomes against the manually abstracted ones showed that the pipeline achieved a recall of 92% and a precision of 79%. The precision figure indicates that the pipeline identified many outcomes not covered in the literature reviews; assessment of those outcomes indicated that they are relevant to COPD and could be considered in future research. Conclusions: An automated, evidence-based pipeline can identify common clinical trial outcomes of comparable breadth and quality to those identified in comprehensive literature reviews. Moreover, such an approach can highlight relevant outcomes for further consideration.
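A hedged sketch of such a pipeline is shown below. It uses the current ClinicalTrials.gov v2 study-records API (the endpoint, parameters, and response fields here are assumptions about the present API, not the interface the authors used), and approximates the grouping step with simple string similarity rather than the authors' exact method.

```python
# Sketch of an outcome-mining pipeline: download trials for a condition, collect their
# primary/secondary outcome titles, group similar titles, and rank groups by frequency.
import requests
from difflib import SequenceMatcher

API_URL = "https://clinicaltrials.gov/api/v2/studies"  # assumed current endpoint

def fetch_outcomes(condition: str) -> list[str]:
    """Collect primary and secondary outcome titles for all trials matching a condition."""
    outcomes, token = [], None
    while True:
        params = {"query.cond": condition, "pageSize": 100}
        if token:
            params["pageToken"] = token
        page = requests.get(API_URL, params=params).json()
        for study in page.get("studies", []):
            module = study["protocolSection"].get("outcomesModule", {})
            for o in module.get("primaryOutcomes", []) + module.get("secondaryOutcomes", []):
                outcomes.append(o["measure"].strip().lower())
        token = page.get("nextPageToken")
        if not token:
            break
    return outcomes

def group_and_rank(outcomes: list[str], threshold: float = 0.85) -> list[tuple[str, int]]:
    """Group outcome strings by text similarity, then rank groups by frequency."""
    groups: list[list[str]] = []
    for text in outcomes:
        for group in groups:
            if SequenceMatcher(None, text, group[0]).ratio() >= threshold:
                group.append(text)
                break
        else:
            groups.append([text])
    return sorted(((g[0], len(g)) for g in groups), key=lambda item: -item[1])

# ranked = group_and_rank(fetch_outcomes("COPD"))
```

The grouping here is a naive pairwise comparison; a production pipeline would likely normalize terminology and use a more scalable clustering approach, but the structure (fetch, parse, group, rank) mirrors the steps described in the abstract.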
Project description: Background: The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update on relevant policies, and show how the data can be used to gain insight into the state of clinical research. Methods: We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010. Results: As of September 27, 2010, ClinicalTrials.gov was receiving approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. In a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of the publicly available results records, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry-record outcome measures, 61% lacked specificity in describing the metric to be used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25), and 24% of these trials reported results for 90% or fewer of their participants. Conclusions: ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community submitting accurate, informative data.
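The summary statistics reported above could be computed along the following lines; this is only an illustrative sketch, and the record structure (primary_outcomes, study_groups, analysis_populations) is assumed rather than taken from the ClinicalTrials.gov results schema.

```python
# Illustrative sketch of summarizing results records: share of records with more than two
# primary outcome measures, and the distribution of analysis populations per study group.
from statistics import mean, median

def summarize(results_records: list[dict]) -> dict:
    many_primary = sum(1 for r in results_records if len(r["primary_outcomes"]) > 2)
    pops_per_group = [
        len(group["analysis_populations"])
        for record in results_records
        for group in record["study_groups"]
    ]
    return {
        "pct_more_than_two_primary": 100 * many_primary / len(results_records),
        "mean_populations_per_group": mean(pops_per_group),
        "median_populations_per_group": median(pops_per_group),
    }
```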
Project description: Introduction: Emergency medicine (EM) organizations such as the Society for Academic Emergency Medicine and the Institute of Medicine have called for more clinical research to address the scarcity of research in EM. Previous investigations have examined funding and productivity in EM research, but whether EM researchers preferentially concentrate on certain patient-related topics is not known. We hypothesized that at least part of the scarcity of EM research stems from a tendency of EM researchers, like researchers in other fields, to focus on rarer conditions with higher morbidity or mortality rather than on more common conditions with lower acuity. This study compared the frequency of specific medical conditions presenting to emergency departments nationwide with the frequency of emergency physician research on those same conditions. Methods: This study is a structured retrospective review and comparison of two databases over an 11-year span. Principal diagnoses made by emergency physicians, as reported by the National Hospital Ambulatory Medical Care Survey, were compared with all first-author publications by emergency physicians indexed in PubMed between 1996 and 2006. Statistical analyses included correlations and linear regression, with the number of emergency department (ED) visits per diagnosis as the independent variable and the number of articles published as the dependent variable. Results: During the study period there was significant concordance between the frequency of presenting conditions in the emergency department and the frequency of research performed on those conditions, with a high correlation of 0.85 (P < 0.01). The most common ED diagnoses, such as injury/poisoning, symptoms/ill-defined conditions, and diseases of the respiratory system, accounted for 60.9% of ED principal diagnoses and 50.2% of the total research published in PubMed. Conclusion: Unlike researchers in other fields, emergency physicians investigate clinical problems in almost exactly the proportion in which those conditions are encountered in the emergency department. The scarcity of EM research is not explained by a skewed focus toward less common patient problems.
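The described analysis, correlating ED visit counts with publication counts per diagnosis group and fitting a simple linear regression, might be sketched as follows; the paired counts here are hypothetical placeholders, not the study's data.

```python
# Sketch (not the authors' code) of the described analysis: Pearson correlation and a
# simple linear regression of article counts on ED visit counts per diagnosis category.
import numpy as np
from scipy import stats

# Hypothetical paired counts per diagnosis category: (annual ED visits, first-author EM articles).
ed_visits = np.array([31_000_000, 18_500_000, 12_000_000, 7_400_000, 3_100_000])
articles = np.array([540, 310, 220, 150, 60])

r, p_value = stats.pearsonr(ed_visits, articles)
slope, intercept, r_lin, p_lin, stderr = stats.linregress(ed_visits, articles)

print(f"Pearson r = {r:.2f} (P = {p_value:.3g})")
print(f"articles ~ {slope:.2e} * visits + {intercept:.1f}")
```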
Project description: Background: Healthcare system data (HSD) are increasingly used in clinical trials, augmenting or replacing traditional methods of collecting outcome data. This study, PRIMORANT, set out to identify, in the UK context, the issues to be considered before the decision to use HSD for outcome data in a clinical trial is finalised, a methodological question prioritised by the clinical trials community. Methods: The PRIMORANT study had three phases. First, an initial workshop was held to scope the issues faced by trialists when considering whether to use HSD for trial outcomes. Second, a consultation exercise was undertaken with clinical trials unit (CTU) staff, trialists, methodologists, clinicians, funding panels and data providers. Third, a final discussion workshop was held, at which the results of the consultation were fed back, case studies were presented, and issues were considered in small breakout groups. Results: Key topics covered in the consultation process were the validity of outcome data, timeliness of data capture, internal pilots, data sharing, practical issues, and decision-making. A majority of consultation respondents (n = 78, 95%) considered the development of guidance for trialists to be feasible. Guidance was developed following the discussion workshop for five broad areas: terminology, feasibility, internal pilots, onward data sharing, and data archiving. Conclusions: We provide guidance to inform decisions about whether or not to use HSD for outcomes and, if so, to assist trialists in working with registries and other HSD providers to improve the design and delivery of trials.
Project description: Participants with stimulating and recording electrodes implanted within the brain for clinical evaluation and treatment provide a rare opportunity to unravel the neuronal correlates of human memory, as well as offer potential for the modulation of behavior. Recent intracranial stimulation studies of memory have been inconsistent in the methodologies employed and the conclusions reported, which renders generalization and the construction of a common framework impossible. In an effort to unify future studies and enable larger meta-analyses, we propose in this mini-review a set of guidelines to consider when pursuing intracranial stimulation studies of human declarative memory and summarize the details reported by previous relevant studies. We present technical and safety issues to consider when undertaking such studies, along with a checklist for researchers and clinicians to use when reporting results, covering targeting, placement, and localization of electrodes; behavioral task design; stimulation and electrophysiological recording methods; details of participants; and statistical analyses. We hope that, as research in invasive stimulation of human declarative memory progresses, these reporting guidelines will aid in setting standards for multicenter studies, in comparing findings across studies, and in replicating studies.
Project description: Genomic research and biobanking have undergone exponential growth in Africa, and at the heart of this research is the sharing of biospecimens and associated clinical data amongst researchers in Africa and across the world. While this move towards open science progresses, data protection regulations have been strengthened internationally to safeguard the rights of data subjects while promoting the movement of data for the benefit of research. In line with this global shift, many jurisdictions in Africa are introducing data protection regulations, but there has been limited consideration of how data sharing for genomic research and biobanking in Africa should be regulated. South Africa (SA) is one country that has sought to regulate the international sharing of data: it has enacted the Protection of Personal Information Act (POPIA) 2013, which, once in force, will change the governance and regulation of data in SA, including health research data. To identify and discuss challenges and opportunities in the governance of data sharing for genomic and health research data in SA, a two-day meeting was convened in February 2019 in Cape Town, SA, with over 30 participants with expertise in law, ethics, genomics, and biobanking science, drawn from academia, industry, and government. This report sets out some of the key challenges identified during the workshop and the opportunities and limitations of the current regulatory framework in SA.