Project description: The Journal of Physiology and the British Journal of Pharmacology jointly published an editorial series in 2011 to improve standards in statistical reporting and data analysis. It is not known whether reporting practices changed in response to the editorial advice. We conducted a cross-sectional analysis of reporting practices in a random sample of research papers published in these journals before (n = 202) and after (n = 199) publication of the editorial advice. Descriptive data are presented. There was no evidence that reporting practices improved following publication of the editorial advice. Overall, of papers that reported written measures summarizing data variability, 76-84% used standard errors of the mean, and 90-96% of papers did not report exact p-values for primary analyses and post hoc tests. Of papers that plotted measures summarizing data variability, 76-84% used standard errors of the mean, and only 2-4% plotted the raw data used to calculate variability. Of papers that reported p-values between 0.05 and 0.1, 56-63% interpreted these as trends or as statistically significant. Implied or gross spin was noted incidentally in papers before (n = 10) and after (n = 9) the editorial advice was published. Overall, poor statistical reporting, inadequate data presentation and spin were present both before and after the editorial advice was published. While the scientific community continues to implement strategies for improving reporting practices, our results indicate that stronger incentives or enforcement mechanisms are needed.
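To make the distinction at issue concrete, here is a minimal R sketch (with invented data, not drawn from the audited papers) of why the standard error of the mean describes the precision of the estimated mean rather than the variability in the data:

    # Minimal sketch (hypothetical values): SD describes spread in the data;
    # SEM describes precision of the mean and shrinks as n grows.
    set.seed(42)
    x <- rnorm(30, mean = 5, sd = 2)   # mock measurements
    sd(x)                              # standard deviation: variability of the data
    sd(x) / sqrt(length(x))            # SEM: always smaller, so error bars based
                                       # on it understate variability

Because the SEM is the SD divided by sqrt(n), plotting it in place of the SD makes data look less variable the larger the sample, which is the concern the editorials raised.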
Project description: Experimental philosophy (x-phi) is a young field of research at the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions of a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi and compared rates of inconsistencies between psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science.
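For readers unfamiliar with the tool, a minimal sketch of typical statcheck usage follows; the input sentence is invented, and the exact output column names vary across package versions:

    # Minimal sketch of statcheck usage (CRAN package 'statcheck').
    # statcheck() extracts APA-style NHST results, recomputes the p-value
    # from the test statistic and degrees of freedom, and flags mismatches.
    # install.packages("statcheck")
    library(statcheck)

    txt <- "The effect was reliable, t(28) = 2.20, p < .01."  # invented example
    res <- statcheck(txt)
    res  # inspect the consistency flags (column names differ by version)

In this invented example the recomputed two-tailed p-value is about .036, so the reported "p < .01" would be flagged as inconsistent.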
Project description: Purpose: To investigate the quality of harms reporting in systematic reviews (SRs) regarding hip arthroscopy in the current literature. Methods: In May 2022, an extensive search of 4 major databases was performed to identify SRs regarding hip arthroscopy: MEDLINE (PubMed and Ovid), EMBASE, Epistemonikos, and the Cochrane Database of Systematic Reviews. A cross-sectional analysis was conducted in which investigators performed screening and data extraction of the included studies in a masked, duplicate fashion. AMSTAR-2 (A Measurement Tool to Assess Systematic Reviews-2) was used to assess the methodologic quality and bias of the included studies. The corrected covered area was calculated for SR dyads. Results: A total of 82 SRs were included in our study for data extraction. Of these SRs, 37 reported under 50% of the harms criteria (37 of 82, 45.1%) and 9 did not report harms at all (9 of 82, 10.9%). A significant relation was found between completeness of harms reporting and overall AMSTAR appraisal (P = .0261), as well as whether a harm was listed as a primary or secondary outcome (P = .0001). Eight SR dyads had corrected covered areas of 50% or greater and were compared for shared harms reported. Conclusions: In this study, we found inadequate harms reporting in most SRs concerning hip arthroscopy. Clinical relevance: Given the number of hip arthroscopy procedures being performed, adequate reporting of harms-related information in the research surrounding this treatment is essential for assessing its efficacy. This study provides data on harms reporting in SRs regarding hip arthroscopy.
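For context, the corrected covered area (CCA) for overlapping reviews is conventionally computed as CCA = (N - r) / (r * c - r), where N is the total number of included publications counted with duplicates, r the number of unique publications, and c the number of reviews (c = 2 for a dyad). A minimal R sketch with invented numbers:

    # Corrected covered area for overlapping reviews; values are invented.
    # N = included publications counted with duplicates,
    # r = unique publications, c = number of reviews.
    cca <- function(N, r, c) (N - r) / (r * c - r)
    cca(N = 30, r = 20, c = 2)  # two reviews sharing 10 of 20 studies -> 0.5 (50%)

A CCA of 50% or greater, as in the eight dyads above, indicates that two reviews draw on a heavily overlapping evidence base.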
Project description: The investigators will conduct a longitudinal, mixed-methods cohort study to assess primary and secondary psychosocial outcomes among 705 MyCode pediatric participants and their parents, and the health behaviors of parents whose children receive an adult- or pediatric-onset genomic result. Data will be gathered via quantitative surveys using validated measures of distress, family functioning, quality of life, body image, perceived cancer/heart disease risk, genetic counseling satisfaction, genomics knowledge, and adjustment to genetic information; qualitative interviews with adolescents and parents; and electronic health record review of parents' cascade testing uptake and initiation of risk-reduction behaviors. The investigators will also conduct empirical and theoretical legal research to examine the loss-of-chance doctrine and its applicability to genomic research.
Project description: Background: To assess registration completeness and safety data of trials on human genome editing (HGE) reported in primary registries and published in journals, as HGE raises safety and ethical problems, including the risk of undesirable and unpredictable outcomes. Registration transparency has not been evaluated for clinical trials using these novel and revolutionary techniques in human participants. Methods: Observational study of trials involving engineered site-specific nucleases and long-term follow-up observations, identified from the WHO ICTRP HGE Registry in November 2020 and two comprehensive reviews published in the same year. Registration information and information on adverse events (AEs) were collected from public registries and matching publications. Published data were extracted in May 2021. Results: Among 81 eligible trials, most were recruiting (51.9%) and most were phase 1 trials (45.7%). Five trials were withdrawn. Most trials investigated CAR T-cell therapies (45.7%) and used CRISPR/Cas9 (35.8%) ex vivo (88.9%). Among 12 trials with protocols both registered and published, eligibility criteria, sample size, and secondary outcome measures were each consistently reported in fewer than half. Three trials posted results on ClinicalTrials.gov, and one reported serious AEs. Conclusions: Incomplete registration and published data underscore the need to increase the transparency of HGE trials. Further improvements in registration requirements, including for phase 1 trials, and a more controlled publication procedure are needed to support the implementation of this promising technology.
Project description: Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared among the authors. We received responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting of results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably, but not completely, standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Across all articles, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
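The consistency check behind such automated procedures is straightforward to reproduce by hand: recompute the two-tailed p-value from the reported test statistic and degrees of freedom, then compare it with the reported p-value. A minimal R sketch with an invented result:

    # Invented example: a paper reports t(28) = 2.20, p < .01 (two-tailed).
    t_stat <- 2.20
    dfree  <- 28
    p_computed <- 2 * pt(-abs(t_stat), df = dfree)  # recomputed two-tailed p
    p_computed  # ~0.036: inconsistent with "p < .01", though still below .05

An inconsistency like this one would count toward the 63% figure; it would also count as a potential decision error only if the recomputed p-value crossed the significance threshold the authors used.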
Project description: Pharmacokinetics is the cornerstone of understanding drug absorption, distribution, metabolism, and elimination. It is also key to describing variability in drug response caused by factors such as drug-drug interactions (DDIs), pharmacogenetics, and impaired kidney or liver function. This tutorial provides a guideline and step-by-step walkthrough of the essential considerations when designing clinical pharmacokinetic studies and reporting their results, including a comprehensive guide to the statistical analysis and complete code for the statistical software R. As an example, we created a mock dataset simulating a clinical pharmacokinetic DDI study with 12 subjects who were administered 2 mg oral midazolam with and without an inducer of cytochrome P450 3A. We provide a step-by-step guide to the statistical analysis of this clinical pharmacokinetic study, including sample size/power calculation, descriptive statistics, noncompartmental analysis, and hypothesis testing. The different analyses and parameters are described in detail, and we provide complete, ready-to-use R code in the supplementary files. Finally, we discuss important considerations when designing and reporting clinical pharmacokinetic studies. The scope of this tutorial is not limited to DDI studies; with minor adjustments, it applies to all types of clinical pharmacokinetic studies. This work was done by early career researchers for early career researchers, and we hope it may help early career researchers getting started on their own pharmacokinetic studies. We encourage you to use it as an inspiration and starting point and to continuously evolve your statistical skills.
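As an illustration of the core comparison in such a DDI study (a hedged sketch with simulated values, not the tutorial's supplementary code), exposure analyses are typically performed on log-transformed AUC and back-transformed to a geometric mean ratio (GMR) with a 90% CI:

    # Sketch of a paired DDI analysis on log-transformed AUC (simulated data).
    # Inference on the log scale is back-transformed to a GMR with a 90% CI.
    set.seed(1)
    auc_alone   <- rlnorm(12, meanlog = log(50), sdlog = 0.4)  # midazolam alone
    auc_inducer <- auc_alone *
      rlnorm(12, meanlog = log(0.3), sdlog = 0.2)              # with CYP3A inducer

    fit <- t.test(log(auc_inducer), log(auc_alone),
                  paired = TRUE, conf.level = 0.90)
    c(GMR   = exp(unname(fit$estimate)),   # geometric mean ratio
      lower = exp(fit$conf.int[1]),        # 90% CI, ratio scale
      upper = exp(fit$conf.int[2]))

A GMR well below 1 with a CI excluding 1, as simulated here, would indicate that the inducer substantially lowers midazolam exposure.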
Project description: Background: The widespread reluctance to share published research data is often hypothesized to be due to authors' fear that reanalysis may expose errors in their work or may produce conclusions that contradict their own. However, these hypotheses have not previously been studied systematically. Methods and findings: We related the reluctance to share research data for reanalysis to 1148 statistically significant results reported in 49 papers published in two major psychology journals. We found the reluctance to share data to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance. Conclusions: Our findings, based on psychology papers, suggest that statistical results are particularly hard to verify when reanalysis is more likely to lead to contrasting conclusions. This highlights the importance of establishing mandatory data-archiving policies.