Project description:
Background: Recent cohort studies of randomised controlled trials (RCTs) have provided evidence of within-study selective reporting bias, whereby statistically significant outcomes are more likely to be completely reported than non-significant outcomes. Bias resulting from selective reporting can affect meta-analyses, influencing the conclusions of systematic reviews and, in turn, evidence-based clinical practice guidelines. In 2006 we received funding to investigate whether there was evidence of within-study selective reporting in a cohort of RCTs submitted to New Zealand Regional Ethics Committees in 1998/99. This research involved accessing ethics applications, their amendments and annual reports, and comparing these with corresponding publications. For practical and scientific reasons, we did not plan to obtain informed consent from trialists to view their ethics applications. In November 2006 we sought ethical approval to undertake the research from our institutional ethics committee. The Committee declined our application on the grounds that we were not obtaining informed consent from the trialists to view their ethics applications. This initiated a seventeen-month process to obtain ethical approval. This publication outlines what we planned to do and the issues we encountered, discusses the legal and ethical issues, and presents some potential solutions.
Discussion and conclusion: Methodological research such as this has the potential for public benefit, and undertaking it poses little or no harm to the participants (trialists). Further, New Zealand has freedom-of-information legislation which, in this circumstance, unambiguously provided rights of access to and use of the information in the ethics applications. The decision of our institutional ethics committee defeated this right and did not recognise the observational nature of this research. Methodological research such as this can be used to develop processes to improve the quality of research reporting. Greater recognition of the potential benefit of this research, both in the broader research community and among those who sit on ethics committees, is perhaps needed. In addition, changes to the ethical review process should be considered that separate those who review proposals to undertake methodological research using ethics applications from those responsible for reviewing ethics applications for trials. Finally, we contend that the research community could benefit from quality improvement approaches used in allied sectors.
Project description:
It is common to undertake qualitative research alongside randomised controlled trials (RCTs) when evaluating complex interventions. Researchers tend to analyse these datasets one by one and then consider their findings separately in the discussion section of the final report. They rarely integrate the quantitative and qualitative data or findings, missing opportunities to combine data to add rigour, enable thorough and more complete analysis, lend credibility to results, and generate further important insights about the intervention under evaluation. This paper reports on a two-day expert meeting funded by the United Kingdom Medical Research Council Hubs for Trials Methodology Research, with the aims of identifying current strengths and weaknesses in the integration of quantitative and qualitative methods in clinical trials, establishing the next steps required to provide the trials community with guidance on the integration of mixed methods in RCTs, and setting up a network of individuals, groups and organisations willing to collaborate on related methodological activity. We summarise integration techniques and go beyond previous publications by highlighting the potential value of integration using three examples specific to RCTs. We suggest that applying mixed-methods integration techniques to data or findings from studies involving both RCTs and qualitative research can yield insights useful for understanding variation in outcomes, the mechanisms by which interventions have an impact, and ways of tailoring therapy to patient preference and type. Given a general lack of examples and knowledge of these techniques, researchers and funders will need future guidance on how to undertake and appraise them.
Project description:
Objectives: Medical records are critical to patient care but often contain incomplete information. In UK hospitals, record-keeping is traditionally undertaken by junior doctors, who are increasingly completing early-career placements in psychiatry, yet negative attitudes towards psychiatry may affect their performance. Little is known about the accuracy of medical records in psychiatry in general. This study aimed to evaluate the accuracy of Electronic Medical Records (EMRs) pertinent to clinical decision-making ("rationale") for prescribing completed by junior doctors during a psychiatry placement, focusing on differences between psychotropic and non-psychotropic drugs and on the temporal pattern over the course of the placement.
Results: EMRs of 276 participants, yielding 780 ward round entries, were analysed; 100% of entries were completed by Foundation Year or General Practice specialty-training junior doctors rather than more senior clinicians. Compared with non-psychotropic drugs, documentation of prescribing rationale for psychotropic drugs was less likely (OR = 0.24, 95% CI 0.16-0.36, p < 0.001). The rate of rationale documentation declined significantly over time, especially for psychotropic drugs (p < 0.001). Prescribing documentation of non-psychotropic drugs for people with mental illness is paradoxically more accurate than that of psychotropic drugs, and early-career junior doctors are increasingly shaping the EMRs of people receiving psychiatric care.
Project description:
Background: Multi-centre randomised controlled clinical trials play an important role in modern evidence-based medicine. The advantages of collecting data from more than one site are numerous, including accelerated recruitment and increased generalisability of results. Mixed models can be applied to account for potential clustering in the data, in particular when many small centres contribute patients to the study. Previously proposed sample size calculation methods for mixed models considered only balanced treatment allocations, an unlikely outcome in practice if block randomisation with reasonable block lengths is used.
Methods: We propose a sample size determination procedure for multi-centre trials comparing two treatment groups on a continuous outcome, modelling centre differences using random effects and allowing for arbitrary sample sizes. Block randomisation with fixed block length is assumed at each study site for subject allocation. Simulations are used to assess operating characteristics of the sample size approach, such as power. The proposed method is illustrated by an example in disease management systems.
Results: A sample size formula, as well as lower and upper boundaries for the required overall sample size, is given. We demonstrate the superiority of the new sample size formula over the conventional approach of ignoring the multi-centre structure, and show the influence of parameters such as block length and centre heterogeneity. Application of the procedure to the example data shows that large blocks require larger sample sizes when centre heterogeneity is present.
Conclusion: Unbalanced treatment allocation can result in substantial power loss when centre heterogeneity is present but not considered at the planning stage. When only a few patients per centre will be recruited, one has to weigh the risk of imbalance between treatment groups due to large blocks against the risk of unblinding due to small blocks. The proposed approach should be considered when planning multi-centre trials.
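The abstract above does not reproduce the paper's formula, so as a minimal numerical sketch of why ignoring the multi-centre structure understates the required size, the following uses the conventional two-sample calculation and the standard cluster design-effect inflation 1 + (m - 1)ρ. The function name, the parameter values, and the design-effect form are illustrative assumptions, not the authors' method:

```python
from math import ceil
from statistics import NormalDist

def multicentre_sample_size(delta, sigma, m, rho, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two means.

    delta : clinically relevant mean difference
    sigma : common standard deviation
    m     : average number of patients per centre (assumed value)
    rho   : intra-centre correlation, i.e. centre heterogeneity

    Returns (conventional n, n inflated by the design effect).
    """
    z = NormalDist().inv_cdf
    n_flat = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    design_effect = 1 + (m - 1) * rho  # standard cluster inflation factor
    return ceil(n_flat), ceil(n_flat * design_effect)

# With modest heterogeneity (rho = 0.05, 10 patients per centre) the
# conventional calculation understates the required per-group size:
print(multicentre_sample_size(delta=0.5, sigma=1.0, m=10, rho=0.05))
# → (63, 92)
```

With rho = 0 the two numbers coincide, matching the conventional approach; as rho or the centre size grows, the gap widens, which is the qualitative behaviour the abstract describes.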
Project description:
Background: In large multicentre trials in diverse settings, there is uncertainty about the need to adjust for centre variation in design and analysis. A key distinction is between variation in outcome (independent of treatment) and variation in treatment effect. Through re-analysis of the CRASH-2 trial (2010), this study clarifies when and how to use multi-level models for multicentre studies with binary outcomes.
Methods: CRASH-2 randomised 20,127 trauma patients across 271 centres and 40 countries to either single-dose tranexamic acid or identical placebo, with all-cause death at 4 weeks as the primary outcome. The trial data had a hierarchical structure, with patients nested within hospitals, which in turn were nested within countries. The re-analysis assessed the treatment effect and both patient- and centre-level baseline covariates as fixed effects in logistic regression models. Random effects were included to assess whether there was variation between countries, and between centres within countries, both in the underlying risk of death and in the treatment effect.
Results: In CRASH-2 there was significant variation between countries and between centres in death at 4 weeks, but no variation between countries or centres in the effect of treatment. The average treatment effect was not altered after accounting for centre and country variation.
Conclusions: It is important to distinguish between underlying variation in outcomes and variation in treatment effects; the former is common but the latter is not. Stratifying randomisation by centre overcomes many statistical problems, and including random intercepts in analysis may increase power and decrease bias in mean and standard error estimates.
Trial registration: Current Controlled Trials ISRCTN86750102, ClinicalTrials.gov NCT00375258, and South African Clinical Trial Register DOH-27-0607-1919.
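The distinction drawn above can be made concrete with a toy random-intercept logistic model: centres differ in baseline log-odds of the outcome, while the treatment log-odds ratio is shared. All numbers below are hypothetical, chosen purely to illustrate the idea; this is not CRASH-2 data or the authors' fitted model:

```python
from math import exp

def inv_logit(x):
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + exp(-x))

# Hypothetical centre-specific intercepts: underlying risk varies widely
# between centres (variation in outcome) ...
centre_intercepts = [-3.0, -2.0, -1.0]
beta_treat = -0.15  # ... but the treatment log-odds ratio is common to all

for a in centre_intercepts:
    p_control = inv_logit(a)
    p_treated = inv_logit(a + beta_treat)
    odds_ratio = (p_treated / (1 - p_treated)) / (p_control / (1 - p_control))
    # The OR equals exp(beta_treat) at every centre, however much the
    # baseline risk differs: no variation in treatment effect.
    print(f"baseline risk {p_control:.3f} -> OR {odds_ratio:.3f}")
```

A random-intercept model captures exactly this situation; a random treatment slope would only be needed if `beta_treat` itself varied by centre, which the re-analysis found no evidence of.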
Project description:
Aims: The aim of the PULsE-AI trial was to assess the effectiveness of a machine learning risk-prediction algorithm, in conjunction with diagnostic testing, for identifying undiagnosed atrial fibrillation (AF) in primary care in England.
Methods and results: Eligible participants (aged ≥30 years without an AF diagnosis; n = 23 745) from six general practices in England were randomized into intervention and control arms. Intervention arm participants identified by the algorithm as at high risk of undiagnosed AF (n = 944) were invited for diagnostic testing (n = 256 consented); those who did not accept the invitation, and all control arm participants, were managed routinely. The primary endpoint was the proportion of AF, atrial flutter, and fast atrial tachycardia diagnoses during the trial (June 2019-February 2021) in high-risk participants. AF and related arrhythmias were diagnosed in 5.63% and 4.93% of high-risk participants in the intervention and control arms, respectively {odds ratio (OR) [95% confidence interval (CI)]: 1.15 (0.77-1.73), P = 0.486}. Among intervention arm participants who underwent diagnostic testing (28.1%), 9.41% received AF and related arrhythmia diagnoses [vs. 4.93% (control); OR (95% CI): 2.24 (1.31-3.73), P = 0.003].
Conclusion: The AF risk-prediction algorithm accurately identified high-risk participants in both arms. While the proportions of AF and related arrhythmia diagnoses did not differ significantly between high-risk arms, intervention arm participants who underwent diagnostic testing were twice as likely to receive arrhythmia diagnoses as those managed with routine care. The algorithm could be a valuable tool for selecting primary care groups at high risk of undiagnosed AF who may benefit from diagnostic testing.
Project description:
Introduction: Dissemination and implementation (D&I) scientists are key members of collaborative, interdisciplinary clinical and translational research teams. Yet early-career D&I researchers (ECRs) have few guidelines for cultivating productive research collaborations. We developed recommendations for ECRs in D&I when serving as collaborators or co-investigators.
Methods: We employed a consensus-building approach: (1) group discussions to identify three areas of interest: "Marketing yourself" (describing your value to non-D&I collaborators), "Collaboration considerations" (contributions during proposal development), and "Responsibilities following project initiation" (defining your role throughout projects); (2) a first survey and focus groups to iteratively rank and refine sub-domains within each area; (3) a second survey and expert input on the clarity and content of sub-domains; and (4) iterative development of key recommendations.
Results: Forty-four D&I researchers completed the first survey, 12 of whom attended one of three focus groups. Twenty-nine D&I researchers completed the second survey, and 10 experts provided input. We identified 25 recommendations. Findings suggest unique collaboration strengths (e.g., partnership-building) and challenges (e.g., an unclear link to career milestones) for early-career D&I researchers, and underscore the value of ongoing training and mentorship for ECRs and the need to intersect collaborative D&I efforts with health equity principles.
Conclusions: Research collaborations are essential in clinical and translational research. We identified recommendations to help D&I ECRs become productive research collaborators, including training and support needs for the field. Findings suggest an opportunity to examine research collaboration needs among early-career D&I scientists, and to provide guidance on how to successfully deliver mentorship and integrate health equity principles into collaborative research.