Project description:The accuracy of a diagnostic test, which is often quantified by a pair of measures such as sensitivity and specificity, is critical for medical decision making. Separate studies of an investigational diagnostic test can be combined through meta-analysis; however, such an analysis can be threatened by publication bias. To the best of our knowledge, there is no existing method that accounts for publication bias in the meta-analysis of diagnostic tests involving bivariate outcomes. In this paper, we extend the Copas selection model from univariate outcomes to bivariate outcomes for the correction of publication bias when the probability of a study being published can depend on its sensitivity, specificity, and the associated standard errors. We develop an expectation-maximization algorithm for the maximum likelihood estimation under the proposed selection model. We investigate the finite sample performance of the proposed method through simulation studies and illustrate the method by assessing a meta-analysis of 17 published studies of a rapid diagnostic test for influenza.
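For intuition, the univariate Copas selection mechanism that this paper generalizes to bivariate outcomes can be sketched in a few lines; the parameter names `a` and `b` and the numeric values below are illustrative choices, not taken from the paper.

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def copas_publication_prob(se, a, b):
    """Marginal publication probability under the univariate Copas
    selection model: a study is selected when the latent variable
    z = a + b/se + delta (delta ~ N(0, 1)) is positive, so
    P(publish) = Phi(a + b/se). With b > 0, imprecise studies
    (large se) are less likely to be published."""
    return norm_cdf(a + b / se)

# Illustrative values: a small, noisy study vs a large, precise one.
p_small = copas_publication_prob(se=0.5, a=-1.0, b=0.5)  # Phi(0) = 0.5
p_large = copas_publication_prob(se=0.1, a=-1.0, b=0.5)  # Phi(4), near 1
```

In the full model, delta is also correlated with the study's effect estimate, which is what induces bias in the observed studies; the paper's contribution is to let selection depend jointly on sensitivity, specificity, and their standard errors.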
Project description:Background: Systematic reviews and meta-analyses of pre-clinical studies, in vivo animal experiments in particular, can influence clinical care. Publication bias is one of the major threats to validity in systematic reviews and meta-analyses. Previous empirical studies suggested that systematic reviews and meta-analyses became increasingly prevalent up to 2010 and found evidence of compromised methodological rigor, with a trend towards improvement. We aim to comprehensively summarize and update the evidence base on systematic reviews and meta-analyses of animal studies, in particular their methodological quality and their assessment of publication bias. Methods/design: The objectives of this systematic review are as follows: • To investigate the epidemiology of published systematic reviews of animal studies until the present. • To examine methodological features of systematic reviews and meta-analyses of animal studies, with special attention to the assessment of publication bias. • To investigate the influence of systematic reviews of animal studies on clinical research by examining citations of the systematic reviews by clinical studies. Eligible studies for this systematic review are systematic reviews and meta-analyses that summarize in vivo animal experiments with the purpose of reviewing animal evidence to inform human health. We will exclude genome-wide association studies and animal experiments whose main purpose is to learn more about fundamental biology, physical functioning, or behavior. In addition to including systematic reviews and meta-analyses identified by other empirical studies, we will systematically search Ovid Medline, Embase, ToxNet, and ScienceDirect from 2009 to January 2013 for further eligible studies, without language restrictions. Two reviewers working independently will assess titles, abstracts, and full texts for eligibility and extract relevant data from included studies.
Data reporting will involve a descriptive summary of meta-analyses and systematic reviews. Discussion: Results are expected to be publicly available later in 2013 and may form the basis for recommendations to improve the quality of systematic reviews and meta-analyses of animal studies and their use with respect to clinical care.
Project description:The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention. We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies. Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of both types of bias, and efforts should be concentrated on improving the reporting of trials.
Project description:Background: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Methodology/principal findings: In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40-62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies. Conclusions: This update does not change the conclusions of the review in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols.
Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.
Project description:Background: To investigate (1) how many acupuncture clinical trials are registered with the WHO International Clinical Trials Registry Platform (ICTRP) and what patterns they demonstrate, and (2) the publication of articles from acupuncture clinical trials registered with the ICTRP. Methods: The search strategy in the ICTRP was: intervention, acupuncture; recruitment status, all; date of registration, 1 Jan 1990 to 31 Dec 2018. We searched PubMed for indexed articles using trial IDs on 25 Feb 2019. When a paper was published, we counted the number of weeks from the date of registration with the ICTRP to the date of publication to define the time to publication. We divided the analysis period into six 3-year periods and, for each period, measured the proportion of trials published and the time from registration to publication by the Kaplan-Meier method. Results: Forty-three countries/areas conducted at least one acupuncture clinical trial. The total number of registrations was 1758; China, the USA, and the Republic of Korea accounted for 61% of those registrations. The proportion published was 178/1758 (10%) for fully published papers and 141/1758 (8%) for protocol papers. Conclusions: A substantial increase in registrations from China, the Republic of Korea, Iran, Brazil, and Japan was observed, which may be attributed to improved awareness of the CONSORT statement. However, the rate of fully published papers is low at 10%. The publication of results of acupuncture clinical trials should also be rigorously mandated.
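The time-to-publication analysis described in this abstract can be mimicked with a minimal hand-rolled Kaplan-Meier estimator; the toy data below are invented for illustration and are not the study's registrations.

```python
def kaplan_meier(times, published):
    """Kaplan-Meier estimate of the probability that a registered
    trial remains unpublished beyond week t.
    times: weeks from registration to publication (or to last
           follow-up for trials never published),
    published: 1 if the results were published (event),
               0 if still unpublished at follow-up (right-censored).
    Returns (week, S(week)) pairs at each publication time."""
    data = sorted(zip(times, published))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        events = sum(1 for tt, e in data if tt == t and e == 1)
        total = sum(1 for tt, _ in data if tt == t)
        if events:
            s *= 1.0 - events / n_at_risk
            curve.append((t, s))
        n_at_risk -= total
    return curve

# Toy cohort: 3 of 5 trials published (at weeks 52, 104, 104),
# 2 still unpublished at last follow-up (censored).
curve = kaplan_meier([52, 104, 104, 150, 200], [1, 1, 1, 0, 0])
```

Treating unpublished trials as censored rather than dropping them is what distinguishes this estimate from a naive publication proportion.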
Project description:Objectives: To determine the effectiveness of interventions designed to prevent or reduce publication and related biases. Study design and setting: We searched multiple databases and performed manual searches using terms related to publication bias and known interventions against publication bias. We dually reviewed citations and assessed risk of bias. We synthesized results by intervention and outcomes measured and graded the quality of the evidence (QoE). Results: We located 38 eligible studies. The use of prospective trial registries (PTR) has increased since 2005 (seven studies, moderate QoE); however, positive outcome-reporting bias is prevalent (14 studies, low QoE), and information in nonmandatory fields is vague (10 studies, low QoE). Disclosure of financial conflict of interest (CoI) is inadequate (five studies, low QoE). Blinding peer reviewers may reduce geographical bias (two studies, very low QoE), and open-access publishing does not discriminate against authors from low-income countries (two studies, very low QoE). Conclusion: The use of PTR and CoI disclosures is increasing; however, the adequacy of their use requires improvement. The effect of open-access publication and blinding of peer reviewers on publication bias is unclear, as is the effect of other interventions such as electronic publication and authors' rights to publish their results.
Project description:Research on goal priming asks whether the subtle activation of an achievement goal can improve task performance. Studies in this domain employ a range of priming methods, such as surreptitiously displaying a photograph of an athlete winning a race, and a range of dependent variables including measures of creativity and workplace performance. Chen, Latham, Piccolo and Itzchakov (Chen et al. 2021 J. Appl. Psychol. 70, 216-253) recently undertook a meta-analysis of this research and reported positive overall effects in both laboratory and field studies, with field studies yielding a moderate-to-large effect that was significantly larger than that obtained in laboratory experiments. We highlight a number of issues with Chen et al.'s selection of field studies and then report a new meta-analysis (k = 13, N = 683) that corrects these. The new meta-analysis reveals suggestive evidence of publication bias and low power in goal priming field studies. We conclude that the available evidence falls short of demonstrating goal priming effects in the workplace, and offer proposals for how future research can provide stronger tests.
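One standard way to probe a small meta-analysis like this for publication bias is Egger's regression test for funnel-plot asymmetry; the sketch below is a generic illustration of that test with invented data, not a reproduction of the authors' analysis.

```python
def egger_regression(effects, ses):
    """Egger's test: regress the standardized effect z_i = y_i/se_i
    on precision x_i = 1/se_i. The fitted slope estimates the
    underlying effect; an intercept far from zero indicates
    funnel-plot asymmetry, one common signature of publication bias."""
    z = [y / s for y, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    slope = sxz / sxx
    intercept = mz - slope * mx
    return intercept, slope

# Symmetric toy data (same true effect at every precision):
# the intercept should be ~0 and the slope ~0.5.
intercept, slope = egger_regression([0.5, 0.5, 0.5], [0.1, 0.2, 0.4])
```

With so few studies (k = 13 here), such tests have low power themselves, which is why the abstract describes the evidence of bias as only "suggestive".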
Project description:In this study, we explore the potential for publication bias in market simulation results that estimate the effect of US ethanol expansion on corn prices. We provide a new test of whether the publication process routes market simulation results into one of two narratives: food-versus-fuel or greenhouse gas (GHG) emissions. Our research question is whether model results with either a high price effect or a large land-use effect are favored for publication in one body of literature or the other; that is, a model that generates larger price effects might be more readily published in the food-versus-fuel literature, while a model that generates larger land use change and GHG emissions might find a home in the GHG emissions literature. We develop a test for publication bias based on matching narrative and normalized price effects from simulated market models. Our approach thus differs from past studies of publication bias, which typically focus on statistically estimated parameters. This shift could have broad implications: if more studies assess publication bias in quantitative results that are not statistically estimated parameters, such a body of literature could explore whether practices common to statistical or other methods tend to encourage or deter publication bias. In the present case, our findings do not detect a relationship between narrative orientation (food-versus-fuel or GHG) and corn price effects. The results are relevant to debates about biofuel impacts, and our approach can inform the publication bias literature more generally.
Project description:Previously, we reviewed 1052 randomized-controlled trial abstracts presented at the American Society of Anesthesiologists annual meetings from 2001-2004. We found significant positive publication bias in the period examined, with the odds ratio for abstracts with positive results proceeding to journal publication over those with null results being 2.01 [95% confidence interval: 1.52, 2.66; P < 0.001]. Mandatory trial registration was introduced in 2005 as a required standard for publication. We sought to examine whether mandatory trial registration has decreased publication bias in the anesthesia and perioperative medicine literature. We reviewed all abstracts from the 2010-2016 American Society of Anesthesiologists meetings that reported on randomized-controlled trials in humans. We scored the result of each abstract as positive or null according to a priori definitions. We systematically searched for any subsequent publication of the studies and calculated the odds ratio for journal publication, comparing positive vs null studies. We compared the odds ratio from the 2010-2016 abstracts (post-mandatory trial registration) with the odds ratio from the 2001-2004 abstracts (pre-mandatory trial registration) as a ratio of odds ratios. We defined a 33% decrease in the odds ratio as significant, corresponding to a new odds ratio of 1.33. We reviewed 9789 abstracts; 1049 met inclusion criteria as randomized-controlled trials, with 542 (51.7%) of the abstracts going on to publication. The odds ratio for abstracts with positive results proceeding to journal publication was 1.28 [95% CI: 0.97, 1.67; P = 0.076]. With adjustment for sample size and abstract quality, the difference in publication rate between positive and null abstracts was statistically significant (odds ratio 1.34; 95% CI: 1.02, 1.76; P = 0.037). 
The ratio of odds ratios, comparing the odds ratio from the 2010-2016 abstracts (post-mandatory trial registration) to the odds ratio from the 2001-2004 abstracts (pre-mandatory trial registration), was 0.63 (95% CI: 0.43, 0.93; P = 0.021). We present the first study in the anesthesia and perioperative medicine literature to examine and compare publication bias over two discrete periods, before and after the implementation of mandatory trial registration. Our results suggest that the amount of publication bias has decreased markedly following the implementation of mandatory trial registration. However, some positive publication bias in the anesthesia and perioperative medicine literature remains.
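The two headline quantities in this abstract, the publication odds ratio and the ratio of odds ratios across periods, follow from standard 2x2-table formulas. A minimal sketch, with made-up counts rather than the study's data:

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d):
    """Publication odds ratio from a 2x2 table:
    a, b = published / unpublished among positive abstracts,
    c, d = published / unpublished among null abstracts.
    Returns (OR, se of log OR) via the usual large-sample formula."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, se_log

def ratio_of_odds_ratios(or1, se1, or2, se2):
    """Ratio of two odds ratios with a 95% CI, combining the
    log-scale standard errors in quadrature."""
    ror = or1 / or2
    se = sqrt(se1 ** 2 + se2 ** 2)
    lo = exp(log(ror) - 1.96 * se)
    hi = exp(log(ror) + 1.96 * se)
    return ror, lo, hi

# Illustrative counts only, not the study's data:
or_pre, se_pre = odds_ratio(30, 10, 20, 20)    # earlier period
or_post, se_post = odds_ratio(25, 15, 22, 18)  # later period
ror, lo, hi = ratio_of_odds_ratios(or_post, se_post, or_pre, se_pre)
```

A ratio of odds ratios below 1 with a CI excluding 1, as in the abstract's 0.63 (0.43, 0.93), indicates that the publication advantage of positive abstracts shrank between the two periods.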