Project description: The pooled estimate of the average effect is of primary interest when fitting the random-effects model for meta-analysis. However, estimates of study-specific effects, such as those displayed on forest plots, are also often of interest. In this tutorial, we present the case, with the accompanying statistical theory, for estimating the study-specific true effects using so-called 'empirical Bayes estimates' or 'Best Linear Unbiased Predictions' under the random-effects model. These estimates can be accompanied by prediction intervals that indicate a plausible range for the study-specific true effects. We coalesce and elucidate the available literature and illustrate the methodology using two published meta-analyses as examples. We also perform a simulation study, which reveals that the coverage probability of study-specific prediction intervals is substantially too low when the between-study variance is small but not negligible. Researchers need to be aware of this defect when interpreting prediction intervals. We also show how empirical Bayes estimates, accompanied by study-specific prediction intervals, can embellish forest plots. We hope that this tutorial will provide a clear theoretical underpinning for this methodology and encourage its widespread adoption.
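A minimal sketch of the shrinkage idea described above, assuming normal within-study errors and a plug-in estimate of the between-study variance tau^2 (e.g. from REML or DerSimonian-Laird). The function name `eb_blups`, the particular prediction-variance approximation, and all numbers in the toy example are illustrative, not taken from the tutorial.

```python
import numpy as np
from scipy import stats

def eb_blups(y, s2, tau2, alpha=0.05):
    """Empirical Bayes (BLUP) estimates of study-specific true effects
    under the random-effects model, with approximate prediction intervals.

    y    : observed study effect estimates
    s2   : within-study variances
    tau2 : estimated between-study variance
    """
    y, s2 = np.asarray(y, float), np.asarray(s2, float)
    w = 1.0 / (s2 + tau2)                  # random-effects weights
    mu_hat = np.sum(w * y) / np.sum(w)     # pooled average effect
    B = tau2 / (tau2 + s2)                 # shrinkage factor per study
    blup = mu_hat + B * (y - mu_hat)       # shrink each estimate toward mu_hat
    # one common approximation to the prediction variance: a within-study
    # component plus the contribution of uncertainty in mu_hat
    var = B * s2 + (1.0 - B) ** 2 / np.sum(w)
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    return blup, blup - z * np.sqrt(var), blup + z * np.sqrt(var)

# toy example: three studies (log odds ratios and their variances)
blup, lo, hi = eb_blups(y=[0.2, 0.5, -0.1], s2=[0.04, 0.09, 0.05], tau2=0.02)
print(blup, lo, hi)
```

Note that this plug-in interval treats tau^2 as known, which is one reason such intervals can undercover when the between-study variance is small but not negligible, as the simulation study above reports.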
Project description: Background: A prediction interval represents a clinical interpretation of heterogeneity. The aim of this study was to determine the prevalence of prediction interval reporting in orthodontic random-effects meta-analyses. The corroboration between effect size estimates with 95% confidence intervals (CIs) and prediction intervals was also explored. Materials and methods: Systematic reviews (SRs) published between 1 January 2010 and 31 January 2021 containing at least one random-effects meta-analysis (minimum of three trials) were identified electronically. SR and meta-analysis characteristics were extracted, and prediction intervals, where possible, were calculated. Descriptive statistics and the percentage of meta-analyses in which the prediction interval changed the interpretation based on the 95% CI were calculated. Fisher's exact test was used to examine associations between the study variables and the reporting of prediction intervals. Results: One hundred and twenty-one SRs were included. The median number of SR authors was 5 (interquartile range: 4-6). Prediction intervals were reported in only 19.0% (N = 23/121) of meta-analyses. Of 95 meta-analyses, in only 6 (6.3%, N = 6/95) was the 95% CI corroborated by the prediction interval. In 60 meta-analyses (63.3%, N = 60/95), a 95% CI indicating a statistically significant result was not corroborated by the corresponding prediction interval. Conclusions: Within the study timeframe, reporting of prediction intervals was not routinely undertaken in orthodontic meta-analyses, possibly due to a lack of awareness. In future orthodontic random-effects meta-analyses containing a minimum of three trials, reporting of prediction intervals is advocated, as this gives an indication of the range of the expected effect of treatment interventions.
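The abstract does not state which formula was used to calculate the missing prediction intervals; a common choice for random-effects meta-analyses is the t-based interval of Higgins, Thompson, and Spiegelhalter, sketched below. The function name and the example values are illustrative. Because the interval uses k - 2 degrees of freedom for k trials, it is only defined for k >= 3, which matches the minimum-of-three-trials criterion above.

```python
import numpy as np
from scipy import stats

def prediction_interval(mu_hat, se_mu, tau2, k, alpha=0.05):
    """Prediction interval for the effect in a new study
    (Higgins-Thompson-Spiegelhalter form): a t-quantile with
    k - 2 degrees of freedom applied to the combined standard
    deviation of heterogeneity and pooled-estimate uncertainty."""
    t = stats.t.ppf(1.0 - alpha / 2.0, df=k - 2)
    half_width = t * np.sqrt(tau2 + se_mu ** 2)
    return mu_hat - half_width, mu_hat + half_width

# toy example: pooled SMD of 0.30 (SE 0.10), tau^2 = 0.05, from 8 trials
print(prediction_interval(0.30, 0.10, 0.05, k=8))
```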
Project description: Population forecasts entail a significant amount of uncertainty, especially for long-range horizons and for places with small or rapidly changing populations. This uncertainty can be dealt with by presenting a range of projections or by developing statistical prediction intervals. The latter can be based on models that incorporate the stochastic nature of the forecasting process, on empirical analyses of past forecast errors, or on a combination of the two. In this article, we develop and test prediction intervals based on empirical analyses of past forecast errors for counties in the United States. Using decennial census data from 1900 to 2000, we apply trend extrapolation techniques to develop a set of county population forecasts; calculate forecast errors by comparing forecasts to subsequent census counts; and use the distribution of errors to construct empirical prediction intervals. We find that empirically based prediction intervals provide reasonably accurate predictions of the precision of population forecasts but provide little guidance regarding their tendency to be too high or too low. We believe the construction of empirically based prediction intervals will help users of small-area population forecasts measure and evaluate the uncertainty inherent in population forecasts and plan more effectively for the future.
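A minimal sketch of the error-distribution approach described above, assuming past errors are recorded as percent errors of the forecast relative to the subsequent census count. The function name `empirical_pi` and the simulated stand-in errors are illustrative; the article's actual error archive and interval construction may differ.

```python
import numpy as np

def empirical_pi(point_forecast, past_pct_errors, coverage=0.90):
    """Empirical prediction interval: attach the alpha/2 and 1 - alpha/2
    quantiles of past percent forecast errors to a new point forecast."""
    lo_q = (1.0 - coverage) / 2.0
    lo_err, hi_err = np.quantile(past_pct_errors, [lo_q, 1.0 - lo_q])
    return point_forecast * (1.0 + lo_err), point_forecast * (1.0 + hi_err)

# toy example: stand-in percent errors from past 10-year county forecasts
rng = np.random.default_rng(0)
past_pct_errors = rng.normal(0.02, 0.15, size=500)
print(empirical_pi(50_000, past_pct_errors))
```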
Project description: Problems of finding confidence intervals (CIs) and prediction intervals (PIs) for two-parameter negative binomial distributions are considered. Simple CIs for the mean of a two-parameter negative binomial distribution, based on some large-sample methods, are proposed and compared with the likelihood CIs. The proposed CIs are not only simple to compute but are also better than the likelihood CIs for moderate sample sizes. Prediction intervals for the mean of a future sample from a two-parameter negative binomial distribution are also proposed and evaluated for their accuracy. The methods are illustrated using two examples with real-life data sets.
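The abstract does not specify which large-sample methods are proposed; purely for orientation, a baseline Wald-type interval for the negative binomial mean might look like the sketch below, using the sample variance as a plug-in for the variance mu + mu^2/r implied by the two-parameter model. All names and numbers are illustrative.

```python
import numpy as np
from scipy import stats

def nb_mean_wald_ci(x, alpha=0.05):
    """Large-sample (Wald-type) CI for the mean of a negative binomial
    sample; the sample variance estimates mu + mu^2/r without needing
    a separate estimate of the dispersion parameter r."""
    x = np.asarray(x, float)
    n, m, v = len(x), x.mean(), x.var(ddof=1)
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    half = z * np.sqrt(v / n)
    return m - half, m + half

# toy example: simulated counts from a negative binomial distribution
rng = np.random.default_rng(1)
sample = rng.negative_binomial(n=5, p=0.4, size=40)
print(nb_mean_wald_ci(sample))
```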
Project description: Heart failure is the most common cause of death in both males and females around the world. Cardiovascular diseases (CVDs) in particular are the main cause of death worldwide, accounting for 30% of all fatalities in the United States and 45% in Europe. Artificial intelligence (AI) approaches such as machine learning (ML) and deep learning (DL) models are playing an important role in the advancement of heart failure therapy. The main objective of this study was to perform a network meta-analysis of patients with heart failure, stroke, hypertension, and diabetes by comparing the ML and DL models. A comprehensive search of five electronic databases was performed: ScienceDirect, EMBASE, PubMed, Web of Science, and IEEE Xplore. The search strategy followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The methodological quality of the studies was assessed following the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) guidelines. A random-effects network meta-analysis with categorical data was used, with subgroup testing for all four conditions and calculation of odds ratios (ORs) with 95% confidence intervals (CIs). Pooled network forest plots, funnel plots, and the league table, which shows the best algorithms for each outcome, were analyzed. Seventeen studies, with a total of 285,213 patients with CVDs, were included in the network meta-analysis. The statistical evidence indicated that the DL algorithms performed well in the prediction of heart failure, with an AUC of 0.843 (CI [0.840-0.845]), while among the ML algorithms, the gradient boosting machine (GBM) achieved an average accuracy of 91.10% in predicting heart failure. An artificial neural network (ANN) performed well in the prediction of diabetes, with an OR of 0.0905 (CI [0.0489; 0.1673]). The support vector machine (SVM) performed best for the prediction of stroke, with an OR of 25.0801 (CI [11.4824; 54.7803]). Random forest (RF) performed well in the prediction of hypertension, with an OR of 10.8527 (CI [4.7434; 24.8305]). The findings of this work suggest that DL models can effectively advance the prediction of and knowledge about heart failure, but there is a lack of literature on DL methods in the field of CVDs; as a result, more DL models should be applied in this field. To confirm our findings, further meta-analyses (e.g., Bayesian network meta-analysis) and thorough research with larger numbers of patients are encouraged.
Project description: Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)'s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks.
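A sketch of the scoring step described above: the standard closed-form CRPS of a Gaussian density, which can be fitted to past forecast errors and then scored against a realized value (lower is better). The error values shown are stand-ins, not AEO data.

```python
import numpy as np
from scipy import stats

def crps_gaussian(y, mu, sigma):
    """Closed-form continuous ranked probability score of a Gaussian
    forecast N(mu, sigma^2) against the observed value y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * stats.norm.cdf(z) - 1.0)
                    + 2.0 * stats.norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# fit a Gaussian to past forecast errors, then score it on a new outcome
past_errors = np.array([-0.12, 0.05, 0.30, -0.08, 0.17])  # stand-in values
mu, sigma = past_errors.mean(), past_errors.std(ddof=1)
print(crps_gaussian(y=0.10, mu=mu, sigma=sigma))
```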
Project description: Uncertainty quantification is a fundamental problem in the analysis and interpretation of synthetic control (SC) methods. We develop conditional prediction intervals in the SC framework, and provide conditions under which these intervals offer finite-sample probability guarantees. Our method allows for covariate adjustment and non-stationary data. The construction begins by noting that the statistical uncertainty of the SC prediction is governed by two distinct sources of randomness: one coming from the construction of the (likely misspecified) SC weights in the pre-treatment period, and the other coming from the unobservable stochastic error in the post-treatment period when the treatment effect is analyzed. Accordingly, our proposed prediction intervals are constructed taking into account both sources of randomness. For implementation, we propose a simulation-based approach along with finite-sample-based probability bound arguments, naturally leading to principled sensitivity analysis methods. We illustrate the numerical performance of our methods using empirical applications and a small simulation study. Python, R, and Stata software packages implementing our methodology are available.
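For orientation only, a deliberately naive sketch of a synthetic control prediction interval built from pre-treatment residual quantiles. Unlike the paper's method, it ignores the uncertainty from estimating the (likely misspecified) SC weights, which is precisely the first source of randomness the paper accounts for; the function name and toy data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def sc_naive_pi(Y0_pre, y1_pre, Y0_post, coverage=0.90):
    """Naive SC prediction interval: nonnegative-least-squares weights,
    renormalized toward the simplex, plus pre-treatment residual quantiles.
    Ignores weight-estimation uncertainty entirely."""
    w, _ = nnls(Y0_pre, y1_pre)                 # nonnegative donor weights
    w = w / w.sum() if w.sum() > 0 else w       # renormalize to sum to one
    resid = y1_pre - Y0_pre @ w                 # pre-treatment fit errors
    lo_q = (1.0 - coverage) / 2.0
    lo, hi = np.quantile(resid, [lo_q, 1.0 - lo_q])
    y_hat = Y0_post @ w                         # predicted counterfactual
    return y_hat + lo, y_hat + hi

# toy example: 20 pre-periods, 5 donor units, 4 post-periods
rng = np.random.default_rng(3)
Y0_pre = rng.normal(10.0, 1.0, (20, 5))
y1_pre = Y0_pre @ [0.6, 0.4, 0.0, 0.0, 0.0] + rng.normal(0.0, 0.1, 20)
Y0_post = rng.normal(10.0, 1.0, (4, 5))
print(sc_naive_pi(Y0_pre, y1_pre, Y0_post))
```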
Project description: Objective: This study aimed to determine the efficacy and acceptability of pharmacotherapies for cannabis use disorder (CUD). Methods: We conducted a systematic review and frequentist network meta-analysis, searching five electronic databases for randomized placebo-controlled trials of individuals diagnosed with CUD receiving pharmacotherapy with or without concomitant psychotherapy. Primary outcomes were the reduction in cannabis use and retention in treatment. Secondary outcomes were adverse events, discontinuation due to adverse events, total abstinence, withdrawal symptoms, cravings, and CUD severity. We applied a frequentist, random-effects network meta-analysis model to pool effect sizes across trials using standardized mean differences (SMD, g) and rate ratios (RR) with their 95% confidence intervals. Results: We identified a total of 24 trials (n=1912, 74.9% male, mean age 30.2 years). Nabilone (d=-4.47 [-8.15; -0.79]), topiramate (d=-3.80 [-7.06; -0.54]), and fatty acid amide hydrolase inhibitors (d=-2.30 [-4.75; 0.15]) reduced cannabis use relative to placebo. Dronabinol improved retention in treatment (RR=1.27 [1.02; 1.57]), while topiramate worsened treatment retention (RR=0.62 [0.42; 0.91]). Gabapentin reduced cannabis cravings (d=-2.42 [-3.53; -1.32]), while vilazodone worsened craving severity (d=1.69 [0.71; 2.66]). Buspirone (RR=1.14 [1.00; 1.29]), venlafaxine (RR=1.78 [1.40; 2.26]), and topiramate (RR=9.10 [1.27; 65.11]) caused more adverse events, and topiramate also caused more dropouts due to adverse events. Conclusions: Based on this review, some medications appear to show promise for treating individual aspects of CUD. However, there is a lack of robust evidence to support any particular pharmacological treatment, and additional studies are needed to expand the evidence base for CUD pharmacotherapy. While medication strategies may one day become an integral component of CUD treatment, psychosocial interventions should remain the first line given the limitations in the available evidence.
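For readers unfamiliar with the continuous effect measure pooled above, a short sketch of the bias-corrected standardized mean difference (Hedges' g) computed from two arms' summary statistics; the toy numbers are illustrative, not trial data.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference with the small-sample
    bias correction factor J applied to Cohen's d."""
    df = n1 + n2 - 2
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled          # Cohen's d
    J = 1.0 - 3.0 / (4.0 * df - 1.0)  # small-sample correction
    return J * d

# toy example: change in days of cannabis use, treatment vs placebo arm
print(hedges_g(m1=-6.0, sd1=4.0, n1=40, m2=-3.5, sd2=4.5, n2=38))
```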
Project description: Supporting decision making in drug development is a key purpose of pharmacometric models. Pharmacokinetic models predict exposures under alternative posologies or in different populations. Pharmacodynamic models predict drug effects based on drug exposure, disease, or other patient characteristics. Estimation uncertainty is commonly reported for model parameters; however, prediction uncertainty is the key quantity for clinical decision making. This tutorial reviews confidence and prediction intervals, along with associated calculation methods, and encourages pharmacometricians to report them routinely.
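A minimal simulation sketch of the distinction drawn above, using an assumed one-compartment IV-bolus model: the confidence interval propagates only parameter-estimation uncertainty for the typical subject, while the prediction interval additionally includes between-subject variability. The model, parameter values, and variability magnitudes are all illustrative assumptions, not from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(2)

def conc(CL, V, dose=100.0, t=4.0):
    """Concentration at time t for a one-compartment IV-bolus model."""
    return dose / V * np.exp(-CL / V * t)

# illustrative estimates: typical clearance and volume, their standard
# errors, and log-normal between-subject variability on clearance
CL_hat, V_hat, se_CL, se_V, omega_CL = 5.0, 50.0, 0.4, 3.0, 0.3
n = 10_000

CLs = rng.normal(CL_hat, se_CL, n)   # draws reflect estimation uncertainty
Vs = rng.normal(V_hat, se_V, n)
ci = np.quantile(conc(CLs, Vs), [0.025, 0.975])      # CI: typical subject

CL_ind = CLs * np.exp(rng.normal(0.0, omega_CL, n))  # add BSV on clearance
pi = np.quantile(conc(CL_ind, Vs), [0.025, 0.975])   # PI: a new subject
print("95% CI:", ci, "95% PI:", pi)
```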