Project description: The concordance probability is a widely used measure to assess discrimination of prognostic models with binary and survival endpoints. We formally define the concordance probability for a prognostic model of the absolute risk of an event of interest in the presence of competing risks and relate it to recently proposed time-dependent area under the receiver operating characteristic curve measures. For right-censored data, we investigate inverse probability of censoring weighted (IPCW) estimates of a truncated concordance index based on a working model for the censoring distribution. We demonstrate consistency and asymptotic normality of the IPCW estimate if the working model is correctly specified and derive an explicit formula for the asymptotic variance under independent censoring. The small sample properties of the estimator are assessed in a simulation study, including under misspecification of the working model. We further illustrate the methods by computing the concordance probability for a prognostic model of coronary heart disease (CHD) events in the presence of the competing risk of non-CHD death.
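As a rough illustration, a truncated IPCW concordance index of this kind can be sketched for the single-event (non-competing-risks) case, taking a plain Kaplan-Meier estimate of the censoring distribution as the working model; the function and variable names are illustrative, not the project's implementation, and the competing-risks version would additionally restrict anchor subjects to cause-1 events:

```python
import numpy as np

def ipcw_cindex(time, event, risk, tau):
    """Truncated IPCW concordance index, single-event sketch.
    time:  observed follow-up times
    event: 1 = event observed, 0 = censored
    risk:  model risk score (higher score = higher predicted risk)
    tau:   truncation time"""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    risk = np.asarray(risk, float)
    # Kaplan-Meier estimate of the censoring survival function G:
    # the "working model" here is simple independent censoring.
    order = np.argsort(time)
    t_ord = time[order]
    cens = (event[order] == 0).astype(float)
    at_risk = len(t_ord) - np.arange(len(t_ord))
    km = np.cumprod(1.0 - cens / at_risk)

    def G_minus(s):  # left-continuous evaluation G(s-)
        idx = np.searchsorted(t_ord, s, side="left") - 1
        return 1.0 if idx < 0 else km[idx]

    num = den = 0.0
    for i in range(len(time)):
        if event[i] != 1 or time[i] >= tau:
            continue  # pairs are anchored at uncensored events before tau
        w = 1.0 / G_minus(time[i]) ** 2  # IPCW pair weight
        for j in range(len(time)):
            if time[j] > time[i]:
                den += w
                num += w * ((risk[i] > risk[j]) + 0.5 * (risk[i] == risk[j]))
    return num / den
```

With no censoring the weights are all one and the statistic reduces to the usual concordance fraction restricted to events before tau.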
Project description: Often in public health, we are interested in promoting routine preventive screenings (e.g., blood glucose monitoring, hypertension screening, or mammography). Evaluating novel interventions to encourage frequent screenings using randomized controlled trials can help inform evidence-based health promotion programs. When the desired behavior change is a recurrent event, specifying the most meaningful study outcomes may prove challenging. To understand the efficiency of multiple approaches for evaluating an intervention seeking to increase regular health screenings, we (a) simulated several replications of a trial with a positive intervention effect under various censoring scenarios, (b) formulated three different analytical outcome definitions (screening a certain number of times during the entire study period versus not, screening at least once within a clinically meaningful time period versus not, and the "hazard" or instantaneous rate of screening), and (c) compared them with regard to interpreting results and estimating power at different sample sizes. Approaches that better utilize detailed prospective data, while also accounting for within-participant correlations, are less likely to miss the actual underlying benefits conferred by a new prevention strategy than approaches that rely on a dichotomous measure derived from aggregating events over the study duration. Such approaches are also more powerful in realistic scenarios wherein some participants are lost to follow-up over time. Researchers should carefully consider the choice of analytical outcomes and strive to employ more efficient approaches that model comprehensive event-specific information, rather than summarizing repeated measures into less-informative dichotomous responses, when designing and conducting trials with recurrent preventive screenings.
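The three analytical outcome definitions can be made concrete for a single participant as below. The threshold `k` and the `window` are illustrative choices only, and the per-person-time rate in (c) is just the raw ingredient for a recurrent-event ("hazard") model that would also account for within-participant correlation:

```python
def screening_outcomes(events, followup, k=2, window=(0.0, 1.0)):
    """Compute the three analytic outcome definitions for one participant.
    events:   times of observed screenings
    followup: end of this participant's observation time"""
    n = len(events)
    screened_k_times = int(n >= k)                  # (a) dichotomous, whole study
    screened_in_window = int(any(window[0] <= t <= window[1]
                                 for t in events))  # (b) at least once in window
    rate = n / followup                             # (c) events per person-time
    return screened_k_times, screened_in_window, rate
```

Definition (a) discards all timing information, which is why it tends to be the least efficient of the three when follow-up is censored.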
Project description: The analysis of time-to-event data can be complicated by competing risks, which are events that alter the probability of, or completely preclude, the occurrence of an event of interest. This is distinct from censoring, which merely prevents us from observing the time at which the event of interest occurs. However, the censoring distribution plays a vital role in the proportional subdistribution hazards model, a commonly used method for regression analysis of time-to-event data in the presence of competing risks. We present the equations that underlie the proportional subdistribution hazards model to highlight the way in which the censoring distribution is included in its estimation via risk set weights. By simulating competing risk data under a proportional subdistribution hazards model with different patterns of censoring, we examine the properties of the estimates from such a model when the censoring distribution is misspecified. We use an example from stem cell transplantation in multiple myeloma to illustrate the issue in real data. Models that correctly specified the censoring distribution performed better than those that did not, giving lower bias and variance in the estimate of the subdistribution hazard ratio. In particular, when the covariate of interest does not affect the censoring distribution but is used in calculating risk set weights, estimates from the model based on these weights may not reflect the correct likelihood structure and therefore may have suboptimal performance. The estimation of the censoring distribution can affect the accuracy and conclusions of a competing risks analysis, so it is important that this issue is considered carefully when analysing time-to-event data in the presence of competing risks.
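A sketch of the risk-set weights through which the censoring distribution enters this model: subjects with a competing event stay in later risk sets with weight w_i(t) = G(t) / G(min(T_i, t)), where G is an estimate of the censoring survival function. Here G is an unstratified Kaplan-Meier of the censoring times, one possible working model (and a misspecified one when censoring actually depends on covariates); names are illustrative:

```python
import numpy as np

def subdist_weights(time, status, grid):
    """Fine-Gray style risk-set weights w_i(t) = G(t) / G(min(T_i, t)).
    status: 0 = censored, 1 = event of interest, 2 = competing event.
    A subject with a competing event at T_i keeps a weight that decays
    as G(t) drops below G(T_i); risk-set membership itself (dropping
    subjects after their own cause-1 event or censoring) is handled
    separately in the partial likelihood and is not shown here."""
    time = np.asarray(time, float)
    status = np.asarray(status, int)
    # Kaplan-Meier of the censoring distribution (censoring as "event")
    order = np.argsort(time)
    t_ord = time[order]
    cens = (status[order] == 0).astype(float)
    at_risk = len(t_ord) - np.arange(len(t_ord))
    km = np.cumprod(1.0 - cens / at_risk)

    def G(s):  # right-continuous evaluation G(s)
        idx = np.searchsorted(t_ord, s, side="right") - 1
        return 1.0 if idx < 0 else km[idx]

    return np.array([[G(t) / G(min(Ti, t)) for t in grid] for Ti in time])
```

Without any censoring, G is identically one and every weight equals one, recovering the complete-data subdistribution risk sets.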
Project description: Quantile residual lifetime analysis is conducted to compare remaining lifetimes among groups for survival data. Evaluating residual lifetimes among groups after adjustment for covariates is often of interest. The current literature is limited to comparing two groups for independent data. We propose a pseudo-value approach to compare quantile residual lifetimes given covariates between multiple groups for independent and clustered survival data. The proposed method considers clustered event times and clustered censoring times in addition to independent event times and censoring times. We show that the method can also be used to compare multiple groups on the cause-specific residual life distribution in the competing risks setting, for which no current methods account for clustering. The empirical Type I errors and statistical power of the proposed method are examined in a simulation study, which shows that the proposed method controls Type I errors very well and has higher power than an existing method. The proposed method is illustrated by a bone marrow transplant data set.
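The generic jackknife device behind any pseudo-value approach can be sketched as follows. In practice `estimator` would be a censoring-adjusted estimate of the quantile residual lifetime (beyond this sketch), and the resulting pseudo-observations would be fed to standard regression/GEE machinery, with a working correlation structure absorbing the clustering:

```python
import numpy as np

def pseudo_values(data, estimator):
    """Leave-one-out (jackknife) pseudo-observations for a functional:
    PO_i = n * theta_hat - (n - 1) * theta_hat_{-i}.
    Once computed, the PO_i behave like ordinary responses and can be
    modeled directly against covariates and group indicators."""
    data = np.asarray(data, float)
    n = len(data)
    full = estimator(data)
    return np.array([n * full - (n - 1) * estimator(np.delete(data, i))
                     for i in range(n)])
```

A sanity check on the construction: for the sample mean, the pseudo-observations reduce exactly to the original observations.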
Project description: To evaluate the power to detect associations between SNPs and time-to-event outcomes across a range of pharmacogenomic study designs while comparing alternative regression approaches. Simulations were conducted to compare Cox proportional hazards modeling accounting for censoring and logistic regression modeling of a dichotomized outcome at the end of the study. The Cox proportional hazards model was demonstrated to be more powerful than the logistic regression analysis. The difference in power between the approaches was highly dependent on the rate of censoring. Initial evaluation of single-nucleotide polymorphism association signals using computationally efficient software with dichotomized outcomes provides an effective screening tool for some design scenarios, and thus has important implications for the development of analytical protocols in pharmacogenomic studies.
Project description: In many studies with a survival outcome, it is often not feasible to fully observe the primary event of interest. This often leads to heavy censoring and thus, difficulty in efficiently estimating survival or comparing survival rates between two groups. In certain diseases, baseline covariates and the event time of non-fatal intermediate events may be associated with overall survival. In these settings, incorporating such additional information may lead to gains in efficiency in estimation of survival and testing for a difference in survival between two treatment groups. If gains in efficiency can be achieved, it may then be possible to decrease the sample size of patients required for a study to achieve a particular power level or decrease the duration of the study. Most existing methods for incorporating intermediate events and covariates to predict survival focus on estimation of relative risk parameters and/or the joint distribution of events under semiparametric models. However, in practice, these model assumptions may not hold and hence may lead to biased estimates of the marginal survival. In this paper, we propose a semi-nonparametric two-stage procedure to estimate and compare t-year survival rates by incorporating intermediate event information observed before some landmark time, which serves as a useful approach to overcome semi-competing risks issues. In a randomized clinical trial setting, we further improve efficiency through an additional calibration step. Simulation studies demonstrate substantial potential gains in efficiency in terms of estimation and power. We illustrate our proposed procedures using an AIDS Clinical Trial Protocol 175 dataset by estimating survival and examining the difference in survival between two treatment groups: zidovudine and zidovudine plus zalcitabine.
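The two-stage landmark idea (condition on being event-free at the landmark t0, stratify landmark survivors by intermediate-event status, and recombine by the law of total probability) can be sketched as below. Censoring after the landmark is ignored here for clarity; the semi-nonparametric estimator described above handles it properly, and all names are illustrative:

```python
import numpy as np

def landmark_survival(time, inter_time, t0, t):
    """Landmark sketch of t-year survival:
    S(t) = P(T > t0) * sum_g P(group g | T > t0) * P(T > t | T > t0, g),
    where g indicates whether the intermediate event occurred before t0.
    time:       event times (fully observed in this sketch)
    inter_time: intermediate-event times (np.inf if no such event)"""
    time = np.asarray(time, float)
    inter_time = np.asarray(inter_time, float)
    alive = time > t0
    s_t0 = alive.mean()                      # P(T > t0)
    total = 0.0
    for g in (True, False):                  # intermediate event before t0?
        grp = alive & ((inter_time <= t0) == g)
        if grp.any():
            total += (grp.sum() / alive.sum()) * (time[grp] > t).mean()
    return s_t0 * total
```

With complete data the decomposition reproduces the marginal survival exactly; the efficiency gain arises under censoring, where the intermediate-event strata carry extra prognostic information.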
Project description: In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984-2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
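The construction of the cumulative (and optionally stabilized) weights can be sketched as follows, with the per-interval probabilities of remaining artificially uncensored assumed to come from some fitted censoring model such as pooled logistic regression, which is not shown; stabilization is one standard way to tame the extreme weights that arise under strong selection:

```python
import numpy as np

def ipc_weights(p_uncens, p_uncens_marg=None):
    """Cumulative inverse probability-of-censoring weights.
    p_uncens[i, k]:      estimated probability that participant i remains
                         (artificially) uncensored through interval k,
                         given measured common predictors of censoring
                         and outcome.
    p_uncens_marg[i, k]: same probabilities from a covariate-free model,
                         used as a stabilizing numerator when supplied."""
    den = np.cumprod(np.asarray(p_uncens, float), axis=1)
    if p_uncens_marg is None:
        return 1.0 / den          # unstabilized weights
    num = np.cumprod(np.asarray(p_uncens_marg, float), axis=1)
    return num / den              # stabilized weights
```

Even well-constructed weights cannot rescue the analysis if positivity fails, i.e., if some covariate pattern is artificially censored with probability near one.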
Project description: In this work, we study quantile regression when the response is an event time subject to potentially dependent censoring. We consider the semi-competing risks setting, where time to censoring remains observable after the occurrence of the event of interest. While such a scenario frequently arises in biomedical studies, most current quantile regression methods for censored data are not applicable because they generally require the censoring time and the event time to be independent. By imposing rather mild assumptions on the association structure between the time-to-event response and the censoring time variable, we propose quantile regression procedures, which allow us to garner a comprehensive view of the covariate effects on the event time outcome as well as to examine the informativeness of censoring. An efficient and stable algorithm is provided for implementing the new method. We establish the asymptotic properties of the resulting estimators, including uniform consistency and weak convergence. The theoretical development may serve as a useful template for addressing estimation settings that involve stochastic integrals. Extensive simulation studies suggest that the proposed method performs well with moderate sample sizes. We illustrate the practical utility of our proposals through an application to a bone marrow transplant trial.
Project description: In follow-up studies, utility marker measurements are usually collected upon the occurrence of recurrent events until a terminal event such as death takes place. In this article, we define the recurrent marker process to characterize utility accumulation over time. For example, with medical cost and repeated hospitalizations treated as the marker and recurrent events, respectively, the recurrent marker process is the trajectory of cumulative cost, which stops increasing after death. In many applications, competing risks arise as subjects are at risk of more than one mutually exclusive terminal event, such as death from different causes, and modeling the recurrent marker process for each failure type is often of interest. However, censoring creates challenges in the methodological development, because for censored subjects, both the failure type and the recurrent marker process after censoring are unobserved. To circumvent this problem, we propose a nonparametric framework for the recurrent marker process with competing terminal events. In the presence of competing risks, we start with an estimator that uses marker information from uncensored subjects. As a result, this estimator can be inefficient under heavy censoring. To improve efficiency, we propose a second estimator by combining the first estimator with auxiliary information from the estimate under a non-competing risks model. The large sample properties and optimality of the second estimator are established. Simulation studies and an application to the SEER-Medicare linked data are presented to illustrate the proposed methods. Supplemental materials are available online.
Project description: RATIONALE: After the sample size of a randomized clinical trial (RCT) is set by the power requirement of its primary endpoint, investigators select secondary endpoints while unable to further adjust sample size. How the sensitivity and specificity of an instrument used to measure these outcomes, together with their expected underlying event rates, affect an RCT's power to detect significant differences in these outcomes is poorly understood. OBJECTIVES: Motivated by the design of an RCT of neuromuscular blockade in acute respiratory distress syndrome, we examined how power to detect a difference in secondary endpoints varies with the sensitivity and specificity of the instrument used to measure such outcomes. METHODS: We derived a general formula and Stata code for calculating an RCT's power to detect differences in binary outcomes when such outcomes are measured with imperfect sensitivity and specificity. The formula informed the choice of instrument for measuring post-traumatic stress-like symptoms in the Reevaluation of Systemic Early Neuromuscular Blockade RCT (www.clinicaltrials.gov identifier NCT02509078). MEASUREMENTS AND MAIN RESULTS: On the basis of published sensitivities and specificities, the Impact of Event Scale-Revised was predicted to measure a 36% symptom rate, whereas the Post-Traumatic Stress Symptoms instrument was predicted to measure a 23% rate, if the true underlying rate of post-traumatic stress symptoms were 25%. Despite its lower sensitivity, the briefer Post-Traumatic Stress Symptoms instrument provided superior power to detect a difference in rates between trial arms, owing to its higher specificity. CONCLUSIONS: Examining instruments' power to detect differences in outcomes may guide their selection when multiple instruments exist, each with different sensitivities and specificities.
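A sketch of the underlying calculation: an imperfect instrument maps a true rate p to an apparent rate Se*p + (1 - Sp)*(1 - p), which is then fed into a standard two-proportion power approximation (the project's own formula and Stata code are not reproduced here). The sensitivities, specificities, rates, and sample size below are hypothetical, not the published IES-R/PTSS values:

```python
from math import sqrt, erf

def observed_rate(p, sens, spec):
    """Apparent event rate when a true rate p is measured by an
    instrument with the given sensitivity and specificity."""
    return sens * p + (1.0 - spec) * (1.0 - p)

def two_prop_power(p1, p2, n_per_arm, z_alpha=1.959963985):
    """Normal-approximation power of a two-sided two-proportion test
    (z_alpha is the 0.975 normal quantile, i.e. alpha = 0.05)."""
    pbar = 0.5 * (p1 + p2)
    se0 = sqrt(2.0 * pbar * (1.0 - pbar) / n_per_arm)       # under H0
    se1 = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_arm)  # under H1
    z = (abs(p1 - p2) - z_alpha * se0) / se1
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))                  # Phi(z)

# Hypothetical comparison: a sensitive but less specific instrument
# versus a specific but less sensitive one, for assumed true rates of
# 25% vs 15% in the two arms and 200 participants per arm.
p_ctrl, p_trt, n = 0.25, 0.15, 200
power_sensitive = two_prop_power(observed_rate(p_ctrl, 0.90, 0.80),
                                 observed_rate(p_trt, 0.90, 0.80), n)
power_specific = two_prop_power(observed_rate(p_ctrl, 0.70, 0.95),
                                observed_rate(p_trt, 0.70, 0.95), n)
```

Under these assumed operating characteristics the more specific instrument yields the higher power despite its lower sensitivity, because false positives inflate both arms' apparent rates and dilute the between-arm difference, mirroring the trial's finding.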