Interpreting HIV diagnostic histories into infection time estimates: analytical framework and online tool.
ABSTRACT: BACKGROUND:It is frequently of epidemiological and/or clinical interest to estimate the date of HIV infection or time-since-infection of individuals. Yet, for over 15 years, the only widely-referenced infection dating algorithm that utilises diagnostic testing data to estimate time-since-infection has been the 'Fiebig staging' system. This defines a number of stages of early HIV infection through various standard combinations of contemporaneous discordant diagnostic results using tests of different sensitivity. To develop a new, more nuanced infection dating algorithm, we generalised the Fiebig approach to accommodate positive and negative diagnostic results generated on the same or different dates, and arbitrary current or future tests - as long as the test sensitivity is known. For this purpose, test sensitivity is the probability of a positive result as a function of time since infection. METHODS:The present work outlines the analytical framework for infection date estimation using subject-level diagnostic testing histories, and data on test sensitivity. We introduce a publicly-available online HIV infection dating tool that implements this estimation method, bringing together 1) curatorship of HIV test performance data, and 2) infection date estimation functionality, to calculate plausible intervals within which infection likely became detectable for each individual. The midpoints of these intervals are interpreted as infection time 'point estimates' and referred to as Estimated Dates of Detectable Infection (EDDIs). The tool is designed for easy bulk processing of information (as may be appropriate for research studies) but can also be used for individual patients (such as in clinical practice). RESULTS:In many settings, including most research studies, detailed diagnostic testing data are routinely recorded, and can provide reasonably precise estimates of the timing of HIV infection. 
We present a simple logic for interpreting diagnostic testing histories into infection time estimates, either as a point estimate (EDDI) or an interval (earliest plausible to latest plausible dates of detectable infection), along with a publicly-accessible online tool that supports wide application of this logic. CONCLUSIONS:This tool, available at https://tools.incidence-estimation.org/idt/, is readily updatable as test technology evolves, given the simple architecture of the system and its nature as an open-source project.
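The interval logic described above can be sketched in a few lines. This is a simplified, deterministic reading of the method: each assay is assumed to turn positive a fixed number of days (its "diagnostic delay") after infection becomes detectable by a reference viral load assay. The assay names and delay values below are illustrative assumptions, not the tool's curated estimates.

```python
from datetime import date, timedelta

# Illustrative diagnostic delays (days from first detectable viremia to
# assay positivity); the online tool curates published values instead.
DIAGNOSTIC_DELAY = {
    "rna": 0,        # reference viral load assay
    "ag_ab": 18,     # 4th-generation antigen/antibody test (assumed)
    "ab_only": 31,   # antibody-only test (assumed)
}

def eddi_interval(results):
    """Estimate the interval in which infection became detectable.

    `results` is a list of (date, assay, is_positive) tuples.
    A negative result on an assay with delay d implies detectable
    infection began no earlier than (test date - d); the earliest
    plausible date (EP-DDI) is the latest such bound, plus one day.
    Each positive result conservatively bounds the latest plausible
    date (LP-DDI) by its test date.  The midpoint is the EDDI.
    """
    ep_candidates = [d - timedelta(days=DIAGNOSTIC_DELAY[a])
                     for d, a, pos in results if not pos]
    lp_candidates = [d for d, _a, pos in results if pos]
    if not ep_candidates or not lp_candidates:
        raise ValueError("need at least one negative and one positive result")
    ep_ddi = max(ep_candidates) + timedelta(days=1)
    lp_ddi = min(lp_candidates)
    eddi = ep_ddi + (lp_ddi - ep_ddi) / 2   # midpoint point estimate
    return ep_ddi, eddi, lp_ddi
```

For example, a last negative antibody-only test on 1 January followed by a first positive RNA test on 1 February yields a plausible interval from 2 December to 1 February, with the EDDI at its midpoint.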
Project description:OBJECTIVE:To determine the precision of new and established methods for estimating duration of HIV infection. DESIGN:A retrospective analysis of HIV testing results from serial samples in commercially available panels, taking advantage of extensive testing previously conducted on 53 seroconverters. METHODS:We initially investigated four methods for estimating infection timing: method 1, 'Fiebig stages' based on test results from a single specimen; method 2, an updated '4th gen' method similar to Fiebig stages but using antigen/antibody tests in place of the p24 antigen test; method 3, modeling of 'viral ramp-up' dynamics using quantitative HIV-1 viral load data from antibody-negative specimens; and method 4, using detailed clinical testing history to define a plausible interval and best estimate of infection time. We then investigated a 'two-step method' using data from both methods 3 and 4, allowing for test results to have come from specimens collected on different days. RESULTS:Fiebig and '4th gen' staging method estimates of time since detectable viremia had similar and modest correlation with observed data. Correlation of estimates from both new methods (3 and 4), and from a combination of these two ('two-step method') was markedly improved and variability significantly reduced when compared with Fiebig estimates on the same specimens. CONCLUSION:The new 'two-step' method more accurately estimates timing of infection and is intended to be generalizable to more situations in clinical medicine, research, and surveillance than previous methods. An online tool is now available that enables researchers/clinicians to input data related to method 4, and generate estimated dates of detectable infection.
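Method 3 (viral ramp-up) can be illustrated with a back-of-the-envelope back-calculation: if viral load grows roughly exponentially during ramp-up, the time since viremia crossed the detection threshold follows from the measured load. The threshold and doubling time below are illustrative assumptions, not values fitted in the study.

```python
import math

def days_since_detectable(viral_load, detection_threshold=50.0,
                          doubling_time_days=0.85):
    """Back-calculate time since viremia crossed the detection threshold,
    assuming exponential growth with a fixed doubling time (illustrative
    values; the study modeled ramp-up dynamics from longitudinal data)."""
    if viral_load <= detection_threshold:
        return 0.0
    doublings = math.log2(viral_load / detection_threshold)
    return doublings * doubling_time_days
```

Under these assumptions, a load of 50,000 copies/mL against a 50 copies/mL threshold implies about ten doublings, i.e. roughly 8.5 days of ramp-up.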
Project description:Accurate methods for determining the duration of HIV infection at the individual level are valuable in many settings, including critical research studies and clinical practice (especially for acute infection). Since first published in 2003, the 'Fiebig staging system' has been used as the primary way of classifying early HIV infection into five sequential stages based on HIV test result patterns in newly diagnosed individuals. However, Fiebig stages can only be assigned to individuals who produce both a negative and a positive test result on the same day, on specific pairs of tests of varying 'sensitivity'. Further, in the past 16 years HIV-testing technology has evolved substantially, and three of the five key assays used to define Fiebig stages are no longer widely used. To address these limitations, we developed an improved and more general framework for estimating the duration of HIV infection by interpreting any combination of diagnostic test results, whether obtained on single or multiple days, into an estimated date of detectable infection, or EDDI. A key advantage of the EDDI method over Fiebig staging is that it allows for the generation of a point estimate, as well as an associated credibility interval for the date of first detectable infection, for any person who has at least one positive and one negative HIV test of any kind. The tests do not have to be run on the same day; they do not have to be run during the acute phase of infection and the method does not rely on any special pairing of tests to define 'stages' of infection. The size of the interval surrounding the EDDI (and therefore the precision of the estimate itself) depends largely on the length of time between negative and positive tests. The EDDI approach is also flexible, seamlessly incorporating any assay for which there is a reasonable diagnostic delay estimate. 
An open-source, free online tool includes a user-updatable curated database of published diagnostic delays. HIV diagnostics have evolved tremendously since that original publication more than 15 years ago, and it is time to similarly evolve the methods used to estimate timing of infection. The EDDI method is a flexible and rigorous way to estimate the timing of HIV infection in a continuously evolving diagnostic landscape.
Project description:Background:Myeloid activation contributes to cognitive impairment in chronic human immunodeficiency virus (HIV) infection. We explored whether combination antiretroviral therapy (cART) initiation during acute HIV infection impacts CD163 shedding, a myeloid activation marker, and, in turn, the implications for the central nervous system (CNS). Methods:We measured soluble CD163 (sCD163) levels in plasma and cerebrospinal fluid (CSF) by enzyme-linked immunosorbent assay in Thais who initiated cART during acute HIV infection (Fiebig stages I-IV). Examination of CNS involvement included neuropsychological testing and analysis of brain metabolites by magnetic resonance spectroscopy. Chronic HIV-infected or uninfected Thais served as controls. Results:We examined 51 adults with acute HIV infection (Fiebig stages I-III; male sex, >90%; age, 31 years). sCD163 levels before and after cART in Fiebig stage I/II were comparable to those in uninfected controls (plasma levels, 97.9 and 93.6 ng/mL, respectively, vs 99.5 ng/mL; CSF levels, 6.7 and 6.4 ng/mL, respectively, vs 7.1 ng/mL). In Fiebig stage III, sCD163 levels were elevated before cART as compared to those in uninfected controls (plasma levels, 135 ng/mL; CSF levels, 10 ng/mL; P < .01 for both comparisons) before normalization after cART (plasma levels, 90.1 ng/mL; CSF levels, 6.5 ng/mL). Before cART, higher sCD163 levels during Fiebig stage III correlated with poor CNS measures (eg, decreased N-acetylaspartate levels), but paradoxically, during Fiebig stage I/II, higher sCD163 levels were associated with favorable CNS outcomes (eg, higher neuropsychological test scores). After cART initiation, higher sCD163 levels during Fiebig stage III were associated with negative CNS indices (eg, worse neuropsychological test scores). 
Conclusion:Initiation of cART early during acute HIV infection (ie, during Fiebig stage I/II) may decrease inflammation, preventing shedding of CD163, which in turn might lower the risk of brain injury.
Project description:OBJECTIVE:To investigate whether oral preexposure prophylaxis (PrEP) alters timing and patterns of seroconversion when PrEP use continues after HIV-1 infection. DESIGN:Retrospective testing of the timing of Fiebig stage HIV-1 seroconversion in the Partners PrEP Study, a randomized placebo-controlled clinical trial of PrEP conducted in Kenya and Uganda. METHODS:Specimens from 138 seroconverters were collected every 3 months and when HIV-1 infection was suspected based on monthly rapid HIV-1 tests. Progression of seroconversion was compared between randomized groups (PrEP versus placebo) and per-protocol groups (placebo versus PrEP participants with detectable tenofovir during the seroconversion period) using laboratory assessment of Fiebig stage. Delay in site-detection of seroconversion and association with PrEP drug-regimen resistant virus were assessed using logistic regression. Analysis of time to each Fiebig stage used maximum likelihood estimation with a parametric model to accommodate the varying lengths of HIV-infection intervals. RESULTS:There was a significant increase in delayed site detection of infection associated with PrEP (odds ratio = 3.49, P = 0.044). Delay in detection was not associated with increased risk of resistance in the PrEP arm (odds ratio = 0.93, P = 0.95). Estimated time to each Fiebig stage was elongated in seroconverters with evidence of ongoing PrEP use, significantly only for stage 5 (28 versus 17 days, P = 0.05). Adjusted for Fiebig stage, viral RNA was approximately 2/3 log lower in those assigned to PrEP compared with placebo; no differences were found in Architect signal-to-cutoff ratio at any stage. CONCLUSION:Ongoing PrEP use in seroconverters may delay detection of infection and elongate seroconversion, although the delay does not increase risk of resistance.
Project description:Most HIV-1 infected individuals do not know their infection dates. Precise infection timing is crucial information for studies that document transmission networks or drug levels at infection. To improve infection timing, we used the prospective RV217 cohort, where the window when plasma viremia becomes detectable is narrow: the last negative visit occurred a median of four days before the first detectable HIV-1 viremia with an RNA test, referred to below as diagnosis. We sequenced 1,280 HIV-1 genomes from 39 participants at a median of 4, 32 and 170 days post-diagnosis. HIV-1 infections were dated by using sequence-based methods and a viral load regression method. Bayesian coalescent and viral load regression estimated that infections occurred a median of 6 days prior to diagnosis (IQR: 9-3 and 11-4 days prior, respectively). Poisson-Fitter, which analyzes the distribution of Hamming distances among sequences, estimated a median of 7 days prior to diagnosis (IQR: 15-4 days) based on sequences sampled 4 days post-diagnosis, but it did not yield plausible results using sequences sampled at 32 days. Fourteen participants reported a high-risk exposure event at a median of 8 days prior to diagnosis (IQR: 12 to 6 days prior). These different methods concurred that HIV-1 infection occurred about a week before detectable viremia, corresponding to 20 days (IQR: 34-15 days) before peak viral load. Together, our methods comparison helps define a framework for future dating studies in early HIV-1 infection.
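The Hamming-distance approach can be sketched as follows: under a star-like phylogeny with Poisson-distributed mutations, the mean pairwise Hamming distance among sequences grows linearly with time since the founder virus. The mutation rate and generation time below are commonly cited values for HIV-1, used here as illustrative assumptions; the actual Poisson-Fitter tool additionally tests goodness of fit of the Poisson/star model, which is why later, selection-affected samples fail to yield plausible estimates.

```python
from itertools import combinations

MUT_RATE = 2.16e-5   # per base per generation (commonly cited for HIV-1)
GEN_DAYS = 1.5       # assumed viral generation time in days

def hamming(a, b):
    """Count mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b))

def days_since_founder(seqs):
    """Simplified star-phylogeny timing estimate: mean pairwise Hamming
    distance ~ 2 * mutations accumulated per lineage, so generations =
    mean_hd / (2 * rate * length)."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    mean_hd = sum(hamming(a, b) for a, b in pairs) / len(pairs)
    generations = mean_hd / (2 * MUT_RATE * length)
    return generations * GEN_DAYS
```

With three 1,000-base sequences each carrying at most one private mutation, the estimate lands around six to seven weeks, illustrating how little diversity accrues per week of infection.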
Project description:The extent of viral replication during acute HIV infection (AHI) influences HIV disease progression. However, information comparing viral load (VL) kinetics with and without antiretroviral therapy (ART) in AHI is limited. The knowledge gained could inform preventive strategies aimed at reducing VL during AHI and therapeutic strategies to alter the viral kinetics that may enhance the likelihood of achieving HIV remission. The analysis utilized VL data captured during the first year of HIV infection from two studies in Thailand: the RV217 study (untreated AHI, 30 participants and 412 visits) and the RV254 study (treated AHI, 235 participants and 2803 visits). Fiebig stages were I/II (HIV RNA+, HIV IgM-) and Fiebig III/IV (HIV IgM+, Western blot-/indeterminate). Data were modelled utilizing spline effects within a linear mixed model, with a random intercept and slope to allow for between-subject variability and adjustment for the differences in variability between studies. The number of knots in the quadratic spline basis functions was determined by comparing models with differing numbers of knots via the Akaike Information Criterion. Models were fit using PROC GLIMMIX in SAS v9.3. At enrolment, there were 24 Fiebig I/II and 6 Fiebig III/IV individuals in the untreated group and 137 Fiebig I/II and 98 Fiebig III/IV individuals in the treated group. Overall, the median age was 27.5 years, most were male (89%), and CRF01_AE was the most common HIV clade (76%). By day 12 (4 days after ART in RV254), the untreated group had a 2.7-fold higher predicted mean VL level compared to those treated (predicted log VL 6.19 for RV217 and 5.76 for RV254, p = 0.05). These differences increased to 135-fold by day 30 (predicted log VL 4.89 for RV217 and 2.76 for RV254) and 1148-fold by day 120 (predicted log VL 4.68 for RV217 and 1.63 for RV254) (p < 0.0001 for both) until both curves were similarly flat at about day 150 (p = 0.17 between days 150 and 160). 
The VL trajectories were significantly different between Fiebig I/II and Fiebig III/IV participants when comparing the two groups and within the treated group (p < 0.001 for both). Initiating ART in AHI dramatically changed the trajectory of VL very early in the course of infection, which could have implications for reducing transmission potential and enhancing responses to future HIV remission strategies. There is urgency in initiating ART as soon as acute infection is identified. New and inexpensive strategies to engage and test individuals at high risk for HIV, as well as immediate treatment access, will be needed to improve the treatment of acute infection globally. Trial registration: NCT00796146 and NCT00796263.
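The fold differences quoted above follow directly from the differences in predicted log10 viral load: the fold change is 10 raised to the difference of the two log values. (Small discrepancies, e.g. about 1122 versus the reported 1148 at day 120, reflect rounding of the published log values.)

```python
def fold_difference(log_vl_a, log_vl_b):
    """Fold difference in viral load implied by two log10 VL values."""
    return 10 ** (log_vl_a - log_vl_b)
```

For instance, the day-12 gap of 6.19 vs 5.76 log VL gives roughly 2.7-fold, and the day-30 gap of 4.89 vs 2.76 gives roughly 135-fold, matching the reported values.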
Project description:Fourth generation (4thG) immunoassay (IA) is becoming the standard HIV screening method but was not available when the Fiebig acute HIV infection (AHI) staging system was proposed. Here we evaluated AHI staging based on a 4thG IA (4thG staging). Screening for AHI was performed in real-time by pooled nucleic acid testing (NAT, n=48,828 samples) and sequential enzyme immunoassay (EIA, n=3,939 samples), identifying 63 subjects with non-reactive 2nd generation EIA (Fiebig stages I (n=25), II (n=7), III (n=29), IV (n=2)). The majority of samples tested (n=53) were subtype CRF01_AE (77%). NAT+ subjects were re-staged into three 4thG stages: stage 1 (n=20; 4th gen EIA-, 3rd gen EIA-), stage 2 (n=12; 4th gen EIA+, 3rd gen EIA-), stage 3 (n=31; 4th gen EIA+, 3rd gen EIA+, Western blot-/indeterminate). 4thG staging distinguishes groups of AHI subjects by time since presumed HIV exposure, pattern of CD8+ T, B and natural killer cell absolute numbers, and HIV RNA and DNA levels. This staging system further stratified Fiebig I subjects: 18 subjects in 4thG stage 1 had lower HIV RNA and DNA levels than 7 subjects in 4thG stage 2. Using 4th generation IA as part of AHI staging distinguishes groups of patients by time since exposure to HIV, lymphocyte numbers and HIV viral burden. It identifies two groups of Fiebig stage I subjects who display different levels of HIV RNA and DNA, which may have implications for HIV cure. 4th generation IA should be incorporated into AHI staging systems.
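The 4thG staging rules described above reduce to a simple decision table over the serologic results, assuming the subject is already NAT-positive. A minimal sketch:

```python
def fourth_gen_stage(gen4_reactive, gen3_reactive, western_blot):
    """Assign the 4thG acute-infection stage described above.

    Assumes the subject is already HIV NAT-positive; `western_blot` is
    'negative', 'indeterminate', or 'positive'.  Returns None for result
    patterns outside the acute-infection staging system.
    """
    if not gen4_reactive and not gen3_reactive:
        return 1   # 4th gen EIA-, 3rd gen EIA-
    if gen4_reactive and not gen3_reactive:
        return 2   # 4th gen EIA+, 3rd gen EIA-
    if gen4_reactive and gen3_reactive and \
            western_blot in ("negative", "indeterminate"):
        return 3   # 4th gen EIA+, 3rd gen EIA+, WB-/indeterminate
    return None
```

A fully seropositive pattern (both EIAs reactive, Western blot positive) falls outside the acute stages and returns None.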
Project description:The timing and location of the establishment of the viral reservoir during acute HIV infection remain unclear. Using longitudinal blood and tissue samples obtained from HIV-infected individuals at the earliest stage of infection, we demonstrate that frequencies of infected cells reach maximal values in gut-associated lymphoid tissue and lymph nodes as early as Fiebig stage II, before seroconversion. Both tissues displayed higher frequencies of infected cells than blood until Fiebig stage III, after which infected cells were equally distributed in all compartments examined. Initiation of antiretroviral therapy (ART) at Fiebig stages I to III led to a profound decrease in the frequency of infected cells, to nearly undetectable levels, in all compartments. The rare infected cells that persisted were preferentially found in the lymphoid tissues. Initiation of ART at later stages (Fiebig stages IV/V and chronic infection) induced only a modest reduction in the frequency of infected cells. Quantification of HIV DNA in memory CD4+ T cell subsets confirmed the unstable nature of most of the infected cells at Fiebig stages I to III and the emergence of persistently infected cells during the transition to Fiebig stage IV. Our results indicate that although a large pool of cells is infected during acute HIV infection, most of these early targets are rapidly cleared upon ART initiation. Therefore, infected cells present after peak viremia have a greater ability to persist.
Project description:Background:Establishment of persistent human immunodeficiency virus type 1 (HIV-1) reservoirs occurs early in infection, and biomarkers of infected CD4+ T cells during acute infection are poorly defined. CD4+ T cells expressing the gut homing integrin complex α4β7 are associated with HIV-1 acquisition, and are rapidly depleted from the periphery and gastrointestinal mucosa during acute HIV-1 infection. Methods:Integrated HIV-1 DNA was quantified in peripheral blood mononuclear cells obtained from acutely (Fiebig I-III) and chronically infected individuals by sorting memory CD4+ T-cell subsets lacking or expressing high levels of integrin β7 (β7negative and β7high, respectively). HIV-1 DNA was also assessed after 8 months of combination antiretroviral therapy (cART) initiated in Fiebig II/III individuals. Activation marker and chemokine receptor expression was determined for β7-defined subsets at acute infection and in uninfected controls. Results:In Fiebig I, memory CD4+ T cells harboring integrated HIV-1 DNA were rare in both β7high and β7negative subsets, with no significant difference in HIV-1 DNA copies. In Fiebig stages II/III and in chronically infected individuals, β7high cells were enriched in integrated and total HIV-1 DNA compared to β7negative cells. During suppressive cART, integrated HIV-1 DNA copies decreased in both β7negative and β7high subsets, which did not differ in DNA copies. In Fiebig II/III, integrated HIV-1 DNA in β7high cells was correlated with their activation. Conclusions:β7high memory CD4+ T cells are preferential targets during early HIV-1 infection, which may be due to the increased activation of these cells.
Project description:The serologic testing algorithm for recent HIV seroconversion (STARHS) calculates incidence using the proportion of testers who produce a level of HIV antibody high enough to be detected by ELISA but low enough to suggest recent infection. The validity of STARHS relies on independence between dates of HIV infection and dates of antibody testing. When subjects choose the time of their own test, testing may be motivated by risky behaviour or symptoms of infection and the criterion may not be met. This analysis was conducted to ascertain whether estimates of incidence derived using STARHS were consistent with estimates derived using a method more robust against motivated testing. A cohort-based incidence estimator and two STARHS methods were applied to identical populations (n=3821) tested for HIV antibody at publicly funded sites in Seattle. Overall seroincidence estimates, demographically stratified estimates and incidence rate ratios were compared across methods. The proportion of low-antibody testers among HIV-infected individuals was compared with the proportion expected given their testing histories. STARHS estimates generally exceeded cohort-based estimates. Incidence ratios derived using STARHS between demographic strata were not consistent across methods. The proportion of HIV-infected individuals with lower antibody levels exceeded that which would be expected under independence between infection and testing. Incidence estimates and incidence rate ratios derived using methods that rely on the changing antibody level over the course of HIV infection may be vulnerable to bias when applied to populations who choose the time of their own testing.
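For context, the snapshot calculation underlying STARHS-type estimators takes one common form: annualized incidence is the number of "recent" infections divided by the person-time at risk implied by the assay's mean window period. The sketch below uses an illustrative window value; the bias discussed above arises because self-selected testing violates the independence assumption, not from this arithmetic itself.

```python
def starhs_incidence(n_recent, n_negative, window_days=170.0):
    """Snapshot incidence estimate (per person-year), one common
    STARHS-type form.

    Assumes recent infections accrued over the assay's mean window
    period (`window_days`, illustrative) among the population at risk
    (HIV-negative testers plus those found recently infected).
    """
    person_years_at_risk = (n_negative + n_recent) * (window_days / 365.0)
    return n_recent / person_years_at_risk
```

For example, 10 "recent" results among 3,811 HIV-negative testers would imply an incidence of roughly 0.56 per 100 person-years under a 170-day window.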