Project description: BACKGROUND: Insulin pumps and continuous glucose monitoring (CGM) are commonly used by patients with diabetes mellitus in the outpatient setting. The efficacy and safety of initiating insulin pumps and CGM in the non-intensive-care inpatient setting are unknown. MATERIALS AND METHODS: In a prospective pilot study, inpatients with type 2 diabetes were randomized to receive standard subcutaneous basal-bolus insulin and blinded CGM (group 1, n = 5), insulin pump and blinded CGM (group 2, n = 6), or insulin pump and nonblinded CGM (group 3, n = 5). Feasibility, glycemic control, and patient satisfaction were evaluated among groups. RESULTS: Group 1 had a lower mean capillary glucose level (144.5 ± 19.5 mg/dL) than groups 2 and 3 (191.5 ± 52.3 and 182.7 ± 59.9 mg/dL; P = 0.05 for group 1 vs. groups 2 and 3 combined). CGM detected 19 hypoglycemic episodes (glucose <70 mg/dL) across all treatment groups, compared with 12 episodes detected by capillary testing, although the difference was not statistically significant. No significant differences were found in total daily insulin dose or in the percentage of time spent below the target glucose range (<90 mg/dL), in the target range (90-180 mg/dL), or above the target range (>180 mg/dL). On the Diabetes Treatment Satisfaction Questionnaire-Change, group 3 reported increased hyperglycemia frequency and decreased hypoglycemia frequency compared with the other two groups, although the differences did not reach statistical significance. CONCLUSIONS: Insulin pump and CGM initiation are feasible during hospitalization, although labor intensive. Although insulin pump initiation may not improve glycemic control, there is a trend toward CGM detecting a greater number of hypoglycemic episodes. Larger studies are needed to determine whether this technology can lower inpatient morbidity and mortality.
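The below/in/above-target-range percentages reported in this abstract can be computed directly from a CGM trace. The sketch below uses the study's 90-180 mg/dL target range; the glucose values are made up for illustration.

```python
# Sketch: percent of readings below, in, and above the target glucose
# range (90-180 mg/dL, as in this study). Values are illustrative only.

def time_in_ranges(readings, lo=90, hi=180):
    """Return (pct_below, pct_in, pct_above) for a list of mg/dL values."""
    n = len(readings)
    below = sum(1 for g in readings if g < lo)
    above = sum(1 for g in readings if g > hi)
    return (100 * below / n, 100 * (n - below - above) / n, 100 * above / n)

cgm = [85, 110, 150, 200, 175, 95, 240, 130]
pct_below, pct_in, pct_above = time_in_ranges(cgm)
print(pct_below, pct_in, pct_above)  # 12.5 62.5 25.0
```

In practice, "time" in range is the fraction of readings in range, which is equivalent when readings arrive at a fixed interval (e.g., every 5 minutes).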
Project description: Objective: Advances in continuous glucose monitoring (CGM) have transformed ambulatory diabetes management. Until recently, inpatient use of CGM has remained investigational, with limited data on its accuracy in the hospital setting. Research design and methods: To analyze the accuracy of Dexcom G6, we compared retrospective matched-pair CGM and capillary point-of-care (POC) glucose data from three inpatient CGM studies (two interventional and one observational) in general medicine and surgery patients with diabetes treated with insulin. Accuracy metrics included mean absolute relative difference (MARD), median absolute relative difference (ARD), and the proportion of CGM values within 15%, 20%, and 30% of POC reference values for blood glucose >100 mg/dL, or within 15, 20, and 30 mg/dL for blood glucose ≤100 mg/dL (denoted %15/15, %20/20, %30/30). Clinical reliability was assessed with Clarke error grid (CEG) analysis. Results: A total of 218 patients were included (96% with type 2 diabetes), with a mean age of 60.6 ± 12 years. The overall MARD (n = 4,067 matched glucose pairs) was 12.8%, and the median ARD was 10.1% (interquartile range 4.6-17.6). The proportions of readings meeting the %15/15, %20/20, and %30/30 criteria were 68.7%, 81.7%, and 93.8%, respectively. CEG analysis showed 98.7% of all values in zones A and B. MARD and median ARD were higher in the presence of hypoglycemia (<70 mg/dL) and severe anemia (hemoglobin <7 g/dL). Conclusions: Our results indicate that CGM technology is a reliable tool for hospital use and may help improve glucose monitoring in non-critically ill hospitalized patients with diabetes.
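A minimal sketch of the accuracy metrics named above (MARD, median ARD, and the %20/20 criterion) computed from matched CGM/POC pairs. The glucose values are illustrative, not data from this study.

```python
# MARD, median ARD, and %20/20 from matched CGM/POC glucose pairs.
import statistics

def accuracy_metrics(cgm, poc, pct=20, mgdl=20):
    ard = [abs(c - p) / p * 100 for c, p in zip(cgm, poc)]
    hits = 0
    for c, p in zip(cgm, poc):
        if p > 100:
            hits += abs(c - p) <= p * pct / 100  # within pct% for POC > 100 mg/dL
        else:
            hits += abs(c - p) <= mgdl           # within mgdl for POC <= 100 mg/dL
    return (sum(ard) / len(ard),                 # MARD
            statistics.median(ard),              # median ARD
            100 * hits / len(cgm))               # %20/20 with the defaults

mard, med_ard, pct2020 = accuracy_metrics([110, 90, 200, 60], [100, 100, 180, 70])
print(round(mard, 1), round(med_ard, 1), pct2020)  # 11.3 10.6 100.0
```

The %15/15 and %30/30 criteria follow by changing the `pct`/`mgdl` thresholds.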
Project description: Continuous glucose monitoring (CGM) is indicated in poorly controlled insulin-treated patients with type 2 diabetes (T2D) to improve glycemic control and reduce the risk of hypoglycemia, but the benefits of CGM for lower-risk patients have not been well studied. Among 17,422 insulin-treated patients with T2D with hemoglobin A1c (HbA1c) <8% and no recent severe hypoglycemia (based on emergency room visits or hospitalizations), CGM initiation occurred in 149 patients (the 17,273 noninitiators served as the reference group). Changes in HbA1c and severe hypoglycemia rates over the 12 months before and after CGM initiation were calculated. CGM initiation was associated with decreased HbA1c (-0.06%), whereas noninitiation was associated with increased HbA1c (+0.32%); a weighted, adjusted difference-in-differences model of change in HbA1c yielded a net benefit of -0.30% (95% CI -0.50% to -0.10%; P = 0.004). No significant differences were observed for severe hypoglycemia. CGM may be useful in preventing glycemic deterioration in well-controlled patients with insulin-treated T2D.
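The difference-in-differences contrast behind the reported net benefit can be sketched as below. The numbers are illustrative (chosen to mirror the -0.06% and +0.32% deltas above); the study itself used a weighted, adjusted model rather than this raw calculation, which is why its estimate is -0.30% rather than the raw -0.38%.

```python
# Raw difference-in-differences: net change in the treated group
# relative to the change in the control group over the same period.

def diff_in_diff(pre_treated, post_treated, pre_control, post_control):
    return (post_treated - pre_treated) - (post_control - pre_control)

# Hypothetical mean HbA1c (%) before/after, CGM initiators vs. noninitiators
net = diff_in_diff(7.4, 7.34, 7.4, 7.72)
print(round(net, 2))  # -0.38
```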
Project description: Background: Two weeks of continuous glucose monitoring (CGM) sampling with >70% CGM use is recommended to accurately reflect 90 days of glycemic metrics. However, the minimum sampling duration when CGM use is <70% is not well studied. We investigated the minimum duration of CGM sampling required for each CGM metric to achieve representative glycemic outcomes with <70% CGM use over 90 days. Methods: Ninety days of CGM data were collected from 336 real-life CGM users with type 1 diabetes. CGM data were grouped in 5% increments of CGM use (45%-95%) over 90 days. For each CGM metric and each CGM-use category, the correlation between the summary statistic calculated from each sampling period and that from all 90 days of data was determined using the squared Spearman correlation coefficient (R²). Results: For CGM use of 45% to 95% over 90 days, the minimum sampling period is 14 days for mean glucose, time in range (70-180 mg/dL), time >180 mg/dL, and time >250 mg/dL; 28 days for coefficient of variation; and 35 days for time <54 mg/dL. For time <70 mg/dL, 28 days is sufficient for CGM use between 45% and 80%, while 21 days is required for CGM use >80%. Conclusion: We defined minimum sampling durations for all CGM metrics under suboptimal CGM use. CGM sampling of at least 14 days is required with >45% CGM use over 90 days to sufficiently reflect most CGM metrics. Assessment of hypoglycemia and coefficient of variation requires a longer sampling period regardless of CGM use.
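The sampling-adequacy check described above amounts to computing the squared Spearman correlation (R²) between a CGM metric from a short sampling window and the same metric over the full 90 days, across users. The data below are synthetic; the study repeated this per metric and per CGM-use category.

```python
# Squared Spearman correlation between a windowed CGM metric and the
# full-period metric, across users. Synthetic data for illustration.
import numpy as np

def spearman_r2(x, y):
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks (no ties assumed)
    ry = np.argsort(np.argsort(y)).astype(float)
    r = np.corrcoef(rx, ry)[0, 1]
    return r * r

rng = np.random.default_rng(0)
full_90d = rng.uniform(40, 80, size=100)            # 90-day TIR per user (%)
window_14d = full_90d + rng.normal(0, 3, size=100)  # noisy 14-day estimate
print(round(spearman_r2(window_14d, full_90d), 2))
```

A window length is deemed sufficient when this R² is high enough (the shorter the window, the noisier the estimate and the lower the R²).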
Project description: Background: The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessing this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. Materials and methods: A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to "replay" real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments, each initiated by an insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3-22%) testing six scenarios: insulin dosing using sensor values, threshold alarms, and predictive alarms, each without or with CGM trend arrows. Results: In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD = 10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and ≥400 mg/dL) increased with sensor error and displayed an abrupt slope change at MARD = 10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold alarms, and predictive alarms improved average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Conclusions: Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD = 10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes.
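One ingredient of such in silico experiments is generating sensor readings at a prescribed accuracy level. The sketch below (not the study's simulator, which replayed full treatment scenarios) degrades a "true" BG trace with zero-mean multiplicative Gaussian noise whose scale is chosen so the expected absolute relative difference equals a target MARD, then counts low readings the noisy sensor would miss.

```python
# Degrade a BG trace to a target MARD with multiplicative Gaussian noise.
import math
import random

def add_sensor_error(bg_trace, target_mard_pct, seed=1):
    rng = random.Random(seed)
    # E|N(0, sigma)| = sigma * sqrt(2/pi), so pick sigma to hit the target MARD
    sigma = (target_mard_pct / 100) / math.sqrt(2 / math.pi)
    return [bg * (1 + rng.gauss(0, sigma)) for bg in bg_trace]

true_bg = [65, 90, 140, 68, 55, 200]          # illustrative "true" trace
sensor = add_sensor_error(true_bg, target_mard_pct=10)
missed_lows = sum(1 for t, s in zip(true_bg, sensor) if t <= 70 < s)
print(missed_lows)  # true lows (<=70 mg/dL) the noisy sensor reads as >70
```

Sweeping `target_mard_pct` over, say, 3-22% reproduces the kind of error-versus-outcome curves the study examined.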
Project description: Background: Older adults may be less comfortable with continuous glucose monitoring (CGM) technology or may require additional education to support its use. The Virtual Diabetes Specialty Clinic study provided an opportunity to compare glycemic outcomes and support needs for older versus younger adults living with diabetes and using CGM. Methods: Prospective, virtual study of adults with type 1 diabetes (T1D, N = 160) or type 2 diabetes (T2D, N = 74) using basal-bolus insulin injections or insulin pump therapy. Remote CGM diabetes education (3 scheduled visits over 1 month) was provided by Certified Diabetes Care and Education Specialists, with additional visits as needed. CGM-measured glycemic metrics, HbA1c, and visit duration were evaluated by age (<40, 40-64, and ≥65 years). Results: Median CGM use was ≥95% in all age groups. From baseline to 6 months, time in range (70-180 mg/dL) improved from 45 ± 22% to 57 ± 16%, from 50 ± 25% to 65 ± 18%, and from 60 ± 28% to 69 ± 18% in the <40, 40-64, and ≥65-year groups, respectively (<40 vs 40-64 years, P = 0.006). Corresponding values for HbA1c were 8.0 ± 1.6% to 7.3 ± 1.0%, 7.9 ± 1.6% to 7.0 ± 1.0%, and 7.4 ± 1.4% to 7.1 ± 0.9% (all P > 0.05). Visit duration was 41 minutes longer for ages ≥65 versus <40 years (P = 0.001). Conclusions: Adults with diabetes experience glycemic benefit after remote CGM training, but training time for those ≥65 years is longer than for younger adults. Addressing individual training-related needs, including needs that may vary by age, should be considered.
Project description: Background: With the development of continuous glucose monitoring systems (CGMS), detailed glycemic data are now available for analysis, yet analyzing this data-rich information can be formidable. The power of CGMS-derived data lies in its characterization of glycemic variability. In contrast, many standard glycemic measures, such as hemoglobin A1c (HbA1c) and self-monitored blood glucose, inadequately describe glycemic variability and risk a bias toward overreporting hyperglycemia. Methods that adjust for this bias are often overlooked in clinical research because of the difficulty of computation and the lack of accessible analysis tools. Methods: In response, we developed a new R package, rGV, which calculates a suite of 16 glycemic variability metrics from a single individual's CGM data. rGV is versatile and robust, capable of handling data in many formats from many sensor types. We also created a companion R Shiny web app that provides these glycemic variability analysis tools without requiring prior knowledge of R coding. We analyzed the statistical reliability of all glycemic variability metrics included in rGV and illustrate the clinical utility of rGV by analyzing CGM data from three studies. Results: In subjects without diabetes, greater glycemic variability was associated with higher HbA1c values. In patients with type 2 diabetes mellitus (T2DM), high glucose was the primary driver of glycemic variability. In patients with type 1 diabetes (T1DM), naltrexone use may potentially reduce glycemic variability. Conclusions: We present a new R package and accompanying web app to facilitate quick and easy computation of a suite of glycemic variability metrics.
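For readers unfamiliar with glycemic-variability metrics of the kind rGV computes, two common ones are the coefficient of variation and the J-index. The simplified versions below are for illustration only (rGV itself implements 16 metrics, in R); the CGM values are made up.

```python
# Two simple glycemic-variability metrics from a CGM trace (mg/dL).
import statistics

def cv_percent(glucose):
    """Coefficient of variation: SD as a percent of the mean."""
    return 100 * statistics.stdev(glucose) / statistics.mean(glucose)

def j_index(glucose):
    """J-index: 0.001 * (mean + SD)^2, for glucose in mg/dL."""
    return 0.001 * (statistics.mean(glucose) + statistics.stdev(glucose)) ** 2

cgm = [100, 120, 140, 160, 180, 90, 110, 200]
print(round(cv_percent(cgm), 1), round(j_index(cgm), 1))  # 28.8 31.3
```

Both rise as excursions around the mean grow, which is what HbA1c alone cannot capture.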
Project description: Purpose: Inpatient diabetes management involves frequent assessment of glucose levels to guide treatment decisions. Here we describe a program for inpatient real-time continuous glucose monitoring (rtCGM) at a community hospital and the accuracy of rtCGM-based glucose estimates. Methods: Adult inpatients with preexisting diabetes managed with intensive insulin therapy and a diagnosis of coronavirus disease 2019 (COVID-19) were monitored via rtCGM for safety. An rtCGM system transmitted glucose concentration and trend information at 5-minute intervals to nearby smartphones, which relayed the data to a centralized monitoring station. Hypoglycemia alerts were triggered by rtCGM values ≤85 mg/dL, but rtCGM data were otherwise not used in management decisions; insulin dosing adjustments were based on blood glucose values measured via blood sampling. Accuracy was evaluated retrospectively by comparing rtCGM values with contemporaneous point-of-care (POC) blood glucose values. Results: A total of 238 pairs of rtCGM and POC data points from 10 patients showed an overall mean absolute relative difference (MARD) of 10.3%. Clarke error grid analysis showed 99.2% of points in the clinically acceptable range, and surveillance error grid analysis showed 89.1% of points in the lowest-risk category. For 25% of the rtCGM values, discordances between rtCGM and POC values would likely have resulted in different insulin doses; insulin dose recommendations based on rtCGM values differed by 1 to 3 units from POC-based recommendations. Conclusion: rtCGM for inpatient diabetes monitoring is feasible. Evaluation of individual rtCGM-POC paired values suggested that using rtCGM data for management decisions poses minimal risk to patients. Further studies to establish the safety and cost implications of using rtCGM data for inpatient diabetes management decisions are warranted.
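The dose-discordance check described above can be sketched by applying one correction-dose scale to paired rtCGM and POC values and counting pairs where the two readings imply different doses. The sliding scale and the glucose pairs below are hypothetical, not the hospital's protocol or the study's data.

```python
# Count rtCGM/POC pairs that map to different correction doses under
# one (hypothetical) sliding scale.

def correction_dose(glucose_mgdl):
    """Illustrative sliding scale: extra insulin units by glucose band."""
    if glucose_mgdl < 150:
        return 0
    if glucose_mgdl < 200:
        return 2
    if glucose_mgdl < 250:
        return 4
    return 6

pairs = [(148, 155), (210, 195), (260, 255), (180, 178)]  # (rtCGM, POC)
discordant = sum(1 for cgm, poc in pairs
                 if correction_dose(cgm) != correction_dose(poc))
print(discordant)  # 2
```

Note how small glucose disagreements matter only when they straddle a dosing-band boundary, which is why only a fraction of pairs yield different doses.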
Project description: Background: Simulated data are a powerful tool for research, enabling benchmarking of blood glucose (BG) forecasting and control algorithms. However, expert-created models provide an unrealistic view of real-world performance because they lack the features that make real data challenging, while black-box approaches such as generative adversarial networks do not enable systematic tests to diagnose model performance. Methods: To address this, we propose a method that learns the missingness and error properties of continuous glucose monitor (CGM) data collected from people with type 1 diabetes (OpenAPS, OhioT1DM, RCT, and Racial-Disparity), and then augments simulated BG data with these properties. On the task of BG forecasting, we test how well our method brings performance closer to that of real CGM data compared with current simulation practices for missing data (random dropout) and error (Gaussian noise, CGM error model). Results: Our method had the smallest performance difference versus real data compared with random dropout and Gaussian noise when individually testing the effects of missing data and error on simulated BG in most cases. When missingness and error were combined, our approach was significantly better than Gaussian noise and random dropout for all data sets except OhioT1DM. Our error model significantly improved results on diverse data sets. Conclusions: We find a significant gap between BG forecasting performance on simulated and real data, and our method can be used to close this gap. This will enable researchers to rigorously test algorithms and obtain realistic estimates of real-world performance without overfitting to real data or incurring the expense of data collection.
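The "learned missingness" idea above can be sketched simply: estimate the gap-length distribution from real CGM traces, then punch gaps with those lengths into a simulated trace, instead of dropping points uniformly at random. This is a toy version of the approach, not the paper's method; the traces and gap lengths are synthetic.

```python
# Learn gap lengths from a real trace (None = missing sample), then
# apply gaps with those lengths to a simulated trace.
import random

def gap_lengths(trace):
    """Lengths of consecutive runs of missing (None) samples."""
    lengths, run = [], 0
    for v in trace:
        if v is None:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

def apply_learned_gaps(sim, learned, n_gaps, seed=0):
    rng = random.Random(seed)
    sim = list(sim)
    for _ in range(n_gaps):
        length = rng.choice(learned)
        start = rng.randrange(0, max(1, len(sim) - length))
        sim[start:start + length] = [None] * length
    return sim

real = [100, None, None, 110, 120, None, 130]
print(gap_lengths(real))  # [2, 1]
```

Unlike uniform random dropout, this preserves the bursty structure of real sensor outages (e.g., long gaps from sensor changes or signal loss).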