In This Issue...
Risk of Hypoglycemia Spans the Range of HbA1c Levels
Severe hypoglycemia is common in type 2 diabetics regardless of their level of glycemic control as measured by HbA1c, and risk of hypoglycemia is higher in patients with either near-normal or poor glycemic control (Diabetes Care 2013; doi:10.2337/dc13-0610). This finding, which runs contrary to conventional wisdom that risk of hypoglycemia is highest in patients with the lowest HbA1c levels, suggests that strategies to improve the safety of glucose-lowering therapies need to be directed not only to patients with poorly controlled disease, but also to those with near-normal glycemic control.
The researchers conducted the study because recent guidance from professional organizations has called for individualized decision-making in diabetes that takes into account patient goals and preferences, as well as the risks and benefits of glucose-lowering therapy, in setting glycemic targets. These recommendations also suggest less intensive glucose control strategies for older individuals with limited life expectancy, advanced diabetes, cognitive impairment, or very poor health. While prior research has found an inverse relationship between HbA1c levels and severe hypoglycemia in type 1 diabetics, other analyses have shown conflicting associations between diabetes treatment, glycemic control, and risk of hypoglycemia.
The study involved a survey of 9,094 type 2 diabetics receiving care in a large, integrated healthcare system who were aged 30–77 years and were being treated with glucose-lowering therapy. The survey asked about severe hypoglycemia requiring assistance, and the researchers also accessed each respondent's last recorded HbA1c level in the year prior to the study period.
Overall, 10.8% of respondents experienced hypoglycemia. Compared to those with HbA1c levels of 7–7.9%, the relative risk of hypoglycemia in a fully adjusted model was 1.25, 1.01, 0.99, and 1.16 in those with HbA1c <6%, 6–6.9%, 8–8.9%, and ≥9%, respectively. Based on these findings, the authors called for further research to identify management strategies and treatment factors that might mitigate hypoglycemia risk.
Timing of Stool Collection Key in Reducing False-Negative C. difficile Results
Empirical antimicrobial therapy for suspected Clostridium difficile infection (CDI) might result in false-negative polymerase chain reaction (PCR) test results if there are delays in stool specimen collection (Clin Infect Dis 2013;57:494–500). The findings suggest that in patients with suspected CDI and mild-to-moderate symptoms, it might be reasonable to stipulate that empirical therapy start only after a stool specimen has been collected, according to the authors. In patients with suspected severe CDI, in whom immediate empirical therapy is indicated, expedited specimen collection is imperative.
Guidelines call for empirical antimicrobial treatment for patients suspected of having severe CDI, but in practice, clinicians routinely prescribe empirical therapy even in patients with mild-to-moderate symptoms, especially when they anticipate delays in collecting specimens. However, little information is available about how likely empirical therapy is to cause false-negative CDI test results.
Over a 4-month period, the authors examined the effect of CDI treatment on CDI test results. They looked at the time for test results, including PCR, glutamate dehydrogenase, and toxigenic culture, to convert from positive to negative during CDI therapy. The authors found these tests all converted at similar rates. In the case of PCR, 14%, 35%, and 45% of positive CDI tests converted to negative after 1, 2, and 3 days of treatment, respectively. Overall, 44% of empirically treated CDI patients converted to negative PCR and toxigenic culture results, compared with none of the CDI patients who did not receive empirical therapy. All the patients who converted to negative received at least 24 hours of CDI therapy prior to stool collection.
Urinary CXCL9 Early Detector of Acute Kidney Transplant Rejection
Urinary CXCL9 protein is a robust marker for ruling out acute kidney transplant rejection and for stratifying patients into low- and high-risk groups for allograft injury (Am J Transplant 2013; doi: 10.1111/ajt.12426). According to the authors, the findings lay the foundation for future research aimed at improving kidney transplant outcomes through biomarker-guided decision-making.
The researchers conducted a prospective, multicenter observational study to validate biomarkers for diagnosing acute rejection and stratifying patients based on risk of developing acute rejection or progressive renal dysfunction. Single-center studies had suggested several biomarkers might be useful for these purposes, but without validation in a multicenter population, they have not been adopted in clinical practice.
The study involved 280 kidney transplant recipients. Blood and urine samples were collected prior to transplant surgery, and on day 3, weeks 1–4, and months 2–6, 12, and 24 after transplant. The investigators analyzed a panel of candidate biomarkers using methods ranging from gene expression and urine protein profiling to enzyme-linked immunosorbent assays.
Of all the biomarkers investigated for diagnosing acute rejection, urinary CXCL9 mRNA and urinary CXCL9 protein were the most robust. Levels were significantly higher in patients with acute rejection, as determined by biopsy, and elevations were detectable up to 30 days before graft dysfunction became evident clinically. The authors also found that absence of CXCL9 during acute graft dysfunction ruled out acute rejection with a negative predictive value of 92%, and that sustained low levels 6 months after transplantation identified patients at low risk for developing acute rejection over the next 18 months.