Saline, Buffered Crystalloid Pose Equal Risk for Acute Kidney Injury

Amid concerns that saline might increase the risk of acute kidney injury (AKI), New Zealand researchers recently found that buffered crystalloid fluid therapy did not reduce the risk of AKI compared with saline (JAMA 2015;314:1701–10). According to the authors, the findings call for further large randomized controlled trials (RCTs) to evaluate the efficacy of buffered crystalloid versus saline in higher-risk populations. Saline (0.9% sodium chloride) is the most commonly used resuscitation fluid worldwide, but questions have been raised about its role in AKI.

At the same time, observational studies have suggested that buffered crystalloid solution might reduce the risk of AKI, but this intervention had not been studied in an RCT. This led researchers to conduct a prospective, blinded, double-crossover RCT comparing outcomes in intensive care unit (ICU) patients who received either saline or buffered crystalloid solution. Participating ICUs used each type of fluid twice during the 28-week study period.

The study’s primary outcome was the proportion of patients who developed AKI within 90 days. The authors defined AKI according to the risk, injury, failure, loss, and end-stage renal disease (RIFLE) criteria: at least a doubling of serum creatinine over baseline, or a serum creatinine level ≥3.96 mg/dL with an increase of ≥0.5 mg/dL. Of 2,278 eligible patients divided evenly between the treatments, 9.6% in the buffered crystalloid group developed AKI versus 9.2% in the saline group.
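
As a rough illustration only (not the trial's analysis code), this definition can be expressed as a simple rule; the Python sketch below uses a hypothetical function name and assumes creatinine is reported in mg/dL.

```python
def meets_aki_definition(baseline_creatinine: float, peak_creatinine: float) -> bool:
    """Illustrative check of the AKI definition described above.

    Inputs are serum creatinine values in mg/dL. Returns True if either
    criterion is met: at least a doubling over baseline, or a level
    >=3.96 mg/dL that has risen by >=0.5 mg/dL.
    """
    doubled = peak_creatinine >= 2 * baseline_creatinine
    high_and_rising = (peak_creatinine >= 3.96
                       and peak_creatinine - baseline_creatinine >= 0.5)
    return doubled or high_and_rising


# Example: a rise from 1.0 to 2.1 mg/dL meets the doubling criterion.
assert meets_aki_definition(1.0, 2.1)
```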

Overall, 3.3% of buffered crystalloid recipients and 3.4% of saline recipients required renal replacement therapy. An accompanying editorial noted that the study results should reassure providers that neither fluid is “particularly hazardous” when administered in a 2-L dose to patients at low-to-moderate risk. However, the authors cautioned that the large body of “circumstantial” evidence of a “harm signal” from saline, with “scant, if any evidence of comparative benefit,” should cause providers to proceed cautiously when ordering intravenous fluids.

Interventions to Reduce Lab Test Ordering

A systematic review of laboratory-based interventions to reduce lab test ordering by family physicians found that 10 studies achieved an average 35% reduction in 19 targeted tests (Clin Biochem 2015; doi:10.1016/j.clinbiochem.2015.09.014). The reported interventions included changing lab forms, negotiating a testing protocol, requiring lab approval to order a test, and providing feedback to providers.

The authors explored what had been reported in the literature on reining in lab test utilization because lab tests have “major downstream effects,” such as heavily influencing other healthcare-related decisions and causing patient inconvenience and discomfort. At the same time, lab test volumes have grown faster than the population, and in Canada, where the review was conducted, family physicians represent the largest cost center for test ordering.

Although the studies included in the review achieved an overall 35% reduction in test orders, results varied widely, from 0% to 100%. The two studies that required lab approval to order a test reported the most striking reductions, of 100% and 61%, respectively.

The studies that reported changing lab forms focused on different tests and achieved varying results. One achieved 21%, 38%, 70%, and 73% reductions in orders for serum lactate dehydrogenase, calcium, C-reactive protein, and rheumatoid factor, respectively. Another separated vitamin B12, folic acid, and ferritin, which had been grouped and labeled “anemia,” and saw 58%, 50%, 39%, 4%, and 2% reductions in folic acid, ferritin, B12, hemoglobin, and iron tests, respectively. The authors included a table summarizing the 10 studies and their interventions and outcomes.

Finding the Best Osmolarity Equation for Assessing Dehydration

After evaluating 39 osmolarity equations across five cohorts of older individuals, British researchers have determined that one, proposed in 2005 by Khajuria and Krahn, had the best agreement with directly measured osmolality as well as good diagnostic accuracy in receiver operating characteristic (ROC) plots (BMJ Open 2015;5:e008846). The authors were interested in assessing the diagnostic accuracy of osmolarity equations because dehydration is common in the elderly, but directly measured serum osmolality is only partially automated in the U.K. and is expensive, making it impractical as a routine test.

The authors assessed osmolarity in five cohorts totaling 595 patients older than 65 years, including healthy individuals living independently, frail individuals in residential care, hospitalized patients, patients receiving emergency care, and patients hospitalized with liver cirrhosis. The participants also spanned several European countries, had poor or adequate renal function, and in some cases had diabetes.

Participants were classified as hydrated, as having impending dehydration, or as dehydrated based on serum/plasma osmolality of 275 to <295 mOsm/kg, 295–300 mOsm/kg, or >300 mOsm/kg, respectively. The researchers considered osmolarity equations only if they included sodium, potassium, glucose, and urea, but not other analytes such as ionized calcium or lactate.
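
Expressed as a simple classification, and using a hypothetical function name, these bands might look like the sketch below; values under 275 mOsm/kg fall outside the categories described here.

```python
def hydration_category(osmolality_mosm_kg: float) -> str:
    """Classify hydration status using the measured-osmolality bands above."""
    if osmolality_mosm_kg > 300:
        return "dehydrated"
    if osmolality_mosm_kg >= 295:
        return "impending dehydration"
    if osmolality_mosm_kg >= 275:
        return "hydrated"
    return "below the bands described above"
```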

The difference between directly measured osmolality and calculated osmolarity ranged across the equations from –37.6 to 31.8 mOsm/L. The researchers narrowed their analysis to equations for which, in at least three of the five cohorts, the mean difference was between –1 and +1 mOsm/L or the p value was ≥0.01. Of these, one equation, osmolarity = 1.86 × (Na+ + K+) + 1.15 × glucose + urea + 14, performed best in terms of the percentage of patients with calculated osmolarity within 2% of measured osmolality, Bland-Altman analysis, differential bias, ROC plots, and sensitivity and specificity. With a cutpoint of ≥295 mOsm/L, this equation yielded a sensitivity of 85% and a specificity of 59%.
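
For illustration, a minimal sketch of that calculation and cutpoint, assuming all four analytes are entered in mmol/L and using hypothetical function names:

```python
def khajuria_krahn_osmolarity(na: float, k: float, glucose: float, urea: float) -> float:
    """Calculated osmolarity (mOsm/L) from the equation reported in the study.

    Inputs: sodium, potassium, glucose, and urea, all assumed to be in mmol/L.
    """
    return 1.86 * (na + k) + 1.15 * glucose + urea + 14


def flags_possible_dehydration(osmolarity_mosm_l: float) -> bool:
    """Screening cutpoint reported by the authors (>=295 mOsm/L gave
    85% sensitivity and 59% specificity against measured osmolality)."""
    return osmolarity_mosm_l >= 295


# Example with plausible values: Na 140, K 4.2, glucose 5.5, urea 6.0 mmol/L
osm = khajuria_krahn_osmolarity(140, 4.2, 5.5, 6.0)   # about 294.5 mOsm/L
print(osm, flags_possible_dehydration(osm))
```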

“We propose that clinical laboratories use this equation to report on hydration status of older people when reporting blood test results that include sodium, potassium, urea, and glucose,” said first author Lee Hooper. “We hope our findings will lead to pragmatic screening in older people to allow early identification of dehydration.”

Postoperative Cardiac Troponin Predicts Higher Risk of Death in Patients With Reduced eGFR

Postoperative cardiac troponin T (cTnT) ≥0.02 ng/mL in patients with an estimated glomerular filtration rate (eGFR) >30 mL/min/1.73 m² predicts a four- to six-times higher risk of death within 30 days of elective surgery (J Am Soc Nephrol 2015;26:2571–77). However, the same cTnT threshold reflected only a 1.5-fold increased risk in patients with eGFR ≤30 mL/min/1.73 m². The findings suggest that while postoperative cTnT “remains an important instrument to identify patients at high risk of postoperative death,” they also merit further investigation to better understand how to use cTnT in patients at risk of death and other adverse events, according to the authors.

The study was a post-hoc analysis of prior research that showed a borderline statistical interaction between kidney function and abnormal cTnT, one that did not meet the investigators’ a priori threshold for statistical significance. This prompted the researchers to analyze the interaction between several strata of preoperative kidney function and previously defined, prognostically important postoperative cTnT concentrations.

Adjusted hazard ratios for death with an abnormal cTnT concentration were greatest for eGFR of 45 to <60 mL/min/1.73 m² and 30 to <45 mL/min/1.73 m², at 6.15 and 6.30, respectively. The authors proposed explanations for why elevated cTnT might have less prognostic value in patients with eGFR ≤30 mL/min/1.73 m² than in those with better kidney function, including that these patients might develop measurable cTnT concentrations with less cardiac injury than patients with normal kidney function, and that cardiovascular events might not have been their predominant postoperative risk (because of rigorous preoperative screening).
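
Purely to illustrate how the strata line up, the hypothetical sketch below maps preoperative eGFR, in a patient with postoperative cTnT ≥0.02 ng/mL, to the adjusted hazard ratios quoted in this summary; strata whose values are not given here return None.

```python
from typing import Optional


def reported_hazard_ratio(egfr: float) -> Optional[float]:
    """Adjusted hazard ratio for 30-day death with postoperative cTnT >=0.02 ng/mL,
    as quoted above, by preoperative eGFR stratum (mL/min/1.73 m^2)."""
    if egfr <= 30:
        return 1.5    # roughly 1.5-fold increased risk
    if egfr < 45:
        return 6.30   # eGFR 30 to <45 stratum
    if egfr < 60:
        return 6.15   # eGFR 45 to <60 stratum
    return None       # hazard ratio for eGFR >=60 not quoted in this summary
```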