Standardized Prediction Rules Based on Serial hCG Results Improve Outcomes

A study that compared observed human chorionic gonadotropin (hCG) curves with expected curves in women with suspected early pregnancy validated that standardized prediction rules based on serial hCG concentrations can be applied accurately to these patients, reducing both the time and the number of visits to diagnosis (Fertility and Sterility 2012;97:101–6). However, because the investigators found misclassifications in each model they considered, they emphasized that clinical judgment should trump serial hCG results and any prediction rules.

Serial hCG results can be valuable in distinguishing early intrauterine from ectopic pregnancies when transvaginal ultrasound findings are unclear and baseline hCG levels are close to or below the hCG discriminatory level. While comparing serial hCG levels with expected values has been shown to be accurate and to decrease the time and number of visits before a definitive diagnosis, this strategy had not been validated in a population distinct from the one in which it was developed, according to the authors.

The study involved 1,005 patients with suspected early pregnancy who were followed with serial hCG measurements until they had a definitive diagnosis of ectopic pregnancy, intrauterine pregnancy, or miscarriage. To predict outcomes, the researchers compared actual hCG levels with recommended thresholds to assess deviation from defined normal curves. They then used several statistical models to evaluate how well various combinations of expected hCG increases and decreases predicted outcomes.

The combination that provided the best balance of sensitivity and specificity used an expected 2-day increase in hCG levels of 35% and an expected 2-day decrease of 36–47%. This was not the combination with the highest overall accuracy for all three outcomes; however, the most accurate model misclassified more ectopic pregnancies than the optimal one.
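For illustration, the decision logic behind such a rule can be sketched as a simple threshold check on two hCG values drawn 2 days apart. This is a minimal sketch assuming the reported cutoffs (a 35% minimum rise and, for the decline rule, the 36% lower bound of the reported 36–47% range); the function name, return labels, and example values are hypothetical and do not reproduce the study's actual models, which were applied alongside clinical follow-up and judgment.

```python
# Illustrative sketch only: applies the reported 2-day rise/fall thresholds as a
# simple screening rule. The 36% decline cutoff (lower bound of the reported
# 36-47% range) and all labels are assumptions for illustration.

def classify_hcg_trend(hcg_day0, hcg_day2, min_rise=0.35, min_fall=0.36):
    """Flag a 2-day hCG change against the reported rise/fall thresholds."""
    change = (hcg_day2 - hcg_day0) / hcg_day0
    if change >= min_rise:
        return "rise consistent with a viable intrauterine pregnancy"
    if change <= -min_fall:
        return "fall consistent with a resolving miscarriage"
    return "abnormal trend: further evaluation (possible ectopic pregnancy)"

print(classify_hcg_trend(1500, 2100))  # +40% rise  -> consistent with viable IUP
print(classify_hcg_trend(1500, 800))   # -47% fall  -> consistent with miscarriage
print(classify_hcg_trend(1500, 1600))  # +7% rise   -> abnormal, needs follow-up
```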

HbA1c Cutoff for Prediabetes: 5.7% Is the Most Cost-Effective for Prevention Efforts

Using an HbA1c cutoff of 5.7% or higher for prediabetes would be cost-effective in focusing prevention efforts, but lowering the cutoff to 5.6% could also be cost-effective if prevention costs could be reduced, according to new research (Am J Prev Med 2012;42:374–381). These findings are intended to facilitate discussion around which HbA1c cutoff to use in defining prediabetes.

The American Diabetes Association in 2010 recommended HbA1c testing as one means of diagnosing diabetes, with a cutoff of 6.5%. However, determining a cutoff for prediabetes has been more challenging because the relationship between the incidence of type 2 diabetes and HbA1c <6.5% is continuous, with no clear threshold associated with an accelerated risk of diabetes or other diabetes-related complications. Therefore, professional organizations have recommended at least three different cutoffs: 6.0%, 5.7%, and 5.5%.

Since being diagnosed with prediabetes presumably would determine patients’ eligibility for interventions that might prevent diabetes, the authors were interested in exploring how different cutoffs and levels of intervention might affect the costs and benefits associated with these interventions. Lowering the HbA1c cutoff would increase the health benefits of interventions aimed at preventing diabetes, but at higher costs. The researchers used a Markov simulation model and a representative sample from the 1999–2006 National Health and Nutrition Examination Survey to examine the cost-effectiveness associated with progressive 0.1% decreases in the HbA1c prediabetes cutoff from 6.4% to 5.5%.

Assuming the conventional $50,000 per quality-adjusted life year (QALY) cost-effectiveness benchmark, the authors found that lowering the HbA1c prediabetes cutoff incrementally from 6.0% to 5.5% would increase the QALYs gained, but at ever-higher costs, eventually becoming economically inefficient. They determined that a cutoff of ≥5.7% would be the most cost-effective.
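As a rough illustration of the incremental cost-effectiveness reasoning described above, the sketch below compares successively lower cutoffs against the $50,000/QALY benchmark. The cost and QALY figures are hypothetical placeholders, not the study's estimates; only the benchmark and the general pattern (lower cutoffs add QALYs at sharply rising cost) come from the article.

```python
# Minimal sketch of the cost-effectiveness logic described above. The cost and
# QALY numbers are hypothetical placeholders, not the study's results.

WTP = 50_000  # willingness-to-pay benchmark, $ per QALY gained

# (cutoff, total cost, total QALYs) for progressively lower prediabetes cutoffs
# -- illustrative values only
strategies = [
    ("6.0%", 1_000_000, 100.0),
    ("5.7%", 1_400_000, 110.0),
    ("5.5%", 2_200_000, 112.0),
]

previous = strategies[0]
for cutoff, cost, qalys in strategies[1:]:
    # incremental cost-effectiveness ratio vs. the next-higher cutoff
    icer = (cost - previous[1]) / (qalys - previous[2])
    verdict = "cost-effective" if icer <= WTP else "economically inefficient"
    print(f"lowering cutoff to {cutoff}: ICER = ${icer:,.0f}/QALY -> {verdict}")
    previous = (cutoff, cost, qalys)
```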

Four-Marker Panel Improves AKI Risk Prediction After Cardiac Surgery

A panel of four urinary and plasma biomarkers measured on the day of acute kidney injury (AKI) diagnosis improves risk stratification and identifies cardiac surgery patients at higher risk for AKI progression and worse outcomes (J Am Soc Nephrol 2012 doi: 10.1681/ASN.2011090907). The findings may improve monitoring and care of postoperative patients, guide patient counseling and decision-making, and facilitate participation in interventional trials of AKI, according to the authors.

The researchers used samples from the Translational Research Investigating Biomarker Endpoints in AKI (TRIBE-AKI) study to evaluate four biomarkers: urinary interleukin-18 (IL-18), urinary albumin-to-creatinine ratio (ACR), and urinary and plasma neutrophil gelatinase-associated lipocalin (NGAL). The TRIBE-AKI patients in this analysis, all adults, underwent cardiac surgery, and 34.9% developed AKI postoperatively.

Among the urinary biomarkers, an ACR >133 mg/g was associated with 3.4-fold higher odds of AKI progression compared with an ACR <35 mg/g. Urine IL-18 levels >185 pg/mL were associated with a three-fold risk of progression compared with lower levels in adjusted analyses, while urine NGAL concentrations were not significant after adjustment for clinical variables. Plasma NGAL had the most predictive power: levels >323 ng/mL were associated with more than a seven-fold risk of AKI progression after adjustment for clinical variables, compared with patients in the two lowest quintiles.
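A minimal sketch of how these cutoffs could be combined into a simple flag is shown below. The composite rule, variable names, and example values are assumptions for illustration only; the study reports adjusted odds and risk estimates for each marker individually rather than any such scoring rule.

```python
# Illustrative sketch only: flags which reported cutoffs a patient exceeds.
# The composite rule and names are assumptions; the study did not publish
# a combined score of this kind.

CUTOFFS = {
    "urine_ACR_mg_per_g": 133,     # >133 mg/g vs <35 mg/g: ~3.4-fold odds
    "urine_IL18_pg_per_mL": 185,   # >185 pg/mL: ~3-fold risk (adjusted)
    "plasma_NGAL_ng_per_mL": 323,  # >323 ng/mL: >7-fold risk (adjusted)
}

def flag_high_risk_markers(results):
    """Return the markers whose measured value exceeds its reported cutoff."""
    return [marker for marker, cutoff in CUTOFFS.items()
            if results.get(marker) is not None and results[marker] > cutoff]

patient = {"urine_ACR_mg_per_g": 150,
           "urine_IL18_pg_per_mL": 90,
           "plasma_NGAL_ng_per_mL": 410}
print(flag_high_risk_markers(patient))
# ['urine_ACR_mg_per_g', 'plasma_NGAL_ng_per_mL']
```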

The findings build on the authors' previous research, which showed that measuring these same biomarkers within six hours of surgery predicted subsequent rises in serum creatinine as well as adverse patient outcomes. The results elevate the role of ACR in the setting of AKI and also support a role for proteinuria as a biomarker of AKI after cardiac surgery, according to the researchers.

Since 95% of all AKI cases were diagnosed within the first 72 hours after surgery, and the majority of progressive AKI cases also became evident within that timeframe, the authors recommended that future investigations of this nature should focus on this early but crucial post-operative window.

Higher FIT Screening Rates Offset Slightly Lower Performance

Interim results from a randomized, controlled trial comparing colorectal cancer screening strategies showed that the numbers of cancer cases detected by fecal immunochemical testing (FIT) and colonoscopy were similar, although more adenomas were found in the colonoscopy group (N Engl J Med 2012;366:697–706). The researchers also found that subjects in the FIT arm were more likely to participate in screening than those assigned to the colonoscopy group. While the superiority of colonoscopy in detecting adenomas should be considered a potential advantage of that strategy, the lower participation rate in the colonoscopy group may diminish this apparent benefit, according to the authors.

The study opened recruitment in 2009; 10-year follow-up will be completed in 2021. Of an initial 57,404 subjects randomly assigned to undergo either colonoscopy or FIT, more than 26,000 in each group were eligible. However, the colonoscopy participation rate was 24.6%, versus 34.2% for FIT. The primary outcome of the study is the rate of death from colorectal cancer at 10 years. The interim report described participation rates, diagnostic findings, and major complications from baseline screenings. Colorectal cancer was found in 0.1% of both groups; however, advanced adenomas were detected in 1.9% of colonoscopy participants versus 0.9% of those in the FIT arm.
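A back-of-the-envelope calculation, not taken from the paper, helps show how the participation gap narrows the difference in yield per invited subject. It assumes the detection rates above apply per screened participant, which is an assumption made for illustration only.

```python
# Illustrative arithmetic only: per-invitee yield = participation rate x
# per-participant detection rate, using the figures reported above and assuming
# the detection rates are per screened participant.

groups = {
    "colonoscopy": {"participation": 0.246, "cancer": 0.001, "advanced_adenoma": 0.019},
    "FIT":         {"participation": 0.342, "cancer": 0.001, "advanced_adenoma": 0.009},
}

for name, g in groups.items():
    cancer_per_invitee = g["participation"] * g["cancer"]
    adenoma_per_invitee = g["participation"] * g["advanced_adenoma"]
    print(f"{name}: ~{cancer_per_invitee:.4%} cancers and "
          f"~{adenoma_per_invitee:.3%} advanced adenomas per invited subject")
```

Under these assumptions, FIT's higher participation yields slightly more cancers per invited subject despite identical per-participant detection, while colonoscopy's advanced-adenoma advantage shrinks but persists.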