Statistical Method Influences cTn Reference Interval

New research adds statistical methods to the list of factors that can markedly change the 99th percentile upper limit for cardiac troponin I (cTn), the formal cutpoint for detecting myocardial infarction (MI) recommended by the third universal definition of MI (Clin Biochem 2016; doi:10.1016/j.clinbiochem.2016.08.012). Depending on the statistical method used, the authors found cTn 99th percentiles that were up to 64.1% lower.

Prior studies have shown that sample size, age, sex, and subclinical disease all can affect determination of the cTn 99th percentile upper limit derived from a healthy population. The new findings make the case for establishing formal criteria for “population selection and statistical analysis for any studies performed to define the 99th percentile reference interval,” according to an accompanying editorial.

The study involved 521 subjects participating in the Prospective Investigation of the Vasculature in Uppsala Seniors (PIVUS) study, 266 men and 255 women. The investigators used three statistical software packages (Analyse-it, MedCalc, and SPSS 21) and applied the nonparametric method, the Harrell-Davis bootstrap method, and the robust method to determine the cTn 99th percentile. Current guidance recommends the nonparametric method, by which the researchers calculated 99th percentiles of 37 ng/L for the total population, 42 ng/L in men, and 25 ng/L in women. The most marked differences appeared when comparing the robust method to the nonparametric method: values differed by -12.3% to -64.1%. cTn 99th percentiles calculated by the robust method also varied by up to 44.2% depending on the statistical software package used. Differences between the Harrell-Davis bootstrap method and the nonparametric method were smaller: -4.1% to -7.4% in the total population, +4.6% to +5.7% in men, and -1.8% to -5.2% in women.
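For readers curious how these estimators differ in practice, below is a minimal Python sketch comparing nonparametric and Harrell-Davis 99th percentile estimates. The simulated lognormal values are illustrative stand-ins for cTn results, not the study's data, and the robust method is omitted here because its biweight calculation is considerably more involved.

```python
import numpy as np
from scipy.stats.mstats import hdquantiles

rng = np.random.default_rng(0)
# Stand-in for 521 measured cTn values (ng/L); real troponin data are right-skewed.
ctn = rng.lognormal(mean=1.5, sigma=0.8, size=521)

# Nonparametric estimate: rank-based interpolation at (n + 1) * 0.99,
# the convention commonly used in reference-interval work.
p99_nonparametric = np.percentile(ctn, 99, method="weibull")

# Harrell-Davis estimate: a weighted average over all order statistics.
p99_harrell_davis = hdquantiles(ctn, prob=[0.99])[0]

print(f"Nonparametric 99th percentile: {p99_nonparametric:.1f} ng/L")
print(f"Harrell-Davis 99th percentile: {p99_harrell_davis:.1f} ng/L")
```

With only a handful of observations above the 99th percentile in a sample of 521, even modest differences in how an estimator weights the upper tail can shift the result noticeably.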

The method used to detect presumed outliers had an even greater effect on cTn 99th percentile calculations: "the 99th percentiles were up to 60.2% lower following outlier elimination using the method of Tukey," according to the authors. Based on their findings, the authors recommend the nonparametric method and "more conservative methods to detect outliers."
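Tukey's rule labels as outliers any points beyond the interquartile-range fences. The sketch below uses the conventional k = 1.5 fence, an assumption on our part since the article does not state the multiplier used:

```python
import numpy as np

def tukey_trim(values: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Drop points outside [Q1 - k*IQR, Q3 + k*IQR], i.e., Tukey's fences."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    keep = (values >= q1 - k * iqr) & (values <= q3 + k * iqr)
    return values[keep]
```

On right-skewed data such as troponin, the upper fence can fall well below the true 99th percentile, so high-but-genuine results get discarded and the recalculated 99th percentile drops sharply, consistent with the up-to-60.2% reductions the authors report.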

Pediatric Septic Shock QI Initiative Reduced Mortality, Improved Patient Care Processes

Children who received care according to a septic shock treatment bundle were about one-fifth as likely to die as those who did not receive such care (Pediatrics 2016;138:e20154153). The bundle was part of a septic shock quality improvement (QI) initiative undertaken by Primary Children's Hospital in Salt Lake City, Utah.

In assessing the results of the QI initiative between 2007 and 2014, researchers also found increased adherence to QI process measures. For instance, during the first 2 years of the initiative, compliance was 55% with the goal that attending physicians assess patients meeting sepsis screening criteria within 15 minutes of room placement. During the last 2 years of the initiative, compliance with this measure had risen to 84%.

Diagnostic testing in the septic shock treatment bundle includes a complete blood count with differential, blood culture, and analysis of lactate, venous or capillary blood gas, and selected electrolytes, with the latter three assessed via point-of-care testing.

The septic shock treatment bundle focuses on early recognition and timely delivery of intravenous fluids and antibiotics. The Primary Children's team also developed a screening tool/algorithm; in evaluating its effectiveness, researchers found the tool's sensitivity and specificity were 97% and 98%, respectively, in 2013, and 100% and 97% in 2014.
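Sensitivity and specificity here follow the standard 2×2 definitions. The counts in the sketch below are made up purely to illustrate the arithmetic; the article does not report the underlying cell counts:

```python
def screen_performance(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Return (sensitivity, specificity): TP/(TP+FN) and TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 97 of 100 true septic shock cases flagged,
# 980 of 1,000 non-cases correctly passed over.
sens, spec = screen_performance(tp=97, fn=3, tn=980, fp=20)
print(f"Sensitivity {sens:.0%}, specificity {spec:.0%}")  # Sensitivity 97%, specificity 98%
```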

The authors concluded that the QI initiative “improved septic shock program goal adherence and decreased mortality without increasing … admissions or … length of stay.”

MS-based Analysis Holds Promise for Noninvasive NASH Diagnosis

A score combining mass spectrometry (MS)-based analysis of lipids and metabolites in blood samples, fasting insulin and aspartate aminotransferase (AST) results, and PNPLA3 genotype was “significantly better” than a score based on clinical or metabolic profiles in determining risk of nonalcoholic steatohepatitis (NASH) (Clin Gastroenterol Hepatol 2016;14:1463–72). The findings could aid in developing a noninvasive means of diagnosing NASH, currently identified via liver biopsy.

The study involved 318 patients who underwent a liver biopsy as part of a diagnostic workup for suspected NASH. In addition to biopsies, the patients had traditional laboratory analysis for various analytes, including HbA1c, plasma glucose, low- and high-density lipoproteins, triglycerides, AST, alanine aminotransferase, alkaline phosphatase, and γ-glutamyl transpeptidase. They also underwent PNPLA3 genotyping.

The researchers also used ultra-performance liquid chromatography and two-dimensional gas chromatography combined with time-of-flight MS to conduct lipidomic and metabolomic analyses of the patients' blood samples. With data from these analyses, they developed separate metabolomic- and lipidomic-based risk models, as well as a model based on all diagnostic and clinical data (NASHClinLipMet Score). To build and validate these risk models, the investigators randomly divided the patients into estimation and validation groups.
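As a rough sketch of that estimation/validation workflow, the Python below fits a model on one random half of the cohort and scores the other. The random features, labels, and logistic regression are placeholders; the study's actual score combines MS-derived lipid and metabolite measures, fasting insulin, AST, and PNPLA3 genotype:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder feature matrix and biopsy-confirmed NASH labels (0/1);
# real inputs would be lipidomic/metabolomic features, insulin, AST, genotype.
rng = np.random.default_rng(1)
X = rng.normal(size=(318, 10))
y = rng.integers(0, 2, size=318)

# Randomly divide patients into estimation (model-building) and validation groups.
X_est, X_val, y_est, y_val = train_test_split(X, y, test_size=0.5, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_est, y_est)
auroc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"Validation AUROC: {auroc:.3f}")
```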

The authors found the area under the receiver operating characteristic curve for the NASHClinLipMet Score was 0.86, compared with 0.779, 0.719, and 0.792 for risk models based on lipidomic, metabolomic, and clinical variables, respectively. The authors cautioned that with 80.6% sensitivity and 75.3% specificity, the NASHClinLipMet Score would miss 19.4% of patients who have NASH while falsely identifying nearly one-quarter of those who do not have the disease. In addition, the score was developed in European Caucasians and might not be valid in other patient populations.
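The missed-case and false-alarm figures follow directly from the reported sensitivity and specificity, as this quick check shows:

```python
sensitivity, specificity = 0.806, 0.753
print(f"NASH cases missed (1 - sensitivity): {1 - sensitivity:.1%}")  # 19.4%
print(f"Non-NASH flagged  (1 - specificity): {1 - specificity:.1%}")  # 24.7%
```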