Reference interval cut-points for general diagnostic screening are usually determined by a methodology unrelated to medical outcomes: the central 95% of test values from a “healthy” cohort is defined as the “Reference Interval,” and the remaining 5% are flagged as “Low” or “High” to guide the physician and patient.  There are many problems with this method, including the difficulty of identifying a healthy cohort, the difficulty of assembling the number of subjects required for statistical power, and the indefensible and self-contradictory logic of flagging as abnormal those results beyond arbitrary limits when the entire cohort was defined as healthy.

These problems arise from an antiquated methodology, established in an era long before access to electronic medical records (EMRs). We propose a replacement methodology from the perspectives of modern informatics and clinical chemistry, associating risk of patient outcomes with analyte test results. By gathering tests and outcomes from whole populations via hospitals’ EMRs, we avoid the problem of defining a “healthy” population, relying instead upon the analysis of big data to determine clinically safe reference interval cut-points.

We have studied the last in-hospital test results for serum potassium, sodium, and chloride (K, Na, Cl), using data from several medical centers with over 400,000 total cases.  For each analyte, we calculated an Outcome Risk function:

OR(x) = ONOwithin(Dx) / ONOwithout(Dx)

where   ONOwithin(Dx)  = odds of Negative Outcome for test results within Dx;
        ONOwithout(Dx) = odds of Negative Outcome for test results not within Dx;
        x = mean value of test results within the interval Dx;
        Negative Outcome = all-cause in-hospital mortality.
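As a concrete illustration, the Outcome Risk function can be computed directly from a flat EMR extract of last-in-stay results and outcomes. The following is a minimal sketch, not the study’s actual implementation; the record layout, bin width, and toy numbers in the comments are our assumptions:

```python
# Sketch: computing OR(x) from (test_result, died_in_hospital) pairs.
# Bin width and record layout are illustrative assumptions.

def outcome_risk(records, bin_width=0.2):
    """Return {bin mid-point: OR}, where OR is the odds of the negative
    outcome within the bin divided by the odds for all results outside it."""
    bins = {}
    for value, died in records:
        key = round(value / bin_width) * bin_width
        dead, alive = bins.get(key, (0, 0))
        bins[key] = (dead + died, alive + (1 - died))

    total_dead = sum(d for d, _ in bins.values())
    total_alive = sum(a for _, a in bins.values())

    risk = {}
    for key, (dead, alive) in bins.items():
        out_dead = total_dead - dead
        out_alive = total_alive - alive
        if alive == 0 or out_alive == 0 or out_dead == 0:
            continue  # odds ratio undefined for this bin
        risk[round(key, 2)] = (dead / alive) / (out_dead / out_alive)
    return risk
```

In practice the same arithmetic fits in a few spreadsheet columns: one bin per row, a count of deaths and survivors per bin, and the within/without odds ratio.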

We found risk of mortality to be below average within these analyte intervals:

K = 3.4 to 4.4 mEq/L;   Na = 136 to 144 mEq/L;   Cl = 100 to 109 mEq/L.

In other words, we are suggesting that any test result associated with an outcome risk that places the patient at or below the average risk of mortality for all patients who took the same test, i.e., where OR(x) is less than or equal to 1, would be considered acceptable.  This standard would be independent of the analyte tested, and gives rise to the intervals cited above.
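Under this rule, the reference interval is simply the contiguous run of bins whose outcome risk sits at or below the all-patient average. A sketch, assuming a `risk` mapping of bin mid-points to OR values (the numbers in the test below are hypothetical, chosen only to illustrate the OR ≤ 1 rule):

```python
# Sketch: deriving cut-points from an outcome-risk curve.
# Reference interval = longest contiguous run of bins with OR <= 1.

def reference_interval(risk):
    """Return (low, high) bin mid-points spanning the longest contiguous
    run of bins at or below average risk, or None if no bin qualifies."""
    best, cur = [], []
    for x in sorted(risk):
        if risk[x] <= 1.0:
            cur = cur + [x]
            if len(cur) > len(best):
                best = cur
        else:
            cur = []
    return (best[0], best[-1]) if best else None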

Since we provide evidence-based risk estimates (mortality odds ratios) for values outside these cut-points, the “new” outcome-based reference interval, along with a patient’s result and its associated odds ratio, would be displayed on patient reports so that the ordering physician can interpret the result.  For example, a serum sodium test result of 132 mEq/L has an odds ratio of 2.0 compared with the average, while the risk at 145 mEq/L is 3.0 times the average. This is much more informative than a simple “high” or “low” flag, and requires no regulatory changes, just a redefinition of “reference interval.”
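Such a report line could be generated from a per-analyte lookup of OR(x) values. A sketch for sodium: the interval and the two odds ratios (132 → 2.0, 145 → 3.0) are the figures quoted above, but the lookup-table form and function are our illustrative assumptions:

```python
# Sketch: annotating a lab report with the outcome-based interval and
# the patient's mortality odds ratio (illustrative, not a product spec).

SODIUM_INTERVAL = (136, 144)       # outcome-based cut-points cited above
SODIUM_OR = {132: 2.0, 145: 3.0}   # OR(x) values quoted in the text

def report_line(result):
    low, high = SODIUM_INTERVAL
    flag = "within" if low <= result <= high else "outside"
    line = f"Na {result} mEq/L ({flag} reference interval {low}-{high})"
    odds = SODIUM_OR.get(result)
    if odds is not None:
        line += f"; mortality odds {odds:.1f}x average"
    return line
```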

We found similar cut-points with other Negative Outcomes (e.g., 1-year post-discharge mortality; unfavorable discharge) and when using data from other medical centers. Our high serum potassium cut-point is much lower than the current standards (which vary from 5.1 to 5.4 mEq/L), but is in excellent agreement with recent clinical studies of AMI patients.  We find risks with odds ratios from 2 to 4 within the distribution-based normal reference interval, between 4.8 and 5.4 mEq/L.

Our methodology allows reference interval cut-points to be generated by calculating outcome risk functions, and implemented simply in a spreadsheet in the lab from readily available EMR data. This associates actual patient outcomes with analyte values. This has not yet been applied in any medical laboratory setting, so we ask: Do our AACC colleagues agree that replacing the current population-distribution method with this risk-function method will provide more meaningful guidelines from the lab to physicians? And if so, how do we get this implemented as the new paradigm?
