Listen to the Clinical Chemistry Podcast



Article

Tony Badrick, Andreas Bietenbeck, Alex Katayev, Huub H van Rossum, Mark A Cervinski, Tze Ping Loh on behalf of the International Federation of Clinical Chemistry and Laboratory Medicine Committee on Analytical Quality. Patient-Based Real-Time QC. Clin Chem 2020; 66: 1140–1145.

Guest

Dr. Tony Badrick is CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs based in Sydney, Australia.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Analyzing quality control specimens is a tool routinely used by medical laboratories to help ensure that correct results are reported for patient samples. That involves periodic analysis of materials with a known analyte concentration and estimating if measurement error is within acceptable limits. However, the adequacy of those samples to represent authentic patient samples is not often addressed. But what if we were to use the patient samples themselves as a source of quality control data? A Review paper appearing in the August 2019 issue of Clinical Chemistry re-examined this concept and was the topic of an earlier podcast on patient-based real-time quality control.

Now in the September 2020 issue of Clinical Chemistry, a Q&A feature follows up with differing perspectives on this topic. We asked five experts from the International Federation of Clinical Chemistry and Laboratory Medicine Committee on Analytical Quality to discuss the advantages of patient-based real-time quality control. The moderator of that Q&A session was Dr. Tony Badrick, who is CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs based in Sydney, Australia.

So let’s start off, Dr. Badrick, by reminding us what exactly patient-based real-time quality control is, and how it relates to, or integrates with, traditional internal quality control.

Tony Badrick:
So the concept behind patient-based real-time quality control is that you use some characteristic of the patient population to detect error in your assay. For example, let’s assume that you’re measuring sodium on some instrument. What patient-based real-time QC would do is take the sodium values for a group of patients, add a block of those sodium values together (the value from this patient, the value from the patient analyzed before, and the value from the patient analyzed before that), and form a mean.

You might take 20 or 30 of those consecutive patient values and use them to calculate a mean. That mean of, say, 20 patients becomes your quality control parameter. When you analyze the next patient sample, you add the new value into the mean and drop the oldest value off, so the mean keeps moving along. As each new patient comes in, the oldest patient drops out and the new value is used to calculate a new mean.

That’s the basis of it: you’re using some characteristic of the patient values, in this case the mean, to generate something that tells you about the assay. For example, let’s assume something goes wrong with the assay, and let’s look at sodium again. In a normal population, you might expect the mean of those consecutive sodium values to be 140 mmol/L. If there were a shift in bias in your assay, rather than the mean being 140, it might move to 142. So that’s how you can detect changes in the assay, things like bias changes, by monitoring something like the average of all your sodium values.
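As a minimal sketch of that moving window, here is what the calculation might look like in Python; the block size, target mean, and control limit are illustrative values, not recommendations or any vendor’s implementation:

```python
# Moving-window mean over consecutive patient results, as described above.
# All settings here are hypothetical, for illustration only.
from collections import deque

BLOCK_SIZE = 20        # number of consecutive patient results in the window
TARGET_MEAN = 140.0    # expected population mean for sodium (mmol/L)
CONTROL_LIMIT = 1.0    # assumed allowed deviation of the moving mean

window = deque(maxlen=BLOCK_SIZE)  # the oldest value drops off automatically

def add_result(value):
    """Add a new patient result; return True while the assay looks in control."""
    window.append(value)
    if len(window) < BLOCK_SIZE:
        return True  # not enough results yet to form a full block
    moving_mean = sum(window) / BLOCK_SIZE
    return abs(moving_mean - TARGET_MEAN) <= CONTROL_LIMIT
```

If a bias shifted the underlying results from 140 toward 142, the moving mean would drift past the limit and add_result would start returning False.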

You don’t need to use the mean; you can use other statistics. You might use a median, particularly when the population might not be normally distributed, or you might transform some characteristic of the population, taking the square root of the values or something like that. But it is always something related to the assay values for your population of samples. You can use other things as well: you might calculate the standard deviation of the population, or look at the percentage of patient values that fall outside the reference interval, and all of those will be affected if there’s a problem with the assay. What you’re using are the actual values of your patient samples to detect these changes. You compare, say, the mean of the sodiums with a control limit of plus or minus 2 SD, so you detect errors the same way you would with conventional internal quality control, but using patient values.
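Those alternative statistics can be computed over the same moving window. In this sketch, the sodium reference interval and the plus-or-minus 2 SD limit-setting rule follow the description above, while the specific numbers are assumptions:

```python
import statistics

def moving_median(window):
    """More robust than the mean when the population is not normally distributed."""
    return statistics.median(window)

def moving_sd(window):
    """Standard deviation of the window; inflates if imprecision increases."""
    return statistics.stdev(window)

def percent_outside_reference(window, low=135.0, high=145.0):
    """Percentage of results outside an assumed sodium reference interval."""
    outside = sum(1 for v in window if v < low or v > high)
    return 100.0 * outside / len(window)

def control_limits(in_control_values):
    """One common way to set limits: mean +/- 2 SD of the chosen statistic
    observed over a known in-control period, as described above."""
    m = statistics.mean(in_control_values)
    s = statistics.stdev(in_control_values)
    return m - 2 * s, m + 2 * s
```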

This certainly isn’t a new concept; it’s been around for a long time. Hematologists use it routinely as Bull’s algorithm, to detect errors in hemoglobin and some of the other parameters. So it’s certainly not a new concept. Why I think it’s become topical right now is that laboratories are now analyzing a lot of samples, so there are a lot of values, and we’ve got very sophisticated IT systems that can calculate these moving means or medians, with software that can be connected to your analyzer. So we’ve now got the means, no pun intended, to use these systems more consistently.

The advantage of this sort of patient-based real-time QC over conventional internal QC is that it’s continuous and real-time: every time you analyze a new sample, that value goes into the calculation, so almost every sample contributes to the QC process. It’s commutable, because you’re using patient samples rather than an artificial QC sample. It’s low cost, because you’re not using expensive commercial material. And it’s been shown to have the best error detection for assays with a low sigma, less than four, that is, where the ratio of biological variation to analytical imprecision is low.

So ultimately, when implemented, it saves a lot of time, because you detect errors sooner than you would with conventional QC. The other aspect of patient-based real-time QC is that you can detect not only problems with the assay but, if you design it appropriately, preanalytical errors as well: if you can identify where the samples came from, you can detect, for some assays, a problem with the transport of the samples or deterioration in the samples.

The other exciting idea about this concept is that you could use it for EQA, external quality assurance, as well. You can look at the medians of analyzers in different places and check that they don’t diverge when there’s a change in lot number or a change in calibration for a specific assay, so it has that advantage too.
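As a sketch of that EQA-style idea, and assuming a hypothetical extract of patient results with date, analyzer, and value columns (the file and column names are assumptions, not a real data source), daily medians can be compared across instruments like this:

```python
import pandas as pd

# Hypothetical extract of patient results: columns date, analyzer, value.
df = pd.read_csv("sodium_results.csv")

# One median per analyzer per day; a column that drifts after a lot or
# calibration change points to a site- or instrument-specific shift.
daily_medians = df.groupby(["date", "analyzer"])["value"].median().unstack()
print(daily_medians)
```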

You asked, “How do you integrate this patient-based real-time QC with conventional QC?” There are probably four ways you could do that. As I said, this type of QC is particularly sensitive for those assays where biological variation is low relative to analytical imprecision. So that’s where patient-based real-time QC might be very useful to complement conventional QC, or where you might have difficulty finding conventional QC material.

Another way to integrate the two forms of QC is to run them in parallel: keep doing what you’ve always done with conventional QC, but run patient-based real-time QC over the top of it. It’s another layer, but it may give you more confidence in your assays. You may move a little further down the pathway and use patient-based real-time QC as your main form of QC, running it all the time; when you detect a shift, you might revert to conventional QC techniques to confirm that there really has been a shift, and you may use conventional QC under those circumstances to perform corrective action and ensure that you’re back in control afterwards.

And perhaps the last form is where you replace conventional QC completely with patient-based real-time QC; that’s the furthest down the pathway. So which of those four options should you use? Patient-based real-time QC is an adjunct and a new tool for QC, so the choice should be based on risk. You should look critically at your current QC practice and see whether there are problems with it. Does it have sufficient error-detection capability for the assays you’re running? Does it allow detection of increased imprecision, which sometimes evades normal internal QC? Patient-based real-time QC should be implemented on those assays where there’s an obvious risk and where it will reduce that risk.

It can also be useful for very high volume assays, where you’re doing a lot of sodiums, or whatever, each day. Inevitably, in those circumstances, people run just a few internal QC samples, maybe once or twice a day or once a shift. There’s a high risk in those situations that, if there’s an error, you will have released a lot of results that may be wrong and will have to be re-assayed and re-sent. Patient-based real-time QC provides a real-time component of assay control, so it’s a real benefit and obviously reduces your risk. And sometimes the economics of an assay make it much more cost-effective to run patient-based real-time QC all the time rather than add more internal QC.

Bob Barrett:
Well it seems, from the responses to the Q&A session in Clinical Chemistry, that a variety of software and middleware are used to implement patient-based real-time quality control. Can you tell us about those solutions and how our listeners might look into obtaining them?

Tony Badrick:
Yes, there’s a lot of interest in this. As I said, people have been talking about this in chemistry for 30 years, but we really haven’t had the means, or, I think, the confidence in the statistical processes, to move down this pathway, even though, as I reiterate, hematologists have been using it routinely since the ’70s. So this is nothing new; it’s just new in our world, the clinical chemistry world.

So there are three sources you could look at. The aficionados have already got their own in-house software: many laboratories are using software they’ve developed themselves; they’re the trailblazers. The second source is the instrument manufacturers: many of them now see the benefit of this technique and in fact have the software already resident in their platforms, but at this stage they’re not really pushing its benefits, because they want the users, the customers, to come along and say, “I really want to use this now.” And as we’ll no doubt discuss later, there are some complexities in introducing patient-based real-time QC. So there is software on many large mainframe chemistry analyzer platforms sitting there right now, just waiting to be switched on, and it’s being honed all the time; further development will be pushed by more customers saying, “This is what I want.”

The third source is add-on middleware: there are certainly providers out there now who can give you very sophisticated systems that you can add on to your LIS to provide patient-based real-time QC. As I say, vendors are actively working to improve their onboard systems because of this customer demand, and many instrument vendors will see this as a competitive advantage. It’s coming, and there are options available.

Bob Barrett:
Well one of the respondents indicated that their vendor charged extra for real-time capability. Do you see that as a disincentive to adopting it?

Tony Badrick:
I think it’s a question of value. I’m not sure what the vendor would actually be charging for, but implementing patient-based real-time QC does take a fair amount of work, because you need to identify the best parameter to use for your patient population, along with some other variables. So if you can get a vendor to do that hard work for you, that’s value. You then have to train your staff and provide ongoing support, because staff may need to change some of the variables as you optimize the software for your particular population.

So you weigh what you get: do you get optimization for your population and the analytes you measure? Do you get training of your staff? Do you get ongoing support? When you take those things into account, bearing in mind that if you move towards patient-based real-time QC you will save a lot of money on the cost of internal QC, it becomes a question of value for the particular laboratory.

I’ll just mention one site, a community lab, mentioned in the Q&A we released today: they saved about 85% of the cost of their internal QC. That’s a significant saving to bear in mind. There are savings associated with this, and paying your vendor to provide that transition may well be very cost-effective.

Bob Barrett:
It seems that one of the most challenging issues confronting laboratories is obtaining the initial settings for their patient-based real-time QC. Tell us what’s involved there and how laboratories have addressed it.

Tony Badrick:
And as I said, that’s where the value from a vendor charging for this optimization could come in. The settings you need are, first of all, the statistic itself: you could use the mean or the median, or maybe the SD, or some other variable based on your population. So first, what’s the best thing to use, the mean or the median? The next setting is the number of samples, called the block size, that you’re going to use as the basis for calculating that statistic: how many patient samples go into the calculation? Then, inevitably, there will be some patient samples at extreme levels; with sodium, it may be 170 or 115, and they will have an impact on the ongoing calculation of your mean or median. So there will be some patient results that you will have to exclude; which ones do you exclude? That’s another variable: the exclusion limits. And the last setting is the control limit: at what level of, say, the sodium statistic do you flag that there’s a problem? Those are the things you need to determine, and together they become your algorithm. You need to do this for every analyte, so there is a fair amount of setup cost, but once it’s done, it should be stable as long as the population is stable.
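Pulled together, those four settings might look like the following Python sketch; every sodium number here is a placeholder chosen for illustration, not a recommendation:

```python
from collections import deque
from dataclasses import dataclass
import statistics

@dataclass
class PBRTQCSettings:
    statistic: str        # "mean" or "median"
    block_size: int       # number of consecutive results in the window
    exclude_below: float  # exclusion (truncation) limits for extreme results
    exclude_above: float
    lower_limit: float    # control limits on the moving statistic
    upper_limit: float

# Hypothetical per-analyte configuration for sodium.
sodium = PBRTQCSettings("mean", 20, 120.0, 160.0, 139.0, 141.0)

def check(window, value, s):
    """Apply exclusion limits, update the window, and test the control limits."""
    if not (s.exclude_below <= value <= s.exclude_above):
        return True  # extreme result excluded from the calculation
    window.append(value)
    if len(window) < s.block_size:
        return True
    stat = (statistics.mean(window) if s.statistic == "mean"
            else statistics.median(window))
    return s.lower_limit <= stat <= s.upper_limit
```

A window created as deque(maxlen=sodium.block_size) would then be fed each new patient result through check().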

So how do you do this? The way it’s done is simulation, using your own patient population, because this approach is patient-centric: it depends on the population you’ve got. You may need to dissect your population. You can imagine that for an analyte such as urate, where there’s a sex difference, you may need to split the population into male and female. And if you’re in a hospital situation dealing with outpatients, you may need separate outpatient and inpatient populations, because we know that some parameters, such as potassium, calcium, and albumin, differ between the inpatient and outpatient groups. So you may need to do some splitting of your population. You then take those population parameters and use simulation to identify the best block size, the best statistic (mean or median), the values to exclude, and the control limits to set. Once you’ve got your population, you can get support from a vendor, and there are a number of recently published papers that tell you what to look at when you run your simulation to optimize those four or five settings I’ve mentioned.

The other thing you need to do, obviously, is validate. You need to validate this approach, probably against your conventional QC approach, so you go through a process of formal validation, and then a training process.
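As a rough illustration of that simulation step, and assuming you have a list of historical in-control patient results to replay, a candidate configuration can be scored by how many results a simulated shift affects before it is flagged:

```python
import random
import statistics
from collections import deque

def results_to_detection(results, bias, block_size, limits):
    """Replay historical results, add `bias` from a random onset onwards,
    and return how many biased results are reported before detection.
    Assumes len(results) is well above block_size."""
    lo, hi = limits
    window = deque(maxlen=block_size)
    onset = random.randrange(block_size, len(results))
    for i, value in enumerate(results):
        if i >= onset:
            value += bias  # simulated systematic shift in the assay
        window.append(value)
        if len(window) == block_size and i >= onset:
            if not (lo <= statistics.mean(window) <= hi):
                return i - onset + 1
    return None  # shift never detected in this replay
```

Averaged over many replays, this gives the expected number of patient results affected before an error is detected, one common figure of merit for comparing candidate block sizes and limits.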

Bob Barrett:
Are there certain analytical assays or patient populations that do not lend themselves well to patient-based real-time QC?

Tony Badrick:
There are. It’s very useful for high-volume assays and, as I said, for assays where the sigma is low, less than four; those are the optimal candidates. But you do need a stable population. You may need to dissect your population into male and female, or inpatient and outpatient, but the resulting population needs to be stable. An instance where you might not have a stable population is real-time monitoring of glucose with a relatively small population: you’re in a hospital, but every Tuesday afternoon and Wednesday afternoon you’ve got a diabetic clinic. The average glucose will change when those outpatient values come in and corrupt the rest of your population.

So where you’ve got an unstable population like that, it’s not suitable. It’s not suitable for low test volumes: if you’re running an assay with small numbers of samples, there’s no point in calculating moving means or medians, because the population is too small, and you’re better off with conventional QC. If you’ve got a highly skewed distribution for an analyte, it doesn’t work as well; as you can imagine, means and medians will be affected by very skewed populations. It also doesn’t work well for assays such as tumor markers, where you may have a lot of very abnormal values and a very unstable population depending on which patients present, or for hormones, because of their cyclic nature and the timing of collection, or in pediatric populations, where there’s a lot of variation over children’s growth spurts and a lot of change over short periods as babies grow. And it’s not suitable for some point-of-care assays, such as glucometers.

Bob Barrett:
Patient-based real-time QC seems rather complex compared to routine internal QC. How will staff understand how to troubleshoot problems and what about instances where patient-based real-time QC and routine internal QC are at odds? How do you deal with that?

Tony Badrick:
I think it’s complex to implement, but once it’s been implemented, most patient-based real-time QC still relies on something being outside control limits, and those limits work the same way as in conventional quality control. The decision is really quite simple: it’s out of control or it’s in control. So once it’s been implemented, it’s relatively easy to understand. And I think it’s important to realize that conventional internal QC isn’t that well understood either. Recent papers have shown that, despite the number of potential rules laboratories could use, they all revert to very simple rules, and the rules conventionally used by most laboratories actually aren’t very good. So I’m not sure conventional QC works all that well now. With patient-based real-time QC, once it’s implemented, all somebody has to ask is: if it’s out of control, what do I do about it? So people need training not on patient-based real-time QC per se, but on “How do I troubleshoot an assay when it’s out of control? What are the likely causes? How do I fix those problems? How do I ensure the assay is back in control?”

And more importantly, I think: when an assay is out of control, what do I do with the patient results that may in fact be wrong? Do I re-run them all? Those questions about QC failure are important, but they apply whether it’s conventional QC or patient-based real-time QC. I also think there’s an issue with the way people react to failure on an analyzer; it’s almost a human thing, particularly if the failure occurs at the end of a shift, to think, “I’ll leave it for somebody else,” and not fix the problem.

So I think the bigger issue is that analyzers probably need to become more autonomous. Analyzers need onboard software that analyzes the patient-based real-time QC, or conventional QC, and, on the basis of errors detected by that QC algorithm, the analyzer, perhaps using artificial intelligence, looks at its own internal flags and might decide that it’s time to recalibrate, taking a lot of that decision-making out of the hands of the operator. I think that’s the next phase for analyzers, becoming more autonomous in running themselves, and patient-based real-time QC fits nicely into that scenario because you’ve got this continuous monitoring of the patient situation.

And if there’s a difference between conventional QC and patient-based real-time QC, I think it depends on the situation, but there are problems with conventional QC. The material is synthetic; it might not be commutable. It involves somebody making the material up, often reconstituting it or adding a diluent to it, and sometimes the QC sample is left on the bench uncapped, or it may have deteriorated. So quite often, when an error is detected with conventional QC, there’s a problem with the material itself rather than the assay. People who troubleshoot, no matter what form of QC it is, need to be aware of the limitations of both types of QC, but I feel that patient-based real-time QC is easier to troubleshoot than conventional QC. And I also feel that if there’s an inconsistency between the signals you’re getting from the two forms of QC, you need to look critically at the internal QC and ensure that the material is adequate.

Bob Barrett:
Well, finally Dr. Badrick, how will regulatory authorities that almost universally require routine internal QC procedures react to the use of patient-based real-time QC?

Tony Badrick:
In fact, it’s probably not as difficult as you might think. Certainly, people have said, “We won’t be able to get this through our regulator if we move to this form of QC,” and that will depend on the jurisdiction and the requirements of the regulator there. But if you look at the US regulatory requirements, the most current CLSI guidelines describe a risk-based approach to QC, and they in fact include patient-based real-time QC, in a separate section, as part of their recommendations.

It’s recognized in the US as a risk-based way of approaching a quality control strategy. And often what labs will do is use a combination: they’ll use less internal QC, perhaps running some conventional QC at the beginning of the day and at the end of the day, and during the rest of the day they’ll use patient-based real-time QC to provide most of their signals. So there will be a combination of both systems, which certainly meets the US regulatory guidelines. Similarly, in Australia, our guidelines would allow patient-based real-time QC as long as you’ve validated the QC approach you’ll use. And I think that as long as you’ve got adequate validation and documentation, there won’t be a problem with your regulator.

Bob Barrett:
That was Dr. Tony Badrick, CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs based in Sydney, Australia. He has been our guest in this podcast on a Q&A feature on patient-based real-time quality control, for which he served as moderator. That paper appears in the September 2020 issue of Clinical Chemistry. I’m Bob Barrett. Thanks for listening.