Listen to the Clinical Chemistry Podcast



Article

Tony Badrick, et al. Patient-Based Real-Time Quality Control: Review and Recommendations. Clin Chem 2019;65:962-71.

Guest

Dr. Tony Badrick is CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs based in Sydney, Australia.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Analyzing quality control specimens is a daily routine in modern medical laboratories that helps assure correct results are reported for patient samples. It involves periodic analysis of materials with a known analyte concentration and estimating whether measurement error is within acceptable criteria. The QC samples may be obtained commercially or prepared within the laboratory. However, how adequately do those samples represent authentic patient samples? What if we were to use the patient samples themselves as a source of quality control data? That’s not as radical as one might think, as implementation and application of moving averages of patient results was suggested as early as the 1960s, but the practice never really caught on.

A Review paper appearing in the August 2019 issue of Clinical Chemistry re-examines this concept and asks whether newer software can provide such data in near real-time, or as the paper calls it, patient-based real-time quality control. The paper comes from the International Federation of Clinical Chemistry and Laboratory Medicine Committee on Analytical Quality, and the lead author is Dr. Tony Badrick. He is CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs based in Sydney, Australia.

Dr. Badrick is our guest in this podcast. So doctor, first off, just tell us what you mean when you talk about patient-based real-time quality control?

Tony Badrick:
What it is, is using patient results, often a mean or a median, but it can be other parameters, that are derived from the patient population that you’re analyzing, to come up with a QC process. So basically, it’s using your patients, or some aspect of the patients that you’re analyzing, to produce a QC process.
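To illustrate the idea Dr. Badrick describes, a moving mean and median of patient results can be computed over a sliding window in a few lines. This is a minimal sketch; the window size and the sodium-like example values are hypothetical, not taken from the paper:

```python
from collections import deque
from statistics import mean, median

def moving_stats(results, window=20):
    """Moving mean and median of patient results over a sliding
    window of the most recent `window` values. Each tuple in the
    output is (mean, median) for one full window."""
    buf = deque(maxlen=window)  # oldest result drops off automatically
    out = []
    for r in results:
        buf.append(r)
        if len(buf) == window:  # only report once the window is full
            out.append((mean(buf), median(buf)))
    return out

# Hypothetical sodium results (mmol/L) from a stable patient population
results = [140, 138, 141, 139, 142, 137, 140, 139, 141, 138] * 3
stats = moving_stats(results, window=10)
print(stats[0])  # → (139.5, 139.5)
```

In a real implementation, the statistic from each window would be compared against control limits derived from the laboratory's own historical patient data.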

Bob Barrett:
So doctor, you mentioned in the article that moving averages and similar techniques have not had much traction in clinical chemistry laboratories. Why would they consider patient-based real-time quality control now?

Tony Badrick:
It’s interesting that the history of these techniques goes back to the 60s, so people have been talking about doing them for a long time. And indeed, the hematologists do use these sorts of techniques, and they’ve been using them since about the mid-70s. So they’re not new in pathology, they’re just new in clinical chemistry. Why are they being used now? There are a number of reasons.

I think we analyze a lot more patient results now because of the aggregation of laboratories; laboratories are getting bigger. Because of that, they’re getting more samples, but also a lot more people have pathology tests done on them, for screening or all sorts of other reasons. So we’re analyzing a lot more samples, and we’ve got a lot more samples to use. That’s one thing.

We find that conventional QC techniques probably are not delivering the level of control over our assays that we think they should. It’s become obvious in the last few years that people aren’t using conventional quality control as well as they should. So consequently, there’s a risk that patient results are being released while they’re out of control. So that’s another reason. There’s been a sort of lack of confidence in the old way of doing QC.

And I think the other thing that you need to make this type of QC work is you need to have good middleware or good data capability so you can in fact convert these parameters that come from your population into something that’s useful. So it’s about the number of patients that we’re analyzing, it’s about the loss of confidence in conventional QC techniques and it’s about the ability to do something with the data.

Bob Barrett:
You mention that labs are actually using it now. Do you have any idea how widespread it is, how many are using it? Do you use it in your own lab?

Tony Badrick:
I’m not sure how widespread it is, but I do know that in the group that published this paper, there are a number of people who are using it routinely in their labs, and they have been for four to five years. One of the members is Alex Katayev, who’s from LabCorp, well known I suspect to lots of people; they’ve been using it for a number of years. So it’s used in large community-based laboratories like LabCorp in the States. Other members of the group are from Singapore, from a large tertiary care hospital in the States, and one from the Netherlands. And again, they’ve been using these techniques in a hospital situation, and that’s important, because it’s a different population to a community-based laboratory practice, and they’ve been using it for a number of years.

So these techniques are certainly being used in different situations, in community laboratories and in tertiary care hospitals, and Ben Roston(ph), who’s on the group from the Netherlands, has used it in both a tertiary-based hospital and now in a cancer center. So it can be used in lots of different situations. I don’t know how widespread its use is, but it is being used in lots of facilities. And as I said, hematologists have been using this for a long time.

Bob Barrett:
Dr. Badrick, you touched on this a little earlier, but what are the advantages and any disadvantages of patient-based real-time quality control over conventional QC procedures?

Tony Badrick:
The advantages are that sometimes conventional QC controls might not be available, or might be impractical, for some assays. It’s commutable. One of the problems with conventional QC is that the material isn’t the same as a patient sample, and it may react differently in your analyzer to the way a patient sample reacts. Here you’re using patient samples, so there are no issues about commutability. And because you’re not using conventional QC, there’s virtually no cost. Once you’ve set up the informatics to do this calculation, you don’t have to pay for expensive QC material. So there’s a cost benefit.

The other thing is that you may well be able to detect errors using this technique that you cannot detect with conventional QC. All conventional QC is retrospective. You analyze your samples, you analyze a QC sample, and then on the basis of that QC sample, you determine whether or not the samples you’ve analyzed up to that point are in control. So conventional QC is a retrospective process.

With patient-based real-time QC, it’s much closer to real time. Each patient sample adds to the QC information, so you can detect out-of-control situations much earlier than if there’s a QC sample every hundred patients or so. That’s another benefit. It’s also been used for a long time, so it’s not new. And the other sorts of errors that you may be able to detect are things like reagent lot-to-lot variation and some pre-analytical factors. If there’s been a problem with the transport of a group of samples, or with the collection of a group of samples, that will also be reflected in the QC process. With conventional QC, you’re only looking at the analytical process; patient-based real-time QC allows you to look at some aspects of the pre-analytical phase as well. So the advantages are that the material is commutable, you may be able to detect errors that you can’t detect with conventional QC, there’s much lower cost, and it’s real time, so you’re not waiting for the next QC sample to detect that there’s been a problem.
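The real-time character described here, where each patient result updates the QC statistic immediately, can be sketched with an exponentially weighted moving average (EWMA), one of the algorithm families used for patient-based QC. The target, control limit, weighting factor, and glucose-like values below are all hypothetical placeholders for parameters a lab would derive from its own population:

```python
def ewma_monitor(results, target, limit, lam=0.1):
    """Exponentially weighted moving average of patient results.
    Each new result updates the statistic, so a shift can be
    flagged without waiting for the next conventional QC sample.
    Returns the 1-based index of the result that triggered the
    flag, or None if the run stays in control."""
    ewma = target  # start the statistic at the population target
    for i, r in enumerate(results, start=1):
        ewma = lam * r + (1 - lam) * ewma  # blend new result into the average
        if abs(ewma - target) > limit:
            return i
    return None

# Hypothetical glucose-like values around 5.0, then a +1.0 shift
in_control = [5.0, 4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 5.1, 4.9, 5.0] * 2
shifted = [v + 1.0 for v in in_control]
flag_at = ewma_monitor(in_control + shifted, target=5.0, limit=0.4)
print(flag_at)  # → 26: flagged six results after the shift begins at result 21
```

A smaller weighting factor `lam` makes the statistic smoother but slower to respond; tuning that trade-off is part of the setup work Dr. Badrick describes later.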

The issue too is that in many labs, you only run a QC sample every hundred or so patient samples. So because of the retrospective nature of conventional QC, you may well have released patient samples that were actually out of control, and you will have to repeat those samples and retract those results. Those are all the advantages. The disadvantages are really that you have to set up this process. It has to be based on your patient population, so you have to have a good understanding of your assay and your patient samples, and there’s a cost in developing that understanding. It’s not as simple as conventional QC. You have to spend some time and effort setting up the parameters for patient-based real-time QC and retraining your staff on how to use it. Those are the downsides.

Bob Barrett:
So how would a laboratory implement patient-based real-time quality control? Are there any differences in implementing it for certain measurands or specific instrumentation?

Tony Badrick:
It’s not instrument-based. You certainly need to understand your measurands, so you really need to understand the biological and analytical characteristics of the measurand of interest. You need to know the reference intervals, the pathological values, and the biological and analytical variation of the measurand you’re going to use it for. You need to understand your patient population. Is it a pediatric population, for which this process is difficult to use because the patient averages for different age groups change as children grow from neonates to teenagers? Are there mainly community-based patients in your population, so they’re basically normal, or do you have patients in a hospital situation, which will impact some of the results? What patients do you have in your population?

Do you have any sort of periodicity in the way those patients might present? Sometimes inpatients are bled and analyzed in the morning and outpatients in the afternoon. On some days of the week, you may have diabetic clinics or renal clinics, and that will have an impact on these moving averages. So as well as understanding the measurand, you need to understand your patient population, and you need to understand what your laboratory information system can do.
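One common way, in implementations of this kind of QC, to limit the influence of abnormal inpatient or clinic results on the moving statistic is to apply truncation limits before averaging. This is a minimal sketch of that idea, not something prescribed by the paper, and the potassium-like limits and values are hypothetical:

```python
def truncated_mean(results, lower, upper):
    """Mean of patient results after excluding values outside the
    truncation limits, so a few pathological samples don't dominate
    the moving average. Limits would be derived from the lab's own
    population; these are illustrative only."""
    kept = [r for r in results if lower <= r <= upper]
    return sum(kept) / len(kept) if kept else None

# A grossly elevated result (e.g., a hemolyzed sample) is excluded
avg = truncated_mean([4.0, 4.2, 3.9, 9.5, 4.1], lower=3.0, upper=6.0)
print(avg)  # → 4.05
```

Choosing the truncation limits is itself population-dependent: set them too tight and genuine shifts are hidden, too loose and pathological results swamp the signal.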

This is different to what most laboratory information systems do. The system needs to be capable of taking this data in real time, doing some analysis on it, and producing a QC-like process. So you need an informatics system that will allow you to do that.

Initially, as I said earlier, you also need to be able to look at your patient population and your measurand, run some simulations using that data against the different types of algorithms that can calculate patient-based real-time QC, and see which is the optimal algorithm to use for this measurand on your population. Then validate by simulation, by introducing errors into that patient population and seeing how effectively the algorithm detects them. And then validate that against what you’re doing with conventional QC.
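The validation-by-simulation step can be sketched as follows: inject a constant bias into simulated patient results and count how many results the moving statistic needs before it flags the error. Every parameter here (target, standard deviation, bias, window, limit, number of runs) is a hypothetical placeholder for values a lab would derive from its own data:

```python
import random

def patients_to_detection(n_runs=200, window=20, bias=1.0,
                          target=100.0, sd=5.0, limit=2.0, seed=1):
    """Rough simulation of error-detection performance: add a constant
    bias to simulated patient results and count how many results the
    moving mean needs before it exceeds the control limit. Returns the
    average count across the simulated runs."""
    random.seed(seed)
    counts = []
    for _ in range(n_runs):
        buf = []
        n = 0
        while True:
            n += 1
            buf.append(random.gauss(target, sd) + bias)  # biased result
            if len(buf) > window:
                buf.pop(0)  # keep only the most recent `window` results
            if len(buf) == window and abs(sum(buf) / window - target) > limit:
                counts.append(n)  # error detected at the n-th result
                break
    return sum(counts) / len(counts)
```

Comparing this average detection delay across candidate algorithms and bias sizes, and against the equivalent figure for conventional QC, is the kind of comparison Dr. Badrick describes; a larger bias should be caught after fewer patient results.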

So that’s the burden of introducing this, but once it’s been done, it’s done. You do need to understand those things: your population, your analytes, and your IT system.

Bob Barrett:
What’s the availability of current software to implement this type of quality control? I mean, can you just get that off the shelf?

Tony Badrick:
Many analyzers currently provided by vendors do, in fact, have the software there. We don’t believe that in many cases it’s sufficient; you need more than what’s being provided, and that’s the issue with it. You do need to have simulation software so you can look at your population, introduce errors into your population parameters, and see how well you can detect those errors with the algorithms we’re talking about. That’s the limitation at this stage.

But the focus of this group, the IFCC group, really now is publicizing the benefits of this. Our next step will be to engage with vendors, both of instrumentation and of middleware, to identify the gaps between what they currently provide and what we believe needs to be in the software to make patient-based real-time QC work, and to assist them with moving towards more suitable software for implementation.

Bob Barrett:
How does one integrate patient-based real-time quality control along with conventional quality control, which is often required to meet certain regulatory obligations? What if the results from the two approaches point to different conclusions?

Tony Badrick:
I think that’s a good question, and a lot of people ask it. With patient-based real-time QC, there are a number of ways you can use it. You can use it as a system of early warning, and when it flags, when the patient-based real-time QC process tells you there’s an issue, you can then resort to conventional QC to either confirm that there’s a problem or to troubleshoot the issue.

So that’s how you use them hand in hand. And it may be that when you start your assays first thing in the morning, to ensure that everything is in control, you may well need to run a conventional QC sample at that point in time. So you certainly still need to use conventional QC.

In terms of regulation, this is quality control, and providing you validate that the QC system picks up the errors, that should meet most regulatory requirements. And as I say, in terms of the U.S. market, both LabCorp and another major hospital that I’m aware of in the U.S. are using this routinely. So they’ve been able to convince the regulator that what they’re doing meets the regulations.

But as I said, the benefit of using patient-based real-time QC is that you don’t need to use as much conventional QC, and it’s been reported in a community-based laboratory situation that you may reduce the cost of your conventional QC by up to 80%. That’s the benefit, and they’ve still met the regulatory requirements of the U.S. market.

Bob Barrett:
Well finally doctor, what are the other implications of using patient-based real-time quality control?

Tony Badrick:
I think as I’ve come along this journey of using patient-based real-time QC, a couple of things have become apparent to me. There are some risks with the technique: if people apply it and don’t fully validate its sensitivity or its appropriate use, there are dangers. The risks are that people might not understand the basis of the calculation, because it’s different to what they’re used to. And it may well be that they don’t fully utilize the power of the technique, that they make their limits too wide, don’t use the full sensitivity of patient-based real-time QC, and so miss an error.

So the first issue is that there are problems with the interpretation of this new technique. One of the reasons why you would use patient-based real-time QC is that conventional QC doesn’t seem to be as effective as it should be, and I honestly believe that part of the problem is that humans have trouble looking at QC graphs over a long period of time. It may well be that the best way to implement patient-based real-time QC is in fact a completely automated, machine learning-driven process. So it’s AI that’s controlling the instrument.

The other benefit of patient-based real-time QC is that more power comes if you look at the same analyte being measured on the same analyzer, using the same reagents and the same calibrators, across a lot of sites. There’s a benefit in looking at QC from different sites centrally, because you’re using something that’s really inherent in the population, and you can pick up changes that may become apparent because of a change in reagent lot or calibration. So there’s a lot more power that can be gained from using this type of QC. I think those are the major implications I can think of.

Bob Barrett:
That was Dr. Tony Badrick, the CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs based in Sydney, Australia. He has been our guest in this podcast on patient-based real-time quality control. That paper appears in the August 2019 issue of Clinical Chemistry. I am Bob Barrett. Thanks for listening.