Listen to the Clinical Chemistry Podcast



Article

Ronald Jackups, Jr. The Promise—and Pitfalls—of Computerized Provider Alerts for Laboratory Test Ordering. Clin Chem 2016;62:791-792.

Guest

Dr. Jackups is Assistant Professor of Pathology and Pediatrics at Washington University in St. Louis, specializing in clinical informatics and transfusion medicine.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Utilization management of laboratory tests is an important facet of providing high-quality care. The widespread implementation of electronic medical records and their clinical decision support functionality has the potential to facilitate evidence-based ordering of laboratory tests. However, a number of challenges must be addressed to ensure successful implementation of clinical decision support tools.

Dr. Ann Moyer and colleagues published an article in the June 2016 issue of Clinical Chemistry describing a successful electronic order alert protocol. The intervention aimed to reduce unnecessary repeat tests on intensive care unit patients at their institution. An editorial by Dr. Ron Jackups accompanied that article, and he joins us for this podcast. Dr. Jackups is Assistant Professor of Pathology and Pediatrics at Washington University in St. Louis, specializing in clinical informatics and transfusion medicine. He will serve next year as the Program Director for the Clinical Informatics Fellowship at Washington University.

Doctor, could you define clinical decision support, or CDS, and describe how it’s been used in the laboratory setting?

Ron Jackups:
Yeah, sure. So clinical decision support, or CDS, is a fairly new term, but the actual techniques have been used for quite a while, at least ever since we got off of paper ordering and moved on to electronic ordering of tests in large academic systems.

I would define CDS as the use of computer technology to help clinicians make better evidence-based medical decisions. It has a lot of different pathways, and the most common pathway that people think about nowadays is the use of alerts in the computer ordering system. Of those, the most common is the interruptive alert, sometimes called a pop-up: while someone is trying to order a test, a window pops up and suggests that maybe this is not the right test or maybe there's a better way to do this, and the physician then decides either to continue with the test or to make a different decision.
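
As a minimal sketch of that flow (the rule content and the pop-up callback here are hypothetical, not any vendor's API):

```python
# Minimal sketch of an interruptive ("pop-up") alert at order entry.
# The rule content and the show_popup callback are hypothetical, not
# any vendor's API.
ALERT_RULES = {
    # test code -> message shown in the pop-up (invented example rule)
    "ESR": "CRP is already pending; ESR may add little. Order anyway?",
}

def place_order(test_code: str, show_popup) -> bool:
    """Return True if the order proceeds; show_popup(msg) asks the clinician."""
    message = ALERT_RULES.get(test_code)
    if message is None:
        return True             # no rule matched: order goes through silently
    # Interruptive alert: the workflow stops until the clinician responds,
    # but the final decision stays with the clinician (a "soft stop").
    return show_popup(message)  # True = continue with order, False = cancel

# Example: a clinician who accepts the suggestion cancels the order.
accepted = place_order("ESR", show_popup=lambda msg: False)
print(accepted)  # False
```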

However, this is actually a limited part of CDS for laboratories. Alerts themselves don't have to be interruptive; they can be passive, for example, just a helpful informational box next to the test order so that the clinician can look quickly at the current thinking or current evidence on the use of that test. But more broadly, there are things we have done for years that really do fall under CDS. Another example is order sets: instead of expecting a perhaps inexperienced clinician, like a resident or nurse practitioner, to always know the right test to order, we give them a menu of order possibilities to select from, and it's a menu that we can design to be smart, or at least evidence-based.

Another example along the same line is the use of algorithms or reflex test protocols, which, again, instead of expecting the ordering provider to select the right test, have them order something more general, such as an anemia workup or a coagulopathy workup. All they need to do is click a button, and the lab will take the clinical information and select the right tests in a sequence to find the etiology of the patient's problem.
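
As a minimal sketch of what such a reflex protocol might look like in code (the test names and thresholds here are illustrative, not an actual institutional algorithm):

```python
# Minimal sketch of a reflex-testing algorithm behind a hypothetical
# "anemia workup" order. Test names and thresholds are illustrative only.
def anemia_workup(results: dict) -> list[str]:
    """Given results so far, return the next tests the lab should add."""
    hgb = results.get("hemoglobin")          # g/dL
    if hgb is None:
        return ["CBC"]                       # step 1: confirm anemia
    if hgb >= 12.0:
        return []                            # not anemic: stop the cascade
    mcv = results.get("MCV")                 # fL, from the CBC
    if mcv is None:
        return []                            # incomplete CBC; nothing to reflex
    if mcv < 80:
        return ["ferritin", "iron", "TIBC"]  # microcytic: iron studies
    if mcv > 100:
        return ["vitamin B12", "folate"]     # macrocytic: B12/folate
    return ["reticulocyte count"]            # normocytic: assess production

# The ordering provider clicks one "anemia workup" button; a lab engine
# calls anemia_workup() after each result and adds the returned tests.
print(anemia_workup({}))                              # ['CBC']
print(anemia_workup({"hemoglobin": 9.5, "MCV": 72}))  # iron studies
```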

Those are some examples. There are quite a lot more, but those are definitely the ones that lab directors and lab staff are going to see on a fairly regular basis.

Bob Barrett:
What evidence is out there for the usefulness of CDS in lab test ordering?

Ron Jackups:
So there is a small body of evidence right now, but it's growing quite rapidly, that these interventions can reduce inappropriate ordering. Part of the problem with building these studies and proving that the interventions work is that the success rate really depends both on the type of intervention and on the outcome you measure. Most of these studies look at the reduction in test volume, or the number of tests avoided, compared to a pre-intervention period. Some interventions are what we could call slam-dunk interventions, like convincing a clinician to order the test they actually wanted instead of a similarly named test that is inappropriate; for example, 1,25-dihydroxyvitamin D is often ordered instead of the more commonly indicated 25-hydroxyvitamin D.

A CDS intervention to prevent that is actually quite successful, often achieving an 80% reduction or even more. What's more difficult is preventing clinicians from ordering unnecessary duplicate tests, because oftentimes the clinician doesn't really believe that the repeat order is unnecessary, or just doesn't have the time to look back at previous results.

For those interventions, the success rate can be as low as a single-digit percentage of tests avoided and as high as 50%. But a recent study out of the Cleveland Clinic showed something very interesting: if you provide an interruptive alert, again, a pop-up, but you let the clinician decide whether to continue ordering the test, you get a somewhat good response, somewhere in the 40% range. But if you take that decision away from the clinician and use what's called a hard stop, in which the only way to proceed and get the test they originally requested is to contact the laboratory, you can get as much as a 90% success rate. So again, the choice of intervention can really change the success rate.
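
As a minimal sketch of how a duplicate-test rule and the soft-stop/hard-stop distinction might be wired up (the lookback windows and the choice of tests here are invented for illustration):

```python
# Minimal sketch of a duplicate-test alert with "soft stop" vs "hard stop"
# behavior; lookback windows and test names are invented for illustration.
from datetime import datetime, timedelta
from typing import Optional

# Minimum sensible retest interval per test (illustrative values only).
LOOKBACK = {"HbA1c": timedelta(days=90), "TSH": timedelta(days=30)}
HARD_STOP = {"HbA1c"}  # tests where overriding requires contacting the lab

def check_duplicate(test: str, now: datetime,
                    last_resulted: Optional[datetime]) -> str:
    """Classify a new order as 'allow', 'soft_stop', or 'hard_stop'."""
    window = LOOKBACK.get(test)
    if window is None or last_resulted is None:
        return "allow"            # no rule, or no prior result on file
    if now - last_resulted >= window:
        return "allow"            # outside the window: not a duplicate
    # Inside the window: interrupt. A soft stop lets the clinician override
    # in the pop-up; a hard stop makes them contact the laboratory instead.
    return "hard_stop" if test in HARD_STOP else "soft_stop"

now = datetime(2016, 6, 1)
print(check_duplicate("TSH", now, now - timedelta(days=10)))    # soft_stop
print(check_duplicate("HbA1c", now, now - timedelta(days=30)))  # hard_stop
```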

As far as economics, the published studies tend to suggest that a great financial benefit can come from these interventions, from thousands of dollars per year to hundreds of thousands per year. That's definitely promising, but it's also important to consider that the savings may not be due solely to the intervention; they may be due to other quality improvement work done at the same time, or simply to clinicians changing, becoming more educated, or becoming more restrictive themselves in the way they order tests.

Bob Barrett:
Nothing’s perfect, so what do you see as the potential pitfalls of widespread use of CDS in lab test ordering?

Ron Jackups:
Yes. So there are definitely a lot of potential problems that people can fall into when designing CDS. The most common is what's called "alert fatigue," in which order providers have seen so many alerts that they eventually start to ignore them because they just interfere with their workflow. As an analogy, think of this as the lab crying wolf: the system keeps saying, "You probably don't really want this test," when in fact the test is clinically necessary, and so the clinician just continues to ignore the alerts. The problem is that in the future, if they do hit an alert that really should be listened to, they've been trained not to care about it anymore.

I like to use the analogy of sensitivity and specificity, the same way we use them for test accuracy. In this case, alert fatigue is due to too many false positives; in other words, the alert is not specific enough. It fires when it shouldn't, which leads people to ignore it, and then to ignore useful alerts in the future. So even though alert fatigue, caused by poor specificity, is probably the biggest problem we face in building CDS alerts, the other side can occur too: poor sensitivity, where alerts don't fire when they need to. That is in some ways not as important as overfiring, but it can really limit the usefulness of the alert.
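
To make the analogy concrete, here is a small worked example with invented counts; it also shows why alerts are so prone to fatigue: when most orders are appropriate, even a fairly specific alert produces mostly false alarms.

```python
# Worked example of the sensitivity/specificity analogy for alerts.
# All counts are invented for illustration.
inappropriate_orders = 200    # orders the alert *should* catch
appropriate_orders = 9800     # orders the alert should leave alone

true_positives = 180          # alert fired on inappropriate orders
false_positives = 980         # alert fired on appropriate orders
true_negatives = appropriate_orders - false_positives   # 8820

sensitivity = true_positives / inappropriate_orders     # 0.90
specificity = true_negatives / appropriate_orders       # 0.90

# Even at 90% specificity, false alarms (980) far outnumber true alarms
# (180) because most orders are appropriate, so only ~16% of the pop-ups
# a clinician sees are worth heeding: the mechanism behind alert fatigue.
ppv = true_positives / (true_positives + false_positives)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}")
```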

Another problem that I think is very common and not investigated enough is that the alert may target the wrong person, not just the wrong test. The order provider, the person actually placing the order, may not be the person who really made the decision to order that test. This problem comes up a lot in academic hospitals, where the resident or the nurse practitioner is expected to do most of the work, what we sometimes call "scutwork" in medical school, ordering the tests that their attending or their specialty service normally wants. These residents tend to be younger and, honestly, better informed about current evidence-based guidelines for test ordering, but even though they are the ones who have to put in the orders, they didn't make the ordering decision in the first place. So when they get the alert, they feel a pressure to ignore it even though they might agree with it. So there's definitely a problem with CDS finding the wrong target if it's not designed well.

Bob Barrett:
So, how can clinical labs develop CDS tools while avoiding those pitfalls?

Ron Jackups:
There are actually a lot of ways to do it. I think the most important step, before even starting any sort of CDS program, is to get support from the top. I mean the very top of the hospital system, most notably the chief medical officer or the chief medical information officer.

This can be hard. First of all, some hospital systems don't even have a chief medical information officer. The CMIO is not just the person who keeps the machines running; this is the person who makes sure the machines are doing something useful for patient care, an actual medical doctor who makes sure that the computer technology supports appropriate use of clinical resources.

These two individuals, the CMO and the CMIO, are the ones who can really set policy, and if they are not on board with the concept of CDS, it can cause a lot of problems down the line, because CDS is often not limited to a single service; it's often widespread across the entire hospital. If the lab just decides to change something, it can affect multiple services' workflows, and if those services were not told by a source of authority like the CMO that this is important, they will rebel. Rebellion is the worst thing that can happen to lab directors, not only because it will end the CDS intervention they were attempting, but because it can also create a toxic environment that makes it harder to get more interventions through in the future.

Along that same line, not only do you need authority from the top, but you really need to talk to the physicians and clinicians most affected by these interventions. That means building multidisciplinary teams ahead of time, before implementing anything. You will find that some academic centers and other large hospitals have lab utilization committees where these sorts of questions can be brought up; unfortunately, some hospitals simply do not have that. So until one can be built, again with authority from the CMO, we often have to do CDS piecemeal, finding clinical experts and clinical leaders who are willing to sit down, make these sorts of decisions, and then pass them on to other people in their departments.

Some other ways to make CDS successful: I already mentioned that one of the pitfalls is targeting the wrong person with the alerts. It really is important to target the decision maker, and quite often that is the attending or the clinical service that has built some sort of guideline that needs to be updated.

Other things can be done on the back end, after the CDS has been deployed, to track its progress. A mistake people make is to put the CDS intervention in motion and then never check back on whether it was successful. But there are really simple ways to make sure it's successful: build dashboards that look at test utilization before and after the intervention, and build teams with your information systems support groups to fine-tune the alerts, or the intervention as a whole, based on the results you get.
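
As a minimal sketch of such a dashboard metric (assuming order counts and patient-days can be pulled from the laboratory information system; the numbers here are invented):

```python
# Minimal sketch of a before/after utilization metric for a CDS dashboard.
# The counts are invented; real data would come from the LIS/EHR.
def tests_per_1000_patient_days(orders: int, patient_days: int) -> float:
    return 1000 * orders / patient_days

before = tests_per_1000_patient_days(orders=5200, patient_days=14000)  # pre
after = tests_per_1000_patient_days(orders=3900, patient_days=14500)   # post

reduction = (before - after) / before
print(f"before={before:.0f}, after={after:.0f} per 1000 patient-days "
      f"({reduction:.0%} reduction)")
# Normalizing by patient-days guards against census changes masquerading
# as a CDS effect; it does not rule out co-occurring interventions.
```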

Finally, something that I've done on occasion is to go back and find the people who have seen and interacted with the alert, again, the common order providers like residents and nurse practitioners, and run a focus group. Get people together and ask them, "What do you think about this alert? Do you find it useful?" You often get more useful information from that than from even the dashboards. Even though it's not quantitative data, you really get a sense of how the alert is being used down in the trenches.

Bob Barrett:
Okay, finally, doctor, let's look ahead: where do you see laboratory CDS heading in the near future?

Ron Jackups:
I think this is a very interesting question, because there are so many avenues CDS can take, and so many areas of test ordering it just hasn't been taken into. Right now, a lot of what's being done is one-at-a-time interventions based on this guideline or that guideline, so one very specific area of one subspecialty of lab testing. Those are good, but looking more broadly and more collaboratively is the direction CDS needs to go in order to be not only successful on a wide scale, but also more accepted by the clinical community.

The most obvious next step is that all of these hospitals and medical systems that have designed these interventions know what has been successful for them, but they do not necessarily know what's been used elsewhere. So building a network where medical centers can share the interventions they've found successful would really help spread the word. This could also be done within the large electronic health record corporations; I won't name any, but there are a few that really dominate the market and have many systems under their management.

Those corporations have already built menus of CDS tools that they share with each of their client systems, and I think that's very good and needs to continue, even though not all of these CDS interventions are one-size-fits-all.

Other areas need to grow too, such as the algorithms I mentioned at the very beginning. They are very useful, but I don't think they have been leveraged enough. Algorithms and reflex testing are really a way for lab directors to, in a sense, take back the decision-making processes that are often left to inexperienced clinicians. And that can be done in a way where, instead of sounding like dictators, we really are part of the clinical team, helping clinicians who may not know the most recent best-practice evidence. I think we can really help them make the best decisions for the patient, and when they see that, they become more accepting of the CDS itself.

We talked a little bit about research and the evidence supporting these guidelines. As I've said, that's pretty spotty right now; it needs to be built up and really solidified. The research design of these CDS interventions hasn't matured. It's hard to decide how to study an intervention. Should it be time-controlled, in other words, looking at data before and after the intervention and hoping that the improvements are due to the intervention and not to something else happening concurrently? I think that's helpful, but it has weaknesses. Randomized controlled trials would certainly be useful, but they would be extremely hard to implement. So I think we really need to sit down and look at how we design these projects, and even how we define what a successful outcome is.

Lastly, and more broadly, I think that what we're doing right now, just putting in an intervention, seeing if it works, and then reporting on its success, needs to scale into a broader question: how do people actually interact with these CDS tools? That brings us into an area often called human factors engineering, under which we have the subfields of human-computer interaction and implementation science. So really, looking first at the psychology of the users: why do they do what they do when they encounter a CDS tool?

Then, on the other side, how do we make machines adapt to the cognitive and psychological practices of the people who use these alerts? I think there are a lot of people who can do that sort of research, but they're quite spread out across academic centers. They really need to be brought together to collaborate on what I think could be some very interesting and useful research.

Bob Barrett:
Dr. Ron Jackups is Assistant Professor of Pathology and Pediatrics at Washington University in St. Louis, specializing in clinical informatics and transfusion medicine. He's been our guest in this podcast from Clinical Chemistry. I'm Bob Barrett. Thanks for listening!