Listen to the Clinical Chemistry Podcast
O.J. Driskell, D. Holland, F.W. Hanna, P.W. Jones, R.J. Pemberton, M. Tran, A.A. Fryer. Inappropriate Requesting of Glycated Hemoglobin (Hb A1c) Is Widespread: Assessment of Prevalence, Impact of National Guidance, and Practice-to-Practice Variability. Clin Chem 2012; 58: 906-915.
Tony Fryer is a Professor of Clinical Biochemistry at the University of Keele and Consultant Clinical Biochemist at the University Hospital of North Staffordshire, England.
This is a podcast from Clinical Chemistry. I am Bob Barrett.
Healthcare budgets are facing increasing pressure to reduce costs yet maintain quality. In the laboratory, one focus is on inappropriate test requests: reports estimate that 25% or more of medical laboratory tests may be unnecessary.
Joining us in this podcast from Clinical Chemistry is Tony Fryer, Professor of Clinical Biochemistry at the University of Keele and Consultant Clinical Biochemist at the University Hospital of North Staffordshire, England. He also sits on the UK National Pathology Benchmarking Service panel, whose aims include managing demand for laboratory services. In the May 2012 issue of Clinical Chemistry, Fryer and his colleagues published a report examining hemoglobin A1c requests in laboratories and noting how many of these tests were inappropriately ordered.
Professor Fryer, what are the key drivers for reducing inappropriate test requesting?
Okay, well, certainly in the current economic climate, everybody is looking at ways in which we can reduce costs, both in the clinical laboratory setting and elsewhere across the healthcare system, very much in the context of trying to maintain or even improve quality despite the fact that many laboratories are experiencing increases in their workload. And that really was the focus of some work in the UK looking at how we might rationalize pathology services across England and Wales. That was back in 2008 in the Carter Report, which was commissioned by the UK Department of Health, and one of the things it identified was that inappropriate testing is a potential target for reducing waste in the healthcare system.
Outside of that, I think as laboratory medicine professionals we have a duty of care to deliver the best quality of care to our patients. So there's a driver for this that's not just about money; it's about improving quality and about making sure patients get the best out of the service. So that's the driver, as I see it, for reducing inappropriate testing in the clinical laboratory setting.
Well, let's back up just a second: how would you define an inappropriate request, and how common is this -- how widespread is it?
Okay, that's an interesting question. It does very much depend on what you define as an inappropriate test. I think of it in terms of what I call the four Rs: the right test in the right patient at the right time using the right methodology. Obviously, when we're talking about inappropriate testing, we're talking about doing the wrong test in the wrong patient at the wrong time using the wrong process.
Anyway, lots of studies have looked at this and suggested that around a quarter of the tests that come into the laboratory are inappropriate, and that's a pretty big sum. That's why I think it's getting attention from the people who are paying for healthcare here in the UK, and I suspect that's true right across the world.
In our study, we've used a very specific definition, but I just want to give you some examples of what I mean by an inappropriate test. When we talk about the wrong test, that might be doing a test that is not right for the job. The example I use for that is doing a prolactin test on a request we get from a doctor for somebody who may be having menopausal symptoms. Under those circumstances, we would normally do an FSH, so prolactin is the wrong test in that sort of context.
The other one we might look at is testing the wrong patient. A classical example that we see occasionally in that context is requesting prostate-specific antigen, the prostate cancer marker, in women. That's obviously not the right context, and you would think a doctor would recognize that. But what we've used in our study is the wrong time: doing the right test in the right patient, but not at the right time.
There are lots and lots of examples we could use for that. There are things like doing a testosterone in the afternoon: it shows diurnal variation, and ideally we'd want to do it in the morning in order to reduce that variation. Or we might be looking at a therapeutic drug monitoring test; if we take something like digoxin, the ideal time would be to take the sample six hours after the previous dose. If you take it at two hours, it's going to give you a falsely elevated result.
What we've done in our study is look at hemoglobin A1c in the diabetes context, so a test that's done a week after the previous test is too soon, and if it's two years after the previous test, then it's too late. And then of course there's the other aspect, the fourth R if you like: doing it the right way.
One of the ones that we see very, very frequently is potassium measurements on a sample that's been taken into a bottle containing the anticoagulant potassium EDTA. By definition, if you've got potassium in the tube in the first place, it's going to give you a falsely elevated result, and there are lots of other examples we could use there.
So how common inappropriate testing is depends mostly on how we define an inappropriate test.
How do we know that the problem actually exists?
Well, what we're doing here at the University Hospital of North Staffordshire is developing a strategy to identify inappropriate requesting, and you can do that by several different methods. Largely it comes down to extracting information from the laboratory computer systems and things like that.
So for example, if the workload for a particular test has suddenly gone up, or gone down for that matter, it might suggest there has been some change somewhere in requesting patterns that has prompted it. That might be something that would make you think, well, is that a real increase or decrease? Is there something going on that we don't know about? So the ability to review changes in workload patterns gives you some idea of where you might pick a target for investigation.
The other one that we've used quite a bit here is the variability in requests between individual requestors. That's easier in general practice, where they are all, generally speaking, looking at the same sorts of patients with the same sorts of diseases and doing the same sorts of tests. So if we see variability between two general practitioners in the proportion of tests for which they ask for glycated hemoglobin, or hemoglobin A1c, that might suggest that one is requesting too few of them or the other is requesting too many, and so on.
The other thing to think about is how the level of requesting in our laboratory compares with other laboratories elsewhere in the country, or indeed worldwide. You might argue that if we are doing an awful lot of tests compared with our neighbor, then the practices going on here might be different, not necessarily better or worse, but different from those of our neighbors, and that again gives you a clue as to where to start looking.
The other thing that we would recommend is having a look at whether the test that's being asked for is actually worth doing anymore. We regularly review the repertoire of tests that we offer to see whether there are better replacements for a test, or even whether the test is doing the job it should be doing. So we would recommend reviewing the repertoire on a regular basis, getting rid of tests that no longer add value, and replacing them with better ones.
So those are the sort of classical ways that you can systematically evaluate possible targets for inappropriate requesting. Clinical audit is another one that's often used, where you identify a group of requests from a practitioner to see whether what they have asked for fits with the clinical scenario they're trying to investigate. That's all very well, but it's a little more labor-intensive and requires a little more careful thought; you can't really automate it as a process. So it has its value, but in a different context.
One of the things that I find from working in a clinical laboratory is that just by seeing lots of requests coming in, you get an instinct for what's appropriate and what isn't, and what sort of targets there might be. But my main point is that every single clinical laboratory should be developing a strategy to review the volumes of, and changes in, their work patterns on a regular basis, so they can identify possible causes of inappropriate requesting.
Well, we've been talking about over-testing and the opportunities it presents to reduce waste, but could this go too far? What would be the implications of under-testing?
That's a really interesting one. There has been a lot of emphasis, from the drivers we talked about earlier on, on reducing the tests that are done that we don't need to do. But a really important, and potentially more important, aspect is those tests that we should be doing that we're not doing.
And if we take hemoglobin A1c as an example, if we miss a test on a particular patient, or several tests more particularly, that might lead to poor access to medical review for that individual patient. It may mean that the doctor who is looking after the patient doesn't know that their diabetic control is slipping. It may then have a knock-on effect in terms of clinical outcome: the patient may develop complications associated with their diabetes. That might lead to more admissions to hospital, referrals to hospital, and indeed, as far as the patient is concerned, a poorer quality of life.
So this is a really very important area, identifying how many tests are being missed. In terms of the overall cost to the healthcare system, this under-testing aspect may be a more important area than identifying the tests that have been done but didn't need to be done.
We're interested to know why you used the diabetes marker, hemoglobin A1c, as your model.
I suppose there are a few reasons for that; two main ones, really. Firstly, we wanted to pick a marker where guidance on how often you should test was readily available in national guidance, so that we could identify exactly what a too-soon or too-late test is. Secondly, the advantage of glycated hemoglobin (HbA1c) is that in the UK, certainly up until perhaps 2011, it was not used at all except in monitoring the condition of patients with diabetes.
We have recently started to use it here as a diagnostic tool, and I think that will affect the results of this sort of analysis moving forward. So what we ended up with, really, is picking this one as a model simply because it's quite clean: we have clear guidance, and it's only focused on diabetes patients. So it's a nice, tight model to examine.
Having said that, it suggests that in other areas, where there's not so much guidance and where there is a cluster of uses for a particular test, over-testing and under-testing might be even more common.
How did you define inappropriate requesting in your study?

Okay, so earlier on we talked about the right test, the right patient, the right time, and the right process. What we wanted to do here was focus on something which, again, was very clean, and so what we used was the recommended frequency of requesting, the interval between tests, for hemoglobin A1c.
So what the guidance gives, from the National Institute for Health and Clinical Excellence in the UK, and replicated elsewhere in American Diabetes Association guidance and so on, is the recommended interval between one test and the next, what we call the minimum retest interval. For example, take an individual who is well controlled, whose diabetes is meeting its treatment targets.
What the guidance suggests is that those individuals should be tested every 6-12 months. So the definition we used for a test too soon was a test done before six months, and a test too late was a test done after 12 months. We wanted a very, very definite definition of what we counted as an inappropriate request in that context.
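The retest-interval rule described above can be sketched in a few lines of Python (the names and the day-count approximations of the 6- and 12-month cutoffs are illustrative, not from the study):

```python
from datetime import timedelta

# Approximate guidance window for a well-controlled patient (6-12 months).
# The day counts and names are illustrative assumptions.
MIN_RETEST = timedelta(days=182)   # ~6 months
MAX_RETEST = timedelta(days=365)   # ~12 months

def classify_repeat_request(interval: timedelta) -> str:
    """Label the gap between two successive HbA1c requests."""
    if interval < MIN_RETEST:
        return "too soon"
    if interval > MAX_RETEST:
        return "too late"
    return "within guidance"
```

On this rule, the examples from the interview come out as expected: a one-week gap is "too soon" and a two-year gap is "too late".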
Well, let's get some answers. In your study, what was the prevalence of over- and under-requesting?
Okay, right. This is an important one, because it's the crux of the study we conducted. What we did first was to remove single diagnostic tests, so we only looked at repeat requests. For any individual who had a single request, by definition you can't determine whether that test was too late or too soon in the first place, so we could only assume it was an appropriate test. But chances are there's an increased likelihood that it was a test for a diagnostic purpose rather than a monitoring purpose. So we excluded those, and that also helped us to compare our data with data from a group in Calgary in Canada. When we did that and looked at the proportion of repeat requests that were too soon, we identified that 21.3% of those tests were requested too soon according to guidance. So one in five tests were outside guidance, too soon, and so a waste of resources.
However, when we looked at the tests that were too late, the tests that were missed if you like, what we identified there was 29.9%, so almost a third of tests were requested too late. Potentially, those patients are having somewhat poorer diabetic control, which may have significant consequences down the line for them.
In your analysis, you mentioned that the prevalence of under-testing is likely to be an underestimate of the number of missed tests. Now, why is that?
Okay, what we did in our analysis is look at whether a particular test was too late. Taking our earlier example of a well-controlled diabetic patient, whose retest interval should be between 6 and 12 months, if the test we identified came in at 13 months, then it would be flagged as one test too late.
However, it's also possible that the test came in three years after the previous one, in which case we've not just missed one test, we've missed at least three tests. So our analysis, as we first did it, underestimated the number of tests that should have been done but weren't: all we did was identify whether each particular test was too soon or too late. So that figure of around 30%, 29.9%, is an underestimate of the actual number of tests that were missed.
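The underestimate can be illustrated with a rough lower-bound count of how many guideline-interval tests fit inside an observed gap (the function name and the whole-month simplification are illustrative assumptions, not the paper's method):

```python
def missed_tests_at_least(gap_months: int, max_interval_months: int = 12) -> int:
    """Minimum number of guideline tests that should have occurred within a
    gap between two observed requests, assuming a maximum recommended
    retest interval (12 months for a well-controlled patient)."""
    return max(gap_months // max_interval_months, 0)
```

On this count, a 13-month gap represents one late test, while a three-year gap hides at least three, even though an analysis that only flags each observed test as "too late" would count both gaps once.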
What are the relative cost implications of this for the laboratory?
Okay, so that means that if we were to do all of the tests that needed to be done in order to bring everyone within guidance, excluding all those tests that we did but didn't need to do, then we would need to do around 34% more testing; in our case, that was around 11,000 extra tests per year. So that's an increased cost implication for the laboratory.
However, if you think about that in the long term, and about the improved outcomes that those patients may benefit from, then not only do you do fewer tests, because more patients become well controlled and then only need testing less frequently, but from the patients' perspective there are very significant benefits. So cost is not just about financial savings in this context.
So we might have an individual who is less anxious because they don't have to have a test that they don't need. There's less inconvenience for them in having to go to the doctor for a test that's not necessary; they don't have to have a phlebotomy appointment or a needle stuck in their arm so frequently. Indeed, there are some very practical issues for the patient: they don't need to take time off work to go to the doctor or the nurse to have their blood test.
They don't have to find a parking space for their car, and all those sorts of aspects. And of course, there are other healthcare costs that have nothing to do with the laboratory per se: the cost of having the blood sample taken, and the consultation with the doctor or the nurse in the first place.
So there is a wide-ranging impact on a whole range of healthcare aspects. The implications for the laboratory may be significant, but the implications for the wider healthcare economy are much more so.
Dr. Fryer, you collected data on hemoglobin A1c requests from January 2001 through March 2011, but your data on prevalence are based on requests in 2010 only. Now, why was that?
This relates to what we called the "lead-in period." What we wanted in our study was to get as accurate an estimate of prevalence as possible, for both under-testing and over-testing.
Now, if you think about the over-testing side of things, if you are talking about a cutoff of 6-12 months in order to be within guidance, then you only need six months' worth of data to get a reasonably accurate estimate of over-testing. The same is not true for under-testing, and that's why we needed a longer lead-in period to estimate that. A test too late may be one month late, or, as in the example we talked about earlier, 13 months or even three years after the previous one.
So the longer you follow the patients up, the greater the chance they will have had to attend for a particular test, and your estimate gets more and more accurate as your lead-in period increases.
What we found when we looked at this on a year-by-year basis is that a two-year lead-in period is the minimum that gives you a reasonable estimate, but when we followed it for three years, four years, five years and so on, we found that it plateaued out at nine years. So we were reasonably confident that by taking nine years' worth of data as a lead-in period, we were getting as good an estimate for 2010 as possible.
Well, how and why does the length of this lead-in period affect the prevalence estimates?
What we found was that the longer the lead-in period, the more accurate the estimate of the prevalence became. So when we looked at the prevalence of tests requested too late using only the first year, 2001, we found that only around 4.5% of tests were requested too late.
Now, by definition, that's going to be an underestimate, because if you are collecting data for one year, and your guidance says up to one year is okay, it's impossible to detect a test too late in those individuals who are well controlled. So all you end up with is an estimate based on those with poorer control.
When you look over two years, it goes up to just over 15%, and all this information is available in the supplemental data for the paper. So that gives you a better estimate, and the longer you study your population, those individuals who are being tested too late by two, three, or four years then become part of your prevalence estimate, and so you get a much more accurate figure.
What we found is that within a few years you are getting around 18%, and that's a reasonably consistent value, but as you keep increasing your lead-in period, you reach a plateau at around nine years. So by nine years, we were confident that we were getting a good estimate, particularly of those tests requested too late.
What other factors might cause an over or underestimate of prevalence using these data?
Okay. One of the things that I mentioned earlier on is the use of hemoglobin A1c as a diagnostic tool; if diagnostic tests creep into our prevalence estimates, that's going to give a slightly false value, and that's one of the reasons why we excluded single tests, so that we could remove the vast majority of those circumstances where the test is being used for diagnosis.
That's also the reason why we only collected data up until March 2011, because in the UK, guidance for the use of HbA1c as a diagnostic tool was not available before then.
One other area which might give us a false estimate, particularly of those tests that are too late, is situations where a patient has moved out of the area for a period of time and then come back. They may have had the test done elsewhere, within guidance, and yet from our laboratory data it looks as though they've gone missing for a year or two.
A classical example of that is students who are studying away from home. They may be away for two, three, or four years, and that might give us, as the laboratory here, the impression that they are not being tested at all, whereas in actual fact they may be being tested elsewhere.
The other thing that we've had to bear in mind is that we are using quite fixed limits for the recommended retest intervals based on the guidance. So for example, the minimum retest interval for an individual who is well controlled and hitting their management targets might be every six months.
If an individual appears at the doctor's and has that test a week or two earlier than that, then strictly speaking that's outside guidance, but it may be more convenient from the patient's perspective. So what we've done to try to overcome that is to look at the data allowing a bit of a grace period. We reanalyzed the data giving two weeks either side of the guidance intervals to see whether that made a difference.
Although, as you'd expect, it reduced the prevalence, it didn't do so by much. The prevalence of under-testing went down from around 30% to 27%, and over-testing from 21.3% to 18.2%, so not a huge impact.
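That two-week grace period amounts to a simple widening of the guidance window, which can be sketched as follows (the day counts and the function name are my own approximations, not the paper's code):

```python
def within_guidance(interval_days: int, lo: int = 182, hi: int = 365,
                    grace_days: int = 14) -> bool:
    """True if the gap between two requests falls inside the guidance
    window widened by a grace period on either side (two weeks, as in
    the reanalysis described above)."""
    return lo - grace_days <= interval_days <= hi + grace_days
```

For example, a request 175 days after the previous one is strictly outside the 6-12 month window, but falls within guidance once the two-week grace period is allowed.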
Another area we might think about in this context is individual patients who have multiple tests for whatever reason. A typical example would be during pregnancy, where more frequent testing is perhaps more clinically useful.
So we wanted to make sure that our figures for the prevalence of over-testing and under-testing weren't being biased in some fashion by individual patients. That's why we used a statistical technique called multilevel modeling, which I'm afraid I don't fully understand, but we have a professor of statistics on our panel who knows about these things. What he showed, using multilevel modeling, was that there wasn't a particular group of patients skewing the figures by having lots and lots of tests themselves while everybody else was conforming to guidance.
But in order to make sure that all these factors weren't making a major impact on our results, what we actually did was go through a random subset of the results and look at them one by one. It would be a long process to do them all, but we did some checks, and I think what they suggested is that our estimates are not going to be far off, maybe by 1% or 2%, but not by very much.
Okay. Well, Doctor, you found that the prevalence of tests requested too soon, over-requesting, was higher in hospital patients, while under-requesting was more common for requests from general practitioners. Why might that be?
That's an interesting one, and that's really the reason why we used the technique called relative frequency plots, an idea that we actually got from the group in Calgary. What that showed is that individuals who come into a hospital often come in an acute setting, and there is really no pattern to the requesting. It's almost as though the doctor seeing a patient in an acute context is not really interested in when the last test was and what the guidance says; he is interested in where the patient is now and what he needs to do to manage that patient in a very acute context.
And that's perhaps why, in the hospital context, there is a tendency to ignore guidance, or previous results and so on, and get a result now. In general practice, where it's much more about planned care, about thinking ahead for those individuals who need long-term monitoring, they have systems in place. Having said that, in the UK a lot of this is based on guidance that comes out of the Quality and Outcomes Framework for general practitioners, a system whereby the general practice gets funding based on meeting certain criteria.
In the context of hemoglobin A1c, it's really about how many patients on your diabetes register have had an HbA1c tested in the last 15 months. So there is really no financial incentive to test more frequently, particularly in those patients who have poor control and who perhaps need more intensive management.
And so that's perhaps why we are seeing that patients in hospital tend to get tested too frequently, whereas in general practice it's more the other way around: they are getting tested, but sometimes too late.
Doctor, in your paper you mentioned a six-fold variation in over- and under-requesting between general practitioners. What does that tell us?
That's an interesting one. It really supports what we were expecting to find: some general practitioners are doing lots and lots of testing, in which case they are doing lots of over-testing, whereas other general practitioners are really not testing very much at all. That tells me that inappropriate testing is pretty widespread in that context.
Because we are looking at the proportion of inappropriate testing, it’s not really affected by differences in volume of requesting, i.e. things like ethnic differences, prevalence of diabetes in a particular population and so on.
It's about whether, for this individual patient, this test is being done too soon or too late. And because we've seen such wide variation, that suggests there are different processes going on in different general practices.
Now, whether that's something to do with the systems that they have in place in the practice, or something to do with the way in which the healthcare professionals in that practice behave, is an interesting question, and that's perhaps the reason why we are seeing variation of similar magnitude in under-requesting between general practitioners.
What do we know about the differences between general practitioners that might determine their test requesting behavior?
I am sure there are multiple reasons why different general practices request in different ways. If you think about it from their context, it comes right back to their basic medical training: how much they were taught about what to test, how often to test, and in what context.
And then, following on from that, whether they have trained subsequently and pursued continuing professional development in the area of diabetes. In the UK, what we find is that some general practices have specialist diabetes nurses, or doctors with a particular interest in diabetes, but not all of them do, and that might account for some of the variation. There are certainly also differences between practitioners in attitudes to risk. It's becoming increasingly common, and I'm sure this is true in other parts of the world too, but it's certainly the case in the UK, that there is a fear of litigation, so that the doctor almost needs to be seen to be doing something in order to reassure the patient, whether or not there's a clinical need, and I think that's certainly happening.
The other thing that we've been looking at is whether an individual practice has engaged with the local guidance on how often a test should be done.
What we find is that some practices have a very tight handle on guidance and are implementing that guidance at a local level, whereas in other practices it depends really on who is looking after the patient, and that can vary considerably. And that leads us to another issue that's relevant in this context: communication between doctors and practice nurses.
What we find in some circumstances is that the practice nurse will see the patient for an annual review, for example, and order the tests required for the annual review; then the patient feels a little bit unwell a week or two later, goes to see the doctor or the nurse, and has the same test again. So there needs to be improved communication between the doctors and the nurses.
We have mentioned the differences in requesting behavior between healthcare professionals. What other potential causes of over- and under-requesting have you found?
I would classify those into two categories, really: factors which are patient-related and factors which are more system-related. For example, from the patient context, there is a lot more emphasis these days where the patient comes into the doctor's surgery having already done their own research. They have looked on the Internet and say, "I need this particular test because it will help in the control of my condition," and thereby they put subtle pressure on the practitioner to do testing when perhaps there isn't really a clinical need for it.
On the flip side of that, in the context of under-testing, sometimes you may be inviting a patient along for a particular test, but because working individuals live busy lives, access to the services where they can have the test may not be available; the surgery may not be open outside their normal working hours.
Also, the patient may have other diseases or conditions which affect their mobility, and therefore they may not physically be able to get to the phlebotomy appointment. And there may be issues around awareness of the need for testing: the patient may not see the importance of having the test done, and maybe that's a challenge we have in the laboratory context as well, emphasizing the importance of these sorts of aspects.
And certainly diabetes is a good example. There are situations where language and cultural barriers may be important, so we may need to translate the guidance that we have into a language that the patient is able to understand. That's particularly true with something like diabetes, which is more common in certain groups.
Then there are the issues that I mentioned about systems and practice. For example, one issue that's been a problem here in the UK until relatively recently is that a general practitioner can't necessarily see all the results of the tests that are done in a hospital.
So a patient may go to the general practitioner and have a test repeated that was done two weeks earlier in the hospital, because the practitioner didn't know the test had been done. So there are lots of reasons why under- or over-testing may occur.
Your data suggest that national guidance on frequency of monitoring appears to be ineffective in preventing inappropriate requesting. Well then, how might these causes be addressed?
Yeah, so I think that's one of the sad findings of our report, that national guidance really wasn't doing the trick in terms of influencing requesting behavior. When you look at the literature available on how we might go about changing requesting behavior, it certainly suggests that we need more than just a simple one-step approach. We need a multifaceted approach; we need to address it through education, through IT systems, and so on.
So we need something that's ongoing in order to make sure that the behavioral change is sustained, and it needs to address each step along the patient pathway: so that the patients themselves have the right information, that the healthcare professional who initially sees the patient has the right information and education, and the same in secondary care and likewise in the laboratory.
So we need to make sure that everybody in the process knows what to do. That means we need to look at all of the stakeholders, and that includes the patients and those who are paying for the tests to be done, and it needs to be agreed at a local level too, to get the local people engaged in the process. But we need to make sure it's standardized too, which is almost a contradiction. So we need a local agreement, but one that's based on national recommendations and guidance.
So what I would advocate, from a laboratory's perspective, is that we as laboratory professionals need to develop an overarching strategy that incorporates all these key stakeholders and ensures that tests are requested appropriately.
Well, I guess the bottom line is, does it really matter how often these tests are requested, and if so, to whom?
That's a really interesting one, and I think it brings us back to the whole purpose of why we do testing in the first place. One of the things that came out of the review of the literature and the available guidance is that the evidence underpinning the recommendations on how often you should be doing tests was actually rather inconsistent, a little bit thin, a little bit lightweight.
So I think there is really an issue about making sure that we have evidence supporting the claim that testing done in this particular way, at this particular frequency, delivers the best benefit to patients. If we don't do that, then we are wasting resources, both in terms of laboratory costs and the wider healthcare economy, and we are also not doing the best for our patients.
It also has an impact on the national economy, because if we are wasting time, with patients taking time off work to go to the doctor for blood tests, that's a loss for the national economy. And we can think even wider than that, in terms of the whole societal effect of better control for patients and so on.
So what we need is to assess whether the evidence underpinning the guidance is correct, and indeed whether it's correct for all patients. One of the things that we've been looking at more recently is identifying whether individuals who have diabetes and something else perhaps need a different recommended testing frequency than those who just have diabetes. That's the sort of direction we are looking at, at the moment.
Beyond that, once we've identified what the best testing frequency should be, we need to show that it makes a difference. We need to see whether it actually changes the way in which we manage the patient.
Other early evidence that we are getting from follow-up studies to this one suggests that although we have this variation in testing frequency, it doesn't correlate with management. The number of prescriptions for diabetes-associated medications doesn't really seem to follow the frequency of testing.
So while some doctors are testing very frequently, that isn't being followed by changes in their management, in terms of prescriptions at least. That tells us we need to review whether the testing frequency guidance itself is correct. Then we need to take that further down the line and see whether it makes a difference to outcomes: whether the patients who are tested more frequently do better or whether they do worse.
Now, there is some evidence in the literature suggesting that if you test more frequently, individuals reach their hemoglobin A1c targets more quickly than those who are tested less frequently.
Now, that's reasonably self-explanatory, but the evidence specifying how often testing should be done is really important. We then need to think about the person who is at the end of all this, namely the patient. We need to be able to assess whether testing frequency impacts patient quality of life.
One of the things we've noticed when we've been talking to patients is that when we ask them, "Shall we get rid of these tests that are not necessary according to guidance?", there is a reluctance, because there is a perception amongst patients that testing more frequently somehow translates into better care.
So we have a real piece of work to do to convince patients that doing tests at a particular frequency is best for them, and then we need to make sure that all of our information gets channeled into a system that allows us to ensure that testing is done at the right time. That largely comes down to IT systems, education, and those sorts of aspects. And that really is where we are going with the results of this study.
Tony Fryer is a Professor of Clinical Biochemistry at the University of Keele and Consultant Clinical Biochemist at the University Hospital of North Staffordshire, England. He has been our guest in this podcast from Clinical Chemistry. I'm Bob Barrett. Thanks for listening!