September 2007: Volume 33, Number 9
Autoverification of Test Results
How to Avoid Mistakes and Improve Efficiency in Your Lab
By Judith Johnson MT(ASCP)SH, and Deborah Stelmach MT(ASCP), MS
With the pressure on clinical laboratories to do more with less, many laboratorians want to institute autoverification of test results to speed turnaround time, improve workflow, and Lean processes. The autoverification process involves automatic review of test results based on a predetermined set of boundaries or rules established by the laboratory, eliminating the need for a qualified laboratorian to approve results before they are sent to the attending clinician. A carefully designed system improves operational efficiency and helps labs tackle crucial problems like medical errors, test turnaround time (TAT), and personnel shortages.
But knowing how and where to begin the process of establishing autoverification of test results represents a daunting task, especially when a lab may already be short staffed. Many labs start with a project that is too big and end up with an unsuccessful outcome. This was the first mistake our laboratory made. In this article, we share some of the mistakes we made, as well as some of the successes. Today, all the major instruments in our automated testing department run autoverification of test results. The information presented here can help your lab avoid similar mistakes and instead enjoy the benefits of this important tool.
A Snapshot of Our Lab
Bon Secours HealthPartners Laboratories serves four hospitals in Richmond, Va., and an outreach program serving facilities within an 80-mile radius, including nursing homes, physician practices, and hospitals. We have an experienced team of lab professionals, including pathologists and technologists, performing a total of more than 2.3 million billable tests per year in our four labs, each of which is located within a hospital. Our core laboratory resides at our largest facility, St. Mary’s Hospital. As of February 2004, all our sites autoverify test results using the Misys Healthcare (Raleigh, N.C.) laboratory information system (LIS).
During strategic planning sessions in early 2000, our hospital administrators identified the need for greater efficiencies in our system. We were asked to grow our business and expand our test menu, but at the same time not add any full-time equivalent (FTE) positions.
From the lab’s standpoint, we also wanted to improve patient safety and shorten TAT. To achieve these seemingly opposing goals, we instituted autoverification of all test results from automated analyzers. We decided that all newly acquired instruments would be brought online using autoverification and that, as time permitted, existing analyzers would be set up to autoverify.
Early Lessons Learned
We were delighted at the opportunity to purchase a new immunoassay system and implement our strategy for conversion to autoverification. The lab purchased an Advia Centaur (formerly Bayer, Tarrytown, N.Y.), but right from the get-go, our simple autoverification strategy suffered its first blow. Because it was a new instrument for our lab, the technologists were not comfortable with the system. It was also a new methodology for us, and so things did not go smoothly at first. Our LIS vendor did not recommend bringing this new instrument online with autoverification, but we ignored that warning and went ahead with our plan. First lesson learned—never go live on a new instrument with autoverification until technologists and the system have reached a satisfactory comfort level in the lab’s operation.
Our next new instrument was a Bayer Clinitek Atlas for the core lab. This analyzer turned out to be a better starting point for autoverification as it has fewer parameters and less preanalytic variability, in addition to only one specimen type. Because our volume of urine specimens was quite high, autoverification with this instrument had a big impact on workflow efficiency and also improved our TAT for specimens from the emergency department (ED) that required urine macroscopics.
For this instrument, we built rules allowing all normal and abnormal urines to autoverify when there was no instrument error code, abnormal color, absurd specific gravity, or critical value. We eventually rolled out autoverification to our Clinitek 500s at our smaller lab sites. Presently, we have three automated urine workcells in our system that autoverify urine macroscopics and microscopics, which saves a tremendous amount of time.
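The urinalysis rules above can be sketched as a simple gate: a result files automatically only if every hold condition is clear. This is a minimal, hypothetical sketch; the field names, acceptable colors, and specific-gravity bounds are illustrative assumptions, not the lab's actual LIS configuration.

```python
# Hypothetical sketch of the urinalysis autoverification gate.
# Field names and thresholds are illustrative, not real LIS rules.

PLAUSIBLE_SG = (1.003, 1.035)   # assumed specific-gravity bounds
ACCEPTABLE_COLORS = {"yellow", "pale yellow", "straw", "amber"}

def urine_autoverifies(result):
    """Return True if a urine macroscopic result may file without review."""
    if result.get("instrument_error"):
        return False                      # any analyzer error code blocks release
    if result["color"].lower() not in ACCEPTABLE_COLORS:
        return False                      # abnormal color needs a tech's eyes
    sg = result["specific_gravity"]
    if not (PLAUSIBLE_SG[0] <= sg <= PLAUSIBLE_SG[1]):
        return False                      # absurd specific gravity blocks release
    if result.get("critical_value"):
        return False                      # critical values always hold for review
    return True
```

Note that both normal and abnormal results pass this gate, which is what made the rule worthwhile: only the exception conditions hold a specimen for review.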
Another early mistake we made was our choice of results to release by autoverification. We initially set the LIS interface to only autoverify normal results, which had little impact on the lab’s efficiency. Second lesson learned—choose your rules and parameters carefully so that the gain in efficiency is noticeable.
Overall, converting to autoverification of test results requires careful planning and many months of work. Each area of testing needs to be dealt with individually. Below we present some of the specifics of how we set up autoverification in various testing areas at Bon Secours HealthPartners Laboratories.
Autoverification in Hematology
Following successful autoverification in our urinalysis area, we chose to work on autoverification of results on our coagulation analyzers. At the time, we had one ACL Advance analyzer (Beckman Coulter, Brea, Calif.) and seven XE2100 and XT1800i hematology analyzers (Sysmex, Mundelein, Ill.). On the Advance analyzer, we file results by Misys sequence number using cup collation, a process that holds the results until all testing on the cup is complete and allows the technologists to review the results of all the tests ordered for the patient before the results are released.
Another helpful rule for smoothing the transition to autoverification involved coagulation results from children 12 and under and pre-operative patients. We wrote a rule using a calculation on our LIS that flags abnormal results from these patients because we did not want to have blood redrawn from children or surgeries cancelled until we ruled out specimen integrity issues that could cause an aberration. These rules generate a flag that stops the normal autoverification rules and tells the technologist to check specimen integrity. The technologist must then check the sample and answer with the code “INTEC,” which translates to “instrument and technical errors have been ruled out,” before the sample results are released.
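The hold-and-confirm workflow above can be sketched as follows. The INTEC code comes from the article; the patient fields and return values are assumptions made for illustration, and an abnormal result that does not match this rule simply falls through to the normal autoverification rules.

```python
# Illustrative sketch of the integrity-check hold for pediatric and
# pre-operative coagulation results. Data shapes are assumptions.

INTEGRITY_OK = "INTEC"  # "instrument and technical errors have been ruled out"

def review_coag_result(patient, abnormal, tech_response=None):
    """Return 'release' or 'hold'. Abnormal results from children 12 and
    under or pre-op patients hold until a technologist answers with INTEC."""
    needs_check = abnormal and (patient["age"] <= 12 or patient["pre_op"])
    if not needs_check:
        return "release"      # this rule does not apply; normal rules decide
    if tech_response == INTEGRITY_OK:
        return "release"      # tech confirmed specimen integrity
    return "hold"             # stop autoverification, prompt integrity check
```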
Partial thromboplastin time (PTT) tests also required a special rule. Our medical director requested that a disclaimer be attached to these results to place the onus of possible heparin contamination back on the caregiver. Consequently, we include the following comment on the lab report: “When clinically evaluating a prolonged PTT, pre-analytic variables including possible fluctuations in levels of heparin should be considered.” The parameters we used for coagulation were simple—all INRs without a 20-second delta check on the PT are released. All PTTs between 21.0 and 90 seconds and fibrinogen levels of 150–600 mg/dL are released.
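The numeric release ranges quoted above lend themselves to a small table-driven check. This is a sketch only: the test codes and data shape are hypothetical, and the INR/PT delta-check rule is omitted for brevity.

```python
# Table-driven sketch of the coagulation release ranges quoted in the text
# (PTT 21.0-90 s, fibrinogen 150-600 mg/dL). Test codes are hypothetical.

COAG_RANGES = {
    "PTT": (21.0, 90.0),    # seconds
    "FIB": (150.0, 600.0),  # mg/dL
}

def coag_autoverifies(test_code, value):
    """Release a coagulation result only if it falls inside its range."""
    low, high = COAG_RANGES[test_code]
    return low <= value <= high
```

Keeping the ranges in one table, rather than scattered through rule logic, makes them easy to review with the medical director and to revalidate periodically.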
On the Sysmex analyzers, we file results by sequence and include the differential, if ordered. The monitored parameters include critical values, instrument flags, delta checks, and technical limits flags. The LIS fails all pre-operative platelet counts less than 100,000 using a calculation similar to the one used in coagulation testing so that the technologist can investigate and verify the result. We also capture the analysis mode on the Sysmex so that results from diluted capillary samples will not autofile. This allows the technologist to verify that the proper dilution technique has been used before the result is released.
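Two of the hematology holds above, the pre-operative platelet rule and the analysis-mode capture, can be sketched together. Field names and the mode label are assumptions for illustration.

```python
# Hedged sketch of two hematology holds: pre-op platelet counts below
# 100,000/uL fail autoverification, and results from a diluted capillary
# analysis mode never autofile. Field names are assumptions.

def heme_autoverifies(result, patient):
    """Return True if a CBC result may autofile under these two rules."""
    if result["analysis_mode"] == "capillary_diluted":
        return False   # tech must confirm the dilution technique first
    if patient["pre_op"] and result["platelet_count"] < 100_000:
        return False   # hold low pre-op platelets for investigation
    return True
```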
By and large, we have been happy with the rules that we instituted for autoverification in hematology. Table 1 presents an overview of autoverification parameters for eight hematology tests.
Autoverification in Chemistry
Multiple sample types, method interferences, and sample integrity issues can complicate autoverification of chemistry results. In addition, certain tests have strict tube requirements, such as therapeutic drug levels, which require non-gel-barrier sampling tubes.
We started the process for general chemistry review at a high level. Because we did not have specimen container identification in our LIS, we eliminated certain tests completely from qualifying for autoverification. For example, our lab does not autofile chemistry results from fluid or urine samples. This prevents serum creatinine results from filing in urine creatinine report fields. In addition, we do not autofile ammonia levels because they have specific specimen-type requirements.
We also found that our hospital information system (HIS)/LIS interface can help us by assigning separate accession numbers to certain tests, such as phenytoin, that must be collected and analyzed from red-top tubes. We allow splitting of inpatient samples into different accession numbers, which are then allowed to autofile.
However, we must depend on human intervention to order outpatient samples on a separate accession number. Therefore, no results for outpatient drug levels autofile. We prevent autofiling by appending a comment, “check tube type,” which triggers the technologist to check that the result came from a red-top tube as required. This is one example of how autoverification improves patient safety by providing a consistent mechanism for human intervention on possible problems.
Another issue that complicates chemistry autoverification is the presence of method interferences. The LIS must be able to capture all instrument error flags and use them to prevent autoverification. Our chemistry analyzer measures hemolysis, icterus, and lipemia. Samples with these problems generate flags, which block autoverification of the results.
After tackling the high-level issues for chemistry tests, we focused on more specific parts of testing. The most common criterion used for autorelease of chemistry results is the critical or analytical range, so this is an important aspect to focus on. In reality, laboratorians use mental algorithms every day when they review results. We started by asking staff how they judge a result as acceptable and then created a test matrix. The matrix listed every test performed in chemistry, except those that fall into the category of “never autoverify.” To the matrix, we added the following parameters for every test: 1. reference range (normal range); 2. analytical measurement range (technical range); 3. critical range or value (panic values); and 4. any special needs for specimen type. We then gave the matrix to the technologists and asked them to record specific details of every autoverification failure. So, for example, they noted any result that required any sort of action. In some cases, we found that the technologists were just looking at the test value, a sign that we selected the wrong range.
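One possible shape for such a test matrix is sketched below. The test codes and every numeric value are illustrative placeholders (common textbook figures), not the authors' actual settings; the point is the structure: four parameters per test, kept in one reviewable place.

```python
# Illustrative shape for the chemistry test matrix described above.
# All values are placeholders, not the lab's validated settings.

MATRIX = {
    "K": {                         # potassium, mmol/L
        "reference": (3.5, 5.1),   # normal range
        "analytical": (1.0, 10.0), # technical range
        "critical": (2.8, 6.2),    # panic values
        "specimen": None,          # no special tube requirement
    },
    "PHENYTOIN": {                 # ug/mL
        "reference": (10.0, 20.0),
        "analytical": (0.5, 40.0),
        "critical": (None, 30.0),  # high critical only
        "specimen": "red-top",     # special tube requirement
    },
}

def autoverify_range(test_code):
    """One possible release rule: use the critical limits where defined,
    falling back to the analytical range otherwise."""
    entry = MATRIX[test_code]
    c_low, c_high = entry["critical"]
    a_low, a_high = entry["analytical"]
    return (c_low if c_low is not None else a_low,
            c_high if c_high is not None else a_high)
```

Recording every autoverification failure against a matrix like this is what exposed the mismatches between the configured ranges and the ranges technologists actually applied in their heads.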
After some discussion, it became clear that the technologists thought that some technical ranges were too wide to autorelease results with reasonable comfort. When we looked at electrolytes, we felt the normal range was too tight but the critical range was too wide, so we compromised (Table 2). In addition, we did not feel comfortable autofiling results when the lower end of the assay range was zero. For tests in this category, we split the difference between zero and the normal range. If a result is below the autoverification level, the technologist checks the sample for fibrin and repeats the test to confirm the low result. For example, this prevents autofiling an aspartate aminotransferase (AST) level of 3 IU/L on a clotted sample.
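The "split the difference" lower bound reduces to a one-line calculation. The AST normal-range figure below is an assumed example, not the lab's actual value.

```python
# Sketch of the split-the-difference lower bound: when an assay range
# starts at zero, autoverify only down to the midpoint between zero and
# the bottom of the normal range.

def low_autoverify_limit(normal_low, assay_low=0.0):
    """Halfway between the assay floor and the normal range's lower end."""
    return (assay_low + normal_low) / 2

# With an assumed AST normal range starting at 10 IU/L, the limit is
# 5 IU/L, so an AST of 3 IU/L on a possibly clotted sample is held.
```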
Patient ID with Delta Checks
Since one of our goals was to reduce medical errors, we used this opportunity to institute delta checks to verify patients’ identity. Years ago, medical technologists used cholesterol results as an identity check since levels changed very slowly. Unfortunately, cholesterol is no longer part of the chemistry profiles; therefore, setting up valid delta checks today requires a lot of homework. We started with an excellent reference titled Biological Variation: From Principles to Practice by Callum Fraser. We calculated the biologic variation expected between samples and used that as a starting point in our delta calculation. Then we tested the idea with data from real samples. It is a painfully repetitive, but worthwhile, process. We checked and adjusted delta check ranges on albumin and total protein four times before we were satisfied that we were catching mislabeled samples without driving staff crazy. Table 3 presents three tests that we use to perform delta checks.
Delta Check Values
Result level | Delta check
<20 mg/dL | change of 10 mg/dL
≥20 mg/dL | change of 38%
<4.0 mg/dL | change of 1 mg/dL
≥4.0 mg/dL | change of 10%
<6.0 mg/dL | change of 1 mg/dL
≥6.1 mg/dL | change of 10%
To be successful, be sure that your staff knows how your delta calculation works and what time parameters are used, and perform periodic validations to ensure that your delta checks are appropriate. For example, we originally had a delta check on potassium that was used to check sample integrity. Once we had instrument indices for hemolysis, we discontinued that delta check. It is also helpful that our LIS can use absolute delta changes and percentage changes. We settled on a combination of these to cover the assay range.
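A combined absolute/percentage delta check in the style of Table 3 can be sketched as below. The cutoff and limits in the test are illustrative, not a validated rule; any real settings must be derived from biologic variation data and checked against real samples, as described above.

```python
# Sketch of a combined delta check: an absolute change limit below a
# cutoff, a percentage limit above it. Parameters are illustrative.

def delta_check_fails(previous, current, cutoff, abs_limit, pct_limit):
    """Return True if the change between results exceeds the allowed delta."""
    change = abs(current - previous)
    if previous < cutoff:
        return change > abs_limit            # absolute rule at low levels
    return change > previous * pct_limit     # percentage rule higher up
```

Splitting the rule at a cutoff covers the whole assay range: a fixed percentage is meaninglessly tight near zero, and a fixed absolute change is meaninglessly loose at high concentrations.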
Some Words of Advice
In addition to avoiding the pitfalls described above, we offer some other recommendations to labs considering autoverification of results. The first recommendation is to get buy in and support from technologists. Our technologists initially thought autoverification was going to replace them. We presented the project in a way that showed how they would be able to spend more time on the more difficult test results/samples and be able to perform more highly complex tests on an expanded menu if they didn’t have to spend time reviewing unremarkable results. We also sought out their opinions on how rules and parameters should be built, which made them more comfortable with the process.
The next recommendation is to get the support of your institution’s medical director. This person is responsible for every test result leaving the lab, even the autoverified results. Some medical directors like to be very involved in the selection of parameters and rules; others like to just review the final results of testing. Find out where your director stands before laying out your plan.
Also, get to know your LIS analyst very well. If the analyst has not been on the bench recently or has not worked in all areas of the labs, you may have to educate him or her about the lab’s processes. Our analyst had been a chemistry tech when last on the bench, so we only had to provide information on hematology processes. Use the analyst as a resource with your LIS vendor, and follow the vendor’s recommendations for the particular interface/instrument combinations you are working with.
A helpful instrument specialist from the LIS vendor is also a valuable resource. We resolved a lot of issues by conference calls with our analyst, LIS vendor, instrument vendor, and technical specialist from the lab. Learn what your LIS software can do and what your instrument software can do. All vendors publish host communication protocols for instruments, so be sure to use them. The instrument may be able to send something to enhance the autoverification process, but the LIS may not be set to capture it.
We also strongly recommend that labs write a policy for autoverification, describing how you determine what results qualify for release versus those that get held for review and how the process is tested and periodically validated. You should also define steps for autofiling of test results. These rules use standard criteria that are created and maintained by the LIS manager to determine whether a result should route directly to a patient file or be reviewed by a technologist. Studies show that the majority of results uploaded from a lab instrument require no technologist intervention and can be automatically forwarded to patient files. Incorporate the steps for autofiling in the policy for autoverification, as well as in individual standard operating procedures (SOPs). Have an inclusive list of all parameters and rules that may be used in autoverification. This document should include a table or listing of the method codes using autoverification for easy reference.
Know Thy Regulations
Obviously, one thing that labs can’t overlook is regulatory requirements. Some states do not allow autoverification. In addition, the College of American Pathologists’ general lab checklist has several questions regarding autoverification. These concern monitoring quality control, suspension of autoverification, rules-based checking, rules validation, and medical director oversight.
To fulfill these requirements, we documented all of our parameters in a policy, which includes backup documentation. We also had our medical director review the policy. The policy maps out thorough testing of autoverification parameters during implementation with periodic checks post implementation. Our LIS provides the required audit trail, and our technologists have access to an LIS analyst 24/7 if the need to rapidly suspend autoverification should arise. A very helpful guide to meeting such regulatory requirements is document AUTO10, “Autoverification of Clinical Laboratory Test Results,” from the Clinical and Laboratory Standards Institute.
The Improvements Speak for Themselves
The huge effort that goes into setting up autoverification reaps many rewards. We saw tremendous improvement in TAT for tests from the ED. For example, our TAT for CBCs has gone down from 22 minutes to 15 minutes, even with a 50% increase in volume since 2003. Similarly, TATs for urine macroscopics have improved from 20 minutes to 15 minutes, with a 62% increase in volume. We struggled for years to get 90% of our troponins back to the ED in 45 minutes. Once we set autoverification to allow all normal results to autorelease, we made the TAT goal for the first time on that assay.
Technologists at all our sites that routinely released chemistry results broadly agreed that autoverification improved workflow. In essence, autoverification is allowing your LIS to review results using the same thought processes your staff uses. Once everyone is on board with that idea, the stress level also decreases.
A Look Back
Now that our laboratory staff is accustomed to autoverification of test results, they become upset when it must be discontinued for any reason. In fact, when we converted to a new chemistry vendor in 2006, we did not immediately autofile results from the analyzer. Not only did we see a major increase in chemistry TAT, but we also observed a major increase in the stress level among our technologists. There was much rejoicing when we turned on autoverification for the new instruments, and we saw immediate improvement in TAT and compliance rates. Furthermore, we have been able to maintain and improve TAT for ED samples, despite consistent increases in volumes, as well as the additional growth in inpatient and outreach testing.
Autoverification is definitely worth the work and occasional headaches. It can Lean your process, reduce TAT, increase patient safety, standardize result review, and improve employee satisfaction. Start small, do lots of homework, and keep your staff involved, and you will also reap the rewards. The bottom line for us has been increased capacity, the ability to bring some tests in house that were previously sent to a reference lab, and improved staff morale.
Clinical and Laboratory Standards Institute. Autoverification of Clinical Laboratory Test Results, Approved Guidelines (AUTO 10-A). Wayne, Pa.: Clinical and Laboratory Standards Institute, 2006.
College of American Pathologists. Laboratory General Checklist. Northfield, Ill.: College of American Pathologists, 2006.
Fraser, C. Biological Variation: From Principles to Practice. Washington, D.C.: AACC Press, 2001.
Judith Johnson MT(ASCP)SH is the Clinical Pathology Director for Bon Secours HealthPartners Laboratories, Richmond, Va.
Deborah Stelmach MT(ASCP), MS, is Automated Testing Supervisor and Chemistry Regional Technical Specialist for Bon Secours HealthPartners Laboratories, Richmond, Va.