Competency—the ability to accomplish a task successfully—in clinical laboratories means testing the right sample from the right patient in the correct way to produce the best result possible. Competency in this context encompasses all phases of testing—preanalytic, analytic, and postanalytic. As straightforward as the concept of competency might be, implementing it in practice is much more difficult. As evidence, one need look no further than regulatory inspections, in which competency-related deficiencies are among the most commonly cited.

Labs struggle with this issue because most use paper-based systems to document competency assessments, an approach with many drawbacks. Paper records involve manual processes prone to errors, and missing or wrong information often goes undiscovered except through audits. Yet paper records are difficult and time-consuming to audit and to update.

The techniques organizations use for oversight also limit competency management. Group announcements reminding personnel to complete their competencies only go so far. Eventually, individuals disregard these warnings, thinking they apply to others, not themselves. Knowing the real-time competency status of all staff is simply not feasible given how long audits take. A cumbersome process means that most time and energy go toward managing the documentation of competency rather than toward ensuring that employees are competent. Our own audits uncovered these issues, so we decided to find a better approach.

Going Electronic

Our search led us about a year ago to implement electronic competency software in our laboratory department, starting with pilots in our core laboratory and microbiology sections, then extending throughout the lab. This system has transformed our documentation management into an automated process. While not a silver bullet, it has proven to be a powerful tool for automating tedious tasks. We found that the time and effort we invested in setting up the system paid off quickly, as maintaining it requires very little of either.

Among the advantages of this new system is that our staff members can access it online via individual logins that track their activities. Once they set up an account, users also receive automatic notifications. In addition, we can assign different access levels for individual users, and our records are backed up regularly. Finally, we can run customized reports rapidly, enabling faster, up-to-date, and complete audits of competency records.
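To illustrate the kind of reporting such a system automates—this is a hypothetical sketch, not the vendor's actual software—the core logic is simply keeping a dated record per staff member per task and filtering for anything past due:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CompetencyRecord:
    employee: str
    task: str           # e.g., "preanalytic specimen handling"
    due: date           # date the next assessment is due
    completed: bool = False  # signed off by an authorized observer?

def overdue(records: list[CompetencyRecord], today: date) -> list[CompetencyRecord]:
    """Return records that are past due and not yet signed off."""
    return [r for r in records if not r.completed and r.due < today]

# Example data (fictional): one current record, one overdue.
records = [
    CompetencyRecord("Tech A", "instrument maintenance", date(2024, 3, 1), completed=True),
    CompetencyRecord("Tech B", "result reporting", date(2024, 2, 15)),
]

print([r.employee for r in overdue(records, date(2024, 4, 1))])  # ['Tech B']
```

With paper records, producing this list means paging through binders; with an electronic record, it is a single query that can run on demand, which is what makes real-time, up-to-date audits practical.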

Plotting the Learning Curve

Even with all these benefits, implementing this change proved to be difficult, underscoring the importance of our initial planning. We learned several key lessons along the way.

First, it is important to keep notifications and signoffs as simple as possible in the beginning. This will help staff members avoid becoming overloaded as they get used to the new system. We also learned that it pays to get input from supervisors and techs when creating competencies, as they will be the ones using the system. This gave them an opportunity to contribute to the process of performing competencies, which helped with buy-in.

We tried several small pilots with defined deadlines, instead of setting one large go-live date. Smaller pilots not only offer learning experiences but also are more manageable, without the stress and panic associated with an “all in” go-live. Although the pilots were small, we set deadlines to keep the projects moving. In addition, we worked closely with a group of champions for the pilots, including individuals with different roles. These team members became the system resident experts and helped achieve alignment and standardization across different sections of our laboratory.

We also invested time in training our observers—the individuals authorized to sign off on other users' assessments. This training ensures uniform completion of our competency assessments. Like the pilot champions, the observers proved critical to both the rollout of and buy-in for our new process.

During our implementation we learned the importance of being flexible and patient. The pilots enabled us to live-test the system and keep what worked while tossing what didn’t. We also found that the new electronic system uncovered issues that were there before, but of which we had not been aware. These became opportunities for improvement.

An overriding goal of our new system was to make the competency process standardized and efficient. Because the technology alone couldn't accomplish this, we set the expectation up front that employees would be responsible for achieving and documenting their own competency requirements with the tools we were providing them. We also made clear via a written policy that there would be consequences for letting competencies slide into overdue status.

Finally, we sought help from our vendor in modifying the software to match our workflow.

Improving as the Rollout Continues

A big benefit of our new system is that it enabled us to uncover and address weaknesses in our approach to competency. This might have happened even without transitioning to an electronic system, but the new system enabled us to make rapid improvements.

With the new system, lab supervisors get the real-time status of competencies and can address any issues directly with the individuals involved rather than in a group setting. Our next challenge is to transition our point-of-care users to the system. This will raise its own set of challenges, but we’re looking forward to the efficiencies and standardized and timely documentation of competency that we expect to achieve.

Van Leung-Pineda, PhD, DABCC, FAACC, is the section director of clinical chemistry and point-of-care testing in the department of pathology and laboratory medicine at Children’s Healthcare of Atlanta and adjunct assistant professor in the department of pathology and laboratory medicine at Emory University School of Medicine. +Email: Van.PinedaWung@choa.org