Antidote

With the ever-increasing price tag of Medicare contributing to a mindset of perpetual fiscal crisis in Washington, policymakers and pundits alike are demanding that more healthcare choices in the U.S. be steered by real data rather than guesses, predilections, or profits. So it's no accident that at the same time Medicare is moving to reward providers for improving quality and thrift, public funding for evidence-based medicine is also coming into its own, with billions of dollars for comparative effectiveness research becoming available under the healthcare reform law.

In the midst of this push for better evidence, AACC has partnered with researchers in the federal government to steer attention toward lab medicine. Major systematic evidence reviews proposed by the association are now completed or underway in the U.S. Agency for Healthcare Research and Quality's (AHRQ) Effective Healthcare Program.

"Among the biggest problems in laboratory medicine is the lack of evidence for professional activities that we do every day," said AACC President Robert Christenson, PhD, who has served as a liaison, expert panelist, and reviewer for AHRQ reports. "That's why systematic reviews of evidence by AHRQ can be so valuable. For example, AACC's academy can use this evidence in Laboratory Medicine Practice Guidelines. This is an unbiased, objective way that our tax dollars have the potential to improve the practice of safe, timely, effective, and efficient laboratory medicine." Christenson is a professor of pathology and director of clinical chemistry laboratories at the University of Maryland Medical Center in Baltimore.

Looking for Answers

The Obama administration has put a lot of faith in comparative effectiveness research (CER) as a means of coping with soaring healthcare costs. What began as a trickle of funds for CER at AHRQ became a steady stream in 2009 following a cash infusion from the economic stimulus package, the American Recovery and Reinvestment Act (ARRA). Now, the floodgates are opening with a new $3.5 billion independent research organization called the Patient-Centered Outcomes Research Institute (PCORI), created under the healthcare reform law. For 2013, PCORI will fund $350 million in comparative effectiveness studies, with this level of funding continuing at least through 2019. In addition, $10 million from the PCORI trust fund will go to AHRQ.

In a 2009 interview with the New York Times Magazine, President Obama explained that he saw CER as one way to deal with soaring healthcare costs. "When Peter Orszag [former director of the Office of Management and Budget] and I talk about the importance of using comparative-effectiveness studies as a way of reining in costs, that's not an attempt to micromanage the doctor-patient relationship," President Obama said. "It is an attempt to say to patients, 'you know what, we've looked at some objective studies out here, people who know about this stuff, concluding that the blue pill, which costs half as much as the red pill, is just as effective, and you might want to go ahead and get the blue one. And if a provider is pushing the red one on you, then you should at least ask some important questions.'"

While few comparative effectiveness studies end up being quite so clear-cut as red versus blue—especially in the area of diagnostics—experts in evidence-based medicine believe that the healthcare community has developed a new appetite for evidence-based practice. "I think there has been a sea change, starting with the Institute of Medicine's reports on evidence-based guidelines," said Elisabeth Kato, MD, MRP, a medical officer in AHRQ's Center for Outcomes and Evidence who has led several reports on diagnostics. "These reports are very clear on what kind of evidence and process must undergird clinical practice guidelines. We are also seeing more clinical decision supports out there for clinicians, as well as evidence-based initiatives focused on consumers."

At the request of Congress, the Institute of Medicine (IOM) developed two consensus reports in 2011. One, "Clinical Practice Guidelines We Can Trust," laid out eight core standards for improving the quality of clinical practice guidelines. The second, "Finding What Works in Health Care: Standards for Systematic Reviews," set standards for CER itself, such as managing researcher bias and ensuring stakeholder input. As researchers, clinicians, and payers coalesce around a push for more evidence, lab medicine too will need to bolster its guidelines with more evidence, experts told CLN.

As the evidence base expands, guidelines will need to move away from mere consensus, Christenson noted. "It's becoming more and more important not to rely on consensus guidelines when there actually is evidence out there for us to use," he said. "Consensus is clearly not the most scientific way of putting together guidelines. Guidelines backed by evidence will always be more reliable."

Real-World Problems Drive Research Questions

In all, AHRQ has taken up more than a dozen comparative effectiveness studies at the request of AACC, with two finalized last year and more on the way (See Box, below). Recent studies tackle topics such as diagnosis and management of sepsis, management of heart failure, and use of cardiac troponin in patients with renal failure. In a separate effort, AACC leaders have also worked with the Centers for Disease Control and Prevention (CDC) on the agency's evidence-based best practices initiative for labs (See Sidebar, below).

Comparative Effectiveness Reviews in Laboratory Medicine

Agency for Healthcare Research and Quality Studies Initiated by AACC

Published Studies

2012
Serum Free Light Chain Analysis for the Diagnosis, Management, and Prognosis of Plasma Cell Dyscrasias
Procalcitonin for Diagnosis and Management of Sepsis
Biomarkers for Assessing and Managing Iron Deficiency Anemia in Late-Stage Chronic Kidney Disease

2010
Assessment of Thiopurine Methyltransferase Activity in Patients Prescribed Azathioprine or Other Thiopurine-Based Drugs

2008
HER2 Testing to Manage Patients with Breast or Other Solid Tumors
Utility of Monitoring Mycophenolic Acid in Solid Organ Transplant Patients

Ongoing Studies

Use of Natriuretic Peptide Measurements in the Management of Heart Failure
Monitoring of Maintenance Immunosuppressants in Solid Organ Transplantation
Troponin Cardiac Marker Interpretation During Renal Function Impairment

Published studies and research protocols for ongoing studies are available from the AHRQ Effective Healthcare Program website.

Working with AHRQ, AACC has championed studies that deal with real-world clinical problems for laboratorians, according to Susan Maynard, PhD, a member of the joint AACC Evidence-Based Laboratory Medicine Committee that has worked with the agency. Maynard has served as a key informant for a recently published AHRQ study that compared a serum free light chain (SFLC) assay to traditional urine protein electrophoresis tests for diagnosing and managing plasma cell dyscrasias, a group of cancers that includes multiple myeloma.

"For the AHRQ comparative effectiveness research reviews, I tend to focus on research questions that relate to my job as a lab director," Maynard said. "For example, I wanted to know if we should be performing the serum-free light chain test in my own lab or keep it as a sendout test." Maynard is the director of chemistry, toxicology, and blood gases at the Carolina Medical Center in Charlotte, N.C.

The SFLC study is a good example of the unique approach of comparative effectiveness reviews, which evaluate tests in the context of other factors, such as patient needs, outcomes, and other tests, according to AHRQ's Kato. "This report was especially interesting because here we have a new test, but there are a lot of other tests that labs had been using before. So the question becomes, where does this fit in?" she said. "We wanted to know, could we use the new test to replace something—such as the 24-hour urine test, which everyone hates—or if not, how much information will it add on top of what else we're doing."

In addition to its potential to replace serum and urine protein electrophoresis and other diagnostic assays, the SFLC assay has also demonstrated promise in disease management, Maynard noted. Researchers have examined whether the SFLC assay could supplant bone marrow testing—a much more invasive procedure—and offer quicker, more sensitive monitoring of patients' individual responses to therapy.

This was an area where patient engagement was key during the AHRQ review process, according to Kato. One of the hallmarks of comparative effectiveness studies at AHRQ is engaging patients and other stakeholders whom traditional research might never consider, she said. During the study, researchers learned that the more sensitive, blood-based SFLC test could potentially spare patients pain. "At one of our meetings, the patient representative brought up something that no one else had thought about—avoiding complications of relapse," Kato said. "Patients are really important in helping us get the questions right." In fact, The Binding Site, manufacturer of Freelite, the only SFLC assay approved by the U.S. Food and Drug Administration, expressed interest in exploring this use of the test after hearing Kato speak about the AHRQ study at an AACC-sponsored webinar.

CDC Develops Evidence for Everyday Lab Challenges

The Agency for Healthcare Research and Quality has been a leader in comparative effectiveness reviews, but it is not the only agency working on evidence-based lab medicine. AACC leaders, including current AACC President Robert Christenson, PhD, have worked closely with the Centers for Disease Control and Prevention's (CDC) Laboratory Science, Policy and Practice Program Office on the Laboratory Medicine Best Practices (LMBP) Initiative. The program produces systematic evidence reviews on topics that present immediate, practical challenges for labs, such as critical value reporting or barcoding of specimens.

The LMBP initiative published four systematic evidence reviews in a special section of the September 2012 issue of Clinical Biochemistry, covering barcoding for patient specimen identification, blood culture contamination, critical value result reporting, and hemolyzed blood samples.

For 2013, the LMBP initiative will be working on three new systematic evidence reviews, according to a CDC spokesperson. These will cover lipid biomarker testing to stratify patients for cardiovascular disease risk; effective utilization of coagulation testing in emergency departments and pre-surgical hospital patients; and safe and effective practices for reducing unnecessary blood utilization in hospitalized patients. For 2013, CDC is also working on interactive, web-based educational materials to help laboratories design quality improvement studies and participate in the agency's LMBP initiative evidence reviews.

More information about the CDC program is available online.


Likewise, PCORI—the new organization funded by health insurance fees under healthcare reform—aims to define its role as a driver of engagement with patients. According to PCORI's executive director, Joe Selby, MD, MPH, being patient-centered is more than a buzzword. "PCORI was established to conduct research that helps patients and clinicians make decisions, so we will always start by asking, 'is this a question that patients, their caregivers, and clinicians face?'" Selby said. "We hope to avoid research questions that are off the mark by just so much, or building new bodies of evidence that lack information on outcomes essential to patients. We're after research that helps people make decisions and measures outcomes that matter to them."

Selby, who joined PCORI after 13 years as director of the division of research at Kaiser Permanente, believes that more intimate engagement with patients and other stakeholders will mean PCORI-funded research can move more quickly into practice. "I think it's fair to say that, building on the work of AHRQ, FDA, and others, we are now going even farther in including patients and other decision-makers in the research process—all the way from choosing which questions to ask, to reviewing the applications that come in, and deciding on funding," he said. "We tell applicants that they must include patients and other stakeholders in the research teams, and by having those key decision-makers involved in the process even before day one, we think that we are in a better position for getting the research findings out."

Dealing With a Dearth of Data

While CER seems like a boon for lab medicine and other areas seen as lacking evidence on patient outcomes, so far studies of diagnostics have rarely produced the kind of definitive answers researchers hope for. Part of the reason is that comparative effectiveness reviews can only be as good or as complete as the body of literature on which they're based. But this doesn't mean studies that find "insufficient evidence" aren't valuable, Kato emphasized.

The SFLC review was one such report, with a broad mix of studies but few of high quality directly comparing the test to others. Yet Maynard and Kato both stressed that they believe the review was worthwhile. "People often think that 'insufficient evidence' is not a useful result, but knowing exactly what we are not sure about is very valuable," Kato explained. "It allows decision-makers to figure out how to integrate the test into current practice based on reality instead of guesses or hype."

Moreover, comparative evidence reports outline exactly where researchers should look next, according to Maynard. "The review from AHRQ very clearly outlines for anyone who wants to do the next paper, how to do it properly," Maynard said.

Significantly, a finding of insufficient evidence also must be interpreted in light of the questions CER asks. These are not questions about a test's performance per se; rather, they examine the test in relation to other tests and other factors that prior researchers have too often left out. This can make it hard to collect a body of high-quality research and simply add it up. "The main reason the SFLC review came up as insufficient was not that no studies were available, it was that the majority of studies were looking at the test in isolation—does it predict disease, does it predict relapse—rather than in conjunction with all of the other tests we have," Kato said. "It's important to note that absence of evidence is not the same thing as evidence of absence. If we say that the evidence is insufficient, we're not saying that something doesn't work. We're simply saying the studies have not yet been done to say with certainty how it compares to other tests."

How can laboratorians apply comparative effectiveness reviews that end in a finding of insufficient evidence? Kato recommends that lab professionals examine the reviews with an eye toward other factors in their labs that affect decisions on how to use the test. When the evidence for a test is good but "insufficient" in a comparative effectiveness review, labs should focus on reducing uncertainty by being more deliberate in how they evaluate and roll out a new test. "It's still preferable to know if something is uncertain than not to know anything," Kato said. "And I think it does actually give you a firm basis from which to take the next steps. It may be a more complicated decision-making process than simply yes/no, but at least it's a firm place from which to start."

Maynard said she would like to move forward with plans to bring the SFLC assay in-house. Were the AHRQ review more definitive, she might have been able to use the results to argue for investing in the necessary instrumentation sooner. But the International Myeloma Working Group has endorsed the test as an adjunct to existing tests, and over time, new indications for the assay seem to be gaining traction, Maynard commented. For example, studies have demonstrated that the SFLC assay can be used to predict risk for patients with monoclonal gammopathy of undetermined significance, some of whom progress to myeloma or other malignancies. Meanwhile, Maynard has a short list of some other assays she'd like to bring in-house that require a new instrument—one which she can use for the SFLC assay. Taken together, these factors will likely put the test on Maynard's in-house menu soon.

Will Clinicians Pay Attention?

As the money in Washington begins to flow into more comparative effectiveness projects, many observers have questioned whether clinicians or patients will heed this new source of advice. Payers, policymakers, and pundits have long lamented that clinicians often act contrary to published guidelines, even with good evidence. However, as comparative effectiveness funding has developed a greater emphasis on stakeholder engagement and patient participation, there is some reason for optimism, said Justin Timbie, PhD, a researcher with the RAND Corporation who studies evidence-based health policy and quality measurement.

"In the past, studies have not been designed using techniques that ensure the evidence is readily taken up at the back end," Timbie said. "PCORI has an emphasis on stakeholder engagement that comes across very clearly, and there is now a concerted effort to improve on some of the shortcomings of clinical research in the past. I'm optimistic that we're headed in the right direction."

However, Timbie cautioned that recent history shows evidence moves very slowly into practice. In a recent Health Affairs study, Timbie and his colleagues at RAND outlined reasons that comparative effectiveness studies in the past have failed to change practice (2012;31:2168–75). Problems included misalignment of financial incentives, failure to address the needs of end users, and ambiguity of results. "Even the most carefully designed and conducted comparative effectiveness studies rarely produce definitive results," the authors noted. To improve uptake, they recommended a three-pronged approach: strengthening study design and interpretation through consensus; adopting IOM standards in developing guidelines; and realigning financial incentives.

Why fight this uphill battle? According to Timbie, consensus does exist that the current healthcare system has to change. "I think what's really driving the agenda is an awareness by everyone—patients, health plans, clinicians, and policy makers—that there is very limited evidence out there to help clinicians and patients make decisions," he said. "As a result, right now a lot of decision-making occurs with poor evidence, which leads to poor outcomes, waste, and medical errors—all of which are costly. The challenge is to fill this information gap and help clinicians and patients make better decisions."