Jane Dickerson, PhD, and Brian Jackson, MD, of CLN's Laboratory Stewardship Focus recently interviewed physician and author H. Gilbert Welch, MD, MPH, about the problem of overdiagnosing patients. A general internist who has studied the problem of overdiagnosis for the past 2 decades, Welch is nationally recognized for his many journal articles and books on the topic, including Overdiagnosed: Making People Sick in the Pursuit of Health (1) and, most recently, Less Medicine, More Health: 7 Assumptions that Drive Too Much Medical Care (2).

How did you first come to think about the issue of overdiagnosis?

There’s no single answer to this question. I could credit my mother; she was a hospital trustee at Boulder Community Hospital, and I remember her asking hard questions about the medical profession. Did our town really need another CT scanner? Did the orthopedic surgeons really need a laminar flow room? These kinds of questions about medical technology were common in our household.

I also could credit my internal medicine residency training at the University of Utah, where I saw well people become patients because of small abnormalities on scans that led to further testing and sometimes complications. Finally, I should credit William Black, MD, a radiologist who taught me the basic conundrum of early detection: whenever we look harder for a disease, we find more of it, and so the typical patient appears to do better.

Could you share an example of a poor patient outcome due to a lab-related overdiagnosis?

I’ll share an example that I include in Overdiagnosed. I was managing ulcerative colitis in a 74-year-old man. One day his routine lab tests showed that he had elevated blood glucose. It wasn’t that high, but it prompted more testing, and ultimately the testing confirmed a diagnosis of diabetes. He had no symptoms, but this was during the period when we were getting much more aggressive about treating type 2 diabetes, and I started him on the sulfonylurea drug glyburide. Six months later he blacked out while driving on the interstate and his car went off the road. Paramedics on the scene measured his blood sugar, and it was extremely low due to this medication. He had a long recovery from the accident. I took him off glyburide, and he lived 2 more decades without any symptoms or complications from diabetes.

You’ve mentioned that it is important to consider how well a screening test has been studied, and that we need to educate patients accordingly. Would you walk us through examples of these considerations?

Yes, it is important to carefully evaluate the available evidence for how a test performs and for the benefits and harms of our likely response, i.e., the subsequent interventions based on the test. Given the evidence, you might decide it’s a slam dunk—something where everyone would look at the data and say, yeah, that’s a good thing to do. Or you might decide it’s a close call, which happens when different people look at the data and make different decisions depending on how they value things. Just to give you two examples, lowering really, really high blood pressure is a slam dunk, as is taking a statin after a heart attack. There are big benefits from these interventions and the harms are relatively small.

When I make the statement, “In general, cancer screening is a close call,” I know some people will think this guy is extreme. Sometimes we have a lot of evidence, as is the case for screening mammography. More than half a million women have participated in randomized trials, but the evidence of benefit for this test is mixed. A few probably benefit, but many more are harmed. The benefits and harms are different; they’re apples and oranges. The benefit—avoiding a breast cancer death—is extremely important but also extremely rare.

The harm of false alarms—false-positive results leading to a cascade of subsequent testing and anxiety—is less important but extremely common. The harm of overdiagnosis—being treated for a disease that was never destined to bother you—is less common but clearly more common than the benefit.

Different people can look at these data and come to different conclusions because it is all about how you value the various outcomes. One person might decide, I want to do everything I can to avoid a cancer death, and I accept the additional harms of potentially becoming a cancer patient unnecessarily, or of having scans with abnormal results that require me to come back for biopsies and other treatments. Another might reasonably arrive at exactly the opposite conclusion.

There is no right answer to this question; it is a value judgment.

Cancer screening involves value judgments. Are these types of discussions being had with patients?

They certainly are in the realm of prostate-specific antigen (PSA) screening for prostate cancer, which isn’t very different from breast cancer screening. Both have the same basic trade-offs, but we approach those trade-offs differently. We give men a choice about PSA screening, but we tell women they must have mammography screening.

The lab industry is constantly developing and marketing new lab tests. Many new tests are proprietary, making it hard to assess manufacturers’ claims, which often involve risk scores and fancy reports. When you encounter new tests, how do you assess the evidence for those tests?

I’m worried about the frothy nature of test marketing and promotion. It has become a big business, and it’s relatively unregulated compared with the pharmaceutical industry. A lot of promotion goes well beyond the pale from my standpoint. To be useful, a test must first have analytical validity, meaning that it measures what it claims to measure (precision, accuracy, sensitivity). The second criterion is clinical utility: Is there good evidence for what should be done with a positive or negative result? Absent such proof, providers should be inherently skeptical and treat the patient, not the lab abnormality.

Tests that offer risk ratios or scores based on mathematical modeling are basically just associations: models built to find which combination of biomarkers best discriminates healthy individuals from diseased ones. It is important that these assays be re-validated in a different population of patients, independently of the company that developed the tests.
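To make that re-validation step concrete, the sketch below shows the general workflow in Python with scikit-learn: a risk score is fit on a development cohort, its coefficients are frozen, and its discrimination is then re-measured on a separate cohort. Everything here is illustrative; the cohorts are simulated, and the model, variable names, and any resulting numbers are assumptions for the sake of the example, not anything reported in the interview or derived from a real assay.

```python
# Minimal sketch of external validation for a biomarker risk score.
# The cohorts below are simulated purely for illustration; in practice the
# independent cohort would come from a different population and the analysis
# would be run by a group with no ties to the test's developer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate_cohort(n, shift=0.0):
    """Simulate a cohort with three biomarkers and a binary disease label."""
    disease = rng.integers(0, 2, size=n)
    # Biomarker levels differ modestly between diseased and healthy subjects;
    # `shift` mimics population differences (case mix, assay drift, etc.).
    biomarkers = rng.normal(loc=disease[:, None] * 0.8 + shift,
                            scale=1.0, size=(n, 3))
    return biomarkers, disease

# Development cohort: the data used to derive the risk score.
X_dev, y_dev = simulate_cohort(n=500)
model = LogisticRegression().fit(X_dev, y_dev)

# Apparent performance on the development data is usually optimistic.
auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])

# Independent cohort: different population, model coefficients frozen.
X_val, y_val = simulate_cohort(n=500, shift=0.3)
auc_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])

print(f"Development AUC: {auc_dev:.2f}")
print(f"Independent-validation AUC: {auc_val:.2f}")
```

The design point, as Welch suggests, is simply that the score's discrimination should hold up in a population other than the one used to build it, and that the check should be done by someone other than the test's developer.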

What are the key drivers of overdiagnosis? How would you rank these in terms of influence?

There is a complex web at work, but the two most important forces are true belief and money. True believers think that early detection can only help, i.e., that it has no harms and that too much screening is impossible. Then there’s the money—whether for pharma or device manufacturers, or increasingly, our hospitals. The easiest way to make money is to expand indications and recruit new patients. Screening and early detection are great ways to do this.

Many patients are true believers. They ask for more testing because we have trained them to believe the path to health is through testing. But we have to ask ourselves some hard questions: Is looking hard for things to be wrong good for a healthcare system? Or does it simply make the population more anxious (the “worried well”) and distract them from activities more important to their health, like eating healthy food, moving regularly, and finding purpose in life?

We also need to consider survivor stories. The stories of patients who become strong advocates and organize into advocacy groups can be particularly misleading. I wish I could say that most survivors were actually helped by the process, but most breast and prostate cancer survivors whose cancers were detected by screening are actually much more likely to have been overdiagnosed than to have been truly helped. This sets up a powerful, paradoxical feedback loop: the more overdiagnosis a test creates, the more survivor stories there are, and the more popular screening becomes. Therein lies the paradox: The major harm of early detection (overdiagnosis) is perceived by patients as a benefit.

Every doctor would want me to add malpractice law into the mix of this complex web. Doctors aren’t stupid: We know we’re punished for underdiagnosis but not for overdiagnosis. This creates an unbalanced set of forces, so of course we’re going to err on the side of overdiagnosis.

What are a few specific ways that laboratorians can help reduce the overdiagnosis dilemma? Could you share an example of an intervention that was successful in reducing overdiagnosis related to lab testing?

I want to leave this last challenge to your readers. I will say that the first step is recognizing and then highlighting the problem. It is important to shine a light, and sometimes it really does make a difference. Be inherently skeptical when new tests come to market and think about the population on which the test is likely to be used. Don’t simply focus on the few who might be helped; think about what happens to everyone else.

Jane Dickerson, PhD, DABCC, is clinical associate professor at the University of Washington and co-director for clinical chemistry at Seattle Children’s Hospital in Seattle. Email: [email protected]

Brian Jackson, MD, MS, is associate professor of pathology at the University of Utah and medical director of IT and pre-analytic services at ARUP Laboratories in Salt Lake City. Email: [email protected]

References

  1. Welch HG, Schwartz L, Woloshin S. Overdiagnosed: Making People Sick in the Pursuit of Health. Beacon Press; 2011.
  2. Welch HG. Less Medicine, More Health: 7 Assumptions that Drive Too Much Medical Care. Beacon Press; 2015.

CLN's Laboratory Stewardship Focus is supported by Seattle Children's Patient-Centered Laboratory Utilization Guidance Services
