“It is important that physicians and patients have some insight into the accuracy of tests that will influence their medical decisions,” says Brian Jackson, MD, MS, Vice President and Chief Medical Informatics Officer, ARUP.
A lot of confusion surrounds the accuracy of clinical laboratory tests. This came to mind recently in light of an article about a celebrity diagnosed with Lyme disease. The article claimed that Lyme tests from mainstream laboratories have mediocre accuracy (which is true), then evolved into an advertisement for a boutique lab that offers Lyme testing, implying that its tests are more accurate (which is not true).
Confusion about lab test accuracy is not uncommon, even among physicians and other medical professionals. Much of it stems from a misunderstanding of the relationship between sensitivity and specificity.
Sensitivity is the probability of getting a positive test result if your patient has the suspected condition/disease.
Specificity is the probability of getting a negative test result if your patient does NOT have the suspected condition/disease.
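These two definitions can be computed directly from the counts of true and false results. The function names and the patient counts below are hypothetical, chosen only to illustrate the formulas:

```python
# Sensitivity and specificity from confusion-matrix counts.
# TP = true positives, FN = false negatives,
# TN = true negatives, FP = false positives.

def sensitivity(tp, fn):
    """P(positive test | disease present) = TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """P(negative test | disease absent) = TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical study: of 100 diseased patients, 90 test positive;
# of 100 healthy patients, 85 test negative.
print(sensitivity(tp=90, fn=10))   # 0.9
print(specificity(tn=85, fp=15))   # 0.85
```

Note that each number is conditioned on a different group of patients: sensitivity looks only at the diseased, specificity only at the healthy.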
To better understand the relationship between sensitivity and specificity, imagine flipping a coin to test whether you have disease X. Heads you have the disease. Tails you don’t. In this example, the coin-flip test is 50 percent sensitive and 50 percent specific—equally weighted for each side. Yet such a test obviously offers zero information value since the result isn’t dependent on whether you have the disease.
Now imagine that you want to improve the sensitivity of this (coin) test. To do this, you add some weight to one side so it comes up heads more often, let’s say 80 percent of the time. Now you’ve got 80 percent sensitivity and 20 percent specificity: the coin comes up heads (positive) 80 percent of the time whether or not you have the disease. (And it’s still just as useless for diagnosing disease X.)
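The weighted-coin argument is easy to verify by simulation. This sketch (entirely hypothetical, using a fixed random seed for reproducibility) runs the coin “test” on simulated diseased and healthy patients and recovers the 80/20 split:

```python
import random

random.seed(0)

def weighted_coin_test(has_disease):
    # The "test" ignores the patient entirely:
    # heads ("positive") comes up 80% of the time regardless.
    return random.random() < 0.8

diseased = [weighted_coin_test(True) for _ in range(100_000)]
healthy = [weighted_coin_test(False) for _ in range(100_000)]

# Sensitivity: fraction of diseased patients who test positive.
sens = sum(diseased) / len(diseased)
# Specificity: fraction of healthy patients who test negative.
spec = sum(not r for r in healthy) / len(healthy)

print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Because the result never depends on `has_disease`, sensitivity lands near 0.80 and specificity near 0.20, exactly as the weighting dictates, and the test carries zero diagnostic information.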
Doctors and patients who are psychologically vested in a positive outcome of a test will sometimes select a test with high sensitivity, while ignoring the specificity. This is what sometimes happens in Lyme testing, where clinics that “specialize” in diagnosing chronic Lyme disease are motivated to get positive diagnoses as often as possible. So they either select antibody tests that have high false-positive rates, or they engineer their tests to favor sensitivity at the expense of specificity. The tradeoff is that many patients who don’t actually have Lyme disease will get misdiagnosed as having it, ending up on long-term IV antibiotics.
Another real-world example is inflammatory bowel disease (IBD), which includes both Crohn disease and ulcerative colitis. It is difficult to definitively diagnose these diseases. Certain blood tests can contribute to the diagnosis, but for the most accurate answer, they must be interpreted by an expert, taking into account the patient’s history, physical exam, and often a colonoscopy.
A few years back, a boutique commercial lab attempted to differentiate its own IBD profile in the marketplace by adding a proprietary antibody that (according to the company’s own data) had approximately 80 percent sensitivity and 20 percent specificity. In their marketing materials, they acknowledged that this marker had poor specificity but claimed that it improved the overall sensitivity of the panel. Baloney! This test was no better than flipping a weighted coin.
In the end, no test is perfect. Some, like cardiac troponins, are highly accurate; others, such as Lyme antibody testing, are less so.
It is important that physicians and patients have some insight into the accuracy of tests that will influence their medical decisions. One quick rule of thumb for doing so is to add the reported sensitivity and specificity numbers (if available). The closer to 200 percent you get, the more confidently you can rely on that test; the closer to 100 percent, the more you’re relying on a coin flip.
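The rule of thumb amounts to a one-line calculation. (Statisticians will recognize it as Youden’s J statistic shifted by 100 points; the test values below are illustrative.)

```python
def rule_of_thumb(sensitivity_pct, specificity_pct):
    """Sum reported sensitivity and specificity, in percent.

    Near 200: the test is highly informative.
    Near 100: the test is no better than a coin flip.
    """
    return sensitivity_pct + specificity_pct

print(rule_of_thumb(50, 50))   # fair coin: 100, no information
print(rule_of_thumb(80, 20))   # weighted coin: still 100, no information
print(rule_of_thumb(95, 98))   # accurate test: 193, highly informative
```

The weighted coin makes the point: trading specificity for sensitivity leaves the sum, and the information value, unchanged.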