Physicians throw around the term “evidence-based medicine” a lot. Whether it’s an antibiotic, IV fluid, or blood-pressure pill, the decision about how to use a drug often comes down to the question: is the treatment evidence-based? But what does that mean? Evidence-based medicine is “the conscientious, explicit, and judicious use of current best evidence in making decisions” about patient care. This definition suggests that clinicians or researchers fastidiously tested and confirmed the effectiveness of an intervention with a robust, replicable, and accurate scientific study.

Designing a valid study, however, is difficult because there are many potential biases that can render its conclusions inaccurate. Here are some examples:

  • Selection bias occurs when subjects are assigned in a nonrandom manner to different study groups. If a physician runs a trial to test the efficacy of a drug, he may put those who have a better prognosis in the treatment group rather than the non-treatment group. Consequently, scientists can claim the new treatment is successful even though it was tested on those who were most likely to improve anyway.
  • Sampling bias occurs when the subjects chosen for a study do not represent the general population, so the study’s findings may not apply beyond the people actually studied.
  • The Hawthorne effect arises when subjects change their behavior because they know they’re being watched by a researcher or physician.  
  • Confounding bias describes a situation in which one factor distorts the apparent effect of another. If a researcher studies the effects of alcohol on health but ignores the fact that many people who drink alcohol also smoke, alcohol will appear to have a worse effect on one’s health than it actually does, because of the consequences of smoking (the sketch just after this list shows how that distortion arises).
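
To make that last distortion concrete, here is a rough simulation sketch. The numbers are made up for illustration and are not drawn from any real study: smoking is assumed to do most of the harm and to be more common among drinkers, so a naive comparison blames alcohol for damage that smoking is causing, while comparing smokers with smokers and non-smokers with non-smokers recovers the much smaller true effect.

```python
# Rough illustration of confounding (made-up numbers, not real data):
# smoking does most of the harm here, and drinkers are more likely to smoke,
# so a naive comparison makes alcohol look far worse than it is.
import random

random.seed(0)

def simulate_person():
    drinks = random.random() < 0.5
    # Assumed for illustration: drinkers are much more likely to smoke.
    smokes = random.random() < (0.6 if drinks else 0.1)
    # Assumed health score: drinking costs 2 points, smoking costs 15.
    health = 70 - (2 if drinks else 0) - (15 if smokes else 0) + random.gauss(0, 5)
    return drinks, smokes, health

people = [simulate_person() for _ in range(100_000)]

def mean_health(rows):
    return sum(h for _, _, h in rows) / len(rows)

drinkers = [p for p in people if p[0]]
nondrinkers = [p for p in people if not p[0]]

# Naive comparison: alcohol appears to cost roughly 9-10 health points...
print("apparent cost of drinking:",
      round(mean_health(nondrinkers) - mean_health(drinkers), 1))

# ...but comparing within smokers and within non-smokers recovers the true
# effect of about 2 points; the rest was smoking all along.
for smokes in (False, True):
    d = [p for p in drinkers if p[1] == smokes]
    n = [p for p in nondrinkers if p[1] == smokes]
    label = "smokers" if smokes else "non-smokers"
    print(f"cost of drinking among {label}:",
          round(mean_health(n) - mean_health(d), 1))
```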

Another kind of bias has been in the news a lot recently with regard to prostate-cancer screening. Here’s how Dr. Michael S. Cookson, a urologist at Vanderbilt University, describes this kind of bias:

Lead-time bias suggests that the natural history of the disease is not truly affected by screening. For example, a patient may be diagnosed with prostate cancer at 50 years of age through … screening. He then undergoes treatment but ultimately progresses and dies at 60 years of age. Alternatively, the same patient without screening develops symptomatic bony metastases [late stage cancer] at age 58, undergoes treatment with androgen deprivation therapy, and dies at age 60. Thus, in this theoretical scenario, even though he was diagnosed 8 years prior through screening, his death was not affected by screening or early detection.

In other words, early detection of cancer makes it seem as if your lifespan is increased simply because you know that you have cancer for a longer period of time. But you don’t necessarily live longer because of that.
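A tiny worked example, using only the hypothetical ages from the quoted scenario, makes the arithmetic explicit: measured “survival after diagnosis” jumps from 2 years to 10 years even though the patient dies at exactly the same age.

```python
# The hypothetical ages from the quoted scenario: screening moves the
# diagnosis earlier, but the age at death stays the same.
AGE_AT_DEATH = 60
AGE_AT_DIAGNOSIS_SCREENED = 50     # found by screening
AGE_AT_DIAGNOSIS_SYMPTOMS = 58     # found once symptoms appear

survival_screened = AGE_AT_DEATH - AGE_AT_DIAGNOSIS_SCREENED      # 10 years
survival_unscreened = AGE_AT_DEATH - AGE_AT_DIAGNOSIS_SYMPTOMS    # 2 years

print("Apparent survival after diagnosis, with screening:   ",
      survival_screened, "years")
print("Apparent survival after diagnosis, without screening:",
      survival_unscreened, "years")
# The patient dies at 60 either way, so the 8 extra "survival" years are
# entirely lead time, not added life.
print("Extra years actually lived thanks to screening:       0")
```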


There are many other kinds of bias, but the descriptions above give a sense of how difficult it is to design experiments free of bias. The most powerful studies account for bias with a double-blinded, randomized, and controlled trial. Participants and researchers are both blind in that neither knows who is getting the placebo and who is getting the trial treatment. Participants must also be randomized to the treatment group or the placebo group; that way, there is no selection bias and there is less confounding bias. Controlled just means that there must be a control group, a group that either does not receive the therapy being tested or receives the current best therapy for the disease. Researchers can then compare the effectiveness of the newest therapy to the current best available therapy. Another way to avoid confusing results is to use crossover studies, in which a patient serves as his or her own control. The patient receives the real therapy for a given period of time and then receives the placebo for a given period of time, which eliminates confounding from differences between patients.
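As a rough illustration of the “randomized” part, here is a small sketch, not the protocol of any actual trial: participants known only by study codes are shuffled by a random-number generator and split evenly between the two arms, so no one has the chance to steer healthier patients into the treatment group. In a blinded trial, the resulting allocation table would be kept away from both patients and treating physicians.

```python
# A toy randomization scheme (not any particular trial's protocol): shuffle
# coded participant IDs and split them evenly between the two arms.
import random

def randomize(participant_ids, seed=42):
    """Randomly assign each participant to 'treatment' or 'control'."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    allocation = {pid: "treatment" for pid in ids[:half]}
    allocation.update({pid: "control" for pid in ids[half:]})
    return allocation

# Example: ten participants known only by study codes.
assignments = randomize([f"P{i:03d}" for i in range(1, 11)])
print(assignments)
```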

A statue of Avicenna in Tajikistan (Nikita Maykov / Shutterstock.com)

Interestingly, this approach to scientific studies, albeit a much less sophisticated version, dates back to the eleventh-century Islamic philosopher and physician Avicenna. In his Canon of Medicine, a multivolume medical encyclopedia, Avicenna expanded upon the work of Galen, the ancient Greek physician. In her 2008 article “Islamic Pharmacology in the Middle Ages: Theories and Substances,” Danielle Jacquart explains that Avicenna endorsed the concept of using drugs based on past results of experiments:

As for the powers only known through experiment, these were not deduced from the qualities or the appearance of pharmaceutical ingredients, but they rather acted through their whole form or substance. Their action could only be revealed by an experimental test. Yet this did not mean that ordinary physicians themselves had to undertake such experiments. Rather, they relied upon experiments carried out by their predecessors.

Similarly, when today’s physicians choose, say, an antibiotic for a bacterial infection, they rely upon experiments carried out by their predecessors.

When I started medical school, I assumed that everything in medicine was evidence-based: that scientists had rigorously studied and validated every treatment. After all, we should not treat a patient with a drug unless we know it works. But it turns out that there is not always evidence to support every decision physicians make. Perhaps a study has simply not been done, or the evidence collected was equivocal or inconclusive. Or perhaps some real-life situation has arisen that is complicated in ways that could not possibly have been tested in an experiment. In these cases, physicians must base their decisions on experience.

Let’s take the example of IV fluids, which are a basic staple of medical care, as I’ve mentioned in multiple posts. One would think that the data would be fairly clear on which types of IV fluids are best. Unfortunately, the answer is not at all clear. Some background: there are two major types of IV fluids, crystalloids and colloids. Crystalloids contain water and electrolytes similar to those circulating in the blood; examples include Lactated Ringer’s and Normal Saline. Colloid fluids contain water and electrolytes, too, but they also contain osmotic substances like albumin, which draw fluid into the vascular space. Fluid in the body can sit either inside or outside the blood vessels, and colloids help keep fluid inside the vessels.

Ostensibly, colloid fluids ought to work better in certain situations. For instance, when a patient has very low blood pressure, the way to increase blood pressure is to increase fluid within the vasculature. However, two studies, one in the New England Journal of Medicine in 2004 and one in the Annals of Internal Medicine in 2001, concluded that there were no significant differences in mortality in various medical situations when using one type of fluid versus the other. So, barring significant differences in cost, which fluid does one use in the hospital when patients need hydration or increased blood pressure?


Given that the evidence is unclear, we use what our mentors use. During surgery rounds, for example, I asked, “Why are we using Lactated Ringer’s (LR)?” A resident replied that the evidence was inconclusive and that the attending used LR, so he used LR. Until we have better evidence, this seems completely legitimate, even if it makes us uneasy because there’s no clear consensus. Furthermore, it demonstrates that though certain ideas may make sense in theory, they can fail when put to the test of scientific rigor. Thus, evidence-based medicine also requires open-mindedness.

Let’s also look at an example of how evidence-based medicine changes medical practice rapidly, on a day-to-day basis. This past summer, the treatment of Parkinson’s disease (PD), a disease of certain neurons in the brain, underwent a change. Previously, movement-disorder neurologists recommended dopamine agonists as a first-line treatment for the disease. The alternative is carbidopa-levodopa, a medication that is more effective at controlling PD symptoms. However, the longer one takes carbidopa-levodopa, the more side effects it causes, such as dyskinesias, which are compulsive and uncontrollable movements (some of them irreversible). And, given that patients with PD can live a long time, neurologists wanted to put off using it so that patients would not experience these effects so soon after starting medication.

But this past June, a study in The Lancet compared starting a dopamine agonist with starting carbidopa-levodopa in patients with newly diagnosed, early PD. The researchers found that there was no significant difference in patient-rated mobility scores (a fancy way of saying movement difficulties as well as quality of life) when starting with levodopa rather than a dopamine agonist. I observed the practice changes that resulted directly from this study. In the neurology clinic, the attending, after reading the article, changed the way he spoke to patients with newly diagnosed PD. Instead of saying that it is better to avoid carbidopa-levodopa at first, he told patients that the choice of which drug to start was theirs. This is a wonderful example of why evidence-based medicine and research are so important and of how they can affect the practice of medicine: very concretely, very directly, and very soon after the research is published.
