Early in medical school, I was involved in the care of Ted, who could have been my grandfather. At 76 he was as spry as any of the patients on the ward and always welcomed me with a “morning, Doc!” He was admitted because he was having concerning chest pain several times a week. Opening and closing 2.8 billion times over the course of his life, his heart valves had gradually become hard and inflexible, preventing blood from leaving the heart at its usual rate. Now, it was risking his life. He had several treatment options available to him: valve replacement through open-heart surgery, a new minimally invasive procedure in which a new valve is snaked through the body’s blood vessels and into the heart, or simply taking medications to help with his symptoms. It was my job to help Ted figure out which option was best for him.

Medicine is a highly cognitive discipline, demanding deliberate analysis and careful attention. Moreover, the body of evidence-based practices and medical knowledge continues to grow. Indeed, it has far outstripped physicians’ ability to stay up-to-date on the latest research. As a result, researchers and businesses have been working since the 1960s to codify medical practice and knowledge so as to offer cognitive support to health care providers trying to advise patients like Ted. These software-based tools usually place textbook knowledge at a doctor’s fingertips. Many of them, such as INTERNIST and DXPlain, became highly complex diagnostic tools.

However, despite the tens of thousands of person-hours that went into developing them, they failed to see widespread adoption. This seems strange, maybe even a tragedy, when one considers that as many as 15% of diagnoses made in the US are wrong. That number approaches 50% when considering physicians’ management decisions. So why haven’t these tools been more widely adopted?

In general, getting physicians to use decision support tools faces significant barriers. For one, there is a perception that a highly optimized clinical workflow is very sensitive to disruption and change. This, however, is not the main obstacle. Physicians are amenable to tools that genuinely save time, but many decision support tools require a substantial investment of it.

Take, for example, the STS calculator assessing Ted’s risk during cardiac surgery. Insofar as it can predict morbidity and mortality, it is a very useful tool. As you can see, though, there are a number of variables to contend with: over 40 once you start answering questions and working through the decision-tree logic. Unfortunately, in this example it is hard for a physician to really weigh the different probabilities of death and injury that it presents. Would you prefer a 5% risk of dying from a procedure with a 90% chance of improving symptoms, or a 2% risk of dying with a 70% chance of improving symptoms? “How much risk of death am I willing to tolerate for a shot at a cure?”

To answer that question you would need to know about your life expectancy, your quality of life with and without your symptoms, and critically, your own preferences about the type of life you want to lead. Everyone wants to live a healthy and happy life for as long as possible, but when you have to make trade-offs, the decision becomes a deeply personal one.
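One common way to make that trade-off concrete is an expected quality-adjusted life years (QALY) calculation. The sketch below frames the two hypothetical options from the question above this way. Every number in it (the 10-year life expectancy and the quality-of-life weights) is an illustrative assumption, not clinical data:

```python
# Illustrative sketch: framing a treatment choice as expected QALYs.
# All inputs are assumed for the example, not drawn from clinical data.

def expected_qalys(p_death, p_relief, years_left,
                   q_with_symptoms, q_without_symptoms):
    """Expected quality-adjusted life years for one treatment option.

    p_death            -- chance of dying from the procedure
    p_relief           -- chance the procedure relieves symptoms
    years_left         -- remaining life expectancy if the patient survives
    q_with_symptoms    -- quality-of-life weight (0-1) if symptoms persist
    q_without_symptoms -- quality-of-life weight (0-1) if symptoms resolve
    """
    survive = 1.0 - p_death
    # Average quality of life across the relieved / not-relieved outcomes.
    quality = (p_relief * q_without_symptoms
               + (1.0 - p_relief) * q_with_symptoms)
    return survive * years_left * quality

# The two options from the text, assuming 10 years of life expectancy and
# quality weights of 0.9 (symptom-free) vs 0.6 (living with symptoms):
option_a = expected_qalys(0.05, 0.90, 10, 0.6, 0.9)  # riskier, more effective
option_b = expected_qalys(0.02, 0.70, 10, 0.6, 0.9)  # safer, less effective

print(round(option_a, 2), round(option_b, 2))
```

Under these particular weights the riskier procedure comes out ahead, but nudging the quality-of-life weight for living with symptoms upward (say, to 0.85, for a patient whose symptoms bother them little) flips the preference toward the safer option. That sensitivity to personal weights is exactly why no calculator can answer the question on its own.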

Of course, there are cases when the trade-offs seem very small. Prescribing a well-studied medication with few risks can dramatically increase a person’s years of healthy life. In these cases, though, no calculator is needed. A physician’s intuition and expertise alone are usually enough to guide a patient in their care.

But when it comes to the decisions at the margins, when even a physician is ambivalent about the right course of action, there can be no other way than to have a frank conversation about a patient’s values. In other words, borderline risk numbers will not appreciably change the final decision. And a decision support tool that cannot change someone’s decision is useless. So, it’s really up to the patient. In these cases, it’s easier and more valid to ask a patient about their values and avoid exposing them to risk calculators derived from special study populations with extensive caveats.

Of course, this may not be true of all physician decision support, just a lot of it. Yet the story goes a little differently for patient decision support. While there is generally good agreement between a physician’s intuition and actuarial risk, there is a huge gap between patients’ intuition and that risk. That means patients stand to gain much more from the tools that physicians have for the most part rejected. Patients can derive benefit from the risks and recommendations quoted by these tools not just when the difference among choices is marginal, but potentially every time.

It’s encouraging to witness a burgeoning of apps designed to effectively communicate to patients the risks of medications and procedures. There are many new usability and vocabulary challenges to address in these efforts, however.

At Symcat, we’re trying to combine our medical expertise with sophisticated user-interface design to improve patient decision support, but there are others trying to do this as well. What attempts to communicate medical information to patients have impressed you the most?

-Craig