Iatrogenics: Why Intervention Often Leads to Worse Outcomes

Iatrogenics occurs when a treatment causes more harm than benefit. Since iatros means healer in Greek, the word means “caused by the healer” or “brought by the healer.” Healer, in this sense, need not mean doctor; it can be anyone intervening to solve a problem: a thought leader, a CEO, a government, or a coalition of the willing. Nassim Taleb calls these people interventionistas. Often they come armed with solutions to the first-order consequences of a problem but create worse second- and subsequent-order consequences. Luckily, for them at least, they’re never around to see the train wreck they created.

Today we use the term iatrogenics to refer to any harm from intervention in excess of the gain. Some examples are more easily recognized than others. When the negative effects are immediate, visible, and appear to follow directly from the intervention, we can reasonably conclude that the intervention caused them. However, when the negative effects are delayed or could be explained by multiple causes, we are less likely to conclude the intervention caused them.

A great example of iatrogenics in action is the death of George Washington. In 1799, as he lay dying from a bacterial infection, his well-intentioned doctors hastened his death with the standard treatment of the time, bloodletting (at least five pints, according to Ron Chernow).

More controversial examples exist as well, such as military interventions in the Middle East. In these cases, the linkages between cause and effect are clouded by narratives and moral arguments. (A great book to read on this is Perilous Interventions.) And when those linkages are murky, the very people who caused the harm are often the ones rewarded for improving the situation.

The key lesson here is that if we are to intervene, we need a solid idea not only of the benefits of our interventions but also of the harm we may cause, the second- and subsequent-order consequences. Otherwise, how will we know when, despite our best intentions, we cause more harm than good?

Intervening when we have no idea of the break-even point is “naive interventionism,” a phrase first brought to my attention by Nassim Taleb. In Antifragile, Taleb writes:

In the case of tonsillectomies, the harm to the children undergoing unnecessary treatment is coupled with the trumpeted gain for some others. The name for such net loss, the (usually hidden or delayed) damage from treatment in excess of the benefits, is iatrogenics.


Why do people intervene even when the evidence shows that intervening causes more harm than good?

I can think of a few reasons why otherwise well-intentioned people continue to intervene where the consequences outweigh the benefits.

Some of the flaws include 1) an inability to think through problems, 2) separation from consequences, 3) a bias for action, and 4) no skin in the game. Let’s flesh these out a little.

The first flaw is an inability to think through second- and subsequent-order consequences. Interventionistas fail to realize that such consequences exist at all or could outweigh the benefits. Most things in life play out at the second, third, or nth step.

The second flaw is distance from the consequences. When there is a time delay between an action and its consequences (feedback), it can be hard to know that you’re causing harm. This allows, even encourages, some self-delusion. Given that we are prone to confirming our beliefs—and presumably we took action because we believed it to be helpful—we’re unlikely to see evidence that contradicts them.

The third flaw is a bias for action, also known as, to paraphrase Charlie Munger, “do-something syndrome.” If you’re a policy advisor or politician, or heck, even a modern office worker, social norms make it hard for you to say, “I don’t know.” You’re expected to have an opinion on everything.

The fourth flaw is one of incentives: interventionistas have little or no skin in the game. They win if things go right and suffer no consequences if things go wrong.


Hippocrates gave medicine its first principle, primum non nocere (“first, do no harm”), which exists precisely to avoid iatrogenic effects. This is a great example of inversion. Outside of medicine, however, the concept is little known.

Think about how a typical meeting starts. In response to a new product from a competitor, for example, the first question people usually ask is “What are we going to do about this?” The hidden assumption that goes unexplored is that you need to do something. Rarely do we even consider that the cost of doing something might outweigh the benefits. 

And the optics of doing nothing are not without consequences. It will appear to your boss that you’re not doing anything. You have an incentive to be seen as doing something even if the costs of taking action are high.

What We Can Learn

Intervention—by people or governments—should only be used when the benefits visibly outweigh the negatives. A great example is saving a life. “Otherwise,” Nassim Taleb writes in Antifragile, “in situations in which the benefits of a particular medicine, procedure, or nutritional or lifestyle modification appear small—say, those aiming for comfort—we have a large potential sucker problem (hence putting us on the wrong side of convexity effects).”

A simple rule for the decision-maker is that intervention needs to prove its benefits, and those benefits need to be orders of magnitude higher than those of the natural (that is, non-interventionist) path. We intuitively know this already: we won’t switch apps or brands for a marginal improvement over the status quo. Only when the benefits are orders of magnitude higher do we switch.
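As a toy illustration only, the rule above can be sketched as a simple threshold check. The function name, the benefit and harm estimates, and the threshold value are all hypothetical assumptions for the sake of the sketch; nothing here comes from Taleb, and in practice the hard part is estimating the hidden, delayed harms at all.

```python
def should_intervene(expected_benefit, expected_harm, threshold=10):
    """Toy decision rule: intervene only when the expected benefit
    exceeds the expected harm of acting by a wide margin.

    threshold=10 is a hypothetical stand-in for 'orders of magnitude'.
    """
    if expected_harm <= 0:
        # No estimated downside: any positive benefit clears the bar.
        return expected_benefit > 0
    return expected_benefit / expected_harm >= threshold

# A marginal improvement does not justify intervening...
print(should_intervene(expected_benefit=1.2, expected_harm=1.0))    # False
# ...but a clearly life-saving intervention does.
print(should_intervene(expected_benefit=100.0, expected_harm=1.0))  # True
```

The asymmetric threshold encodes the essay’s point: the burden of proof sits on the intervention, not on leaving things alone.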

We must also recognize that some systems self-correct; this is the essence of homeostasis. Naive interventionists, Taleb’s interventionistas, often deny that natural homeostatic mechanisms are sufficient, insisting that “something needs to be done.” Yet often the best course of action is nothing at all.

Read Next

Intervention Bias: When to Step in and When To Leave Things Alone

Second-Order Thinking: What Smart People Use to Outperform