When we want to improve a situation, our first instinct is often to change something. Try something new. Make adjustments. Anything other than what we're already doing.
But this instinct is frequently misguided. Sometimes doing nothing at all, or removing inputs, is the better course.
Nassim Taleb, author of The Black Swan, Fooled By Randomness, and The Bed of Procrustes, also wrote a book on fragility called Antifragile.
One of the core ideas of Antifragile is our tendency to intervene in situations regardless of whether the intervention is a net positive. Harm caused by the would-be healer has a name: iatrogenics. Why do we do this? I can think of three reasons: 1) we struggle to think in systems, 2) we believe that systems would benefit from our intervention and guidance, and 3) we believe that we know how to fix things to make them better.
The first reason is that we are unable to think in terms of second, third, and nth order consequences. More importantly, we are unaware that we need to. The second reason is that when we see a system operating, we spot a flaw in it that we believe we could correct. Usually, the flaw is minor. This combines with the third reason: we think we are the very person who should fix the system.
Here’s Taleb expanding on this idea:
There is a mental defect psychologists call the illusion of control that leads to a default to action rather than inaction, even when the benefits of inaction might be greater than those of action. So the "intervention bias" ("doing something" seems better than doing nothing, which is fine except that there are cases in which it gets us in trouble). The illusion of control was meant to show how "irrational" (according to some norm of behavior) we humans can be by giving ourselves the illusion of managing the uncontrollable around us: for instance, gamblers cannot resist the pressure to do something in order to improve the outcome, such as throwing the die with violence when they need a high number, or throwing it softly in order to get a low one. Traders cannot resist wearing the same "lucky" shirt (often unwashed) to improve their day and feel they need to find similar ways to take control of their destiny. This mental bias leads to all manner of patently "irrational" actions such as belief in the paranormal, alternative medicine, and many such actions often put under the umbrella of magical thinking. Now the irony is that while this bias was devised to expose patently nonscientific fields, it largely affects many things you learn in college, particularly in social science. Many matters we deem scientific are just the fruit of that very illusion of control masquerading as science with, of course, actions to "improve" mankind.
Why is the scientific illusion of control worse than the pedestrian version? Because, tout simplement, these gamblers' superstitions are benign, not much worse than doing nothing; they may even be beneficial in hidden ways, and in the right environment. But a doctor tinkering with your system, or an army playing with a complex system with opaque causal links, say by invading Iraq, giving chemicals to kids and threatening their brain balance, or intervening in the environment, is far worse than nothing.
This variant of the illusion of control leads to the denigration of acts of omission (not doing something, letting things run their own course, leaving nature or the human body alone) as compared to doing something (such as operating on a patient or prescribing medication). This, we will see, is the reason medicine used, until recent history, to kill more patients than it saved (and did not even get close to realizing it), and economists of the sophisticated equation-carrying variety, I will hope to convince you, have been particularly harmful to the economic health of societies: central bankers and finance ministers, by tinkering with economic life, have caused massive instability.
Taleb posted an interesting table regarding intervention bias.