Albert Einstein and Max Wertheimer were close friends. Both found themselves in exile in the United States after fleeing the Nazis in the early 1930s, Einstein at Princeton and Wertheimer in New York.
They communicated by exchanging letters in which Wertheimer would entertain Einstein with thought problems.
In 1934 Wertheimer sent the following problem in a letter.
An old clattery auto is to drive a stretch of 2 miles, up and down a hill, /\. Because it is so old, it cannot drive the first mile—the ascent—faster than with an average speed of 15 miles per hour. Question: How fast does it have to drive the second mile—on going down, it can, of course, go faster—in order to obtain an average speed (for the whole distance) of 30 miles an hour?
Wertheimer’s thought problem suggests the answer might be 45 or even 60 miles an hour. But that is not the case. Even if the car broke the sound barrier on the way down, it would not achieve an average speed of 30 miles an hour. Don’t worry if you were fooled; Einstein was at first too, replying, “Not until calculating did I notice that there is no time left for the way down!”
The Gestalt psychologists’ way of solving problems is to reformulate the question until the answer becomes clear. Here’s how it works. How long does it take the old car to reach the top of the hill? The road up is one mile long. The car travels fifteen miles per hour, so it takes four minutes (one hour divided by fifteen) to reach the top. How long does it take the car to drive up and down the hill at an average speed of thirty miles per hour? The road up and down is two miles long. Thirty miles per hour translates into two miles per four minutes. Thus, the car needs four minutes to drive the entire distance. But those four minutes were already used up by the time the car reached the top.
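The reformulated arithmetic can be checked in a few lines. This is just a sketch of the time-budget reasoning above; the numbers come straight from the problem:

```python
# Wertheimer's hill problem, reformulated as a time budget.
uphill_miles = 1
uphill_speed_mph = 15
total_miles = 2
target_avg_mph = 30

# Time already spent on the climb: 1 mile at 15 mph = 4 minutes.
time_up_min = 60 * uphill_miles / uphill_speed_mph

# Total time allowed for a 30 mph average over 2 miles: also 4 minutes.
time_budget_min = 60 * total_miles / target_avg_mph

# Whatever remains is the time available for the descent.
time_left_min = time_budget_min - time_up_min

print(time_up_min, time_budget_min, time_left_min)  # 4.0 4.0 0.0
```

With zero minutes left for the downhill mile, no finite descent speed can rescue the 30 mph average, which is exactly what Einstein noticed.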
“In an uncertain world, statistical thinking and risk communication alone are not sufficient. Good rules of thumb are essential for good decisions.”
Three minutes after taking off from LaGuardia Airport in New York City, US Airways Flight 1549 ran into a flock of Canada geese. At 2,800 feet, passengers and crew heard loud bangs as the geese collided with the engines, rendering both inoperable.
When it dawned on the passengers that they were gliding toward the ground, it grew quiet on the plane. No panic, only silent prayer. Captain Chesley Sullenberger called air traffic control: “Hit birds. We’ve lost thrust in both engines. We’re turning back towards LaGuardia.”
But landing short of the airport would have catastrophic consequences, for passengers, crew, and the people living below. The captain and the copilot had to make a good judgment. Could the plane actually make it to LaGuardia, or would they have to try something riskier, such as a water landing in the Hudson River? One might expect the pilots to have measured speed, wind, altitude, and distance and fed this information into a calculator. Instead, they simply used a rule of thumb:
Fix your gaze on the tower: If the tower rises in your windshield, you won’t make it.
No estimation of the trajectory of the gliding plane is necessary. No time is wasted. And the rule is immune to calculation errors. In the words of copilot Jeffrey Skiles: “It’s not so much a mathematical calculation as visual, in that when you are flying in an airplane, things that— a point that you can’t reach will actually rise in your windshield. A point that you are going to overfly will descend in your windshield.” This time the point they were trying to reach did not descend but rose. They went for the Hudson.
In the cabin, the passengers were not aware of what was going on in the cockpit. All they heard was: “This is the captain: Brace for impact.” Flight attendants shouted: “Heads down! Stay down!” Passengers and crew later recalled that they were trying to grasp what death would be like, and the anguish of their kids, husbands, and wives. Then the impact happened, and the plane stopped. When passengers opened the emergency doors, sunlight streamed in. Everyone got up and rushed toward the openings. Only one passenger headed to the overhead bin to get her carry-on but was immediately stopped. The wings of the floating but slowly sinking plane were packed with people in life jackets hoping to be rescued. Then they saw the ferry coming. Everyone survived.
All this happened within the three minutes between the geese hitting the plane and the ditching in the river. During that time, the pilots began to run through the dual-engine failure checklist, a three-page list designed to be used at thirty thousand feet, not at three thousand feet: turn the ignition on, reset the flight control computer, and so on. But they could not finish it. Nor did they have time to even start on the ditching checklist. While the evacuation was underway, Skiles remained in the cockpit and went through the evacuation checklist to safeguard against potential fire hazards and other dangers. Sullenberger went back to check on passengers and left the cabin only after making sure that no one was left behind. It was the combination of teamwork, checklists, and smart rules of thumb that made the miracle possible.
Say what? They used a heuristic?
Heuristics enable us to make fast, highly (but not perfectly) accurate decisions without spending too much time searching for information. Heuristics allow us to focus on only a few pieces of information and ignore the rest.
“Experts,” Gigerenzer writes, “often search for less information than novices do.”
We do the same thing, intuitively, to catch a baseball — the gaze heuristic.
Fix your gaze on an object, and adjust your speed so that the angle of gaze remains constant.
Professionals and amateurs alike rely on this rule.
… If a fly ball comes in high, the player fixates his eyes on the ball, starts running, and adjusts his running speed so that the angle of gaze remains constant. The player does not need to calculate the trajectory of the ball. To select the right parabola, the player’s brain would have to estimate the ball’s initial distance, velocity, and angle, which is not a simple feat. And to make things more complicated, real-life balls do not fly in parabolas. Wind, air resistance, and spin affect their paths. Even the most sophisticated robots or computers today cannot correctly estimate a landing point during the few seconds a ball soars through the air. The gaze heuristic solves this problem by guiding the player toward the landing point, not by calculating it mathematically. That’s why players don’t know exactly where the ball will land, and often run into walls and over the stands in their pursuit.
The gaze heuristic is an example of how the mind can discover simple solutions to very complex problems.
The broader point of Gigerenzer’s book is that while rational thinking works well for risks, you need a combination of rational and heuristic thinking to make decisions under uncertainty.
If we knew everything about the future with certainty, our lives would be drained of emotion. No surprise and pleasure, no joy or thrill— we knew it all along. The first kiss, the first proposal, the birth of a healthy child would be about as exciting as last year’s weather report. If our world ever turned certain, life would be mind-numbingly dull.
The Illusion of Certainty
We demand certainty of others. We ask our bankers, doctors, and political leaders (among others) to give it to us. What they deliver, however, is the illusion of certainty. We feel comfortable with this.
Many of us smile at old-fashioned fortune-tellers. But when the soothsayers work with computer algorithms rather than tarot cards, we take their predictions seriously and are prepared to pay for them. The most astounding part is our collective amnesia: Most of us are still anxious to see stock market predictions even if they have been consistently wrong year after year.
Technology changes how we see things – it amplifies the illusion of certainty.
When an astrologer calculates an expert horoscope for you and foretells that you will develop a serious illness and might even die at age forty-nine, will you tremble when the date approaches? Some 4 percent of Germans would; they believe that an expert horoscope is absolutely certain.
But when technology is involved, the illusion of certainty is amplified. Forty-four percent of people surveyed think that the result of a screening mammogram is certain. In fact, mammograms fail to detect about ten percent of cancers, and the younger the women being tested, the more error-prone the results, because their breasts are denser.
“Not understanding a new technology is one thing,” Gigerenzer writes, “believing that it delivers certainty is another.”
It’s best to remember Ben Franklin: “In this world, nothing can be said to be certain, except death and taxes.”
The Security Blanket
Where does our need for certainty come from?
People with a high need for certainty are more prone to stereotypes than others and are less inclined to remember information that contradicts their stereotypes. They find ambiguity confusing and have a desire to plan out their lives rationally. First get a degree, then a car, then a career, find the perfect partner, buy a home, and have beautiful babies. But then the economy breaks down, the job is lost, the partner has an affair with someone else, and one finds oneself packing boxes to move to a cheaper place. In an uncertain world, we cannot plan everything ahead. Here, we can only cross each bridge when we come to it, not beforehand. The very desire to plan and organize everything may be part of the problem, not the solution. There is a Yiddish joke: “Do you know how to make God laugh? Tell him your plans.”
To be sure, illusions have their function. Small children often need security blankets to soothe their fears. Yet for the mature adult, a high need for certainty can be a dangerous thing. It prevents us from learning to face the uncertainty pervading our lives. As hard as we try, we cannot make our lives risk-free the way we make our milk fat-free.
At the same time, a psychological need is not entirely to blame for the illusion of certainty. Manufacturers of certainty play a crucial role in cultivating the illusion. They delude us into thinking that our future is predictable, as long as the right technology is at hand.
Risk and Uncertainty
Two magnificently dressed young women sit upright on their chairs, calmly facing each other. Yet neither takes notice of the other. Fortuna, the fickle, wheel-toting goddess of chance, sits blindfolded on the left while human figures desperately climb, cling to, or tumble off the wheel in her hand. Sapientia, the calculating and vain deity of science, gazes into a hand-mirror, lost in admiration of herself. These two allegorical figures depict a long-standing polarity: Fortuna brings good or bad luck, depending on her mood, but science promises certainty.
This sixteenth-century woodcut was carved a century before one of the greatest revolutions in human thinking, the “probabilistic revolution,” colloquially known as the taming of chance. Its domestication began in the mid-seventeenth century. Since then, Fortuna’s opposition to Sapientia has evolved into an intimate relationship, not without attempts to snatch each other’s possessions. Science sought to liberate people from Fortuna’s wheel, to banish belief in fate, and replace chances with causes. Fortuna struck back by undermining science itself with chance and creating the vast empire of probability and statistics. After their struggles, neither remained the same: Fortuna was tamed, and science lost its certainty.
The twilight of uncertainty comes in different shades and degrees. Beginning in the seventeenth century, the probabilistic revolution gave humankind the skills of statistical thinking to triumph over Fortuna, but these skills were designed for the palest shade of uncertainty, a world of known risk, in short, risk. I use this term for a world where all alternatives, consequences, and probabilities are known. Lotteries and games of chance are examples. Most of the time, however, we live in a changing world where some of these are unknown: where we face unknown risks, or uncertainty. The world of uncertainty is huge compared to that of risk. … In an uncertain world, it is impossible to determine the optimal course of action by calculating the exact risks. We have to deal with “unknown unknowns.” Surprises happen. Even when calculation does not provide a clear answer, however, we have to make decisions. Thankfully we can do much better than frantically clinging to and tumbling off Fortuna’s wheel. Fortuna and Sapientia had a second brainchild alongside mathematical probability, which is often passed over: rules of thumb, known in scientific language as heuristics.
How decisions change based on risk/uncertainty
When making decisions, two sets of mental tools are required:
1. RISK: If risks are known, good decisions require logic and statistical thinking.
2. UNCERTAINTY: If some risks are unknown, good decisions also require intuition and smart rules of thumb.
Most of the time we need a combination of the two.