The Many Ways Our Memory Fails Us (Part 2)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

In part one, we began a conversation about the pitfalls of human memory, using Daniel Schacter’s excellent The Seven Sins of Memory as our guide. (We’ve also covered some reasons why our memory is pretty darn good.) We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let’s keep going with a couple more whoppers: Blocking and Misattribution.

Blocking

Blocking occurs when something is indeed encoded in our memory and should be easily available in the given situation, but simply will not come to mind. We’re most familiar with blocking as the always frustrating “It’s on the tip of my tongue!”

Unsurprisingly, blocking occurs most frequently with names, and it happens more often as we get older:

Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.

This is not the worst sin our memory commits — excepting the times when we forget an important person’s name (which is admittedly not fun), blocking doesn’t cause the terrible practical results some of the other memory issues cause. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.

Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments in which researchers show that we more easily forget a man’s name than his occupation — even if they’re the same word! (Baker/baker or Potter/potter, for example.)

It’s because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person’s name has very little attached to it. We have no easy associations to make — it doesn’t tell us anything about the person or give us much to hang our hat on. It doesn’t really help us form an image or impression. And so we basically remember it by rote, which doesn’t always work that well.

Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person’s occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker’s name and fail to recall any of his personal attributes.

A person’s name is the weakest piece of information we have about them in our people-information lexicon, and thus the least available at any time, and the most susceptible to not being available as needed. It gets worse if it’s a name we haven’t needed to recall frequently or recently, as we all can probably attest to. (This also applies to the other types of words we block on less frequently — objects, places, etc.)

The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)

But the most important idea here is that information gains salience in our brain based on what it brings to mind. 

As for whether blocking occurs in the sense implied by Freud’s idea of repressed memories, Schacter is non-committal; it seems the issue was not settled at the time of writing.

Misattribution

The memory sin of misattribution has fairly serious consequences. Misattribution happens all the time and is a peculiar memory sin where we do remember something, but that thing is wrong, or possibly not even our own memory at all:

Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it–without awareness–from something we read or heard.

The most familiar, but benign, experience we’ve all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it’s recalling an event that happened previously, even if it hasn’t.

In the case of deja vu, it’s simply a bit of an annoyance. But the misattribution problem causes more serious problems elsewhere.

The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” it’s unlikely they remember as well as they claim. It’s not their fault and it’s not a lie — you do think you recall the details of a situation perfectly well. But your brain is tricking you, just like deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.

…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.

What happens is that, in any situation where our memory stores away information, it doesn’t have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.

We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.

But we don’t get it all perfectly, and if it was an unmemorable meeting, with the transience of time, we start to lose the details. Linking the general aspects of an event with its specific details is a process called memory binding, and it’s often the source of misattribution errors.

Let’s say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it’s part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday’s memory or today’s?

This is a memory binding error. Our brain didn’t sufficiently “link up” the curling event and the turning off of the curler, so we’re left to wonder. This binding issue leads to other errors, like the memory conjunction error, where sometimes the binding process does occur, but it makes a mistake. We misattribute the strong familiarity:

Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.

What’s happening is a misattribution. We know we saw the syllables Span- and -nish, and our memory tells us we must have heard Spanish. But we didn’t.

Back to the eyewitness testimony problem, what’s happening is we’re combining a general familiarity with a lack of specific recall, and our brain is recombining those into a misattribution. We recall a tall-ish man with some sort of facial hair, and then we’re shown 6 men in a lineup, and one is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we’ve identified as the perp.

None of this occurs with much conscious involvement, of course. It’s happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each suspect, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.

The good thing about this error is that people can be encouraged to search their memory more carefully. But it’s far from foolproof, even if we’re getting a very strong indication that we remember something.

And what helps prevent us from making too many errors is something Schacter calls the distinctiveness heuristic. If a distinctive thing supposedly happened, we usually reason we’d have a good memory of it. And usually this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I’m a lot more likely to recall some details about them, simply because they stuck out.

***

As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even misattributed made-up famous-sounding names (like Sharon Sugar) to famous people, although he couldn’t put his finger on who they were.

But the man did not falsely recognize other things. Made-up cities or made-up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris’s ideas about human personality that we’ve discussed: The “people information lexicon” we develop throughout our lives is a uniquely important module we use.

***

One final misattribution is something called cryptomnesia — essentially the opposite of deja vu. It’s when we think we recognize something as new and novel even though we’ve seen it before. Accidental plagiarizing can even result from cryptomnesia. (Try telling that to your school teachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of information we’re recalling — the information and event where we first remembered it are not bound together properly. Let’s say we “invent” the melody to a song which already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it’s new.

In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:

We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray.  When misattribution combines with another of memory’s sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.

And with that, we will leave it here for now. Next time we’ll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.

Mental Models: Getting the World to Do the Work for You

People are working harder and harder to clean up otherwise avoidable messes they created by making poor initial decisions. There are many reasons we’re making poor decisions and failing to learn from them.

Under the heading “Sources of Stupidity” in an article entitled Smart Decisions, we wrote about some of the factors that contribute to suboptimal decisions.

1. We’re (sometimes) stupid. Of course, I like to think that I’m rational and capable of interpreting all of the information in a non-biased way. Only I’m not. At least not always. There are situations that increase the odds of irrationality, for instance when we’re tired, overly focused on a goal, rushing, distracted or interrupted, operating in a group, or are under the direction of an expert.

2. We have the wrong information. In this case, we’re operating with the wrong facts or our assumptions are incorrect.

3. We use the wrong model. We use models to make decisions. The quality of those models often determines the quality of our thinking. There are a variety of reasons that we use false, incomplete, or incorrect models. For instance, we’re prone to using less useful models when we are a novice or we operate in a domain outside of our area of expertise. The odds of the wrong model also increase as the pace of environmental change increases.

4. We fail to learn. We all know the person that has 20 years of experience but it’s really the same year over and over. Well, that person is sometimes us. If we don’t understand how we learn, we’re likely to make the same mistakes over and over.

5. Doing what’s easy over what’s right. You can think of this as looking good over doing good.  In this case, we make choices based on optics, explainability, politics, etc. Mostly this comes from not having a strong sense of self and seeking external validation or avoiding punishment.

Simple But Not Easy

One of the metamodels that traverses all five of these sources of stupidity is understanding the world.

If you understand the world as it really is, not as you’d wish it to be, you will begin to make better decisions almost immediately. Once you start making better decisions, the results compound. Better initial decisions free up your time and reduce your stress, allowing you to spend more time with your family and leave your competition in the dust.

Understanding The World

How can we best understand the world as it is?

Acquiring knowledge can be a very daunting task. If you think of the mind as a toolbox, we’re only as good as the tools at our disposal. A carpenter doesn’t show up to work with an empty toolbox. Not only do they want as many tools in their toolbox as possible, but they want to know how to use them. Having more tools and the knowledge of how to use them means they can tackle more problems.  Try as we might, we cannot build a house with only a hammer.

If you’re a knowledge worker, you’re a carpenter. But your tools aren’t bought at a store and they don’t come in a red box that you carry around. Mental tools are the big ideas from multiple disciplines, and we store them in our mind. And if we have a lot of tools and the knowledge required to wield them properly, we can start to synthesize how the world works and make better decisions when confronted with problems.

This is how we understand and deal with reality. The tools you put into your toolbox are Mental Models.

Mental models are a framework for understanding how the world really works. They help you grasp new ideas quickly, identify patterns before anyone else and shift your perspective with ease. Mental Models allow us to make better decisions, scramble out of bad situations, and think critically. If you want to understand reality you must look at a problem in multiple dimensions — how could it be otherwise?

Getting to this level of understanding requires having a lot of tools and knowing how to use them. You knew there was a hitch, right?

We need to change our fast-food diet of information consumption and adopt the healthier diet of knowledge that changes slowly over time. While changing diets isn’t easy, it can be incredibly rewarding: more time, less stress, and being better at your job. The costs, however, are short term pain for long-term gain. You must change how you think.

Second-Order Thinking

One example of a model we can immediately conceptualize and use to improve our decisions is second-order thinking, borrowed from ecology. The simple way to conceptualize it is to ask yourself “If I do X, what will happen after that?” I sum this up using the ecologist Garrett Hardin’s simple question: “And then what?”

A lot of people forget about higher-order effects — second, third, and beyond. I’ve been in a lot of meetings where decisions are made and very few people think to the second level, let alone the third. Rather, what typically happens is called first-conclusion bias. The brain shuts down and stops thinking at the first idea that comes to mind that seems to address the problem as you understand it.

We don’t often realize that our first thoughts are usually not even our own thoughts. They usually belong to someone else. We understand the sound bite, but we haven’t done the hard work of real thinking. After we reach the first conclusion, our minds often shut down. We don’t seek evidence that would contradict our conclusion. We don’t ask ourselves what the likely result of this solution would be — we don’t ask ourselves “And then what?” We don’t ask what other solutions might be even more optimal.

For example, consider a hypothetical organization that decides to change their incentive systems. They come up with a costly new system that requires substantial changes to the current system. Only they don’t consider (or even understand) the problems that the new system is likely to create. It’s possible they’ve created more problems than they’ve solved – only now there are different problems they must put their head down to solve. Optically, they “reorganize their incentive programs,” but practically, they’ve simply expended energy to stay in place.

Another, perhaps more complicated, example is when a salesman comes into your company and offers you a software program he claims will lower your operating costs and increase your profits. He’s got all these beautiful charts on how much more competitive you’ll be and how it will improve everything. This is exactly what you need because your compensation is based on increasing profits. You’re sold.

Then second-order thinking kicks in and you dare to ask how much of those cost savings will go to you and how much will eventually end up as benefits enjoyed by customers. To a large extent, that depends on the business you’re in. However, you can be damn sure the salesman is now knocking on your competitor’s door and telling them you just bought their product and if they want to remain competitive they better purchase it too. Eventually, you all have the new software and no one is truly better off. Thus, in the manner of a crowd of people standing on their tip-toes at a parade, all competitors spend the money but none of them win: The salesman wins and the customer wins.

We know, thanks to people like Garrett Hardin, Howard Marks, Charlie Munger, Peter Kaufman, and disciplines like ecology, that there are second and third-order effects. This is how the world really works. It just isn’t always a comfortable reality.

Understanding how the world works isn’t easy and it shouldn’t be. It’s hard work. If it were easy, everyone would do it. And it’s not for everyone. Sometimes, if your goal is to maximize utility, you should focus on getting very, very good in a narrow area and becoming an expert, accepting that you will make many mistakes outside of that domain. But for most, it’s extremely helpful to understand the forces at play outside of their narrow area of expertise.

Because when you think about it, how could reality be anything other than a synthesis of multiple factors? How could it possibly be otherwise?

Henry Ford and the Actual Value of Education

“The object of education is not to fill a man’s mind with facts;
it is to teach him how to use his mind in thinking.”
— Henry Ford

***

In his memoir My Life and Work, written in 1922, the brilliant (but flawed) Henry Ford (1863-1947) offers perhaps the best definition you’ll find of the value of an education, and a useful warning against the mere accumulation of information for the sake of its accumulation. A devotee of lifelong learning need not be a Jeopardy contestant, accumulating trivia to spit back as needed. In the Age of Google, that sort of knowledge is increasingly irrelevant.

A real life-long learner seeks to learn and apply the world’s best knowledge to create a more constructive and more useful life for themselves and those around them. And to do that, you have to learn how to think on your feet. The world does not offer up no-brainers every day; more frequently, we’re presented with a lot of grey options. Unless your studies are improving your ability to handle reality as it is and get a fair result, you’re probably wasting your time.

From Ford’s memoir:

An educated man is not one whose memory is trained to carry a few dates in history—he is one who can accomplish things. A man who cannot think is not an educated man however many college degrees he may have acquired. Thinking is the hardest work anyone can do—which is probably the reason why we have so few thinkers. There are two extremes to be avoided: one is the attitude of contempt toward education, the other is the tragic snobbery of assuming that marching through an educational system is a sure cure for ignorance and mediocrity. You cannot learn in any school what the world is going to do next year, but you can learn some of the things which the world has tried to do in former years, and where it failed and where it succeeded. If education consisted in warning the young student away from some of the false theories on which men have tried to build, so that he may be saved the loss of the time in finding out by bitter experience, its good would be unquestioned.

An education which consists of signposts indicating the failure and the fallacies of the past doubtless would be very useful. It is not education just to possess the theories of a lot of professors. Speculation is very interesting, and sometimes profitable, but it is not education. To be learned in science today is merely to be aware of a hundred theories that have not been proved. And not to know what those theories are is to be “uneducated,” “ignorant,” and so forth. If knowledge of guesses is learning, then one may become learned by the simple expedient of making his own guesses. And by the same token he can dub the rest of the world “ignorant” because it does not know what his guesses are.

But the best that education can do for a man is to put him in possession of his powers, give him control of the tools with which destiny has endowed him, and teach him how to think. The college renders its best service as an intellectual gymnasium, in which mental muscle is developed and the student strengthened to do what he can. To say, however, that mental gymnastics can be had only in college is not true, as every educator knows. A man’s real education begins after he has left school. True education is gained through the discipline of life.

[…]

Men satisfy their minds more by finding out things for themselves than by heaping together the things which somebody else has found out. You can go out and gather knowledge all your life, and with all your gathering you will not catch up even with your own times. You may fill your head with all the “facts” of all the ages, and your head may be just an overloaded fact-box when you get through. The point is this: Great piles of knowledge in the head are not the same as mental activity. A man may be very learned and very useless. And then again, a man may be unlearned and very useful.

The object of education is not to fill a man’s mind with facts; it is to teach him how to use his mind in thinking. And it often happens that a man can think better if he is not hampered by the knowledge of the past.

Ford is probably wrong in his very last statement (study of the past is crucial to understanding the human condition), but the sentiment offered in the rest of the piece should be read and re-read frequently.

This brings to mind a debate you’ll hear that almost all debaters get wrong: What’s more valuable, to be educated in the school of life, or in the school of books? Which is it?

It’s both!

This is what we call a false dichotomy. There is absolutely no reason to choose between the two. We’re all familiar with the algebra. If A and B have positive value, then A+B must be greater than A or B alone! You must learn from your life as it goes along, but since we have the option to augment that by studying the lives of others, why would we not take advantage? All it takes is the will and the attitude to study the successes and failures of history, add them to your own experience, and get an algebra-style A+B result.

So, resolve to use your studies to learn to think, to learn to handle the world better, to be more useful to those around you. Don’t worry about the facts and figures for their own sake. We don’t need another human encyclopedia.

***

Still Interested? Check out all of Ford’s interesting memoir, or try reading up on what a broad education should contain. 

A Few Useful Mental Tools from Richard Feynman

We’ve covered the brilliant physicist Richard Feynman (1918-1988) many times here before. He was a genius. A true genius. But there have been many geniuses — physics has been fortunate to attract some of them — and few of them are as well known as Feynman. Why is Feynman so well known? It’s likely because he had tremendous range outside of pure science, and although he won a Nobel Prize for his work in quantum mechanics, he’s probably best known for other things, primarily his wonderful ability to explain and teach.

This ability was on display in a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen-Scientist. The lectures are a wonderful example of how well Feynman’s brain worked outside of physics, talking through basic reasoning and some of the problems of his day.

Particularly useful are a series of “tricks of the trade” he gives in a section called This Unscientific Age. These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day. They’re wonderfully instructive. Let’s check them out.

Mental Tools from Richard Feynman

Before we start, it’s worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy; these tricks are for questions science can actually address. Let’s start with a deep breath:

Now, that there are unscientific things is not my grief. That’s a nice word. I mean, that is not what I am worrying about, that there are unscientific things. That something is unscientific is not bad; there is nothing the matter with it. It is just unscientific. And scientific is limited, of course, to those things that we can tell about by trial and error. For example, there is the absurdity of the young these days chanting things about purple people eaters and hound dogs, something that we cannot criticize at all if we belong to the old flat foot floogie and a floy floy or the music goes down and around. Sons of mothers who sang about “come, Josephine, in my flying machine,” which sounds just about as modern as “I’d like to get you on a slow boat to China.” So in life, in gaiety, in emotion, in human pleasures and pursuits, and in literature and so on, there is no need to be scientific, there is no reason to be scientific. One must relax and enjoy life. That is not the criticism. That is not the point.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone truly knows their stuff or is mimicking:

The first one has to do with whether a man knows what he is talking about, whether what he says has some basis or not. And my trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn’t know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away— bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don’t know. I used to be a general, and I don’t know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can’t tell you ahead of time what conclusion, but I can give you some of the principles I’ll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them,” etc., etc., etc.

That’s a wonderfully useful way to figure out whether someone is Max Planck or the chauffeur.

The second trick regards how to deal with uncertainty:

People say to me, “Well, how can you teach your children what is right and wrong if you don’t know?” Because I’m pretty sure of what’s right and wrong. I’m not absolutely sure; some experiences may change my mind. But I know what I would expect to teach them. But, of course, a child won’t learn what you teach him.

I would like to mention a somewhat technical idea, but it’s the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it’s rather complicated, technically, but I’ll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It’s very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn’t impossible that it should turn sort of greenish color.” So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven’t made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.

Feynman is talking about Grey Thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn’t proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he’s proposing is Bayesian updating — starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
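To make the updating arithmetic concrete, here is a minimal sketch in Python. The priors and likelihoods are invented for illustration (Feynman gives no numbers); the mechanics are just Bayes’ rule.

```python
# A minimal sketch of Bayesian updating for Feynman's Theory A vs. Theory B.
# The priors and likelihoods below are illustrative assumptions, not numbers
# from the lecture.

def update(priors, likelihoods):
    """Bayes' rule: posterior is proportional to prior * P(observation | theory)."""
    unnormalized = {t: priors[t] * likelihoods[t] for t in priors}
    total = sum(unnormalized.values())
    return {t: p / total for t, p in unnormalized.items()}

# Before observing anything, we are much more certain of Theory A.
beliefs = {"A": 0.9, "B": 0.1}

# The test turns "sort of greenish": very unlikely under A ("nothing should
# happen"), not impossible under B ("it should turn blue").
greenish = {"A": 0.05, "B": 0.40}

beliefs = update(beliefs, greenish)
print(beliefs)  # Theory A gets weaker, Theory B gets stronger

# Per Feynman, rerunning the *same* test adds nothing; each further update
# should come from a different observation that distinguishes A from B.
```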

Feynman’s third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect of getting stronger and stronger, not weaker. He uses an excellent example here by analyzing mental telepathy:

I give an example. A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it’s a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you’d guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn’t count all the cases that didn’t work. And he just took the few that did, and then you can’t do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It’s a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don’t work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.

This echoes Feynman’s dictum about not fooling oneself: We must refine our process for probing and experimenting if we’re to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that results fit our expectations. If we carefully refine and re-test and the effect gets weaker all the time, it’s likely not true, or at least not of the magnitude originally hoped for.

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That’s not the problem. The problem is what is probable, what is happening. It does no good to demonstrate again and again that you can’t disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it’s reasonable, whether it’s likely. And we do that on the basis of a lot more experience than whether it’s just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it’s impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn’t mean that everything’s false. We’ll find out.

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it’s already happened. That’s cherry-picking. You have to run the experiment forward for it to mean anything:

I now turn to another kind of principle or idea, and that is that there is no sense in calculating the probability or the chance that something happens after it happens. A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. I can’t remember exactly. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”

He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

For example, I had the most remarkable experience this evening. While coming in here, I saw license plate ANZ 912. Calculate for me, please, the odds that of all the license plates in the state of Washington I should happen to see ANZ 912. Well, it’s a ridiculous thing. And, in the same way, what he must do is this: The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.
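Feynman’s rule lends itself to a quick simulation. In this hypothetical sketch, the “rats” choose randomly, so any striking pattern in the first run is noise by construction; the honest move is to state the hypothesis and then test it on fresh data.

```python
# A sketch of why post-hoc odds don't count: hunt for a pattern in one run,
# then test that hypothesis on a *new* run. The "rats" here choose at random,
# so any pattern found in the first run is noise by construction.
import random

def rat_choices(n):
    return [random.choice("RL") for _ in range(n)]

def alternation_rate(choices):
    switches = sum(a != b for a, b in zip(choices, choices[1:]))
    return switches / (len(choices) - 1)

random.seed(42)
first_run = rat_choices(20)
print("rate that suggested the hypothesis:", alternation_rate(first_run))

# Wrong: compute the odds of alternation from first_run, the very data that
# suggested it. Right: fix the hypothesis, then collect fresh data.
second_run = rat_choices(20)
print("rate in the fresh experiment:", alternation_rate(second_run))
```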

The sixth trick is one that’s familiar to almost all of us, yet almost all of us forget about every day: The plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we’re talking about:

The next kind of technique that’s involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won’t go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick people out at random, and you see that maybe forty of them are more than six feet so you guess that maybe everybody is. Sounds stupid.

Well, it is and it isn’t. If you pick the hundred out by seeing which ones come through a low door, you’re going to get it wrong. If you pick the hundred out by looking at your friends you’ll get it wrong because they’re all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then, in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don’t realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.
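As a rough check on that last claim: the margin of error for a sample proportion shrinks only with the square root of the sample size. A quick sketch, using the forty-in-a-hundred figure from his example:

```python
# Back-of-the-envelope check on "10,000 samples for 1 percent": the ~95%
# margin of error for a sample proportion p is about 1.96 * sqrt(p*(1-p)/n).
from math import sqrt

p = 0.40  # forty out of a hundred, as in Feynman's height example
for n in (100, 1_000, 10_000):
    margin = 1.96 * sqrt(p * (1 - p) / n)
    print(f"n = {n:>6}: margin of error ~ {margin:.1%}")

# n = 10,000 lands near one percentage point, matching his claim; cutting
# the margin in half requires four times as many samples.
```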

The last trick is to realize that many errors people make simply come from lack of information. They don’t even know they’re missing the tools they need. This can be a very tough one to guard against — it’s hard to know when you’re missing information that would change your mind — but Feynman gives the simple case of astrology to prove the point:

Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it’s better to go to the dentist than other days. There are days when it’s better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it’s all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it’s a good day for business or a bad day for business has never been established. Now what of it? Maybe it’s still true, yes.

On the other hand, there’s an awful lot of information that indicates that it isn’t true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they’re going to be in the next 2000 years is completely known. They don’t have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don’t agree with each other, so what are you going to do? Disbelieve it. There’s no evidence at all for it. It’s pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn’t believe and made a test, and so on, then there’s no point in listening to them.

***

Still Interested? Check out the (short) book: The Meaning of it All: Thoughts of a Citizen-Scientist.

Warren Berger’s Three-Part Method for More Creativity

“A problem well stated is a problem half-solved.”
— Charles “Boss” Kettering

***

The whole scientific method is built on a very simple structure: If I do this, then what will happen? That’s the basic question on which more complicated, intricate, and targeted lines of inquiry are built, across a wide variety of subjects. This simple form helps us push deeper and deeper into knowledge of the world. (On a sidenote, science has become such a loaded, political word that this basic truth of how it works frequently seems to be lost!)

Individuals learn this way too. From the time you were a child, you were asking why (maybe even too much), trying to figure out all the right questions to ask to get better information about how the world works and what to do about it.

Because question-asking is such an integral part of how we know things about the world, both institutionally and individually, it seems worthwhile to understand how creative inquiry works. If we want to do things that haven’t been done or learn things that have never been learned — in short, be more creative — we must learn to ask the right questions, ones so good that they’re half-answered in the asking. And to do that, it might help to understand the process, no?

Warren Berger proposes a simple method in his book A More Beautiful Question: an interesting three-part system to help (partially) solve the problem of inquiry. He calls it The Why, What If, and How of Innovative Questioning, and reminds us why it’s worth learning about.

Each stage of the problem solving process has distinct challenges and issues–requiring a different mind-set, along with different types of questions. Expertise is helpful at certain points, not so helpful at others; wide-open, unfettered divergent thinking is critical at one stage, discipline and focus is called for at another. By thinking of questioning and problem solving in a more structured way, we can remind ourselves to shift approaches, change tools, and adjust our questions according to which stage we’re entering.

Three-Part Method for More Creativity

Why?

It starts with the Why?

A good Why? seeks true understanding. Why are things the way they are currently? Why do we do it that way? Why do we believe what we believe?

This start is essential because it gives us permission to continue down a line of inquiry fully equipped. Although we may think we have a brilliant idea in our heads for a new product, or a new answer to an old question, or a new way of doing an old thing, unless we understand why things are the way they are, we’re not yet on solid ground. We never want to operate from a position of ignorance, wasting our time on an idea that hasn’t been pushed and fleshed out. Before we say “I already know” the answer, maybe we need to step back and look for the truth.

At the same time, starting with a strong Why also opens up the idea that the current way (whether it’s our way or someone else’s) might be wrong, or at least inefficient. Let’s say a friend proposes you go to the same restaurant you’ve been to a thousand times. It might be a little agitating, but a simple “Why do we always go there?” allows two things to happen:

A. Your friend can explain why, and this gives him/her a legitimate chance at persuasion. (If you’re open minded.)

B. The two of you may agree you only go there out of habit, and might like to go somewhere else.

This whole Why? business is the realm of contrarian thinking, which not everyone enjoys doing. But Berger cites the case of George Lois:

George Lois, the renowned designer of iconic magazine covers and celebrated advertising campaigns, was also known for being a disruptive force in business meetings. It wasn’t just that he was passionate in arguing for his ideas; the real issue, Lois recalls, was that often he was the only person in the meeting willing to ask why. The gathered business executives would be anxious to proceed on a course of action assumed to be sensible. While everyone else nodded in agreement, “I would be the only guy raising his hand to say, ‘Wait a minute, this thing you want to do doesn’t make any sense. Why the hell are you doing it this way?’”

Others in the room saw Lois as slowing the meeting and stopping the group from moving forward. But Lois understood that the group was apt to be operating on habit–trotting out an idea or approach similar to what had been done in similar situations before, without questioning whether it was the best idea or the right approach in this instance. The group needed to be challenged to “step back” by someone like Lois–who had a healthy enough ego to withstand being the lone questioner in the room.

The truth is that a really good Why? type question tends to be threatening. That’s also what makes it useful. It challenges us to step back and stop thinking on autopilot. It also requires what Berger calls a step back from knowing — that recognizable feeling of knowing something but not knowing how you know it. Forcing that shift in perspective is, of course, one of the most valuable things you can do.

Berger describes a valuable exercise that’s sometimes used to force perspective on people who think they already have a complete answer. After showing a drawing of a large square (seemingly) divided into 16 smaller squares, the questioner (Srinivas, in Berger’s telling) asks the audience “How many squares do you see?”

The easy answer is sixteen. But the more observant people in the group are apt to notice–especially after Srinivas allows them to have a second, longer, look–that you can find additional squares by configuring them differently. In addition to the sixteen single squares, there are nine two-by-two squares, four three-by-three squares, and one large four-by-four square, which brings the total to thirty squares.

“The squares were always there, but you didn’t find them until you looked for them.”

Point being, until you step back, re-examine, and look a little harder, you might not have seen all the damn squares yet!
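For what it’s worth, the count generalizes: an n-by-n grid contains (n - k + 1) squared squares of side k for each k, which a few lines of Python can confirm.

```python
# Count every axis-aligned square in an n-by-n grid: for each side length k,
# there are (n - k + 1) ** 2 positions to place it.

def count_squares(n):
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

print(count_squares(4))  # 16 + 9 + 4 + 1 = 30, as in Berger's exercise
```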

What If?

The second part is where a good questioner, after using Why? to understand as deeply as possible and open a new line of inquiry, proposes a new type of solution, usually an audacious one — all great ideas tend to be, almost by definition — by asking What If…?

Berger illustrates this one well with the story of Pandora Music. The founder Tim Westergren wanted to know why good music wasn’t making it out to the masses. His search didn’t lead to a satisfactory answer, so he eventually asked himself, What if we could map the DNA of music? The result has been pretty darn good, with something close to 80 million listeners at present:

The Pandora story, like many stories of inquiry-driven startups, started with someone’s wondering about an unmet need. It concluded with the questioner, Westergren, figuring out how to bring a fully realized version of the answer into the world.

But what happened in between? That’s when the lightning struck. In Westergren’s case, ideas and influences began to come together; he combined what he knew about music with what he was learning about technology. Inspiration was drawn from a magazine article, and from a seemingly unrelated world (biology). A vision of the new possibility began to form in the mind. It all resulted in an audacious hypothetical question that might or might not have been feasible–but was exciting enough to rally people to the challenge of trying to make it work.

The What If stage is the blue-sky moment of questioning, when anything is possible. Those possibilities may not survive the more practical How stage; but it’s critical to innovation that there be time for wild, improbable ideas to surface and to inspire.

If the word Why has penetrative power, enabling the questioner to get past assumptions and dig deep into problems, the words What if have a more expansive effect–allowing us to think without limits or constraints, firing the imagination.

Clearly, Westergren had engaged in serious combinatorial creativity, pulling from multiple disciplines, which led him to ask the right kind of questions. This seems to be a pretty common feature at this stage of the game, and an extremely common feature of all new ideas:

Smart recombinations are all around us. Pandora, for example, is a combination of a radio station and search engine; it also takes the biological method of genetic coding and transfers it to the domain of music […] In today’s tech world, many of the most successful products–Apple’s iPhone being just one notable example–are hybrids, melding functions and features in new ways.

Companies, too, can be smart recombinations. Netflix was started as a video-rental business that operated like a monthly membership health club (and now it has added “TV production studio” to the mix). Airbnb is a combination of an online travel agency, a social media platform, and a good old-fashioned bed-and-breakfast (the B&B itself is a smart combination from way back.)

It may be that the Why? –> What if? line of inquiry is common to all types of innovative thinking because it engages the part of our brain that starts turning over old ideas in new ways by combining them with other, unrelated ideas, many of them previously sitting idle in our subconscious. That churning is where new ideas really arise.

The idea then has to be “reality-tested”, and that’s where the last major question comes in.

How?

Once we think we’ve hit on a brilliant new idea, it’s time to see if the thing actually works. Most of the time, the answer is no. But often enough to make it worth our while, we discover that the new idea has legs.

The most common problem here is that we try to perfect a new idea all at once, leading to stagnation and paralysis. That’s usually the wrong approach.

Another, often better, way is to try the idea quickly and start getting feedback. As much as possible. In the book, Berger describes a fun little experiment that drives home the point, and serves as a fairly useful business metaphor besides:

A software designer shared a story about an interesting experiment in which the organizers brought together a group of kindergarten children who were divided into small teams and given a challenge: Using uncooked spaghetti sticks, string, tape, and a marshmallow, they had to assemble the tallest structure they could, within a time limit (the marshmallow was supposed to be placed on top of the completed structure.)

Then, in a second phase of the experiment, the organizers added a new wrinkle. They brought in teams of Harvard MBA grad students to compete in the challenge against the kindergartners. The grad students, I’m told, took it seriously. They brought a highly analytical approach to the challenge, debating among themselves about how best to combine the sticks, the string, and the tape to achieve maximum altitude.

Perhaps you’ll have guessed this already, but the MBA students were no match for the kindergartners. For all their planning and discussion, the structures they carefully conceived invariably fell apart–and then they were out of time before they could get in more attempts.

The kids used their time much more efficiently by constructing right away. They tried one way of building, and if it didn’t work, they quickly tried another. They got in a lot more tries. They learned from their mistakes as they went along, instead of attempting to figure out everything in advance.

This little experiment gets run in the real world all the time by startups looking to outcompete ponderous old bureaucracies. They simply substitute velocity for scale and see what happens — it often works well.

The point is to move along the axis of Why? –> What If? –> How? without too much self-censoring in the last phase. Being afraid to fail can often mean a great What If? proposition gets stuck there forever. Analysis paralysis, as it’s sometimes called. But if you can instead enter the testing of the How? stage quickly, even by showing that an idea won’t work, then you can start the loop over again, either asking a new Why? or proposing a new What If? to an existing Why?

Thus moving your creative engine forward.

***

Berger’s point is that there is an intense practical end to understanding productive inquiry. Just like “If I do this, then what will happen?” is a basic structure on which all manner of complex scientific questioning and testing is built, so can a simple Why, What If, and How structure catalyze a litany of new ideas.

Still Interested? Check out the book, or check out some related posts: Steve Jobs on Creativity, Seneca on Gathering Ideas and Combinatorial Creativity, or for some fun with question-asking, What If? Serious Scientific Answers to Absurd Hypothetical Questions.

Architect Matthew Frederick on the Three Levels of Knowing

Three Levels of Knowing

Architect Matthew Frederick draws our attention to the three levels of knowing in 101 Things I Learned in Architecture School.

Simplicity is the world view of the child or uninformed adult, fully engaged in his own experience and happily unaware of what lies beneath the surface of immediate reality.

Complexity characterizes the ordinary adult world view. It is characterized by an awareness of complex systems in nature and society but an inability to discern clarifying patterns and connections.

Informed Simplicity is an enlightened view of reality. It is founded on an ability to discern or create clarifying patterns within complex mixtures. Pattern recognition is a crucial skill for an architect, who must create a highly ordered building amid many competing and frequently nebulous design considerations.

One approach to informed simplicity is a narrow specialization. By immersing yourself in one discipline or field, you can often begin to see things at an informed simplicity level. That is, you understand the variables at play, the probable results, what’s important and what’s not, etc.

Farnam Street takes another approach.

We’re trying to better understand how the world works so we can align ourselves with reality. We become the generalist, with a few big ideas from each discipline that we can combine to understand the forces at play.

However, we can only take you so far. Part of seeing things with informed simplicity means that you’ve done the work and chewed on the complexity yourself. If we gave you the answers – not that we have them – you’d never have them when you need them because you wouldn’t understand why they work, when they work and when they don’t work. You have to synthesize for yourself.

At the 2016 Daily Journal Meeting, Charlie Munger commented on this:

Saying you’re in favor of synthesis is like saying you’re in favor of reality. Synthesis is reality because we live in a world with multiple factors involved. Of course, you’ve got to have synthesis to understand the situation when two factors are intertwined. Of course, you want to be good at synthesis.

It’s easy to say you want to be good at synthesis. But it’s not what the reward system of the world pays for. They want extreme specialization. By the way, for most people extreme specialization is the way to succeed. Most people are way better off being a chiropodist than trying to understand a little bit of all the disciplines.