How To Mentally Overachieve — Charles Darwin’s Reflections On His Own Mind

We’ve written quite a bit about the marvelous British naturalist Charles Darwin, who with his Origin of Species created perhaps the most intense intellectual debate in human history, one which continues to this day.

Darwin’s Origin was a courageous and detailed thought piece on the nature and development of biological species. It’s the starting point for nearly all of modern biology.

But, as we’ve noted before, Darwin was not a man of pure IQ. He was not Isaac Newton, or Richard Feynman, or Albert Einstein — breezing through complex mathematical physics at a young age.

Charlie Munger thinks Darwin would have placed somewhere in the middle of a good private high school class. He was also in notoriously bad health for most of his adult life and, by his son’s estimation, a terrible sleeper. He really only worked a few hours a day in the many years leading up to the Origin of Species.

Yet his “thinking work” outclassed almost everyone. An incredible story.

In his autobiography, Darwin reflected on this peculiar state of affairs. What was he good at that led to the result? What was he so weak at? Why did he achieve better thinking outcomes? As he put it, his goal was to:

“Try to analyse the mental qualities and the conditions on which my success has depended; though I am aware that no man can do this correctly.”

In studying Darwin ourselves, we hope to better appreciate our own strengths and weaknesses, and to understand the working methods of a “mental overachiever.”

Let’s explore what Darwin saw in himself.

***

1. He did not have a quick intellect or an ability to follow long, complex, or mathematical reasoning. He may have been a bit hard on himself, but Darwin realized that he wasn’t a “5 second insight” type of guy (and let’s face it, most of us aren’t). His life also proves how little that trait matters if you’re aware of it and counter-weight it with other methods.

I have no great quickness of apprehension or wit which is so remarkable in some clever men, for instance, Huxley. I am therefore a poor critic: a paper or book, when first read, generally excites my admiration, and it is only after considerable reflection that I perceive the weak points. My power to follow a long and purely abstract train of thought is very limited; and therefore I could never have succeeded with metaphysics or mathematics. My memory is extensive, yet hazy: it suffices to make me cautious by vaguely telling me that I have observed or read something opposed to the conclusion which I am drawing, or on the other hand in favour of it; and after a time I can generally recollect where to search for my authority. So poor in one sense is my memory, that I have never been able to remember for more than a few days a single date or a line of poetry.

2. He did not feel easily able to write clearly and concisely. He compensated by getting things down quickly and then coming back to them later, thinking them through again and again. Slow, methodical…and ridiculously effective: For those who haven’t read it, the Origin of Species is extremely readable and clear, even now, 150 years later.

I have as much difficulty as ever in expressing myself clearly and concisely; and this difficulty has caused me a very great loss of time; but it has had the compensating advantage of forcing me to think long and intently about every sentence, and thus I have been led to see errors in reasoning and in my own observations or those of others.

There seems to be a sort of fatality in my mind leading me to put at first my statement or proposition in a wrong or awkward form. Formerly I used to think about my sentences before writing them down; but for several years I have found that it saves time to scribble in a vile hand whole pages as quickly as I possibly can, contracting half the words; and then correct deliberately. Sentences thus scribbled down are often better ones than I could have written deliberately.

3. He forced himself to be an incredibly effective and organized collector of information. Darwin’s system of reading and indexing facts in large portfolios is worth emulating, as is the habit of taking down conflicting ideas immediately.

As in several of my books facts observed by others have been very extensively used, and as I have always had several quite distinct subjects in hand at the same time, I may mention that I keep from thirty to forty large portfolios, in cabinets with labelled shelves, into which I can at once put a detached reference or memorandum. I have bought many books, and at their ends I make an index of all the facts that concern my work; or, if the book is not my own, write out a separate abstract, and of such abstracts I have a large drawer full. Before beginning on any subject I look to all the short indexes and make a general and classified index, and by taking the one or more proper portfolios I have all the information collected during my life ready for use.
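
If you wanted to copy Darwin’s system today, the mechanics are simple enough to sketch in a few lines of code. This is purely our illustration; the class, subjects, and notes below are invented, not anything Darwin specified:

```python
from collections import defaultdict

class FactIndex:
    """A toy digital analogue of Darwin's labelled portfolios."""

    def __init__(self):
        # subject -> list of filed notes, like portfolios on labelled shelves
        self.portfolios = defaultdict(list)

    def file_note(self, subject, note, source, contrary=False):
        # A detached reference or memorandum goes straight into its portfolio;
        # contrary facts are flagged so they can't be quietly forgotten.
        self.portfolios[subject].append(
            {"note": note, "source": source, "contrary": contrary}
        )

    def general_index(self, subject):
        # "Before beginning on any subject I look to all the short indexes
        # and make a general and classified index."
        notes = self.portfolios[subject]
        return {
            "supporting": [n for n in notes if not n["contrary"]],
            "contrary": [n for n in notes if n["contrary"]],
        }

index = FactIndex()
index.file_note("variation", "pigeon breeds diverge sharply under selection",
                source="breeding notes")
index.file_note("variation", "some traits seem stable across many breeds",
                source="correspondence", contrary=True)
print(index.general_index("variation"))
```

Note the contrary flag: it encodes the habit, mentioned above, of taking down conflicting ideas immediately.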

4. He had possibly the most valuable trait in any sort of thinker: A passionate interest in understanding reality and putting it in useful order in his head. This “Reality Orientation” is hard to measure and certainly does not show up on IQ tests, but probably determines, to some extent, success in life.

On the favourable side of the balance, I think that I am superior to the common run of men in noticing things which easily escape attention, and in observing them carefully. My industry has been nearly as great as it could have been in the observation and collection of facts. What is far more important, my love of natural science has been steady and ardent.

This pure love has, however, been much aided by the ambition to be esteemed by my fellow naturalists. From my early youth I have had the strongest desire to understand or explain whatever I observed,–that is, to group all facts under some general laws. These causes combined have given me the patience to reflect or ponder for any number of years over any unexplained problem. As far as I can judge, I am not apt to follow blindly the lead of other men. I have steadily endeavoured to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject), as soon as facts are shown to be opposed to it.

Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified. This has naturally led me to distrust greatly deductive reasoning in the mixed sciences. On the other hand, I am not very sceptical—a frame of mind which I believe to be injurious to the progress of science. A good deal of scepticism in a scientific man is advisable to avoid much loss of time, but I have met with not a few men, who, I feel sure, have often thus been deterred from experiment or observations, which would have proved directly or indirectly serviceable.

[…]

Therefore my success as a man of science, whatever this may have amounted to, has been determined, as far as I can judge, by complex and diversified mental qualities and conditions. Of these, the most important have been—the love of science—unbounded patience in long reflecting over any subject—industry in observing and collecting facts—and a fair share of invention as well as of common sense.

5. Most inspirational to us of average intellect, he outperformed his own mental aptitude with these good habits, surprising even himself with the results.

With such moderate abilities as I possess, it is truly surprising that I should have influenced to a considerable extent the belief of scientific men on some important points.

***

Still Interested? Read his autobiography, The Origin of Species, or check out David Quammen’s wonderful short biography of the most important period of Darwin’s life. Also, if you missed it, check out our prior post on Darwin’s Golden Rule.

What Are You Doing About It? Reaching Deep Fluency with Mental Models

The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)

The whole idea is to take the world’s greatest, most useful ideas and make them work for you!

How hard can it be?

Nearly all of the models themselves are perfectly understandable by the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayesian updating, multiplicative thinking, hindsight bias, or the bias from envy and jealousy are all obviously true and part of the reality we live in.

There’s a bit of a problem we’re seeing though: People are reading the stuff, enjoying it, agreeing with it…but not taking action. It’s not becoming part of their standard repertoire.

Let’s say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes’ great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!

But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn’t,” then haven’t you really wasted your time?

Ironically, it’s this habit of “going halfway” instead of “going all the way,” like Sisyphus constantly getting halfway up the mountain, which is the biggest waste of time!

See, the common reason why people don’t truly “follow through” with all of this stuff is that they haven’t raised their knowledge to a “deep fluency” — they’re skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.

The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.

Let’s work through an example.

***

Say you’re just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we’ve landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.

This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:

“In the choice between changing one’s mind and proving there’s no need to do so, most people get busy on the proof.”

Now, what most people do, the ones you’re trying to outperform, is say “Great idea! Thanks, Galbraith,” and then stop thinking about it.

Don’t do that!

The next step would be to push a bit further, to get beyond the sound bite: What’s the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?

The answers are out there: They’re in Daniel Kahneman and in Charlie Munger and in Elster. They’re available by searching through Farnam Street.

The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.

Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That’s fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.”

While that’s great work, you’re not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.

The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else’s that are obviously sound.

In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.

Darwin had a rule, one we have written about before but will restate here: Make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.

As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Now we’re getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.

Sometimes when we get outside the heuristic/biases stuff, it’s less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.

But that’s also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you’ll come up with hundreds of them, and people might even look to you when they’re having problems doing it themselves!

Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they’ve learned.

For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That’s how it’s done.

Even if you can’t come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.

Sometimes it’s just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)

We did this very thing recently with Lee Kuan Yew’s Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!

But that’s exactly the point. Give the thing a name and a life and, like clockwork, you’ll start recalling it. The phrase “Lee Kuan Yew’s Rule” actually appears in my head when I’m approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I’d hoped.

Your goal should be to create about a thousand of those little tools in your head, attached to a deep fluency in the material from which they came.

***

I can hear the objection coming. Who has time for this stuff?

You do. It’s about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you’re spending way more time right now making sub-optimal decisions and trying to deal with the fallout.

If you need help learning to manage your time right this second, check out the resources in our learning community, including our productivity seminar, which has changed thousands of people’s lives for the better. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you’ll notice you do have an hour a day to spend on this Big Ideas stuff.

Once you find that solid hour (or more), start using it in the way outlined above, and let the world’s great knowledge actually start making an impact. Just do a little every day.

What you’ll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.

Unless and until you really understand this, you’ll continue spinning your wheels. So here’s your call to action. Go get to it!

The Many Ways Our Memory Fails Us (Part 2)

(Purchase a copy of the entire 3-part series in one sexy PDF for $3.99)

***

In part one, we began a conversation about the failings of human memory, using Daniel Schacter’s excellent The Seven Sins of Memory as our guide. (We’ve also covered some reasons why our memory is pretty darn good.) We covered transience — the loss of memory due to time — and absent-mindedness — memories that were never encoded at all or were not available when needed. Let’s keep going with a couple more whoppers: Blocking and Misattribution.

Blocking

Blocking is the phenomenon when something is indeed encoded in our memory and should be easily available in the given situation, but simply will not come to mind. We’re most familiar with blocking as the always frustrating “It’s on the tip of my tongue!”

Unsurprisingly, blocking occurs most frequently when it comes to names and indeed occurs more frequently as we get older:

Twenty-year-olds, forty-year-olds, and seventy-year-olds kept diaries for a month in which they recorded spontaneously occurring retrieval blocks that were accompanied by the “tip of the tongue” sensation. Blocking occurred occasionally for the names of objects (for example, algae) and abstract words (for example, idiomatic). In all three groups, however, blocking occurred most frequently for proper names, with more blocks for people than for other proper names such as countries or cities. Proper name blocks occurred more frequently in the seventy-year-olds than in either of the other two groups.

This is not the worst sin our memory commits — excepting the times when we forget an important person’s name (which is admittedly not fun), blocking doesn’t cause the terrible practical results some of the other memory issues cause. But the reason blocking occurs does tell us something interesting about memory, something we intuitively know from other domains: We have a hard time learning things by rote or by force. We prefer associations and connections to form strong, lasting, easily available memories.

Why are names blocked from us so frequently, even more than objects, places, descriptions, and other nouns? For example, Schacter mentions experiments in which researchers show that we more easily forget a man’s name than his occupation, even if they’re the same word! (Baker/baker or Potter/potter, for example.)

It’s because relative to a descriptive noun like “baker,” which calls to mind all sorts of connotations, images, and associations, a person’s name has very little attached to it. We have no easy associations to make — it doesn’t tell us anything about the person or give us much to hang our hat on. It doesn’t really help us form an image or impression. And so we basically remember it by rote, which doesn’t always work that well.

Most models of name retrieval hold that activation of phonological representations [sound associations] occurs only after activation of conceptual and visual representations. This idea explains why people can often retrieve conceptual information about an object or person whom they cannot name, whereas the reverse does not occur. For example, diary studies indicate that people frequently recall a person’s occupation without remembering his name, but no instances have been documented in which a name is recalled without any conceptual knowledge about the person. In experiments in which people named pictures of famous individuals, participants who failed to retrieve the name “Charlton Heston” could often recall that he was an actor. Thus, when you block on the name “John Baker” you may very well recall that he is an attorney who enjoys golf, but it is highly unlikely that you would recall Baker’s name and fail to recall any of his personal attributes.

A person’s name is the weakest piece of information we have about them in our people-information lexicon, and thus the least available at any time, and the most susceptible to not being available as needed. It gets worse if it’s a name we haven’t needed to recall frequently or recently, as we all can probably attest to. (This also applies to the other types of words we block on less frequently — objects, places, etc.)

The only real way to avoid blocking problems is to create stronger associations when we learn names, or even re-encode names we already know by increasing their salience with a vivid image, even a silly one. (If you ever meet anyone named Baker…you know what to do.)

But the most important idea here is that information gains salience in our brain based on what it brings to mind. 

Schacter is non-committal about whether blocking occurs in the sense implied by Freud’s idea of repressed memories; it seems the issue was not settled at the time of writing.

Misattribution

The memory sin of misattribution has fairly serious consequences. Misattribution happens all the time and is a peculiar memory sin where we do remember something, but that thing is wrong, or possibly not even our own memory at all:

Sometimes we remember events that never happened, misattributing speedy processing of incoming information or vivid images that spring to mind, to memories of past events that did not occur. Sometimes we recall correctly what happened, but misattribute it to the wrong time and place. And at other times misattribution operates in a different direction: we mistakenly credit a spontaneous image or thought to our own imagination, when in reality we are recalling it–without awareness–from something we read or heard.

The most familiar, but benign, experience we’ve all had with misattribution is the curious case of deja vu. As of the writing of his book, Schacter felt there was no convincing explanation for why deja vu occurs, but we know that the brain is capable of thinking it’s recalling an event that happened previously, even if it hasn’t.

In the case of deja vu, it’s simply a bit of an annoyance. But the misattribution problem causes more serious problems elsewhere.

The major one is eyewitness testimony, which we now know is notoriously unreliable. It turns out that when eyewitnesses claim they “know what they saw!” it’s unlikely they remember as well as they claim. It’s not their fault and it’s not a lie — you do think you recall the details of a situation perfectly well. But your brain is tricking you, just like deja vu. How bad is the eyewitness testimony problem? It used to be pretty bad.

…consider two facts. First, according to estimates made in the late 1980s, each year in the United States more than seventy-five thousand criminal trials were decided on the basis of eyewitness testimony. Second, a recent analysis of forty cases in which DNA evidence established the innocence of wrongly imprisoned individuals revealed that thirty-six of them (90 percent) involved mistaken eyewitness identification. There are no doubt other such mistakes that have not been rectified.

What happens is that, in any situation where our memory stores away information, it doesn’t have the horsepower to do it with complete accuracy. There are just too many variables to sort through. So we remember the general aspects of what happened, and we remember some details, depending on how salient they were.

We recall that we met John, Jim, and Todd, who were all part of the sales team for John Deere. We might recall that John was the young one with glasses, Jim was the older bald one, and Todd talked the most. We might remember specific moments or details of the conversation which stuck out.

But we don’t get it all perfectly, and if it was an unmemorable meeting, with the transience of time, we start to lose the details. Linking the general aspects of an event with its specific details is a process called memory binding, and it’s often the source of misattribution errors.

Let’s say we remember for sure that we curled our hair this morning. All of our usual cues tell us that we did — our hair is curly, it’s part of our morning routine, we remember thinking it needed to be done, etc. But…did we turn the curling iron off? We remember that we did, but is that yesterday’s memory or today’s?

This is a memory binding error. Our brain didn’t sufficiently “link up” the curling event and the turning off of the curler, so we’re left to wonder. This binding issue leads to other errors, like the memory conjunction error, where sometimes the binding process does occur, but it makes a mistake. We misattribute the strong familiarity:

Having met Mr. Wilson and Mr. Albert during your business meeting, you reply confidently the next day when an associate asks you the name of the company vice president: “Mr. Wilbert.” You remembered correctly pieces of the two surnames but mistakenly combined them into a new one. Cognitive psychologists have developed experimental procedures in which people exhibit precisely these kinds of erroneous conjunctions between features of different words, pictures, sentences, or even faces. Thus, having studied spaniel and varnish, people sometimes claim to remember Spanish.

What’s happening is a misattribution. We know we saw the syllables Span- and -ish, and our memory tells us we must have heard Spanish. But we didn’t.

Back to the eyewitness testimony problem, what’s happening is we’re combining a general familiarity with a lack of specific recall, and our brain is recombining those into a misattribution. We recall a tall-ish man with some sort of facial hair, and then we’re shown 6 men in a lineup, and one is tall-ish with facial hair, and our brain tells us that must be the guy. We make a relative judgment: Which person here is closest to what I think I saw? Unfortunately, like the Spanish/varnish issue, we never actually saw the person we’ve identified as the perp.

None of this occurs with much conscious involvement, of course. It’s happening subconsciously, which is why good procedures are needed to overcome the problem. In the case of suspect lineups, the solution is to show the witness each suspect, one after another, and have them give a thumbs up or thumbs down immediately. This takes away the relative comparison and makes us consciously compare the suspect in front of us with our memory of the perpetrator.

The good thing about this error is that people can be encouraged to search their memory more carefully. But it’s far from foolproof, even if we’re getting a very strong indication that we remember something.

And what helps prevent us from making too many errors is something Schacter calls the distinctiveness heuristic. If a distinctive thing supposedly happened, we usually reason we’d have a good memory of it. And usually this is a very good heuristic to have. (Remember, salience always encourages memory formation.) As we discussed in Part One, a salient artifact gives us something to tie a memory to. If I meet someone wearing a bright rainbow-colored shirt, I’m a lot more likely to recall some details about them, simply because they stuck out.

***

As an aside, misattribution allows us one other interesting insight into the human brain: Our “people information” remembering is a specific, distinct module, one that can falter on its own, without harming any other modules. Schacter discusses a man with a delusion that many of the normal people around him were film stars. He even misattributed made-up famous-sounding names (like Sharon Sugar) to famous people, although he couldn’t put his finger on who they were.

But the man did not falsely recognize other things. Made up cities or made up words did not trip up his brain in the strange way people did. This (and other data) tells us that our ability to recognize people is a distinct “module” our brain uses, supporting one of Judith Rich Harris’s ideas about human personality that we’ve discussed: The “people information lexicon” we develop throughout our lives is a uniquely important module we use.

***

One final misattribution is something called cryptomnesia — essentially the opposite of deja vu. It’s when we think we recognize something as new and novel even though we’ve seen it before. Accidental plagiarizing can even result from cryptomnesia. (Try telling that to your school teachers!) Cryptomnesia falls into the same bucket as other misattributions in that we fail to recollect the source of information we’re recalling — the information and event where we first remembered it are not bound together properly. Let’s say we “invent” the melody to a song which already exists. The melody sounds wonderful and familiar, so we like it. But we mistakenly think it’s new.

In the end, Schacter reminds us to think carefully about the memories we “know” are true, and to try to remember specifics when possible:

We often need to sort out ambiguous signals, such as feelings of familiarity or fleeting images, that may originate in specific past experiences, or arise from subtle influences in the present. Relying on judgment and reasoning to come up with plausible attributions, we sometimes go astray.  When misattribution combines with another of memory’s sins — suggestibility — people can develop detailed and strongly held recollections of complex events that never occurred.

And with that, we will leave it here for now. Next time we’ll delve into suggestibility and bias, two more memory sins with a range of practical outcomes.

Mental Models: Getting the World to Do the Work for You

People are working harder and harder to clean up otherwise avoidable messes they created by making poor initial decisions. There are many reasons we’re making poor decisions and failing to learn from them.

Under the heading “Sources of Stupidity” in an article entitled Smart Decisions, we wrote about some of the factors that contribute to suboptimal decisions.

1. We’re (sometimes) stupid. Of course, I like to think that I’m rational and capable of interpreting all of the information in a non-biased way. Only I’m not. At least not always. There are situations that increase the odds of irrationality, for instance when we’re tired, overly focused on a goal, rushing, distracted or interrupted, operating in a group, or are under the direction of an expert.

2. We have the wrong information. In this case, we’re operating with the wrong facts or our assumptions are incorrect.

3. We use the wrong model. We use models to make decisions. The quality of those models often determines the quality of our thinking. There are a variety of reasons we use false, incomplete, or incorrect models. For instance, we’re prone to using less useful models when we are novices or when we operate outside our area of expertise. The odds of using the wrong model also increase as the pace of environmental change increases.

4. We fail to learn. We all know the person who has 20 years of experience, but it’s really the same year over and over. Well, that person is sometimes us. If we don’t understand how we learn, we’re likely to make the same mistakes over and over.

5. Doing what’s easy over what’s right. You can think of this as looking good over doing good.  In this case, we make choices based on optics, explainability, politics, etc. Mostly this comes from not having a strong sense of self and seeking external validation or avoiding punishment.

Simple But Not Easy

One of the metamodels that traverse all five of these sources of stupidity is understanding the world.

If you understand the world as it really is, not as you’d wish it to be, you will begin to make better decisions almost immediately. Once you start making better decisions, the results compound. Better initial decisions free up your time and reduce your stress, allowing you to spend more time with your family and leave your competition in the dust.
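
To make “the results compound” concrete, here’s a toy calculation. The numbers are invented; the point is only that a small, steady edge grows geometrically rather than linearly:

```python
# A deliberately simple compounding sketch with made-up numbers:
# suppose better decisions buy you a 1% improvement per week,
# and each improvement builds on the last.
edge_per_week = 0.01
weeks = 52 * 5  # five years

multiplier = (1 + edge_per_week) ** weeks
print(f"After five years: {multiplier:.1f}x")  # ~13.3x
```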

Understanding The World

How can we best understand the world as it is?

Acquiring knowledge can be a very daunting task. If you think of the mind as a toolbox, we’re only as good as the tools at our disposal. A carpenter doesn’t show up to work with an empty toolbox. Not only do they want as many tools in their toolbox as possible, but they want to know how to use them. Having more tools and the knowledge of how to use them means they can tackle more problems.  Try as we might, we cannot build a house with only a hammer.

If you’re a knowledge worker, you’re a carpenter. But your tools aren’t bought at a store and they don’t come in a red box that you carry around. Mental tools are the big ideas from multiple disciplines, and we store them in our mind. And if we have a lot of tools and the knowledge required to wield them properly, we can start to synthesize how the world works and make better decisions when confronted with problems.

This is how we understand and deal with reality. The tools you put into your toolbox are Mental Models.

Mental models are a framework for understanding how the world really works. They help you grasp new ideas quickly, identify patterns before anyone else and shift your perspective with ease. Mental Models allow us to make better decisions, scramble out of bad situations, and think critically. If you want to understand reality you must look at a problem in multiple dimensions — how could it be otherwise?

Getting to this level of understanding requires having a lot of tools and knowing how to use them. You knew there was a hitch, right?

We need to change our fast-food diet of information consumption and adopt a healthier diet of knowledge that changes slowly over time. While changing diets isn’t easy, it can be incredibly rewarding: more time, less stress, and being better at your job. The cost, however, is short-term pain for long-term gain. You must change how you think.

Second-Order Thinking

One example of a model we can immediately conceptualize and use to improve our ability to make better decisions is something we can borrow from ecology called second-order thinking. The simple way to conceptualize this is to ask yourself “If I do X, what will happen after that?” I sum this up using the ecologist Garrett Hardin’s simple question: “And then what?”
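
Here’s a sketch of what asking “And then what?” looks like mechanically. The consequence map is entirely made up; in real life the map lives in your head, and filling it in honestly is the hard part:

```python
# A toy sketch of second-order thinking: walk a (made-up) map of
# first-order consequences to surface the second- and third-order
# effects before committing to a decision.
consequences = {
    "cut prices": ["sales volume rises", "margins shrink"],
    "sales volume rises": ["competitors cut prices too"],
    "competitors cut prices too": ["industry margins shrink for everyone"],
    "margins shrink": ["less budget for product quality"],
}

def and_then_what(action, depth=3, order=1):
    for effect in consequences.get(action, []):
        print(f"{'  ' * order}order {order}: {effect}")
        if order < depth:
            and_then_what(effect, depth, order + 1)

and_then_what("cut prices")
```

The map is trivial, but the discipline isn’t: most decisions stop at the first line of output.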

A lot of people forget about higher-order effects — second, third, and beyond. I’ve been in a lot of meetings where decisions are made and very few people think to the second level, let alone the third. Rather, what typically happens is called first conclusion bias: The brain shuts down and stops thinking at the first idea that comes to mind that seems to address the problem as you understand it.

We don’t often realize that our first thoughts are usually not even our own thoughts. They usually belong to someone else. We understand the sound bite, but we haven’t done the hard work of real thinking. After we reach the first conclusion, our minds often shut down. We don’t seek evidence that would contradict our conclusion. We don’t ask ourselves what the likely result of this solution would be — we don’t ask ourselves “And then what?” We don’t ask what other solutions might be even more optimal.

For example, consider a hypothetical organization that decides to change their incentive systems. They come up with a costly new system that requires substantial changes to the current system. Only they don’t consider (or even understand) the problems that the new system is likely to create. It’s possible they’ve created more problems than they’ve solved – only now there are different problems they must put their head down to solve. Optically, they “reorganize their incentive programs,” but practically, they’ve simply expended energy to stay in place.

Another, perhaps more complicated, example is when a salesman comes into a company and offers you a software program he claims will lower your operating costs and increase your profits. He’s got all these beautiful charts on how much more competitive you’ll be and how it will improve everything. This is exactly what you need because your compensation is based on increasing profits. You’re sold.

Then second-order thinking kicks in and you dare to ask how much of those cost savings will go to you and how much will eventually end up as benefits enjoyed by customers. To a large extent, that depends on the business you’re in. However, you can be damn sure the salesman is now knocking on your competitor’s door and telling them you just bought their product, and if they want to remain competitive they had better purchase it too. Eventually, you all have the new software and no one is truly better off. Thus, in the manner of a crowd of people standing on their tip-toes at a parade, all the competitors spend the money but none of them win: The salesman wins and the customer wins.

We know, thanks to people like Garrett Hardin, Howard Marks, Charlie Munger, Peter Kaufman, and disciplines like ecology, that there are second and third-order effects. This is how the world really works. It just isn’t always a comfortable reality.

Understanding how the world works isn’t easy and it shouldn’t be. It’s hard work. If it were easy, everyone would do it. And it’s not for everyone. Sometimes, if your goal is to maximize utility, you should focus on getting very, very good in a narrow area and becoming an expert, accepting that you will make many mistakes outside of that domain. But for most, it’s extremely helpful to understand the forces at play outside of their narrow area of expertise.

Because when you think about it, how could reality be anything other than a synthesis of multiple factors? How could it possibly be otherwise?

Henry Ford and the Actual Value of Education

“The object of education is not to fill a man’s mind with facts;
it is to teach him how to use his mind in thinking.”
— Henry Ford

***

In his memoir My Life and Work, written in 1922, the brilliant (but flawed) Henry Ford (1863-1947) offers perhaps the best definition you’ll find of the value of an education, and a useful warning against the mere accumulation of information for the sake of its accumulation. A devotee of lifelong learning need not be a Jeopardy contestant, accumulating trivia to spit back as needed. In the Age of Google, that sort of knowledge is increasingly irrelevant.

A real life-long learner seeks to learn and apply the world’s best knowledge to create a more constructive and more useful life for themselves and those around them. And to do that, you have to learn how to think on your feet. The world does not offer up no-brainers every day; more frequently, we’re presented with a lot of grey options. Unless your studies are improving your ability to handle reality as it is and get a fair result, you’re probably wasting your time.

From Ford’s memoir:

An educated man is not one whose memory is trained to carry a few dates in history—he is one who can accomplish things. A man who cannot think is not an educated man however many college degrees he may have acquired. Thinking is the hardest work anyone can do—which is probably the reason why we have so few thinkers. There are two extremes to be avoided: one is the attitude of contempt toward education, the other is the tragic snobbery of assuming that marching through an educational system is a sure cure for ignorance and mediocrity. You cannot learn in any school what the world is going to do next year, but you can learn some of the things which the world has tried to do in former years, and where it failed and where it succeeded. If education consisted in warning the young student away from some of the false theories on which men have tried to build, so that he may be saved the loss of the time in finding out by bitter experience, its good would be unquestioned.

An education which consists of signposts indicating the failure and the fallacies of the past doubtless would be very useful. It is not education just to possess the theories of a lot of professors. Speculation is very interesting, and sometimes profitable, but it is not education. To be learned in science today is merely to be aware of a hundred theories that have not been proved. And not to know what those theories are is to be “uneducated,” “ignorant,” and so forth. If knowledge of guesses is learning, then one may become learned by the simple expedient of making his own guesses. And by the same token he can dub the rest of the world “ignorant” because it does not know what his guesses are.

But the best that education can do for a man is to put him in possession of his powers, give him control of the tools with which destiny has endowed him, and teach him how to think. The college renders its best service as an intellectual gymnasium, in which mental muscle is developed and the student strengthened to do what he can. To say, however, that mental gymnastics can be had only in college is not true, as every educator knows. A man’s real education begins after he has left school. True education is gained through the discipline of life.

[…]

Men satisfy their minds more by finding out things for themselves than by heaping together the things which somebody else has found out. You can go out and gather knowledge all your life, and with all your gathering you will not catch up even with your own times. You may fill your head with all the “facts” of all the ages, and your head may be just an overloaded fact-box when you get through. The point is this: Great piles of knowledge in the head are not the same as mental activity. A man may be very learned and very useless. And then again, a man may be unlearned and very useful.

The object of education is not to fill a man’s mind with facts; it is to teach him how to use his mind in thinking. And it often happens that a man can think better if he is not hampered by the knowledge of the past.

Ford is probably wrong in his very last statement (study of the past is crucial to understanding the human condition), but the sentiment offered in the rest of the piece should be read and re-read frequently.

This brings to mind a debate you’ll hear that almost all debaters get wrong: What’s more valuable, to be educated in the school of life, or in the school of books? Which is it?

It’s both!

This is what we call a false dichotomy. There is absolutely no reason to choose between the two. We’re all familiar with the algebra. If A and B have positive value, then A+B must be greater than A or B alone! You must learn from your life as it goes along, but since we have the option to augment that by studying the lives of others, why would we not take advantage? All it takes is the will and the attitude to study the successes and failures of history, add them to your own experience, and get an algebra-style A+B result.

So, resolve to use your studies to learn to think, to learn to handle the world better, to be more useful to those around you. Don’t worry about the facts and figures for their own sake. We don’t need another human encyclopedia.

***

Still Interested? Check out all of Ford’s interesting memoir, or try reading up on what a broad education should contain. 

A Few Useful Mental Tools from Richard Feynman

We’ve covered the brilliant physicist Richard Feynman (1918-1988) many times here before. He was a genius. A true genius. But there have been many geniuses — physics has been fortunate to attract some of them — and few of them are as well known as Feynman. Why is Feynman so well known? It’s likely because he had tremendous range outside of pure science, and although he won a Nobel Prize for his work in quantum mechanics, he’s probably best known for other things, primarily his wonderful ability to explain and teach.

This ability was on display in a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen-Scientist. The lectures are a wonderful example of how well Feynman’s brain worked outside of physics, talking through basic reasoning and some of the problems of his day.

Particularly useful are a series of “tricks of the trade” he gives in a section called This Unscientific Age. These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day. They’re wonderfully instructive. Let’s check them out.

Mental Tools from Richard Feynman

Before we start, it’s worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy; save the rigor for questions that are scientific ones. So let’s start with a deep breath:

Now, that there are unscientific things is not my grief. That’s a nice word. I mean, that is not what I am worrying about, that there are unscientific things. That something is unscientific is not bad; there is nothing the matter with it. It is just unscientific. And scientific is limited, of course, to those things that we can tell about by trial and error. For example, there is the absurdity of the young these days chanting things about purple people eaters and hound dogs, something that we cannot criticize at all if we belong to the old flat foot floogie and a floy floy or the music goes down and around. Sons of mothers who sang about “come, Josephine, in my flying machine,” which sounds just about as modern as “I’d like to get you on a slow boat to China.” So in life, in gaiety, in emotion, in human pleasures and pursuits, and in literature and so on, there is no need to be scientific, there is no reason to be scientific. One must relax and enjoy life. That is not the criticism. That is not the point.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone truly knows their stuff or is mimicking:

The first one has to do with whether a man knows what he is talking about, whether what he says has some basis or not. And my trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn’t know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away— bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don’t know. I used to be a general, and I don’t know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can’t tell you ahead of time what conclusion, but I can give you some of the principles I’ll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them,” etc., etc., etc.

That’s a wonderfully useful way to figure out whether someone is Max Planck or the chauffeur.

The second trick regards how to deal with uncertainty:

People say to me, “Well, how can you teach your children what is right and wrong if you don’t know?” Because I’m pretty sure of what’s right and wrong. I’m not absolutely sure; some experiences may change my mind. But I know what I would expect to teach them. But, of course, a child won’t learn what you teach him.

I would like to mention a somewhat technical idea, but it’s the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it’s rather complicated, technically, but I’ll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It’s very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn’t impossible that it should turn sort of greenish color.” So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven’t made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.

Feynman is talking about Grey Thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn’t proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he’s proposing is Bayesian updating — starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
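
To make Feynman’s idealized example concrete, here is a minimal numeric sketch of one such update. The prior and likelihood figures are our own invented numbers, chosen only to match his description:

```python
# Bayes' rule applied to Feynman's Theory A / Theory B story.
# Prior: Theory A is much more likely than Theory B.
# Observation: the sample turns greenish, which A says is very
# unlikely and B says is plausible.
prior_a, prior_b = 0.9, 0.1
p_greenish_given_a = 0.05  # "it's very unlikely" under A
p_greenish_given_b = 0.40  # "not impossible" under B

# posterior is proportional to prior times likelihood
unnorm_a = prior_a * p_greenish_given_a
unnorm_b = prior_b * p_greenish_given_b
total = unnorm_a + unnorm_b
print(f"P(A | greenish) = {unnorm_a / total:.2f}")  # ~0.53
print(f"P(B | greenish) = {unnorm_b / total:.2f}")  # ~0.47
```

One greenish result doesn’t overturn a strong prior, but it shifts the odds; accumulate enough distinct tests like this and Theory B overtakes Theory A, exactly as Feynman describes.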

Feynman’s third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect getting stronger and stronger, not weaker. He uses an excellent example here by analyzing mental telepathy:

I give an example. A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it’s a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you’d guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn’t count all the cases that didn’t work. And he just took the few that did, and then you can’t do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It’s a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don’t work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.

This echoes Feynman’s dictum about not fooling oneself: We must refine our process for probing and experimenting if we’re to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that results fit our expectations. If we carefully refine and re-test and the effect gets weaker all the time, it’s likely not true, or at least not of the magnitude originally hoped for.

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That’s not the problem. The problem is what is probable, what is happening. It does no good to demonstrate again and again that you can’t disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it’s reasonable, whether it’s likely. And we do that on the basis of a lot more experience than whether it’s just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it’s impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn’t mean that everything’s false. We’ll find out.

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it’s already happened. That’s cherry-picking. You have to run the experiment forward for it to mean anything:

I now turn to another kind of principle or idea, and that is that there is no sense in calculating the probability or the chance that something happens after it happens. A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. I can’t remember exactly. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And its hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”

He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

For example, I had the most remarkable experience this evening. While coming in here, I saw license plate ANZ 912. Calculate for me, please, the odds that of all the license plates in the state of Washington I should happen to see ANZ 912. Well, it’s a ridiculous thing. And, in the same way, what he must do is this: The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.
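
Feynman’s point is easy to demonstrate with a small simulation. Everything here is made up purely to illustrate the principle:

```python
import random

random.seed(1)  # reproducible toy example

def alternations(seq):
    # count left/right switches between consecutive trials
    return sum(a != b for a, b in zip(seq, seq[1:]))

# "Run the rats" once: 20 random left/right choices.
first_run = [random.choice("LR") for _ in range(20)]
k = alternations(first_run)
print(f"first run: {k} alternations out of 19 transitions")

# Post hoc, almost any single run contains *some* striking pattern.
# The honest procedure: state the hypothesis the data suggested
# ("rats alternate"), then collect NEW runs and test it on those.
fresh = [[random.choice("LR") for _ in range(20)] for _ in range(1000)]
frac = sum(alternations(r) >= k for r in fresh) / 1000
print(f"fraction of fresh random runs at least that 'alternating': {frac:.2f}")
```

The clue and the test must come from different data; that is the entire trick.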

The sixth trick is one that’s familiar to almost all of us, yet almost all of us forget about every day: The plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we’re talking about:

The next kind of technique that’s involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won’t go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick people out at random, and you see that maybe forty of them are more than six feet so you guess that maybe everybody is. Sounds stupid.

Well, it is and it isn’t. If you pick the hundred out by seeing which ones come through a low door, you’re going to get it wrong. If you pick the hundred out by looking at your friends you’ll get it wrong because they’re all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then, in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don’t realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.
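
Feynman’s 10,000-samples figure falls straight out of the standard error of a sample proportion. Here’s a quick check, using his hypothetical 40 percent as the true proportion:

```python
import math

# Standard error of a sample proportion: SE = sqrt(p * (1 - p) / n).
# "Correct to 1 percent" roughly means SE near half a percent
# (a 95% confidence interval is about +/- 2 * SE).
p = 0.4  # e.g., the fraction of people taller than six feet
for n in (100, 1_000, 10_000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>6}: standard error = {se:.2%}")
# n =    100: 4.90%;  n =  1,000: 1.55%;  n = 10,000: 0.49%
```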

The last trick is to realize that many errors people make simply come from lack of information. They don’t even know they’re missing the tools they need. This can be a very tough one to guard against — it’s hard to know when you’re missing information that would change your mind — but Feynman gives the simple case of astrology to prove the point:

Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it’s better to go to the dentist than other days. There are days when it’s better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it’s all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it’s a good day for business or a bad day for business has never been established. Now what of it? Maybe it’s still true, yes.

On the other hand, there’s an awful lot of information that indicates that it isn’t true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they’re going to be in the next 2000 years is completely known. They don’t have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don’t agree with each other, so what are you going to do? Disbelieve it. There’s no evidence at all for it. It’s pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn’t believe and made a test, and so on, then there’s no point in listening to them.

***

Still Interested? Check out the (short) book: The Meaning of it All: Thoughts of a Citizen-Scientist.