Tag: Learning

Lifelong Learning

“By three methods we may learn wisdom: First, by reflection, which is noblest; second, by imitation, which is easiest; and third, by experience, which is the bitterest.”

— Confucius

I’m a huge fan of Laurence Endersen’s book Pebbles of Perception: How a Few Good Choices Make All the Difference. I think it deserves to be on the shelf of every knowledge seeker in the world.

There is a chapter in the book on lifelong learning.

When we are captain of our own ship, life can be a wonderful continuous voyage of discovery. Yet we frequently pigeonhole our learning and discovery into limiting discrete blocks. There are the childhood years, filled with exploration and getting to know the world around us at a sensory level. The early school years follow, during which we are introduced to reading and writing. Middle school years bring a range of core subjects and some people will finish up the formal part of their education with university-level learning.

Then we add some work experience and attain a certain level of competence. From this perch we coast pretty well. It’s like driving. How many of us are getting better at driving? All of those hours behind the wheel are not deliberate practice. If you consider the product of the modern knowledge worker to be decisions, all of this coasting without getting better should concern you.

***

Lifelong Learning

The incentives to follow a path of lifelong learning are not easily apparent.

When assessing our competence in any particular discipline, we can place our level of ability somewhere along a continuum moving from ignorance, to conversational competence, to operational competence, then towards proficiency, and finally all the way to mastery. For most of us, if we get to operational competence in our main career area we are happy enough. We can get by and we don’t have to expend too much energy continuously learning. We become what I call flat-line learners. For the flat-line learner the learning curve might look something like this:

And yet, Endersen argues that if we pursue a path of lifelong learning our path more closely resembles this:

[Chart: Lifelong Learning – the lifelong learner’s learning curve]
You could even make an argument that lifelong learning puts you on a non-linear path, but I’ll leave that for you to think about.

The question remains: why doesn’t everyone want to become a lifelong learner?

It may boil down to choices and priorities. It is easy to be drawn towards passive entertainment, which requires less from us, over more energetic, active understanding. Inconvenience might be an alibi: “I don’t have time for continuous learning as I am too busy with real life”. But that excuse doesn’t withstand close scrutiny, as experiences (coupled with reflection) can be the richest of all sources of investigation and discovery.

Why not make a conscious decision to learn something new every day? No matter how small the daily learning, it is significant when aggregated over a lifetime. Resolving early in life to have a continuous learning mindset is not only more interesting than the passive alternative, it is also remarkably powerful. Choosing lifelong learning is one of the few good choices that can make a big difference in our lives, giving us an enormous advantage when practised over a long period of time.

***

Reflection

The ignorant man can’t learn from his own mistakes, and the fool can’t learn from the mistakes of others. These are the primary ways we learn: through our own experiences and through the experiences of others.

While both avenues have their place, there is no substitute for direct learning through experience – which we enhance through reflection. The process of thoughtful reflection makes our experiences more concrete, and helps with future recall and understanding. Reflecting on what we learned, how we felt, how we and others behaved, and what interests were at play hardwires the learning in our brain and gives us a depth of context and relevance that would otherwise be absent.

Even if it were desirable, which it’s not, there simply is not enough time to learn everything we need to know through direct experience.

***

Reading

“Reading,” writes Endersen, “is the foundation of indirect learning.” Learning how to read and finding time to read are two of the easiest and best changes you can make if you want to pursue lifelong learning.

Many read for entertainment. Some read for information. Too few read for understanding. Mortimer Adler’s book How To Read a Book is concerned with reading to understand. Being widely read is not the same as being well read. The more effort and skill we put into reading, the greater our understanding.

***

The Feynman Technique

As for testing whether we really understand something after we’ve read it, there is a powerful and elegant technique called the Feynman Technique.

Step 1. Choose the topic or concept that you are trying to understand. Take a blank piece of paper and write the name of the topic at the top.

Step 2. Assume you’re teaching the topic to someone else. Write out a clear explanation of the topic, as if you were trying to teach it. A great way to learn is to teach. You identify gaps in your knowledge very quickly when trying to explain something to someone else in simple terms.

Step 3. If you get bogged down, go back to the source materials. Keep going back until you can explain the concept in its most basic form.

Step 4. Go back and simplify your language. The goal is to use your own words, not the words of the source material. Overly elaborate language is often a sure sign that you don’t fully understand the concept. Use simple language and build on that with a clear analogy. An example that springs to mind is Warren Buffett’s explanation of compound interest (i.e., interest earned on interest), when he likened it to a snowball that gathers snow as it rolls down a hill.
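To see why the snowball analogy works, it helps to run a few numbers (these figures are my own illustration, not Endersen’s or Buffett’s): $100 earning 10% simple interest grows by a flat $10 a year, reaching $400 after 30 years. Compounded, the second year’s 10% is earned on $110 rather than $100, so the year ends at $121 instead of $120 – and after 30 years the same $100 has become roughly $1,745. The extra $1,345 is the snow the snowball gathered on the way down the hill.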

***

Lifelong learning is a better path than flat-line learning.

Savour experiences as opportunities to learn. Reflect on your experiences. Read regularly. Learn how to read for understanding. Know how to test whether you really understand something by demonstrating that you could teach it in simple terms with a clear analogy.

The best way to do that? Follow Einstein’s advice.

Ray Dalio: Open-Mindedness And The Power of Not Knowing

Ray Dalio, founder of the investment firm Bridgewater Associates, is held up as a prime example of what a learning organization looks like in the best book I’ve ever read on learning, Edward Hess’s Learn or Die: Using Science to Build a Leading-Edge Learning Organization. He comes to us again with this bit of unconventional wisdom.

First, the context …

To make money in the markets, you have to think independently and be humble. You have to be an independent thinker because you can’t make money agreeing with the consensus view, which is already embedded in the price. Yet whenever you’re betting against the consensus there’s a significant probability you’re going to be wrong, so you have to be humble.

Early in my career I learned this lesson the hard way — through some very painful bad bets. The biggest of these mistakes occurred in 1981–’82, when I became convinced that the U.S. economy was about to fall into a depression. My research had led me to believe that, with the Federal Reserve’s tight money policy and lots of debt outstanding, there would be a global wave of debt defaults, and if the Fed tried to handle it by printing money, inflation would accelerate. I was so certain that a depression was coming that I proclaimed it in newspaper columns, on TV, even in testimony to Congress. When Mexico defaulted on its debt in August 1982, I was sure I was right. Boy, was I wrong. What I’d considered improbable was exactly what happened: Fed chairman Paul Volcker’s move to lower interest rates and make money and credit available helped jump-start a bull market in stocks and the U.S. economy’s greatest ever noninflationary growth period.

What’s important isn’t that he was wrong, it’s what the experience taught him and how he implemented those lessons at Bridgewater.

This episode taught me the importance of always fearing being wrong, no matter how confident I am that I’m right. As a result, I began seeking out the smartest people I could find who disagreed with me so that I could understand their reasoning. Only after I fully grasped their points of view could I decide to reject or accept them. By doing this again and again over the years, not only have I increased my chances of being right, but I have also learned a huge amount.

There’s an art to this process of seeking out thoughtful disagreement. People who are successful at it realize that there is always some probability they might be wrong and that it’s worth the effort to consider what others are saying — not simply the others’ conclusions, but the reasoning behind them — to be assured that they aren’t making a mistake themselves. They approach disagreement with curiosity, not antagonism, and are what I call “open-minded and assertive at the same time.” This means that they possess the ability to calmly take in what other people are thinking rather than block it out, and to clearly lay out the reasons why they haven’t reached the same conclusion. They are able to listen carefully and objectively to the reasoning behind differing opinions.

When most people hear me describe this approach, they typically say, “No problem, I’m open-minded!” But what they really mean is that they’re open to being wrong. True open-mindedness is an entirely different mind-set. It is a process of being intensely worried about being wrong and asking questions instead of defending a position. It demands that you get over your ego-driven desire to have whatever answer you happen to have in your head be right. Instead, you need to actively question all of your opinions and seek out the reasoning behind alternative points of view.

Still curious? Check out my lengthy interview with Ed Hess.

How To Think

I wrote a response on Quora recently to the question ‘How do I become a better thinker?’ that generated a lot of attention and feedback, so I thought I’d build on it a little and post it here too.


Thinking is not IQ. When people talk about thinking, they make the mistake of assuming that people with high IQs think better. That’s not what I’m talking about. I hate to break it to you, but unless you’re trying to get into Mensa, IQ tests don’t matter as much as we think they do. After a certain point, that’s not the type of knowledge or brainpower that makes you better at life, happier, or more successful. It’s a measure, sure, but a relatively useless one.

If you want to outsmart people who are smarter than you, temperament and lifelong learning are more important than IQ.

Two of the guiding principles that I follow on my path towards seeking wisdom are: (1) Go to bed smarter than when you woke up; and (2) I’m not smart enough to figure everything out myself, so I want to ‘master the best of what other people have already figured out.’

Acquiring wisdom is hard. Learning how to think is hard. It means sifting through information, filtering the bunk, and connecting it to a framework that you can use. A lot of people want to get their opinions from someone else. I know this because whenever anyone blurts out an opinion and I ask why, I get some hastily rephrased sound bite that doesn’t contextualize the problem, identify the forces at play, demonstrate differences or similarities with previous situations, consider base rates, or … anything else that would demonstrate some level of thinking. (One of my favorite questions to probe thinking is to ask what information would cause someone to change their mind. Immediately stop listening and leave if they say ‘I can’t think of anything.’)

Thinking is hard work. I get it. You don’t have time to think, but that doesn’t mean you get a pass from me. I want to think for myself, thank you.

***

So one effective thing you can do if you want to think better is to become better at probing other people’s thinking. Ask questions. Simple ones are better. “Why” is the best. If you ask that three or four times you get to a place where you’re going to understand more and you’ll be able to tell who really knows what they are talking about. Shortcuts in thinking are easy, and this is how you tease them out. Not to make the other person look bad – don’t do this maliciously – but to avoid mistakes, air assumptions, and discuss conclusions.

Another thing you can do is to slow down. Make sure you give yourself time to think. I know, it’s a fast-paced internet world where we get some cultural machismo points for answering on the spot, but unless it has to be decided at that very moment, simply say “let me think about that for a bit and get back to you.” The world will not end while you think about it.

You should also probe yourself. Try and understand if you’re talking about something you really know something about or if you’re just regurgitating some talking head you heard on the news last night. Your life will become instantly better and your mind clearer if you simply stop the latter. You’re only fooling yourself and if you don’t understand the limits of what you know, you’re going to get in trouble.

***

Learning how to think really means continuously learning.

How can we do that?

First, we need a framework to put things on so we can remember them, integrate them, and make them available for use.

A Latticework of Mental Models, if you will.

Acquiring knowledge may seem like a daunting task. There is so much to know and time is precious. Luckily, we don’t have to master everything. To get the biggest bang for the buck we can study the big ideas from physics, biology, psychology, philosophy, literature, and sociology.

Our aim is not to remember facts and try to repeat them when asked. We’re going to try and hang these ideas on a latticework of mental models. Doing this puts them in a usable form and enables us to make better decisions.

A mental model is simply a representation of an external reality inside your head. Mental models are concerned with understanding knowledge about the world.

Decisions are more likely to be correct when ideas from multiple disciplines all point towards the same conclusion.

It’s like the old saying, “To the man with only a hammer, every problem looks like a nail.” Let’s make every attempt not to be the man with only a hammer.

Charlie Munger further elaborates:

And the models have to come from multiple disciplines because all the wisdom of the world is not to be found in one little academic department. That’s why poetry professors, by and large, are so unwise in a worldly sense. They don’t have enough models in their heads. So you’ve got to have models across a fair array of disciplines.

You may say, “My God, this is already getting way too tough.” But, fortunately, it isn’t that tough because 80 or 90 important models will carry about 90% of the freight in making you a worldly wise person. And, of those, only a mere handful really carry very heavy freight.

These models generally fall into two categories: (1) ones that help us simulate time (and predict the future) and better understand how the world works (e.g., understanding a useful idea like autocatalysis), and (2) ones that help us better understand how our mental processes lead us astray (e.g., availability bias).

When our mental models line up with reality, they help us avoid problems. However, they also cause problems when they don’t line up with reality, because then we believe something that isn’t true. So beware.

In his masterful book Seeking Wisdom, Peter Bevelin highlights Munger talking about autocatalysis:

If you get a certain kind of process going in chemistry, it speeds up on its own. So you get this marvellous boost in what you’re trying to do that runs on and on. Now, the laws of physics are such that it doesn’t run on forever. But it runs on for a goodly while. So you get a huge boost. You accomplish A – and, all of a sudden, you’re getting A + B + C for awhile.
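To make the shape of that boost concrete, here is a minimal sketch (in Python) of the textbook autocatalytic reaction A + X → 2X, in which the product X speeds up its own production until the supply of A runs out. The rate constant and starting amounts are arbitrary numbers I chose for illustration; this is a gloss on the general idea, not a model Munger or Bevelin give.

# A minimal sketch of an autocatalytic reaction A + X -> 2X: the product X
# catalyzes its own production, so the process speeds up on its own until
# the supply of A is exhausted. All numbers are illustrative.

def simulate(a=100.0, x=1.0, k=0.001, dt=1.0, steps=120):
    history = []
    for step in range(steps):
        rate = k * a * x               # the rate grows as X accumulates (the "boost")
        converted = min(rate * dt, a)  # can't convert more A than remains
        a -= converted
        x += converted
        history.append((step, x))
    return history

for step, x in simulate()[::20]:
    print(f"t={step:3d}  X={x:6.1f}")

Printed every 20 steps, X crawls at first, then surges, then flattens once A is used up – it runs on for a goodly while, but not forever, just as Munger describes.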

But knowing is not enough. You need to know how to apply this to other problems outside of the domain in which you learned it.

Munger continues:

Disney is an amazing example of autocatalysis … They had those movies in the can. They owned the copyright. And just as Coke could prosper when refrigeration came, when the videocassette was invented, Disney didn’t have to invent anything or do anything except take the thing out of the can and stick it on the cassette.

***

What models do we need?

I keep a running list that I’m filling in over time, but how we store and sort these models is really an individual preference. The framework is not a one-stop shop; what matters is how it fits into your brain.

How can we acquire these models?

There are several ways to acquire the models; the first, and probably the best, is reading. Even Warren Buffett says reading is one of the best ways to get wiser.

But sadly, if your goal is wisdom acquisition, you can’t just pick up a book and read it. You need to Learn How To Read A Book all over again. Most people look at my reading habits (What I’m Reading) and think that I speed read. I don’t. I think that’s a bunch of hot air. If you think you can pick up a book on a subject you’re unfamiliar with and in 30 minutes become an expert … well, good luck to you. Please go back to getting your opinions from Twitter.

Focus on the big, simple ideas.

Focus on deeply understanding the simple ideas (see Five Elements of Effective Thinking). These simple ideas, not the cutting-edge ones, are the ones you want to hang on your latticework. The latticework is important because it makes the knowledge usable – you not only recall it, you internalize it.

But the world is always changing … what should we learn first?

One of the biggest mistakes I see people making is to try and learn the cutting-edge research first. The way we prioritize learning has huge implications beyond the day-to-day. When we chase the latest thing, we’re really jumping into an arms race (see: The Red Queen Effect). We have to spend more and more of our time and energy to stay in the same place.

Despite our intentions, learning in this way fails to take advantage of cumulative knowledge. We’re not adding, we’re only maintaining.

If we are to prioritize learning, we should focus on ideas that change slowly – these tend to be the ones from the hard sciences (see Adding Mental Models to Your Toolbox).

The models that come from hard science and engineering are the most reliable models on this Earth. And engineering quality control – at least the guts of it that matters to you and me and people who are not professional engineers – is very much based on the elementary mathematics of Fermat and Pascal: It costs so much and you get so much less likelihood of it breaking if you spend this much… And, of course, the engineering idea of a backup system is a very powerful idea. The engineering idea of breakpoints – that’s a very powerful model, too. The notion of a critical mass – that comes out of physics – is a very powerful model.

To help further prioritize learning, here is a passage from What Should I Read?:

Knowledge has a half-life. The most useful knowledge is a broad-based multidisciplinary education of the basics. These ideas are ones that have lasted, and thus will last, for a long time. And by last, I mean mathematical expectation; I know what will happen in general but not each individual case.

Integrating Knowledge

(Source: Adding Mental Models to Your Toolbox)

Our world is multi-dimensional and our problems are complicated. Most problems cannot be solved using one model alone. The more models we have, the better able we are to rationally solve problems. But if we don’t have the models, we become the proverbial man with a hammer.

To the man with a hammer everything looks like a nail. If you only have one model you will fit whatever problem you face to the model you have. If you have more than one model, however, you can look at the problem from a variety of perspectives and increase the odds you come to a better solution.

No one discipline has all the answers; only by drawing on all of them can we grow worldly wisdom.

Charles Munger illustrates the importance of this:

Suppose you want to be good at declarer play in contract bridge. Well, you know the contract – you know what you have to achieve. And you can count up the sure winners you have by laying down your high cards and your invincible trumps.

But if you’re a trick or two short, how are you going to get the other needed tricks? Well, there are only six or so different, standard methods: You’ve got long-suit establishment. You’ve got finesses. You’ve got throw-in plays.

You’ve got cross-ruffs. You’ve got squeezes. And you’ve got various ways of misleading the defense into making errors. So it’s a very limited number of models. But if you only know one or two of those models, then you’re going to be a horse’s patoot in declarer play…

If you don’t have the full repertoire, I guarantee you that you’ll over-utilize the limited repertoire you have – including use of models that are inappropriate just because they’re available to you in the limited stock you have in mind.

As for how we can use different ideas, Munger again shows the way …

Have a full kit of tools … go through them in your mind checklist-style. … [Y]ou can never make any explanation that can be made in a more fundamental way in any other way than the most fundamental way.

When you combine things you get lollapalooza effects — the integration of more than one effect to create a non-linear response.

A two-step process for making effective decisions

There is no point in being wiser unless you use that wisdom for good. You know, as Uncle Ben put it to Peter Parker, “with great power comes great responsibility.”

(Source: A Two-step Process for Making Effective Decisions)

Personally, I’ve gotten so that I now use a kind of two-track analysis. First, what are the factors that really govern the interests involved, rationally considered? And second, what are the subconscious influences where the brain at a subconscious level is automatically doing these things – which by and large are useful, but which often misfunction.

One approach is rationality – the way you’d work out a bridge problem: by evaluating the real interests, the real probabilities and so forth. And the other is to evaluate the psychological factors that cause subconscious conclusions – many of which are wrong.

This is the path, the rest is up to you.

Elon Musk on How To Build Knowledge

Elon Musk recently did an AMA on Reddit. Here are three question-and-response pairs that I enjoyed, including his answer on how to build knowledge.

He knows how to say I don’t know.

Previously, you’ve stated that you estimate a 50% probability of success with the attempted landing on the automated spaceport drone ship tomorrow. Can you discuss the factors that were considered to make that estimation?

Musk: I pretty much made that up. I have no idea :)

Everyone has that one teacher…

I’m a teacher, and I always wonder what I can do to help my students achieve big things. What’s something your teachers did for you while you were in school that helped to encourage your ideas and thinking? Or, if they didn’t, what’s something they could have done better?

Musk: The best teacher I ever had was my elementary school principal. Our math teacher quit for some reason and he decided to sub in himself for math and accelerate the syllabus by a year.

We had to work like the house was on fire for the first half of the lesson and do extra homework, but then we got to hear stories of when he was a soldier in WWII. If you didn’t do the work, you didn’t get to hear the stories. Everybody did the work.

Finally, his answer on building knowledge reminds me of The Five Elements of Effective Thinking and the latticework of mental models.

How do you learn so much so fast? Lots of people read books and talk to other smart people, but you’ve taken it to a whole new level.

Musk: I do kinda feel like my head is full! My context switching penalty is high and my process isolation is not what it used to be.

Frankly, though, I think most people can learn a lot more than they think they can. They sell themselves short without trying.

One bit of advice: it is important to view knowledge as sort of a semantic tree — make sure you understand the fundamental principles, ie the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to.

Follow your curiosity to Elon Musk Recommends 12 Books.


Richard Feynman: The Difference Between Knowing the Name of Something and Knowing Something


Richard Feynman (1918-1988), who believed that “the world is much more interesting than any one discipline,” was no ordinary genius.

His explanations — on why questions, why trains stay on the tracks as they go around a curve, how we look for new laws of science, how rubber bands work — are simple and powerful. Even his letter writing moves you. His love letter to his wife, written sixteen months after her death, still stirs my soul.

In this short clip (below), Feynman articulates the difference between knowing the name of something and understanding it.

See that bird? It’s a brown-throated thrush, but in Germany it’s called a halzenfugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know nothing about the bird. You only know something about people; what they call the bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way.

Knowing the name of something doesn’t mean you understand it. We talk in fact-deficient, obfuscating generalities to cover up our lack of understanding.

How then should we go about learning? On this Feynman echoes Einstein, and proposes that we take things apart:

In order to talk to each other, we have to have words, and that’s all right. It’s a good idea to try to see the difference, and it’s a good idea to know when we are teaching the tools of science, such as words, and when we are teaching science itself.

[…]

There is a first grade science book which, in the first lesson of the first grade, begins in an unfortunate manner to teach science, because it starts off with the wrong idea of what science is. There is a picture of a dog – a windable toy dog – and a hand comes to the winder, and then the dog is able to move. Under the last picture, it says “What makes it move?” Later on, there is a picture of a real dog and the question, “What makes it move?” Then there is a picture of a motorbike and the question, “What makes it move?” and so on.

I thought at first they were getting ready to tell what science was going to be about–physics, biology, chemistry–but that wasn’t it. The answer was in the teacher’s edition of the book: the answer I was trying to learn is that “energy makes it move.”

Now, energy is a very subtle concept. It is very, very difficult to get right. What I mean is that it is not easy to understand energy well enough to use it right, so that you can deduce something correctly using the energy idea – it is beyond the first grade. It would be equally well to say that “God makes it move,” or “spirit makes it move,” or “movability makes it move.” (In fact, one could equally well say “energy makes it stop.”)

Look at it this way: that’s only the definition of energy; it should be reversed. We might say when something can move that it has energy in it, but not what makes it move is energy. This is a very subtle difference. It’s the same with this inertia proposition.

Perhaps I can make the difference a little clearer this way: If you ask a child what makes the toy dog move, you should think about what an ordinary human being would answer. The answer is that you wound up the spring; it tries to unwind and pushes the gear around.

What a good way to begin a science course! Take apart the toy; see how it works. See the cleverness of the gears; see the ratchets. Learn something about the toy, the way the toy is put together, the ingenuity of people devising the ratchets and other things. That’s good. The question is fine. The answer is a little unfortunate, because what they were trying to do is teach a definition of what is energy. But nothing whatever is learned.

[…]

I think for lesson number one, to learn a mystic formula for answering questions is very bad.

There is a way to test whether you understand the idea or only know the definition. It’s called the Feynman Technique and it looks like this:

Test it this way: you say, “Without using the new word which you have just learned, try to rephrase what you have just learned in your own language. Without using the word ‘energy,’ tell me what you know now about the dog’s motion.” You cannot. So you learned nothing about science. That may be all right. You may not want to learn something about science right away. You have to learn definitions. But for the very first lesson, is that not possibly destructive?

I think this is what Montaigne was hinting at in his Essays when he wrote:

We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbor’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?

Charlie Munger: Adding Mental Models to Your Mind’s Toolbox

In The Art of War, Sun Tzu said, “The general who wins a battle makes many calculations in his temple before the battle is fought.”

Those ‘calculations’ are the tools we have available to think better. One of the best questions we can ask is how to make our mental processes work better.

Charlie Munger says that “developing the habit of mastering the multiple models which underlie reality is the best thing you can do.”

Those models are mental models.

They fall into two categories: (1) ones that help us simulate time (and predict the future) and better understand how the world works (e.g., understanding a useful idea like autocatalysis), and (2) ones that help us better understand how our mental processes lead us astray (e.g., availability bias).

When our mental models line up with reality, they help us avoid problems. However, they also cause problems when they don’t line up with reality, because then we believe something that isn’t true.

Your Mind’s Toolbox

In Seeking Wisdom, Peter Bevelin highlights Munger talking about autocatalysis:

If you get a certain kind of process going in chemistry, it speeds up on its own. So you get this marvellous boost in what you’re trying to do that runs on and on. Now, the laws of physics are such that it doesn’t run on forever. But it runs on for a goodly while. So you get a huge boost. You accomplish A – and, all of a sudden, you’re getting A + B + C for awhile.

He continues telling us how this idea can be applied:

Disney is an amazing example of autocatalysis … They had those movies in the can. They owned the copyright. And just as Coke could prosper when refrigeration came, when the videocassette was invented, Disney didn’t have to invent anything or do anything except take the thing out of the can and stick it on the cassette.

***

This leads us to an interesting problem. The world is always changing so which models should we prioritize learning?

How we prioritize our learning has implications beyond the day-to-day. Often we focus on things that change quickly. We chase the latest study, the latest findings, the most recent best-sellers. We do this to keep up-to-date with the latest-and-greatest.

Despite our intentions, learning in this way fails to account for cumulative knowledge. Instead, we consume all of our time keeping up to date.

If we are to prioritize learning, we should focus on things that change slowly.

The models that come from hard science and engineering are the most reliable models on this Earth. And engineering quality control – at least the guts of it that matters to you and me and people who are not professional engineers – is very much based on the elementary mathematics of Fermat and Pascal: It costs so much and you get so much less likelihood of it breaking if you spend this much…

And, of course, the engineering idea of a backup system is a very powerful idea. The engineering idea of breakpoints – that’s a very powerful model, too. The notion of a critical mass – that comes out of physics – is a very powerful model.

After we learn a model we have to make it useful. We have to integrate it into our existing knowledge.

Our world is multi-dimensional and our problems are complicated. Most problems cannot be solved using one model alone. The more models we have, the better able we are to rationally solve problems.

But if we don’t have the models we become the proverbial man with a hammer. To the man with a hammer, everything looks like a nail. If you only have one model you will fit whatever problem you face to the model you have. If you have more than one model, however, you can look at the problem from a variety of perspectives and increase the odds you come to a better solution.

“Since no single discipline has all the answers,” Peter Bevelin writes in Seeking Wisdom, “we need to understand and use the big ideas from all the important disciplines: Mathematics, physics, chemistry, engineering, biology, psychology, and rank and use them in order of reliability.”

Charles Munger illustrates the importance of this:

Suppose you want to be good at declarer play in contract bridge. Well, you know the contract – you know what you have to achieve. And you can count up the sure winners you have by laying down your high cards and your invincible trumps.

But if you’re a trick or two short, how are you going to get the other needed tricks? Well, there are only six or so different, standard methods: You’ve got long-suit establishment. You’ve got finesses. You’ve got throw-in plays. You’ve got cross-ruffs. You’ve got squeezes. And you’ve got various ways of misleading the defense into making errors. So it’s a very limited number of models. But if you only know one or two of those models, then you’re going to be a horse’s patoot in declarer play…

If you don’t have the full repertoire, I guarantee you that you’ll overutilize the limited repertoire you have – including use of models that are inappropriate just because they’re available to you in the limited stock you have in mind.

As for how we can use different ideas, Munger again shows the way …

Have a full kit of tools … go through them in your mind checklist-style … you can never make any explanation that can be made in a more fundamental way in any other way than the most fundamental way. And you always take with full attribution to the most fundamental ideas that you are required to use. When you’re using physics, you say you’re using physics. When you’re using biology, you say you’re using biology.

But ideas alone are not enough. We need to understand how they interact and combine. This leads to lollapalooza effects.

You get lollapalooza effects when two, three or four forces are all operating in the same direction. And, frequently, you don’t get simple addition. It’s often like a critical mass in physics where you get a nuclear explosion if you get to a certain point of mass – and you don’t get anything much worth seeing if you don’t reach the mass.

Sometimes the forces just add like ordinary quantities and sometimes they combine on a break-point or critical-mass basis … More commonly, the forces coming out of … models are conflicting to some extent. And you get huge, miserable trade-offs … So you [must] have the models and you [must] see the relatedness and the effects from the relatedness.
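Munger’s distinction between forces that just add and forces that combine on a break-point basis is easy to see in a toy numerical contrast. The sketch below (in Python) uses a threshold and force sizes I made up purely for illustration; it is my own gloss on the idea, not anything Munger specifies.

# A toy contrast between simple addition and a critical-mass (break-point)
# combination of forces. The threshold and force sizes are made up purely
# to illustrate the shape of the effect.

def additive(forces):
    return sum(forces)  # ordinary quantities: effects simply add

def critical_mass(forces, threshold=10):
    total = sum(forces)
    if total < threshold:
        return 0        # below the break-point: nothing much worth seeing
    return total ** 2   # past it: a disproportionate, non-linear response

print(additive([3, 3, 3]))       # 9   - three forces, a modest combined effect
print(critical_mass([3, 3, 3]))  # 0   - just short of the critical mass
print(critical_mass([3, 3, 4]))  # 100 - one more small force tips it into a lollapalooza

Below the threshold, adding another force changes little; at the break-point, one more small push produces a disproportionate jump. That jump is the lollapalooza.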