The Feynman Learning Technique

If you’re after a way to supercharge your learning and become smarter, the Feynman Technique might just be the best way to learn absolutely anything. Devised by a Nobel Prize-winning physicist, it leverages the power of teaching for better learning.

The Feynman Learning Technique is a simple way of approaching anything new you want to learn.
Why use it? Because learning doesn’t happen from skimming through a book or remembering enough to pass a test. Information is learned when you can explain it and use it in a wide variety of situations. The Feynman Technique gets more mileage from the ideas you encounter instead of rendering anything new into isolated, useless factoids.

When you really learn something, you give yourself a tool to use for the rest of your life. The more you know, the fewer surprises you will encounter, because most new things will connect to something you already understand.

Ultimately, the point of learning is to understand the world. But most of us don’t bother to deliberately learn anything. We memorize what we need to as we move through school, then forget most of it. As we continue through life, we don’t extrapolate from our experiences to broaden the applicability of our knowledge. Consequently, life kicks us in the ass time and again.

To avoid the pain of being bewildered by the unexpected, the Feynman Technique helps you turn information into knowledge that you can access as easily as a shirt in your closet.

Let’s go.


The Feynman Technique

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.” —E.F. Schumacher

There are four steps to the Feynman Learning Technique, based on the method Richard Feynman originally used. We have adapted it slightly after reflecting on our own experiences using this process to learn. The steps are as follows:

  1. Pretend to teach a concept you want to learn to a sixth-grade student.
  2. Identify gaps in your explanation. Go back to the source material to better understand it.
  3. Organize and simplify.
  4. Transmit (optional).

Step 1: Pretend to teach it to a child or a rubber duck

Take out a blank sheet of paper. At the top, write the subject you want to learn. Now write out everything you know about the subject as if you were teaching it to a child or a rubber duck sitting on your desk. You are not teaching your smart adult friend, but rather a child who has just enough vocabulary and attention span to understand basic concepts and relationships.

Or, for a different angle on the Feynman Technique, you could place a rubber duck on your desk and try explaining the concept to it. Software engineers sometimes tackle debugging by explaining their code, line by line, to a rubber duck. The idea is that explaining something to a silly-looking inanimate object will force you to be as simple as possible.

It turns out that one of the ways we mask our lack of understanding is by using complicated vocabulary and jargon. The truth is, if you can’t define the words and terms you are using, you don’t really know what you’re talking about. If you look at a painting and describe it as “abstract” because that’s what you heard in art class, you aren’t displaying any comprehension of the painting. You’re just mimicking what you’ve heard. And you haven’t learned anything. You need to make sure your explanation isn’t above, say, a sixth-grade reading level by using easily accessible words and phrases.

When you write out an idea from start to finish in simple language that a child can understand, you force yourself to understand the concept at a deeper level and simplify relationships and connections between ideas. You can better explain the why behind your description of the what.

Looking at that same painting again, you will be able to say that the painting doesn’t display buildings like the ones we look at every day. Instead it uses certain shapes and colors to depict a city landscape. You will be able to point out what these are. You will be able to engage in speculation about why the artist chose those shapes and those colors. You will be able to explain why artists sometimes do this, and you will be able to communicate what you think of the piece considering all of this. Chances are, after capturing a full explanation of the painting in the simplest possible terms that would be easily understood by a sixth-grader, you will have learned a lot about that painting and abstract art in general.

Some of what you capture will be easy to explain. These are the places where you have a clear understanding of the subject. But you will find many places where things are much foggier.

Step 2: Identify gaps in your explanation

Areas where you struggle in Step 1 are the points where you have some gaps in your understanding.
Identifying gaps in your knowledge—where you forget something important, aren’t able to explain it, or simply have trouble thinking of how variables interact—is a critical part of the learning process. Filling those gaps is when you really make the learning stick.

Now that you know where you have gaps in your understanding, go back to the source material. Augment it with other sources. Look up definitions. Keep going until you can explain everything you need to in basic terms.

Only when you can explain an idea without jargon and in simple terms can you demonstrate your understanding. Think about it this way: if you require complicated terminology to explain what you know, you have no flexibility. When someone asks you a question, you can only repeat what you’ve already said.

Simple terms can be rearranged and easily combined with other words to communicate your point. When you can say something in multiple ways using different words, you understand it really well.
Being able to explain something in a simple, accessible way shows you’ve done the work required to learn. Skipping it leads to the illusion of knowledge—an illusion that can be quickly shattered when challenged.

Identifying the boundaries of your understanding is also a way of defining your circle of competence. When you know what you know (and are honest about what you don’t know), you limit the mistakes you’re liable to make and increase your chance of success when applying knowledge.

Step 3: Organize and simplify

Now you have a set of hand-crafted notes containing a simple explanation. Organize them into a narrative that you can tell from beginning to end. Read it out loud. If the explanation sounds confusing at any point, go back to Step 2. Keep iterating until you have a story that you can tell to anyone who will listen.

If you follow this approach over and over, you will end up with a binder full of pages on different subjects. If you take some time twice a year to go through this binder, you will find just how much you retain.

Step 4: Transmit (optional)

This part is optional, but it’s the logical result of everything you’ve just done. If you really want to be sure of your understanding, run it past someone (ideally someone who knows little of the subject). The ultimate test of your knowledge is your capacity to convey it to another. You can read out directly what you’ve written. You can present the material like a lecture. You can ask your friends for a few minutes of their time while you’re buying them dinner. You can volunteer as a guest speaker in your child’s classroom or your parents’ retirement residence. All that really matters is that you attempt to transmit the material to at least one person who isn’t that familiar with it.

The questions you get and the feedback you receive are invaluable for further developing your understanding. Hearing what your audience is curious about will likely pique your own curiosity and set you on a path for further learning. After all, it’s only when you begin to learn a few things really well that you appreciate how much there is to know.


The Feynman Technique is not only a wonderful recipe for learning but also a window into a different way of thinking that allows you to tear ideas apart and reconstruct them from the ground up.
When you’re having a conversation with someone and they start using words or concepts that you don’t understand, ask them to explain the idea to you like you’re twelve.

Not only will you supercharge your own learning, but you’ll also supercharge theirs.

Feynman’s approach implicitly treats intelligence as a process of growth, which dovetails nicely with the work of Carol Dweck, who describes the difference between a fixed and a growth mindset.

“If you can’t reduce a difficult engineering problem to just one 8-1/2 x 11-inch sheet of paper, you will probably never understand it.” —Ralph Peck

What does it mean to “know”?

Richard Feynman believed that “the world is much more interesting than any one discipline.” He understood the difference between knowing something and knowing the name of something, as well as how, when you truly know something, you can use that knowledge broadly. When you only know what something is called, you have no real sense of what it is. You can’t take it apart and play with it or use it to make new connections and generate new insights. When you know something, the labels are unimportant, because it’s not necessary to keep it in the box it came in.

“The person who says he knows what he thinks but cannot express it usually does not know what he thinks.” —Mortimer Adler

Feynman’s explanations—on why questions, why trains stay on the tracks as they go around a curve, how we look for new laws of science, or how rubber bands work—are simple and powerful. Here he articulates the difference between knowing the name of something and understanding it.

“See that bird? It’s a brown-throated thrush, but in Germany it’s called a halzenfugel, and in Chinese they call it a chung ling, and even if you know all those names for it, you still know nothing about the bird. You only know something about people: what they call the bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way.”

Knowing the name of something doesn’t mean you understand it. We talk in fact-deficient, obfuscating generalities to cover up our lack of understanding.

How then should we go about learning? On this Feynman echoes Albert Einstein and proposes that we take things apart. He describes a dismal first-grade science book that attempts to teach kids about energy by showing a series of pictures about a wind-up dog toy and asking, “What makes it move?” For Feynman, this was the wrong approach because it was too abstract. Saying that energy made the dog move was equal to saying “that ‘God makes it move,’ or ‘spirit makes it move,’ or ‘movability makes it move.’ (In fact, one could equally well say ‘energy makes it stop.’)”

Staying at the level of the abstract imparts no real understanding. Kids might subsequently get the question right on a test, if they have a decent memory. But they aren’t going to have any understanding of what energy actually is.

Feynman then goes on to describe a more useful approach:

“Perhaps I can make the difference a little clearer this way: if you ask a child what makes the toy dog move, you should think about what an ordinary human being would answer. The answer is that you wound up the spring; it tries to unwind and pushes the gear around.

What a good way to begin a science course! Take apart the toy; see how it works. See the cleverness of the gears; see the ratchets. Learn something about the toy, the way the toy is put together, the ingenuity of people devising the ratchets and other things. That’s good.”


After the Feynman Technique

“We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbor’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?” —Michel de Montaigne

The Feynman Technique helps you learn stuff. But learning doesn’t happen in isolation. We learn not only from the books we read but also from the people we talk to and the various positions, ideas, and opinions we are exposed to. Richard Feynman also provided advice on how to sort through information so you can decide what is relevant and what you should bother learning.

In a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen Scientist, Feynman talks through basic reasoning and some of the problems of his day. His method of evaluating information is another set of tools you can use along with the Feynman Learning Technique to refine what you learn.

Particularly useful are a series of “tricks of the trade” he gives in a section called “This Unscientific Age.” These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day.

Before we start, it’s worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy. It’s up to you to determine where applying these tricks might be most beneficial in your life.

Regardless of what you are trying to gather information on, these tricks help you dive deeper into topics and ideas and not get waylaid by inaccuracies or misunderstandings on your journey to truly know something.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone else truly knows their stuff or is mimicking others:

“My trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn’t know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away—bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don’t know. I used to be a general, and I don’t know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can’t tell you ahead of time what conclusion, but I can give you some of the principles I’ll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them, etc., etc., etc.””

If you learn something via the Feynman Technique, you will be able to answer questions on the subject. You can make educated analogies, extrapolate the principles to other situations, and easily admit what you do not know.

The second trick has to do with dealing with uncertainty. Very few ideas in life are absolutely true. What you want is to get as close to the truth as you can with the information available:

“I would like to mention a somewhat technical idea, but it’s the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it’s rather complicated, technically, but I’ll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It’s very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn’t impossible that it should turn sort of greenish color.”

So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven’t made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.”

Feynman is talking about grey thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn’t proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he’s proposing is Bayesian updating—starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
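Feynman’s idealized example maps directly onto Bayes’ rule. Here is a minimal sketch of the update; the prior and likelihood numbers are assumptions chosen to mirror the story, not anything Feynman specified:

```python
def update(prior_a, prior_b, like_a, like_b):
    """Bayes' rule: weight each theory's prior by how well it
    predicted the observation, then renormalize."""
    post_a = prior_a * like_a
    post_b = prior_b * like_b
    total = post_a + post_b
    return post_a / total, post_b / total

# Before the test, we are much more certain of Theory A.
p_a, p_b = 0.9, 0.1

# The thing turns sort of greenish: nearly impossible under A
# ("nothing should happen"), merely surprising under B.
p_a, p_b = update(p_a, p_b, like_a=0.01, like_b=0.3)
# Theory A is now getting weaker, Theory B stronger.

# A second, *different* test pointing the same way shifts the odds further.
p_a, p_b = update(p_a, p_b, like_a=0.05, like_b=0.5)
```

The particular likelihoods are illustrative; the mechanism is the point: multiply your prior odds by how well each theory predicted what you actually saw, and repeat with every new, independent observation.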

Feynman’s third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect getting stronger and stronger, not weaker. Knowledge is not static, and we need to be open to continually evaluating what we think we know. Here he uses an excellent example of analyzing mental telepathy:

“A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it’s a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you’d guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn’t count all the cases that didn’t work. And he just took the few that did, and then you can’t do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It’s a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don’t work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.”

We must refine our process for probing and experimenting if we’re to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that our results fit our expectations. If we carefully refine and re-test and the effect gets weaker all the time, it’s likely to not be true, or at least not to the magnitude originally hoped for.
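The selection effect Feynman describes, throwing away the runs where the subject was “tired,” is easy to simulate. This sketch assumes the classic card-guessing setup he mentions: 25 cards, five possible symbols, so pure guessing averages five correct:

```python
import random

random.seed(1)

def run_subject(n_cards=25, n_symbols=5):
    """One subject guessing randomly; chance performance is n_cards / n_symbols."""
    return sum(random.randrange(n_symbols) == random.randrange(n_symbols)
               for _ in range(n_cards))

scores = [run_subject() for _ in range(10_000)]
overall = sum(scores) / len(scores)  # hovers around 5: pure chance

# Post-hoc exclusion: discard the below-chance runs as "tired".
kept = [s for s in scores if s >= 5]
inflated = sum(kept) / len(kept)     # now noticeably above 5
```

There is no telepathy anywhere in this code, yet the filtered average beats chance, which is exactly why “the laws of statistics don’t work” once you discard data after looking at it.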

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

“That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That’s not the problem. The problem is what is probable, what is happening.

It does no good to demonstrate again and again that you can’t disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it’s reasonable, whether it’s likely. And we do that on the basis of a lot more experience than whether it’s just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it’s impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn’t mean that everything’s false. We’ll find out.”

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it’s already happened. That’s cherry-picking. You have to run the experiment forward for it to mean anything:

“A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”

He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.”
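A rough sketch of why the post-hoc calculation “doesn’t count”: the probability of a perfectly alternating run really is far below one in twenty, but the honest test is a fresh experiment. The numbers below are illustrative, not from the original study:

```python
import random

def alternates(seq):
    """True if every choice differs from the one before it."""
    return all(a != b for a, b in zip(seq, seq[1:]))

def p_alternate(n):
    # First choice is free; each of the remaining n - 1 must flip sides.
    return 1 / 2 ** (n - 1)

# Computed after spotting the pattern, the probability looks impressive...
p = p_alternate(20)  # well under 1/20

# ...but the data suggested the hypothesis, so the same data can't confirm it.
# The honest test is a new run, which almost never alternates:
random.seed(42)
fresh = [random.choice("LR") for _ in range(20)]
```

The trap is that after searching the data for a peculiarity, you are guaranteed to find *some* striking pattern; only a pre-registered test on fresh data tells you whether it is real.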

The sixth trick is one that’s familiar to almost all of us, yet almost all of us forget about every day: the plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we’re talking about:

“The next kind of technique that’s involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won’t go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick people out at random, and you see that maybe forty of them are more than six feet so you guess that maybe everybody is. Sounds stupid.

Well, it is and it isn’t. If you pick the hundred out by seeing which ones come through a low door, you’re going to get it wrong. If you pick the hundred out by looking at your friends, you’ll get it wrong, because they’re all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don’t realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.”
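Feynman’s figure of 10,000 samples for 1 percent accuracy checks out against the standard formula for the sampling error of a proportion. A quick sketch, taking two standard errors as the margin and forty percent as the observed share:

```python
import math

def margin_of_error(p, n, z=2.0):
    """Approximate two-sigma margin of error for a proportion
    estimated from n independent random samples."""
    return z * math.sqrt(p * (1 - p) / n)

moe_small = margin_of_error(0.4, 100)     # ~0.10: a hundred samples -> ~10 points
moe_large = margin_of_error(0.4, 10_000)  # ~0.01: ten thousand -> ~1 point
```

The error shrinks only with the square root of the sample size, so one hundred times more samples buys just ten times the accuracy, which is why high precision is so expensive.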

The last trick is to realize that many errors people make simply come from lack of information. They don’t even know they’re missing the tools they need. This can be a very tough one to guard against—it’s hard to know when you’re missing information that would change your mind—but Feynman gives the simple case of astrology to prove the point:

“Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it’s better to go to the dentist than other days. There are days when it’s better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it’s all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it’s a good day for business or a bad day for business has never been established. Now what of it? Maybe it’s still true, yes.

On the other hand, there’s an awful lot of information that indicates that it isn’t true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they’re going to be in the next 2,000 years is completely known. They don’t have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don’t agree with each other, so what are you going to do? Disbelieve it. There’s no evidence at all for it. It’s pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn’t believe and made a test, and so on, then there’s no point in listening to them.”




Knowing something is valuable. The more you understand about how the world works, the more options you have for dealing with the unexpected and the better you can create and capitalize on opportunities. The Feynman Learning Technique is a great method to develop mastery over sets of information. Once you do, the knowledge becomes a powerful tool at your disposal.

But as Feynman himself showed, being willing and able to question your knowledge and the knowledge of others is how you keep improving. Learning is a journey.

If you want to learn more about Feynman’s ideas and teachings, we recommend:

Surely You’re Joking, Mr. Feynman!: Adventures of a Curious Character

The Pleasure of Finding Things Out: The Best Short Works of Richard Feynman

What Do You Care What Other People Think?: Further Adventures of a Curious Character

12 Life Lessons From Mathematician and Philosopher Gian-Carlo Rota

The mathematician and philosopher Gian-Carlo Rota spent much of his career at MIT, where students adored him for his engaging, passionate lectures. In 1996, Rota gave a talk entitled “Ten Lessons I Wish I Had Been Taught,” which contains valuable advice for making people pay attention to your ideas.

Many mathematicians regard Rota as single-handedly responsible for turning combinatorics into a significant field of study. He specialized in functional analysis, probability theory, phenomenology, and combinatorics. The talk was later printed in his book, Indiscrete Thoughts.

Rota began by explaining that the advice we give others is always the advice we need to follow most. Seeing as it was too late for him to follow certain lessons, he decided he would share them with the audience. Here, we summarize twelve insights from Rota’s talk—which are fascinating and practical, even if you’re not a mathematician.


Every lecture should make only one point

“Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards.”

When we wish to communicate with people—in an article, an email to a coworker, a presentation, a text to a partner, and so on—it’s often best to stick to making one point at a time. This matters all the more if we’re trying to get our ideas across to a large audience.

If we make one point well enough, we can be optimistic about people understanding and remembering it. But if we try to fit too much in, “the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.”


Never run over time

“After fifty minutes (one microcentury as von Neumann used to say), everybody’s attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute over time can destroy the best of lectures.”

Rota considered running over the allotted time slot to be the worst thing a lecturer could do. Our attention spans are finite. After a certain point, we stop taking in new information.

In your work, it’s important to respect the time and attention of others. Put in the extra work required for brevity and clarity. Don’t expect them to find what you have to say as interesting as you do. Condensing and compressing your ideas both ensures you truly understand them and makes them easier for others to remember.


Relate to your audience

“As you enter the lecture hall, try to spot someone in the audience whose work you have some familiarity with. Quickly rearrange your presentation so as to manage to mention some of that person’s work.”

Reciprocity is remarkably persuasive. Sometimes, how people respond to your work has as much to do with how you respond to theirs as it does with the work itself. If you want people to pay attention to your work, always give before you take and pay attention to theirs first. Show that you see them and appreciate them. Rota explains that “everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.”

The less acknowledgment someone’s work has received, the more of an impact your attention is likely to have. A small act of encouragement can be enough to deter someone from quitting. With characteristic humor, Rota recounts:

“I have always felt miffed after reading a paper in which I felt I was not being given proper credit, and it is safe to conjecture that the same happens to everyone else. One day I tried an experiment. After writing a rather long paper, I began to draft a thorough bibliography. On the spur of the moment I decided to cite a few papers which had nothing whatsoever to do with the content of my paper to see what might happen.

Somewhat to my surprise, I received letters from two of the authors whose papers I believed were irrelevant to my article. Both letters were written in an emotionally charged tone. Each of the authors warmly congratulated me for being the first to acknowledge their contribution to the field.”


Give people something to take home

“I often meet, in airports, in the street, and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.”

When we have a conversation, read a book, or listen to a talk, the sad fact is that we are unlikely to remember much of it even a few hours later, let alone years after the event. Even if we enjoyed and valued it, only a small part will stick in our memory.

So when you’re communicating with people, try to be conscious about giving them something to take home. Choose a memorable line or idea, create a visual image, or use humor in your work.

For example, in The Righteous Mind, Jonathan Haidt repeats many times that the mind is like a tiny rider on a gigantic elephant. The rider represents controlled mental processes, while the elephant represents automatic ones. It’s a distinctive image, one readers are quite likely to take home with them.


Make sure the blackboard is spotless

“By starting with a spotless blackboard, you will subtly convey the impression that the lecture they are about to hear is equally spotless.”

Presentation matters. The way our work looks influences how people perceive it. Taking the time to clean our equivalent of a blackboard signals that we care about what we’re doing and consider it important.

In “How To Spot Bad Science,” we noted that one possible sign of bad science is that the research is presented in a thoughtless, messy way. Most researchers who take their work seriously will put in the extra effort to ensure it’s well presented.


Make it easy for people to take notes

“What we write on the blackboard should correspond to what we want an attentive listener to take down in his notebook. It is preferable to write slowly and in a large handwriting, with no abbreviations. Those members of the audience who are taking notes are doing us a favor, and it is up to us to help them with their copying.”

If a lecturer is using slides with writing on them instead of a blackboard, Rota adds that they should give people time to take notes. This might mean repeating themselves in a few different ways so each slide takes longer to explain (which ties in with the idea that every lecture should make only one point). Moving too fast with the expectation that people will look at the slides again later is “wishful thinking.”

When we present our work to people, we should make it simple for them to understand our ideas on the spot. We shouldn’t expect them to revisit it later. They might forget. And even if they don’t, we won’t be there to answer questions, take feedback, and clear up any misunderstandings.


Share the same work multiple times

Rota learned this lesson when he bought Collected Papers, a volume compiling the publications of mathematician Frederic Riesz. He noted that “the editors had gone out of their way to publish every little scrap Riesz had ever published.” Putting them all in one place revealed that he had published the same ideas multiple times:

Riesz would publish the first rough version of an idea in some obscure Hungarian journal. A few years later, he would send a series of notes to the French Academy’s Comptes Rendus in which the same material was further elaborated. A few more years would pass, and he would publish the definitive paper, either in French or in English.

Riesz would also develop his ideas while lecturing. Explaining the same subject again and again for years allowed him to keep improving it until he was ready to publish. Rota notes, “No wonder the final version was perfect.”

In our work, we might feel as if we need to have fresh ideas all of the time and that anything we share with others needs to be a finished product. But sometimes we can do our best work through an iterative process.

For example, a writer might start by sharing an idea as a tweet. This gets a good response, and the replies help them expand it into a blog post. From there they keep reworking the post over several years, making it longer and more definitive each time. They give a talk on the topic. Eventually, it becomes a book.

Award-winning comedian Chris Rock prepares for global tours by performing dozens of times in small venues for a handful of people. Each performance is an experiment to see which jokes land, which ones don’t, and which need tweaking. By the time he’s performed a routine forty or fifty times, making it better and better, he’s ready to share it with huge audiences.

Another reason to share the same work multiple times is that different people will see it each time and understand it in different ways:

“The mathematical community is split into small groups, each one with its own customs, notation, and terminology. It may soon be indispensable to present the same result in several versions, each one accessible to a specific group; the price one might have to pay otherwise is to have our work rediscovered by someone who uses a different language and notation, and who will rightly claim it as his own.”

Sharing your work multiple times thus has two benefits. The first is that the feedback allows you to improve and refine your work. The second is that you increase the chance of your work being definitively associated with you. If the core ideas are strong enough, they’ll shine through even in the initial incomplete versions.


You are more likely to be remembered for your expository work

“Allow me to digress with a personal reminiscence. I sometimes publish in a branch of philosophy called phenomenology. . . . It so happens that the fundamental treatises of phenomenology are written in thick, heavy philosophical German. Tradition demands that no examples ever be given of what one is talking about. One day I decided, not without serious misgivings, to publish a paper that was essentially an updating of some paragraphs from a book by Edmund Husserl, with a few examples added. While I was waiting for the worst at the next meeting of the Society for Phenomenology and Existential Philosophy, a prominent phenomenologist rushed towards me with a smile on his face. He was full of praise for my paper, and he strongly encouraged me to further develop the novel and original ideas presented in it.”

Rota realized that many of the mathematicians he admired the most were known more for their work explaining and building upon existing knowledge, as opposed to their entirely original work. Their extensive knowledge of their domain meant they could expand a little beyond their core specialization and synthesize charted territory.

For example, David Hilbert was best known for a textbook on integral equations which was “in large part expository, leaning on the work of Hellinger and several other mathematicians whose names are now forgotten.” William Feller was known for an influential treatise on probability, with few recalling his original work in convex geometry.

One of our core goals at Farnam Street is to share the best of what other people have already figured out. We all want to make original and creative contributions to the world. But the best ideas that are already out there are quite often much more useful than what we can contribute from scratch.

We should never be afraid to stand on the shoulders of giants.


Every mathematician has only a few tricks

“. . . mathematicians, even the very best, also rely on a few tricks which they use over and over.”

Upon reading the complete works of certain influential mathematicians, such as David Hilbert, Rota realized that they always used the same tricks again and again.

We don’t need to be amazing at everything to do high-quality work. The smartest and most successful people are often only good at a few things—or even one thing. Their secret is that they maximize those strengths and don’t get distracted. They define their circle of competence and don’t attempt things they’re not good at if there’s any room to double down further on what’s already going well.

It might seem as if this lesson contradicts the previous one (you are more likely to be remembered for your expository work), but there’s a key difference. Once you’ve hit diminishing returns on improving what’s already inside your circle of competence, it makes sense to experiment with things you have an aptitude for (or a strong suspicion you might) but haven’t yet made your focus.


Don’t worry about small mistakes

“Once more let me begin with Hilbert. When the Germans were planning to publish Hilbert’s collected papers and to present him with a set on the occasion of one of his later birthdays, they realized that they could not publish the papers in their original versions because they were full of errors, some of them quite serious. Thereupon they hired a young unemployed mathematician, Olga Taussky-Todd, to go over Hilbert’s papers and correct all mistakes. Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. . . . At last, on Hilbert’s birthday, a freshly printed set of Hilbert’s collected papers was presented to the Geheimrat. Hilbert leafed through them carefully and did not notice anything.”

Rota goes on to say: “There are two kinds of mistakes. There are fatal mistakes that destroy a theory; but there are also contingent ones, which are useful in testing the stability of a theory.”

Mistakes are either contingent or fatal. Contingent mistakes don’t completely ruin what you’re working on; fatal ones do. Building in a margin of safety (such as having a bit more time or funding than you expect to need) turns many fatal mistakes into contingent ones.

Contingent mistakes can even be useful. When details change, but the underlying theory is still sound, you know which details not to sweat.


Use Feynman’s method for solving problems

“Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: ‘How did he do it? He must be a genius!’”


Write informative introductions

“Nowadays, reading a mathematics paper from top to bottom is a rare event. If we wish our paper to be read, we had better provide our prospective readers with strong motivation to do so. A lengthy introduction, summarizing the history of the subject, giving everybody his due, and perhaps enticingly outlining the content of the paper in a discursive manner, will go some of the way towards getting us a couple of readers.”

As with the lesson about never running over time, respect that people have limited time and attention. Introductions are all about explaining what a piece of work is going to be about, what its purpose is, and why someone should be interested in it.

A job posting is an introduction to a company. The description on a calendar invite to a meeting is an introduction to that meeting. An about page is an introduction to an author. The subject line on a cold email is an introduction to that message. A course curriculum is an introduction to a class.

Putting extra effort into our introductions will help other people make an accurate assessment of whether they want to engage with the full thing. It will prime their minds for what to expect and answer some of their questions.


If you’re interested in learning more, check out Rota’s “10 Lessons of an MIT Education.”

An Investment Approach That Works

There are as many investment strategies as there are investment opportunities. Some are good; many are terrible. Here’s the one that I lean on the most when I’m looking for low risk and above average returns.


“The whole secret of investment is to find places where it’s safe and wise to non-diversify. It’s just that simple.”

— Charlie Munger

Goal: An investment algorithm to lean on hard when it is available. Low-risk, long-duration, above-average returns.

There are many paths to investment heaven (and we’ve written on the topic before). The diversity of working approaches demonstrates that. However, fundamentally, any useful approach must do three things:

  1. It must work in all conceivable financial environments.
  2. It must be within the “circle of competence” and “circle of interestingness” of its user.
  3. It must meet the moral criteria of its user.

I believe the system below satisfies the first criterion, and allows for all three.

A 7-Element Algorithm for Equity Investing

Everyone Wins
Successful businesses have indefinitely sustainable business systems: owners, employees, customers, suppliers, all content.

Ballast for the Storm
Look for a sustainable balance sheet, given the capricious nature of the world. Past bad events do not predict future bad events. Sometimes inefficient balance sheets allow companies to survive by positioning them for all environments, not just optimizing for one.

Different and Hard to Match
Candidates must occupy a structurally profitable, indefinitely sustainable business niche, allowing for the truism that all moats are subject to being crossed eventually. There must be an element of mystery.

Operational Soundness
Reliable execution of the “blocking and tackling” of operations is a must. Getting this wrong always costs big, and can ruin a good niche.

A Few Simple Variables
Allowing for the difficulty of predicting the future, candidates should have just a few reasonably predictable economic variables that will dominate their outcomes. In the words of Warren Buffett, “There are all kinds of businesses where we have no idea what they’ll earn this year, let alone any future year.” Look for boring investments; sexy is usually complicated and full of competition. Look for what’s staying the same.

Long Runway
You should be able to foresee an indefinite period of growth ahead, through some combination of market creation, market penetration, and pricing power.

Priced Attractively
Stock should be priced so that stock returns >= business returns, always including a margin for error in forward-looking estimates.
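One way to read this criterion is arithmetic: the return implied by the purchase price should meet or exceed the return of the underlying business, after haircutting your value estimate. The sketch below is an illustration with hypothetical numbers; the ten-year horizon, the 20% haircut, and the function names are my assumptions, not part of the original algorithm.

```python
# Hypothetical sketch of the "Priced Attractively" check:
# implied stock returns should meet or exceed business returns,
# after applying a margin for error to our value estimate.

def implied_stock_return(price: float, value_per_share: float,
                         business_return: float, years: int = 10) -> float:
    """Annualized stock return if intrinsic value compounds at
    `business_return` and price converges to value over `years`."""
    future_value = value_per_share * (1 + business_return) ** years
    return (future_value / price) ** (1 / years) - 1

def priced_attractively(price: float, value_per_share: float,
                        business_return: float,
                        margin_for_error: float = 0.20) -> bool:
    # Haircut the value estimate before comparing, since
    # forward-looking estimates are always uncertain.
    conservative_value = value_per_share * (1 - margin_for_error)
    return implied_stock_return(price, conservative_value,
                                business_return) >= business_return

# Paying $75 for an estimated $100 of value in a 10%-return
# business passes the test; paying $90 does not.
print(priced_attractively(75, 100, 0.10))   # True
print(priced_attractively(90, 100, 0.10))   # False
```

The haircut does the real work here: it forces the price to be low enough that even a too-optimistic value estimate still leaves stock returns at or above business returns.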

Seemingly missing is the concept of “good management,” but I consider this redundant in light of the first four elements. Any business meeting those criteria is being managed properly, and any investment going 7/7 has a very low probability of failure.

Notice that the word indefinite is used several times. This does not mean the same thing as infinite. There are no infinities in the business world. Indefinite means “as far out as can presently be seen.”

Caution: what psychologists call the “representativeness heuristic” puts us at risk of over-fitting to this or any algorithm. Always be on guard. In the words of Richard Feynman, “Never fool yourself, and remember that you are the easiest person to fool.”

The Best of Farnam Street 2018

We read for the same reasons we have conversations — to enrich our lives.

Reading helps us to think, feel, and reflect — not only upon ourselves and others but upon our ideas, and our relationship with the world. Reading deepens our understanding and helps us live consciously.

Of the 46 articles we published on FS this year, here are the top ten as measured by a combination of page views, responses, and feeling.

  1. Smarter, Not Harder: How to Succeed at Work — We each have 96 energy blocks each day to spend however we’d like. Using this energy blocking system will ensure you’re spending each block wisely.
  2. Your First Thought Is Rarely Your Best Thought: Lessons on Thinking — Most people have no time to think. They schedule themselves like lawyers. They work in five- to eight-minute increments, scheduled back to back. They think only in first thoughts, never in second thoughts.
  3. The Pygmalion Effect: Proving Them Right — The Pygmalion Effect is a powerful secret weapon. Without even realizing it, we can nudge others towards success. In this article, discover how expectations can influence performance for better or worse.
  4. First Principles: The Building Blocks of True Knowledge — First Principles thinking breaks down true understanding into building blocks we can reassemble. It turns out most of us don’t know as much as we think we do.
  5. Understanding Speed and Velocity: Saying “NO” to the Non-Essential — It’s tempting to think that in order to be a valuable team player, you should say “yes” to every request and task that is asked of you. People who say yes to everything have a lot of speed. They’re always doing stuff but never getting anything done. Why? Because they don’t think in terms of velocity. Understanding the difference between speed and velocity will change how you work.
  6. The Surprising Power of The Long Game — In everything we do, we play the long or the short game. The short game is easy, pleasurable, and offers visible and immediate benefits. But it almost never leads to success. Here’s how to play the long game.
  7. Double Loop Learning: Download New Skills and Information into Your Brain — We’re taught single loop learning from the time we are in grade school, but there’s a better way. Double loop learning is the quickest and most efficient way to learn anything that you want to “stick.”
  8. Complexity Bias: Why We Prefer Complicated to Simple — Complexity bias is a logical fallacy that leads us to give undue credence to complex concepts. Faced with two competing hypotheses, we are likely to choose the most complex one.
  9. Deductive vs Inductive Reasoning: Make Smarter Arguments, Better Decisions, and Stronger Conclusions — You can’t prove the truth, but using deductive and inductive reasoning, you can get close. Learn the difference between the two types of reasoning and how to use them when evaluating facts and arguments.
  10. The Decision Matrix: How to Prioritize What Matters — The decision matrix is a powerful tool to help you prioritize which decisions deserve your attention as a leader, and which should be delegated. Here’s how you can start using it today.

More interesting things you might have missed

Thank you

As we touched on in the annual letter, it’s been a wonderful year at FS. While the frequency of our articles decreased in 2018, the words published actually increased. As longtime readers know, we are not bound by frequency or length constraints; our only mission is quality. Next year will see a more eclectic mix of content as we get back to our roots.

Thank you for an amazing 2018 and I’m looking forward to learning new things with you in 2019.

Still curious? You can find the top five podcast episodes in 2018 here.

First Principles: The Building Blocks of True Knowledge

First Principles

The Great Mental Models Volumes One and Two are out.
Learn more about the project here.

First-principles thinking is one of the best ways to reverse-engineer complicated problems and unleash creative possibility. Sometimes called “reasoning from first principles,” the idea is to break down complicated problems into basic elements and then reassemble them from the ground up. It’s one of the best ways to learn to think for yourself, unlock your creative potential, and move from linear to non-linear results.

This approach was used by the philosopher Aristotle and is used now by Elon Musk and Charlie Munger. It allows them to cut through the fog of shoddy reasoning and inadequate analogies to see opportunities that others miss.

“I don’t know what’s the matter with people: they don’t learn by understanding; they learn by some other way—by rote or something. Their knowledge is so fragile!”

— Richard Feynman

The Basics

A first principle is a foundational proposition or assumption that stands alone. We cannot deduce first principles from any other proposition or assumption.

Aristotle, writing[1] on first principles, said:

In every systematic inquiry (methodos) where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these; for we think we know something just in case we acquire knowledge of the primary causes, the primary first principles, all the way to the elements.

Later he connected the idea to knowledge, defining first principles as “the first basis from which a thing is known.”[2]

The search for first principles is not unique to philosophy. All great thinkers do it.

Reasoning by first principles removes the impurity of assumptions and conventions. What remains are the essentials. It’s one of the best mental models you can use to improve your thinking because the essentials allow you to see where reasoning by analogy might lead you astray.

The Coach and the Play Stealer

My friend Mike Lombardi (a former NFL executive) and I were having dinner in L.A. one night, and he said, “Not everyone that’s a coach is really a coach. Some of them are just play stealers.”

Every play we see in the NFL was at some point created by someone who thought, “What would happen if the players did this?” and went out and tested the idea. Since then, thousands, if not millions, of plays have been created. That’s part of what coaches do. They assess what’s physically possible, along with the weaknesses of the other teams and the capabilities of their own players, and create plays that are designed to give their teams an advantage.

The coach reasons from first principles. The rules of football are the first principles: they govern what you can and can’t do. Everything is possible as long as it’s not against the rules.

The play stealer works off what’s already been done. Sure, maybe he adds a tweak here or there, but by and large he’s just copying something that someone else created.

While both the coach and the play stealer start from something that already exists, they generally have different results. These two people look the same to most of us on the sidelines or watching the game on the TV. Indeed, they look the same most of the time, but when something goes wrong, the difference shows. Both the coach and the play stealer call successful plays and unsuccessful plays. Only the coach, however, can determine why a play was successful or unsuccessful and figure out how to adjust it. The coach, unlike the play stealer, understands what the play was designed to accomplish and where it went wrong, so he can easily course-correct. The play stealer has no idea what’s going on. He doesn’t understand the difference between something that didn’t work and something that played into the other team’s strengths.

Musk would identify the play stealer as the person who reasons by analogy, and the coach as someone who reasons by first principles. When you run a team, you want a coach in charge and not a play stealer. (If you’re a sports fan, you need only look at the difference between the Cleveland Browns and the New England Patriots.)

We’re all somewhere on the spectrum between coach and play stealer. We reason by first principles, by analogy, or a blend of the two.

Another way to think about this distinction comes from another friend, Tim Urban. He says[3] it’s like the difference between the cook and the chef. While these terms are often used interchangeably, there is an important nuance. The chef is a trailblazer, the person who invents recipes. He knows the raw ingredients and how to combine them. The cook, who reasons by analogy, uses a recipe. He creates something, perhaps with slight variations, that’s already been created.

The difference between reasoning by first principles and reasoning by analogy is like the difference between being a chef and being a cook. If the cook lost the recipe, he’d be screwed. The chef, on the other hand, understands the flavor profiles and combinations at such a fundamental level that he doesn’t even use a recipe. He has real knowledge as opposed to know-how.


So much of what we believe is based on some authority figure telling us that something is true. As children, we learn to stop questioning when we’re told “Because I said so.” (More on this later.) As adults, we learn to stop questioning when people say “Because that’s how it works.” The implicit message is “understanding be damned — shut up and stop bothering me.” It’s not intentional or personal. OK, sometimes it’s personal, but most of the time, it’s not.

If you outright reject dogma, you often become a problem: a student who is always pestering the teacher. A kid who is always asking questions and never allowing you to cook dinner in peace. An employee who is always slowing things down by asking why.

When you can’t change your mind, though, you die. Sears was once thought indestructible before Wal-Mart took over. Sears failed to see the world change. Adapting to change is an incredibly hard thing to do when it comes into conflict with the very thing that caused so much success. As Upton Sinclair aptly pointed out, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Wal-Mart failed to see the world change and is now under assault from Amazon.

If we never learn to take something apart, test the assumptions, and reconstruct it, we end up trapped in what other people tell us — trapped in the way things have always been done. When the environment changes, we just continue as if things were the same.

First-principles reasoning cuts through dogma and removes the blinders. We can see the world as it is and see what is possible.

When it comes down to it, everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So are bitcoins. The list goes on.

Some of us are naturally skeptical of what we’re told. Maybe it doesn’t match up to our experiences. Maybe it’s something that used to be true but isn’t true anymore. And maybe we just think very differently about something.

“To understand is to know what to do.”

— Wittgenstein

Techniques for Establishing First Principles

There are many ways to establish first principles. Let’s take a look at a few of them.

Socratic Questioning

Socratic questioning can be used to establish first principles through stringent analysis. This is a disciplined questioning process, used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance. The key distinction between Socratic questioning and normal discussions is that the former seeks to draw out first principles in a systematic manner. Socratic questioning generally follows this process:

  1. Clarifying your thinking and explaining the origins of your ideas (Why do I think this? What exactly do I think?)
  2. Challenging assumptions (How do I know this is true? What if I thought the opposite?)
  3. Looking for evidence (How can I back this up? What are the sources?)
  4. Considering alternative perspectives (What might others think? How do I know I am correct?)
  5. Examining consequences and implications (What if I am wrong? What are the consequences if I am?)
  6. Questioning the original questions (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)

This process stops you from relying on your gut and limits strong emotional responses, helping you build something that lasts.

“Because I Said So” or “The Five Whys”

Children instinctively think in first principles. Just like us, they want to understand what’s happening in the world. To do so, they intuitively break through the fog with a game some parents have come to hate.




Here’s an example that has played out numerous times at my house:

“It’s time to brush our teeth and get ready for bed.”

“Why?”

“Because we need to take care of our bodies, and that means we need sleep.”

“Why do we need sleep?”

“Because we’d die if we never slept.”

“Why would that make us die?”

“I don’t know; let’s go look it up.”

Kids are just trying to understand why adults are saying something or why they want them to do something.

The first time your kid plays this game, it’s cute, but for most teachers and parents, it eventually becomes annoying. Then the answer becomes what my mom used to tell me: “Because I said so!” (Love you, Mom.)

Of course, I’m not always that patient with the kids. For example, I get testy when we’re late for school, or we’ve been travelling for 12 hours, or I’m trying to fit too much into the time we have. Still, I try never to say “Because I said so.”

People hate the “because I said so” response for two reasons, both of which play out in the corporate world as well. The first is that we feel the questioning slows us down: we know what we want to accomplish, and it creates unnecessary drag. The second is that after one or two questions, we are often lost. We actually don’t know why. Confronted with our own ignorance, we resort to self-defense.

I remember being in meetings and asking people why we were doing something this way or why they thought something was true. At first, there was a mild tolerance for this approach. After three “whys,” though, you often find yourself on the other end of some version of “we can take this offline.”

Can you imagine how that would play out with Elon Musk? Richard Feynman? Charlie Munger? Musk would build a billion-dollar business to prove you wrong, Feynman would think you’re an idiot, and Munger would profit based on your inability to think through a problem.

“Science is a way of thinking much more than it is a body of knowledge.”

— Carl Sagan

Examples of First Principles in Action

So we can better understand how first-principles reasoning works, let’s look at a few examples.

Elon Musk and SpaceX

Perhaps no one embodies first-principles thinking more than Elon Musk. He is one of the most audacious entrepreneurs the world has ever seen. My kids (grades 3 and 2) refer to him as a real-life Tony Stark, thereby conveniently providing a good time for me to remind them that by fourth grade, Musk was reading the Encyclopedia Britannica and not Pokemon.

What’s most interesting about Musk is not what he thinks but how he thinks:

I think people’s thinking process is too bound by convention or analogy to prior experiences. It’s rare that people try to think of something on a first principles basis. They’ll say, “We’ll do that because it’s always been done that way.” Or they’ll not do it because “Well, nobody’s ever done that, so it must not be good.” But that’s just a ridiculous way to think. You have to build up the reasoning from the ground up—“from the first principles” is the phrase that’s used in physics. You look at the fundamentals and construct your reasoning from that, and then you see if you have a conclusion that works or doesn’t work, and it may or may not be different from what people have done in the past.[4]

His approach to understanding reality is to start with what is true — not with his intuition. The problem is that we don’t know as much as we think we do, so our intuition isn’t very good. We trick ourselves into thinking we know what’s possible and what’s not. The way Musk thinks is much different.

Musk starts out with something he wants to achieve, like building a rocket. Then he starts with the first principles of the problem. Running through how Musk would think, Larry Page said in an interview, “What are the physics of it? How much time will it take? How much will it cost? How much cheaper can I make it? There’s this level of engineering and physics that you need to make judgments about what’s possible and interesting. Elon is unusual in that he knows that, and he also knows business and organization and leadership and governmental issues.”[5]

Rockets are absurdly expensive, which is a problem because Musk wants to send people to Mars. And to send people to Mars, you need cheaper rockets. So he asked himself, “What is a rocket made of? Aerospace-grade aluminum alloys, plus some titanium, copper, and carbon fiber. And … what is the value of those materials on the commodity market? It turned out that the materials cost of a rocket was around two percent of the typical price.”[6]

Why, then, is it so expensive to get a rocket into space? Musk, a notorious self-learner with degrees in both economics and physics, literally taught himself rocket science. He figured that the only reason rockets are so expensive is that people are stuck in a mindset that doesn’t hold up to first principles. With that, Musk decided to create SpaceX and see if he could build rockets himself from the ground up.

In an interview with Kevin Rose, Musk summarized his approach:

I think it’s important to reason from first principles rather than by analogy. So the normal way we conduct our lives is, we reason by analogy. We are doing this because it’s like something else that was done, or it is like what other people are doing… with slight iterations on a theme. And it’s … mentally easier to reason by analogy rather than from first principles. First principles is kind of a physics way of looking at the world, and what that really means is, you … boil things down to the most fundamental truths and say, “okay, what are we sure is true?” … and then reason up from there. That takes a lot more mental energy.[7]

Musk then gave an example of how SpaceX uses first principles to innovate at low prices:

Somebody could say — and in fact people do — that battery packs are really expensive and that’s just the way they will always be because that’s the way they have been in the past. … Well, no, that’s pretty dumb… Because if you applied that reasoning to anything new, then you wouldn’t be able to ever get to that new thing…. you can’t say, … “oh, nobody wants a car because horses are great, and we’re used to them and they can eat grass and there’s lots of grass all over the place and … there’s no gasoline that people can buy….”

He then gave a fascinating example about battery packs:

… they would say, “historically, it costs $600 per kilowatt-hour. And so it’s not going to be much better than that in the future.” … So the first principles would be, … what are the material constituents of the batteries? What is the spot market value of the material constituents? … It’s got cobalt, nickel, aluminum, carbon, and some polymers for separation, and a steel can. So break that down on a material basis; if we bought that on a London Metal Exchange, what would each of these things cost? Oh, jeez, it’s … $80 per kilowatt-hour. So, clearly, you just need to think of clever ways to take those materials and combine them into the shape of a battery cell, and you can have batteries that are much, much cheaper than anyone realizes.
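To make the arithmetic in that quote concrete, here is a toy sketch of the calculation. Every per-kilogram price and mass figure below is invented for illustration (they are not real commodity quotes); the point is only the shape of the reasoning: price the constituent materials, sum them, and compare that floor to the historical price.

```python
# First-principles cost floor for a battery pack, in the spirit of Musk's
# example. All per-kg prices and kg-per-kWh figures are made up.
materials = {
    # name: (usd_per_kg, kg_per_kwh) -- both assumed for illustration
    "cobalt":            (30.0, 1.0),
    "nickel":            (15.0, 1.5),
    "aluminum":          (2.0,  1.0),
    "carbon":            (10.0, 1.5),
    "polymer_separator": (5.0,  0.5),
    "steel_can":         (1.0,  2.0),
}

# Material floor: sum of (spot price x mass needed) across constituents.
material_cost_per_kwh = sum(price * mass for price, mass in materials.values())
historical_price_per_kwh = 600.0  # figure quoted in the text

print(f"material floor: ${material_cost_per_kwh:.0f}/kWh")
print(f"gap vs historical: {historical_price_per_kwh / material_cost_per_kwh:.1f}x")
```

With these invented inputs, the material floor comes out around $74/kWh, an eightfold gap to the $600 historical figure. The gap is where the first-principles opportunity lives: everything above the floor is manufacturing, assembly, and convention, not physics.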


Jonah Peretti and BuzzFeed

After studying the psychology of virality, Jonah Peretti founded BuzzFeed in 2006. The site quickly grew to be one of the most popular on the internet, with hundreds of employees and substantial revenue.

Peretti figured out early on the first principle of a successful website: wide distribution. Rather than publishing articles people should read, BuzzFeed focuses on publishing those that people want to read. This means aiming to garner maximum social shares to put distribution in the hands of readers.

Peretti recognized the first principles of online popularity and used them to take a new approach to journalism. He also ignored SEO, saying, “Instead of making content robots like, it was more satisfying to make content humans want to share.”[8] Unfortunately for us, we share a lot of cat videos.

A common aphorism in the field of viral marketing is, “content might be king, but distribution is queen, and she wears the pants” (or “and she has the dragons”; pick your metaphor). BuzzFeed’s distribution-based approach is based on obsessive measurement, using A/B testing and analytics.
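As a sketch of what that obsessive measurement looks like in practice, here is a minimal A/B comparison of two headlines. The click and view counts are invented, and this is a generic two-proportion z-test, not BuzzFeed’s actual tooling:

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Z-statistic comparing two click-through rates, using a pooled rate."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Invented numbers: headline B earns more clicks per view than headline A.
z = two_proportion_z(clicks_a=120, views_a=10_000, clicks_b=180, views_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the gap is unlikely to be chance
```

Run enough headlines through a test like this and distribution stops being a guess; you ship the variant the readers have already voted for.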

Jon Steinberg, president of BuzzFeed, explains the first principles of virality:

Keep it short. Ensure [that] the story has a human aspect. Give people the chance to engage. And let them react. People mustn’t feel awkward sharing it. It must feel authentic. Images and lists work. The headline must be persuasive and direct.

Derek Sivers and CD Baby

When Sivers founded his company CD Baby, he reduced the concept to first principles. Sivers asked, “What does a successful business need?” His answer was happy customers.

Instead of focusing on garnering investors or having large offices, fancy systems, or huge numbers of staff, Sivers focused on making each of his customers happy. An example of this is his famous order confirmation email, part of which reads:

Your CD has been gently taken from our CD Baby shelves with sterilized contamination-free gloves and placed onto a satin pillow. A team of 50 employees inspected your CD and polished it to make sure it was in the best possible condition before mailing. Our packing specialist from Japan lit a candle and a hush fell over the crowd as he put your CD into the finest gold-lined box money can buy.

By ignoring unnecessary details that cause many businesses to expend large amounts of money and time, Sivers was able to rapidly grow the company to $4 million in monthly revenue. In Anything You Want, Sivers wrote:

Having no funding was a huge advantage for me.
A year after I started CD Baby, the dot-com boom happened. Anyone with a little hot air and a vague plan was given millions of dollars by investors. It was ridiculous. …
Even years later, the desks were just planks of wood on cinder blocks from the hardware store. I made the office computers myself from parts. My well-funded friends would spend $100,000 to buy something I made myself for $1,000. They did it saying, “We need the very best,” but it didn’t improve anything for their customers. …
It’s counterintuitive, but the way to grow your business is to focus entirely on your existing customers. Just thrill them, and they’ll tell everyone.

To survive as a business, you need to treat your customers well. And yet so few of us master this principle.

Employing First Principles in Your Daily Life

Most of us have no problem thinking about what we want to achieve in life, at least when we’re young. We’re full of big dreams, big ideas, and boundless energy. The problem is that we let others tell us what’s possible, not only when it comes to our dreams but also when it comes to how we go after them. And when we let other people tell us what’s possible or what the best way to do something is, we outsource our thinking to someone else.

The real power of first-principles thinking is moving away from incremental improvement and into possibility. Letting others think for us means that we’re using their analogies, their conventions, and their possibilities. It means we’ve inherited a world that conforms to what they think. This is incremental thinking.

When we take what already exists and improve on it, we are in the shadow of others. It’s only when we step back, ask ourselves what’s possible, and cut through the flawed analogies that we see what is possible. Analogies are beneficial; they make complex problems easier to communicate and increase understanding. Using them, however, is not without a cost. They limit our beliefs about what’s possible and allow people to argue without ever exposing our (faulty) thinking. Analogies move us to see the problem in the same way that someone else sees the problem.

The gulf between what people currently see because their thinking is framed by someone else and what is physically possible is filled by the people who use first principles to think through problems.

First-principles thinking clears the clutter of what we’ve told ourselves and allows us to rebuild from the ground up. Sure, it’s a lot of work, but that’s why so few people are willing to do it. It’s also why the rewards for filling the chasm between possible and incremental improvement tend to be non-linear.

Let’s take a look at a few of the limiting beliefs that we tell ourselves.

“I don’t have a good memory.” [10]
People have far better memories than they think they do. Saying you don’t have a good memory is just a convenient excuse to let yourself off the hook. Taking a first-principles approach means asking how much information we can physically store in our minds. The answer is “a lot more than you think.” Now that we know it’s possible to put more into our brains, we can reframe the problem into finding the best way to store information in our brains.

“There is too much information out there.”
A lot of professional investors read Farnam Street. When I meet these people and ask how they consume information, they usually fall into one of two categories. The differences between the two apply to all of us. The first type of investor says there is too much information to consume. They spend their days reading every press release, article, and blogger commenting on a position they hold. They wonder what they are missing. The second type of investor realizes that reading everything is unsustainable and stressful and makes them prone to overvaluing information they’ve spent a great amount of time consuming. These investors, instead, seek to understand the variables that will affect their investments. While there might be hundreds, there are usually three to five variables that will really move the needle. The investors don’t have to read everything; they just pay attention to these variables.

“All the good ideas are taken.”
A common way that people limit what’s possible is to tell themselves that all the good ideas are taken. Yet, people have been saying this for hundreds of years — literally — and companies keep starting and competing with different ideas, variations, and strategies.

“We need to move first.”
I’ve heard this in boardrooms for years. The answer isn’t as black and white as this statement. The iPhone wasn’t first; it was better. Microsoft wasn’t the first to sell operating systems; it just had a better business model. There is a lot of evidence showing that first movers in business are more likely to fail than latecomers. Yet this myth about the need to move first continues to exist.

Sometimes the early bird gets the worm and sometimes the first mouse gets killed. You have to break each situation down into its component parts and see what’s possible. That is the work of first-principles thinking.

“I can’t do that; it’s never been done before.”
People like Elon Musk are constantly doing things that have never been done before. This type of thinking is analogous to looking back at history and building, say, floodwalls, based on the worst flood that has happened before. A better bet is to look at what could happen and plan for that.

“As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.”

— Harrington Emerson


The thoughts of others imprison us if we’re not thinking for ourselves.

Reasoning from first principles allows us to step outside of history and conventional wisdom and see what is possible. When you really understand the principles at work, you can decide if the existing methods make sense. Often they don’t.

Reasoning by first principles is useful when you are (1) doing something for the first time, (2) dealing with complexity, and (3) trying to understand a situation that you’re having problems with. In all of these areas, your thinking gets better when you stop making assumptions and you stop letting others frame the problem for you.

Analogies can’t replace understanding. While it’s easier on your brain to reason by analogy, you’re more likely to come up with better answers when you reason by first principles. This is what makes it one of the best sources of creative thinking. Thinking in first principles allows you to adapt to a changing environment, deal with reality, and seize opportunities that others can’t see.

Many people mistakenly believe that creativity is something that only some of us are born with, and either we have it or we don’t. Fortunately, there seems to be ample evidence that this isn’t true.[11] We’re all born rather creative, but during our formative years, it can be beaten out of us by busy parents and teachers. As adults, we rely on convention and what we’re told because that’s easier than breaking things down into first principles and thinking for ourselves. Thinking through first principles is a way of taking off the blinders. Most things suddenly seem more possible.

“I think most people can learn a lot more than they think they can,” says Musk. “They sell themselves short without trying. One bit of advice: it is important to view knowledge as sort of a semantic tree — make sure you understand the fundamental principles, i.e., the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to.”

End Notes

[1] Aristotle, Physics 184a10–21

[2] Aristotle, Metaphysics 1013a14-15

[3] https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html

[4] Elon Musk, quoted by Tim Urban in “The Cook and the Chef: Musk’s Secret Sauce,” Wait But Why https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html

[5] Vance, Ashlee. Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future (p. 354)

[6] https://www.wired.com/2012/10/ff-elon-musk-qa/all/

[7] https://www.youtube.com/watch?v=L-s_3b5fRd8

[8] David Rowan, “How BuzzFeed mastered social sharing to become a media giant for a new era,” Wired.com. 2 January 2014. https://www.wired.co.uk/article/buzzfeed

[9] https://www.quora.com/What-does-Elon-Musk-mean-when-he-said-I-think-it%E2%80%99s-important-to-reason-from-first-principles-rather-than-by-analogy/answer/Bruce-Achterberg

[10] https://www.scientificamerican.com/article/new-estimate-boosts-the-human-brain-s-memory-capacity-10-fold/

[11] Breakpoint and Beyond: Mastering the Future Today, George Land

[12] https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk_ceocto_of_a_rocket_company_ama/cnfre0a/

What Are You Doing About It? Reaching Deep Fluency with Mental Models

The mental models approach is very intellectually appealing, almost seductive to a certain type of person. (It certainly is for us.)

The whole idea is to take the world’s greatest, most useful ideas and make them work for you!

How hard can it be?

Nearly all of the models themselves are perfectly understandable to the average well-educated knowledge worker, including all of you reading this piece. Ideas like Bayesian updating, multiplicative thinking, hindsight bias, and the bias from envy and jealousy are all obviously true and part of the reality we live in.

There’s a bit of a problem we’re seeing, though: people are reading the stuff, enjoying it, agreeing with it…but not taking action. It’s not becoming part of their standard repertoire.

Let’s say you followed up on Bayesian thinking after reading our post on it — you spent some time soaking in Thomas Bayes’ great wisdom on updating your understanding of the world incrementally and probabilistically rather than changing your mind in black-and-white. Great!
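That style of incremental updating can be sketched in a few lines of code. The prior and the likelihoods below are invented for illustration; the shape is the thing: each piece of evidence nudges the belief rather than flipping it.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start 50/50 on a claim, then see three pieces of evidence, each twice as
# likely if the claim is true. Belief shifts gradually, not all at once.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
    print(f"belief is now {belief:.2f}")
```

With these assumed numbers, the belief climbs from 0.50 to roughly 0.67, 0.80, and then 0.89: each observation moves the needle, and none of them settles the question outright.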

But a week later, what have you done with that knowledge? How has it actually impacted your life? If the honest answer is “It hasn’t,” then haven’t you really wasted your time?

Ironically, it’s this habit of “going halfway” instead of “going all the way,” like Sisyphus constantly getting halfway up the mountain, that is the biggest waste of time!

See, the common reason why people don’t truly “follow through” with all of this stuff is that they haven’t raised their knowledge to a “deep fluency” — they’re skimming the surface. They pick up bits and pieces — some heuristics or biases here, a little physics or biology there, and then call it a day and pull up Netflix. They get a little understanding, but not that much, and certainly no doing.

The better approach, if you actually care about making changes, is to imitate Charlie Munger, Charles Darwin, and Richard Feynman, and start raising your knowledge of the Big Ideas to a deep fluency, and then figuring out systems, processes, and mental tricks to implement them in your own life.

Let’s work through an example.


Say you’re just starting to explore all the wonderful literature on heuristics and biases and come across the idea of Confirmation Bias: The idea that once we’ve landed on an idea we really like, we tend to keep looking for further data to confirm our already-held notions rather than trying to disprove our idea.

This is common, widespread, and perfectly natural. We all do it. John Kenneth Galbraith put it best:

“In the choice between changing one’s mind and proving there’s no need to do so, most people get busy on the proof.”

Now, what most people do, the ones you’re trying to outperform, is say, “Great idea! Thanks, Galbraith,” and then stop thinking about it.

Don’t do that!

The next step would be to push a bit further, to get beyond the sound bite: What’s the process that leads to confirmation bias? Why do I seek confirmatory information and in which contexts am I particularly susceptible? What other models are related to the confirmation bias? How do I solve the problem?

The answers are out there: They’re in Daniel Kahneman and in Charlie Munger and in Elster. They’re available by searching through Farnam Street.

The big question: How far do you go? A good question without a perfect answer. But the best test I can think of is to perform something like the Feynman technique, and to think about the chauffeur problem.

Can you explain it simply to an intelligent layperson, using vivid examples? Can you answer all the follow-ups? That’s fluency. And you must be careful not to fool yourself, because in the wise words of Feynman, “…you are the easiest person to fool.”

While that’s great work, you’re not done yet. You have to make the rubber hit the road now. Something has to happen in your life and mind.

The way to do that is to come up with rules, systems, parables, and processes of your own, or to copy someone else’s that are obviously sound.

In the case of Confirmation Bias, we have two wonderful models to copy, one from each of the Charlies — Darwin, and Munger.

Darwin had a rule, one we have written about before but will restate here: make a note, immediately, if you come across a thought or idea that is contrary to something you currently believe.

As for Munger, he implemented a rule in his own life: “I never allow myself to have an opinion on anything that I don’t know the other side’s argument better than they do.”

Now we’re getting somewhere! With the implementation of those two habits and some well-earned deep fluency, you can immediately, tomorrow, start improving the quality of your decision-making.

Sometimes when we get outside the heuristic/biases stuff, it’s less obvious how to make the “rubber hit the road” — and that will be a constant challenge for you as you take this path.

But that’s also the fun part! With every new idea and model you pick up, you also pick up the opportunity to synthesize for yourself a useful little parable to make it stick or a new habit that will help you use it. Over time, you’ll come up with hundreds of them, and people might even look to you when they’re having problems doing it themselves!

Look at Buffett and Munger — both guys are absolute machines, chock full of pithy little rules and stories they use in order to implement and recall what they’ve learned.

For example, Buffett discovered early on the manipulative psychology behind open-outcry auctions. What did he do? He made a rule to never go to one! That’s how it’s done.

Even if you can’t come up with a great rule like that, you can figure out a way to use any new model or idea you learn. It just takes some creative thinking.

Sometimes it’s just a little mental rule or story that sticks particularly well. (Recall one of the prime lessons from our series on memory: Salient, often used, well-associated, and important information sticks best.)

We did this very thing recently with Lee Kuan Yew’s Rule. What a trite way to refer to the simple idea of asking if something actually works…attributing it to a Singaporean political leader!

But that’s exactly the point. Give the thing a name and a life and, like clockwork, you’ll start recalling it. The phrase “Lee Kuan Yew’s Rule” actually appears in my head when I’m approaching some new system or ideology, and as soon as it does, I find myself backing away from ideology and towards pragmatism. Exactly as I’d hoped.

Your goal should be to create about a thousand of those little tools in your head, attached to a deep fluency in the material from which they came.


I can hear the objection coming. Who has time for this stuff?

You do. It’s about making time for the things that really matter. And what could possibly matter more than upgrading your whole mental operating system? I solemnly promise that you’re spending way more time right now making sub-optimal decisions and trying to deal with the fallout.

If you need help learning to manage your time right this second, check out the resources in our learning community, including our productivity seminar, which has changed thousands of people’s lives for the better. The central idea is to become more thoughtful and deliberate with how you spend your hours. When you start doing that, you’ll notice you do have an hour a day to spend on this Big Ideas stuff.

Once you find that solid hour (or more), start using it in the way outlined above, and let the world’s great knowledge actually start making an impact. Just do a little every day.

What you’ll notice, over the weeks and months and years of doing this, is that your mind will really change! It has to! And with that, your life will change too. The only way to fail at improving your brain is by imitating Sisyphus, pushing the boulder halfway up, over and over.

Unless and until you really understand this, you’ll continue spinning your wheels. So here’s your call to action. Go get to it!