Avoiding Bad Decisions

Sometimes success is just about avoiding failure.

At FS, we help people make better decisions without having to rely on luck. One aspect of decision-making that’s rarely talked about is how to avoid making bad decisions.

Here are five of the biggest reasons we make bad decisions.

***

1. We’re unintentionally stupid

We like to think that we can rationally process information like a computer, but we can’t. Cognitive biases explain why we make bad decisions but rarely help us avoid them in the first place. It’s better to focus on the warning signs that signal something is about to go wrong.

Warning signs you’re about to unintentionally do something stupid:

  • You’re tired, emotional, in a rush, or distracted.
  • You’re operating in a group or working with an authority figure.

The rule: Never make important decisions when you’re tired, emotional, distracted, or in a rush.

2. We solve the wrong problem

The first person to state the problem rarely has the best insight into it. Once a problem is thrown out on the table, however, our type-A problem-solving nature kicks in, and we forget to first ask whether we’re solving the right problem.

Warning signs you’re solving the wrong problem:

  • You let someone else define the problem for you.
  • You’re far away from the problem.
  • You’re thinking about the problem at only one level or through a narrow lens.

The rule: Never let anyone define the problem for you.

3. We use incorrect or insufficient information

We like to believe that people tell us the truth. We like to believe the people we talk to understand what they are talking about. We like to believe that we have all the information.

Warning signs you have incorrect or insufficient information:

  • You’re speaking to someone who spoke to someone who spoke to someone.
  • Someone will get in trouble when the truth comes out.
  • You’re reading about it in the news.

The rule: Seek out information from someone as close to the source as possible, because they’ve earned their knowledge and have an understanding that you don’t. When information is filtered (and it often is), first consider the incentives involved and then think of the proximity to earned knowledge.

4. We fail to learn

You know the person who sits beside you at work with twenty years of experience but keeps making the same mistakes over and over? They don’t have twenty years of experience—they have one year of experience repeated twenty times. If you can’t learn, you can’t get better.

Most of us can observe and react accordingly. But to truly learn from our experiences, we must reflect on our reactions. Reflection has to be part of your process, not something you might do if you have time. Don’t use the excuse of being too busy or get too invested in protecting your ego. In short, we can’t learn from experience without reflection. Only reflection allows us to distill experience into something we can learn from to make better decisions in the future.

Warning signs you’re not learning:

  • You’re too busy to reflect.
  • You don’t keep track of your decisions.
  • You can’t calibrate your own decision-making.

The rule: Be less busy. Keep a learning journal. Reflect every day.

5. We focus on optics over outcomes

Our evolutionary programming conditions us to do what’s easy over what’s right. After all, it’s often easier to signal being virtuous than to actually be virtuous.

Warning signs you’re focused on optics:

  • You’re thinking about how you’ll defend your decision.
  • You’re knowingly choosing what’s defendable over what’s right.
  • You’d make a different decision if you owned the company.
  • You catch yourself saying this is what your boss would want.

The rule: Act as you would want an employee to act if you owned the company.

***

Avoiding bad decisions is just as important as making good ones. Knowing the warning signs and having a set of rules for your decision-making process limits the amount of luck you need to get good outcomes.

Your Thinking Rate Is Fixed

You can’t force yourself to think faster. If you try, you’re likely to end up making much worse decisions. Here’s how to improve the actual quality of your decisions instead of chasing hacks to speed them up.

If you’re a knowledge worker, as an ever-growing proportion of people are, the product of your job is decisions.

Much of what you do day to day consists of trying to make the right choices among competing options, meaning you have to process large amounts of information, discern what’s likely to be most effective for moving towards your desired goal, and try to anticipate potential problems further down the line. And all the while, you’re operating in an environment of uncertainty where anything could happen tomorrow.

When the product of your job is your decisions, you might find yourself wanting to be able to make more decisions more quickly so you can be more productive overall.

Chasing speed is a flawed approach. Because decisions—at least good ones—don’t come out of thin air. They’re supported by a lot of thinking.

While experience and education can grant you the pattern-matching abilities to make some kinds of decisions using intuition, you’re still going to run into decisions that require you to sit and consider the problem from multiple angles. You’re still going to need to schedule time to do nothing but think. Otherwise making more decisions will make you less productive overall, not more, because your decisions will suck.

Here’s a secret that might sound obvious but can actually transform the way you work: you can’t force yourself to think faster. Our brains just don’t work that way. The rate at which you make mental discernments is fixed.

Sure, you can develop your ability to do certain kinds of thinking faster over time. You can learn new methods for decision-making. You can develop your mental models. You can build your ability to focus. But if you’re trying to speed up your thinking so you can make an extra few decisions today, forget it.

***

Beyond the “hurry up” culture

Management consultant Tom DeMarco writes in Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency that many knowledge work organizations have a culture where the dominant message at all times is to hurry up.

Everyone is trying to work faster at all times, and they pressure everyone around them to work faster, too. No one wants to be perceived as a slacker. The result is that managers put pressure on their subordinates through a range of methods. DeMarco lists the following examples:

  • “Turning the screws on delivery dates (aggressive scheduling)
  • Loading on extra work
  • Encouraging overtime
  • Getting angry when disappointed
  • Noting one subordinate’s extraordinary effort and praising it in the presence of others
  • Being severe about anything other than superb performance
  • Expecting great things of all your workers
  • Railing against any apparent waste of time
  • Setting an example yourself (with the boss laboring so mightily there is certainly no time for anyone else to goof off)
  • Creating incentives to encourage desired behavior or results.”

All of these things increase pressure in the work environment and repeatedly reinforce the “hurry up!” message. They make managers feel like they’re moving things along faster. That way if work isn’t getting done, it’s not their fault. But, DeMarco writes, they don’t lead to meaningful changes in behavior that make the whole organization more productive. Speeding up often results in poor decisions that create future problems.

The reason more pressure doesn’t mean better productivity is that the rate at which we think is fixed.

We can’t force ourselves to start making faster decisions right now just because we’re faced with an unrealistic deadline. DeMarco writes, “Think rate is fixed. No matter what you do, no matter how hard you try, you can’t pick up the pace of thinking.”

If you’re doing a form of physical labor, you can move your body faster when under pressure. (Of course, if it’s too fast, you’ll get injured or won’t be able to sustain it for long.)

If you’re a knowledge worker, you can’t pick up the pace of mental discriminations just because you’re under pressure. Chances are good that you’re already going as fast as you can. Because guess what? You can’t voluntarily slow down your thinking, either.

***

The limits of pressure

Faced with added stress and unable to accelerate our brains instantaneously, we can do any of three things:

  • “Eliminate wasted time.
  • Defer tasks that are not on the critical path.
  • Stay late.”

Even if those might seem like positive things, they’re less advantageous than they appear at first glance. Their effects are marginal at best. The smarter and more qualified the knowledge worker, the less time they’re likely to be wasting anyway. Most people don’t enjoy wasting time. What you’re more likely to end up eliminating is valuable slack time for thinking.

Deferring non-critical tasks doesn’t save any time overall; it just pushes work forward—to the point where those tasks do become critical. Then something else gets deferred.

Staying late might work once in a while. Again, though, its effects are limited. If we keep doing it night after night, we run out of energy, our personal lives suffer, and we make worse decisions as a result.

None of the outcomes of increasing pressure result in more or better decisions. None of them speed up the rate at which people think. Even if an occasional, tactical increase in pressure (whether it comes from the outside or we choose to apply it to ourselves) can be effective, ongoing pressure increases are unsustainable in the long run.

***

Think rate is fixed

It’s incredibly important to truly understand the point DeMarco makes in this part of Slack: the rate at which we process information is fixed.

When you’re under pressure, the quality of your decisions plummets. You miss possible angles, you don’t think ahead, you do what makes sense now, you panic, and so on. Often, you make a snap judgment and then grasp for whatever information will justify it to the people you work with. You don’t have breathing room to stress-test your decisions.

The clearer you can think, the better your decisions will be. Trying to think faster can only cloud your judgment. It doesn’t matter how many decisions you make if they’re not good ones. As DeMarco reiterates throughout the book, you can be efficient without being effective.

Try making a list of the worst decisions you’ve made so far in your career. There’s a good chance most of them were made under intense pressure or without taking much time over them.

At Farnam Street, we write a lot about how to make better decisions, and we share a lot of tools for better thinking. We made a whole course on decision-making. But none of these resources are meant to immediately accelerate your thinking. Many of them require you to actually slow down a whole lot and spend more time on your decisions. They improve the rate at which you can do certain kinds of thinking, but it’s not going to be an overnight process.

***

Upgrading your brain

Some people read one of our articles or books about mental models and complain that it’s not an effective approach because it didn’t lead to an immediate improvement in their thinking. That’s unsurprising; our brains don’t work like that. Integrating new, better approaches takes a ton of time and repetition, just like developing any other skill. You have to keep on reflecting and making course corrections.

At the end of the day, your brain is going to go where it wants to go. You’re going to think the way you think. However much you build awareness of how the world works and learn how to reorient, you’re still, to use Jonathan Haidt’s metaphor from The Righteous Mind, a tiny rider atop a gigantic elephant. None of us can reshape how we think overnight.

Making good decisions is hard work. There’s a limit to how many decisions you can make in a day before you need a break. On top of that, many knowledge workers are in fields where the most relevant information has a short half-life. Making good decisions requires constant learning and verifying what you think you know.

If you want to make better decisions, you need to do everything you can to reduce the pressure you’re under. You need to let your brain take whatever time it needs to think through the problem at hand. You need to get out of a reactive mode, recognize when you need to pause, and spend more time looking at problems.

A good metaphor is installing an update to the operating system on your laptop. Would you rather install an update that fixes bugs and improves existing processes, or one that just makes everything run faster? Obviously, you’d prefer the former. The latter would just lead to more crashes. The same is true for updating your mental operating system.

Stop trying to think faster. Start trying to think better.

The Feynman Learning Technique

If you’re after a way to supercharge your learning and become smarter, the Feynman Technique might just be the best way to learn absolutely anything. Devised by a Nobel Prize-winning physicist, it leverages the power of teaching for better learning.

The Feynman Learning Technique is a simple way of approaching anything new you want to learn.

Why use it? Because learning doesn’t happen from skimming through a book or remembering enough to pass a test. Information is learned when you can explain it and use it in a wide variety of situations. The Feynman Technique gets more mileage from the ideas you encounter instead of rendering anything new into isolated, useless factoids.

When you really learn something, you give yourself a tool to use for the rest of your life. The more you know, the fewer surprises you will encounter, because most new things will connect to something you already understand.

Ultimately, the point of learning is to understand the world. But most of us don’t bother to deliberately learn anything. We memorize what we need to as we move through school, then forget most of it. As we continue through life, we don’t extrapolate from our experiences to broaden the applicability of our knowledge. Consequently, life kicks us in the ass time and again.

To avoid the pain of being bewildered by the unexpected, the Feynman Technique helps you turn information into knowledge that you can access as easily as a shirt in your closet.

Let’s go.

***

The Feynman Technique

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.” —E.F. Schumacher

There are four steps to the Feynman Learning Technique, based on the method Richard Feynman originally used. We have adapted it slightly after reflecting on our own experiences using this process to learn. The steps are as follows:

  1. Pretend to teach a concept you want to learn about to a student in the sixth grade.
  2. Identify gaps in your explanation. Go back to the source material to better understand it.
  3. Organize and simplify.
  4. Transmit (optional).

Step 1: Pretend to teach it to a child or a rubber duck

Take out a blank sheet of paper. At the top, write the subject you want to learn. Now write out everything you know about the subject as if you were teaching it to a child or a rubber duck sitting on your desk. You are not teaching to your smart adult friend, but rather a child who has just enough vocabulary and attention span to understand basic concepts and relationships.

Or, for a different angle on the Feynman Technique, you could place a rubber duck on your desk and try explaining the concept to it. Software engineers sometimes tackle debugging by explaining their code, line by line, to a rubber duck. The idea is that explaining something to a silly-looking inanimate object will force you to be as simple as possible.

It turns out that one of the ways we mask our lack of understanding is by using complicated vocabulary and jargon. The truth is, if you can’t define the words and terms you are using, you don’t really know what you’re talking about. If you look at a painting and describe it as “abstract” because that’s what you heard in art class, you aren’t displaying any comprehension of the painting. You’re just mimicking what you’ve heard. And you haven’t learned anything. You need to make sure your explanation isn’t above, say, a sixth-grade reading level by using easily accessible words and phrases.

When you write out an idea from start to finish in simple language that a child can understand, you force yourself to understand the concept at a deeper level and simplify relationships and connections between ideas. You can better explain the why behind your description of the what.

Looking at that same painting again, you will be able to say that the painting doesn’t display buildings like the ones we look at every day. Instead it uses certain shapes and colors to depict a city landscape. You will be able to point out what these are. You will be able to engage in speculation about why the artist chose those shapes and those colors. You will be able to explain why artists sometimes do this, and you will be able to communicate what you think of the piece considering all of this. Chances are, after capturing a full explanation of the painting in the simplest possible terms that would be easily understood by a sixth-grader, you will have learned a lot about that painting and abstract art in general.

Some of capturing what you would teach will be easy. These are the places where you have a clear understanding of the subject. But you will find many places where things are much foggier.

Step 2: Identify gaps in your explanation

Areas where you struggle in Step 1 are the points where you have some gaps in your understanding.

Identifying gaps in your knowledge—where you forget something important, aren’t able to explain it, or simply have trouble thinking of how variables interact—is a critical part of the learning process. Filling those gaps is when you really make the learning stick.

Now that you know where you have gaps in your understanding, go back to the source material. Augment it with other sources. Look up definitions. Keep going until you can explain everything you need to in basic terms.

Only when you can explain your understanding without jargon and in simple terms can you demonstrate your understanding. Think about it this way. If you require complicated terminology to explain what you know, you have no flexibility. When someone asks you a question, you can only repeat what you’ve already said.

Simple terms can be rearranged and easily combined with other words to communicate your point. When you can say something in multiple ways using different words, you understand it really well.

Being able to explain something in a simple, accessible way shows you’ve done the work required to learn. Skipping it leads to the illusion of knowledge—an illusion that can be quickly shattered when challenged.

Identifying the boundaries of your understanding is also a way of defining your circle of competence. When you know what you know (and are honest about what you don’t know), you limit the mistakes you’re liable to make and increase your chance of success when applying knowledge.

Step 3: Organize and simplify

Now you have a set of hand-crafted notes containing a simple explanation. Organize them into a narrative that you can tell from beginning to end. Read it out loud. If the explanation sounds confusing at any point, go back to Step 2. Keep iterating until you have a story that you can tell to anyone who will listen.

If you follow this approach over and over, you will end up with a binder full of pages on different subjects. If you take some time twice a year to go through this binder, you will find just how much you retain.

Step 4: Transmit (optional)

This part is optional, but it’s the logical result of everything you’ve just done. If you really want to be sure of your understanding, run it past someone (ideally someone who knows little of the subject). The ultimate test of your knowledge is your capacity to convey it to another. You can read out directly what you’ve written. You can present the material like a lecture. You can ask your friends for a few minutes of their time while you’re buying them dinner. You can volunteer as a guest speaker in your child’s classroom or your parents’ retirement residence. All that really matters is that you attempt to transmit the material to at least one person who isn’t that familiar with it.

The questions you get and the feedback you receive are invaluable for further developing your understanding. Hearing what your audience is curious about will likely pique your own curiosity and set you on a path for further learning. After all, it’s only when you begin to learn a few things really well that you appreciate how much there is to know.

***

The Feynman Technique is not only a wonderful recipe for learning but also a window into a different way of thinking that allows you to tear ideas apart and reconstruct them from the ground up.

When you’re having a conversation with someone and they start using words or relationships that you don’t understand, ask them to explain it to you like you’re twelve.

Not only will you supercharge your own learning, but you’ll also supercharge theirs.

Feynman’s approach rests on the intuition that intelligence is a process of growth, which dovetails nicely with the work of Carol Dweck, who describes the difference between a fixed and a growth mindset.

“If you can’t reduce a difficult engineering problem to just one 8-1/2 x 11-inch sheet of paper, you will probably never understand it.” —Ralph Peck

What does it mean to “know”?

Richard Feynman believed that “the world is much more interesting than any one discipline.” He understood the difference between knowing something and knowing the name of something, as well as how, when you truly know something, you can use that knowledge broadly. When you only know what something is called, you have no real sense of what it is. You can’t take it apart and play with it or use it to make new connections and generate new insights. When you know something, the labels are unimportant, because it’s not necessary to keep it in the box it came in.

“The person who says he knows what he thinks but cannot express it usually does not know what he thinks.” —Mortimer Adler

Feynman’s explanations—on “why” questions, why trains stay on the tracks as they go around a curve, how we look for new laws of science, or how rubber bands work—are simple and powerful. Here he articulates the difference between knowing the name of something and understanding it.

“See that bird? It’s a brown-throated thrush, but in Germany it’s called a halzenfugel, and in Chinese they call it a chung ling, and even if you know all those names for it, you still know nothing about the bird. You only know something about people: what they call the bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way.”

Knowing the name of something doesn’t mean you understand it. We talk in fact-deficient, obfuscating generalities to cover up our lack of understanding.

How then should we go about learning? On this Feynman echoes Albert Einstein and proposes that we take things apart. He describes a dismal first-grade science book that attempts to teach kids about energy by showing a series of pictures about a wind-up dog toy and asking, “What makes it move?” For Feynman, this was the wrong approach because it was too abstract. Saying that energy made the dog move was equal to saying “that ‘God makes it move,’ or ‘spirit makes it move,’ or ‘movability makes it move.’ (In fact, one could equally well say ‘energy makes it stop.’)”

Staying at the level of the abstract imparts no real understanding. Kids might subsequently get the question right on a test, if they have a decent memory. But they aren’t going to have any understanding of what energy actually is.

Feynman then goes on to describe a more useful approach:

“Perhaps I can make the difference a little clearer this way: if you ask a child what makes the toy dog move, you should think about what an ordinary human being would answer. The answer is that you wound up the spring; it tries to unwind and pushes the gear around.

What a good way to begin a science course! Take apart the toy; see how it works. See the cleverness of the gears; see the ratchets. Learn something about the toy, the way the toy is put together, the ingenuity of people devising the ratchets and other things. That’s good.”

***

After the Feynman Technique

“We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbor’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?” —Michel de Montaigne

The Feynman Technique helps you learn stuff. But learning doesn’t happen in isolation. We learn not only from the books we read but also the people we talk to and the various positions, ideas, and opinions we are exposed to. Richard Feynman also provided advice on how to sort through information so you can decide what is relevant and what you should bother learning.

In a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen Scientist, Feynman talks through basic reasoning and some of the problems of his day. His method of evaluating information is another set of tools you can use along with the Feynman Learning Technique to refine what you learn.

Particularly useful are a series of “tricks of the trade” he gives in a section called “This Unscientific Age.” These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day.

Before we start, it’s worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy. It’s up to you to determine where applying these tricks might be most beneficial in your life.

Regardless of what you are trying to gather information on, these tricks help you dive deeper into topics and ideas and not get waylaid by inaccuracies or misunderstandings on your journey to truly know something.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone else truly knows their stuff or is mimicking others:

“My trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn’t know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away—bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don’t know. I used to be a general, and I don’t know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can’t tell you ahead of time what conclusion, but I can give you some of the principles I’ll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them, etc., etc., etc.””

If you learn something via the Feynman Technique, you will be able to answer questions on the subject. You can make educated analogies, extrapolate the principles to other situations, and easily admit what you do not know.

The second trick has to do with dealing with uncertainty. Very few ideas in life are absolutely true. What you want is to get as close to the truth as you can with the information available:

“I would like to mention a somewhat technical idea, but it’s the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it’s rather complicated, technically, but I’ll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It’s very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn’t impossible that it should turn sort of greenish color.”

So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven’t made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.”

Feynman is talking about grey thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn’t proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he’s proposing is Bayesian updating—starting with a priori odds, based on earlier understanding, and “updating” the odds of something based on what you learn thereafter. An extremely useful tool.
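
Feynman’s Theory A/Theory B story is a plain-language description of the Bayesian update rule, and the mechanics fit in a few lines. The numbers below are invented for illustration (none of them appear in the text); only the update step matters:

```python
# Bayesian updating on Feynman's two-theory example.
# All probabilities here are illustrative assumptions.

def update(prior, likelihood):
    """Multiply each theory's prior by the likelihood of the
    observation under that theory, then renormalize."""
    unnormalized = {t: prior[t] * likelihood[t] for t in prior}
    total = sum(unnormalized.values())
    return {t: p / total for t, p in unnormalized.items()}

# Before observing anything, we're much more sure of Theory A.
belief = {"A": 0.9, "B": 0.1}

# Probability each theory assigns to seeing "sort of greenish":
# very unlikely under A, plausible (if not expected) under B.
greenish = {"A": 0.05, "B": 0.40}

belief = update(belief, greenish)
```

With these made-up numbers, a single “greenish” observation moves Theory B from a 10% belief to roughly 47%. Each further independent observation that favors B shifts the odds again, which is Feynman’s point about accumulating many different tests rather than repeating one.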

Feynman’s third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect of getting stronger and stronger, not weaker. Knowledge is not static, and we need to be open to continually evaluating what we think we know. Here he uses an excellent example of analyzing mental telepathy:

“A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it’s a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you’d guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn’t count all the cases that didn’t work. And he just took the few that did, and then you can’t do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It’s a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don’t work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.”

We must refine our process for probing and experimenting if we’re to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that our results fit our expectations. If we carefully refine and re-test and the effect gets weaker all the time, it’s likely to not be true, or at least not to the magnitude originally hoped for.
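The selection effect Feynman describes, quietly setting aside the runs where the subject "was tired," can be demonstrated with a short simulation. This is a stylized sketch, not a reconstruction of the actual experiments: we discard whole below-chance sessions rather than trailing cards, which is enough to show how pure chance drifts above five.

```python
import random

random.seed(1)

def run_session(n_cards=25, n_symbols=5):
    """Correct guesses in one session by pure chance (expected: 5 of 25)."""
    return sum(random.randrange(n_symbols) == random.randrange(n_symbols)
               for _ in range(n_cards))

sessions = [run_session() for _ in range(100_000)]
honest_avg = sum(sessions) / len(sessions)

# Biased protocol: discard any session scoring below chance,
# on the grounds that the subject "was tired" that day.
kept = [s for s in sessions if s >= 5]
biased_avg = sum(kept) / len(kept)

print(round(honest_avg, 2))  # close to 5.0
print(round(biased_avg, 2))  # roughly 6.3, above chance from selection alone
```

With no telepathy in the simulation at all, the biased average lands conspicuously above five, in the same neighborhood as the six and a half cards Feynman mentions.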

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

“That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That’s not the problem. The problem is what is probable, what is happening.

It does no good to demonstrate again and again that you can’t disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it’s reasonable, whether it’s likely. And we do that on the basis of a lot more experience than whether it’s just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it’s impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn’t mean that everything’s false. We’ll find out.”

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it’s already happened. That’s cherry-picking. You have to run the experiment forward for it to mean anything:

“A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”

He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.”
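The mistake in the rat story, computing the odds of a pattern after the pattern has already been spotted, is easy to reproduce. In this illustrative simulation (our own setup, not the Princeton experiment), a strictly alternating sequence is rare if you asked for it in advance, but finding some striking pattern after the fact is far more common, because there are many patterns that would have caught your eye.

```python
import random

random.seed(7)

def alternates(seq):
    """True if the sequence strictly alternates (RLRL... or LRLR...)."""
    return all(a != b for a, b in zip(seq, seq[1:]))

def some_striking_pattern(seq):
    """True if the sequence alternates, is all one direction,
    or its first half repeats exactly as its second half."""
    half = len(seq) // 2
    return alternates(seq) or len(set(seq)) == 1 or seq[:half] == seq[half:]

n_turns = 8
n_experiments = 100_000
runs = [[random.choice("RL") for _ in range(n_turns)]
        for _ in range(n_experiments)]

# Pre-registered question: how often does a random run alternate?
p_alternate = sum(alternates(r) for r in runs) / n_experiments

# Post-hoc question: how often does a random run show any striking pattern?
p_striking = sum(some_striking_pattern(r) for r in runs) / n_experiments

print(round(p_alternate, 3))  # near 2/256, well under one in a hundred
print(round(p_striking, 3))   # several times larger
```

This is why Feynman insists on a fresh experiment: the data that suggested the hypothesis cannot also be the data that tests it.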

The sixth trick is one that’s familiar to almost all of us, yet almost all of us forget about every day: the plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we’re talking about:

“The next kind of technique that’s involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won’t go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick people out at random, and you see that maybe forty of them are more than six feet so you guess that maybe everybody is. Sounds stupid.

Well, it is and it isn’t. If you pick the hundred out by seeing which ones come through a low door, you’re going to get it wrong. If you pick the hundred out by looking at your friends, you’ll get it wrong, because they’re all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don’t realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.”
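Feynman's "10,000 samples for 1 percent" figure falls out of the standard error of a sample proportion, sqrt(p(1-p)/n). A quick sketch using his forty-percent example (the code and formatting are ours):

```python
import math

def standard_error(p, n):
    """Standard error of a sample proportion."""
    return math.sqrt(p * (1 - p) / n)

p = 0.40  # Feynman's example: about forty in a hundred over six feet
for n in (100, 1_000, 10_000):
    se = standard_error(p, n)
    # A two-standard-error band covers roughly 95% of samples.
    print(f"n={n:>6}: standard error {se:.4f}, about ±{2 * se * 100:.1f} points")
```

At n = 10,000 the two-standard-error band is about ±1 percentage point, which is Feynman's figure; at n = 100 it is ten times wider. Because the error shrinks only with the square root of n, each extra digit of accuracy costs a hundredfold more samples.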

The last trick is to realize that many errors people make simply come from lack of information. They don’t even know they’re missing the tools they need. This can be a very tough one to guard against—it’s hard to know when you’re missing information that would change your mind—but Feynman gives the simple case of astrology to prove the point:

“Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it’s better to go to the dentist than other days. There are days when it’s better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it’s all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it’s a good day for business or a bad day for business has never been established. Now what of it? Maybe it’s still true, yes.

On the other hand, there’s an awful lot of information that indicates that it isn’t true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they’re going to be in the next 2,000 years is completely known. They don’t have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don’t agree with each other, so what are you going to do? Disbelieve it. There’s no evidence at all for it. It’s pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn’t believe and made a test, and so on, then there’s no point in listening to them.”


***

Conclusion

Knowing something is valuable. The more you understand about how the world works, the more options you have for dealing with the unexpected and the better you can create and capitalize on opportunities. The Feynman Learning Technique is a great method to develop mastery over sets of information. Once you do, the knowledge becomes a powerful tool at your disposal.

But as Feynman himself showed, being willing and able to question your knowledge and the knowledge of others is how you keep improving. Learning is a journey.

If you want to learn more about Feynman’s ideas and teachings, we recommend:

Surely You’re Joking, Mr. Feynman!: Adventures of a Curious Character

The Pleasure of Finding Things Out: The Best Short Works of Richard Feynman

What Do You Care What Other People Think?: Further Adventures of a Curious Character

Solve Problems Before They Happen by Developing an “Inner Sense of Captaincy”

Too often we reward people who solve problems while ignoring those who prevent them in the first place. This incentivizes creating problems. According to poet David Whyte, the key to taking initiative and being proactive is viewing yourself as the captain of your own “voyage of work.”

If we want to get away from glorifying those who run around putting out fires, we need to cultivate an organizational culture that empowers everyone to act responsibly at the first sign of smoke.

How do we make that shift?

We can start by looking at ourselves and how we consider the voyage that is our work. When do we feel fulfillment? Is it when we swoop in to save the day and everyone congratulates us? It’s worth asking why, if we think something is worth saving, we don’t put more effort into protecting it ahead of time.

In Crossing the Unknown Sea, poet David Whyte suggests that we should view our work as a lifelong journey. In particular, he frames it as a sea voyage in which the greatest rewards lie in what we learn through the process, as opposed to the destination.

Like a long sea voyage, the nature of our work is always changing. There are stormy days and sunny ones. There are days involving highs of delight and lows of disaster. All of this happens against the backdrop of events in our personal lives and the wider world with varying levels of influence.

On a voyage, you need to look after your boat. There isn’t always time to solve problems after they happen. You need to learn how to preempt them or risk a much rougher journey—or even the end of it.

Whyte refers to the practice of taking control of your voyage as “developing an inner sense of captaincy,” offering a metaphor we can all apply to our work. Developing an inner sense of captaincy is good for both us and the organizations we work in. We end up with more agency over our own lives, and our organizations waste fewer resources. Whyte’s story of how he learned this lesson highlights why that’s the case.

***

A moment of reckoning

Any life, and any life’s work, is a hidden journey, a secret code, deciphered in fits and starts. The details only given truth by the whole, and the whole dependent on the detail.

Shortly after graduating, Whyte landed a dream job working as a naturalist guide on board a ship in the Galapagos Islands. One morning, he awoke and could tell at once that the vessel had drifted from its anchorage during the night. Whyte leaped up to find the captain fast asleep and the boat close to crashing into a cliff. Taking control of it just in time, he managed to steer himself and the other passengers back to safety—right as the captain awoke. Though they were safe, Whyte was profoundly shaken, both by the near miss and by the realization that their leader had failed.

At first, Whyte’s reaction to the episode was to feel a smug contempt for the captain who had “slept through not only the anchor dragging but our long, long, nighttime drift.” The captain had failed to predict the problem or notice when it started. If Whyte hadn’t awakened, everyone on the ship could have died.

But something soon changed in his perspective. Whyte knew the captain was new and far less familiar with that particular boat than himself and the other crew member. Every boat has its quirks, and experience counts for more than seniority when it comes to knowing them. He’d also felt sure the night before that they needed to put down a second anchor and knew they “should have dropped another anchor without consultation, as crews are wont to do when they do not want to argue with their captain. We should have woken too.” He writes that “this moment of reckoning under the lava cliff speaks to the many dangerous arrivals in a life of work and to the way we must continually forge our identities through our endeavors.”

Whyte’s experience contains lessons with wide applicability for those of us on dry land. The idea of having an inner sense of captaincy means understanding the overarching goals of your work and being willing to make decisions that support them, even if something isn’t strictly your job or you might not get rewarded for it, or sometimes even if you don’t have permission.

When you play the long game, you’re thinking of the whole voyage, not whether you’ll get a pat on the back today.

***

Skin in the game

It’s all too easy to buy into the view that leaders have full responsibility for everything that happens, especially disasters. Sometimes in our work, when we’re not in a leadership position, we see a potential problem or an unnoticed existing one but choose not to take action. Instead, we stick to doing whatever we’ve been told to do because that feels safer. If it’s important, surely the person in charge will deal with it. If not, that’s their problem. Anyway, there’s already more than enough to do.

Leaders give us a convenient scapegoat when things go wrong. However, when we assume all responsibility lies with them, we don’t learn from our mistakes. We don’t have “our own personal compass, a direction, a willingness to meet life unmediated by any cushioning parental presence.”

At some point, things do become our problem. No leader can do everything and see everything. The more you rise within an organization, the more you need to take initiative. If a leader can’t rely on their subordinates to take action when they see a potential problem, everything will collapse.

When we’ve been repeatedly denied agency by poor leadership and seen our efforts fall flat, we may sense we lack control. Taking action no longer feels natural. However, if we view our work as a voyage that helps us change and grow, it’s obvious why we need to overcome learned helplessness. We can’t abdicate all responsibility and blame other people for what we chose to ignore in the first place (as Whyte puts it, “The captain was there in all his inherited and burdened glory and thus convenient for the blame”). By understanding how our work helps us change and grow, we develop skin in the game.

On a ship, everyone is in it together. If something goes wrong, they’re all at risk. And it may not be easy or even possible to patch up a serious problem in the middle of the sea. As a result, everyone needs to pay attention and act on anything that seems amiss. Everyone needs to take responsibility for what happens, as Whyte goes on to detail:

“No matter that the inherited world of the sea told us that the captain is the be-all and end-all of all responsibility, we had all contributed to the lapse, the inexcusable lapse. The edge is no place for apportioning blame. If we had merely touched that cliff, we would have been for the briny deep, crew and passengers alike. The undertow and the huge waves lacerating against that undercut, barnacle-encrusted fortress would have killed us all.”

Having an inner sense of captaincy means viewing ourselves as the ones in charge of our voyage of work. It means not acting as if there are certain areas where we are incapacitated, or ignoring potential problems, just because someone else has a particular title.

***

Space and support to create success

Developing an inner sense of captaincy is not about compensating for an incompetent leader—nor does it mean thinking we always know best. The better someone is at leading people, the more they create the conditions for their team to take initiative and be proactive about preventing problems. They show by example that they inhabit a state rather than a particular role. A stronger leader can mean a more independent team.

Strong leaders instill autonomy by teaching and supervising processes with the intention of eventually not needing to oversee them. Captaincy is a way of being. It is embodied in the role of captain, but it is available to everyone. For a crew to develop it, the captain needs to step back a little and encourage them to take responsibility for outcomes. They can test themselves bit by bit, building up confidence. When people feel like it’s their responsibility to contribute to overall success, not just perform specific tasks, they can respond to the unexpected without waiting for instructions. They become ever more familiar with what their organization needs to stay healthy and use second-order thinking so potential problems are more noticeable before they happen.

Whyte realized that the near-disaster had a lot to do with their previous captain, Raphael. He was too good at his job, being “preternaturally alert and omnipresent, appearing on deck at the least sign of trouble.” The crew felt comfortable, knowing they could always rely on Raphael to handle any problems. Although this worked well at the time, once he left and they were no longer in such safe hands they were unused to taking initiative. Whyte explains:

“Raphael had so filled his role of captain to capacity that we ourselves had become incapacitated in one crucial area: we had given up our own inner sense of captaincy. Somewhere inside of us, we had come to the decision that ultimate responsibility lay elsewhere.”

Being a good leader isn’t about making sure your team doesn’t experience failure. Rather, it’s giving everyone the space and support to create success.

***

The voyage of work

Having an inner sense of captaincy means caring about outcomes, not credit or blame. Whyte realized he should have dropped a second anchor the night before the near miss. Had he done so, it would have been an act that ideally no one other than the crew, or even just him, would ever have known about. The captain and passengers would have enjoyed an untroubled night and woken none the wiser.

If we prioritize getting good outcomes, our focus shifts from solving existing problems to preventing problems from happening in the first place. We put down a second anchor so the boat doesn’t drift, rather than steering it to safety when it’s about to crash. After all, we’re on the boat too.

Another good comparison is picking up litter. The less connected to and responsible for a place we feel, the less likely we are to pick up trash lying on the ground. In our homes, we’re almost certain to pick it up. If we’re walking along our street or in our neighborhood, it’s a little less likely. In a movie theater or bar, where we know it’s someone’s job to pick up trash, we’re even less likely to bother. What’s the equivalent of leaving trash on the ground in your job?

Most organizations don’t incentivize prevention because it’s invisible. Who knows what would have happened? How do you measure something that doesn’t exist? After all, problem preventers seem relaxed. They often go home on time. They take lots of time to think. We don’t know how well they would deal with conflict, because they never seem to experience any. The invisibility of the work they do to prevent problems in the first place makes it seem like their job isn’t challenging.

When we promote problem solvers, we incentivize having problems. We fail to unite everyone towards a clear goal. Because most organizations reward problem solvers, it can seem like a better idea to let things go wrong, then fix them after. That’s how you get visibility. You run from one high-level meeting to the next, reacting to one problem after another.

It’s great to have people who can solve problems, but it’s better not to have the problems in the first place. Solving problems generally requires more resources than preventing them, not to mention the toll it takes on our stress levels. As the saying goes, an ounce of prevention is worth a pound of cure.

An inner sense of captaincy on our voyage of work is good for us and for our organizations. It changes how we think about preventing problems. It becomes a part of an overall voyage, an opportunity to build courage and face fears. We become more fully ourselves and more in touch with our nature. Whyte writes that “having the powerful characteristics of captaincy or leadership of any form is almost always an outward sign of a person inhabiting their physical body and the deeper elements of their own nature.”

12 Life Lessons From Mathematician and Philosopher Gian-Carlo Rota

The mathematician and philosopher Gian-Carlo Rota spent much of his career at MIT, where students adored him for his engaging, passionate lectures. In 1996, Rota gave a talk entitled “Ten Lessons I Wish I Had Been Taught,” which contains valuable advice for making people pay attention to your ideas.

Many mathematicians regard Rota as single-handedly responsible for turning combinatorics into a significant field of study. He specialized in functional analysis, probability theory, phenomenology, and combinatorics. The talk was later printed in his book Indiscrete Thoughts.

Rota began by explaining that the advice we give others is always the advice we need to follow most. Seeing as it was too late for him to follow certain lessons, he decided he would share them with the audience. Here, we summarize twelve insights from Rota’s talk—which are fascinating and practical, even if you’re not a mathematician.

***

Every lecture should make only one point

“Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards.”

When we wish to communicate with people—in an article, an email to a coworker, a presentation, a text to a partner, and so on—it’s often best to stick to making one point at a time. This matters all the more if we’re trying to get our ideas across to a large audience.

If we make one point well enough, we can be optimistic about people understanding and remembering it. But if we try to fit too much in, “the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.”

***

Never run over time

“After fifty minutes (one microcentury as von Neumann used to say), everybody’s attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute over time can destroy the best of lectures.”

Rota considered running over the allotted time slot to be the worst thing a lecturer could do. Our attention spans are finite. After a certain point, we stop taking in new information.

In your work, it’s important to respect the time and attention of others. Put in the extra work required for brevity and clarity. Don’t expect them to find what you have to say as interesting as you do. Condensing and compressing your ideas both ensures you truly understand them and makes them easier for others to remember.

***

Relate to your audience

“As you enter the lecture hall, try to spot someone in the audience whose work you have some familiarity with. Quickly rearrange your presentation so as to manage to mention some of that person’s work.”

Reciprocity is remarkably persuasive. Sometimes, how people respond to your work has as much to do with how you respond to theirs as it does with the work itself. If you want people to pay attention to your work, always give before you take and pay attention to theirs first. Show that you see them and appreciate them. Rota explains that “everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.”

The less acknowledgment someone’s work has received, the more of an impact your attention is likely to have. A small act of encouragement can be enough to keep someone from quitting. With characteristic humor, Rota recounts:

“I have always felt miffed after reading a paper in which I felt I was not being given proper credit, and it is safe to conjecture that the same happens to everyone else. One day I tried an experiment. After writing a rather long paper, I began to draft a thorough bibliography. On the spur of the moment I decided to cite a few papers which had nothing whatsoever to do with the content of my paper to see what might happen.

Somewhat to my surprise, I received letters from two of the authors whose papers I believed were irrelevant to my article. Both letters were written in an emotionally charged tone. Each of the authors warmly congratulated me for being the first to acknowledge their contribution to the field.”

***

Give people something to take home

“I often meet, in airports, in the street, and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.”

When we have a conversation, read a book, or listen to a talk, the sad fact is that we are unlikely to remember much of it even a few hours later, let alone years after the event. Even if we enjoyed and valued it, only a small part will stick in our memory.

So when you’re communicating with people, try to be conscious about giving them something to take home. Choose a memorable line or idea, create a visual image, or use humor in your work.

For example, in The Righteous Mind, Jonathan Haidt repeats many times that the mind is like a tiny rider on a gigantic elephant. The rider represents controlled mental processes, while the elephant represents automatic ones. It’s a distinctive image, one readers are quite likely to take home with them.

***

Make sure the blackboard is spotless

“By starting with a spotless blackboard, you will subtly convey the impression that the lecture they are about to hear is equally spotless.”

Presentation matters. The way our work looks influences how people perceive it. Taking the time to clean our equivalent of a blackboard signals that we care about what we’re doing and consider it important.

In “How To Spot Bad Science,” we noted that one possible sign of bad science is that the research is presented in a thoughtless, messy way. Most researchers who take their work seriously will put in the extra effort to ensure it’s well presented.

***

Make it easy for people to take notes

“What we write on the blackboard should correspond to what we want an attentive listener to take down in his notebook. It is preferable to write slowly and in a large handwriting, with no abbreviations. Those members of the audience who are taking notes are doing us a favor, and it is up to us to help them with their copying.”

If a lecturer is using slides with writing on them instead of a blackboard, Rota adds that they should give people time to take notes. This might mean repeating themselves in a few different ways so each slide takes longer to explain (which ties in with the idea that every lecture should make only one point). Moving too fast with the expectation that people will look at the slides again later is “wishful thinking.”

When we present our work to people, we should make it simple for them to understand our ideas on the spot. We shouldn’t expect them to revisit it later. They might forget. And even if they don’t, we won’t be there to answer questions, take feedback, and clear up any misunderstandings.

***

Share the same work multiple times

Rota learned this lesson when he bought Collected Papers, a volume compiling the publications of mathematician Frederic Riesz. He noted that “the editors had gone out of their way to publish every little scrap Riesz had ever published.” Putting them all in one place revealed that he had published the same ideas multiple times:

“Riesz would publish the first rough version of an idea in some obscure Hungarian journal. A few years later, he would send a series of notes to the French Academy’s Comptes Rendus in which the same material was further elaborated. A few more years would pass, and he would publish the definitive paper, either in French or in English.”

Riesz would also develop his ideas while lecturing. Explaining the same subject again and again for years allowed him to keep improving it until he was ready to publish. Rota notes, “No wonder the final version was perfect.”

In our work, we might feel as if we need to have fresh ideas all of the time and that anything we share with others needs to be a finished product. But sometimes we can do our best work through an iterative process.

For example, a writer might start by sharing an idea as a tweet. This gets a good response, and the replies help them expand it into a blog post. From there they keep reworking the post over several years, making it longer and more definitive each time. They give a talk on the topic. Eventually, it becomes a book.

Award-winning comedian Chris Rock prepares for global tours by performing dozens of times in small venues for a handful of people. Each performance is an experiment to see which jokes land, which ones don’t, and which need tweaking. By the time he’s performed a routine forty or fifty times, making it better and better, he’s ready to share it with huge audiences.

Another reason to share the same work multiple times is that different people will see it each time and understand it in different ways:

“The mathematical community is split into small groups, each one with its own customs, notation, and terminology. It may soon be indispensable to present the same result in several versions, each one accessible to a specific group; the price one might have to pay otherwise is to have our work rediscovered by someone who uses a different language and notation, and who will rightly claim it as his own.”

Sharing your work multiple times thus has two benefits. The first is that the feedback allows you to improve and refine your work. The second is that you increase the chance of your work being definitively associated with you. If the core ideas are strong enough, they’ll shine through even in the initial incomplete versions.

***

You are more likely to be remembered for your expository work

“Allow me to digress with a personal reminiscence. I sometimes publish in a branch of philosophy called phenomenology. . . . It so happens that the fundamental treatises of phenomenology are written in thick, heavy philosophical German. Tradition demands that no examples ever be given of what one is talking about. One day I decided, not without serious misgivings, to publish a paper that was essentially an updating of some paragraphs from a book by Edmund Husserl, with a few examples added. While I was waiting for the worst at the next meeting of the Society for Phenomenology and Existential Philosophy, a prominent phenomenologist rushed towards me with a smile on his face. He was full of praise for my paper, and he strongly encouraged me to further develop the novel and original ideas presented in it.”

Rota realized that many of the mathematicians he admired the most were known more for their work explaining and building upon existing knowledge, as opposed to their entirely original work. Their extensive knowledge of their domain meant they could expand a little beyond their core specialization and synthesize charted territory.

For example, David Hilbert was best known for a textbook on integral equations which was “in large part expository, leaning on the work of Hellinger and several other mathematicians whose names are now forgotten.” William Feller was known for an influential treatise on probability, with few recalling his original work in convex geometry.

One of our core goals at Farnam Street is to share the best of what other people have already figured out. We all want to make original and creative contributions to the world. But the best ideas that are already out there are quite often much more useful than what we can contribute from scratch.

We should never be afraid to stand on the shoulders of giants.

***

Every mathematician has only a few tricks

“. . . mathematicians, even the very best, also rely on a few tricks which they use over and over.”

Upon reading the complete works of certain influential mathematicians, such as David Hilbert, Rota realized that they always used the same tricks again and again.

We don’t need to be amazing at everything to do high-quality work. The smartest and most successful people are often only good at a few things—or even one thing. Their secret is that they maximize those strengths and don’t get distracted. They define their circle of competence and don’t attempt things they’re not good at if there’s any room to double down further on what’s already going well.

It might seem as if this lesson contradicts the previous one (you are more likely to be remembered for your expository work), but there’s a key difference. Once you’ve hit diminishing returns on improving what’s already inside your circle of competence, it makes sense to experiment with things you have an aptitude for (or a strong suspicion you might) but haven’t yet made your focus.

***

Don’t worry about small mistakes

“Once more let me begin with Hilbert. When the Germans were planning to publish Hilbert’s collected papers and to present him with a set on the occasion of one of his later birthdays, they realized that they could not publish the papers in their original versions because they were full of errors, some of them quite serious. Thereupon they hired a young unemployed mathematician, Olga Taussky-Todd, to go over Hilbert’s papers and correct all mistakes. Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. . . . At last, on Hilbert’s birthday, a freshly printed set of Hilbert’s collected papers was presented to the Geheimrat. Hilbert leafed through them carefully and did not notice anything.”

Rota goes on to say: “There are two kinds of mistakes. There are fatal mistakes that destroy a theory; but there are also contingent ones, which are useful in testing the stability of a theory.”

Mistakes are either contingent or fatal. Contingent mistakes don’t completely ruin what you’re working on; fatal ones do. Building in a margin of safety (such as having a bit more time or funding than you expect to need) turns many fatal mistakes into contingent ones.

Contingent mistakes can even be useful. When details change, but the underlying theory is still sound, you know which details not to sweat.

***

Use Feynman’s method for solving problems

“Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: ‘How did he do it? He must be a genius!’”

***

Write informative introductions

“Nowadays, reading a mathematics paper from top to bottom is a rare event. If we wish our paper to be read, we had better provide our prospective readers with strong motivation to do so. A lengthy introduction, summarizing the history of the subject, giving everybody his due, and perhaps enticingly outlining the content of the paper in a discursive manner, will go some of the way towards getting us a couple of readers.”

As with the lesson about not running over time, respect that people have limited time and attention. Introductions are all about explaining what a piece of work is going to be about, what its purpose is, and why someone should be interested in it.

A job posting is an introduction to a company. The description on a calendar invite to a meeting is an introduction to that meeting. An about page is an introduction to an author. The subject line on a cold email is an introduction to that message. A course curriculum is an introduction to a class.

Putting extra effort into our introductions will help other people make an accurate assessment of whether they want to engage with the full thing. It will prime their minds for what to expect and answer some of their questions.

***

If you’re interested in learning more, check out Rota’s “10 Lessons of an MIT Education.”

The Best-Case Outcomes Are Statistical Outliers

There’s nothing wrong with hoping for the best. But the best-case scenario is rarely the one that comes to pass. Being realistic about what is likely to happen positions you for a range of possible outcomes and gives you peace of mind.

We dream about achieving the best-case outcomes, but they are rare. We can’t forget to acknowledge all the other possibilities of what may happen if we want to position ourselves for success.

“Hoping for the best, prepared for the worst, and unsurprised by anything in between.” —Maya Angelou

It’s okay to hope for the best—to look at whatever situation you’re in and say, “This time I have it figured out. This time it’s going to work.” First, having some degree of optimism is necessary for trying anything new. If we weren’t overconfident, we’d never have the guts to do something as risky and unlikely to succeed as starting a business, entering a new relationship, or sending that cold email. Anticipating that a new venture will work helps you overcome obstacles and make it work.

Second, sometimes we do have it figured out. Sometimes our solutions do make things better.

Even when the best-case scenario comes to pass, however, it rarely unfolds exactly as planned. Some choices create unanticipated consequences that we have to deal with. We may encounter unexpected roadblocks due to a lack of information. Or the full implementation of all our ideas and aspirations might take a lot longer than we planned for.

When we look back over history, we rarely find best-case outcomes.

Sure, sometimes they happen—maybe more than we think, given not every moment of the past is recorded. But let’s be honest: even historical wins, like developing the polio vaccine and figuring out how to produce clean drinking water, were not all smooth sailing. There are still people who are unable or unwilling to get the polio vaccine. And there are still many people in the world, even in developed countries like Canada, who don’t have access to clean drinking water.

The best-case outcomes in these situations—a world without polio and a world with globally available clean drinking water—have not happened, despite the existence of reliable, proven technology that can make these outcomes a reality.

There are a lot of reasons why, in these situations, we haven’t achieved the best-case outcomes. And situations like these are not unusual. We rarely achieve the dream. The more complicated a situation is, and the more people, variables, and dependencies it involves, the less likely it is that everything will work out.

If we narrow our scope and say, for example, the best-case scenario for this Friday night is that we don’t burn the pizza, we can all agree on a movie, and the power doesn’t go out, it’s more likely we’ll achieve it. There are fewer variables, so there’s a greater chance that this specific scenario will come to pass.
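The arithmetic behind this intuition is simple: if every step has to go right, the chances multiply, and the joint probability shrinks quickly as steps are added. A minimal sketch (the success rates and step counts here are illustrative assumptions, not figures from the article):

```python
# Chance that every one of n independent steps succeeds,
# assuming each succeeds with the same probability p.
def all_go_right(p: float, n: int) -> float:
    return p ** n

# A simple Friday night: three things must go right
# (pizza, movie choice, power), each very likely on its own.
print(round(all_go_right(0.95, 3), 2))   # ~0.86 — still a good bet

# A complicated venture with twenty such dependencies.
print(round(all_go_right(0.95, 20), 2))  # ~0.36 — the "best case" is now an outlier
```

Even with each step at 95%, twenty dependencies leave the all-goes-right scenario happening barely a third of the time, which is why planning only for the best case fails in complex situations.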

The problem is that most of us plan as if we live in an easy-to-anticipate Friday night kind of world. We don’t.

There are no magic bullets for the complicated challenges facing society. There is only hard work, planning for the wide spectrum of human behavior, adjusting to changing conditions, and perseverance. There are many possible outcomes for any given endeavor and only one that we consider the best case.

That is why the best-case outcomes are statistical outliers—they are only one possibility in a sea of many. They might come to pass, but you’re much better off preparing for the likelihood that they won’t.

Our expectations matter. Anticipating a range of outcomes can make us feel better. If we expect the best and it happens, we’re merely satisfied. If we expect less and something better happens, we’re delighted.

Knowing that the future is probably not going to be all sunshine and roses allows you to prepare for a variety of more likely outcomes, including some of the bad ones. Sometimes, too, when the worst-case scenario happens, it’s actually a huge relief. We realize it’s not all bad, we didn’t die, and we can manage if it happens again. Preparation and knowing you can handle a wide spectrum of possible challenges is how you get the peace of mind to be unsurprised by anything in between the worst and the best.