
The Feynman Learning Technique

If you’re after a way to supercharge your learning and become smarter, the Feynman Technique might just be the best way to learn absolutely anything. Devised by a Nobel Prize-winning physicist, it leverages the power of teaching for better learning.

The Feynman Learning Technique is a simple way of approaching anything new you want to learn.
Why use it? Because learning doesn’t happen from skimming through a book or remembering enough to pass a test. Information is learned when you can explain it and use it in a wide variety of situations. The Feynman Technique helps you get more mileage out of the ideas you encounter instead of reducing anything new to isolated, useless factoids.

When you really learn something, you give yourself a tool to use for the rest of your life. The more you know, the fewer surprises you will encounter, because most new things will connect to something you already understand.

Ultimately, the point of learning is to understand the world. But most of us don’t bother to deliberately learn anything. We memorize what we need to as we move through school, then forget most of it. As we continue through life, we don’t extrapolate from our experiences to broaden the applicability of our knowledge. Consequently, life kicks us in the ass time and again.

To avoid the pain of being bewildered by the unexpected, the Feynman Technique helps you turn information into knowledge that you can access as easily as a shirt in your closet.

Let’s go.

***

The Feynman Technique

“Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction.” —E.F. Schumacher

There are four steps to the Feynman Learning Technique, based on the method Richard Feynman originally used. We have adapted it slightly after reflecting on our own experiences using this process to learn. The steps are as follows:

  1. Pretend to teach a concept you want to learn about to a student in the sixth grade.
  2. Identify gaps in your explanation. Go back to the source material to better understand it.
  3. Organize and simplify.
  4. Transmit (optional).

Step 1: Pretend to teach it to a child or a rubber duck

Take out a blank sheet of paper. At the top, write the subject you want to learn. Now write out everything you know about the subject as if you were teaching it to a child or a rubber duck sitting on your desk. You are not teaching to your smart adult friend, but rather a child who has just enough vocabulary and attention span to understand basic concepts and relationships.

Or, for a different angle on the Feynman Technique, you could place a rubber duck on your desk and try explaining the concept to it. Software engineers sometimes tackle debugging by explaining their code, line by line, to a rubber duck. The idea is that explaining something to a silly-looking inanimate object forces you to keep your explanation as simple as possible.

It turns out that one of the ways we mask our lack of understanding is by using complicated vocabulary and jargon. The truth is, if you can’t define the words and terms you are using, you don’t really know what you’re talking about. If you look at a painting and describe it as “abstract” because that’s what you heard in art class, you aren’t displaying any comprehension of the painting. You’re just mimicking what you’ve heard. And you haven’t learned anything. You need to make sure your explanation isn’t above, say, a sixth-grade reading level by using easily accessible words and phrases.

When you write out an idea from start to finish in simple language that a child can understand, you force yourself to understand the concept at a deeper level and simplify relationships and connections between ideas. You can better explain the why behind your description of the what.

Looking at that same painting again, you will be able to say that the painting doesn’t display buildings like the ones we look at every day. Instead it uses certain shapes and colors to depict a city landscape. You will be able to point out what these are. You will be able to engage in speculation about why the artist chose those shapes and those colors. You will be able to explain why artists sometimes do this, and you will be able to communicate what you think of the piece considering all of this. Chances are, after capturing a full explanation of the painting in the simplest possible terms that would be easily understood by a sixth-grader, you will have learned a lot about that painting and abstract art in general.

Some of capturing what you would teach will be easy. These are the places where you have a clear understanding of the subject. But you will find many places where things are much foggier.

Step 2: Identify gaps in your explanation

Areas where you struggle in Step 1 are the points where you have some gaps in your understanding.
Identifying gaps in your knowledge—where you forget something important, aren’t able to explain it, or simply have trouble thinking of how variables interact—is a critical part of the learning process. Filling those gaps is when you really make the learning stick.

Now that you know where you have gaps in your understanding, go back to the source material. Augment it with other sources. Look up definitions. Keep going until you can explain everything you need to in basic terms.

Only when you can explain your understanding without jargon and in simple terms can you demonstrate your understanding. Think about it this way. If you require complicated terminology to explain what you know, you have no flexibility. When someone asks you a question, you can only repeat what you’ve already said.

Simple terms can be rearranged and easily combined with other words to communicate your point. When you can say something in multiple ways using different words, you understand it really well.
Being able to explain something in a simple, accessible way shows you’ve done the work required to learn. Skipping it leads to the illusion of knowledge—an illusion that can be quickly shattered when challenged.

Identifying the boundaries of your understanding is also a way of defining your circle of competence. When you know what you know (and are honest about what you don’t know), you limit the mistakes you’re liable to make and increase your chance of success when applying knowledge.

Step 3: Organize and simplify

Now you have a set of hand-crafted notes containing a simple explanation. Organize them into a narrative that you can tell from beginning to end. Read it out loud. If the explanation sounds confusing at any point, go back to Step 2. Keep iterating until you have a story that you can tell to anyone who will listen.

If you follow this approach over and over, you will end up with a binder full of pages on different subjects. If you take some time twice a year to go through this binder, you will discover just how much you retain.

Step 4: Transmit (optional)

This part is optional, but it’s the logical result of everything you’ve just done. If you really want to be sure of your understanding, run it past someone (ideally someone who knows little of the subject). The ultimate test of your knowledge is your capacity to convey it to another. You can read out directly what you’ve written. You can present the material like a lecture. You can ask your friends for a few minutes of their time while you’re buying them dinner. You can volunteer as a guest speaker in your child’s classroom or your parents’ retirement residence. All that really matters is that you attempt to transmit the material to at least one person who isn’t that familiar with it.

The questions you get and the feedback you receive are invaluable for further developing your understanding. Hearing what your audience is curious about will likely pique your own curiosity and set you on a path for further learning. After all, it’s only when you begin to learn a few things really well that you appreciate how much there is to know.

***

The Feynman Technique is not only a wonderful recipe for learning but also a window into a different way of thinking that allows you to tear ideas apart and reconstruct them from the ground up.
When you’re having a conversation with someone and they start using words or relationships that you don’t understand, ask them to explain it to you like you’re twelve.

Not only will you supercharge your own learning, but you’ll also supercharge theirs.

Feynman’s approach rests on the intuition that intelligence is a process of growth, which dovetails nicely with the work of Carol Dweck, who describes the difference between a fixed and a growth mindset.

“If you can’t reduce a difficult engineering problem to just one 8-1/2 x 11-inch sheet of paper, you will probably never understand it.” —Ralph Peck

What does it mean to “know”?

Richard Feynman believed that “the world is much more interesting than any one discipline.” He understood the difference between knowing something and knowing the name of something, as well as how, when you truly know something, you can use that knowledge broadly. When you only know what something is called, you have no real sense of what it is. You can’t take it apart and play with it or use it to make new connections and generate new insights. When you know something, the labels are unimportant, because it’s not necessary to keep it in the box it came in.

“The person who says he knows what he thinks but cannot express it usually does not know what he thinks.” —Mortimer Adler

Feynman’s explanations—on why questions, why trains stay on the tracks as they go around a curve, how we look for new laws of science, or how rubber bands work—are simple and powerful. Here he articulates the difference between knowing the name of something and understanding it.

“See that bird? It’s a brown-throated thrush, but in Germany it’s called a halzenfugel, and in Chinese they call it a chung ling, and even if you know all those names for it, you still know nothing about the bird. You only know something about people: what they call the bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way.”

Knowing the name of something doesn’t mean you understand it. We talk in fact-deficient, obfuscating generalities to cover up our lack of understanding.

How then should we go about learning? On this Feynman echoes Albert Einstein and proposes that we take things apart. He describes a dismal first-grade science book that attempts to teach kids about energy by showing a series of pictures of a wind-up dog toy and asking, “What makes it move?” For Feynman, this was the wrong approach because it was too abstract. Saying that energy made the dog move was equal to saying “that ‘God makes it move,’ or ‘spirit makes it move,’ or ‘movability makes it move.’ (In fact, one could equally well say ‘energy makes it stop.’)”

Staying at the level of the abstract imparts no real understanding. Kids might subsequently get the question right on a test, if they have a decent memory. But they aren’t going to have any understanding of what energy actually is.

Feynman then goes on to describe a more useful approach:

“Perhaps I can make the difference a little clearer this way: if you ask a child what makes the toy dog move, you should think about what an ordinary human being would answer. The answer is that you wound up the spring; it tries to unwind and pushes the gear around.

What a good way to begin a science course! Take apart the toy; see how it works. See the cleverness of the gears; see the ratchets. Learn something about the toy, the way the toy is put together, the ingenuity of people devising the ratchets and other things. That’s good.”

***

After the Feynman Technique

“We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbor’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?” —Michel de Montaigne

The Feynman Technique helps you learn stuff. But learning doesn’t happen in isolation. We learn not only from the books we read but also from the people we talk to and the various positions, ideas, and opinions we are exposed to. Richard Feynman also provided advice on how to sort through information so you can decide what is relevant and what you should bother learning.

In a series of non-technical lectures in 1963, memorialized in a short book called The Meaning of It All: Thoughts of a Citizen Scientist, Feynman talks through basic reasoning and some of the problems of his day. His method of evaluating information is another set of tools you can use along with the Feynman Learning Technique to refine what you learn.

Particularly useful are a series of “tricks of the trade” he gives in a section called “This Unscientific Age.” These tricks show Feynman taking the method of thought he learned in pure science and applying it to the more mundane topics most of us have to deal with every day.

Before we start, it’s worth noting that Feynman takes pains to mention that not everything needs to be considered with scientific accuracy. It’s up to you to determine where applying these tricks might be most beneficial in your life.

Regardless of what you are trying to gather information on, these tricks help you dive deeper into topics and ideas and not get waylaid by inaccuracies or misunderstandings on your journey to truly know something.

As we enter the realm of “knowable” things in a scientific sense, the first trick has to do with deciding whether someone else truly knows their stuff or is mimicking others:

“My trick that I use is very easy. If you ask him intelligent questions—that is, penetrating, interested, honest, frank, direct questions on the subject, and no trick questions—then he quickly gets stuck. It is like a child asking naive questions. If you ask naive but relevant questions, then almost immediately the person doesn’t know the answer, if he is an honest man. It is important to appreciate that.

And I think that I can illustrate one unscientific aspect of the world which would be probably very much better if it were more scientific. It has to do with politics. Suppose two politicians are running for president, and one goes through the farm section and is asked, “What are you going to do about the farm question?” And he knows right away—bang, bang, bang.

Now he goes to the next campaigner who comes through. “What are you going to do about the farm problem?” “Well, I don’t know. I used to be a general, and I don’t know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can’t tell you ahead of time what conclusion, but I can give you some of the principles I’ll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them, etc., etc., etc.””

If you learn something via the Feynman Technique, you will be able to answer questions on the subject. You can make educated analogies, extrapolate the principles to other situations, and easily admit what you do not know.

The second trick has to do with dealing with uncertainty. Very few ideas in life are absolutely true. What you want is to get as close to the truth as you can with the information available:

“I would like to mention a somewhat technical idea, but it’s the way, you see, we have to understand how to handle uncertainty. How does something move from being almost certainly false to being almost certainly true? How does experience change? How do you handle the changes of your certainty with experience? And it’s rather complicated, technically, but I’ll give a rather simple, idealized example.

You have, we suppose, two theories about the way something is going to happen, which I will call “Theory A” and “Theory B.” Now it gets complicated. Theory A and Theory B. Before you make any observations, for some reason or other, that is, your past experiences and other observations and intuition and so on, suppose that you are very much more certain of Theory A than of Theory B—much more sure. But suppose that the thing that you are going to observe is a test. According to Theory A, nothing should happen. According to Theory B, it should turn blue. Well, you make the observation, and it turns sort of a greenish. Then you look at Theory A, and you say, “It’s very unlikely,” and you turn to Theory B, and you say, “Well, it should have turned sort of blue, but it wasn’t impossible that it should turn sort of greenish color.”

So the result of this observation, then, is that Theory A is getting weaker, and Theory B is getting stronger. And if you continue to make more tests, then the odds on Theory B increase. Incidentally, it is not right to simply repeat the same test over and over and over and over, no matter how many times you look and it still looks greenish, you haven’t made up your mind yet. But if you find a whole lot of other things that distinguish Theory A from Theory B that are different, then by accumulating a large number of these, the odds on Theory B increase.”

Feynman is talking about grey thinking here, the ability to put things on a gradient from “probably true” to “probably false” and how we deal with that uncertainty. He isn’t proposing a method of figuring out absolute, doctrinaire truth.

Another term for what he’s proposing is Bayesian updating: start with prior odds based on your earlier understanding, then update those odds as new evidence arrives. An extremely useful tool.
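Feynman’s odds-shifting between Theory A and Theory B can be sketched in a few lines. The specific numbers below (a 1:9 prior, 10x and 5x likelihood ratios) are illustrative assumptions, not figures from his example:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * P(evidence | B) / P(evidence | A)."""
    return prior_odds * likelihood_ratio

# Start out much more certain of Theory A: odds of B against A are 1 to 9.
odds_b_vs_a = 1 / 9

# The sample turns greenish. Suppose greenish is 10x more likely under B.
odds_b_vs_a = update_odds(odds_b_vs_a, 10)

# A second, *different* kind of test also favors B, this time 5x more likely.
odds_b_vs_a = update_odds(odds_b_vs_a, 5)

# Convert the odds back into a probability for Theory B.
prob_b = odds_b_vs_a / (1 + odds_b_vs_a)
print(round(prob_b, 2))  # → 0.85: Theory B has gone from long shot to favorite
```

Note that each update multiplies in a new, independent piece of evidence, which is exactly Feynman’s caveat: repeating the identical test over and over doesn’t count the same way as finding a whole lot of other things that distinguish the theories.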

Feynman’s third trick is the realization that as we investigate whether something is true or not, new evidence and new methods of experimentation should show the effect of getting stronger and stronger, not weaker. Knowledge is not static, and we need to be open to continually evaluating what we think we know. Here he uses an excellent example of analyzing mental telepathy:

“A professor, I think somewhere in Virginia, has done a lot of experiments for a number of years on the subject of mental telepathy, the same kind of stuff as mind reading. In his early experiments the game was to have a set of cards with various designs on them (you probably know all this, because they sold the cards and people used to play this game), and you would guess whether it’s a circle or a triangle and so on while someone else was thinking about it. You would sit and not see the card, and he would see the card and think about the card and you’d guess what it was. And in the beginning of these researches, he found very remarkable effects. He found people who would guess ten to fifteen of the cards correctly, when it should be on the average only five. More even than that. There were some who would come very close to a hundred percent in going through all the cards. Excellent mind readers.

A number of people pointed out a set of criticisms. One thing, for example, is that he didn’t count all the cases that didn’t work. And he just took the few that did, and then you can’t do statistics anymore. And then there were a large number of apparent clues by which signals inadvertently, or advertently, were being transmitted from one to the other.

Various criticisms of the techniques and the statistical methods were made by people. The technique was therefore improved. The result was that, although five cards should be the average, it averaged about six and a half cards over a large number of tests. Never did he get anything like ten or fifteen or twenty-five cards. Therefore, the phenomenon is that the first experiments are wrong. The second experiments proved that the phenomenon observed in the first experiment was nonexistent. The fact that we have six and a half instead of five on the average now brings up a new possibility, that there is such a thing as mental telepathy, but at a much lower level. It’s a different idea, because, if the thing was really there before, having improved the methods of experiment, the phenomenon would still be there. It would still be fifteen cards. Why is it down to six and a half? Because the technique improved. Now it still is that the six and a half is a little bit higher than the average of statistics, and various people criticized it more subtly and noticed a couple of other slight effects which might account for the results.

It turned out that people would get tired during the tests, according to the professor. The evidence showed that they were getting a little bit lower on the average number of agreements. Well, if you take out the cases that are low, the laws of statistics don’t work, and the average is a little higher than the five, and so on. So if the man was tired, the last two or three were thrown away. Things of this nature were improved still further. The results were that mental telepathy still exists, but this time at 5.1 on the average, and therefore all the experiments which indicated 6.5 were false. Now what about the five? . . . Well, we can go on forever, but the point is that there are always errors in experiments that are subtle and unknown. But the reason that I do not believe that the researchers in mental telepathy have led to a demonstration of its existence is that as the techniques were improved, the phenomenon got weaker. In short, the later experiments in every case disproved all the results of the former experiments. If remembered that way, then you can appreciate the situation.”

We must refine our process for probing and experimenting if we’re to get at real truth, always watching out for little troubles. Otherwise, we torture the world so that our results fit our expectations. If we carefully refine and re-test and the effect gets weaker all the time, it’s likely to not be true, or at least not to the magnitude originally hoped for.

The fourth trick is to ask the right question, which is not “Could this be the case?” but “Is this actually the case?” Many get so caught up with the former that they forget to ask the latter:

“That brings me to the fourth kind of attitude toward ideas, and that is that the problem is not what is possible. That’s not the problem. The problem is what is probable, what is happening.

It does no good to demonstrate again and again that you can’t disprove that this could be a flying saucer. We have to guess ahead of time whether we have to worry about the Martian invasion. We have to make a judgment about whether it is a flying saucer, whether it’s reasonable, whether it’s likely. And we do that on the basis of a lot more experience than whether it’s just possible, because the number of things that are possible is not fully appreciated by the average individual. And it is also not clear, then, to them how many things that are possible must not be happening. That it’s impossible that everything that is possible is happening. And there is too much variety, so most likely anything that you think of that is possible isn’t true. In fact that’s a general principle in physics theories: no matter what a guy thinks of, it’s almost always false. So there have been five or ten theories that have been right in the history of physics, and those are the ones we want. But that doesn’t mean that everything’s false. We’ll find out.”

The fifth trick is a very, very common one, even 50 years after Feynman pointed it out. You cannot judge the probability of something happening after it’s already happened. That’s cherry-picking. You have to run the experiment forward for it to mean anything:

“A lot of scientists don’t even appreciate this. In fact, the first time I got into an argument over this was when I was a graduate student at Princeton, and there was a guy in the psychology department who was running rat races. I mean, he has a T-shaped thing, and the rats go, and they go to the right, and the left, and so on. And it’s a general principle of psychologists that in these tests they arrange so that the odds that the things that happen by chance is small, in fact, less than one in twenty. That means that one in twenty of their laws is probably wrong. But the statistical ways of calculating the odds, like coin flipping if the rats were to go randomly right and left, are easy to work out.

This man had designed an experiment which would show something which I do not remember, if the rats always went to the right, let’s say. He had to do a great number of tests, because, of course, they could go to the right accidentally, so to get it down to one in twenty by odds, he had to do a number of them. And it’s hard to do, and he did his number. Then he found that it didn’t work. They went to the right, and they went to the left, and so on. And then he noticed, most remarkably, that they alternated, first right, then left, then right, then left. And then he ran to me, and he said, “Calculate the probability for me that they should alternate, so that I can see if it is less than one in twenty.” I said, “It probably is less than one in twenty, but it doesn’t count.”

He said, “Why?” I said, “Because it doesn’t make any sense to calculate after the event. You see, you found the peculiarity, and so you selected the peculiar case.”

The fact that the rat directions alternate suggests the possibility that rats alternate. If he wants to test this hypothesis, one in twenty, he cannot do it from the same data that gave him the clue. He must do another experiment all over again and then see if they alternate. He did, and it didn’t work.”
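Feynman’s point about testing a hypothesis on the same data that suggested it is easy to demonstrate with a simulation. In this sketch (the rat counts and repetition numbers are invented for illustration), random “rats” turn left or right, we form a hypothesis from whatever pattern each dataset happens to show, and then we check that hypothesis both on the same data and on a fresh, independent run:

```python
import random

random.seed(42)  # reproducible illustration

def rat_turns(n=30):
    """One experiment: a rat makes n random left/right turns."""
    return [random.choice("LR") for _ in range(n)]

def majority_side(turns):
    """The side this particular dataset happens to favor."""
    return "L" if turns.count("L") >= len(turns) / 2 else "R"

post_hoc, replicated = [], []
for _ in range(1000):
    data = rat_turns()
    side = majority_side(data)      # hypothesis suggested by the data itself
    post_hoc.append(data.count(side) / len(data))    # "tested" on same data
    fresh = rat_turns()             # an independent, new experiment
    replicated.append(fresh.count(side) / len(fresh))

post_hoc_mean = sum(post_hoc) / len(post_hoc)
replicated_mean = sum(replicated) / len(replicated)
print(post_hoc_mean)    # well above 0.5: an artifact of selecting the pattern
print(replicated_mean)  # near 0.5: the "effect" vanishes on fresh data
```

On the same data the “preference” looks real every time; on the replication it collapses back to chance. That is why the psychologist had to run his rats all over again.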

The sixth trick is one that’s familiar to almost all of us, yet almost all of us forget about every day: the plural of anecdote is not data. We must use proper statistical sampling to know whether or not we know what we’re talking about:

“The next kind of technique that’s involved is statistical sampling. I referred to that idea when I said they tried to arrange things so that they had one in twenty odds. The whole subject of statistical sampling is somewhat mathematical, and I won’t go into the details. The general idea is kind of obvious. If you want to know how many people are taller than six feet tall, then you just pick people out at random, and you see that maybe forty of them are more than six feet so you guess that maybe everybody is. Sounds stupid.

Well, it is and it isn’t. If you pick the hundred out by seeing which ones come through a low door, you’re going to get it wrong. If you pick the hundred out by looking at your friends, you’ll get it wrong, because they’re all in one place in the country. But if you pick out a way that as far as anybody can figure out has no connection with their height at all, then if you find forty out of a hundred, then in a hundred million there will be more or less forty million. How much more or how much less can be worked out quite accurately. In fact, it turns out that to be more or less correct to 1 percent, you have to have 10,000 samples. People don’t realize how difficult it is to get the accuracy high. For only 1 or 2 percent you need 10,000 tries.”
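Feynman’s figure of 10,000 samples for roughly 1 percent accuracy follows from the standard error of a sample proportion, sqrt(p(1 − p)/n). A quick back-of-the-envelope check, using his hypothetical 40 percent figure:

```python
import math

def proportion_standard_error(p, n):
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

# Feynman's hypothetical: about 40 percent of people are over six feet.
# With 10,000 samples, the uncertainty is about half a percentage point:
print(round(proportion_standard_error(0.40, 10_000) * 100, 2))  # → 0.49

# With only 100 samples, it is ten times larger:
print(round(proportion_standard_error(0.40, 100) * 100, 2))     # → 4.9
```

Note the square-root law hiding in the formula: cutting the error in half takes four times as many samples, which is why, as Feynman says, people don’t realize how difficult it is to get the accuracy high.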

The last trick is to realize that many errors people make simply come from lack of information. They don’t even know they’re missing the tools they need. This can be a very tough one to guard against—it’s hard to know when you’re missing information that would change your mind—but Feynman gives the simple case of astrology to prove the point:

“Now, looking at the troubles that we have with all the unscientific and peculiar things in the world, there are a number of them which cannot be associated with difficulties in how to think, I think, but are just due to some lack of information. In particular, there are believers in astrology, of which, no doubt, there are a number here. Astrologists say that there are days when it’s better to go to the dentist than other days. There are days when it’s better to fly in an airplane, for you, if you are born on such a day and such and such an hour. And it’s all calculated by very careful rules in terms of the position of the stars. If it were true it would be very interesting. Insurance people would be very interested to change the insurance rates on people if they follow the astrological rules, because they have a better chance when they are in the airplane. Tests to determine whether people who go on the day that they are not supposed to go are worse off or not have never been made by the astrologers. The question of whether it’s a good day for business or a bad day for business has never been established. Now what of it? Maybe it’s still true, yes.

On the other hand, there’s an awful lot of information that indicates that it isn’t true. Because we have a lot of knowledge about how things work, what people are, what the world is, what those stars are, what the planets are that you are looking at, what makes them go around more or less, where they’re going to be in the next 2,000 years is completely known. They don’t have to look up to find out where it is. And furthermore, if you look very carefully at the different astrologers they don’t agree with each other, so what are you going to do? Disbelieve it. There’s no evidence at all for it. It’s pure nonsense.

The only way you can believe it is to have a general lack of information about the stars and the world and what the rest of the things look like. If such a phenomenon existed it would be most remarkable, in the face of all the other phenomena that exist, and unless someone can demonstrate it to you with a real experiment, with a real test, took people who believe and people who didn’t believe and made a test, and so on, then there’s no point in listening to them.”

 

***

Conclusion

Knowing something is valuable. The more you understand about how the world works, the more options you have for dealing with the unexpected and the better you can create and capitalize on opportunities. The Feynman Learning Technique is a great method to develop mastery over sets of information. Once you do, the knowledge becomes a powerful tool at your disposal.

But as Feynman himself showed, being willing and able to question your knowledge and the knowledge of others is how you keep improving. Learning is a journey.

If you want to learn more about Feynman’s ideas and teachings, we recommend:

Surely You’re Joking, Mr. Feynman!: Adventures of a Curious Character

The Pleasure of Finding Things Out: The Best Short Works of Richard Feynman

What Do You Care What Other People Think?: Further Adventures of a Curious Character

Solve Problems Before They Happen by Developing an “Inner Sense of Captaincy”

Too often we reward people who solve problems while ignoring those who prevent them in the first place. This incentivizes creating problems. According to poet David Whyte, the key to taking initiative and being proactive is viewing yourself as the captain of your own “voyage of work.”

If we want to get away from glorifying those who run around putting out fires, we need to cultivate an organizational culture that empowers everyone to act responsibly at the first sign of smoke.

How do we make that shift?

We can start by looking at ourselves and how we consider the voyage that is our work. When do we feel fulfillment? Is it when we swoop in to save the day and everyone congratulates us? It’s worth asking why, if we think something is worth saving, we don’t put more effort into protecting it ahead of time.

In Crossing the Unknown Sea, poet David Whyte suggests that we should view our work as a lifelong journey. In particular, he frames it as a sea voyage in which the greatest rewards lie in what we learn through the process, as opposed to the destination.

Like a long sea voyage, the nature of our work is always changing. There are stormy days and sunny ones. There are days involving highs of delight and lows of disaster. All of this happens against the backdrop of events in our personal lives and the wider world with varying levels of influence.

On a voyage, you need to look after your boat. There isn’t always time to solve problems after they happen. You need to learn how to preempt them or risk a much rougher journey—or even the end of it.

Whyte refers to the practice of taking control of your voyage as “developing an inner sense of captaincy,” offering a metaphor we can all apply to our work. Developing an inner sense of captaincy is good for both us and the organizations we work in. We end up with more agency over our own lives, and our organizations waste fewer resources. Whyte’s story of how he learned this lesson highlights why that’s the case.

***

A moment of reckoning

Any life, and any life’s work, is a hidden journey, a secret code, deciphered in fits and starts. The details only given truth by the whole, and the whole dependent on the detail.

Shortly after graduating, Whyte landed a dream job working as a naturalist guide on board a ship in the Galapagos Islands. One morning, he awoke and could tell at once that the vessel had drifted from its anchorage during the night. Whyte leaped up to find the captain fast asleep and the boat close to crashing into a cliff. Taking control of it just in time, he managed to steer himself and the other passengers back to safety—right as the captain awoke. Though they were safe, he was profoundly shaken both by the near miss and the realization that their leader had failed.

At first, Whyte’s reaction to the episode was to feel a smug contempt for the captain who had “slept through not only the anchor dragging but our long, long, nighttime drift.” The captain had failed to predict the problem or notice when it started. If Whyte hadn’t awakened, everyone on the ship could have died.

But something soon changed in his perspective. Whyte knew the captain was new to that particular boat and far less familiar with it than he and the other crew member were. Every boat has its quirks, and experience counts for more than seniority when it comes to knowing them. He’d also felt sure the night before that they needed to put down a second anchor and knew they “should have dropped another anchor without consultation, as crews are wont to do when they do not want to argue with their captain. We should have woken too.” He writes that “this moment of reckoning under the lava cliff speaks to the many dangerous arrivals in a life of work and to the way we must continually forge our identities through our endeavors.”

Whyte’s experience contains lessons with wide applicability for those of us on dry land. The idea of having an inner sense of captaincy means understanding the overarching goals of your work and being willing to make decisions that support them, even if something isn’t strictly your job or you might not get rewarded for it, or sometimes even if you don’t have permission.

When you play the long game, you’re thinking of the whole voyage, not whether you’ll get a pat on the back today.

***

Skin in the game

It’s all too easy to buy into the view that leaders have full responsibility for everything that happens, especially disasters. Sometimes in our work, when we’re not in a leadership position, we see a potential problem or an unnoticed existing one but choose not to take action. Instead, we stick to doing whatever we’ve been told to do because that feels safer. If it’s important, surely the person in charge will deal with it. If not, that’s their problem. Anyway, there’s already more than enough to do.

Leaders give us a convenient scapegoat when things go wrong. However, when we assume all responsibility lies with them, we don’t learn from our mistakes. We don’t have “our own personal compass, a direction, a willingness to meet life unmediated by any cushioning parental presence.”

At some point, things do become our problem. No leader can do everything and see everything. The more you rise within an organization, the more you need to take initiative. If a leader can’t rely on their subordinates to take action when they see a potential problem, everything will collapse.

When we’ve been repeatedly denied agency by poor leadership and seen our efforts fall flat, we may sense we lack control. Taking action no longer feels natural. However, if we view our work as a voyage that helps us change and grow, it’s obvious why we need to overcome learned helplessness. We can’t abdicate all responsibility and blame other people for what we chose to ignore in the first place (as Whyte puts it, “The captain was there in all his inherited and burdened glory and thus convenient for the blame”). By understanding how our work helps us change and grow, we develop skin in the game.

On a ship, everyone is in it together. If something goes wrong, they’re all at risk. And it may not be easy or even possible to patch up a serious problem in the middle of the sea. As a result, everyone needs to pay attention and act on anything that seems amiss. Everyone needs to take responsibility for what happens, as Whyte goes on to detail:

“No matter that the inherited world of the sea told us that the captain is the be-all and end-all of all responsibility, we had all contributed to the lapse, the inexcusable lapse. The edge is no place for apportioning blame. If we had merely touched that cliff, we would have been for the briny deep, crew and passengers alike. The undertow and the huge waves lacerating against that undercut, barnacle-encrusted fortress would have killed us all.”

Having an inner sense of captaincy means viewing ourselves as the ones in charge of our voyage of work. It means not acting as if there are certain areas where we are incapacitated, or ignoring potential problems, just because someone else has a particular title.

***

Space and support to create success

Developing an inner sense of captaincy is not about compensating for an incompetent leader—nor does it mean thinking we always know best. The better someone is at leading people, the more they create the conditions for their team to take initiative and be proactive about preventing problems. They show by example that they inhabit a state rather than a particular role. A stronger leader can mean a more independent team.

Strong leaders instill autonomy by teaching and supervising processes with the intention of eventually not needing to oversee them. Captaincy is a way of being. It is embodied in the role of captain, but it is available to everyone. For a crew to develop it, the captain needs to step back a little and encourage them to take responsibility for outcomes. They can test themselves bit by bit, building up confidence. When people feel like it’s their responsibility to contribute to overall success, not just perform specific tasks, they can respond to the unexpected without waiting for instructions. They become ever more familiar with what their organization needs to stay healthy and use second-order thinking to spot potential problems before they happen.

Whyte realized that the near-disaster had a lot to do with their previous captain, Raphael. He was too good at his job, being “preternaturally alert and omnipresent, appearing on deck at the least sign of trouble.” The crew felt comfortable, knowing they could always rely on Raphael to handle any problems. Although this worked well at the time, once he left and they were no longer in such safe hands they were unused to taking initiative. Whyte explains:

Raphael had so filled his role of captain to capacity that we ourselves had become incapacitated in one crucial area: we had given up our own inner sense of captaincy. Somewhere inside of us, we had come to the decision that ultimate responsibility lay elsewhere.

Being a good leader isn’t about making sure your team doesn’t experience failure. Rather, it’s giving everyone the space and support to create success.

***

The voyage of work

Having an inner sense of captaincy means caring about outcomes, not credit or blame. Had Whyte dropped a second anchor the night before the near miss, he would have been doing something that, ideally, no one other than the crew, or even just him, would have known about. The captain and passengers would have enjoyed an untroubled night and woken none the wiser.

If we prioritize getting good outcomes, our focus shifts from solving existing problems to preventing problems from happening in the first place. We put down a second anchor so the boat doesn’t drift, rather than steering it to safety when it’s about to crash. After all, we’re on the boat too.

Another good comparison is picking up litter. The less connected to and responsible for a place we feel, the less likely we are to pick up trash lying on the ground. In our homes, we’re almost certain to pick it up. If we’re walking along our street or in our neighborhood, it’s a little less likely. In a movie theater or bar when we know it’s someone’s job to pick up trash, we’re less likely to bother. What’s the equivalent to leaving trash on the ground in your job?

Most organizations don’t incentivize prevention because it’s invisible. Who knows what would have happened? How do you measure something that doesn’t exist? After all, problem preventers seem relaxed. They often go home on time. They take lots of time to think. We don’t know how well they would deal with conflict, because they never seem to experience any. The invisibility of the work they do to prevent problems in the first place makes it seem like their job isn’t challenging.

When we promote problem solvers, we incentivize having problems. We fail to unite everyone towards a clear goal. Because most organizations reward problem solvers, it can seem like a better idea to let things go wrong, then fix them after. That’s how you get visibility. You run from one high-level meeting to the next, reacting to one problem after another.

It’s great to have people who can solve problems, but it’s better not to have the problems in the first place. Solving problems generally requires more resources than preventing them, not to mention the toll it takes on our stress levels. As the saying goes, an ounce of prevention is worth a pound of cure.

An inner sense of captaincy on our voyage of work is good for us and for our organizations. It changes how we think about preventing problems. It becomes a part of an overall voyage, an opportunity to build courage and face fears. We become more fully ourselves and more in touch with our nature. Whyte writes that “having the powerful characteristics of captaincy or leadership of any form is almost always an outward sign of a person inhabiting their physical body and the deeper elements of their own nature.”

12 Life Lessons From Mathematician and Philosopher Gian-Carlo Rota

The mathematician and philosopher Gian-Carlo Rota spent much of his career at MIT, where students adored him for his engaging, passionate lectures. In 1996, Rota gave a talk entitled “Ten Lessons I Wish I Had Been Taught,” which contains valuable advice for making people pay attention to your ideas.

Many mathematicians regard Rota as single-handedly responsible for turning combinatorics into a significant field of study. He specialized in functional analysis, probability theory, phenomenology, and combinatorics. His 1996 talk, “Ten Lessons I Wish I Had Been Taught,” was later printed in his book, Indiscrete Thoughts.

Rota began by explaining that the advice we give others is always the advice we need to follow most. Seeing as it was too late for him to follow certain lessons, he decided he would share them with the audience. Here, we summarize twelve insights from Rota’s talk—which are fascinating and practical, even if you’re not a mathematician.

***

Every lecture should make only one point

“Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards.”

When we wish to communicate with people—in an article, an email to a coworker, a presentation, a text to a partner, and so on—it’s often best to stick to making one point at a time. This matters all the more if we’re trying to get our ideas across to a large audience.

If we make one point well enough, we can be optimistic about people understanding and remembering it. But if we try to fit too much in, “the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.”

***

Never run over time

“After fifty minutes (one microcentury as von Neumann used to say), everybody’s attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute over time can destroy the best of lectures.”

Rota considered running over the allotted time slot to be the worst thing a lecturer could do. Our attention spans are finite. After a certain point, we stop taking in new information.

In your work, it’s important to respect the time and attention of others. Put in the extra work required for brevity and clarity. Don’t expect them to find what you have to say as interesting as you do. Condensing and compressing your ideas both ensures you truly understand them and makes them easier for others to remember.

***

Relate to your audience

“As you enter the lecture hall, try to spot someone in the audience whose work you have some familiarity with. Quickly rearrange your presentation so as to manage to mention some of that person’s work.”

Reciprocity is remarkably persuasive. Sometimes, how people respond to your work has as much to do with how you respond to theirs as it does with the work itself. If you want people to pay attention to your work, always give before you take and pay attention to theirs first. Show that you see them and appreciate them. Rota explains that “everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.”

The less acknowledgment someone’s work has received, the more of an impact your attention is likely to have. A small act of encouragement can be enough to deter someone from quitting. With characteristic humor, Rota recounts:

“I have always felt miffed after reading a paper in which I felt I was not being given proper credit, and it is safe to conjecture that the same happens to everyone else. One day I tried an experiment. After writing a rather long paper, I began to draft a thorough bibliography. On the spur of the moment I decided to cite a few papers which had nothing whatsoever to do with the content of my paper to see what might happen.

Somewhat to my surprise, I received letters from two of the authors whose papers I believed were irrelevant to my article. Both letters were written in an emotionally charged tone. Each of the authors warmly congratulated me for being the first to acknowledge their contribution to the field.”

***

Give people something to take home

“I often meet, in airports, in the street, and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.”

When we have a conversation, read a book, or listen to a talk, the sad fact is that we are unlikely to remember much of it even a few hours later, let alone years after the event. Even if we enjoyed and valued it, only a small part will stick in our memory.

So when you’re communicating with people, try to be conscious about giving them something to take home. Choose a memorable line or idea, create a visual image, or use humor in your work.

For example, in The Righteous Mind, Jonathan Haidt repeats many times that the mind is like a tiny rider on a gigantic elephant. The rider represents controlled mental processes, while the elephant represents automatic ones. It’s a distinctive image, one readers are quite likely to take home with them.

***

Make sure the blackboard is spotless

“By starting with a spotless blackboard, you will subtly convey the impression that the lecture they are about to hear is equally spotless.”

Presentation matters. The way our work looks influences how people perceive it. Taking the time to clean our equivalent of a blackboard signals that we care about what we’re doing and consider it important.

In “How To Spot Bad Science,” we noted that one possible sign of bad science is that the research is presented in a thoughtless, messy way. Most researchers who take their work seriously will put in the extra effort to ensure it’s well presented.

***

Make it easy for people to take notes

“What we write on the blackboard should correspond to what we want an attentive listener to take down in his notebook. It is preferable to write slowly and in a large handwriting, with no abbreviations. Those members of the audience who are taking notes are doing us a favor, and it is up to us to help them with their copying.”

If a lecturer is using slides with writing on them instead of a blackboard, Rota adds that they should give people time to take notes. This might mean repeating themselves in a few different ways so each slide takes longer to explain (which ties in with the idea that every lecture should make only one point). Moving too fast with the expectation that people will look at the slides again later is “wishful thinking.”

When we present our work to people, we should make it simple for them to understand our ideas on the spot. We shouldn’t expect them to revisit it later. They might forget. And even if they don’t, we won’t be there to answer questions, take feedback, and clear up any misunderstandings.

***

Share the same work multiple times

Rota learned this lesson when he bought Collected Papers, a volume compiling the publications of mathematician Frederic Riesz. He noted that “the editors had gone out of their way to publish every little scrap Riesz had ever published.” Putting them all in one place revealed that he had published the same ideas multiple times:

Riesz would publish the first rough version of an idea in some obscure Hungarian journal. A few years later, he would send a series of notes to the French Academy’s Comptes Rendus in which the same material was further elaborated. A few more years would pass, and he would publish the definitive paper, either in French or in English.

Riesz would also develop his ideas while lecturing. Explaining the same subject again and again for years allowed him to keep improving it until he was ready to publish. Rota notes, “No wonder the final version was perfect.”

In our work, we might feel as if we need to have fresh ideas all of the time and that anything we share with others needs to be a finished product. But sometimes we can do our best work through an iterative process.

For example, a writer might start by sharing an idea as a tweet. This gets a good response, and the replies help them expand it into a blog post. From there they keep reworking the post over several years, making it longer and more definitive each time. They give a talk on the topic. Eventually, it becomes a book.

Award-winning comedian Chris Rock prepares for global tours by performing dozens of times in small venues for a handful of people. Each performance is an experiment to see which jokes land, which ones don’t, and which need tweaking. By the time he’s performed a routine forty or fifty times, making it better and better, he’s ready to share it with huge audiences.

Another reason to share the same work multiple times is that different people will see it each time and understand it in different ways:

“The mathematical community is split into small groups, each one with its own customs, notation, and terminology. It may soon be indispensable to present the same result in several versions, each one accessible to a specific group; the price one might have to pay otherwise is to have our work rediscovered by someone who uses a different language and notation, and who will rightly claim it as his own.”

Sharing your work multiple times thus has two benefits. The first is that the feedback allows you to improve and refine your work. The second is that you increase the chance of your work being definitively associated with you. If the core ideas are strong enough, they’ll shine through even in the initial incomplete versions.

***

You are more likely to be remembered for your expository work

“Allow me to digress with a personal reminiscence. I sometimes publish in a branch of philosophy called phenomenology. . . . It so happens that the fundamental treatises of phenomenology are written in thick, heavy philosophical German. Tradition demands that no examples ever be given of what one is talking about. One day I decided, not without serious misgivings, to publish a paper that was essentially an updating of some paragraphs from a book by Edmund Husserl, with a few examples added. While I was waiting for the worst at the next meeting of the Society for Phenomenology and Existential Philosophy, a prominent phenomenologist rushed towards me with a smile on his face. He was full of praise for my paper, and he strongly encouraged me to further develop the novel and original ideas presented in it.”

Rota realized that many of the mathematicians he admired the most were known more for their work explaining and building upon existing knowledge, as opposed to their entirely original work. Their extensive knowledge of their domain meant they could expand a little beyond their core specialization and synthesize charted territory.

For example, David Hilbert was best known for a textbook on integral equations which was “in large part expository, leaning on the work of Hellinger and several other mathematicians whose names are now forgotten.” William Feller was known for an influential treatise on probability, with few recalling his original work in convex geometry.

One of our core goals at Farnam Street is to share the best of what other people have already figured out. We all want to make original and creative contributions to the world. But the best ideas that are already out there are quite often much more useful than what we can contribute from scratch.

We should never be afraid to stand on the shoulders of giants.

***

Every mathematician has only a few tricks

“. . . mathematicians, even the very best, also rely on a few tricks which they use over and over.”

Upon reading the complete works of certain influential mathematicians, such as David Hilbert, Rota realized that they always used the same tricks again and again.

We don’t need to be amazing at everything to do high-quality work. The smartest and most successful people are often only good at a few things—or even one thing. Their secret is that they maximize those strengths and don’t get distracted. They define their circle of competence and, as long as there’s room to double down on what’s already going well, they don’t attempt things they’re not good at.

It might seem as if this lesson contradicts the previous one (you are more likely to be remembered for your expository work), but there’s a key difference. If you’ve hit diminishing returns with improvements to what’s already inside your circle of competence, it makes sense to experiment with things you already have an aptitude for (or a strong suspicion you might), but which you haven’t yet made your focus.

***

Don’t worry about small mistakes

“Once more let me begin with Hilbert. When the Germans were planning to publish Hilbert’s collected papers and to present him with a set on the occasion of one of his later birthdays, they realized that they could not publish the papers in their original versions because they were full of errors, some of them quite serious. Thereupon they hired a young unemployed mathematician, Olga Taussky-Todd, to go over Hilbert’s papers and correct all mistakes. Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. . . . At last, on Hilbert’s birthday, a freshly printed set of Hilbert’s collected papers was presented to the Geheimrat. Hilbert leafed through them carefully and did not notice anything.”

Rota goes on to say: “There are two kinds of mistakes. There are fatal mistakes that destroy a theory; but there are also contingent ones, which are useful in testing the stability of a theory.”

Mistakes are either contingent or fatal. Contingent mistakes don’t completely ruin what you’re working on; fatal ones do. Building in a margin of safety (such as having a bit more time or funding than you expect to need) turns many fatal mistakes into contingent ones.

Contingent mistakes can even be useful. When details change, but the underlying theory is still sound, you know which details not to sweat.

***

Use Feynman’s method for solving problems

“Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say: ‘How did he do it? He must be a genius!’”

***

Write informative introductions

“Nowadays, reading a mathematics paper from top to bottom is a rare event. If we wish our paper to be read, we had better provide our prospective readers with strong motivation to do so. A lengthy introduction, summarizing the history of the subject, giving everybody his due, and perhaps enticingly outlining the content of the paper in a discursive manner, will go some of the way towards getting us a couple of readers.”

As with the lesson “never run over time,” respect that people have limited time and attention. Introductions are all about explaining what a piece of work is going to be about, what its purpose is, and why someone should be interested in it.

A job posting is an introduction to a company. The description on a calendar invite to a meeting is an introduction to that meeting. An about page is an introduction to an author. The subject line on a cold email is an introduction to that message. A course curriculum is an introduction to a class.

Putting extra effort into our introductions will help other people make an accurate assessment of whether they want to engage with the full thing. It will prime their minds for what to expect and answer some of their questions.

***

If you’re interested in learning more, check out Rota’s “10 Lessons of an MIT Education.”

The Best-Case Outcomes Are Statistical Outliers

There’s nothing wrong with hoping for the best. But the best-case scenario is rarely the one that comes to pass. Being realistic about what is likely to happen positions you for a range of possible outcomes and gives you peace of mind.

We dream about achieving the best-case outcomes, but they are rare. We can’t forget to acknowledge all the other possibilities of what may happen if we want to position ourselves for success.

“Hoping for the best, prepared for the worst, and unsurprised by anything in between.” —Maya Angelou

It’s okay to hope for the best—to look at whatever situation you’re in and say, “This time I have it figured out. This time it’s going to work.” First, having some degree of optimism is necessary for trying anything new. If we weren’t overconfident, we’d never have the guts to do something as risky and unlikely to succeed as starting a business, entering a new relationship, or sending that cold email. Anticipating that a new venture will work helps you overcome obstacles and make it work.

Second, sometimes we do have it figured out. Sometimes our solutions do make things better.

Even when the best-case scenario comes to pass, however, it rarely unfolds exactly as planned. Some choices create unanticipated consequences that we have to deal with. We may encounter unexpected roadblocks due to a lack of information. Or the full implementation of all our ideas and aspirations might take a lot longer than we planned for.

When we look back over history, we rarely find best-case outcomes.

Sure, sometimes they happen—maybe more than we think, given not every moment of the past is recorded. But let’s be honest: even historical wins, like developing the polio vaccine and figuring out how to produce clean drinking water, were not all smooth sailing. There are still people who are unable or unwilling to get the polio vaccine. And there are still many people in the world, even in developed countries like Canada, who don’t have access to clean drinking water.

The best-case outcomes in these situations—a world without polio and a world with globally available clean drinking water—have not happened, despite the existence of reliable, proven technology that can make these outcomes a reality.

There are a lot of reasons why, in these situations, we haven’t achieved the best-case outcomes. Furthermore, situations like these are not unusual. We rarely achieve the dream. The more complicated a situation, the more people it involves, and the more variables and dependencies it has, the less likely it is that everything will work out.

If we narrow our scope and say, for example, the best-case scenario for this Friday night is that we don’t burn the pizza, we can all agree on a movie, and the power doesn’t go out, it’s more likely we’ll achieve it. There are fewer variables, so there’s a greater chance that this specific scenario will come to pass.
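The intuition here is simple multiplication of probabilities: when an outcome requires many things to go right, the chance that all of them do shrinks with every step added. A minimal sketch in Python, assuming independent steps with made-up success rates:

```python
# The chance that *every* independent step goes right shrinks
# multiplicatively with the number of steps.
def p_all_succeed(p_each: float, n_steps: int) -> float:
    return p_each ** n_steps

# A simple Friday night: three things that each go well 95% of the time.
print(f"{p_all_succeed(0.95, 3):.2f}")   # ~0.86

# A complex venture: twenty dependencies, each still 95% reliable.
print(f"{p_all_succeed(0.95, 20):.2f}")  # ~0.36
```

Even with generous odds on each individual step, the best case for the complex venture is already the exception rather than the rule; and independence is an optimistic assumption, since real dependencies tend to make things worse.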

The problem is that most of us plan as if we live in an easy-to-anticipate Friday night kind of world. We don’t.

There are no magic bullets for the complicated challenges facing society. There is only hard work, planning for the wide spectrum of human behavior, adjusting to changing conditions, and perseverance. There are many possible outcomes for any given endeavor and only one that we consider the best case.

That is why the best-case outcomes are statistical outliers—they are only one possibility in a sea of many. They might come to pass, but you’re much better off preparing for the likelihood that they won’t.

Our expectations matter. Anticipating a range of outcomes can make us feel better. If we expect the best and it happens, we’re merely satisfied. If we expect less and something better happens, we’re delighted.

Knowing that the future is probably not going to be all sunshine and roses allows you to prepare for a variety of more likely outcomes, including some of the bad ones. Sometimes, too, when the worst-case scenario happens, it’s actually a huge relief. We realize it’s not all bad, we didn’t die, and we can manage if it happens again. Preparation and knowing you can handle a wide spectrum of possible challenges is how you get the peace of mind to be unsurprised by anything in between the worst and the best.

The High Price of Mistrust

When we can’t trust each other, nothing works. As we participate in our communities less and less, we find it harder to feel other people are trustworthy. But if we can bring back a sense of trust in the people around us, the rewards are incredible.

There are costs to falling community participation. Rather than simply lamenting the loss of a past golden era (as people have done in every era), Harvard political scientist Robert D. Putnam explains these costs, as well as how we might bring community participation back.

First published twenty years ago, Bowling Alone is an exhaustive, hefty work. In its 544 pages, Putnam marshaled mountains of data to support his thesis that the previous few decades had seen Americans retreat en masse from public life. Putnam argued Americans had become disconnected from their wider communities, as evidenced by changes such as a decline in civic engagement and dwindling membership rates for groups such as bowling leagues and PTAs.

Though aspects of Bowling Alone are a little dated today (“computer-mediated communication” isn’t a phrase you’re likely to have heard recently), a quick glance at 2021’s social landscape would suggest many of the trends Putnam described have only continued and apply in other parts of the world too.

Right now, polarization and social distancing have forced us apart from any sense of community to a degree that can seem irresolvable.

Will we ever bowl in leagues alongside near strangers and turn them into friends again? Will we ever bowl again at all, even if alone, or will those gleaming lanes, too-tight shoes, and overpriced sodas fade into a distant memory we recount to our children?

The idea of going into a public space for a non-essential reason can feel incredibly out of reach for many of us right now. And who knows how spaces like bowling alleys will survive in the long run without the social scenes that fueled them. Now is a perfect time to revisit Bowling Alone to see what it can still teach us, because many of its warnings and lessons are perhaps more relevant now than at its time of publication.

One key lesson we can derive from Bowling Alone is that the less we trust each other—something which is both a cause and consequence of declining community engagement—the more it costs us. Mistrust is expensive.

We need to trust the people around us in order to live happy, productive lives. If we don’t trust them, we end up having to find costly ways to formalize our relationships. Even if we’re not engaged with other people on a social or civic level, we still have to transact with them on an economic one. We still have to walk along the same streets, send our children to the same schools, and spend afternoons in the same parks.

To live our lives freely, we need to find ways to trust that other people won’t hurt us, rip us off, or otherwise harm us. Otherwise we may lose something too precious to put a price tag on.

***

No person is an island

As community engagement declines, Putnam refers to the thing we are losing as “social capital,” meaning the sum of our connections with other individuals and the benefits they bring us.

Being part of a social network gives you access to all sorts of value. Putnam explains, “Just as a screwdriver (physical capital) or a college education (human capital) can increase productivity (both individual and collective), so too can social contacts affect the productivity of individuals and groups.” For example, knowing the right people can help you find a job where your skills are well utilized. If you don’t know many people, you might struggle to find work and end up doing something you’re overqualified for or be unemployed for a while.

To give another example, if you’re friends with other parents in your local neighborhood, you can coordinate with them to share childcare responsibilities. If you’re not, you’re likely to end up paying for childcare or being more limited in what you can do when your kids are home from school.

Both individuals and groups have social capital. Putnam also explains that “social capital also can have externalities that affect the wider community, so that not all of the costs and benefits of social connections accrue to the person making the contact . . . even a poorly connected individual may derive some of the spillover benefits from living in a well-connected community.” A well-connected community is usually a safer community, and the safety extends, at least partly, to the least connected members.

For example, the more neighbors know each other, the more they notice when something on the street is out of the norm and potentially harmful. That observation benefits everyone on the street—especially the most vulnerable people.

Having social capital is valuable because it undergirds certain norms. Our connections to other people require and encourage us to behave in ways that maintain those connections. Being well-connected is both an outcome of following social norms and an incentive to follow them. We adhere to “rules of conduct” for the sake of our social capital.

Social capital enables us to trust other people. When we’re connected to many others, we develop a norm of “generalized reciprocity.” Putnam explains this as meaning “I’ll do this for you without expecting anything specific back from you, in the confident expectation that someone else will do something for me down the road.” We can go for the delayed payoff that comes from being nice without an agenda. Generalized reciprocity makes all of our interactions with other people easier. It’s a form of trust.

Putnam goes on to write, “A society characterized by generalized reciprocity is more efficient than a distrustful society, for the same reason that money is more efficient than barter. If we don’t have to balance every exchange instantly, we can get a lot more accomplished. Trustworthiness lubricates social life.” Trust requires that we interact with the same people more than once, or at least think that we might.

Generalized reciprocity as a norm also enables us to work together to do things that benefit the whole group or even that don’t benefit us personally at all, rather than focusing on ourselves. If you live in a neighborhood with a norm of generalized reciprocity, you can do things like mowing a neighbor’s lawn for free because you know that when you need similar help, someone will come through. You can do things that wouldn’t make sense in an “every person for themselves” area.

Societies and groups with a norm of generalized reciprocity maintain that norm through “gossip and other valuable ways of cultivating reputation.”

When people are linked to each other, they know that news will spread if they deviate from norms. If one member of a bowling league cheats and another member notices, they’re likely to discuss it with others, and everyone will know to trust that member a little less. Knowing gossip will spread enables us to trust our perceptions of others, because if something were amiss we would have surely heard about it. It also nudges us towards behaving well—if something is amiss about us, others are sure to hear of that, too.

But with the decline of community participation comes the decline of trust. If you don’t know the people around you, how can you trust them? The more disconnected we are from each other, the less we can rely on each other to be good and nice. Without repeated interactions with the same people, we become suspicious of each other. This suspicion carries heavy costs.

***

Rising transaction costs

In economics, a “transaction cost” refers to the cost of making some sort of trade within a market. Transaction costs are the price we pay in order to exchange value. They’re in addition to the cost of producing or otherwise providing that value.

For example, when you make a credit card purchase in a shop, the shop likely pays a processing fee to the card company. It’s part of the cost of doing business with you. Another cost is that the shop needs people working in it to ensure you pay. They can’t just rely on you popping the right money in the till then leaving.

Putnam explains later in the book that being able to trust people as a result of a norm of generalized reciprocity in our social lives leads to reduced transaction costs. It means we can relax around other people and not be distracted by anything “from worrying whether you got back the right change from the clerk to double-checking that you locked the car door.” We can easily be honest if we know others will do the same.

With the decline of social capital comes rising transaction costs. We can’t rely on other people to treat us as they would like to be treated because we don’t know them and haven’t built the opportunities to engage in reciprocal relationships.

Much like trusting trustworthy people has great benefits, trusting untrustworthy people has enormous costs. No one likes being exploited or ripped off because they assumed good faith in the wrong person.

If we’re uncertain, we default to mistrust. You can see the endpoint of a loss of trust in societies and groups which must rely on the use or threat of force to get anything done because everyone is out to rip off everyone else.

At a certain point, transaction costs can cancel out the benefits of transacting at all. If lending a leaf blower to a neighbor requires a lawyer to set up a contract stipulating the terms of its use, then borrowing it doesn’t save them any money. They might as well hire someone or buy their own.

We don’t try new things when we can’t trust other people. So we have to find additional ways of making transactions work. One way we do this is through “the rule of law—formal contracts, courts, litigation, adjudication, and enforcement by the state.” During the period since the 1970s in which Putnam considers social capital to have declined, the legal profession grew faster than any other: “After 1970 the legal profession grew three times faster than the other professions as a whole.”

While we can’t attribute that solely to a decline in social capital, it seems clear that mistrusting each other makes us more likely to prefer to get things in writing. We are “forced to rely increasingly on formal institutions, and above all the law, to accomplish what we used to accomplish through informal networks reinforced by generalized reciprocity—that is, through social capital.”

***

The high price of mistrust

The cost of mistrust doesn’t just show up in the form of bills from lawyers. It poisons everything we do and further drives us apart.

Mistrust drives us to install remote monitoring software on our employees’ laptops and ask them to fill in reports on every tiny task to prove they’re not skiving off. It drives us to make excuses when a friend asks for help moving or a lift to the airport because no one was available last time we needed that same help. It drives us to begrudgingly buy a household appliance or tool we’ll only use once because we don’t even consider borrowing it from a neighbor.

Mistrust nudges us to peek at the search history of a partner or to cross-reference what a child says. It causes us to keep our belongings close in public, to double-lock the doors, to not let our kids play in the street, and a million other tiny changes.

Mistrust costs us time and money, sure. But it also costs us a little bit of our humanity. We are sociable animals, and seeing the people around us as a potential threat, even a small one, wears on us. Constant vigilance is exhausting. So is being under constant suspicion.

One lesson we can take from Bowling Alone is that anything we can do to increase trust between people will have tremendous knock-on benefits. Trust allows us to relax, delay gratification, and generally be nicer to everyone. It makes for a nicer day-to-day existence. We don’t need to spend so much time and money checking up on others. Ultimately, it’s worth investing in trust whenever possible, as opposed to investing in more ways of monitoring and controlling people.

That’s not to say that there was ever a golden utopia when everyone trusted everyone. People have always abused the trust of others. And people on the fringes of society have always been unfairly mistrusted and struggled to trust that others would act in good faith. Nonetheless, whenever we go to install some mechanism intended to replace trust, it’s worth asking if there’s a different way.

The ingredients for trust are simple. We need to repeatedly interact with the same people, know that others will warn us about their bad behavior, and feel secure in the knowledge we’ll be helped when and if we need it. At the same time, we need to know others will be warned if we behave badly and that everything we give to others will come back to us, perhaps multiplied.

If you want people to trust you, the best place to start is by trusting them. That isn’t always easy to do, especially if you’ve paid the price for it in the past. But it’s the best place to start. Then you need to combine it with repeat interactions, or the possibility thereof. In the iterated Prisoner’s Dilemma, a game that reveals how cooperation works, the best strategy to adopt is tit for tat. In the first round you cooperate, then in subsequent rounds do whatever the other player did last.
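The tit-for-tat strategy described above can be sketched as a tiny simulation. This is an illustrative toy, not code from any game theory library; the payoff values are the standard textbook ones for the Prisoner’s Dilemma, and the function names are made up for this example.

```python
# Iterated Prisoner's Dilemma: tit for tat opens by cooperating,
# then mirrors whatever the other player did in the previous round.
# Payoff values below are the standard textbook ones (an assumption).

PAYOFFS = {  # (my move, their move) -> (my score, their score)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit them
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A player who never extends trust."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees the other's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # A reacts to B's history, and vice versa
        b = strategy_b(moves_a)
        gain_a, gain_b = PAYOFFS[(a, b)]
        score_a += gain_a
        score_b += gain_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained trust pays best
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Two trusting players earn the full cooperative payoff every round, while tit for tat loses only the first round to a defector before cutting its losses. That mirrors the article’s point: extend trust first, but let repeat interactions punish those who abuse it.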

How might that play out in real life? If you want your employees to trust you, then you might start by trusting them—while also making it clear that you’re not going to fire them suddenly and you want them to stick around.

Mistrust is expensive. But trusting the wrong people can sometimes seem too risky. The lesson we can take from Bowling Alone is that building trust is absolutely worthwhile—and that the only way to do it is by finding ways to get out there and engage with other people.

We can create trust by contributing to existing communities and creating new ones. The more we show up and are willing to have faith in others, the more we’ll get back in return.

Why You Should Practice Failure

We learn valuable lessons when we experience failure and setbacks. Most of us wait for those failures to happen to us, however, instead of seeking them out. But deliberately making mistakes can give us the knowledge we need to more easily overcome obstacles in the future.

We learn from our mistakes. When we screw up and fail, we learn how not to handle things. We learn what not to do.

Failing is a byproduct of trying to succeed. We do our research, make our plans, get the necessary ingredients, and try to put it all together. Often, things don’t go as we wish. If we’re smart, we reflect on what happened and make note of where we could do better next time.

But how many of us make deliberate mistakes? How often do we try to fail in order to learn from it?

If we want to avoid costly mistakes in the future when the stakes are high, then making some now might be excellent preparation.

***

Practicing failure is a common practice for pilots. In 1932, at the dawn of the aviation age, Amelia Earhart described the value for all pilots of learning through deliberate mistakes. “The fundamental stunts taught to students are slips, stalls, and spins,” she says in her autobiography The Fun of It. “A knowledge of some stunts is judged necessary to good flying. Unless a pilot has actually recovered from a stall, has actually put his plane into a spin and brought it out, he cannot know accurately what those acts entail. He should be familiar enough with abnormal positions of his craft to recover without having to think how.”

For a pilot, stunting is a skill attained through practice. You go up in a plane and, for example, you change the angle of the wings to deliberately stall the craft. You prepare beforehand by learning what a stall is, what the critical variables you have to pay attention to are, and how other pilots address stalls. You learn the optimal response. But then you go up in the air and actually apply your knowledge. What’s easy and obvious on the ground, when you’re under little pressure, isn’t guaranteed to come to you when your plane loses lift and function at 10,000 feet. Deliberately stalling your plane, making a conscious mistake when you have prepared to deal with it, gives you the experience to react when a stall happens in a less controlled situation.

The first time your plane unexpectedly stops working in mid-flight is scary for any pilot. But those who have practiced in similar situations are far more likely to react appropriately. “An individual’s life on the ground or in the air may depend on a split second,” Earhart writes. “The slow response which results from seldom, if ever, having accomplished the combination of acts required in a given circumstance may be the deciding factor.” You don’t want the first stall to come at night in poor weather when you have your family in the cabin. Much better to practice stalling in a variety of situations ahead of time—that way, when one happens unexpectedly, your reactions can be guided by successful experience and not panic.

Earhart advises that in advance, the solution to many problems can be worked out on paper, “but only experience counts when there is no time to think a process through. The pilot who hasn’t stalled a plane is less likely to be able to judge correctly the time and space necessary for recovery than one who has.”

If you practice failing every so often, you increase your flexibility and adaptability when life throws obstacles in your way. Of course, no amount of preparation will get you through all possible challenges, and Earhart’s own story is the best example of that. But making deliberate mistakes in order to learn from them is one way to give ourselves optionality when our metaphorical engine stops in midair.

If we don’t practice failing, we can only safely fly on sunny days.