
The Man Who Never Quit

When he was seven years old, his family was forced out of their home and off their farm. Like other boys his age, he was expected to work to help support the family.

When he was nine, his mother died.

At the age of 22, the company he worked for went bankrupt and he lost his job.

At 23, he ran for state legislature in a field of 13 candidates. He came in eighth.

At 24, he borrowed money to start a business with a friend. By the end of the year, the business failed. The local sheriff seized his possessions to pay off his debt. His partner soon died, penniless, and he assumed his partner’s share of debt as well. He spent the next several years of his life paying it off.

At 25, he ran for state legislature again. This time he won.

At 26, he was engaged to be married. But his fiancée died before the wedding. The next year he plunged into a depression and suffered a nervous breakdown.

At 29, he sought to become the speaker of the state legislature. He was defeated.

At 34, he campaigned for a U.S. congressional seat, representing his district. He lost.

At 35, he ran for Congress again. This time he won. He went to Washington and did a good job.

At 39, when his term ended, he was out of a job again. There was a one-term-limit rule in his party.

At 40, he tried to get a job as commissioner of the General Land Office. He was rejected.

At 45, he campaigned for the U.S. Senate, representing his state. He lost by six electoral votes.

At 47, he was one of the contenders for the vice-presidential nomination at his party’s national convention. He lost.

At 49, he ran for the same U.S. Senate seat a second time. And for the second time, he lost.

Two years later, at the age of 51, after a lifetime of failure, disappointment, and loss (and still relatively unknown outside of his home state of Illinois), Abraham Lincoln was elected the sixteenth president of the United States.

— Via Lead with a Story: A Guide to Crafting Business Narratives That Captivate, Convince, and Inspire.

Today we celebrate Lincoln’s Birthday.

The next time you're tempted to give up when faced with a setback, remember this story. Imagine how different the world would be today if Lincoln had given up after his first setback … or his second … or his tenth.

The Mind’s Search Algorithm: Sorting Mental Models

Mental models are tools for the mind.

In his 2003 talk at the University of California, Santa Barbara, "Academic Economics: Strengths and Weaknesses, after Considering Interdisciplinary Needs," Charlie Munger homed in on why we like to specialize.

The big general objection to economics was the one early described by Alfred North Whitehead when he spoke of the fatal unconnectedness of academic disciplines, wherein each professor didn’t even know of the models of the other disciplines, much less try to synthesize those disciplines with his own … The nature of this failure is that it creates what I always call ‘man with a hammer’ syndrome. To a man with only a hammer, every problem looks pretty much like a nail. And that works marvellously to gum up all professions, and all departments of academia, and indeed most practical life. So, what do we do, Charlie? The only antidote for being an absolute klutz due to the presence of a man with a hammer syndrome is to have a full kit of tools. You don’t have just a hammer. You’ve got all the tools.

The more models you have from outside your discipline, and the more you iterate through them in a checklist fashion when faced with a challenge, the better you'll be able to solve problems.

Models are additive, like LEGO. The more you have, the more things you can build, the more connections you can make between them, and the more likely you are to determine the relevant variables that govern the situation.

And when you learn these models, ask yourself: under what conditions will this tool fail? That way you're not only looking for situations where the tool is useful but also for situations where something interesting is happening that might warrant further attention.

The Mind’s Search Engine

In Diaminds: Decoding the Mental Habits of Successful Thinkers, Roger Martin looks at our mental search engine.

Now for the final step in the design of the mentally choiceful stance: the search engine, as in ‘How did I solve these problems?’ ‘Obviously,’ you will answer yourself, ‘I was using a simple search engine in my mind to go through checklist style, and I was using some rough algorithms that work pretty well in many complex systems.’ What does a search engine do? It searches. And how do you organize an efficient search? Well, algorithm designers tell us you have to have an efficient organization of the contents of whatever it is you are searching. And a tree structure allows you to search more efficiently than most alternative structures.

How a tree structure helps simplify search: a detection algorithm for 'Fox.'

So what’s Munger’s search algorithm?

(from an interview with Munger via Diaminds: Decoding the Mental Habits of Successful Thinkers:)

Extreme success is likely to be caused by some combination of the following factors: a) Extreme maximization or minimization of one or two variables. Example[:] Costco, or, [Berkshire Hathaway’s] furniture and appliance store. b) Adding success factors so that a bigger combination drives success, often in nonlinear fashion, as one is reminded of the concept of breakpoint or the concept of critical mass in physics. You get more mass, and you get a lollapalooza result. And of course I’ve been searching for lollapalooza results all my life, so I’m very interested in models that explain their occurrence. [Remember the Black Swan?] c) an extreme of good performance over many factors. Examples: Toyota or Les Schwab. d) Catching and riding some big wave.

Charlie Munger's lollapalooza detection algorithm, represented as a tree search.

(via Diaminds: Decoding the Mental Habits of Successful Thinkers)
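To make the idea of a checklist search over a tree of mental models concrete, here is a minimal Python sketch. It is not taken from Diaminds or from Munger; the four branch labels follow the factors in the quote above, but the Node structure, the check predicates, and the example situation are hypothetical illustrations.

```python
# A minimal sketch (not from the book): walk a tree of mental-model checks,
# checklist style, and report every branch that fires for a given situation.
# The branch labels follow Munger's four factors; the predicates and the
# example "situation" below are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Node:
    """One branch of the tree: a named check plus optional sub-branches."""
    name: str
    check: Callable[[dict], bool] = lambda situation: False
    children: List["Node"] = field(default_factory=list)


def search(node: Node, situation: dict, path: tuple = ()) -> list:
    """Depth-first walk that collects the path to every check that fires."""
    hits = []
    here = path + (node.name,)
    if node.check(situation):
        hits.append(" > ".join(here))
    for child in node.children:
        hits.extend(search(child, situation, here))
    return hits


# Munger's four lollapalooza factors as top-level branches.
tree = Node("extreme success", children=[
    Node("extreme maximization/minimization of one or two variables",
         check=lambda s: s.get("variables_pushed_to_extreme", 0) in (1, 2)),
    Node("several factors combining non-linearly (critical mass)",
         check=lambda s: s.get("interacting_success_factors", 0) >= 3),
    Node("good performance across many factors",
         check=lambda s: s.get("factors_above_average", 0) >= 5),
    Node("catching and riding a big wave",
         check=lambda s: s.get("riding_secular_trend", False)),
])

if __name__ == "__main__":
    situation = {"interacting_success_factors": 4, "riding_secular_trend": True}
    for hit in search(tree, situation):
        print(hit)
```

The tree form matters for the reason the passage gives: with the models organized as branches, you can walk them systematically instead of reaching only for the hammer you happen to be holding.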

A good search algorithm makes your mental choices explicit. It makes it easier to be mentally choiceful and to understand why you're making the choices you are.

Now, what should go on the branches of your tree of mental models? Well, how about basic mental models from a whole bunch of different disciplines? Such as: physics (non-linearity, criticality), economics (what Munger calls the ‘super-power’ of incentives), the multiplicative effects of several interacting causes (biophysics), and collective phenomena – or ‘catching the wave’ (plasma physics). How’s that for a science that rocks, by placing at the disposal of the mind a large library of forms created by thinkers across hundreds of years and marshalling them for the purpose of detecting, building, and profiting from Black Swans?

The ‘tree trick’ has one more advantage – a big one: it lets you quickly visualize interactions among the various models and identify cumulative effects. Go northwest in your search, starting from the ’0’ node, and the interactions double with every step. Go southwest, on the other hand, and the interactions decrease in number at the same rate. Seen in this rather sketchy way, Black Swan hunting is no longer as daunting a sport as it might seem at first sight.

12 History Books to Read

A Reddit reader posed the question "I want to read 12 history books in one year to know 'all the things'; what should be on the list?"

After much debate, the 12 below were chosen.

1. A History of the Modern World

…offers a wide-ranging survey that helps readers understand both the complexities of great events (e.g., the French Revolution, the First World War, or the collapse of great imperial systems) and the importance of historical analysis. It also provides a careful summary of the modern political changes that have affected the social and cultural development of all modern cultures.

2. Postwar: A History of Europe Since 1945

This is the best history we have of Europe in the postwar period and not likely to be surpassed for many years.

3. Walking Since Daybreak

Part history, part memoir, this unconventional account of the fate of the Baltic nations is also an important reassessment of WWII and its outcome.

4. A People’s Tragedy

Written in a narrative style that captures both the scope and detail of the Russian revolution, Orlando Figes’s history is certain to become one of the most important contemporary studies of Russia as it was at the beginning of the 20th century.

5. China: A History

Keay's narrative spans 5,000 years, from the Three Dynasties (2000–220 BC) to Deng Xiaoping's opening of China and the past three decades of economic growth. Broadly chronological, the book presents a history of all the Chinas—including regions (Yunnan, Tibet, Xinjiang, Mongolia, Manchuria) that account for two-thirds of the People's Republic of China's land mass but which barely feature in its conventional history.

6. The Arabs, A History

No better guide to the modern history of the Arab world could be found than Eugene Rogan. He is attentive as much to the insider accounts in Arab memoirs as to the imperial schemes hatched in drawing rooms in Paris and London, as concerned with popular movements and uprisings as with elite reformism, and unafraid to confront directly and with the best evidence and documentation available the vexed issues of colonialism, Orientalism, and the Arab-Israeli conflict.

7. Orientalism

Regardless of whether the reader agrees with it, anyone with an interest in the Middle East should have read this book at least once.

8. The First Total War: Napoleon’s Europe and the Birth of Warfare as We Know It

The twentieth century is usually seen as “the century of total war,” but as the historian David Bell argues in this landmark work, the phenomenon actually began much earlier, in the age of Napoleon. Bell takes us from campaigns of “extermination” in the blood-soaked fields of western France to savage street fighting in ruined Spanish cities to central European battlefields where tens of thousands died in a single day. Between 1792 and 1815, Europe plunged into an abyss of destruction, and our modern attitudes toward war were born.

9. The Problem of Slavery in the Age of Revolution

…a work of majestic scale, written with great skill. It explores the growing consciousness, during a half century of revolutionary change, of the oldest and most extreme form of human exploitation. Concentrating on the Anglo-American experience, the historian also pursues his theme wherever it leads in western culture. His book is a distinguished example of historical scholarship and art.

10. Old World Encounters

…examines cross-cultural encounters before 1492, focusing in particular on the major cross-cultural influences that transformed Asia and Europe during this period: the ancient silk roads that linked China with the Roman Empire, the spread of the world religions, and the Mongol Empire of the thirteenth century. The author's goal throughout the work is to examine the conditions – political, social, economic, or cultural – that enable one culture to influence, mix with, or suppress another. On the basis of its global analysis, the book identifies several distinctive patterns of conversion, conflict, and compromise that emerged from cross-cultural encounters.

11. Introduction to Medieval Europe, 300-1550: Age of Discretion

I believe this is the best medieval history textbook.

12. Alexander to Actium: The Historical Evolution of the Hellenistic Age

Green offers a particularly trenchant analysis of what has been seen as the conscious dissemination in the East of Hellenistic culture, and finds it largely a myth fueled by Victorian scholars seeking justification for a no longer morally respectable imperialism. His work leaves us with a final impression of the Hellenistic Age as a world with haunting and disturbing resemblances to our own. This lively, personal survey of a period as colorful as it is complex will fascinate the general reader no less than students and scholars.

Source

The Oracle, a Manual of the Art of Discretion

In The Art of Worldly Wisdom, Christopher Maurer translates this gem from Baltasar Gracián y Morales:

Know how to sell your wares, intrinsic quality isn’t enough. Not everyone bites at substance or looks for inner value. People like to follow the crowd; they go someplace because they see other people do so. It takes much skill to explain something’s value. You can use praise, for praise arouses desire. At other times you can give things a good name (but be sure to flee from affectation). Another trick is to offer something only to those in the know, for everyone believes himself an expert, and the person who isn’t will want to be one. Never praise things for being easy or common: you’ll make them seem vulgar and facile. Everybody goes for something unique. Uniqueness appeals both to the taste and to the intellect.

Still curious? Though written in 1647, the book is packed with wisdom.

The Positive Side of Shame

Recently, shame has gotten a bad rap. It’s been branded as toxic and destructive. But shame can be used as a tool to effect positive change.

***

A computer science PhD candidate uncovers significant privacy-violating security flaws in large companies, then shares them with the media to attract negative coverage. Google begins marking unencrypted websites as unsafe, showing a red cross in the URL bar. A nine-year-old girl posts pictures of her school’s abysmal lunches on a blog, leading the local council to step in.

What do all of these stories have in common? They're all examples of shame serving as a tool to encourage structural change.

Shame, like all emotions, exists because it conferred a meaningful survival advantage for our ancestors. It is a universal experience. The body language associated with shame — inverted shoulders, averted eyes, pursed lips, bowed head, and so on — occurs across cultures. Even blind people exhibit the same body language, indicating it is innate, not learned. We would not waste our time and energy on shame if it wasn’t necessary for survival.

Shame enforces social norms. For our ancestors, the ability to maintain social cohesion was a matter of life or death. Take the almost ubiquitous social rule that states stealing is wrong. If a person is caught stealing, they are likely to feel some degree of shame. While this behavior may not threaten anyone’s survival today, in the past it could have been a sign that a group’s ability to cooperate was in jeopardy. Living in small groups in a harsh environment meant full cooperation was essential.

Through the lens of evolutionary biology, shame evolved to encourage adherence to beneficial social norms. This is backed up by the fact that shame is more prevalent in collectivist societies where people spend little to no time alone than it is in individualistic societies where people live more isolated lives.

Jennifer Jacquet argues in Is Shame Necessary?: New Uses For An Old Tool that we’re not quite through with shame yet. In fact, if we adapt it for the current era, it can help us to solve some of the most pressing problems we face. Shame gives the weak greater power. The difference is that we must shift shame from individuals to institutions, organizations, and powerful individuals. Jacquet states that her book “explores the origins and future of shame. It aims to examine how shaming—exposing a transgressor to public disapproval—a tool many of us find discomforting, might be retrofitted to serve us in new ways.”

Guilt vs. shame

Jacquet begins the book with the story of Sam LaBudde, a young man who in the 1980s became determined to target practices in the tuna-fishing industry that led to the deaths of dolphins. Tuna is often caught with purse seines, large nets that close around a shoal of fish. Because dolphins tend to swim alongside tuna, they are easily caught in the nets, where they either die or suffer serious injuries.

LaBudde got a job on a tuna-fishing boat and covertly filmed dolphins dying from their injuries. For months, he hid his true intentions from the crew, spending each day both dreading and hoping for the death of a dolphin. The footage went the 1980s equivalent of viral, showing up in the media all over the world and attracting the attention of major tuna companies.

Still a child at the time, Jacquet was horrified to learn of the consequences of the tuna her family ate. She recalls it as one of her first experiences of shame related to consumption habits. Jacquet persuaded her family to boycott canned tuna altogether. So many others did the same that companies launched the “dolphin-safe” label, which ostensibly indicated compliance with guidelines intended to reduce dolphin deaths. Jacquet returned to eating tuna and thought no more of it.

The campaign to end dolphin deaths in the tuna-fishing industry was futile, however, because it was built upon guilt rather than shame. Jacquet writes, “Guilt is a feeling whose audience and instigator is oneself, and its discomfort leads to self-regulation.” Hearing about dolphin deaths made consumers feel guilty about their fish-buying habits, which conflicted with their ethical values. Those who felt guilty could deal with it by purchasing supposedly dolphin-safe tuna—provided they had the means to potentially pay more and the time to research their choices. A better approach might have been for the videos to focus on tuna companies, giving the names of the largest offenders and calling for specific change in their policies.

But individuals changing their consumption habits did not stop dolphins from dying. It failed to bring about a structural change in the industry. This, Jacquet later realized, was part of a wider shift in environmental action. She explains that it became more about consumers’ choices:

As the focus shifted from supply to demand, shame on the part of corporations began to be overshadowed by guilt on the part of consumers—as the vehicle for solving social and environmental problems. Certification became more and more popular and its rise quietly suggested that responsibility should fall more to the individual consumer rather than to political society. . . . The goal became not to reform entire industries but to alleviate the consciences of a certain sector of consumers.

Shaming, as Jacquet defines it, is about the threat of exposure, whereas guilt is personal. Shame is about the possibility of an audience. Imagine someone were to send a print-out of your internet search history from the last month to your best friend, mother-in-law, partner, or boss. You might not have experienced any guilt making the searches, but even the idea of them being exposed is likely shame-inducing.

Switching the focus of the environmental movement from shame to guilt was, at best, a distraction. It put the responsibility on individuals, even though small actions like turning off the lights count for little. Guilt is a more private emotion, one that arises regardless of exposure. It’s what you feel when you’re not happy about something you did, whereas shame is what you feel when someone finds out. Jacquet writes, “A 2013 research paper showed that just ninety corporations (some of them state-owned) are responsible for nearly two-thirds of historic carbon dioxide and methane emissions; this reminds us that we don’t all share the blame for greenhouse gas emissions.” Guilt doesn’t work because it doesn’t change the system. Taking this into account, Jacquet believes it is time for us to bring back shame, “a tool that can work more quickly and at larger scales.”

The seven habits of effective shaming

So, if you want to use shame as a force for good, as an individual or as part of a group, how can you do so in an effective manner? Jacquet offers seven pointers.

Firstly, “The audience responsible for the shaming should be concerned with the transgression.” It should be something that impacts them so they are incentivized to use shaming to change it. If it has no effect on their lives, they will have little reason to shame. The audience must be the victim. For instance, smoking rates are shrinking in many countries. Part of this may relate to the tendency of non-smokers to shame smokers. The more the former group grows, the greater their power to shame. This works because second-hand smoke impacts their health too, as do indirect tolls like strain on healthcare resources and having to care for ill family members. As Jacquet says, “Shaming must remain relevant to the audience’s norms and moral framework.”

Second, “There should be a big gap between the desired and actual behavior.” The smaller the gap, the less effective the shaming will be. A mugger stealing a handbag from an elderly lady is one thing. A fraudster defrauding thousands of retirees out of their savings is quite another. We are predisposed to fairness in general and become quite riled up when unfairness is significant. In particular, Jacquet observes, we take greater offense when it is the fault of a small group, such as a handful of corporations being responsible for the majority of greenhouse gas emissions. It’s also a matter of contrast. Jacquet cites her own research, which finds that “the degree of ‘bad’ relative to the group matters when it comes to bad apples.” The greater the contrast between the behavior of those being shamed and the rest of the group, the stronger the annoyance will be. For instance, the worse the level of pollution for a corporation is, the more people will shame it.

Third, “Formal punishment should be missing.” Shaming is most effective when it is the sole possible avenue for punishment and the transgression would otherwise go ignored. This ignites our sense of fury at injustice. Jacquet points out that the reason shaming works so well in international politics is that it is often a replacement for formal methods of punishment. If a nation commits major human rights abuses, it is difficult for another nation to use the law to punish them, as they likely have different laws. But revealing and drawing attention to the abuses may shame the nation into stopping, as they do not want to look bad to the rest of the world. When shame is the sole tool we have, we use it best.

Fourth, “The transgressor should be sensitive to the source of shaming.” The shamee must consider themselves subject to the same social norms as the shamer. Shaming an organic grocery chain for stocking unethically produced meat would be far more effective than shaming a fast-food chain for the same thing. If the transgressor sees themselves as subject to different norms, they are unlikely to be concerned.

Fifth, “The audience should trust the source of the shaming.” The shaming must come from a respectable, trustworthy, non-hypocritical source. If it does not, its impact is likely to be minimal. A news outlet that only shames one side of the political spectrum on a cross-spectrum issue isn’t going to have much impact.

Sixth, “Shaming should be directed where possible benefits are greatest.” We all have a limited amount of attention and interest in shaming. It should only be applied where it can have the greatest possible benefits and used sparingly, on the most serious transgressions. Otherwise, people will become desensitized, and the shaming will be ineffective. Wherever possible, we should target shaming at institutions, not individuals. Effective shaming focuses on the powerful, not the weak.

Seventh, "Shaming should be scrupulously implemented." Shaming needs to be carried out consistently. The threat can be more useful than the act itself, which is why it may need to be implemented on a regular basis. For instance, an annual report on the companies guilty of the most pollution is more meaningful than a one-off one. Companies know to anticipate it and preemptively change their behavior. Jacquet explains that "shame's performance is optimized when people reform their behavior in response to its threat and remain part of the group. . . . Ideally, shaming creates some friction but ultimately heals without leaving a scar."

To summarize, Jacquet writes: “When shame works without destroying anyone’s life, when it leads to reform and reintegration rather than fight or flight, or, even better, when it acts as a deterrent against bad behavior, shaming is performing optimally.”

***

Due to our negative experiences with shame on a personal level, we may be averse to viewing it in the light Jacquet describes: as an important and powerful tool. But “shaming, like any tool, is on its own amoral and can be used to any end, good or evil.” The way we use it is what matters.

According to Jacquet, we should not use shame to target transgressions that have minimal impact or are the fault of individuals with little power. We should use it when the outcome will be a broader benefit for society and when formal means of punishment have been exhausted. It’s important the shaming be proportional and done intentionally, not as a means of vindication.

Is Shame Necessary? is a thought-provoking read and a reminder of the power we have as individuals to contribute to meaningful change in the world. One way is to rethink how we view shame.

“That’s as far as they go. They can’t take it any further. And why not? Because they won’t put in the effort”

A brilliant passage from Haruki Murakami’s Norwegian Wood on talent.

There just happen to be people like that. They’re blessed with this marvelous talent, but they can’t make the effort to systematize it. They end up squandering it in little bits and pieces. I’ve seen my share of people like that. At first you think they’re amazing. Like, they can sight-read some terrifically difficult piece and do a damn good job playing it all the way through. You see them do it, and you’re overwhelmed. You think, ‘I could never do that in a million years.’ But that’s as far as they go. They can’t take it any further. And why not? Because they won’t put in the effort. Because they haven’t had the discipline pounded into them. They’ve been spoiled. They have just enough talent so they’ve been able to play things well without any effort and they’ve had people telling them how great they are from the time they’re little, so hard work looks stupid to them. They’ll take some piece another kid has to work on for three weeks and polish it off in half the time, so the teacher figures they’ve put enough into it and lets them go to the next thing. And they do that in half the time and go on to the next piece. They never find out what it means to be hammered by the teacher; they lose out on a certain element required for character building. It’s a tragedy.

This sounds an awful lot like Carol Dweck:

In her influential research, (Carol) Dweck distinguishes between people with a fixed mindset — they tend to agree with statements such as “You have a certain amount of intelligence and cannot do much to change it” — and those with a growth mindset, who believe that we can get better at almost anything, provided we invest the necessary time and energy. While people with a fixed mindset see mistakes as a dismal failure — a sign that we aren’t talented enough for the task in question — those with a growth mindset see mistakes as an essential precursor of knowledge, the engine of education.

And this bit from The Art of Learning: An Inner Journey to Optimal Performance:

Children who are "entity theorists" … are prone to use language like 'I am smart at this.' And to attribute their success or failure to an ingrained and unalterable level of ability. They see their overall intelligence or skill level at a certain discipline to be a fixed entity, a thing that cannot evolve. Incremental theorists, who have picked up a different modality of learning, are more prone to describe their results with sentences like 'I got it because I worked very hard at it' or 'I should have tried harder.' A child with a learning theory of intelligence tends to sense that with hard work, difficult material can be grasped – step by step, incrementally, the novice can become the master.

This is the path of amateurs:

By nature, we humans shrink from anything that seems possibly painful or overtly difficult. We bring this natural tendency to our practice of any skill. Once we grow adept at some aspect of this skill, generally one that comes more easily to us, we prefer to practice this element over and over. Our skill becomes lopsided as we avoid our weaknesses. Knowing that in our practice we can let down our guard, since we are not being watched or under pressure to perform, we bring to this a kind of dispersed attention. We tend to also be quite conventional in our practice routines. We generally follow what others have done, performing the accepted exercises for these skills.

This is the path of amateurs. To attain mastery, you must adopt what we shall call Resistance Practice. The principle is simple—you go in the opposite direction of all of your natural tendencies when it comes to practice.

If you want to get better at anything, you need to practice.

First Principles: The Building Blocks of True Knowledge


The Great Mental Models Volumes One and Two are out.
Learn more about the project here.

First-principles thinking is one of the best ways to reverse-engineer complicated problems and unleash creative possibility. Sometimes called “reasoning from first principles,” the idea is to break down complicated problems into basic elements and then reassemble them from the ground up. It’s one of the best ways to learn to think for yourself, unlock your creative potential, and move from linear to non-linear results.

This approach was used by the philosopher Aristotle and is used now by Elon Musk and Charlie Munger. It allows them to cut through the fog of shoddy reasoning and inadequate analogies to see opportunities that others miss.

“I don’t know what’s the matter with people: they don’t learn by understanding; they learn by some other way—by rote or something. Their knowledge is so fragile!”

— Richard Feynman

The Basics

A first principle is a foundational proposition or assumption that stands alone. We cannot deduce first principles from any other proposition or assumption.

Aristotle, writing[1] on first principles, said:

In every systematic inquiry (methodos) where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these; for we think we know something just in case we acquire knowledge of the primary causes, the primary first principles, all the way to the elements.

Later he connected the idea to knowledge, defining first principles as “the first basis from which a thing is known.”[2]

The search for first principles is not unique to philosophy. All great thinkers do it.

Reasoning by first principles removes the impurity of assumptions and conventions. What remains is the essentials. It’s one of the best mental models you can use to improve your thinking because the essentials allow you to see where reasoning by analogy might lead you astray.

The Coach and the Play Stealer

My friend Mike Lombardi (a former NFL executive) and I were having dinner in L.A. one night, and he said, “Not everyone that’s a coach is really a coach. Some of them are just play stealers.”

Every play we see in the NFL was at some point created by someone who thought, “What would happen if the players did this?” and went out and tested the idea. Since then, thousands, if not millions, of plays have been created. That’s part of what coaches do. They assess what’s physically possible, along with the weaknesses of the other teams and the capabilities of their own players, and create plays that are designed to give their teams an advantage.

The coach reasons from first principles. The rules of football are the first principles: they govern what you can and can’t do. Everything is possible as long as it’s not against the rules.

The play stealer works off what’s already been done. Sure, maybe he adds a tweak here or there, but by and large he’s just copying something that someone else created.

While both the coach and the play stealer start from something that already exists, they generally have different results. These two people look the same to most of us on the sidelines or watching the game on the TV. Indeed, they look the same most of the time, but when something goes wrong, the difference shows. Both the coach and the play stealer call successful plays and unsuccessful plays. Only the coach, however, can determine why a play was successful or unsuccessful and figure out how to adjust it. The coach, unlike the play stealer, understands what the play was designed to accomplish and where it went wrong, so he can easily course-correct. The play stealer has no idea what’s going on. He doesn’t understand the difference between something that didn’t work and something that played into the other team’s strengths.

Musk would identify the play stealer as the person who reasons by analogy, and the coach as someone who reasons by first principles. When you run a team, you want a coach in charge and not a play stealer. (If you’re a sports fan, you need only look at the difference between the Cleveland Browns and the New England Patriots.)

We’re all somewhere on the spectrum between coach and play stealer. We reason by first principles, by analogy, or a blend of the two.

Another way to think about this distinction comes from another friend, Tim Urban. He says[3] it’s like the difference between the cook and the chef. While these terms are often used interchangeably, there is an important nuance. The chef is a trailblazer, the person who invents recipes. He knows the raw ingredients and how to combine them. The cook, who reasons by analogy, uses a recipe. He creates something, perhaps with slight variations, that’s already been created.

The difference between reasoning by first principles and reasoning by analogy is like the difference between being a chef and being a cook. If the cook lost the recipe, he’d be screwed. The chef, on the other hand, understands the flavor profiles and combinations at such a fundamental level that he doesn’t even use a recipe. He has real knowledge as opposed to know-how.

Authority

So much of what we believe is based on some authority figure telling us that something is true. As children, we learn to stop questioning when we’re told “Because I said so.” (More on this later.) As adults, we learn to stop questioning when people say “Because that’s how it works.” The implicit message is “understanding be damned — shut up and stop bothering me.” It’s not intentional or personal. OK, sometimes it’s personal, but most of the time, it’s not.

If you outright reject dogma, you often become a problem: a student who is always pestering the teacher. A kid who is always asking questions and never allowing you to cook dinner in peace. An employee who is always slowing things down by asking why.

When you can’t change your mind, though, you die. Sears was once thought indestructible before Wal-Mart took over. Sears failed to see the world change. Adapting to change is an incredibly hard thing to do when it comes into conflict with the very thing that caused so much success. As Upton Sinclair aptly pointed out, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Wal-Mart failed to see the world change and is now under assault from Amazon.

If we never learn to take something apart, test the assumptions, and reconstruct it, we end up trapped in what other people tell us — trapped in the way things have always been done. When the environment changes, we just continue as if things were the same.

First-principles reasoning cuts through dogma and removes the blinders. We can see the world as it is and see what is possible.

When it comes down to it, everything that is not a law of nature is just a shared belief. Money is a shared belief. So is a border. So are bitcoins. The list goes on.

Some of us are naturally skeptical of what we’re told. Maybe it doesn’t match up to our experiences. Maybe it’s something that used to be true but isn’t true anymore. And maybe we just think very differently about something.

“To understand is to know what to do.”

— Wittgenstein

Techniques for Establishing First Principles

There are many ways to establish first principles. Let’s take a look at a few of them.

Socratic Questioning

Socratic questioning can be used to establish first principles through stringent analysis. It is a disciplined questioning process, used to establish truths, reveal underlying assumptions, and separate knowledge from ignorance. The key distinction between Socratic questioning and normal discussion is that the former seeks to draw out first principles in a systematic manner. Socratic questioning generally follows this process:

  1. Clarifying your thinking and explaining the origins of your ideas (Why do I think this? What exactly do I think?)
  2. Challenging assumptions (How do I know this is true? What if I thought the opposite?)
  3. Looking for evidence (How can I back this up? What are the sources?)
  4. Considering alternative perspectives (What might others think? How do I know I am correct?)
  5. Examining consequences and implications (What if I am wrong? What are the consequences if I am?)
  6. Questioning the original questions (Why did I think that? Was I correct? What conclusions can I draw from the reasoning process?)

This process stops you from relying on your gut and limits strong emotional responses, helping you build something that lasts.

“Because I Said So” or “The Five Whys”

Children instinctively think in first principles. Just like us, they want to understand what’s happening in the world. To do so, they intuitively break through the fog with a game some parents have come to hate.

“Why?”

“Why?”

“Why?”

Here’s an example that has played out numerous times at my house:

“It’s time to brush our teeth and get ready for bed.”

“Why?”

“Because we need to take care of our bodies, and that means we need sleep.”

“Why do we need sleep?”

“Because we’d die if we never slept.”

“Why would that make us die?”

“I don’t know; let’s go look it up.”

Kids are just trying to understand why adults are saying something or why they want them to do something.

The first time your kid plays this game, it’s cute, but for most teachers and parents, it eventually becomes annoying. Then the answer becomes what my mom used to tell me: “Because I said so!” (Love you, Mom.)

Of course, I’m not always that patient with the kids. For example, I get testy when we’re late for school, or we’ve been travelling for 12 hours, or I’m trying to fit too much into the time we have. Still, I try never to say “Because I said so.”

People hate the “because I said so” response for two reasons, both of which play out in the corporate world as well. The first reason we hate the game is that we feel like it slows us down. We know what we want to accomplish, and that response creates unnecessary drag. The second reason we hate this game is that after one or two questions, we are often lost. We actually don’t know why. Confronted with our own ignorance, we resort to self-defense.

I remember being in meetings and asking people why we were doing something this way or why they thought something was true. At first, there was a mild tolerance for this approach. After three “whys,” though, you often find yourself on the other end of some version of “we can take this offline.”

Can you imagine how that would play out with Elon Musk? Richard Feynman? Charlie Munger? Musk would build a billion-dollar business to prove you wrong, Feynman would think you’re an idiot, and Munger would profit based on your inability to think through a problem.

“Science is a way of thinking much more than it is a body of knowledge.”

— Carl Sagan

Examples of First Principles in Action

So we can better understand how first-principles reasoning works, let’s look at four examples.

Elon Musk and SpaceX

Perhaps no one embodies first-principles thinking more than Elon Musk. He is one of the most audacious entrepreneurs the world has ever seen. My kids (grades 3 and 2) refer to him as a real-life Tony Stark, thereby conveniently providing a good time for me to remind them that by fourth grade, Musk was reading the Encyclopedia Britannica and not Pokemon.

What’s most interesting about Musk is not what he thinks but how he thinks:

I think people’s thinking process is too bound by convention or analogy to prior experiences. It’s rare that people try to think of something on a first principles basis. They’ll say, “We’ll do that because it’s always been done that way.” Or they’ll not do it because “Well, nobody’s ever done that, so it must not be good.” But that’s just a ridiculous way to think. You have to build up the reasoning from the ground up—“from the first principles” is the phrase that’s used in physics. You look at the fundamentals and construct your reasoning from that, and then you see if you have a conclusion that works or doesn’t work, and it may or may not be different from what people have done in the past.[4]

His approach to understanding reality is to start with what is true — not with his intuition. The problem is that we don’t know as much as we think we do, so our intuition isn’t very good. We trick ourselves into thinking we know what’s possible and what’s not. The way Musk thinks is much different.

Musk starts out with something he wants to achieve, like building a rocket. Then he starts with the first principles of the problem. Running through how Musk would think, Larry Page said in an interview, “What are the physics of it? How much time will it take? How much will it cost? How much cheaper can I make it? There’s this level of engineering and physics that you need to make judgments about what’s possible and interesting. Elon is unusual in that he knows that, and he also knows business and organization and leadership and governmental issues.”[5]

Rockets are absurdly expensive, which is a problem because Musk wants to send people to Mars. And to send people to Mars, you need cheaper rockets. So he asked himself, “What is a rocket made of? Aerospace-grade aluminum alloys, plus some titanium, copper, and carbon fiber. And … what is the value of those materials on the commodity market? It turned out that the materials cost of a rocket was around two percent of the typical price.”[6]

Why, then, is it so expensive to get a rocket into space? Musk, a notorious self-learner with degrees in both economics and physics, literally taught himself rocket science. He figured that the only reason getting a rocket into space is so expensive is that people are stuck in a mindset that doesn’t hold up to first principles. With that, Musk decided to create SpaceX and see if he could build rockets himself from the ground up.

In an interview with Kevin Rose, Musk summarized his approach:

I think it’s important to reason from first principles rather than by analogy. So the normal way we conduct our lives is, we reason by analogy. We are doing this because it’s like something else that was done, or it is like what other people are doing… with slight iterations on a theme. And it’s … mentally easier to reason by analogy rather than from first principles. First principles is kind of a physics way of looking at the world, and what that really means is, you … boil things down to the most fundamental truths and say, “okay, what are we sure is true?” … and then reason up from there. That takes a lot more mental energy.[7]

Musk then gave an example of how SpaceX uses first principles to innovate at low prices:

Somebody could say — and in fact people do — that battery packs are really expensive and that’s just the way they will always be because that’s the way they have been in the past. … Well, no, that’s pretty dumb… Because if you applied that reasoning to anything new, then you wouldn’t be able to ever get to that new thing…. you can’t say, … “oh, nobody wants a car because horses are great, and we’re used to them and they can eat grass and there’s lots of grass all over the place and … there’s no gasoline that people can buy….”

He then gives a fascinating example about battery packs:

… they would say, “historically, it costs $600 per kilowatt-hour. And so it’s not going to be much better than that in the future. … So the first principles would be, … what are the material constituents of the batteries? What is the spot market value of the material constituents? … It’s got cobalt, nickel, aluminum, carbon, and some polymers for separation, and a steel can. So break that down on a material basis; if we bought that on a London Metal Exchange, what would each of these things cost? Oh, jeez, it’s … $80 per kilowatt-hour. So, clearly, you just need to think of clever ways to take those materials and combine them into the shape of a battery cell, and you can have batteries that are much, much cheaper than anyone realizes.
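The reasoning in that quote is a simple bill-of-materials calculation: price each constituent at its spot price and compare the total with the price the analogy-based answer takes for granted. Here is a hedged sketch of the arithmetic. The $600-per-kilowatt-hour figure comes from the quote; the material quantities and prices below are invented placeholders (they are not real spot prices and they do not reproduce Musk's roughly $80/kWh figure).

```python
# Back-of-the-envelope, first-principles cost check for a battery pack.
# historical_price_per_kwh is the figure cited in the quote above; the
# bill-of-materials numbers are made-up placeholders, not market data.
historical_price_per_kwh = 600.0  # $/kWh, the "that's just the way it is" price

# Hypothetical bill of materials for one kWh of cells: (kg required, $ per kg)
materials = {
    "cobalt":    (0.20, 35.0),
    "nickel":    (0.60, 18.0),
    "aluminum":  (0.70, 2.5),
    "carbon":    (0.50, 1.5),
    "polymers":  (0.10, 3.0),
    "steel can": (0.40, 1.0),
}

material_cost_per_kwh = sum(kg * price for kg, price in materials.values())
print(f"material cost ~ ${material_cost_per_kwh:.0f}/kWh")
print(f"materials as a share of the historical price ~ "
      f"{material_cost_per_kwh / historical_price_per_kwh:.0%}")
```

Whatever the exact numbers, the structure of the argument is the point: if the raw constituents cost a small fraction of the finished price, the rest is process, and process is something clever engineering can attack.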

BuzzFeed

After studying the psychology of virality, Jonah Peretti founded BuzzFeed in 2006. The site quickly grew to be one of the most popular on the internet, with hundreds of employees and substantial revenue.

Peretti figured out early on the first principle of a successful website: wide distribution. Rather than publishing articles people should read, BuzzFeed focuses on publishing those that people want to read. This means aiming to garner maximum social shares to put distribution in the hands of readers.

Peretti recognized the first principles of online popularity and used them to take a new approach to journalism. He also ignored SEO, saying, “Instead of making content robots like, it was more satisfying to make content humans want to share.”[8] Unfortunately for us, we share a lot of cat videos.

A common aphorism in the field of viral marketing is, "content might be king, but distribution is queen, and she wears the pants" (or "and she has the dragons"; pick your metaphor). BuzzFeed's distribution-based approach relies on obsessive measurement, using A/B testing and analytics.
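As a small illustration of the measurement side, here is a sketch of a two-headline A/B comparison using a standard two-proportion z-test. The impression and share counts are invented, and nothing here reflects BuzzFeed's actual tooling; it only shows the kind of question such testing answers (is headline B's share rate genuinely higher, or is the difference noise?).

```python
# Toy A/B test: did headline B earn a genuinely higher share rate than A?
# Counts are invented; this is a generic two-proportion z-test, not BuzzFeed's stack.
from statistics import NormalDist

a_impressions, a_shares = 10_000, 820   # headline A
b_impressions, b_shares = 10_000, 910   # headline B

p_a = a_shares / a_impressions
p_b = b_shares / b_impressions
p_pool = (a_shares + b_shares) / (a_impressions + b_impressions)
se = (p_pool * (1 - p_pool) * (1 / a_impressions + 1 / b_impressions)) ** 0.5
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
```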

Jon Steinberg, president of BuzzFeed, explains the first principles of virality:

Keep it short. Ensure [that] the story has a human aspect. Give people the chance to engage. And let them react. People mustn’t feel awkward sharing it. It must feel authentic. Images and lists work. The headline must be persuasive and direct.

Derek Sivers and CD Baby

When Sivers founded his company, CD Baby, he reduced the concept to first principles. Sivers asked, "What does a successful business need?" His answer was happy customers.

Instead of focusing on garnering investors or having large offices, fancy systems, or huge numbers of staff, Sivers focused on making each of his customers happy. An example of this is his famous order confirmation email, part of which reads:

Your CD has been gently taken from our CD Baby shelves with sterilized contamination-free gloves and placed onto a satin pillow. A team of 50 employees inspected your CD and polished it to make sure it was in the best possible condition before mailing. Our packing specialist from Japan lit a candle and a hush fell over the crowd as he put your CD into the finest gold-lined box money can buy.

By ignoring unnecessary details that cause many businesses to expend large amounts of money and time, Sivers was able to rapidly grow the company to $4 million in monthly revenue. In Anything You Want, Sivers wrote:

Having no funding was a huge advantage for me.
A year after I started CD Baby, the dot-com boom happened. Anyone with a little hot air and a vague plan was given millions of dollars by investors. It was ridiculous. …
Even years later, the desks were just planks of wood on cinder blocks from the hardware store. I made the office computers myself from parts. My well-funded friends would spend $100,000 to buy something I made myself for $1,000. They did it saying, “We need the very best,” but it didn’t improve anything for their customers. …
It’s counterintuitive, but the way to grow your business is to focus entirely on your existing customers. Just thrill them, and they’ll tell everyone.

To survive as a business, you need to treat your customers well. And yet so few of us master this principle.

Employing First Principles in Your Daily Life

Most of us have no problem thinking about what we want to achieve in life, at least when we’re young. We’re full of big dreams, big ideas, and boundless energy. The problem is that we let others tell us what’s possible, not only when it comes to our dreams but also when it comes to how we go after them. And when we let other people tell us what’s possible or what the best way to do something is, we outsource our thinking to someone else.

The real power of first-principles thinking is moving away from incremental improvement and into possibility. Letting others think for us means that we’re using their analogies, their conventions, and their possibilities. It means we’ve inherited a world that conforms to what they think. This is incremental thinking.

When we take what already exists and improve on it, we are in the shadow of others. It's only when we step back, ask ourselves what's possible, and cut through the flawed analogies that we see what is possible. Analogies are beneficial; they make complex problems easier to communicate and increase understanding. Using them, however, is not without cost. They limit our beliefs about what's possible and allow us to argue without ever exposing our (faulty) thinking. Analogies move us to see the problem in the same way that someone else sees it.

The gulf between what people currently see because their thinking is framed by someone else and what is physically possible is filled by the people who use first principles to think through problems.

First-principles thinking clears the clutter of what we’ve told ourselves and allows us to rebuild from the ground up. Sure, it’s a lot of work, but that’s why so few people are willing to do it. It’s also why the rewards for filling the chasm between possible and incremental improvement tend to be non-linear.

Let’s take a look at a few of the limiting beliefs that we tell ourselves.

“I don’t have a good memory.” [10]
People have far better memories than they think they do. Saying you don’t have a good memory is just a convenient excuse to let you forget. Taking a first-principles approach means asking how much information we can physically store in our minds. The answer is “a lot more than you think.” Now that we know it’s possible to put more into our brains, we can reframe the problem into finding the most optimal way to store information in our brains.

“There is too much information out there.”
A lot of professional investors read Farnam Street. When I meet these people and ask how they consume information, they usually fall into one of two categories. The differences between the two apply to all of us. The first type of investor says there is too much information to consume. They spend their days reading every press release, article, and blogger commenting on a position they hold. They wonder what they are missing. The second type of investor realizes that reading everything is unsustainable and stressful and makes them prone to overvaluing information they’ve spent a great amount of time consuming. These investors, instead, seek to understand the variables that will affect their investments. While there might be hundreds, there are usually three to five variables that will really move the needle. The investors don’t have to read everything; they just pay attention to these variables.

“All the good ideas are taken.”
A common way that people limit what’s possible is to tell themselves that all the good ideas are taken. Yet, people have been saying this for hundreds of years — literally — and companies keep starting and competing with different ideas, variations, and strategies.

“We need to move first.”
I've heard this in boardrooms for years. The answer isn't as black and white as this statement. The iPhone wasn't first; it was better. Microsoft wasn't the first to sell operating systems; it just had a better business model. There is a lot of evidence showing that first movers in business are more likely to fail than latecomers. Yet this myth about the need to move first continues to exist.

Sometimes the early bird gets the worm and sometimes the first mouse gets killed. You have to break each situation down into its component parts and see what’s possible. That is the work of first-principles thinking.

“I can’t do that; it’s never been done before.”
People like Elon Musk are constantly doing things that have never been done before. This type of thinking is analogous to looking back at history and building, say, floodwalls, based on the worst flood that has happened before. A better bet is to look at what could happen and plan for that.

“As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble.”

— Harrington Emerson

Conclusion

The thoughts of others imprison us if we’re not thinking for ourselves.

Reasoning from first principles allows us to step outside of history and conventional wisdom and see what is possible. When you really understand the principles at work, you can decide if the existing methods make sense. Often they don’t.

Reasoning by first principles is useful when you are (1) doing something for the first time, (2) dealing with complexity, and (3) trying to understand a situation that you’re having problems with. In all of these areas, your thinking gets better when you stop making assumptions and you stop letting others frame the problem for you.

Analogies can’t replace understanding. While it’s easier on your brain to reason by analogy, you’re more likely to come up with better answers when you reason by first principles. This is what makes it one of the best sources of creative thinking. Thinking in first principles allows you to adapt to a changing environment, deal with reality, and seize opportunities that others can’t see.

Many people mistakenly believe that creativity is something that only some of us are born with, and either we have it or we don’t. Fortunately, there seems to be ample evidence that this isn’t true.[11] We’re all born rather creative, but during our formative years, it can be beaten out of us by busy parents and teachers. As adults, we rely on convention and what we’re told because that’s easier than breaking things down into first principles and thinking for ourselves. Thinking through first principles is a way of taking off the blinders. Most things suddenly seem more possible.

“I think most people can learn a lot more than they think they can,” says Musk. “They sell themselves short without trying. One bit of advice: it is important to view knowledge as sort of a semantic tree — make sure you understand the fundamental principles, i.e., the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to.”

***

Members can discuss this on the Learning Community Forum.

End Notes

[1] Aristotle, Physics 184a10–21

[2] Aristotle, Metaphysics 1013a14–15

[3] https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html

[4] Elon Musk, quoted by Tim Urban in “The Cook and the Chef: Musk’s Secret Sauce,” Wait But Why https://waitbutwhy.com/2015/11/the-cook-and-the-chef-musks-secret-sauce.html

[5] Vance, Ashlee. Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future (p. 354)

[6] https://www.wired.com/2012/10/ff-elon-musk-qa/all/

[7] https://www.youtube.com/watch?v=L-s_3b5fRd8

[8] David Rowan, “How BuzzFeed mastered social sharing to become a media giant for a new era,” Wired.com. 2 January 2014. https://www.wired.co.uk/article/buzzfeed

[9] https://www.quora.com/What-does-Elon-Musk-mean-when-he-said-I-think-it%E2%80%99s-important-to-reason-from-first-principles-rather-than-by-analogy/answer/Bruce-Achterberg

[10] https://www.scientificamerican.com/article/new-estimate-boosts-the-human-brain-s-memory-capacity-10-fold/

[11] Breakpoint and Beyond: Mastering the Future Today, George Land

[12] https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk_ceocto_of_a_rocket_company_ama/cnfre0a/

Yuval Noah Harari: Why We Dominate the Earth

Why did Homo sapiens diverge from the rest of the animal kingdom and go on to dominate the earth? Communication? Cooperation? According to best-selling author Yuval Noah Harari, that barely scratches the surface.

***

Yuval Noah Harari’s Sapiens is one of those uniquely breathtaking books that come along but rarely. It’s broad, but scientific. It’s written for a popular audience, but never feels dumbed down. It’s new and fresh, but not based on any new primary research. Sapiens is pure synthesis.

Readers will easily recognize the influence of Jared Diamond, author of Guns, Germs, and Steel, The Third Chimpanzee, and other similarly broad-yet-scientific works with vast synthesis and explanatory power. It’s not surprising, then, that Harari, a history professor at the Hebrew University of Jerusalem, has noted Diamond’s contributions to his thinking. Harari says:

It [Guns, Germs, and Steel] made me realize that you can ask the biggest questions about history and try to give them scientific answers. But in order to do so, you have to give up the most cherished tools of historians. I was taught that if you’re going to study something, you must understand it deeply and be familiar with primary sources. But if you write a history of the whole world you can’t do this. That’s the trade-off.

Harari sought to understand the history of humankind’s domination of the earth and its development of complex modern societies. He applies ideas from evolutionary theory, forensic anthropology, genetics, and the basic tools of the historian to generate a new conception of our past: humankind’s success was due to our ability to create and sustain grand, collaborative myths.

To make the narrative more palatable and sensible, we must take a different perspective. Calling us humans keeps us too close to the story to have an accurate view. We’re not as unique as we would like to believe. In fact, we’re just another animal. We are Homo sapiens. Because of this, our history can be described just like that of any other species. Harari labels us like any other species, calling us “Sapiens,” both to depersonalize things and to give himself the room he needs to make some bold statements about the history of humanity. Our successes, failures, flaws, and credits are all part of the makeup of the Sapiens.[1]

Sapiens existed long before there was recorded history. Biological history is a much longer stretch, beginning millions of years before the evolution of any forebears we can identify. When our earliest known ancestors emerged, they were not at the top of the food chain. Rather, they were engaged in an epic battle of trench warfare with the other organisms that shared their habitat.[2]

“Ants and bees can also work together in huge numbers, but they do so in a very rigid manner and only with close relatives. Wolves and chimpanzees cooperate far more flexibly than ants, but they can do so only with small numbers of other individuals that they know intimately. Sapiens can cooperate in extremely flexible ways with countless numbers of strangers. That’s why Sapiens rule the world, whereas ants eat our leftovers and chimps are locked up in zoos and research laboratories.”

— Yuval Noah Harari, Sapiens

These archaic humans loved, played, formed close friendships and competed for status and power, but so did chimpanzees, baboons, and elephants. There was nothing special about humans. Nobody, least of all humans themselves, had any inkling their descendants would one day walk on the moon, split the atom, fathom the genetic code and write history books. The most important thing to know about prehistoric humans is that they were insignificant animals with no more impact on their environment than gorillas, fireflies or jellyfish.

For the same reason that our kids can’t imagine a world without Google, Amazon, and iPhones, we can’t imagine a world in which we have not been a privileged species right from the start. Yet we were just one species of smart, social ape trying to survive in the wild. We had cousins: Homo neanderthalensis, Homo erectus, Homo rudolfensis, and others, both our progenitors and our contemporaries, all considered human and with similar traits. If chimps and bonobos were our second cousins, these were our first cousins.

Eventually, things changed. About 70,000 years ago, our DNA showed a mutation (Harari claims we don’t know quite why) that allowed us to make a leap no other species, human or otherwise, was able to make. We began to cooperate flexibly, in large groups, with an extremely complex and versatile language. If there is a secret to our success—and remember, success in nature is survival—it is that our brains developed to communicate.

Welcome to the Cognitive Revolution

Our newfound capacity for language allowed us to develop abilities that couldn’t be found among our cousins, or in any other species from ants to whales.

First, we could give detailed explanations of events that had transpired. We weren’t talking mental models or even gravity. At first, we were probably talking about things for survival. Food. Water. Shelter. It’s possible to imagine making a statement something like this: “I saw a large lion in the forest three days back, with three companions, near the closest tree to the left bank of the river and I think, but am not totally sure, they were hunting us. Why don’t we ask for help from a neighboring tribe, so we don’t all end up as lion meat?”[3]

Second, and maybe more importantly, we could also gossip about each other. Before religion, gossip created a social, even environmental pressure to conform to certain norms. Gossip allowed control of the individual for the aid of the group. It wouldn’t take much effort to imagine someone saying, “I noticed Frank and Steve have not contributed to the hunt in about three weeks. They are not holding up their end of the bargain, and I don’t think we should include them in distributing the proceeds of our next major slaughter.”[4]

Harari’s insight is that while the abilities to communicate about necessities and to pressure people to conform to social norms were certainly pluses, they were not the great leap. Surprisingly, it’s not our shared language or even our ability to dominate other species that defines us but rather, our shared fictions. The exponential leap happened because we could talk about things that were not real. Harari writes:

As far as we know, only Sapiens can talk about entire kinds of entities that they have never seen, touched, or smelled. Legends, myths, gods, and religions appeared for the first time with the Cognitive Revolution. Many animals and human species could previously say, “Careful! A lion!” Thanks to the Cognitive Revolution, Homo sapiens acquired the ability to say, “The lion is the guardian spirit of our tribe.” This ability to speak about fictions is the most unique feature of Sapiens language…. You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.

Predictably, Harari mentions religion as one of the important fictions. But just as important are fictions like the limited liability corporation; the nation-state; the concept of human rights, inalienable from birth; and even money itself.

Shared beliefs allow us to do the thing that other species cannot. Because we believe, we can cooperate effectively in large groups toward larger aims. Sure, other animals cooperate. Ants and bees work in large groups with close relatives but in a very rigid manner. Changes in the environment, as we are seeing today, put the rigidity under strain. Apes and wolves cooperate as well, and with more flexibility than ants. But they can’t scale.

If wild animals could have organized in large numbers, you might not be reading this. Our success is intimately linked to scale. In many systems and in all species but ours, as far as we know, there are hard limits to the number of individuals that can cooperate in groups in a flexible way.[5] As Harari puts it in the quotation at the beginning of this post, “Sapiens can cooperate in extremely flexible ways with countless numbers of strangers. That’s why Sapiens rule the world, whereas ants eat our leftovers and chimps are locked up in zoos and research laboratories.”

Sapiens diverged when they—or I should say we—hit on the ability of a collective myth to advance us beyond what we could do individually. As long as we shared some beliefs we could work toward something larger than ourselves—itself a shared fiction. With this in mind, there was almost no limit to the number of cooperating, believing individuals who could belong to a belief-group.

With that, it becomes easier to understand why communication produces such different results in human culture than in whale culture, or dolphin culture, or bonobo culture: we share a trust in something outside of ourselves, something larger. And the result can be extreme, lollapalooza even, when a combination of critical elements converges in one direction.

Any large-scale human cooperation—whether a modern state, a medieval church, an ancient city, or an archaic tribe—is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights, and money paid out in fees.

Not only do we believe them individually, but we believe them collectively.

Shared fictions aren’t necessarily lies. Shared fictions can create literal truths. For example, if I trust that you believe in money as much as I do, we can use it as an exchange of value. Yet just as you can’t get a chimpanzee to forgo a banana today for infinite bananas in heaven, you also can’t get him to accept three apples today with the idea that if he invests them wisely in a chimp business, he’ll get six bananas from it in five years—no matter how many compound interest tables you show him. This type of collaborative and complex fiction is uniquely human, and capitalism is as much an imagined reality as religion.

Once you start to see the world as a collection of shared fictions, it never looks the same again.

This leads to the extremely interesting conclusion that comprises the bulk of Harari’s great work: If we collectively decide to alter the myths, we can relatively quickly and dramatically alter behavior.

For instance, we can decide slavery, one of the oldest institutions in human history, is no longer acceptable. We can declare monarchy an outdated form of governance. We can decide women should have the right to as much power as men, reversing the pattern of history. We can also decide all Sapiens must follow the same religious text and devote ourselves to slaughtering the resisters.

There is no parallel in other species for these quick, large-scale shifts. General behavior patterns in dogs or fish or ants change due to a change in environment, or to broad genetic evolution over a period of time. Lions will likely never sign a Declaration of Lion Rights and suddenly abolish the idea of an alpha male lion. Their hierarchies are rigid, primal even.

But humans can collectively change the narrative over a short span of time, and begin acting very differently with the same DNA and the same set of physical circumstances. If we all believe in Bitcoin, it becomes real for the same reason that gold becomes real.

Thus, we can conclude that Harari’s Cognitive Revolution is what happens when we decide that, while biology dictates what’s physically possible, we as a species decide norms. This is where biology enables and culture forbids.  “The Cognitive Revolution is accordingly the point when history declared its independence from biology,” Harari writes. These ever-shifting alliances, beliefs, myths—ultimately, cultures—define what we call human history.

A thorough reading of Sapiens is recommended to understand where Harari takes this idea, from the earliest humans to who we are today.


Sources:

[1] This biological approach to history is one we’ve looked at before with the work of Will and Ariel Durant. See The Lessons of History.

[2] It was only when Sapiens acquired weapons, fire, and most importantly a way to communicate so as to share and build knowledge, that we had the asymmetric weaponry necessary to step out of the trenches and dominate, at least for now, the organisms we co-exist with.

[3] My kids are learning to talk Caveman thanks to the guide at the back of Ook and Glook and it doesn’t sound like this at all.

[4] It’s unknown what role ego played, but we can assume people were not asking, “Oh, does this headdress make me look fat?”

[5] Ants can cooperate in great numbers with their relatives, but only based on simple algorithms. Charlie Munger has mentioned in The Psychology of Human Misjudgment that ants’ rules are so simplistic that if a group of ants starts walking in a circle, their “follow-the-leader” algorithm can cause them to literally march until their collective death.
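For the algorithmically inclined, here is a minimal sketch, purely my own illustration and not something from Munger or Harari, of how a bare follow-the-leader rule produces exactly this death spiral: once the lead ant ends up following the tail of the column, the group simply cycles through the same positions forever.

```python
# A toy model (an assumption for illustration, not real ant behavior data):
# each ant steps to where the ant ahead of it just was, and the lead ant,
# having lost the trail, follows the last ant. The column closes into a loop
# and never makes progress.

def simulate_ant_mill(positions, max_steps=1000):
    """positions: list of (x, y) points, index 0 = lead ant.
    Returns the step at which the column repeats a configuration
    (i.e., it is milling in a circle), or None if it never repeats."""
    seen = set()
    for step in range(max_steps):
        state = tuple(positions)
        if state in seen:               # same configuration as before:
            return step                 # the column is looping, not progressing
        seen.add(state)
        old = list(positions)
        positions[0] = old[-1]          # leader follows the tail of the column
        for i in range(1, len(positions)):
            positions[i] = old[i - 1]   # every other ant follows the ant ahead
    return None

# Ten ants starting on distinct points; the column begins looping immediately.
ants = [(i, 0) for i in range(10)]
print("column repeats its positions at step:", simulate_ant_mill(ants))
```

The point of the sketch is only that a rule this simple has no way to notice, let alone escape, the loop; flexibility requires something more than the algorithm itself.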

Atul Gawande: Error in Medicine: What Have We Learned?

Over the past decade, it has become increasingly apparent that error in medicine is neither rare nor intractable. Traditionally, medicine has downplayed error as a negligible factor in complications from medical intervention. But as data on the magnitude of error accumulate—and as the public learns more about them—medical leaders are taking the issue seriously. In particular, the recent publication of the Institute of Medicine report has resulted in an enormous increase in attention from the public, the government, and medical leadership.

Several books have been defining markers in this journey and highlight the issues that have emerged. Of particular note is Human Error in Medicine, edited by Marilyn Sue Bogner (2), published in 1994 (unfortunately, currently out of print) and written for those interested in error in medicine. Many of the thought leaders in the medical error field contributed chapters, and the contributions regarding human factors are especially strong. The book is a concise and clear introduction to the new paradigm of systems thinking in medical error.

Source

Dr. Atul Gawande is the New York Times bestselling author of Better: A Surgeon’s Notes on Performance, Complications: A Surgeon’s Notes on an Imperfect Science, and The Checklist Manifesto: How to Get Things Right.

The Difference Between “Knowing That” and “Knowing How”

The focus right now in our education system is on a certain type of knowledge: “knowing that” as opposed to “knowing how.”

The difference is somewhat experiential.

Matthew Crawford explains in this excerpt from Shop Class as Soulcraft: An Inquiry into the Value of Work:

If you know that something is the case, then this proposition can be stated from anywhere. In fact such knowledge aspires to a view from nowhere. That is, it aspires to a view that gets at the true nature of things because it isn’t conditioned by the circumstances of the viewer. It can be transmitted through speech or writing without loss of meaning, and expounded by a generic self that need not have any prerequisite experiences. Occupations based on universal, propositional knowledge are more prestigious, but they are also the kind that face competition from the whole world as book learning becomes more widely disseminated in the global economy. Practical know-how, on the other hand, is always tied to the experience of a particular person. It can’t be downloaded, it can only be lived.

If you think of the education system for a minute, you understand that we’re trying to efficiently teach people to know things but not to understand them. In this sense, we take a partial view of knowledge.

We take a very partial view of knowledge when we regard it as the sort of thing that can be gotten while suspended aloft in a basket. This is to separate knowing from doing, treating students like disembodied brains in jars, the better to become philosophers in baskets—these ridiculous images are merely exaggerations of the conception of knowledge that enjoys the greatest prestige.

To regard universal knowledge as the whole of knowledge is to take no account of embodiment and purposiveness, those features of thinkers who are always in particular situations.