Category: History

Muscular Bonding: How Dance Made Us Human

Do we dance simply for recreation? Or is there a primal urge that compels us to do it? Historian William McNeill claims it saved our species by creating community togetherness and transforming “me” into “we.”

*** 

“Let us read, and let us dance; these two amusements will never do any harm to the world.”  

— Voltaire

Why do we dance? To most, it might seem like a trivial topic. But if you contemplate the sheer pervasiveness of dance across all of human society, it becomes apparent that it is anything but.

It’s more useful to learn foundational principles that won’t go out of date than it is to go all in on the latest fad. When it comes to understanding people, we can learn a lot by studying human universals that exist across cultures and time. These universals give us insight into how to create connections in a way that fosters social cohesion and cooperation.

One such universal is dance. At every point throughout history, all over the world, people from every walk of life have come together to dance: to move in unison alongside music, singing, and other rhythmic input, like drumming or stomping. The specifics and the names attached vary. But something akin to dance is an ever-present cultural feature throughout human history.

Soldiers perform military drills and march in time. People in rural communities carry out community dances at regular events, like harvests. Hunters in tribal communities dance before they go off to catch food and have likely done so for thousands of years. We dance during initiation rites, like coming-of-age ceremonies. We dance before going to war. We dance at weddings and religious festivals. Countercultural movements, like the hippies in the United States, dance. Fanatical leaders force their followers to perform set movements together. Calisthenics and group exercise are popular worldwide, especially in parts of Asia.

The more you look for it, the more examples of dance-like activities appear everywhere. From a biological perspective, we know that species-wide activities that are costly in terms of time, energy, and other resources must have a worthwhile payoff. Thus, the energy expended in dance must aid our survival. In his 1995 book, Keeping Together in Time: Dance and Drill in Human History, historian William H. McNeill made a bold claim: he argued that we owe our success as a species to collective synchronized movements. In other words, we’re still here because we dance.

***

In the 1940s, the U.S. Army drafted William H. McNeill. With limited supplies, there was little to occupy him and his peers during training. So, whenever things got boring, they performed marching drills. For hours, they walked in circles under the hot Texas sun. On paper, it was dull and pointless. What were they even achieving? When McNeill reflected, it seemed strange that drills should be an integral part of training. It also seemed strange that he’d quite enjoyed it, as had most of his peers. McNeill writes:

Marching aimlessly about on the drill field, swaggering in conformity with prescribed military postures, conscious only of keeping in step so as to make the next move correctly and in time somehow felt good. Words are inadequate to describe the emotion aroused by the prolonged movement in unison that drilling involved . . . marching became an end in itself.

Upon further thought and study, McNeill came to identify the indescribable feeling he experienced during army drills as something “far older than language and critically important in human history, because the emotion it arouses constitutes an indefinitely expansible basis for social cohesion among any and every group that keeps together in time.”

What exactly did he experience? At the time, there was no term for it. But McNeill coined one: “muscular bonding.” This refers to a sense of euphoric connection that is sparked by performing rhythmic movements in unison to music or chanting. Few people are immune to the influence of muscular bonding. It played a role in the formation and maintenance of many of our key institutions, such as religion, the military, and politics. We can all relate to the endorphin hit that comes from strenuous dancing, as with other forms of exercise. If you’ve ever danced with a group of people, you may have also noticed a remarkable sense of connection and unity with them. This is the effect of muscular bonding.

Seeing as there has been little study into the phenomenon, McNeill puts forward a theory which is, by his own admission, unprovable. It nonetheless offers one perspective on muscular bonding. He argues that it works because “rhythmic input from muscles and voice, after gradually suffusing through the entire nervous system, may provoke echoes of the fetal condition when a major and perhaps principal external stimulus to the developing brain was the mother’s heartbeat.” In other words, through dancing and synchronized movement, we experience something akin to what we did at the earliest point of existence. While most likely impossible to prove or disprove, it’s an interesting proposition.

Since the publication of Keeping Together in Time, new research has lent greater support to McNeill’s theories about the effects of muscular bonding, although studies are still limited.

***

How exactly has muscular bonding aided us in more recent times? To explore the concept, let’s look at the type McNeill was most closely acquainted with: the military drill. It enables collective organization through emotional connections facilitated by synchronous movement.

Drills have obvious, tangible benefits. They encourage obedience and compliance with orders, which are valuable attributes in the fog of war. They prepare soldiers for maneuvers and similar group efforts on the battlefield. In ancient times, when communication was difficult and all fighting took place on the ground, drills helped units stay together on the field and work cooperatively.

But drills are also a powerful form of muscular bonding. According to McNeill’s theory, they assist in creating strong connections between soldiers, possibly because the physical movements promote the experience of being a small part of a large, cohesive unit.

While we cannot establish whether it is causation or correlation, it is notable that many of the most successful armies throughout history emphasized drills. For example, the ancient Greeks and Romans both incorporated drills into their military training. And around the sixteenth century, drills became the standard in European armies. McNeill explains how this helped soldiers develop intense ties to each other and their cause:

The emotional resonance of daily and prolonged close order drill created such a lively esprit de corps among the poverty-stricken peasant recruits and urban outcasts who came to constitute the rank and file of European armies that other social ties faded into insignificance beside them.

These armies were cohesive, despite the different backgrounds of members. What made this possible was the allegiance soldiers had to each other. Loyalty to the army replaced former loyalties, such as prior alignments with the church or their families. Many soldiers report experiencing the sense that they fought for their peers, not for their leaders or their country or ideology. And it was moving together that helped break down barriers and allowed the group to reconstruct itself as a single unit with a shared goal.

***

“You can’t dance and be sad. You can listen to music and cry, you can read and cry, you can draw and cry, but you can’t dance and cry. The body won’t let you.”

— Esther Perel

Today, a growing percentage of people find themselves alienated from any particular community, without strong bonds to any discernible group. Loneliness is on the rise. More people live alone, remain single or childless, move to new geographical locations on a regular basis, and otherwise fail to develop close ties. This is a shift that is unprecedented in human history.

What that means is that there is tremendous value in considering how we can bring connection back into our lives; we must figure out how to alleviate the dangerous effects of isolation and alienation from each other. There is an incredible precedent in history for using dance to create a sense of community and intimacy. Physical movement helps us forge connections that can override our differences. For instance, countercultural movements of people rejected by mainstream society have often used dance to create their own distinct communities, as was the case with the hippie movement in 1960s America.

Giving thought to what it takes to unify people is even more important now as we face problems that affect humanity as a whole and require wide-scale collaboration to resolve. Again and again, history has shown us that keeping together in time forms groups with a power greater than the sum of their parts. The emergent properties of moving together can be achieved even if we are not physically in the same space. As long as we know others are moving in the same way, the bonding effects occur.

McNeill writes: “It is and always has been a powerful force at work among humankind whether for good or ill. . . . Our future, like our past, depends on how we utilize these modes of coordinating common effort for agreed purposes.”

Muscular bonding is not a panacea. It cannot instantly heal deep rifts in society, nor can it save individuals from the effects of social isolation. But it will pay off for us to look at history and see the tools we have at our disposal for bringing people together. Dance is one such tool. Whether you attend a concert or club, have a dance party in your living room with your kids, or dance over video chat with loved ones you can’t be near, moving together deepens our connection to one another and gives us openings for unity and cooperation.

Standing on the Shoulders of Giants

Innovation doesn’t occur in a vacuum. Doers and thinkers from Shakespeare to Jobs liberally “stole” inspiration from the doers and thinkers who came before. Here’s how to do it right.

***

“If I have seen further,” Isaac Newton wrote in a 1675 letter to fellow scientist Robert Hooke, “it is by standing on the shoulders of giants.”

It can be easy to look at great geniuses like Newton and imagine that their ideas and work came solely out of their minds, that they spun it from their own thoughts—that they were true originals. But that is rarely the case.

Innovative ideas have to come from somewhere. No matter how unique or unprecedented a work seems, dig a little deeper and you will always find that the creator stood on someone else’s shoulders. They mastered the best of what other people had already figured out, then made that expertise their own. With each iteration, they could see a little further, and they were content in the knowledge that future generations would, in turn, stand on their shoulders.

Standing on the shoulders of giants is a necessary part of creativity, innovation, and development. It doesn’t make what you do less valuable. Embrace it.

Everyone gets a lift up

Ironically, Newton’s turn of phrase wasn’t even entirely his own. The phrase can be traced back to the twelfth century, when the author John of Salisbury wrote that philosopher Bernard of Chartres compared people to dwarves perched on the shoulders of giants and said that “we see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.”

Mary Shelley put it this way in the nineteenth century, in a preface for Frankenstein: “Invention, it must be humbly admitted, does not consist in creating out of void but out of chaos.”

There are giants in every field. Don’t be intimidated by them. They offer an exciting perspective. As the film director Jim Jarmusch advised, “Nothing is original. Steal from anywhere that resonates with inspiration or fuels your imagination. Devour old films, new films, music, books, paintings, photographs, poems, dreams, random conversations, architecture, bridges, street signs, trees, clouds, bodies of water, light, and shadows. Select only things to steal from that speak directly to your soul. If you do this, your work (and theft) will be authentic. Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery—celebrate it if you feel like it. In any case, always remember what Jean-Luc Godard said: ‘It’s not where you take things from—it’s where you take them to.’”

That might sound demoralizing. Some might think, “My song, my book, my blog post, my startup, my app, my creation—surely they are original? Surely no one has done this before!” But that’s likely not the case. It’s also not a bad thing. Filmmaker Kirby Ferguson states in his TED Talk: “Admitting this to ourselves is not an embrace of mediocrity and derivativeness—it’s a liberation from our misconceptions, and it’s an incentive to not expect so much from ourselves and to simply begin.”

Therein lies the important fact. Standing on the shoulders of giants enables us to see further, not merely as far as before. When we build upon prior work, we often improve upon it and take humanity in new directions. However original your work seems to be, the influences are there—they might just be uncredited or not obvious. As we know from social proof, copying is a natural human tendency. It’s how we learn and figure out how to behave.

In Antifragile: Things That Gain from Disorder, Nassim Taleb describes the type of antifragile inventions and ideas that have lasted throughout history. He describes himself heading to a restaurant (the likes of which have been around for at least 2,500 years), in shoes similar to those worn at least 5,300 years ago, to use silverware designed by the Mesopotamians. During the evening, he drinks wine based on a 6,000-year-old recipe, from glasses invented 2,900 years ago, followed by cheese unchanged through the centuries. The dinner is prepared with one of our oldest tools, fire, and using utensils much like those the Romans developed.

Much about our societies and cultures has undeniably changed and continues to change at an ever-faster rate. But we continue to stand on the shoulders of those who came before in our everyday life, using their inventions and ideas, and sometimes building upon them.

Not invented here syndrome

When we discredit what came before, try to reinvent the wheel, or refuse to learn from history, we hold ourselves back. After all, many of the best ideas are the oldest. “Not Invented Here Syndrome” is a term for situations when we avoid using ideas, products, or data created by someone else, preferring instead to develop our own (even if ours is more expensive, time-consuming, and of lower quality).

The syndrome can also manifest as reluctance to outsource or delegate work. People might think their output is intrinsically better if they do it themselves, becoming overconfident in their own abilities. After all, who likes getting told what to do, even by someone who knows better? Who wouldn’t want to be known as the genius who (re)invented the wheel?

Developing a new solution for a problem is more exciting than using someone else’s ideas. But new solutions, in turn, create new problems. Some people joke that, for example, the largest Silicon Valley companies are in fact just impromptu incubators for people who will eventually set up their own business, firm in the belief that what they create themselves will be better.

The syndrome is also a case of the sunk cost fallacy. If a company has spent a lot of time and money getting a square wheel to work, it may be resistant to buying the round ones that someone else comes out with. The opportunity costs can be tremendous. Not Invented Here Syndrome detracts from an organization’s or individual’s core competency and results in wasting time and talent on what are ultimately distractions. Better to use someone else’s idea and be a giant for someone else.

Why Steve Jobs stole his ideas

“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it. They just saw something. It seemed obvious to them after a while; that’s because they were able to connect experiences they’ve had and synthesize new things.” 

— Steve Jobs

In The Runaway Species: How Human Creativity Remakes the World, Anthony Brandt and David Eagleman trace the path that led to the creation of the iPhone and track down the giants upon whose shoulders Steve Jobs perched. We often hail Jobs as a revolutionary figure who changed how we use technology. Few who were around in 2007 could have failed to notice the buzz created by the release of the iPhone. It seemed so new, a total departure from anything that had come before. The truth is a little messier.

The first touchscreen came about almost half a century before the iPhone, developed by E.A. Johnson for air traffic control. Other engineers built upon his work and developed usable models, filing a patent in 1975. Around the same time, the University of Illinois was developing touchscreen terminals for students. Prior to touchscreens, light pens used similar technology. The first commercial touchscreen computer came out in 1983, soon followed by graphics boards, tablets, watches, and video game consoles. Casio released a touchscreen pocket computer in 1987 (remember, this was still a full twenty years before the iPhone).

However, early touchscreen devices were frustrating to use, with very limited functionality, often short battery lives, and minimal use cases for the average person. As touchscreen devices developed in complexity and usability, they laid down the groundwork for the iPhone.

Likewise, the iPod built upon the work of Kane Kramer, who took inspiration from the Sony Walkman. Kramer designed a small portable music player in the 1970s. The IXI, as he called it, looked similar to the iPod but arrived too early for a market to exist, and Kramer lacked the marketing skills to create one. When pitching to investors, Kramer described the potential for immediate delivery, digital inventory, taped live performances, back catalog availability, and the promotion of new artists and microtransactions. Sound familiar?

Steve Jobs stood on the shoulders of the many unseen engineers, students, and scientists who worked for decades to build the technology he drew upon. Although Apple has a long history of merciless lawsuits against those they consider to have stolen their ideas, many were not truly their own in the first place. Brandt and Eagleman conclude that “human creativity does not emerge from a vacuum. We draw on our experience and the raw materials around us to refashion the world. Knowing where we’ve been, and where we are, points the way to the next big industries.”

How Shakespeare got his ideas

“Nothing will come of nothing.”

— William Shakespeare, King Lear

Most, if not all, of Shakespeare’s plays draw heavily upon prior works—so much so that some question whether he would have survived today’s copyright laws.

Hamlet took inspiration from Gesta Danorum, a twelfth-century work on Danish history by Saxo Grammaticus, consisting of sixteen Latin books. Although it is doubtful whether Shakespeare had access to the original text, scholars find the parallels undeniable and believe he may have read another play based on it, from which he drew inspiration. In particular, the account of the plight of Prince Amleth (whose name contains the same letters as Hamlet) involves similar events.

Holinshed’s Chronicles, a co-authored account of British history from the late sixteenth century, tells stories that mimic the plot of Macbeth, including the three witches. Holinshed’s Chronicles itself was a mélange of earlier texts, which transferred their biases and fabrications to Shakespeare. It also likely inspired King Lear.

Parts of Antony and Cleopatra are copied verbatim from Plutarch’s Life of Mark Antony. Arthur Brooke’s 1562 poem The Tragicall Historye of Romeus and Juliet was an undisguised template for Romeo and Juliet. Once again, there are more giants behind the scenes—Brooke copied a 1559 poem by Pierre Boaistuau, who in turn drew from a 1554 story by Matteo Bandello, who in turn drew inspiration from a 1530 work by Luigi da Porto. The list continues, with Plutarch, Chaucer, and the Bible acting as inspirations for many major literary, theatrical, and cultural works.

Yet what Shakespeare did with the works he sometimes copied, sometimes learned from, is remarkable. Take a look at any of the original texts and, despite the mimicry, you will find that they cannot compare to his plays. Many of the originals were dry, unengaging, and lacking any sort of poetic language. J.J. Munro wrote in 1908 that The Tragicall Historye of Romeus and Juliet “meanders on like a listless stream in a strange and impossible land; Shakespeare’s sweeps on like a broad and rushing river, singing and foaming, flashing in sunlight and darkening in cloud, carrying all things irresistibly to where it plunges over the precipice into a waste of waters below.”

Despite bordering on plagiarism at times, he overhauled the stories with an exceptional use of the English language, bringing drama and emotion to dreary chronicles or poems. He had a keen sense for the changes required to restructure plots, creating suspense and intensity in their stories. Shakespeare saw far further than those who wrote before him, and with their help, he ushered in a new era of the English language.

Of course, it’s not just Newton, Jobs, and Shakespeare who found a (sometimes willing, sometimes not) shoulder to stand upon. Facebook is presumed to have built upon Friendster. Cormac McCarthy’s books often replicate older history texts, with one character coming straight from Samuel Chamberlain’s My Confession. John Lennon borrowed from diverse musicians, once writing in a letter to the New York Times that though the Beatles copied black musicians, “it wasn’t a rip off. It was a love in.”

In The Ecstasy of Influence, Jonathan Lethem points to many other instances of influences in classic works. In 1916, journalist Heinz von Lichberg published a story of a man who falls in love with his landlady’s daughter and begins a love affair, culminating in her death and his lasting loneliness. The title? Lolita. It’s hard to imagine that Nabokov hadn’t read it, but aside from the plot and the name, the style of language of his version is absent from the original.

The list continues. The point is not to be flippant about plagiarism but to cultivate sensitivity to the elements of value in a previous work, as well as the ability to build upon those elements. If we restrict the flow of ideas, everyone loses out.

The adjacent possible

What’s this about? Why can’t people come up with their own ideas? Why do so many people come up with a brilliant idea but never profit from it? The answer lies in what scientist Stuart Kauffman calls “the adjacent possible.” Quite simply, each new innovation or idea opens up the possibility of additional innovations and ideas. At any time, there are limits to what is possible, yet those limits are constantly expanding.

In Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson compares this process to being in a house where opening a door creates new rooms. Each time we open the door to a new room, new doors appear and the house grows. Johnson compares it to the formation of life, beginning with basic fatty acids. The first fatty acids to form were not capable of turning into living creatures. When they self-organized into spheres, the groundwork formed for cell membranes, and a new door opened to genetic codes, chloroplasts, and mitochondria. When dinosaurs evolved a new bone that gave them more manual dexterity, they opened a new door to flight. When our distant ancestors evolved opposable thumbs, dozens of new doors opened to the use of tools, writing, and warfare. According to Johnson, the history of innovation has been about exploring new wings of the adjacent possible and expanding what we are capable of.

A new idea—like those of Newton, Jobs, and Shakespeare—is only possible because a previous giant opened a new door and made their work possible. They in turn opened new doors and expanded the realm of possibility. Technology, art, and other advances are only possible if someone else has laid the groundwork; nothing comes from nothing. Shakespeare could write his plays because other people had developed the structures and language that formed his tools. Newton could advance science because of the preliminary discoveries that others had made. Jobs built Apple out of the debris of many prior devices and technological advances.

The questions we all have to ask ourselves are these: What new doors can I open, based on the work of the giants that came before me? What opportunities can I spot that they couldn’t? Where can I take the adjacent possible? If you think all the good ideas have already been found, you are very wrong. Other people’s good ideas open new possibilities, rather than restricting them.

As time passes, the giants just keep getting taller and more willing to let us hop onto their shoulders. Their expertise is out there in books and blog posts, open-source software and TED talks, podcast interviews, and academic papers. Whatever we are trying to do, we have the option to find a suitable giant and see what can be learned from them. In the process, knowledge compounds, and everyone gets to see further as we open new doors to the adjacent possible.

Yuval Noah Harari: Why We Dominate the Earth

Why did Homo sapiens diverge from the rest of the animal kingdom and go on to dominate the earth? Communication? Cooperation? According to best-selling author Yuval Noah Harari, that barely scratches the surface.

***

Yuval Noah Harari’s Sapiens is one of those uniquely breathtaking books that come along but rarely. It’s broad, but scientific. It’s written for a popular audience, but never feels dumbed down. It’s new and fresh, but not based on any new primary research. Sapiens is pure synthesis.

Readers will easily recognize the influence of Jared Diamond, author of Guns, Germs, and Steel, The Third Chimpanzee, and other similarly broad-yet-scientific works with vast synthesis and explanatory power. It’s not surprising, then, that Harari, a history professor at the Hebrew University of Jerusalem, has noted Diamond’s contributions to his thinking. Harari says:

It [Guns, Germs, and Steel] made me realize that you can ask the biggest questions about history and try to give them scientific answers. But in order to do so, you have to give up the most cherished tools of historians. I was taught that if you’re going to study something, you must understand it deeply and be familiar with primary sources. But if you write a history of the whole world you can’t do this. That’s the trade-off.

Harari sought to understand the history of humankind’s domination of the earth and its development of complex modern societies. He applies ideas from evolutionary theory, forensic anthropology, genetics, and the basic tools of the historian to generate a new conception of our past: humankind’s success was due to our ability to create and sustain grand, collaborative myths.

To make the narrative more palatable and sensible, we must take a different perspective. Calling us humans keeps us too close to the story to have an accurate view. We’re not as unique as we would like to believe. In fact, we’re just another animal. We are Homo sapiens. Because of this, our history can be described just like that of any other species. Harari labels us like any other species, calling us “Sapiens” to depersonalize things and give himself the room he needs to make some bold statements about the history of humanity. Our successes, failures, flaws, and credits are all part of the makeup of the Sapiens.[1]

Sapiens existed long before there was recorded history. Biological history is a much longer stretch, beginning millions of years before the evolution of any forebears we can identify. When our earliest known ancestors emerged, they were not at the top of the food chain. Rather, they were engaged in an epic battle of trench warfare with the other organisms that shared their habitat.[2]

“Ants and bees can also work together in huge numbers, but they do so in a very rigid manner and only with close relatives. Wolves and chimpanzees cooperate far more flexibly than ants, but they can do so only with small numbers of other individuals that they know intimately. Sapiens can cooperate in extremely flexible ways with countless numbers of strangers. That’s why Sapiens rule the world, whereas ants eat our leftovers and chimps are locked up in zoos and research laboratories.”

— Yuval Noah Harari, Sapiens

These archaic humans loved, played, formed close friendships and competed for status and power, but so did chimpanzees, baboons, and elephants. There was nothing special about humans. Nobody, least of all humans themselves, had any inkling their descendants would one day walk on the moon, split the atom, fathom the genetic code and write history books. The most important thing to know about prehistoric humans is that they were insignificant animals with no more impact on their environment than gorillas, fireflies or jellyfish.

For the same reason that our kids can’t imagine a world without Google, Amazon, and iPhones, we can’t imagine a world in which we have not been a privileged species right from the start. Yet we were just one species of smart, social ape trying to survive in the wild. We had cousins: Homo neanderthalensis, Homo erectus, Homo rudolfensis, and others, both our progenitors and our contemporaries, all considered human and with similar traits. If chimps and bonobos were our second cousins, these were our first cousins.

Eventually, things changed. Some 70,000 years ago, our DNA showed a mutation (Harari notes we don’t quite know why) which allowed us to make a leap that no other species, human or otherwise, was able to make. We began to cooperate flexibly, in large groups, with an extremely complex and versatile language. If there is a secret to our success—and remember, success in nature is survival—it was that our brains developed to communicate.

Welcome to the Cognitive Revolution

Our newfound capacity for language allowed us to develop abilities that couldn’t be found among our cousins, or in any other species from ants to whales.

First, we could give detailed explanations of events that had transpired. We weren’t talking mental models or even gravity. At first, we were probably talking about things for survival. Food. Water. Shelter. It’s possible to imagine making a statement something like this: “I saw a large lion in the forest three days back, with three companions, near the closest tree to the left bank of the river and I think, but am not totally sure, they were hunting us. Why don’t we ask for help from a neighboring tribe, so we don’t all end up as lion meat?”[3]

Second, and maybe more importantly, we could also gossip about each other. Before religion, gossip created a social, even environmental pressure to conform to certain norms. Gossip allowed control of the individual for the aid of the group. It wouldn’t take much effort to imagine someone saying, “I noticed Frank and Steve have not contributed to the hunt in about three weeks. They are not holding up their end of the bargain, and I don’t think we should include them in distributing the proceeds of our next major slaughter.”[4]

Harari’s insight is that while the abilities to communicate about necessities and to pressure people to conform to social norms were certainly pluses, they were not the great leap. Surprisingly, it’s not our shared language or even our ability to dominate other species that defines us, but rather our shared fictions. The exponential leap happened because we could talk about things that were not real. Harari writes:

As far as we know, only Sapiens can talk about entire kinds of entities that they have never seen, touched, or smelled. Legends, myths, gods, and religions appeared for the first time with the Cognitive Revolution. Many animals and human species could previously say, “Careful! A lion!” Thanks to the Cognitive Revolution, Homo sapiens acquired the ability to say, “The lion is the guardian spirit of our tribe.” This ability to speak about fictions is the most unique feature of Sapiens language…. You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.

Predictably, Harari mentions religion as one of the important fictions. But just as important are fictions like the limited liability corporation; the nation-state; the concept of human rights, inalienable from birth; and even money itself.

Shared beliefs allow us to do the thing that other species cannot. Because we believe, we can cooperate effectively in large groups toward larger aims. Sure, other animals cooperate. Ants and bees work in large groups with close relatives but in a very rigid manner. Changes in the environment, as we are seeing today, put the rigidity under strain. Apes and wolves cooperate as well, and with more flexibility than ants. But they can’t scale.

If wild animals could have organized in large numbers, you might not be reading this. Our success is intimately linked to scale. In many systems, and in all species but ours as far as we know, there are hard limits to the number of individuals that can cooperate in groups in a flexible way.[5] As Harari puts it in the passage quoted above, “Sapiens can cooperate in extremely flexible ways with countless numbers of strangers. That’s why Sapiens rule the world, whereas ants eat our leftovers and chimps are locked up in zoos and research laboratories.”

Sapiens diverged when they—or I should say we—hit on the ability of a collective myth to advance us beyond what we could do individually. As long as we shared some beliefs we could work toward something larger than ourselves—itself a shared fiction. With this in mind, there was almost no limit to the number of cooperating, believing individuals who could belong to a belief-group.

With that, it becomes easier to understand why we see different results from communication in human culture than in whale culture, or dolphin culture, or bonobo culture: a shared trust in something outside of ourselves, something larger. And the result can be extreme, lollapalooza even, when a lot of critical elements converge in one direction.

Any large-scale human cooperation—whether a modern state, a medieval church, an ancient city, or an archaic tribe—is rooted in common myths that exist only in people’s collective imagination. Churches are rooted in common religious myths. Two Catholics who have never met can nevertheless go together on crusade or pool funds to build a hospital because they both believe God was incarnated in human flesh and allowed Himself to be crucified to redeem our sins. States are rooted in common national myths. Two Serbs who have never met might risk their lives to save one another because both believe in the existence of the Serbian nation, the Serbian homeland and the Serbian flag. Judicial systems are rooted in common legal myths. Two lawyers who have never met can nevertheless combine efforts to defend a complete stranger because they both believe in the existence of laws, justice, human rights, and money paid out in fees.

Not only do we believe them individually, but we believe them collectively.

Shared fictions aren’t necessarily lies. Shared fictions can create literal truths. For example, if I trust that you believe in money as much as I do, we can use it as an exchange of value. Yet just as you can’t get a chimpanzee to forgo a banana today for infinite bananas in heaven, you also can’t get him to accept three apples today with the idea that if he invests them wisely in a chimp business, he’ll get six bananas from it in five years—no matter how many compound interest tables you show him. This type of collaborative and complex fiction is uniquely human, and capitalism is as much an imagined reality as religion.

Once you start to see the world as a collection of shared fictions, it never looks the same again.

This leads to the extremely interesting conclusion that comprises the bulk of Harari’s great work: If we collectively decide to alter the myths, we can relatively quickly and dramatically alter behavior.

For instance, we can decide slavery, one of the oldest institutions in human history, is no longer acceptable. We can declare monarchy an outdated form of governance. We can decide women should have the right to as much power as men, reversing the pattern of history. We can also decide all Sapiens must follow the same religious text and devote ourselves to slaughtering the resisters.

There is no parallel in other species for these quick, large-scale shifts. General behavior patterns in dogs or fish or ants change due to a change in environment, or to broad genetic evolution over a period of time. Lions will likely never sign a Declaration of Lion Rights and suddenly abolish the idea of an alpha male lion. Their hierarchies are rigid, primal even.

But humans can collectively change the narrative over a short span of time, and begin acting very differently with the same DNA and the same set of physical circumstances. If we all believe in Bitcoin, it becomes real for the same reason that gold becomes real.

Thus, we can conclude that Harari’s Cognitive Revolution is what happens when we decide that, while biology dictates what’s physically possible, we as a species decide norms. This is where biology enables and culture forbids.  “The Cognitive Revolution is accordingly the point when history declared its independence from biology,” Harari writes. These ever-shifting alliances, beliefs, myths—ultimately, cultures—define what we call human history.

A thorough reading of Sapiens is recommended to understand where Harari takes this idea, from the earliest humans to who we are today.


Sources:

[1] This biological approach to history is one we’ve looked at before with the work of Will and Ariel Durant. See The Lessons of History.

[2] It was only when Sapiens acquired weapons, fire, and most importantly a way to communicate so as to share and build knowledge, that we had the asymmetric weaponry necessary to step out of the trenches and dominate, at least for now, the organisms we co-exist with.

[3] My kids are learning to talk Caveman thanks to the guide at the back of Ook and Gluk, and it doesn’t sound like this at all.

[4] It’s unknown what role ego played, but we can assume people were not asking, “Oh, does this headdress make me look fat?”

[5] Ants can cooperate in great numbers with their relatives, but only based on simple algorithms. Charlie Munger has mentioned in The Psychology of Human Misjudgment that ants’ rules are so simplistic that if a group of ants starts walking in a circle, their “follow-the-leader” algorithm can cause them to literally march until their collective death.

Are Great Men and Women a Product of Circumstance?

Few books have ever struck us as much as Will Durant’s 100-page masterpiece The Lessons of History, a collection of essays which sum up the lifelong thoughts of a brilliant historian.

We recently dug up an interview with Durant and his wife Ariel — co-authors of the 11-volume masterpiece The Story of Civilization — and sent it to the members of the Farnam Street Learning Community. While the interview is full of wisdom in its entirety, we picked one interesting excerpt to share with you: Durant’s thoughts on the “Great Man” (and certainly, Great Woman) theory of history.

Has history been Theirs to dictate? Durant has a very interesting answer, one that’s hard to ignore once you think about it:

Interviewer: Haven’t certain individuals, the genius, great man, or hero, as Carlyle believed, been the prime determinants of human history?

Will Durant: There are many cases, I think, in which individual characters have had very significant results upon history. But basically, I think Carlyle was wrong. That the hero is a product of a situation rather than the result being a product of the hero. It is demand that brings out the exceptional qualities of man. What would Lenin have been if he had remained in, what was it, Geneva? He would have a little…. But he faced tremendous demands upon him, and something in him responded. I think those given us would have brought out capacity in many different types of people. They wouldn’t have to be geniuses to begin with.

Interviewer: Then what is the function or role of heroes?

Will Durant: They perform the function of meeting a situation whose demands call upon all of their potential abilities.

Interviewer: What do you think is the important thing for us, in studying the course of history, to know about character? What is the role of character in history?

Will Durant: I suppose the role of character is for the individual to rise to a situation. If it were not for the situation, we would never have heard of him. So that you might say that character is the product of an exceptional demand by the situation upon human ability. I think the ability of the average man could be doubled if it were demanded, if the situation demanded. So, I think Lenin was made by the situation. Of course he brought ideas, and he had to abandon almost all those ideas. For example, he went back to private enterprise for a while.

One way we might corroborate Durant’s thoughts on Lenin is to ask another simple question: Which U.S. Presidents are considered the most admired?

Students of history have three easy answers pop up (and polls corroborate them): George Washington – the first U.S. President and a Founding Father; Abraham Lincoln – the man who held the Union together; and finally Franklin Delano Roosevelt – unless the U.S. amends its Constitution, the longest-serving U.S. President now and forever.

All great men, certainly. And all three rose to the occasion. But what do they share?

They were the ones holding office at the time of (or in the case of Washington, immediately upon winning) the three major wars impacting American history: The American Revolution, the American Civil War, and World War II. 

It raises an interesting question: Would these men be remembered and held in the same esteem if they hadn’t been handed such situations? The answer comes pretty quickly: Probably not. Their heroism was partly a product of their character and partly a product of their situation.

And thus Durant gives us a very interesting model to bring to reality: greatness is in many of us, but it emerges only when we rise, with practical expediency, to the demands of life. Greatness arises only when tested.

For the rest of Durant’s interview, and a lot of other cool stuff, check out the Learning Community.

Frozen Accidents: Why the Future Is So Unpredictable

“Each of us human beings, for example, is the product of an enormously long sequence of accidents, any of which could have turned out differently.”

— Murray Gell-Mann

***

What parts of reality are the product of an accident? The physicist Murray Gell-Mann thought the answer was “just about everything.” And to Gell-Mann, understanding this idea was the key to understanding how complex systems work.

Gell-Mann believed two things caused what we see in the world:

  1. A set of fundamental laws
  2. Random “accidents” — the little blips that could have gone either way and, had they gone differently, would have produced a very different kind of world.

Gell-Mann pulled the second part from Francis Crick, co-discoverer of the structure of DNA, who argued that the genetic code itself may well have been an “accident” of physical history rather than a uniquely necessary arrangement.

These accidents become “frozen” in time, and have a great effect on all subsequent developments; complex life itself is an example of something that did happen a certain way but probably could have happened other ways — we know this from looking at the physics.

This idea of fundamental laws plus accidents and the non-linear second-order effects they produce became the science of complexity and chaos theory.

Gell-Mann discussed the fascinating idea further in a 1996 essay on Edge:

Each of us human beings, for example, is the product of an enormously long sequence of accidents, any of which could have turned out differently. Think of the fluctuations that produced our galaxy, the accidents that led to the formation of the solar system, including the condensation of dust and gas that produced Earth, the accidents that helped to determine the particular way that life began to evolve on Earth, and the accidents that contributed to the evolution of particular species with particular characteristics, including the special features of the human species. Each of us individuals has genes that result from a long sequence of accidental mutations and chance matings, as well as natural selection.

Now, most single accidents make very little difference to the future, but others may have widespread ramifications, many diverse consequences all traceable to one chance event that could have turned out differently. Those we call frozen accidents.

These “frozen accidents” occur at every nested level of the world: As Gell-Mann points out, they are an outcome in physics (the physical laws we observe may be accidents of history); in biology (our genetic code is largely a byproduct of “advantageous accidents” as discussed by Crick); and in human history, as we’ll discuss. In other words, the phenomenon hits all three buckets of knowledge.

Gell-Mann gives a great example of how this plays out on the human scale:

For instance, Henry VIII became king of England because his older brother Arthur died. From the accident of that death flowed all the coins, all the charters, all the other records, all the history books mentioning Henry VIII; all the different events of his reign, including the manner of separation of the Church of England from the Roman Catholic Church; and of course the whole succession of subsequent monarchs of England and of Great Britain, to say nothing of the antics of Charles and Diana. The accumulation of frozen accidents is what gives the world its effective complexity.

The most important idea here is that the frozen accidents of history have a nonlinear effect on everything that comes after. The complexity we see comes from simple rules and many, many “bounces” that could have gone in any direction. Once they go a certain way, there is no return.

This principle is illustrated wonderfully in the book The Origin of Wealth by Eric Beinhocker. The first example comes from 19th-century history:

In the late 1800s, “Buffalo Bill” Cody created a show called Buffalo Bill’s Wild West Show, which toured the United States, putting on exhibitions of gun fighting, horsemanship, and other cowboy skills. One of the show’s most popular acts was a woman named Phoebe Moses, nicknamed Annie Oakley. Annie was reputed to have been able to shoot the head off of a running quail by age twelve, and in Buffalo Bill’s show, she put on a demonstration of marksmanship that included shooting flames off candles, and corks out of bottles. For her grand finale, Annie would announce that she would shoot the end off a lit cigarette held in a man’s mouth, and ask for a brave volunteer from the audience. Since no one was ever courageous enough to come forward, Annie hid her husband, Frank, in the audience. He would “volunteer,” and they would complete the trick together. In 1880, when the Wild West Show was touring Europe, a young crown prince (and later, kaiser), Wilhelm, was in the audience. When the grand finale came, much to Annie’s surprise, the macho crown prince stood up and volunteered. The future German kaiser strode into the ring, placed the cigarette in his mouth, and stood ready. Annie, who had been up late the night before in the local beer garden, was unnerved by this unexpected development. She lined the cigarette up in her sights, squeezed…and hit it right on the target.

Many people have speculated that if at that moment, there had been a slight tremor in Annie’s hand, then World War I might never have happened. If World War I had not happened, 8.5 million soldiers and 13 million civilian lives would have been saved. Furthermore, if Annie’s hand had trembled and World War I had not happened, Hitler would not have risen from the ashes of a defeated Germany, and Lenin would not have overthrown a demoralized Russian government. The entire course of twentieth-century history might have been changed by the merest quiver of a hand at a critical moment. Yet, at the time, there was no way anyone could have known the momentous nature of the event.

This isn’t to say that other big events, many of them bad, would not have occurred in the 20th century. Almost certainly, there would have been wars and upheavals.

But the actual course of history was, in some part, determined by a small chance event which had no seeming importance when it happened. The impact of Wilhelm being alive rather than dead was totally non-linear. (A small non-event had a massively disproportionate effect on what happened later.)

This is why predicting the future, even with immense computing power, is an impossible task: the chaotic effects of randomness, with small inputs having disproportionate and massive effects, defeat prediction. That’s why we must appreciate the role of randomness in the world and seek to protect against it.
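
To make the non-linearity concrete, here is a minimal sketch in Python (my illustration of sensitive dependence on initial conditions, not anything from Gell-Mann or Beinhocker). It uses the logistic map, a textbook chaotic system: two trajectories that begin a hair’s breadth apart soon share nothing, just as the quiver of a hand at the right moment could redirect a century.

    # Sensitive dependence on initial conditions, via the logistic map
    # x' = r * x * (1 - x) in its chaotic regime (r = 4). Two "worlds"
    # start almost identically; a tiny "accident" separates them forever.

    def logistic_trajectory(x0, r=4.0, steps=50):
        """Iterate the logistic map from x0 and return the full trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    world_a = logistic_trajectory(0.400000000)  # baseline history
    world_b = logistic_trajectory(0.400000001)  # same history, one tiny tremor

    for t in (0, 10, 20, 30, 40, 50):
        gap = abs(world_a[t] - world_b[t])
        print(f"step {t:2d}: A = {world_a[t]:.6f}  B = {world_b[t]:.6f}  gap = {gap:.6f}")

    # The starting gap of one part in a billion roughly doubles each step,
    # so by step ~30 the two "histories" have become completely uncorrelated.

The point of the sketch is only this: more computing power doesn’t help when any rounding of the present state explodes into a different future.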

Another great illustration from The Origin of Wealth is a famous story in the world of technology:

[In 1980] IBM approached a small company with forty employees in Bellevue, Washington. The company, called Microsoft, was run by a Harvard dropout named Bill Gates and his friend Paul Allen. IBM wanted to talk to the small company about creating a version of the programming language BASIC for the new PC. At their meeting, IBM asked Gates for his advice on what operating systems (OS) the new machine should run. Gates suggested that IBM talk to Gary Kildall of Digital Research, whose CP/M operating system had become the standard in the hobbyist world of microcomputers. But Kildall was suspicious of the blue suits from IBM and when IBM tried to meet him, he went hot-air ballooning, leaving his wife and lawyer to talk to the bewildered executives, along with instructions not to sign even a confidentiality agreement. The frustrated IBM executives returned to Gates and asked if he would be interested in the OS project. Despite never having written an OS, Gates said yes. He then turned around and licensed a product appropriately named Quick and Dirty Operating System, or Q-DOS, from a small company called Seattle Computer Products for $50,000, modified it, and then relicensed it to IBM as PC-DOS. As IBM and Microsoft were going through the final language for the agreement, Gates asked for a small change. He wanted to retain the rights to sell his DOS on non-IBM machines in a version called MS-DOS. Gates was giving the company a good price, and IBM was more interested in PC hardware than software sales, so it agreed. The contract was signed on August 12, 1981. The rest, as they say, is history. Today, Microsoft is a company worth $270 billion while IBM is worth $140 billion.

At any point in that story, business history could have gone a much different way: Kildall could have avoided hot-air ballooning, IBM could have refused Gates’ offer, Microsoft could have not gotten the license for QDOS. Yet this little episode resulted in massive wealth for Gates and a long period of trouble for IBM.

Predicting the outcomes of a complex system must clear a pretty major hurdle: The prediction must be robust to non-linear “accidents” with a chain of unforeseen causation. In some situations, this is doable: We can confidently rule out Microsoft going broke in the next 12 months; the probability of the chain of events needed to take it under that quickly is so low as to be negligible, no matter how you compute it. (Even IBM made it through the above scenario, although not unscathed.)

But as history rolls on and more “accidents” accumulate year by year, a “Fog of the Future” rolls in to obscure our view. To operate in such a world, we must learn that predicting is inferior to building systems that don’t require prediction, as Mother Nature does. And if we must predict, we must confine our predictions to areas with few variables that lie in our circle of competence, and understand the consequences if we’re wrong.

If this topic is interesting to you, try exploring the rest of The Origin of Wealth, which discusses complexity in the economic realm in great (but readable) detail; also check out the rest of Murray Gell-Mann’s essay on Edge. Gell-Mann also wrote a book on the topic, The Quark and the Jaguar, which is worth checking out. The best writer on randomness and robustness in the face of an uncertain future is, of course, Nassim Taleb, whom we have written about many times.

Using Multidisciplinary Thinking to Approach Problems in a Complex World

Complex outcomes in human systems are a tough nut to crack when it comes to deciding what’s really true. Any phenomenon we might try to explain will have a host of competing theories, many of them seemingly plausible.

So how do we know what to go with?

One idea is to take a cue from the best. One of the most successful “explainers” of human behavior has been the cognitive psychologist Steven Pinker. His books have been massively influential, in part because they combine scientific rigor, explanatory power, and plainly excellent writing.

What’s unique about Pinker is the range of sources he draws on. His book The Better Angels of Our Nature, a cogitation on the decline in relative violence in recent human history, draws on ideas from evolutionary psychology, forensic anthropology, statistics, social history, criminology, and a host of other fields. Pinker, like Vaclav Smil and Jared Diamond, is the opposite of the man with a hammer, ranging over much material to come to his conclusions.

In fact, when asked about the progress of social science as an explanatory arena over time, Pinker credited this cross-disciplinary focus:

Because of the unification with the sciences, there are more genuinely explanatory theories, and there’s a sense of progress, with more non-obvious things being discovered that have profound implications.

But, even better, Pinker gives us an outline for how a multidisciplinary thinker should approach problems in a complex world.

***

Here’s the issue at stake: When we’re viewing a complex phenomenon—say, the decline in certain forms of violence in human history—it can be hard to come up with a rigorous explanation. We can’t just set up repeated lab experiments and vary the conditions of human history to see what pops out, as we can with physics or chemistry.

So out of necessity, we must approach the problem in a different way.

In the above-referenced interview, Pinker gives a wonderful example of how to do it. Note how he carefully “cross-checks” from a variety of sources of data, developing a 3D view of the landscape he’s trying to assess:

Pinker: Absolutely, I think most philosophers of science would say that all scientific generalizations are probabilistic rather than logically certain, more so for the social sciences because the systems you are studying are more complex than, say, molecules, and because there are fewer opportunities to intervene experimentally and to control every variable. But the existence of the social sciences, including psychology, to the extent that they have discovered anything, shows that, despite the uncontrollability of human behavior, you can make some progress: you can do your best to control the nuisance variables that are not literally in your control; you can have analogues in a laboratory that simulate what you’re interested in and impose an experimental manipulation.

You can be clever about squeezing the last drop of causal information out of a correlational data set, and you can use converging evidence, the qualitative narratives of traditional history in combination with quantitative data sets and regression analyses that try to find patterns in them. But I also go to traditional historical narratives, partly as a sanity check. If you’re just manipulating numbers, you never know whether you’ve wandered into some preposterous conclusion by taking numbers too seriously that couldn’t possibly reflect reality. Also, it’s the narrative history that provides hypotheses that can then be tested. Very often a historian comes up with some plausible causal story, and that gives the social scientists something to do in squeezing a story out of the numbers.

Warburton: I wonder if you’ve got an example of just that, where you’ve combined the history and the social science?

Pinker: One example is the hypothesis that the Humanitarian Revolution during the Enlightenment, that is, the abolition of slavery, torture, cruel punishments, religious persecution, and so on, was a product of an expansion of empathy, which in turn was fueled by literacy and the consumption of novels and journalistic accounts. People read what life was like in other times and places, and then applied their sense of empathy more broadly, which gave them second thoughts about whether it’s a good idea to disembowel someone as a form of criminal punishment. So that’s a historical hypothesis. Lynn Hunt, a historian at the University of California–Berkeley, proposed it, and there are some psychological studies that show that, indeed, if people read a first-person account by someone unlike them, they will become more sympathetic to that individual, and also to the category of people that that individual represents.

So now we have a bit of experimental psychology supporting the historical qualitative narrative. And, in addition, one can go to economic historians and see that, indeed, there was first a massive increase in the economic efficiency of manufacturing a book, then there was a massive increase in the number of books published, and finally there was a massive increase in the rate of literacy. So you’ve got a story that has at least three vertices: the historian’s hypothesis; the economic historians identifying exogenous variables that changed prior to the phenomenon we’re trying to explain, so the putative cause occurs before the putative effect; and then you have the experimental manipulation in a laboratory, showing that the intervening link is indeed plausible.

Pinker is saying: Look, we can’t just rely on “plausible narratives” generated by folks like the historians. There are too many possibilities that could be correct.

Nor can we rely purely on correlations (i.e., the rise in literacy statistically tracking the decline in violence) — they don’t necessarily offer us a causative explanation. (Does the rise in literacy cause less violence, or is it vice versa? Or does a third factor cause both?)
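
As a toy illustration of the third-factor problem (my sketch, not Pinker’s; the variable names are purely hypothetical), here is a small Python simulation in which a hidden common cause drives both “literacy” and “nonviolence.” The two measures end up strongly correlated even though neither has any causal effect on the other:

    # A hidden common cause produces a strong correlation between two
    # variables that never influence each other. Toy simulation only.
    import random

    random.seed(42)
    n = 10_000
    hidden = [random.gauss(0, 1) for _ in range(n)]  # unobserved factor Z

    # X and Y each depend on Z plus independent noise -- not on each other.
    literacy = [z + random.gauss(0, 0.5) for z in hidden]
    nonviolence = [z + random.gauss(0, 0.5) for z in hidden]

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    print(f"corr(literacy, nonviolence) = {pearson(literacy, nonviolence):.2f}")
    # Prints roughly 0.80 -- a large correlation with zero direct causation.

This is exactly why Pinker layers in experimental evidence and historical timing (the putative cause must precede the putative effect) before trusting any causal story.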

However, if we layer in some other known facts from areas we can experiment on — say, psychology or cognitive neuroscience — we can sometimes establish the causal link we need or, at worst, a better hypothesis of reality.

In this case, it would be the finding from psychology that certain forms of literacy do indeed increase empathy (for logical reasons).

Does this method give us absolute proof? No. However, it does allow us to propose and then test, re-test, alter, and strengthen or ultimately reject a hypothesis. (In other words, rigorous thinking.)

We can’t stop here though. We have to take time to examine competing hypotheses — there may be a better fit. The interviewer continues on asking Pinker about this methodology:

Warburton: And so you conclude that the de-centering that occurs through novel-reading and first-person accounts probably did have a causal impact on the willingness of people to be violent to their peers?

Pinker: That’s right. And, of course, one has to rule out alternative hypotheses. One of them could be the growth of affluence: perhaps it’s simply a question of how pleasant your life is. If you live a longer and healthier and more enjoyable life, maybe you place a higher value on life in general, and, by extension, the lives of others. That would be an alternative hypothesis to the idea that there was an expansion of empathy fueled by greater literacy. But that can be ruled out by data from economic historians that show there was little increase in affluence during the time of the Humanitarian Revolution. The increase in affluence really came later, in the 19th century, with the advent of the Industrial Revolution.

***

Let’s review the process that Pinker has laid out, one that we might think about emulating as we examine the causes of complex phenomena in human systems:

  1. We observe an interesting phenomenon in need of explanation, one we feel capable of exploring.
  2. We propose and examine competing hypotheses that would explain the phenomenon (set up in a falsifiable way, in harmony with the divide between science and pseudoscience laid out for us by the great Karl Popper).
  3. We examine a cross-section of: empirical data relating to the phenomenon; sensible qualitative inference (from multiple fields/disciplines, the more fundamental the better); and finally, “demonstrable” aspects of nature we are nearly certain about, arising from controlled experiment or other rigorous sources of knowledge ranging from engineering to biology to cognitive neuroscience.

What we end up with is not necessarily a bulletproof explanation, but it is probably the best we can do if we think carefully. A good cross-disciplinary examination with quantitative and qualitative sources coming into equal play, and a good dose of judgment, can be far more rigorous than the gut-instinct or plausible-nonsense stories that many of us lazily spout.

A Word of Caution

Although Pinker’s “multiple vertices” approach to problem solving in complex domains can be powerful, we always have to be on guard for phenomena that we simply cannot explain at our current level of competence: We must have a “too hard” pile when competing explanations come out “too close to call” or we otherwise feel we’re outside of our circle of competence. Always tread carefully and be sure to follow Darwin’s Golden Rule: Contrary facts are more important than confirming ones. Be ready to change your mind, like Darwin, when the facts don’t go your way.

***

Still Interested? For some more Pinker goodness check out our prior posts on his work, or check out a few of his books like How the Mind Works or The Blank Slate: The Modern Denial of Human Nature.