
What Can Chain Letters Teach Us About Natural Selection?

“It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.”
— Richard Dawkins, River Out of Eden

***

In 1859, Charles Darwin first described his theory of evolution through natural selection in The Origin of Species. Here we are, 157 years later, and although it has become an established fact in the field of biology, its beauty is still not well understood among the general public. I think that’s because it’s slightly counter-intuitive. Yet unlike string theory or quantum mechanics, the theory of evolution through natural selection is graspable by almost anyone.

So, is there a way we can help ourselves understand the theory in an intuitive way, so we can better go on applying it to other domains? I think so, and it comes from an interesting little volume released in 1995 by the biologist Richard Dawkins called River Out of Eden. But first, let’s briefly head back to the Origin of Species, so we’re clear on what we’re trying to understand.

***

In the fourth chapter of the book, entitled “Natural Selection,” Darwin describes a somewhat cold and mechanistic process for the development of species: If the members of a species had heritable traits and varied within their population, they would survive and reproduce in different numbers, and those best adapted to their conditions would thrive and pass those traits on to successive generations. Eventually, new species would arise, slowly, as enough variation and differential reproduction acted on the population to create a de facto branch in the family tree.

Here’s the original description.

Let it be borne in mind how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life. Can it, then, be thought improbable, seeing that variations useful to man have undoubtedly occurred, that other variations useful in some way to each being in the great and complex battle of life, should sometimes occur in the course of thousands of generations? If such do occur, can we doubt (remembering that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed. This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection.

[…]

In such case, every slight modification, which in the course of ages chanced to arise, and which in any way favored the individuals of any species, by better adapting them to their altered conditions, would tend to be preserved; and natural selection would thus have free scope for the work of improvement.

[…]

It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life.

The beauty of the theory is in its simplicity. The mechanism of evolution is, at root, a simple one. An unguided one. Better descendants outperform lesser ones in a competitive world and are more successful at replicating. Traits that improve the survival of their holder in its current environment tend to be preserved and amplified over time. This is hard to see in real-time, although some examples are helpful in understanding the concept, e.g. antibiotic resistance.

Darwin’s idea didn’t take hold as quickly as we might like to think. In The Reluctant Mr. Darwin, David Quammen describes the period after the release of that groundbreaking work, in which the world had trouble coming to grips with Darwin’s theory. It was not the case, as it might seem today, that the world simply threw up its hands and accepted Darwin as a genius. Quite the contrary, and that is a lesson in and of itself:

By the 1890s, natural selection as Darwin had defined it–that is, differential reproductive success resulting from small, undirected variations and serving as the chief mechanism of adaptation and divergence–was considered by many evolutionary biologists to have been a wrong guess.

It wasn’t until Gregor Mendel’s peas showed how heritability worked that Darwin’s ideas were truly vindicated against his rivals’. So if we have trouble coming to terms with evolution by natural selection in the modern age, we’re not alone: So did Darwin’s peers.

***

What’s this all got to do with chain letters? Well, in Dawkins’ River Out of Eden, he provides an analogy for the process of evolution through natural selection that is quite intuitive, and helpful in understanding the simple power of the idea. How would a certain type of chain letter come to dominate the population of all chain letters? It would work the same way.

A simple example is the so-called chain letter. You receive in the mail a postcard on which is written: “Make six copies of this card and send them to six friends within a week. If you do not do this, a spell will be cast upon you and you will die in horrible agony within a month.” If you are sensible you will throw it away. But a good percentage of people are not sensible; they are vaguely intrigued, or intimidated by the threat, and send six copies of it to other people. Of these six, perhaps two will be persuaded to send it on to six other people. If, on average, 1/3 of the people who receive the card obey the instructions written on it, the number of cards in circulation will double every week. In theory, this means that the number of cards in circulation after one year will be 2 to the power of 52, or about four thousand trillion. Enough post cards to smother every man, woman, and child in the world.

Exponential growth, if not checked by the lack of resources, always leads to startlingly large-scale results in a surprisingly short time. In practice, resources are limited and other factors, too, serve to limit exponential growth. In our hypothetical example, individuals will probably start to balk when the same chain letter comes around to them for the second time. In the competition for resources, variants of the same replicator may arise that happen to be more efficient at getting themselves duplicated. These more efficient replicators will tend to displace their less efficient rivals. It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.

In the case of the chain letter, being efficient may consist in accumulating a better collection of words on the paper. Instead of the somewhat implausible statement that “if you don’t obey the words on the card you will die in horrible agony within a month,” the message might change to “Please, I beg of you, to save your soul and mine, don’t take the risk: if you have the slightest doubt, obey the instructions and send the letter to six more people.”

Such “mutations” happen again and again, and the result will eventually be a heterogeneous population of messages all in circulation, all descended from the same original ancestor but differing in detailed wording and in the strength and nature of the blandishments they employ. The variants that are more successful will increase in frequency at the expense of less successful rivals. Success is simply synonymous with frequency in circulation.

The chain letter contains all of the elements of biological natural selection except one: Someone had to write the first chain letter. The first replicating biological entity, on the other hand, seems to have sprung up from an early chemical brew.
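To make Dawkins’ arithmetic concrete, here is a minimal sketch in Python (mine, not Dawkins’) of chain-letter replicators competing for senders. The six copies per sender and the 1/3 obedience rate come straight from the passage above; the second variant’s slightly higher rate is a made-up stand-in for a “better collection of words on the paper.”

```python
# Sketch: chain-letter replicators competing for senders (illustrative numbers).
# Each recipient who "obeys" mails out six copies; everyone else throws the card away.
# Variant B's higher obedience rate is a hypothetical stand-in for more persuasive wording.

COPIES_PER_SENDER = 6

variants = {
    "A (threatening)": {"obedience": 1 / 3, "cards": 1.0},  # rate from Dawkins' example
    "B (pleading)":    {"obedience": 0.40,  "cards": 1.0},  # hypothetical mutant
}

for week in range(52):
    for v in variants.values():
        # expected cards next week = cards now * P(obey) * copies each obeyer sends
        v["cards"] *= v["obedience"] * COPIES_PER_SENDER

total = sum(v["cards"] for v in variants.values())
for name, v in variants.items():
    print(f"{name}: ~{v['cards']:.2e} cards after a year "
          f"({v['cards'] / total:.1%} of all cards in circulation)")

# Variant A doubles weekly (2**52 is roughly four and a half thousand trillion cards);
# variant B, only slightly "fitter," ends up as nearly 100% of the circulating population.
```

A difference of a few percentage points in persuasiveness, compounded weekly, is enough for one variant to take over essentially the whole population of cards, which is the point of the analogy.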

Consider this analogy an intermediate mental “step” towards the final goal. Because we know and appreciate the power of reasoning by analogy and metaphor, we can deduce that finding an appropriate analogy is one of the best ways to pound an idea into your head–assuming it is a correct idea that should be pounded in.

And because evolution through natural selection is one of the more powerful ideas a human being has ever had, it seems worth our time to pound this one in for good and start applying it elsewhere if possible. (For example, in his talk, A Lesson on Worldly Wisdom, Munger uncovers how business evolves in a manner such that competitive results are frequently similar to biological outcomes.)

Read Dawkins’ book in full for a deeper look at his views on replication and natural selection. It’s shorter than some of his other works but worth the time.

How Darwin Thought: The Golden Rule of Thinking

In his 1986 speech at the commencement of Harvard-Westlake School in Los Angeles (found in Poor Charlie’s Almanack), Charlie Munger gave a short Johnny Carson-style talk on the things to avoid in order to end up with a happy and successful life. One of his most salient prescriptions comes from the life of Charles Darwin:

It is my opinion, as a certified biography nut, that Charles Robert Darwin would have ranked in the middle of the Harvard School graduating class of 1986. Yet he is now famous in the history of science. This is precisely the type of example you should learn nothing from if bent on minimizing your results from your own endowment.

Darwin’s result was due in large measure to his working method, which violated all my rules for misery and particularly emphasized a backward twist in that he always gave priority attention to evidence tending to disconfirm whatever cherished and hard-won theory he already had. In contrast, most people early achieve and later intensify a tendency to process new and disconfirming information so that any original conclusion remains intact. They become people of whom Philip Wylie observed: “You couldn’t squeeze a dime between what they already know and what they will never learn.”

The life of Darwin demonstrates how a turtle may outrun a hare, aided by extreme objectivity, which helps the objective person end up like the only player without a blindfold in a game of Pin the Tail on the Donkey.


The great Harvard biologist E.O. Wilson agreed. In his book, Letters to a Young Scientist, Wilson argued that Darwin would have probably scored in the 130 range on a standard IQ test. And yet there he is, buried next to the calculus-inventing genius Isaac Newton in Westminster Abbey. (As Munger often notes.)

I had, also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favorable ones.

What can we learn from the working and thinking habits of Darwin?

Extreme Focus Combined with Attentive Energy

The first clue comes from his own autobiography. Darwin was a hoover of information on any topic that interested him. After describing some of his specific areas of study while aboard the H.M.S. Beagle, Darwin concludes in his Autobiography:

The above various special studies were, however, of no importance compared with the habit of energetic industry and of concentrated attention to whatever I was engaged in, which I then acquired. Everything about which I thought or read was made to bear directly on what I had seen and was likely to see; and this habit of mind was continued during the five years of the voyage. I feel sure that it was this training which has enabled me to do whatever I have done in science.

This habit of pure and attentive focus on the task at hand is, of course, echoed in many of our favorite thinkers, from Sherlock Holmes to E.O. Wilson, Feynman, Einstein, and others. Munger himself remarked that “I did not succeed in life by intelligence. I succeeded because I have a long attention span.”

In Darwin’s quest to understand the origin and development of species, almost nothing relevant to the task at hand escaped his attention. He had an extremely broad antenna. Says David Quammen in his fabulous The Reluctant Mr. Darwin:

One of Darwin’s great strengths as a scientist was also, in some ways, a disadvantage: his extraordinary breadth of curiosity. From his study at Down House he ranged widely and greedily, in his constant search for data, across distances (by letter) and scientific fields. He read eclectically and kept notes like a pack rat. Over the years he collected an enormous quantity of interconnected facts. He looked for patterns but was intrigued equally by exceptions to the patterns, and exceptions to the exceptions. He tested his ideas against complicated groups of organisms with complicated stories, such as the barnacles, the orchids, the social insects, the primroses, and the hominids.

Not only was Darwin thinking broadly, taking in facts at all turns and on many subjects, but he was thinking carefully. This is where Munger’s admiration comes in: Darwin wanted to look at the exceptions, and the exceptions to the exceptions. He was on the hunt for truth, not necessarily for confirmation of some well-loved idea. Simply put, he didn’t want to be wrong about the nature of reality. To get the theory whole and correct would take a great deal of detail and time, as we will see.

***

The habit of study and observation didn’t stop at the plant and animal kingdom for Darwin. In a move that might seem strange by today’s standards, Darwin even opened a notebook to study the development of his own newborn son, William. This is from one of his notebooks:

Natural History of Babies

Do babies start (i.e., useless sudden movement of muscles) very early in life. Do they wink, when anything placed before their eyes, very young, before experience can have taught them to avoid danger. Do they know frown when they first see it?

From there, as his child grew and developed, Darwin took close notes. How did he figure out that the reflection in the mirror was him? How did he then figure out it was only an image of him, and that any other images that showed up (say, Dad standing behind him) were mere images too – not reality? These were further data in Darwin’s mental model of the accumulation of gradual changes, but more importantly, displayed his attention to detail. Everything eventually came to “bear directly on what I had seen and what I was likely to see.”

And in a practical sense, Darwin was a relentless note-taker. Notebook A, Notebook B, Notebook C, Notebook M, Notebook N…all filled with observations from his study of journals and texts, his own scientific work, his travels, and his life. Once he sat down to write, he had an enormous amount of prior written thought to draw on. He could also see gaps in his understanding, which he diligently filled in.

Become an Expert

You can learn much about Darwin (and, truthfully, about anyone) by looking at whom he studied and admired. If Darwin held anyone in high esteem, it was Charles Lyell, whose Principles of Geology was his faithful companion aboard the H.M.S. Beagle. Here is his description of Lyell from his autobiography, which tells us something of the traits Darwin valued and sought to emulate:

I saw more of Lyell than of any other man before and after my marriage. His mind was characterized, as it appeared to me, by clearness, caution, sound judgment and a good deal of originality. When I made any remark to him on Geology, he never rested until he saw the whole case clearly and often made me see it more clearly than I had done before. He would advance all possible objections to my suggestions, and even after these were exhausted would long remain dubious. A second characteristic was his hearty sympathy with the work of other scientific men.

Studying Lyell and geology reinforced Darwin’s (probably natural) suspicion that careful, detailed, and objective work was required to create scientific breakthroughs. And once Darwin had the kind of grounding and expertise Lyell demanded for understanding and explaining geology, he had a basis for the rest of his scientific work. From his autobiography:

After my return to England, it appeared to me that by following the example of Lyell in Geology, and by collecting all facts which bore in any way on the variation of animals and plants under domestication and nature, some light might perhaps be thrown on the whole subject.

In fact, it was Darwin’s study and understanding of geology itself that gave him something to lean on conceptually. Lyell’s theory of geology, and his own, described a slow-moving process that accumulated massive changes gradually over time. This seems like common knowledge today, but at the time, people weren’t so sure that mountains and islands could have been created by such slow-moving, incremental processes.

Wallace & Gruber’s book Creative People at Work, an analysis of a variety of thinkers and artists, argues that this basic mental model carried Darwin pretty far:

Why was the acquisition of expert knowledge in geology so important to the development of Darwin’s overall thinking? Because in learning geology Darwin ground a conceptual lens — a device for bringing into focus and clarifying the problems to which he turned his attention. When his attention shifted to problems beyond geology, the lens remained and Darwin used it in exploring new problems.

[…]

(Darwin’s) coral reef theory shows that he had become an expert in one field…(and) the central idea in Darwin’s understanding of geology was “gradualism” — that great things could be produced by long, continued accumulation of very small effects. The next phase in the development of this thought-form would involve his use of it as the basis for the construction of analogies between geology and new, unfamiliar subjects.

[…]

Darwin wrote his most explicit and concise statement of the nature and utility of his gradualism thought-form: “This multiplication of little means and bringing the mind to grapple with great effect produced is a most laborious & painful effort of the mind.” He recognized that it took patience and discipline to discover the “little means” that were responsible for great effects. With the necessary effort, however, this gradualism thought-form could become the vehicle for explaining many remarkable phenomena in geology, biology, and even psychology.

It is amazing to note that Darwin did not publish The Origin of Species until 1859 even though his notebooks show he had been pretty close to the correct idea at least 15 or 20 years prior. What was he doing in all that time? Well, for eight years at least, he was studying barnacles.

***

One of the reasons Darwin went on a crusade of classifying and studying the barnacles in minute detail was his concern that if he wasn’t a primary expert on some portion of the natural world, his work on a larger and more general thesis would not be taken seriously, and that it would probably have holes. He said as much to his friend Frederic Gerard, a French botanist, before he had begun his barnacle work: “How painfully (to me) true is your remark that no one has hardly a right to examine the question of species who has not minutely described many.” And, of course, Darwin being Darwin, he spent eight years remedying that unfathomable situation.

It seemed like extraordinarily tedious work, unrelated to anything a scientist would consider important on a grand scale. It was taxonomy. Classification. Even Darwin admitted later on that he doubted it was worth the years he spent on it. Yet, in his detail-oriented journey for expertise on barnacles, he hit upon some key ideas that would make his theory of natural selection complete. Says Quammen:

He also found notable differences on another categorical level: within species. Contrary to what he’d believed all along about the rarity of variation in the wild, barnacles turned out to be highly variable. A species wasn’t a Platonic essence or a metaphysical type. A species was a population of differing individuals.

He wouldn’t have seen that if he hadn’t assigned himself the tricky job of drawing lines between one species and another. He wouldn’t have seen it if he hadn’t used his network of contacts and his good reputation as a naturalist to gather barnacle specimens, in quantity, from all over the world. The truth of variation only reveals itself in crowds. He wouldn’t have seen it if he hadn’t examined multiple individuals, not just single representatives, of as many species as possible….Abundant variation among barnacles filled a crucial role in his theory. Here they were, the minor differences on which natural selection works.

Darwin was so diligent it could be breathtaking at times. Quammen describes him gathering up various species to assess the data about their development and their variation. Birds, dead or alive, as many as possible. Foxes, dogs, ducks, pigeons, rabbits, cats…nothing escaped his purview. As many specimens as he could get his hands on. All while living in a secluded house in Victorian England, beset by constant illness. He was Big Data before Big Data was a thing, trying to suss out conclusions from a mass of observation.

The Golden Rule

Eventually, his work led him to something new: Species are not immutable; they are all part of the same family tree. They evolve through a process of variation — he didn’t know how; that took years for others to figure out through the study of genetics — and differential survival through natural selection.

Darwin was able to put his finger on why it took so long for humanity to come to this correct theory: It was extremely counter-intuitive to how one would naturally see the world. He admitted as much in the Origin of Species’ concluding chapter:

The chief cause of our natural unwillingness to admit that one species has given birth to other and distinct species, is that we are always slow in admitting any great changes of which we do not see the steps. The difficulty is the same as that felt by so many geologists, when Lyell first insisted that long lines of inland cliffs had been formed, and great valleys excavated, by the agencies which we still see at work. The mind cannot possibly grasp the full meaning of the term of even a million years; it cannot add up and perceive the full effects of many slight variations, accumulated during an almost infinite number of generations.

Counter-intuition was Darwin’s specialty. And the reason he was so good at it was that he had a very simple habit of thought, described in his autobiography and so cherished by Charlie Munger: He paid special attention to collecting facts which did not agree with his prior conceptions. He called this a golden rule.

I had, also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favorable ones. Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer.

So we see that Darwin’s great success, by his own analysis, owed much to his ability to see, note, and learn from objections to his cherished thoughts. The Origin of Species has stood up in the face of 157 years of subsequent biological research because Darwin was so careful to make sure the theory was nearly impossible to refute. Later scientists would find the book slightly incomplete, but not incorrect.

This passage reminds one of, and probably influenced, Charlie Munger’s prescription on the work required to hold an opinion: You must understand the opposite side of the argument better than the person holding that side does. It’s a very difficult way to think, tremendously unnatural in the face of our genetic makeup (the more typical response is to look for as much confirming evidence as possible). Harnessed properly, though, it is a powerful way to beat your own shortcomings and become a seeing man amongst the blind.

Thus, we can deduce that, in addition to good luck and good timing, it was Darwin’s habits of completeness, diligence, accuracy, and objectivity which ultimately led him to his greatest breakthroughs. It was tedious work. There was no spark of divine insight that gave him his edge. He just started with the right basic ideas and the right heroes, and then worked for a long time with extreme focus and objectivity, always keeping his eye on reality.

In the end, you can do worse than to read all you can find on Charles Darwin and try to copy his mental habits. They will serve you well over a long life.

Claude Shannon: The Man Who Turned Paper Into Pixels

“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning.”
— Claude Shannon (1948)

***

Claude Shannon is the most important man you’ve probably never heard of. If Alan Turing is to be considered the father of modern computing, then the American mathematician Claude Shannon is the architect of the Information Age.

The video, created by the British filmmaker Adam Westbrook, echoes Nassim Taleb’s observation that boosting the signal does not mean you remove the noise; in fact, just the opposite: you amplify it too.

Any time you try to send a message from one place to another, something always gets in the way. The original signal is always distorted. Wherever there is signal there is also noise.

So what do you do? Well, the best anyone could do back then was to boost the signal. But then all you do is boost the noise.

Thing is we were thinking about information all wrong. We were obsessed with what a message meant.

A Renoir and a receipt? They’re different, right? Was there a way to think of them in the same way? Like so many breakthroughs the answer came from an unexpected place. A brilliant mathematician with a flair for blackjack.

***

The transistor was invented in 1948 at Bell Telephone Laboratories. This remarkable achievement, however, “was only the second most significant development of that year,” writes James Gleick in his fascinating book The Information: A History, a Theory, a Flood. The most important development of 1948, and the one that still underpins modern technology, is the bit.

An invention even more profound and more fundamental came in a monograph spread across seventy-nine pages of The Bell System Technical Journal in July and October. No one bothered with a press release. It carried a title both simple and grand, “A Mathematical Theory of Communication,” and the message was hard to summarize. But it was a fulcrum around which the world began to turn. Like the transistor, this development also involved a neologism: the word bit, chosen in this case not by committee but by the lone author, a thirty-two-year-old named Claude Shannon. The bit now joined the inch, the pound, the quart, and the minute as a determinate quantity— a fundamental unit of measure.

But measuring what? “A unit for measuring information,” Shannon wrote, as though there were such a thing, measurable and quantifiable, as information.

[…]

Shannon’s theory made a bridge between information and uncertainty; between information and entropy; and between information and chaos. It led to compact discs and fax machines, computers and cyberspace, Moore’s law and all the world’s Silicon Alleys. Information processing was born, along with information storage and information retrieval. People began to name a successor to the Iron Age and the Steam Age.
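To get a feel for what Shannon’s new unit actually measures, here is a minimal sketch in Python of his entropy formula, H = −Σ p·log₂(p) bits per symbol. The helper name and the example distributions (a fair coin, a biased coin, a 26-letter alphabet) are my own illustrations, not Gleick’s or Shannon’s.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin: two equally likely outcomes -> exactly 1 bit per toss.
print(entropy_bits([0.5, 0.5]))       # 1.0

# A heavily biased coin (90/10): still two outcomes, but far less surprise,
# so each toss carries less information.
print(entropy_bits([0.9, 0.1]))       # ~0.47 bits

# 26 equally likely letters: log2(26) ≈ 4.7 bits per letter, an upper bound
# real English never reaches because its letters are far from equiprobable.
print(entropy_bits([1 / 26] * 26))    # ~4.70 bits
```

The fair coin is the benchmark: one yes-or-no answer, one bit. The biased coin shows why “information” and “surprise” go together, and why predictable messages compress so well.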

Gleick also recounts the relationship between Turing and Shannon:

In 1943 the English mathematician and code breaker Alan Turing visited Bell Labs on a cryptographic mission and met Shannon sometimes over lunch, where they traded speculation on the future of artificial thinking machines. (“Shannon wants to feed not just data to a Brain, but cultural things!” Turing exclaimed. “He wants to play music to it!”)

Commenting on the vitality of information, Gleick writes:

(Information) pervades the sciences from top to bottom, transforming every branch of knowledge. Information theory began as a bridge from mathematics to electrical engineering and from there to computing. … Now even biology has become an information science, a subject of messages, instructions, and code. Genes encapsulate information and enable procedures for reading it in and writing it out. Life spreads by networking. The body itself is an information processor. Memory resides not just in brains but in every cell. No wonder genetics bloomed along with information theory. DNA is the quintessential information molecule, the most advanced message processor at the cellular level— an alphabet and a code, 6 billion bits to form a human being. “What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life,’” declares the evolutionary theorist Richard Dawkins. “It is information, words, instructions.… If you want to understand life, don’t think about vibrant, throbbing gels and oozes, think about information technology.” The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment.

The bit is the very core of the information age.

The bit is a fundamental particle of a different sort: not just tiny but abstract— a binary digit, a flip-flop, a yes-or-no. It is insubstantial, yet as scientists finally come to understand information, they wonder whether it may be primary: more fundamental than matter itself. They suggest that the bit is the irreducible kernel and that information forms the very core of existence.

In the words of John Archibald Wheeler, the last surviving collaborator of both Einstein and Bohr, information gives rise to “every it— every particle, every field of force, even the spacetime continuum itself.”

This is another way of fathoming the paradox of the observer: that the outcome of an experiment is affected, or even determined, when it is observed. Not only is the observer observing, she is asking questions and making statements that must ultimately be expressed in discrete bits. “What we call reality,” Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer —a cosmic information-processing machine.

The greatest gift of Prometheus to humanity was not fire after all: “Numbers, too, chiefest of sciences, I invented for them, and the combining of letters, creative mother of the Muses’ arts, with which to hold all things in memory.”

Information technologies are relative to the time in which they were created yet absolute in their significance. Gleick writes:

The alphabet was a founding technology of information. The telephone, the fax machine, the calculator, and, ultimately, the computer are only the latest innovations devised for saving, manipulating, and communicating knowledge. Our culture has absorbed a working vocabulary for these useful inventions. We speak of compressing data, aware that this is quite different from compressing a gas. We know about streaming information, parsing it, sorting it, matching it, and filtering it. Our furniture includes iPods and plasma displays, our skills include texting and Googling, we are endowed, we are expert, so we see information in the foreground. But it has always been there. It pervaded our ancestors’ world, too, taking forms from solid to ethereal, granite gravestones and the whispers of courtiers. The punched card, the cash register, the nineteenth-century Difference Engine, the wires of telegraphy all played their parts in weaving the spiderweb of information to which we cling. Each new information technology, in its own time, set off blooms in storage and transmission. From the printing press came new species of information organizers: dictionaries, cyclopaedias, almanacs— compendiums of words, classifiers of facts, trees of knowledge. Hardly any information technology goes obsolete. Each new one throws its predecessors into relief. Thus Thomas Hobbes, in the seventeenth century, resisted his era’s new-media hype: “The invention of printing, though ingenious, compared with the invention of letters is no great matter.” Up to a point, he was right. Every new medium transforms the nature of human thought. In the long run, history is the story of information becoming aware of itself.

The Information: A History, a Theory, a Flood is a fascinating read.

Just Babies: The Origins of Good and Evil

Morality is hard to define, but all non-psychopaths experience strong gut reactions to certain moral violations. One way to understand it is from an evolutionary perspective. Our sense of morality is inherent and a fundamental part of being human.

***

Morality fascinates us. The stories we enjoy the most, whether fictional (as in novels, television shows, and movies) or real (as in journalism and historical accounts), are tales of good and evil. We want the good guys to be rewarded— and we really want to see the bad guys suffer.

So writes Paul Bloom in the first pages of Just Babies: The Origins of Good and Evil. His work proposes that “certain moral foundations are not acquired through learning. They do not come from the mother’s knee …”

What is morality?

Even philosophers don’t agree on morality. In fact, a lot of people don’t believe in morality at all.

To settle on some working terminology, Bloom writes:

Arguments about terminology are boring; people can use words however they please. But what I mean by morality—what I am interested in exploring, whatever one calls it— includes a lot more than restrictions on sexual behavior. Here is a simple example (of morality):

A car full of teenagers drives slowly past an elderly woman waiting at a bus stop. One of the teenagers leans out the window and slaps the woman, knocking her down. They drive away laughing.

Unless you are a psychopath, you will feel that the teenagers did something wrong. And it is a certain type of wrong. It isn’t a social gaffe like going around with your shirt inside out or a factual mistake like thinking that the sun revolves around the earth. It isn’t a violation of an arbitrary rule, such as moving a pawn three spaces forward in a chess game. And it isn’t a mistake in taste, like believing that the Matrix sequels were as good as the original.

As a moral violation, it connects to certain emotions and desires. You might feel sympathy for the woman and anger at the teenagers; you might want to see them punished. They should feel bad about what they did; at the very least, they owe the woman an apology. If you were to suddenly remember that one of the teenagers was you, many years ago, you might feel guilt or shame.

Punching someone in the face.

Hitting someone is a very basic moral violation. Indeed, the philosopher and legal scholar John Mikhail has suggested that the act of intentionally striking someone without their permission— battery is the legal term —has a special immediate badness that all humans respond to. Here is a good candidate for a moral rule that transcends space and time: If you punch someone in the face, you’d better have a damn good reason for it.

Not all morality has to do with what is wrong. “Morality,” Bloom says, “also encompasses questions of rightness.”

Morality from an Evolutionary Perspective

If you think of evolution solely in terms of “survival of the fittest” or “nature red in tooth and claw,” then such universals cannot be part of our natures. Since Darwin, though, we’ve come to see that evolution is far more subtle than a Malthusian struggle for existence. We now understand how the amoral force of natural selection might have instilled within us some of the foundation for moral thought and moral action.

Actually, one aspect of morality, kindness to kin, has long been a no-brainer from an evolutionary point of view. The purest case here is a parent and a child: one doesn’t have to do sophisticated evolutionary modeling to see that the genes of parents who care for their children are more likely to spread through the population than those of parents who abandon or eat their children.

We are also capable of acting kindly and generously toward those who are not blood relatives. At first, the evolutionary origin of this might seem obvious: clearly, we thrive by working together— in hunting, gathering, child care, and so on— and our social sentiments make this coordination possible.

Adam Smith pointed this out long before Darwin: “All the members of human society stand in need of each other’s assistance, and are likewise exposed to mutual injuries. Where the necessary assistance is reciprocally afforded from love, from gratitude, from friendship, and esteem, the society flourishes and is happy.”

This creates a tragedy of the commons problem.

But there is a wrinkle here; for society to flourish in this way, individuals have to refrain from taking advantage of others. A bad actor in a community of good people is the snake in the garden; it’s what the evolutionary biologist Richard Dawkins calls “subversion from within.” Such a snake would do best of all, reaping the benefits of cooperation without paying the costs. Now, it’s true that the world as a whole would be worse off if the demonic genes proliferated, but this is the problem, not the solution— natural selection is insensitive to considerations about “the world as a whole.” We need to explain what kept demonic genes from taking over the population, leaving us with a world of psychopaths.

Darwin’s theory was that cooperative traits could prevail because societies containing individuals who worked together peacefully would tend to defeat societies with less cooperative members— in other words, natural selection operating at the group, rather than the individual, level.

Writing of a hypothetical conflict between two imaginary tribes, Darwin wrote (in The Descent of Man): “If the one tribe included … courageous, sympathetic and faithful members who were always ready to warn each other of danger, to aid and defend each other, this tribe would without doubt succeed best and conquer the other.”

“An alternative theory,” Bloom writes, “more consistent with individual-level natural selection:”

is that the good guys might punish the bad guys. That is, even without such conflict between groups, altruism could evolve if individuals were drawn to reward and interact with kind individuals and to punish— or at least shun —cheaters, thieves, thugs, free riders, and the like.

The Difference Between Compassion and Empathy

There is a big difference between caring about a person (compassion) and putting yourself in the person’s shoes (empathy).

How can we best understand our moral natures?

Many would agree … that this is a question of theology, while others believe that morality is best understood through the insights of novelists, poets, and playwrights. Some prefer to approach morality from a philosophical perspective, looking not at what people think and how people act but at questions of normative ethics (roughly, how one should act) and metaethics (roughly, the nature of right and wrong).

Another lens is science.

We can explore our moral natures using the same methods that we use to study other aspects of our mental life, such as language or perception or memory. We can look at moral reasoning across societies or explore how people differ within a single society— liberals versus conservatives in the United States, for instance. We can examine unusual cases, such as cold-blooded psychopaths. We might ask whether creatures such as chimpanzees have anything that we can view as morality, and we can look toward evolutionary biology to explore how a moral sense might have evolved. Social psychologists can explore how features of the environment encourage kindness or cruelty, and neuroscientists can look at the parts of the brain that are involved in moral reasoning.

What are we born with?

Bloom argues that Thomas Jefferson was right when he wrote in a letter to his friend Peter Carr: “The moral sense, or conscience, is as much a part of man as his leg or arm. It is given to all human beings in a stronger or weaker degree, as force of members is given them in a greater or less degree.” This view, that we have an ingrained moral sense, was shared by Enlightenment philosophers of Jefferson’s period, including Adam Smith. While Smith is best known for An Inquiry into the Nature and Causes of the Wealth of Nations, he himself favored his first book, The Theory of Moral Sentiments. Its pages contain insight into “the relationship between imagination and empathy, the limits of compassion, our urge to punish others’ wrongdoing,” and more.

Bloom quotes Smith’s work to what he calls an “embarrassing degree.”

What aspects of morality are natural to us?

Our natural endowments include:

  • a moral sense— some capacity to distinguish between kind and cruel actions
  • empathy and compassion— suffering at the pain of those around us and the wish to make this pain go away
  • a rudimentary sense of fairness— a tendency to favor equal divisions of resources
  • a rudimentary sense of justice— a desire to see good actions rewarded and bad actions punished

Bloom argues that our goodness, however, is limited. This is perhaps best explained by Thomas Hobbes, who, in 1651, argued that man “in the state of nature” is wicked and self-interested.

We have a moral sense that enables us to judge others and that guides our compassion and condemnation. We are naturally kind to others, at least some of the time. But we possess ugly instincts as well, and these can metastasize into evil. The Reverend Thomas Martin wasn’t entirely wrong when he wrote in the nineteenth century about the “native depravity” of children and concluded that “we bring with us into the world a nature replete with evil propensities.”

In The End …

We’re born with some elements of morality; others take time to emerge because they require a capacity for reasoning. “The baby lacks a grasp of impartial moral principles—prohibitions or requirements that apply equally to everyone within a community. Such principles are at the foundation of systems of law and justice.”

There is a popular view that we are slaves of the passions …

that our moral judgments and moral actions are the product of neural mechanisms that we have no awareness of and no conscious control over. If this view of our moral natures were true, we would need to buck up and learn to live with it. But it is not true; it is refuted by everyday experience, by history, and by the science of developmental psychology.

It turns out instead that the right theory of our moral lives has two parts. It starts with what we are born with, and this is surprisingly rich: babies are moral animals. But we are more than just babies. A critical part of our morality—so much of what makes us human—emerges over the course of human history and individual development. It is the product of our compassion, our imagination, and our magnificent capacity for reason.

***

Still Curious? Just Babies: The Origins of Good and Evil goes on to explore some of the ways that Hobbes was right, among them: our indifference to strangers and our instinctive emotional responses.

Breakpoint: When Bigger is Not Better

Jeff Stibel’s book Breakpoint: Why the Web will Implode, Search will be Obsolete, and Everything Else you Need to Know about Technology is in Your Brain is an interesting read. The book is about “understanding what happens after a breakpoint. Breakpoints can’t and shouldn’t be avoided, but they can be identified.”

What is missing—what everyone is missing—is that the unit of measure for progress isn’t size, it’s time.

In any system continuous growth is impossible. Everything reaches a breakpoint. The real question is how the system responds to this breakpoint. “A successful network has only a small collapse, out of which a stronger network emerges wherein it reaches equilibrium, oscillating around an ideal size.”

The book opens with an interesting example.

In 1944, the United States Coast Guard brought 29 reindeer to St. Matthew Island, located in the Bering Sea just off the coast of Alaska. Reindeer love eating lichen, and the island was covered with it, so the reindeer gorged, grew large, and reproduced exponentially. By 1963, there were over 6,000 reindeer on the island, most of them fatter than those living in natural reindeer habitats.

There were no human inhabitants on St. Matthew Island, but in May 1965 the United States Navy sent an airplane over the island, hoping to photograph the reindeer. There were no reindeer to be found, and the flight crew attributed this to the fact that the pilot didn’t want to fly very low because of the mountainous landscape. What they didn’t realize was that all of the reindeer, save 42 of them, had died. Instead of lichen, the ground was covered with reindeer skeletons.

The network of St. Matthew Island reindeer had collapsed: the result of a population that grew too large and consumed too much. The reindeer crossed a pivotal point, a breakpoint, when they began consuming more lichen than nature could replenish. Lacking any awareness of what was happening to them, they continued to reproduce and consume. The reindeer destroyed their environment and, with it, their ability to survive. Within a few short years, the remaining 42 reindeer were dead. Their collapse was so extreme that for these reindeer there was no recovery.

In the wild, of course, reindeer can move if they run out of lichen, which allows lichen in the area to be replenished before they return.

Nature rarely allows the environment to be pushed so far that it collapses. Ecosystems generally keep life balanced. Plants create enough oxygen for animals to survive, and the animals, in turn, produce carbon dioxide for the plants. In biological terms, ecosystems create homeostasis.

We evolved to reproduce and consume whatever food is available.

Back when our ancestors started climbing down from the trees, this was a good thing: food was scarce so if we found some, the right thing to do was gorge. As we ate more, our brains were able to grow, becoming larger than those of any other primates. This was a very good thing. But brains consume disproportionately large amounts of energy and, as a result, can only grow so big relative to body size. After that point, increased calories are actually harmful. This presents a problem for humanity, sitting at the top of the food pyramid. How do we know when to stop eating? The answer, of course, is that we don’t. People in developed nations are growing alarmingly obese, morbidly so. Yet we continue to create better food sources, better ways to consume more calories with less bite.

Mother Nature won’t help us because this is not an evolutionary issue: most of the problems that result from eating too much happen after we reproduce, at which point we are no longer evolutionarily important. We are on our own with this problem. But that is where our big brains come in. Unlike reindeer, we have enough brainpower to understand the problem, identify the breakpoint, and prevent a collapse.

We all know that physical things have limits. But so do the things we can’t see or feel. Knowledge is an example. “Our minds can only digest so much. Sure, knowledge is a good thing. But there is a point at which even knowledge is bad.” This is information overload.

We have been conditioned to believe that bigger is better and this is true across virtually every domain. When we try to build artificial intelligence, we start by shoveling as much information into a computer as possible. Then we stare dumbfounded when the machine can’t figure out how to tie its own shoes. When we don’t get the results we want, we just add more data. Who doesn’t believe that the smartest person is the one with the biggest memory and the most degrees, that the strongest person has the largest muscles, that the most creative person has the most ideas?

Growth is great until it goes too far.

[W]e often destroy our greatest innovations by the constant pursuit of growth. An idea emerges, takes hold, crosses the chasm, hits a tipping point, and then starts a meteoric rise with seemingly limitless potential. But more often than not, it implodes, destroying itself in the process.

Growth isn’t bad. It’s just not as good as we think.

Nature has a lesson for us if we care to listen: the fittest species are typically the smallest. The tiniest insects often outlive the largest lumbering animals. Ants, bees, and cockroaches all outlived the dinosaurs and will likely outlive our race. … The deadliest creature is the mosquito, not the lion. Bigger is rarely better in the long run. What is missing—what everyone is missing—is that the unit of measure for progress isn’t size, it’s time.

Of course, “The world is a competitive place, and the best way to stomp out potential rivals is to consume all the available resources necessary for survival.”

Otherwise, the risk is that someone else will come along and use those resources to grow and eventually encroach on the ones we need to survive.

Networks rarely approach limits slowly: “… they often don’t know the carrying capacity of their environments until they’ve exceeded it. This is a characteristic of limits in general: the only way to recognize a limit is to exceed it.” This is what happened with MySpace. It grew too quickly. Pages became cluttered and confusing. There was too much information. It “grew too far beyond its breakpoint.”

There is an interesting paradox here, though: unless you want to keep a social network small, the best way to keep the site clean is to use a filter that prevents you from seeing a lot of information, which creates a filter bubble.

Stibel identifies three phases of any successful network.

first, the network grows and grows and grows exponentially; second, the network hits a breakpoint, where it overshoots itself and overgrows to a point where it must decline, either slightly or substantially; finally, the network hits equilibrium and grows only in the cerebral sense, in quality rather than in quantity.
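Those three phases are easy to see in a toy model. Here is a minimal sketch in Python, with invented numbers rather than anything from Stibel’s book, of a population that grows near-exponentially, overshoots the carrying capacity of its environment (the breakpoint), partially collapses, and then oscillates toward equilibrium; a logistic model with a one-period feedback delay is one simple way to produce that shape.

```python
# Sketch: growth -> breakpoint (overshoot) -> partial collapse -> equilibrium.
# A logistic model with a one-period feedback delay: the population responds to
# last period's crowding, so it overshoots carrying capacity before settling down.
# All numbers are invented for illustration.

GROWTH_RATE = 0.8     # per-period growth rate while resources are abundant
CAPACITY = 6000       # carrying capacity of the environment
population = 30.0     # small founding population (cf. the 29 reindeer)
previous = population

for period in range(1, 41):
    population, previous = (
        population + GROWTH_RATE * population * (1 - previous / CAPACITY),
        population,
    )
    if period % 5 == 0:
        print(f"period {period:2d}: population ≈ {population:7.0f}")

# Early periods: near-exponential growth. Around the breakpoint the population
# shoots past CAPACITY, falls back, and then oscillates around it -- the
# "small collapse, out of which a stronger network emerges" pattern in the text.
```

Swap reindeer for users and lichen for attention, and the same curve roughly describes the MySpace trajectory as Stibel tells it.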

He offers some advice:

Rather than endless growth, the goal should be to grow as quickly as possible—what technologists call hypergrowth—until the breakpoint is reached. Then stop and reap the benefits of scale alongside stability.

Breakpoint goes on to predict the fall of Facebook.

Evolution is Blind but We’re Not

When people in organizations evaluate poor outcomes, the first thing they do is try to figure out what went wrong and why.

Once we have a cause, whether accurate or (often) not, we distribute this information around the organization in the hope that knowing why we made a mistake will prevent us from repeating it.

We attempt to prevent the mistake from happening again.

In his masterful book, Seeing What Others Don’t: The Remarkable Ways We Gain Insights, Gary Klein writes:

“Organizations have lots of reasons to dislike errors: they can pose severe safety risks, they disrupt coordination, they lead to waste, they reduce the chance for project success, they erode the culture, and they can result in lawsuits and bad publicity. … In your job as a manager, you find yourself spending most of your time flagging and correcting errors. You are continually checking to see if workers meet their performance standards. If you find deviations, you quickly respond to get everything back on track. It’s much easier and less frustrating to manage by reducing errors than to try to boost insights. You know how to spot errors.”

We hate errors, and we make every effort not to repeat them.

Here’s an idea that I’ve been toying around with recently — we can’t repeat the same error twice, in part because things are always changing.

In his wonderful book of Fragments, Heraclitus writes:

No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.

The river changes, and so does the person.

Evolution is blind to failure.

Evolution doesn’t have intent. When DNA copying in a species produces a variation—say, a shorter beak or a sweeter taste—it does so without realizing these traits might have been tried before. These traits are not purposeful; evolution is blind to previous failures and cares not whether a mutation that failed eight years ago occurs again. This is not a conscious process. What failed to become an advantageous trait two generations ago may become one today. It may be that the environment changed, and where there was once a preference for a shorter beak, a longer one now offers an advantage, however slight.

By repeating errors, evolution adapts. This is why natural selection works. Artificial selection, on the other hand, makes us fragile because the selection isn’t blind anymore.

So why do we fail? One of the reasons for failure is our own ignorance.

“We may err because science has given us only a partial understanding of the world and how it works,” writes Atul Gawande in The Checklist Manifesto. “There are skyscrapers we do not yet know how to build, snowstorms we cannot predict, heart attacks we still haven’t learned how to stop.”

These things are within our grasp, but we are not quite there yet. Human knowledge grows by the day. Knowledge, in this case, can be positive (“what works”) or negative (“what doesn’t work”). For example, we can now build skyscrapers more than a hundred stories tall; this knowledge didn’t exist 100 years ago. Thanks to computers and technology, we can model more variables and better predict the weather.

(In these endeavors we’re improving quickly in terms of knowledge and technology, while the environment changes more slowly.)

The same water never touches your foot twice. The world is always changing. What used to be a tailwind is now a headwind, and vice versa.

Excusing Ignorance

We can excuse ignorance when we have only a limited understanding, but we cannot excuse ineptitude. Failures that occur when the knowledge exists and we act contrary to it are hard to forgive. This is important in the context of organizations because we tend to forgive someone who makes a ‘mistake’ for the first time but punish the person who makes the same ‘mistake’ again. This is a form of artificial selection.

So we punish a person who, whether intentionally or not, is mimicking evolution. Yet we can never really make the ‘same mistake’ twice, because the exact same conditions never exist again. We’re not the same, and neither is the world. (Of course, they are only punished if the outcome is negative.)

I’m not trying to say that learning from mistakes is bad, only that it is limited (and a form of artificial selection). It’s one piece of the puzzle of knowledge. But if your process for learning from mistakes doesn’t account for changing knowledge, technology, and environments, you have a blind spot. Things change.

Improving our ability to learn from mistakes involves more than simply determining what went wrong and trying to avoid that again in the future. We need a deeper understanding of the key variables that govern the situation (and their relation to the environment), the decision-making process, and our knowledge at the time of the decision.

Sometimes it’s smart to attempt things without knowledge of previous mistakes, and sometimes it’s not.