Tag: Biology

Moving the Finish Line: The Goal Gradient Hypothesis

Imagine a middle-distance runner in a championship race. He’s competing in the 1600 meter run.

The first two laps he runs at a steady but hard pace, trying to keep himself consistently near the head, or at least the middle, of the pack, hoping not to fall too far behind while also conserving energy for the whole race.

About 800 meters in, he feels himself start to fatigue and slow. At 1000 meters, he feels himself consciously expending less energy. At 1200, he’s convinced that he didn’t train enough.

Now watch him approach the last 100 meters, the “mad dash” for the finish. He’s been running what would be an all-out sprint to us mortals for 1500 meters, and yet what happens now, as he feels himself neck and neck with his competitors, the finish line in sight?

He speeds up. The fatigue seems to fall away. The goal is right there, and all he needs is one last push. So he pushes.

This is called the Goal Gradient Effect, or more precisely, the Goal Gradient Hypothesis. It is not just a feeling: in biological creatures it is a real, measurable phenomenon.

***

The first person to try explaining the goal gradient hypothesis was an early behavioural psychologist named Clark L. Hull.

Hull was a pretty hardcore “behaviourist”: he thought that human behaviour, like that of other animals, could eventually be reduced to mathematical prediction based on rewards and conditioning. As insane as this sounds now, he had a neat mathematical formula for human behaviour:

[Image: Hull’s equation for behaviour]
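The formula image isn’t reproduced here, but a commonly cited form of Hull’s equation for “reaction potential”, reconstructed from standard textbook accounts rather than from the original screenshot, looks like this:

```latex
% Hull's reaction potential, in a commonly cited later form (a reconstruction,
% not necessarily the exact expression shown in the original post):
%   sE_R : reaction potential (how strongly the response is evoked)
%   sH_R : habit strength (built up through reinforced practice)
%   D    : drive (e.g., hours of deprivation)
%   K    : incentive motivation (the size of the anticipated reward)
%   V    : stimulus intensity dynamism
{}_{s}E_{R} = {}_{s}H_{R} \times D \times K \times V
```

Roughly: the strength of a behaviour is a multiplicative product of practice, need, and the pull of the reward.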

Some of his ideas eventually came to be seen as extremely limiting, Procrustean-bed models of human behaviour, but the Goal Gradient Hypothesis itself has been replicated many times over the years.

Hull himself wrote papers with titles like The Goal-Gradient Hypothesis and Maze Learning to explore the effect of the idea in rats. As Hull put it, “...animals in traversing a maze will move at a progressively more rapid pace as the goal is approached.” Just like the runner above.

Most of Hull’s work focused on animals rather than humans, showing fairly unequivocally that animals approaching a reward do speed up as the goal nears, enticed by the end of the maze. The idea was, however, resurrected in the human realm in 2006 with a paper entitled The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention.

The paper examined consumer behaviour in the “goal gradient” sense and found, alas, it wasn’t just rats that felt the tug of the “end of the race” — we do too. Examining a few different measurable areas of human behaviour, the researchers found that consumers would work harder to earn incentives as the goal came in sight, and that after the reward was earned, they’d slow down their efforts:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of post-reward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. To the best of our knowledge, this article is the first to demonstrate unequivocal, systematic behavioural goal gradients in the context of the human psychology of rewards.

Fascinating.

***

If we’re to take the idea seriously, the Goal Gradient Hypothesis has some interesting implications for leaders and decision-makers.

The first and most important is probably that incentive structures should take the idea into account. This is a fairly intuitive (but often unrecognized) idea: Far-away rewards are much less motivating than near-term ones. Given the chance to earn $1,000 at the end of this month, and each month thereafter, or $12,000 at the end of the year, which would you be more likely to work hard for?

What if I pushed it back even more but gave you some “interest” to compensate: Would you work harder for the potential to earn $90,000 five years from now, or to earn $1,000 this month, followed by $1,000 the following month, and so on, every single month for five years?
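One crude way to make that comparison concrete is to discount each payment by how long you have to wait for it. The sketch below is my own illustration, not something from the article: exponential discounting stands in for the felt pull of nearer rewards, and the 5% monthly rate is an arbitrary assumption chosen to exaggerate the effect.

```python
# Hypothetical illustration: the same nominal money feels very different
# depending on how soon each payment lands. The 5% monthly discount rate
# is a made-up assumption, not a figure from the article.

def discounted_value(payments, monthly_discount=0.05):
    """Sum of payments, each shrunk by `monthly_discount` per month of delay."""
    return sum(amount / (1 + monthly_discount) ** month
               for month, amount in payments)

monthly_plan = [(m, 1_000) for m in range(1, 61)]   # $1,000 every month for 5 years
lump_sum_plan = [(60, 90_000)]                      # $90,000 once, 5 years from now

print(f"Monthly plan:  nominal ${sum(a for _, a in monthly_plan):,}, "
      f"felt value ${discounted_value(monthly_plan):,.0f}")
print(f"Lump-sum plan: nominal ${sum(a for _, a in lump_sum_plan):,}, "
      f"felt value ${discounted_value(lump_sum_plan):,.0f}")
```

Under that (deliberately steep) discount, the smaller-but-steady stream easily outweighs the larger lump sum, which is the intuition the question is pointing at.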

Companies like Nucor take the idea seriously: They pay bonuses to lower-level employees based on monthly production, not letting it wait until the end of the year. Essentially, the end of the maze happens every 30 days rather than once per year. The time between doing the work and the reward is shortened.

The other takeaway concerns consumer behaviour, as referenced in the marketing paper. If you’re offering rewards for a specific action from your customer, do you reward them sooner, or later?

The answer is almost always going to be “sooner”. In fact, the effect may be strong enough that you can get away with a smaller total reward simply by increasing its velocity.

Lastly, we might be able to harness the Hypothesis in our personal lives.

Let’s say we want to start reading more. Do we set a goal to read 52 books this year and hold ourselves accountable, or to read 1 book a week? What about 25 pages per day?

Not only does moving the finish line closer tend to increase our motivation, but we also repeatedly prove to ourselves that we’re capable of hitting our targets. This is classic behavioural psychology: Instant rewards rather than delayed ones, even if the rewards are only psychological. It also keeps us from procrastinating and leaving, say, 35 books to be read in the last two months of the year.
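As a quick back-of-the-envelope check on the page-a-day framing, here is the arithmetic; the average book length is my assumption, not a figure from the article.

```python
# Rough arithmetic behind "25 pages per day" (the book length is an assumed average).
books_per_year = 52
avg_pages_per_book = 175   # assumption; adjust for the books you actually read
pages_per_day = books_per_year * avg_pages_per_book / 365
print(f"{pages_per_day:.0f} pages per day")   # ~25
```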

Those three seem like useful lessons, but here’s a challenge: Try synthesizing a new rule or idea of your own, combining the Goal Gradient Effect with at least one other psychological principle, and start testing it out in your personal life or in your organization. Don’t let useful nuggets sit around; instead, start eating the broccoli.

How To Mentally Overachieve — Charles Darwin’s Reflections On His Own Mind

We’ve written quite a bit about the marvelous British naturalist Charles Darwin, who with his Origin of Species created perhaps the most intense intellectual debate in human history, one which continues up to this day.

Darwin’s Origin was a courageous and detailed thought piece on the nature and development of biological species. It’s the starting point for nearly all of modern biology.

But, as we’ve noted before, Darwin was not a man of pure IQ. He was not Isaac Newton, or Richard Feynman, or Albert Einstein — breezing through complex mathematical physics at a young age.

Charlie Munger thinks Darwin would have placed somewhere in the middle of a good private high school class. He was also in notoriously bad health for most of his adult life and, by his son’s estimation, a terrible sleeper. He really only worked a few hours a day in the many years leading up to the Origin of Species.

Yet his “thinking work” outclassed almost everyone else’s. An incredible story.

In his autobiography, Darwin reflected on this peculiar state of affairs. What was he good at that led to the result? What was he so weak at? Why did he achieve better thinking outcomes? As he put it, his goal was to:

“Try to analyse the mental qualities and the conditions on which my success has depended; though I am aware that no man can do this correctly.”

In studying Darwin ourselves, we hope to better appreciate our own strengths and weaknesses, not to mention understand the working methods of a “mental overachiever.”

Let’s explore what Darwin saw in himself.

***

1. He did not have a quick intellect or an ability to follow long, complex, or mathematical reasoning. He may have been a bit hard on himself, but Darwin realized that he wasn’t a “5 second insight” type of guy (and let’s face it, most of us aren’t). His life also proves how little that trait matters if you’re aware of it and counter-weight it with other methods.

I have no great quickness of apprehension or wit which is so remarkable in some clever men, for instance, Huxley. I am therefore a poor critic: a paper or book, when first read, generally excites my admiration, and it is only after considerable reflection that I perceive the weak points. My power to follow a long and purely abstract train of thought is very limited; and therefore I could never have succeeded with metaphysics or mathematics. My memory is extensive, yet hazy: it suffices to make me cautious by vaguely telling me that I have observed or read something opposed to the conclusion which I am drawing, or on the other hand in favour of it; and after a time I can generally recollect where to search for my authority. So poor in one sense is my memory, that I have never been able to remember for more than a few days a single date or a line of poetry.

2. He did not feel easily able to write clearly and concisely. He compensated by getting things down quickly and then coming back to them later, thinking them through again and again. Slow, methodical….and ridiculously effective: For those who haven’t read it, the Origin of Species is extremely readable and clear, even now, 150 years later.

I have as much difficulty as ever in expressing myself clearly and concisely; and this difficulty has caused me a very great loss of time; but it has had the compensating advantage of forcing me to think long and intently about every sentence, and thus I have been led to see errors in reasoning and in my own observations or those of others.

There seems to be a sort of fatality in my mind leading me to put at first my statement or proposition in a wrong or awkward form. Formerly I used to think about my sentences before writing them down; but for several years I have found that it saves time to scribble in a vile hand whole pages as quickly as I possibly can, contracting half the words; and then correct deliberately. Sentences thus scribbled down are often better ones than I could have written deliberately.

3. He forced himself to be an incredibly effective and organized collector of information. Darwin’s system of reading and indexing facts in large portfolios is worth emulating, as is the habit of taking down conflicting ideas immediately.

As in several of my books facts observed by others have been very extensively used, and as I have always had several quite distinct subjects in hand at the same time, I may mention that I keep from thirty to forty large portfolios, in cabinets with labelled shelves, into which I can at once put a detached reference or memorandum. I have bought many books, and at their ends I make an index of all the facts that concern my work; or, if the book is not my own, write out a separate abstract, and of such abstracts I have a large drawer full. Before beginning on any subject I look to all the short indexes and make a general and classified index, and by taking the one or more proper portfolios I have all the information collected during my life ready for use.

4. He had possibly the most valuable trait in any sort of thinker: A passionate interest in understanding reality and putting it in useful order in his head. This “Reality Orientation” is hard to measure and certainly does not show up on IQ tests, but probably determines, to some extent, success in life.

On the favourable side of the balance, I think that I am superior to the common run of men in noticing things which easily escape attention, and in observing them carefully. My industry has been nearly as great as it could have been in the observation and collection of facts. What is far more important, my love of natural science has been steady and ardent.

This pure love has, however, been much aided by the ambition to be esteemed by my fellow naturalists. From my early youth I have had the strongest desire to understand or explain whatever I observed—that is, to group all facts under some general laws. These causes combined have given me the patience to reflect or ponder for any number of years over any unexplained problem. As far as I can judge, I am not apt to follow blindly the lead of other men. I have steadily endeavoured to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject), as soon as facts are shown to be opposed to it.

Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified. This has naturally led me to distrust greatly deductive reasoning in the mixed sciences. On the other hand, I am not very sceptical—a frame of mind which I believe to be injurious to the progress of science. A good deal of scepticism in a scientific man is advisable to avoid much loss of time, but I have met with not a few men, who, I feel sure, have often thus been deterred from experiment or observations, which would have proved directly or indirectly serviceable.

[…]

Therefore my success as a man of science, whatever this may have amounted to, has been determined, as far as I can judge, by complex and diversified mental qualities and conditions. Of these, the most important have been—the love of science—unbounded patience in long reflecting over any subject—industry in observing and collecting facts—and a fair share of invention as well as of common sense.

5. Most inspirational to us of average intellect, he outperformed his own mental aptitude with these good habits, surprising even himself with the results.

With such moderate abilities as I possess, it is truly surprising that I should have influenced to a considerable extent the belief of scientific men on some important points.

***

Still Interested? Read his autobiography or The Origin of Species, or check out David Quammen’s wonderful short biography of the most important period of Darwin’s life. Also, if you missed it, check out our prior post on Darwin’s Golden Rule.

The Need for Biological Thinking to Solve Complex Problems

“Biological thinking and physics thinking are distinct, and often complementary, approaches to the world, and ones that are appropriate for different kinds of systems.”

***

How should we think about complexity? Should we approach it with biological thinking or physics thinking? The answer, of course, is that it depends. It’s important to have both tools at your disposal.

These are the questions that Samuel Arbesman explores in his fascinating book Overcomplicated: Technology at the Limits of Comprehension.

[B]iological systems are generally more complicated than those in physics. In physics, the components are often identical—think of a system of nothing but gas particles, for example, or a single monolithic material, like a diamond. Beyond that, the types of interactions can often be uniform throughout an entire system, such as satellites orbiting a planet.

Biology is different and there is something meaningful to be learned from a biological approach to thinking.

In biology, there are a huge number of types of components, such as the diversity of proteins in a cell or the distinct types of tissues within a single creature; when studying, say, the mating behavior of blue whales, marine biologists may have to consider everything from their DNA to the temperature of the oceans. Not only is each component in a biological system distinctive, but it is also a lot harder to disentangle from the whole. For example, you can look at the nucleus of an amoeba and try to understand it on its own, but you generally need the rest of the organism to have a sense of how the nucleus fits into the operation of the amoeba, how it provides the core genetic information involved in the many functions of the entire cell.

Arbesman makes an interesting point here when it comes to how we should look at technology. As the interconnections and complexity of technology increase, it increasingly resembles a biological system rather than a physics one. There is another difference.

[B]iological systems are distinct from many physical systems in that they have a history. Living things evolve over time. While the objects of physics clearly do not emerge from thin air—astrophysicists even talk about the evolution of stars—biological systems are especially subject to evolutionary pressures; in fact, that is one of their defining features. The complicated structures of biology have the forms they do because of these complex historical paths, ones that have been affected by numerous factors over huge amounts of time. And often, because of the complex forms of living things, where any small change can create unexpected effects, the changes that have happened over time have been through tinkering: modifying a system in small ways to adapt to a new environment.

Biological systems are generally hacks that evolved to be good enough for a certain environment. They are far from neat, top-down designed systems, and to accommodate an ever-changing environment they are rarely optimal at the micro level, preferring to optimize for survival over any one particular attribute. And it’s not the survival of the individual that’s optimized; it’s the survival of the species.

Technologies can appear robust until they are confronted with some minor disturbance, causing a catastrophe. The same thing can happen to living things. For example, humans can adapt incredibly well to a large array of environments, but a tiny change in a person’s genome can cause dwarfism, and two copies of that mutation invariably cause death. We are of a different scale and material from a particle accelerator or a computer network, and yet these systems have profound similarities in their complexity and fragility.

Biological thinking, with a focus on details and diversity, is a necessary tool to deal with complexity.

The way biologists, particularly field biologists, study the massively complex diversity of organisms, taking into account their evolutionary trajectories, is therefore particularly appropriate for understanding our technologies. Field biologists often act as naturalists— collecting, recording, and cataloging what they find around them—but even more than that, when confronted with an enormously complex ecosystem, they don’t immediately try to understand it all in its totality. Instead, they recognize that they can study only a tiny part of such a system at a time, even if imperfectly. They’ll look at the interactions of a handful of species, for example, rather than examine the complete web of species within a single region. Field biologists are supremely aware of the assumptions they are making, and know they are looking at only a sliver of the complexity around them at any one moment.

[…]

When we’re dealing with different interacting levels of a system, seemingly minor details can rise to the top and become important to the system as a whole. We need “Field biologists” to catalog and study details and portions of our complex systems, including their failures and bugs. This kind of biological thinking not only leads to new insights, but might also be the primary way forward in a world of increasingly interconnected and incomprehensible technologies.

Waiting and observing isn’t enough.

Biologists will often be proactive, and inject the unexpected into a system to see how it reacts. For example, when biologists are trying to grow a specific type of bacteria, such as a variant that might produce a particular chemical, they will resort to a process known as mutagenesis. Mutagenesis is what it sounds like: actively trying to generate mutations, for example by irradiating the organisms or exposing them to toxic chemicals.

When systems are too complex for human understanding, often we need to insert randomness to discover the tolerances and limits of the system. One plus one doesn’t always equal two when you’re dealing with non-linear systems. For biologists, tinkering is the way to go.

As Stewart Brand noted about legacy systems, “Teasing a new function out of a legacy system is not done by command but by conducting a series of cautious experiments that with luck might converge toward the desired outcome.”
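As a loose software analogy, and strictly my own sketch rather than anything from Arbesman’s book, the same “inject randomness and watch what breaks” move looks something like the probe below; the toy system and its hidden fragility are invented for illustration.

```python
import random

def opaque_system(x):
    """Stand-in for a system too complex to reason about directly."""
    return 1.0 / (x - 3.0) if x != 3.0 else float("inf")  # hidden fragility near x = 3

def probe(system, baseline=0.0, trials=1000, scale=5.0, tolerance=10.0):
    """Inject random perturbations around a baseline input and record which
    ones push the system's output outside the tolerance we can live with."""
    failures = []
    for _ in range(trials):
        mutated_input = baseline + random.uniform(-scale, scale)
        if abs(system(mutated_input)) > tolerance:
            failures.append(mutated_input)
    return failures

bad_inputs = probe(opaque_system)
print(f"{len(bad_inputs)} of 1000 random probes landed in the fragile zone near x = 3")
```

Like mutagenesis, the point is not to understand the system first and test second; the random probing is itself how the tolerances get mapped.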

When Physics and Biology Meet

This doesn’t mean we should abandon the physics approach, searching for underlying regularities in complexity. The two systems complement one another rather than compete.

Arbesman recommends asking the following questions:

When attempting to understand a complex system, we must determine the proper resolution, or level of detail, at which to look at it. How fine-grained a level of detail are we focusing on? Do we focus on the individual enzyme molecules in a cell of a large organism, or do we focus on the organs and blood vessels? Do we focus on the binary signals winding their way through circuitry, or do we examine the overall shape and function of a computer program? At a larger scale, do we look at the general properties of a computer network, and ignore the individual machines and decisions that make up this structure?

When we need to abstract away a lot of the details, we lean on physics thinking more. Think about it from an organizational perspective. The new employee at the lowest level is focused on the specific details of their job, whereas the executive is focused on systems, strategy, culture, and flow — how things interact and reinforce one another. The details of the new employee’s job are lost on the executive.

We can’t use one system, whether biological or physics, exclusively. That’s a sure way to fragile thinking. Rather, we need to combine them.

In his novel Cryptonomicon, Neal Stephenson makes exactly this point while talking about the structure of the pantheon of Greek gods:

And yet there is something about the motley asymmetry of this pantheon that makes it more credible. Like the Periodic Table of the Elements or the family tree of the elementary particles, or just about any anatomical structure that you might pull up out of a cadaver, it has enough of a pattern to give our minds something to work on and yet an irregularity that indicates some kind of organic provenance—you have a sun god and a moon goddess, for example, which is all clean and symmetrical, and yet over here is Hera, who has no role whatsoever except to be a literal bitch goddess, and then there is Dionysus who isn’t even fully a god—he’s half human—but gets to be in the Pantheon anyway and sit on Olympus with the Gods, as if you went to the Supreme Court and found Bozo the Clown planted among the justices.

There is a balance and we need to find it.

Lewis Thomas on our Social Nature and “Getting the Air Right”


“What it needs is for the air to be made right. If you want a bee to make honey, you do not issue protocols on solar navigation or carbohydrate chemistry, you put him together with other bees (and you’d better do this quickly, for solitary bees do not stay alive) and you do what you can to arrange the general environment around the hive. If the air is right, the science will come in its own season, like pure honey.”
— Lewis Thomas

***

In his wonderful collection of essays, The Lives of a Cell, the biologist Lewis Thomas displays a fairly pronounced tendency to compare humans to the “social insects” — primarily bees and ants. It’s not unfair to wonder: Looked at from a properly high perch, are humans simply doing the equivalent of hive-building and colony-building?

In a manner Yuval Harari would later echo in his book Sapiens, Thomas concludes that, while we’re similar, there are some pretty essential differences. He wonders aloud in the essay titled Social Talk:

Nobody wants to think that the rapidly expanding mass of mankind, spreading out over the surface of the earth, blackening the ground, bears any meaningful resemblance to the life of an anthill or a hive. Who would consider for a moment that the more than 3 billion of us are a sort of stupendous animal when we become linked together? We are not mindless, nor is our day-to-day behavior coded out to the last detail by our genomes, nor do we seem to be engaged together, compulsively, in any single, universal, stereotyped task analogous to the construction of a nest. If we were ever to put all our brains together in fact, to make a common mind the way the ants do, it would be an unthinkable thought, way over our heads.

Social animals tend to keep at a particular thing, generally something huge for their size; they work at it ceaselessly under genetic instructions and genetic compulsion, using it to house the species and protect it, assuring permanence.

There are, to be sure, superficial resemblances in some of the things we do together, like building glass and plastic cities on all the land and farming under the sea, or assembling in armies, or landing samples of ourselves on the moon, or sending memoranda into the next galaxy. We do these together without being quite sure why, but we can stop doing one thing and move to another whenever we like. We are not committed or bound by our genes to stick to one activity forever, like the wasps.

Today’s behavior is no more fixed than when we tumbled out over Europe to build cathedrals in the twelfth century. At that time we were convinced that it would go on forever, that this was the way to live, but it was not; indeed, most of us have already forgotten what it was all about. Anything we do in this transient, secondary social way, compulsively and with all our energies but only for a brief period of our history, cannot be counted as social behavior in the biological sense. If we can turn it on and off, on whims, it isn’t likely that our genes are providing the detailed instructions. Constructing Chartres was good for our minds, but we found that our lives went on, and it is no more likely that we will find survival in Rome plows or laser bombs, or rapid mass transport or a Mars lander, or solar power, or even synthetic protein. We do tend to improvise things like this as we go along, but it is clear that we can pick and choose.

With our basic nature as a backdrop, human beings “pick and choose,” in Thomas’s words, among the possible activities we might engage in. These can range from pyramid building to art and music, from group campfire songs to extreme and brutal war. The wide range, the ability to decide to be a warring society sometimes and a peaceful society sometimes, might be seen as evidence that there are major qualitative differences between what humans do as a group and what the social insects are up to. Maybe we’re not just hive-builders after all.

What causes the difference then? Thomas thought it might well be our innate capacity for language, and the information it allows us to share:

It begins to look, more and more disturbingly, as if the gift of language is the single human trait that marks us all genetically, setting us apart from all the rest of life. Language is, like nest building or hive making, the universal and biologically specific activity of human beings. We engage in it communally, compulsively, and automatically. We cannot be human without it; if we were to be separated from it our minds would die, as surely as bees lost from the hive.

We are born knowing how to use language. The capacity to recognize syntax, to organize and deploy words into intelligible sentences, is innate in the human mind. We are programmed to identify patterns and generate grammar. There are invariant and variable structures in speech that are common to all of us. As chicks are endowed with an innate capacity to read information in the shapes of overhanging shadows, telling hawk from other birds, we can identify the meaning of grammar in a string of words, and we are born this way. According to Chomsky, who has examined it as a biologist looks at live tissue, language “must simply be a biological property of the human mind.” The universal attributes of language are genetically set; we do not learn them, or make them up as we go along.

We work at this all our lives, and collectively we give it life, but we do not exert the least control over language, not as individuals or committees or academies or governments. Language, once it comes alive, behaves like an active, motile organism. Parts of it are always being changed, by a ceaseless activity to which all of us are committed; new words are invented and inserted, old ones have their meaning altered or abandoned. New ways of stringing words and sentences together come into fashion and vanish again, but the underlying structure simply grows, enriches itself, and expands. Individual languages age away and seem to die, but they leave progeny all over the place. Separate languages can exist side by side for centuries without touching each other, maintaining their integrity with the vigor of incompatible tissues. At other times, two languages may come together, fuse, replicate, and give rise to nests of new tongues.

The thing about the development of language is its unplannedness. There’s no language committee directing the whole enterprise. Not only is language innate, as Noam Chomsky and, later, Steven Pinker have so persuasively argued, but it’s extremely flexible based on the needs of its users. All the strange things about our language that seem so poorly drawn up were never drawn up at all. (What kind of masochist would put an “s” in the word lisp?)

***

One commonality to the social insects that Thomas does see is something he calls Getting the Air Right – his description of a really productive human group as a direct reflection of a really productive bee colony. In this case, he’s talking about getting great science done, but the application to other human endeavors seems clear.

The following piece, pulled from his essay titled Natural Science, is worth reading and re-reading closely when you’re tempted to “command and control” others around you.

I don’t know of any other human occupation, even including what I have seen of art, in which the people engaged in it are so caught up, so totally preoccupied, so driven beyond their strength and resources.

Scientists at work have the look of creatures following genetic instructions; they seem to be under the influence of a deeply placed human instinct. They are, despite their efforts at dignity, rather like young animals engaged in savage play. When they are near to an answer their hair stands on end, they sweat, they are awash in their own adrenalin. To grab the answer, and grab it first, is for them a more powerful drive than feeding or breeding or protecting themselves against the elements.

It sometimes looks like a lonely activity, but it is as much the opposite of lonely as human behavior can be. There is nothing so social, so communal, and so interdependent. An active field of science is like an immense intellectual anthill; the individual almost vanishes into the mass of minds tumbling over each other, carrying information from place to place, passing it around at the speed of light.

There are special kinds of information that seem to be chemotactic. As soon as a trace is released, receptors at the back of the neck are caused to tremble, there is a massive convergence of motile minds flying upwind on a gradient of surprise, crowding around the source. It is an infiltration of intellects, an inflammation.

There is nothing to touch the spectacle. In the midst of what seems a collective derangement of minds in total disorder, with bits of information being scattered about, torn to shreds, disintegrated, deconstituted, engulfed, in a kind of activity that seems as random and agitated as that of bees in a disturbed part of the hive, there suddenly emerges, with the purity of a slow phrase of music, a single new piece of truth about nature.

In short, it works. It is the most powerful and productive of the things human beings have learned to do together in many centuries, more effective than farming, or hunting and fishing, or building cathedrals, or making money. It is instinctive behavior, in my view, and I do not understand how it works.

It cannot be prearranged in any precise way; the minds cannot be lined up in tidy rows and given directions from printed sheets. You cannot get it done by instructing each mind to make this or that piece, for central committees to fit with the pieces made by the other instructed minds. It does not work this way.

What it needs is for the air to be made right. If you want a bee to make honey, you do not issue protocols on solar navigation or carbohydrate chemistry, you put him together with other bees (and you’d better do this quickly, for solitary bees do not stay alive) and you do what you can to arrange the general environment around the hive. If the air is right, the science will come in its own season, like pure honey.

Still Interested? Check out another great biologist, E.O. Wilson, writing about his experiences in science, or yet another, Richard Dawkins, writing about why chain letters work as a method for understanding natural selection.

What Can the Three Buckets of Knowledge Teach Us About History?

Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant. — Peter Kaufman

When we seek to understand the world, we’re faced with a basic question: Where do I start? Which sources of knowledge are the most useful and the most fundamental?

Farnam Street takes its lead here from Charlie Munger, who argued that the “base” of your intellectual pyramid should be the great ideas from the big academic disciplines. Mental models. Similarly, Mr. Kaufman’s idea, presented above, is that we can learn the most fundamental knowledge from the three oldest and most invariant forms of knowledge: Physics and math, from which we derive the rules the universe plays by; biology, from which we derive the rules life on Earth plays by; and human history, from which we derive the rules humans have played by.

With that starting point, we’ve explored a lot of ideas and read a lot of books, looking for connections amongst the big, broad areas of useful knowledge. Our search led us to a wonderful book called The Lessons of History, which we’ve posted about before. The book is a hundred-page distillation of the lessons learned in 50 years of work by two brilliant historians, Will and Ariel Durant. The Durants spent those years writing a sweeping 11-book, 10,000-page synthesis of the major figures and periods in human history, with an admitted focus on Western civilization. (Although they admirably tackle Eastern civilization up to 1930 or so in the epic Our Oriental Heritage.) With The Lessons of History, the pair sought to derive a few major lessons learned from the long pull.

Let’s explore a few ways in which the Durants’ brilliant work interplays with the three buckets of human knowledge that help us understand the world at a deep level.

***

Lessons of Geologic Time

Durant has a classic introduction for this kind of “big synthesis” historical work:

Since man is a moment in astronomic time, a transient guest of the earth, a spore of his species, a scion of his race, a composite of body, character, and mind, a member of a family and a community, a believer or doubter of a faith, a unit in an economy, perhaps a citizen in a state or a soldier in an army, we may ask the corresponding heads — astronomy, geology, geography, biology, ethnology, psychology, morality, religion, economics, politics, and war — what history has to say about the nature, conduct, and prospects of man. It is a precarious enterprise, and only a fool would try to compress a hundred centuries into a hundred pages of hazardous conclusions. We proceed.

The first topic Durant approaches is our relationship to the physical Earth, a body of knowledge we can place in the first bucket, in Kaufman’s terms. We must recognize that the varieties of geology and physical climate we live in have to a large extent determined the course of human history. (Jared Diamond would agree, that being a major component of his theory of human history.)

History is subject to geology. Every day the sea encroaches somewhere upon the land, or the land upon the sea; cities disappear under the water, and sunken cathedrals ring their melancholy bells. Mountains rise and fall in the rhythm of emergence and erosion; rivers swell and flood, or dry up, or change their course; valleys become deserts, and isthmuses become straits. To the geologic eye all of the surface of the earth is a fluid form, and man moves upon it as insecurely as Peter walking on the waves to Christ.

There are some big, useful lessons we can draw from studying geologic time. The most obvious might be the concept of gradualism, or slow incremental change over time. This was perhaps best understood by Darwin, who applied that form of reasoning to understand the evolution of species. His hero was Charles Lyell, whose Principles of Geology established our understanding of slow, grind-ahead change on the long scale of geologic time.

And of course, that model is quite practically useful to us today — it is through slow, incremental, grinding change, punctuated at times by large-scale change when necessary and appropriate, that things move ahead most reliably. We might be reminded, in the modern corporate world, of General Electric, which ground ahead step-by-step from an electric lamp company into an industrial giant over a long period, destroying many thousands of lesser companies with less adaptive cultures along the way.

We can also use this model to derive the idea of human nature as nearly fixed; it changes in geologic time, not human time. This explains why the fundamental problems of history tend to recur. We’re basically the same as we’ve always been:

History repeats itself in the large because human nature changes with geological leisureliness, and man is equipped to respond in stereotyped ways to frequently occurring situations and stimuli like hunger, danger, and sex. But in a developed and complex civilization individuals are more differentiated and unique than in a primitive society, and many situations contain novel circumstances requiring modifications of instinctive response; custom recedes, reasoning spreads; the results are less predictable. There is no certainty that the future will repeat the past. Every year is an adventure.

Lastly, Mother Nature’s long history also teaches us something of resilience, which is connected to the idea of grind-ahead change. Studying evolution helps us understand that what is fragile will eventually break under the stresses of competition: Most importantly, fragile relationships break, but strong win-win relationships have super glue that keeps parties together. We also learn that weak competitive positions are eventually rooted out by competition and new environments, and that a lack of adaptiveness to changing reality is a losing strategy when the surrounding environment shifts enough. These and other lessons are fundamental knowledge, and they work the same way in human organizations as in Nature.

The Biology of History

Durant moves from geology into the realm of human biology: Our nature determines the “arena” in which the human condition can play out. Human biology gives us the rules of the chessboard, and the Earth and its inhabitants provide the environment in which we play the game. The variety of outcomes approaches infinity from this starting point. That’s why this “bucket” of human knowledge is such a crucial one to study. We need to know the rules.

Thinking with the first “bucket” of knowledge — the mathematics and physics that drive all things in the universe — it’s easy to derive that compounding multiplication can take a small population and make it a very large one over a comparatively short time. 2 becomes 4 becomes 8 becomes 16, and so on. But because we also know that the spoils of the physical world are finite, the “Big Model” of Darwinian natural selection flows naturally from the compounding math: As populations grow but their surroundings offer limitations, there must be a way to derive who gets the spoils.

Not only does this provide the basis for biological competition over resources, a major lesson in the second bucket, it also provides the basis for the political and economic systems in bucket three of human history: Our various systems of political and economic organization are fundamentally driven by decisions on how to give order and fairness to the brutal reality created by human competition.
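A trivial sketch of that compounding arithmetic, my own illustration rather than anything from Durant or Kaufman: doubling runs away almost immediately, and it is the finite ceiling on resources that turns growth into a contest over who gets the spoils.

```python
def doubling(start=2, generations=10):
    """Unchecked compounding: 2, 4, 8, 16, ..."""
    population = [start]
    for _ in range(generations):
        population.append(population[-1] * 2)
    return population

def doubling_with_limit(start=2, generations=10, carrying_capacity=500):
    """The same growth once the environment caps how many can be supported."""
    population = [start]
    for _ in range(generations):
        population.append(min(population[-1] * 2, carrying_capacity))
    return population

print("Unchecked:       ", doubling())
print("Resource-limited:", doubling_with_limit())
```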

In this vein, we have previously discussed Durant’s three lessons of biological history: Life is Competition. Life is Selection. Life must Breed. (Head over to that post for the full scope of that idea from Durant’s book.) These simple precepts lead to the interesting results in biology, and most relevant to us, to similar interesting results in human culture itself:

Like other departments of biology, history remains at bottom a natural selection of the fittest individuals and groups in a struggle wherein goodness receives no favors, misfortunes abound, and the final test is the ability to survive.

***

We do, however, need to be careful to think with the right “bucket” at the right time. Durant offers us a cautionary tale here: The example of the growth and decay of societies shows an area where the third bucket, human culture, offers a different reality than what a simple analogy from physics or biology might show. Cultural decay is not inevitable, as it might be with an element or a physical organism:

If these are the sources of growth, what are the causes of decay? Shall we suppose, with Spengler and many others, that each civilization is an organism, naturally and yet mysteriously endowed with the power of development and the fatality of death? It is tempting to explain the behavior of groups through analogy with physiology or physics, and to ascribe the deterioration of a society to some inherent limit in its loan and tenure of life, or some irreparable running down of internal force. Such analogies may offer provisional illumination, as when we compare the association of individuals with an aggregation of cells, or the circulation of money from banker back to banker with the systole and diastole of the heart.

But a group is no organism physically added to its constituent individuals; it has no brain or stomach of its own; it must think or feel with the brains and nerves of its members. When the group or a civilization declines, it is through no mystic limitation of a corporate life, but through the failure of its political or intellectual leaders to meet the challenges of change.

[…]

But do civilizations die? Again, not quite. Greek civilization is not really dead; only its frame is gone and its habitat has changed and spread; it survives in the memory of the race, and in such abundance that no one life, however full and long, could absorb it all. Homer has more readers now than in his own day and land. The Greek poets and philosophers are in every library and college; at this moment Plato is being studied by a hundred thousand discoverers of the dear delight of philosophy overspreading life with understanding thought. This selective survival of creative minds is the most real and beneficent of immortalities.

In this sense, the ideas that thrive in human history are not bound by the precepts of physics. Knowledge — the kind which can be passed from generation to generation in an accumulative way — is a unique outcome in the human culture bucket. Other biological creatures only pass down DNA, not accumulated learning. (Yuval Harari similarly declared that “The Cognitive Revolution is accordingly the point when history declared its independence from biology.”)

***

With that caveat in mind, the concept of passed-down ideas does have some predictable overlap with major mental models of the first two buckets of physics/math and biology.

The first is compounding: Ideas and knowledge compound in the same mathematical way that money or population does. If I have an idea and tell my idea to you, we both have the idea. If we each take that idea and recombine it with another idea we already had, we now have three ideas from a starting point of only one. If we can each connect that one idea to two ideas we had, we now have five ideas between us. And so on — you can see how compounding would take place as we told our friends about the five ideas and they told theirs. So the Big Model of compound interest works on ideas too.

The second interplay is to see that human ideas go through natural selection in the same way biological life does.

Intellect is therefore a vital force in history, but it can also be a dissolvent and destructive power. Out of every hundred new ideas ninety-nine or more will probably be inferior to the traditional responses which they propose to replace. No one man, however brilliant or well-informed, can come in one lifetime to such fullness of understanding as to safely judge and dismiss the customs or institutions of society, for these are the wisdom of generations after centuries of experiment in the laboratory of history.

This doesn’t tell us that the best ideas survive any more than natural selection tells us that the best creatures survive. It just means, at the risk of being circular, that the ideas most fit for propagation are the ones that survive for a long time. Most truly bad ideas tend to get tossed out in the vicissitudes of time either through the early death of their proponents or basic social pressure. But any idea that strikes a fundamental chord in humanity can last a very long time, even if it’s wrong or harmful. It simply has to be memorable and have at least a kernel of intuitive truth.

For more, start thinking about the three buckets of knowledge, read Durant, and start getting to work on synthesizing as much as possible.

What Can Chain Letters Teach us about Natural Selection?

“It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.”

***

In 1859, Charles Darwin first described his theory of evolution through natural selection in The Origin of Species. Here we are, 157 years later, and although it has become an established fact in the field of biology, its beauty is still not that well understood among the general public. I think that’s because it’s slightly counter-intuitive. Yet unlike string theory or quantum mechanics, the theory of evolution through natural selection is within easy reach of most of us.

So, is there a way we can help ourselves understand the theory in an intuitive way, so we can better go on applying it to other domains? I think so, and it comes from an interesting little volume released in 1995 by the biologist Richard Dawkins called River Out of Eden. But first, let’s briefly head back to the Origin of Species, so we’re clear on what we’re trying to understand.

***

In the fourth chapter of the book, entitled “Natural Selection,” Darwin describes a somewhat cold and mechanistic process for the development of species: If species had heritable traits and variation within their population, they would survive in different numbers, and those most adapted to survival would thrive and pass on those traits to successive generations. Eventually, new species would arise, slowly, as enough variation and differential reproduction acted on the population to create a de facto branch in the family tree.

Here’s the original description.

Let it be borne in mind how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life. Can it, then, be thought improbable, seeing that variations useful to man have undoubtedly occurred, that other variations useful in some way to each being in the great and complex battle of life, should sometimes occur in the course of thousands of generations? If such do occur, can we doubt (remembering that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed. This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection.

[…]

In such case, every slight modification, which in the course of ages chanced to arise, and which in any way favored the individuals of any species, by better adapting them to their altered conditions, would tend to be preserved; and natural selection would thus have free scope for the work of improvement.

[…]

It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life.

The beauty of the theory is in its simplicity. The mechanism of evolution is, at root, a simple one. An unguided one. Better descendants outperform lesser ones in a competitive world and are more successful at replicating. Traits that improve the survival of their holder in its current environment tend to be preserved and amplified over time. This is hard to see in real time, although some examples are helpful in understanding the concept, e.g. antibiotic resistance.

Darwin’s idea didn’t take as quickly as we might like to think. In The Reluctant Mr. Darwin, David Quammen talks about the period after the release of the groundbreaking work, in which the world had trouble coming to grips with Darwin’s theory. It was not the case, as it might seem today, that the world simply threw up its hands and accepted Darwin as a genius. This is a lesson in and of itself. It was quite the contrary:

By the 1890s, natural selection as Darwin had defined it–that is, differential reproductive success resulting from small, undirected variations and serving as the chief mechanism of adaption and divergence–was considered by many evolutionary biologists to have been a wrong guess.

It wasn’t until Gregor Mendel’s peas showed how heritability worked that Darwin’s ideas were truly vindicated against his rivals’. So if we have trouble coming to terms with evolution by natural selection in the modern age, we’re not alone: So did Darwin’s peers.

***

What’s this all got to do with chain letters? Well, in River Out of Eden, Dawkins provides an analogy for the process of evolution through natural selection that is quite intuitive and helpful in understanding the simple power of the idea. How would a certain type of chain letter come to dominate the population of all chain letters? It would work the same way.

A simple example is the so-called chain letter. You receive in the mail a postcard on which is written: “Make six copies of this card and send them to six friends within a week. If you do not do this, a spell will be cast upon you and you will die in horrible agony within a month.” If you are sensible you will throw it away. But a good percentage of people are not sensible; they are vaguely intrigued, or intimidated by the threat, and send six copies of it to other people. Of these six, perhaps two will be persuaded to send it on to six other people. If, on average, 1/3 of the people who receive the card obey the instructions written on it, the number of cards in circulation will double every week. In theory, this means that the number of cards in circulation after one year will be 2 to the power of 52, or about four thousand trillion. Enough post cards to smother every man, woman, and child in the world.

Exponential growth, if not checked by the lack of resources, always leads to startlingly large-scale results in a surprisingly short time. In practice, resources are limited and other factors, too, serve to limit exponential growth. In our hypothetical example, individuals will probably start to balk when the same chain letter comes around to them for the second time. In the competition for resources, variants of the same replicator may arise that happen to be more efficient at getting themselves duplicated. These more efficient replicators will tend to displace their less efficient rivals. It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.

In the case of the chain letter, being efficient may consist in accumulating a better collection of words on the paper. Instead of the somewhat implausible statement that “if you don’t obey the words on the card you will die in horrible agony within a month,” the message might change to “Please, I beg of you, to save your soul and mine, don’t take the risk: if you have the slightest doubt, obey the instructions and send the letter to six more people.”

Such “mutations” happen again and again, and the result will eventually be a heterogenous population of messages all in circulation, all descended from the same original ancestor but differing in detailed wording and in the strength and nature of the blandishments they employ. The variants that are more successful will increase in frequency at the expense of less successful rivals. Success is simply synonymous with frequency in circulation. 

The chain letter contains all of the elements of biological natural selection except one: Someone had to write the first chain letter. The first replicating biological entity, on the other hand, seems to have sprung up from an early chemical brew.
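To watch that selection dynamic play out end to end, here is a toy simulation of Dawkins’ scenario; it is my own construction, and the persuasion rates, starting counts, and mailbox limit are invented numbers rather than anything from the book.

```python
import random

COPIES_SENT = 6            # each persuaded recipient forwards six copies
MAILBOX_LIMIT = 100_000    # finite resources: total letters the world will tolerate

# Variant wording -> chance a recipient is persuaded to forward it (made-up rates)
variants = {
    "threat of a curse": 0.20,
    "plea to save your soul": 0.33,
    "bland request": 0.05,
}

population = {name: 100 for name in variants}  # start with 100 copies of each

for week in range(10):
    new_population = {}
    for name, copies in population.items():
        persuaded = sum(1 for _ in range(copies) if random.random() < variants[name])
        new_population[name] = persuaded * COPIES_SENT
    total = sum(new_population.values())
    if total > MAILBOX_LIMIT:  # once mailboxes saturate, every lineage gets scaled down
        new_population = {k: int(v * MAILBOX_LIMIT / total)
                          for k, v in new_population.items()}
    population = new_population

print(population)  # the most persuasive wording dominates what remains in circulation
```

Nothing in the loop “wants” to spread; the more persuasive wording simply ends up with more copies once the mailboxes fill, which is Dawkins’ point about replicators.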

Consider this analogy an intermediate mental “step” towards the final goal. Because we know and appreciate the power of reasoning by analogy and metaphor, we can deduce that finding an appropriate analogy is one of the best ways to pound an idea into your head–assuming it is a correct idea that should be pounded in.

And because evolution through natural selection is one of the more powerful ideas a human being has ever had, it seems worth our time to pound this one in for good and start applying it elsewhere if possible. (For example, Munger has talked about how business evolves in a manner such that competitive results are frequently similar to biological outcomes.)

Read Dawkins’ book in full for a deeper look at his views on replication and natural selection. It’s shorter than some of his other works, but worth the time.