Category: Science

The Founder Principle: A Wonderful Idea from Biology

We’ve all been taught natural selection: the mechanism by which species evolve through differential reproductive success. Most of us are familiar with the idea that random mutations in DNA cause variation among offspring, some of which survive and reproduce more successfully than others. However, this is only part of the story.

Sometimes other mechanisms cause massive changes in species populations, and they’re often more nuanced and harder to spot.

One such concept comes from one of the most influential biologists in history, Ernst Mayr. He called it the Founder Principle: a mechanism by which new species are created by a splintered-off population, often with lower genetic diversity and an increased risk of extinction.

In the brilliant The Song of the Dodo: Island Biogeography in an Age of Extinction, David Quammen gives us not only the stories of many brilliant naturalists, including Mayr, but also a deep dive into the core concepts of evolution and extinction, including the founder principle.

Quammen begins by outlining the basic idea:

When a new population is founded in an isolated place, the founders usually constitute a numerically tiny group – a handful of lonely pioneers, or just a pair, or maybe no more than one pregnant female. Descending from such a small number of founders, the new population will carry only a minuscule and to some extent random sample of the gene pool of the base population. The sample will most likely be unrepresentative, encompassing less genetic diversity than the larger pool. This effect shows itself whenever a small sample is taken from a large aggregation of diversity; whether the aggregation consists of genes, colored gum balls, M&M’s, the cards of a deck, or any other collection of varied items, a small sample will usually contain less diversity than the whole.

Why does the founder principle happen? It’s basically applied probability. Perhaps an example will help illuminate the concept.

Think of yourself playing a game of poker (five-card draw) with a friend. The deck is separated into four suits (diamonds, hearts, clubs, and spades), each suit having 13 cards, for a total of 52 cards.

Now look at your hand of five cards. Do you have one card from each suit? Maybe. Are all five cards from the same suit? Probably not, but it is possible. Will you get the ace of spades? Maybe, but not likely.

This is a good metaphor for how the founder principle works. The gene pool carried by a small group of founders is unlikely to be precisely representative of the gene pool of the larger group. In some rare cases it will be very unrepresentative, like you getting dealt a straight flush.
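The card metaphor is easy to test for yourself. Here is a minimal Python sketch (the hand sizes and trial count are arbitrary choices of ours, not anything from the book): it deals random hands from a 52-card deck and counts how many of the four suits each hand carries. The smaller the sample, the less of the deck’s diversity it captures, which is the founder principle in miniature.

```python
import random

# A standard deck: 4 suits x 13 ranks = 52 cards.
SUITS = ["diamonds", "hearts", "clubs", "spades"]
DECK = [(suit, rank) for suit in SUITS for rank in range(1, 14)]

def average_suit_diversity(hand_size, trials=100_000):
    """Deal many random hands and return the average number of
    distinct suits (out of 4) that a hand of this size carries."""
    total = 0
    for _ in range(trials):
        hand = random.sample(DECK, hand_size)
        total += len({suit for suit, _ in hand})
    return total / trials

for size in (2, 5, 13):
    print(f"hand of {size:2d} cards: ~{average_suit_diversity(size):.2f} of 4 suits on average")
```

A two-card hand typically carries fewer than two suits, a five-card hand about three; only large samples reliably represent the whole deck, just as only large founder groups reliably represent the whole gene pool.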

It gets interesting when the founder population begins to reproduce and genetic drift causes the new population to diverge significantly from its ancestors. Quammen explains:

Already isolated geographically from its base population, the pioneer population now starts drifting away genetically. Over the course of generations, its gene pool becomes more and more different from the gene pool of the base population – different both as to the array of alleles (that is, the variant forms of a given gene) and as to the commonness of each allele.

The founder population, in some cases, will become so different that it can no longer mate with the original population. This new species may even be a competitor for resources if the two populations are ever reintroduced. (Say, if a land bridge is created between two islands, or humans bring two species back in contact.)

Going back to our card metaphor, let’s pretend that you and your friend are playing with four decks of cards — 208 total cards. Say we randomly pull out forty cards from those decks. If there are absolutely no kings among the forty cards you are playing with, you will never be able to create a royal flush (ace, king, queen, jack, and ten of the same suit). It doesn’t matter how the cards are dealt; you can never make a royal flush with no kings.

Thus it is with species: If a splintered-off population isn’t carrying a specific gene variant (allele), that variant can never be represented in the newly created population, no matter how common it may have been in the original population. It’s gone. And as the rarest variants disappear, the new population becomes increasingly unlike the old one, especially if the new population is small.

Some alleles are common within a population, some are rare. If the population is large, with thousands or millions of parents producing thousands or millions of offspring, the rare alleles as well as the common ones will usually be passed along. Chance operation at high numbers tends to produce stable results, and the proportions of rarity and commonness will hold steady. If the population is small, though, the rare alleles will most likely disappear […] As it loses its rare alleles by the wayside, a small pioneer population will become increasingly unlike the base population from which it derived.
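Quammen’s point about rare alleles can also be simulated. Below is a minimal sketch of neutral genetic drift under simplified, Wright-Fisher-style assumptions (one gene with one rare variant at 5% frequency, fixed population size, non-overlapping generations; all the numbers are illustrative, not from the book): each generation, every offspring draws its allele at random from the current gene pool.

```python
import random

def rare_allele_survives(pop_size, start_freq=0.05, generations=100):
    """One neutral allele drifting in a fixed-size population: every
    offspring draws its allele at random from the current gene pool.
    Returns True if the rare allele is still present at the end."""
    count = round(pop_size * start_freq)
    for _ in range(generations):
        freq = count / pop_size
        count = sum(random.random() < freq for _ in range(pop_size))
        if count == 0:
            return False  # the allele is gone for good
    return True

for n in (20, 200, 2000):
    survived = sum(rare_allele_survives(n) for _ in range(200))
    print(f"population {n:4d}: rare allele survives 100 generations in {survived}/200 runs")
```

Run it and the pattern Quammen describes appears: chance operating at high numbers produces stable results, while in small populations the rare allele is usually lost within a few generations.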

Some of this genetic loss may be positive (a gene that causes a rare disease may be missing), some may be negative (a gene for a useful attribute may be missing) and some may be neutral.

The neutral ones are the most interesting: A neutral gene at one point in time may become a useful gene at another point. It’s like playing a round of poker where 8’s are suddenly declared “wild,” and that card suddenly becomes much more important than it was the hand before. The same goes for animal traits.

Take a mammal population living on an island that has lost all ability to swim. That won’t mean much while all is well and swimming is never required. But the moment there is a natural disaster such as a fire, the ability to swim the short distance to the mainland could be the difference between survival and extinction.

That’s why the founder principle is so dangerous: The loss of genetic diversity often means losing valuable survival traits. Quammen explains:

Genetic drift compounds the founder-effect problem, stripping a small population of the genetic variation that it needs to continue evolving. Without that variation, the population stiffens toward uniformity. It becomes less capable of adaptive response. There may be no manifest disadvantages in uniformity so long as environmental circumstances remain stable; but when circumstances are disrupted, the population won’t be capable of evolutionary adjustment. If the disruption is drastic, the population may go extinct.

This loss of adaptability is one of the two major issues caused by the founder principle; the second is inbreeding depression. A founder population may have no choice but to breed within itself, and a symptom of too much inbreeding is the manifestation of harmful genetic variants among inbred individuals. (This is one reason humans consider incest dangerous.) Inbreeding, too, increases the fragility of a species and decreases its ability to evolve.

The founder principle is just one of many amazing ideas in The Song of the Dodo. In fact, we at Farnam Street feel the book is so important that it made our list of books we recommend to improve your general knowledge of the world and it was the first book we picked for our learning community reading group.

If you have already read this book and want more, we suggest Quammen’s The Reluctant Mr. Darwin or his equally thought-provoking Spillover: Animal Infections and the Next Human Pandemic. Another wonderful and readable book on species evolution is The Beak of the Finch, by Jonathan Weiner.

The Island of Knowledge: Science and the Meaning of Life

“As the Island of Knowledge grows, so do the shores of our ignorance—the boundary between the known and unknown. Learning more about the world doesn’t lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.”

***

Common across human history is our longing to better understand the world we live in, and how it works. But how much can we actually know about the world?

In his book The Island of Knowledge: The Limits of Science and the Search for Meaning, physicist Marcelo Gleiser traces the progress of modern science in its pursuit of the most fundamental questions: existence, the origin of the universe, and the limits of knowledge.

What we know of the world is limited by what we can see and what we can describe, but our tools have evolved over the years to reveal ever more pleats in the fabric of knowledge. Gleiser celebrates this persistent struggle to understand our place in the world and traces our history from ancient knowledge to our current understanding.

While science is not the only way to see and describe the world we live in, it is a response to the questions of who we are, where we are, and how we got here. “Science speaks directly to our humanity, to our quest for light, ever more light.”

To move forward, science needs to fail, which runs counter to our human desire for certainty. “We are surrounded by horizons, by incompleteness.” Rather than give up, we struggle along a scale of progress. What makes us human is this journey to understand more about the mysteries of the world and explain them with reason. This is the core of our nature.

While the pursuit is never ending, the curious journey offers insight not just into the natural world, but insight into ourselves.

“What I see in Nature is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
— Albert Einstein

We tend to think that what we see is all there is — that there is nothing we cannot see. We know it isn’t true when we stop and think, yet we still get lulled into a trap of omniscience.

Science is thus limited, offering only part of the story — the part we can see and measure. The other part remains beyond our immediate reach.

“What we see of the world,” Gleiser begins, “is only a sliver of what’s out there.”

There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story. … We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery. This view is neither antiscientific nor defeatist. … Quite the contrary, it is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

While we may broadly understand the map of what we call reality, we fail to understand its terrain. Reality, Gleiser argues, “is an ever-shifting mosaic of ideas.”

However…

The incompleteness of knowledge and the limits of our scientific worldview only add to the richness of our search for meaning, as they align science with our human fallibility and aspirations.

What we call reality is a (necessarily) limited synthesis. It is certainly our reality, as it must be, but it is not the entire reality itself:

My perception of the world around me, as cognitive neuroscience teaches us, is synthesized within different regions of my brain. What I call reality results from the integrated sum of countless stimuli collected through my five senses, brought from the outside into my head via my nervous system. Cognition, the awareness of being here now, is a fabrication of a vast set of chemicals flowing through myriad synaptic connections between my neurons. … We have little understanding as to how exactly this neuronal choreography engenders us with a sense of being. We go on with our everyday activities convinced that we can separate ourselves from our surroundings and construct an objective view of reality.

The brain is a great filtering tool, deaf and blind to vast amounts of information around us that offer no evolutionary advantage. Part of it we can see and simply ignore. Other parts, like dust particles and bacteria, go unseen because of limitations of our sensory tools.

As the Fox said to the Little Prince in Antoine de Saint-Exupéry’s fable, “What is essential is invisible to the eye.” There is no better example than oxygen.

Science has extended our view. Our measurement tools and instruments can detect bacteria and radiation, subatomic particles, and more. However precise these tools have become, their view is still limited.

There is no such thing as an exact measurement. Every measurement must be stated within its precision and quoted together with “error bars” estimating the magnitude of errors. High-precision measurements are simply measurements with small error bars or high confidence levels; there are no perfect, zero-error measurements.

[…]

Technology limits how deeply experiments can probe into physical reality. That is to say, machines determine what we can measure and thus what scientists can learn about the Universe and ourselves. Being human inventions, machines depend on our creativity and available resources. When successful, they measure with ever-higher accuracy and on occasion may also reveal the unexpected.
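Gleiser’s point that “there is no such thing as an exact measurement” can be made concrete in a few lines of code. This is our own minimal sketch, not anything from the book: the readings are hypothetical (loosely resembling repeated measurements of g in m/s²), and the error bar shown is the standard error of the mean.

```python
import statistics

# Hypothetical repeated readings of the same quantity (say, g in m/s^2).
readings = [9.81, 9.79, 9.84, 9.80, 9.77, 9.83, 9.82, 9.78]

mean = statistics.mean(readings)
# Standard error of the mean: sample standard deviation / sqrt(n).
sem = statistics.stdev(readings) / len(readings) ** 0.5

# No zero-error measurements: the result is a value plus an error bar.
print(f"measured value: {mean:.3f} +/- {sem:.3f} m/s^2")
```

More readings or better instruments shrink the error bar, but they never eliminate it; high precision just means small error bars, never none.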

“All models are wrong, some are useful.”
— George Box

What we know about the world is only what we can detect and measure — even if we improve our “detecting and measuring” as time goes along. And thus we base our conclusions about reality on what we can currently “see.”

We see much more than Galileo, but we can’t see it all. And this restriction is not limited to measurements: speculative theories and models that extrapolate into unknown realms of physical reality must also rely on current knowledge. When there is no data to guide intuition, scientists impose a “compatibility” criterion: any new theory attempting to extrapolate beyond tested ground should, in the proper limit, reproduce current knowledge.

[…]

If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.

[…]

We thus must ask whether grasping reality’s most fundamental nature is just a matter of pushing the limits of science or whether we are being quite naive about what science can and can’t do.

Here is another way of thinking about this: if someone perceives the world through her senses only (as most people do), and another amplifies her perception through the use of instrumentation, who can legitimately claim to have a truer sense of reality? One “sees” microscopic bacteria, faraway galaxies, and subatomic particles, while the other is completely blind to such entities. Clearly they “see” different things and—if they take what they see literally—will conclude that the world, or at least the nature of physical reality, is very different.

Asking who is right misses the point, although surely the person using tools can see further into the nature of things. Indeed, to see more clearly what makes up the world and, in the process to make more sense of it and ourselves is the main motivation to push the boundaries of knowledge. … What we call “real” is contingent on how deeply we are able to probe reality. Even if there is such thing as the true or ultimate nature of reality, all we have is what we can know of it.

[…]

Our perception of what is real evolves with the instruments we use to probe Nature. Gradually, some of what was unknown becomes known. For this reason, what we call “reality” is always changing. … The version of reality we might call “true” at one time will not remain true at another. … Given that our instruments will always evolve, tomorrow’s reality will necessarily include entities not known to exist today. … More to the point, as long as technology advances—and there is no reason to suppose that it will ever stop advancing for as long as we are around—we cannot foresee an end to this quest. The ultimate truth is elusive, a phantom.

Gleiser makes his point with a beautiful metaphor: the Island of Knowledge.

Consider, then, the sum total of our accumulated knowledge as constituting an island, which I call the “Island of Knowledge.” … A vast ocean surrounds the Island of Knowledge, the unexplored ocean of the unknown, hiding countless tantalizing mysteries.

The Island of Knowledge grows as we learn more about the world and ourselves. And as the island grows, so too “do the shores of our ignorance—the boundary between the known and unknown.”

Learning more about the world doesn’t lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.
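The geometry of the metaphor rewards a line of arithmetic. Treating the island as a circle is our own assumption, not Gleiser’s, but under it the image is exact: if accumulated knowledge is the island’s area A, the shore of our ignorance is its circumference, 2·√(πA), which necessarily grows whenever the area does.

```python
import math

# If accumulated knowledge is the area A of a circular island,
# the 'shore of ignorance' is its circumference: 2 * sqrt(pi * A).
def shoreline(area):
    return 2 * math.sqrt(math.pi * area)

for area in (1, 100, 10_000):
    print(f"island area {area:6d} -> shoreline {shoreline(area):8.1f}")
```

Every hundredfold gain in area brings a tenfold longer shoreline: the more we know, the longer the boundary along which we meet what we don’t.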

As we move forward we must remember that despite our quest, the shores of our ignorance grow as the Island of Knowledge grows. And while we will struggle with the fact that not all questions will have answers, we will continue to progress. “It is also good to remember,” Gleiser writes, “that science only covers part of the Island.”

Richard Feynman pointed out that science can only answer the subset of questions that go, roughly, “If I do this, what will happen?” Answers to questions like Why do the rules operate that way? and Should I do it? are not really scientific questions — they are moral, human questions, if they are knowable at all.

There are many ways of understanding and knowing that should, ideally, feed each other. “We are,” Gleiser concludes, “multidimensional creatures and search for answers in many, complementary ways. Each serves a purpose and we need them all.”

“The quest must go on. The quest is what makes us matter: to search for more answers, knowing that the significant ones will often generate surprising new questions.”

The Island of Knowledge is a wide-ranging tour through scientific history from planetary motions to modern scientific theories and how they affect our ideas on what is knowable.

Merchants Of Doubt: How The Tobacco Strategy Obscures the Realities of Global Warming

There will always be those who try to challenge a growing scientific consensus — indeed, such challenges are fundamental to science. Motives, however, matter, and not everyone has good intentions.

***

Naomi Oreskes and Erik Conway’s masterful work, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, was recommended by Elon Musk.

The book illuminates how the tobacco industry created doubt and kept the controversy alive well past scientific consensus. The authors call this the Tobacco Strategy. And the same playbook is being run all over again, this time with global warming.


The goal of the Tobacco Strategy is to create doubt about a causal link (smoking and cancer then, fossil fuels and warming now) in order to protect the interests of incumbents.

Millions of pages of documents released during tobacco litigation demonstrate these links. They show the crucial role that scientists played in sowing doubt about the links between smoking and health risks. These documents— which have scarcely been studied except by lawyers and a handful of academics— also show that the same strategy was applied not only to global warming, but to a laundry list of environmental and health concerns, including asbestos, secondhand smoke, acid rain, and the ozone hole.

Interestingly, not only are the tactics the same when it comes to Global Warming, but so are the people.

They used their scientific credentials to present themselves as authorities, and they used their authority to try to discredit any science they didn’t like.

Over the course of more than twenty years, these men did almost no original scientific research on any of the issues on which they weighed in. Once they had been prominent researchers, but by the time they turned to the topics of our story, they were mostly attacking the work and the reputations of others. In fact, on every issue, they were on the wrong side of the scientific consensus. Smoking does kill— both directly and indirectly. Pollution does cause acid rain. Volcanoes are not the cause of the ozone hole. Our seas are rising and our glaciers are melting because of the mounting effects of greenhouse gases in the atmosphere, produced by burning fossil fuels. Yet, for years the press quoted these men as experts, and politicians listened to them, using their claims as justification for inaction.

December 15, 1953, was a fateful day. A few months earlier, researchers at the Sloan-Kettering Institute in New York City had demonstrated that cigarette tar painted on the skin of mice caused fatal cancers. This work had attracted an enormous amount of press attention: the New York Times and Life magazine had both covered it, and Reader’s Digest— the most widely read publication in the world— ran a piece entitled “Cancer by the Carton.” Perhaps the journalists and editors were impressed by the scientific paper’s dramatic concluding sentences: “Such studies, in view of the corollary clinical data relating smoking to various types of cancer, appear urgent. They may not only result in furthering our knowledge of carcinogens, but in promoting some practical aspects of cancer prevention.”

These findings, however, shouldn’t have been a surprise. We’re often blinded by a ‘bad people can do no right’ line of thought.

German scientists had shown in the 1930s that cigarette smoking caused lung cancer, and the Nazi government had run major antismoking campaigns; Adolf Hitler forbade smoking in his presence. However, the German scientific work was tainted by its Nazi associations, and to some extent ignored, if not actually suppressed, after the war; it had taken some time to be rediscovered and independently confirmed. Now, however, American researchers— not Nazis— were calling the matter “urgent,” and the news media were reporting it.  “Cancer by the carton” was not a slogan the tobacco industry would embrace.

With the mounting evidence, the tobacco industry was thrown into a panic.

So industry executives made a fateful decision, one that would later become the basis on which a federal judge would find the industry guilty of conspiracy to commit fraud— a massive and ongoing fraud to deceive the American public about the health effects of smoking. The decision was to hire a public relations firm to challenge the scientific evidence that smoking could kill you.

On that December morning (December 15th), the presidents of four of America’s largest tobacco companies— American Tobacco, Benson and Hedges, Philip Morris, and U.S. Tobacco— met at the venerable Plaza Hotel in New York City. The French Renaissance chateau-style building— in which unaccompanied ladies were not permitted in its famous Oak Room bar— was a fitting place for the task at hand: the protection of one of America’s oldest and most powerful industries. The man they had come to meet was equally powerful: John Hill, founder and CEO of one of America’s largest and most effective public relations firms, Hill and Knowlton.

The four company presidents— as well as the CEOs of R. J. Reynolds and Brown and Williamson— had agreed to cooperate on a public relations program to defend their product. They would work together to convince the public that there was “no sound scientific basis for the charges,” and that the recent reports were simply “sensational accusations” made by publicity-seeking scientists hoping to attract more funds for their research. They would not sit idly by while their product was vilified; instead, they would create a Tobacco Industry Committee for Public Information to supply a “positive” and “entirely ‘pro-cigarette’” message to counter the anti-cigarette scientific one. As the U.S. Department of Justice would later put it, they decided “to deceive the American public about the health effects of smoking.”

At first, the companies didn’t think they needed to fund new scientific research, thinking it would be sufficient to “disseminate information on hand.” John Hill disagreed, “emphatically warn[ing] … that they should … sponsor additional research,” and that this would be a long-term project. He also suggested including the word “research” in the title of their new committee, because a pro-cigarette message would need science to back it up. At the end of the day, Hill concluded, “scientific doubts must remain.” It would be his job to ensure it.

Over the next half century, the industry did what Hill and Knowlton advised. They created the “Tobacco Industry Research Committee” to challenge the mounting scientific evidence of the harms of tobacco. They funded alternative research to cast doubt on the tobacco-cancer link. They conducted polls to gauge public opinion and used the results to guide campaigns to sway it. They distributed pamphlets and booklets to doctors, the media, policy makers, and the general public insisting there was no cause for alarm.

The industry’s position was that there was “no proof” that tobacco was bad, and they fostered that position by manufacturing a “debate,” convincing the mass media that responsible journalists had an obligation to present “both sides” of it.

Of course there was more to it than that.

The industry did not leave it to journalists to seek out “all the facts.” They made sure they got them. The so-called balance campaign involved aggressive dissemination and promotion to editors and publishers of “information” that supported the industry’s position. But if the science was firm, how could they do that? Was the science firm?

The answer is yes, but. A scientific discovery is not an event; it’s a process, and often it takes time for the full picture to come into clear focus.  By the late 1950s, mounting experimental and epidemiological data linked tobacco with cancer— which is why the industry took action to oppose it. In private, executives acknowledged this evidence. In hindsight it is fair to say— and science historians have said— that the link was already established beyond a reasonable doubt. Certainly no one could honestly say that science showed that smoking was safe.

But science involves many details, many of which remained unclear, such as why some smokers get lung cancer and others do not (a question that remains incompletely answered today). So some scientists remained skeptical.

[…]

The industry made its case in part by cherry-picking data and focusing on unexplained or anomalous details. No one in 1954 would have claimed that everything that needed to be known about smoking and cancer was known, and the industry exploited this normal scientific honesty to spin unreasonable doubt.

[…]

The industry had realized that you could create the impression of controversy simply by asking questions, even if you actually knew the answers and they didn’t help your case. And so the industry began to transmogrify emerging scientific consensus into raging scientific “debate.”

Merchants of Doubt is a fascinating look at how the process for sowing doubt in the minds of people remains the same today as it was in the 1950s. After all, if it ain’t broke, don’t fix it.

Karl Popper on The Line Between Science and Pseudoscience

It’s not immediately clear to the layman what the essential difference is between science and something masquerading as science: pseudoscience. The distinction gets at the core of what constitutes human knowledge: How do we actually know something to be true? Is it simply because our powers of observation tell us so? Or is there more to it?

Sir Karl Popper (1902-1994), the philosopher of science, was interested in the same problem. How do we actually define the scientific process? How do we know which theories can be said to be truly explanatory?


He began addressing it in a lecture, which is printed in the book Conjectures and Refutations: The Growth of Scientific Knowledge (also available online):

When I received the list of participants in this course and realized that I had been asked to speak to philosophical colleagues I thought, after some hesitation and consultation, that you would probably prefer me to speak about those problems which interest me most, and about those developments with which I am most intimately acquainted. I therefore decided to do what I have never done before: to give you a report on my own work in the philosophy of science, since the autumn of 1919 when I first began to grapple with the problem, ‘When should a theory be ranked as scientific?’ or ‘Is there a criterion for the scientific character or status of a theory?’

Popper saw a problem: a number of theories he considered non-scientific seemed, on their surface, to have a lot in common with good, hard, rigorous science. But the question of how we decide which theories are compatible with the scientific method, and which are not, was harder than it seemed.

***

It is most common to say that science is done by collecting observations and grinding out theories from them. Charles Darwin once said, after working long and hard on the problem of the Origin of Species,

My mind seems to have become a kind of machine for grinding general laws out of large collections of facts.

This is a popularly accepted notion. We observe, observe, and observe, and we look for theories to best explain the mass of facts. (Although even this is not really true: Popper points out that we must start with some a priori knowledge to be able to generate new knowledge. Observation is always done with some hypotheses in mind–we can’t understand the world from a totally blank slate. More on that another time.)

The problem, as Popper saw it, is that some bodies of knowledge more properly named pseudosciences would be considered scientific if the “Observe & Deduce” operating definition were left alone. For example, a believing astrologist can ably provide you with “evidence” that their theories are sound. The biographical information of a great many people can be explained this way, they’d say.

The astrologist would tell you, for example, about how “Leos” seek to be the centre of attention: ambitious, strong, seeking the limelight. As proof, they might follow up with a host of real-life Leos: world leaders, celebrities, politicians, and so on. In some sense, the theory would hold up. The observations could be explained by the theory, which is how science works, right?

Sir Karl ran into this problem in a concrete way because he lived during a time when psychoanalytic theories were all the rage at just the same time Einstein was laying out a new foundation for the physical sciences with the concept of relativity. What made Popper uncomfortable were comparisons between the two. Why did he feel so uneasy putting Marxist theories and Freudian psychology in the same category of knowledge as Einstein’s Relativity? Did all three not have vast explanatory power in the world? Each theory’s proponents certainly believed so, but Popper was not satisfied.

It was during the summer of 1919 that I began to feel more and more dissatisfied with these three theories–the Marxist theory of history, psychoanalysis, and individual psychology; and I began to feel dubious about their claims to scientific status. My problem perhaps first took the simple form, ‘What is wrong with Marxism, psycho-analysis, and individual psychology? Why are they so different from physical theories, from Newton’s theory, and especially from the theory of relativity?’

I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory.

Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still ‘un-analysed’ and crying aloud for treatment.

Here was the salient problem: The proponents of these new sciences saw validations and verifications of their theories everywhere. If you were having trouble as an adult, it could always be explained by something your mother or father had done to you when you were young, some repressed something-or-other that hadn’t been analysed and solved. They were confirmation bias machines.

What was the missing element? Popper had figured it out before long: The non-scientific theories could not be falsified. They were not testable in a legitimate way. There was no possible objection that could be raised which would show the theory to be wrong.

In a true science, the following statement can be easily made: “If X happens, it would show demonstrably that theory Y is not true.” We can then design an experiment, a physical one or sometimes a simple thought experiment, to figure out whether X actually does happen. It’s the opposite of looking for verification; you must try to show the theory is incorrect, and if you fail to do so, you thereby strengthen it.

Pseudosciences cannot and do not do this–they are not strong enough to hold up. As an example, Popper discussed Freud’s theories of the mind in relation to Alfred Adler’s so-called “individual psychology,” which was popular at the time:

I may illustrate this by two very different examples of human behaviour: that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child. Each of these two cases can be explained with equal ease in Freudian and in Adlerian terms. According to Freud the first man suffered from repression (say, of some component of his Oedipus complex), while the second man had achieved sublimation. According to Adler the first man suffered from feelings of inferiority (producing perhaps the need to prove to himself that he dared to commit some crime), and so did the second man (whose need was to prove to himself that he dared to rescue the child). I could not think of any human behaviour which could not be interpreted in terms of either theory. It was precisely this fact–that they always fitted, that they were always confirmed–which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness.

Popper contrasted these theories against Relativity, which made specific, verifiable predictions, giving the conditions under which the predictions could be shown false. It turned out that Einstein’s predictions came to be true when tested, thus verifying the theory through attempts to falsify it. But the essential nature of the theory gave grounds under which it could have been wrong. To this day, physicists seek to figure out where Relativity breaks down in order to come to a more fundamental understanding of physical reality. And while the theory may eventually be proven incomplete or a special case of a more general phenomenon, it has still made accurate, testable predictions that have led to practical breakthroughs.

Thus, in Popper’s words, science requires testability: “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.”  This means a good theory must have an element of risk to it. It must be able to be proven wrong under stated conditions.

From there, Popper laid out his essential conclusions, which are useful to any thinker trying to figure out if a theory they hold dear is something that can be put in the scientific realm:

1. It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

3. Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

4. A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence’.)

7. Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a ‘conventionalist twist’ or a ‘conventionalist stratagem’.)

One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.

Finally, Popper was careful to say that it is not possible to prove that Freudianism was not true, at least in part. But we can say that we simply don’t know whether it’s true because it does not make specific testable predictions. It may have many kernels of truth in it, but we can’t tell. The theory would have to be restated.

This is the essential “line of demarcation,” as Popper called it, between science and pseudoscience.

What Can Chain Letters Teach Us About Natural Selection?

“It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.”

***

In 1859, Charles Darwin first described his theory of evolution through natural selection in The Origin of Species. Here we are, 157 years later, and although it has become an established fact in the field of biology, its beauty is still not that well understood among the general public. I think that’s because it’s slightly counter-intuitive. Yet unlike string theory or quantum mechanics, the theory of evolution through natural selection is within easy reach of most of us.

So, is there a way we can help ourselves understand the theory in an intuitive way, so we can better go on applying it to other domains? I think so, and it comes from an interesting little volume released in 1995 by the biologist Richard Dawkins called River Out of Eden. But first, let’s briefly head back to the Origin of Species, so we’re clear on what we’re trying to understand.

***

In the fourth chapter of the book, entitled “Natural Selection,” Darwin describes a somewhat cold and mechanistic process for the development of species: If species had heritable traits and variation within their population, they would survive in different numbers, and those most adapted to survival would thrive and pass on those traits to successive generations. Eventually, new species would arise, slowly, as enough variation and differential reproduction acted on the population to create a de facto branch in the family tree.

Here’s the original description.

Let it be borne in mind how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life. Can it, then, be thought improbable, seeing that variations useful to man have undoubtedly occurred, that other variations useful in some way to each being in the great and complex battle of life, should sometimes occur in the course of thousands of generations? If such do occur, can we doubt (remembering that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed. This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection.

[…]

In such case, every slight modification, which in the course of ages chanced to arise, and which in any way favored the individuals of any species, by better adapting them to their altered conditions, would tend to be preserved; and natural selection would thus have free scope for the work of improvement.

[…]

It may be said that natural selection is daily and hourly scrutinizing, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life.

The beauty of the theory is in its simplicity. The mechanism of evolution is, at root, a simple one. An unguided one. Better descendants outperform lesser ones in a competitive world and are more successful at replicating. Traits that improve the survival of their holder in its current environment tend to be preserved and amplified over time. This is hard to see in real time, although some examples are helpful in understanding the concept, e.g. antibiotic resistance.

Darwin’s idea didn’t take hold as quickly as we might like to think. In The Reluctant Mr. Darwin, David Quammen describes the period after the release of the groundbreaking work, in which the world had trouble coming to grips with Darwin’s theory. It was not the case, as it might seem today, that the world simply accepted Darwin as a genius straight away. This is a lesson in and of itself. Quite the contrary:

By the 1890s, natural selection as Darwin had defined it–that is, differential reproductive success resulting from small, undirected variations and serving as the chief mechanism of adaptation and divergence–was considered by many evolutionary biologists to have been a wrong guess.

It wasn’t until Gregor Mendel’s peas showed how heritability worked that Darwin’s ideas were truly vindicated against his rivals’. So if we have trouble coming to terms with evolution by natural selection in the modern age, we’re not alone: So did Darwin’s peers.

***

What’s this all got to do with chain letters? Well, in River Out of Eden, Dawkins provides an analogy for the process of evolution through natural selection that is quite intuitive and helpful in understanding the simple power of the idea. How would a certain type of chain letter come to dominate the population of all chain letters? It would work the same way.

A simple example is the so-called chain letter. You receive in the mail a postcard on which is written: “Make six copies of this card and send them to six friends within a week. If you do not do this, a spell will be cast upon you and you will die in horrible agony within a month.” If you are sensible you will throw it away. But a good percentage of people are not sensible; they are vaguely intrigued, or intimidated by the threat, and send six copies of it to other people. Of these six, perhaps two will be persuaded to send it on to six other people. If, on average, 1/3 of the people who receive the card obey the instructions written on it, the number of cards in circulation will double every week. In theory, this means that the number of cards in circulation after one year will be 2 to the power of 52, or about four thousand trillion. Enough post cards to smother every man, woman, and child in the world.

Exponential growth, if not checked by the lack of resources, always leads to startlingly large-scale results in a surprisingly short time. In practice, resources are limited and other factors, too, serve to limit exponential growth. In our hypothetical example, individuals will probably start to balk when the same chain letter comes around to them for the second time. In the competition for resources, variants of the same replicator may arise that happen to be more efficient at getting themselves duplicated. These more efficient replicators will tend to displace their less efficient rivals. It is important to understand that none of these replicating entities is consciously interested in getting itself duplicated. But it will just happen that the world becomes filled with replicators that are more efficient.

In the case of the chain letter, being efficient may consist in accumulating a better collection of words on the paper. Instead of the somewhat implausible statement that “if you don’t obey the words on the card you will die in horrible agony within a month,” the message might change to “Please, I beg of you, to save your soul and mine, don’t take the risk: if you have the slightest doubt, obey the instructions and send the letter to six more people.”

Such “mutations” happen again and again, and the result will eventually be a heterogeneous population of messages all in circulation, all descended from the same original ancestor but differing in detailed wording and in the strength and nature of the blandishments they employ. The variants that are more successful will increase in frequency at the expense of less successful rivals. Success is simply synonymous with frequency in circulation.

The chain letter contains all of the elements of biological natural selection except one: Someone had to write the first chain letter. The first replicating biological entity, on the other hand, seems to have sprung up from an early chemical brew.
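Dawkins’s scenario is easy to put into code. The sketch below is our own toy model, not anything from the book: the two letter variants, their persuasion rates, and the resource cap are invented for illustration. Each generation, every copy reaches one recipient, a persuaded recipient mails six fresh copies, and the cap stands in for finite resources. (Python also confirms the quote’s arithmetic: 2**52 is about 4.5 quadrillion, i.e., roughly four and a half thousand trillion postcards.)

```python
# Two hypothetical letter variants that differ only in persuasiveness:
# the fraction of recipients who obey and mail out six fresh copies.
VARIANTS = {"threatening": 1 / 3, "pleading": 2 / 5}

def next_generation(counts, cap=4_000_000_000):
    """Expected copies in circulation one generation later. The cap is a
    stand-in for finite resources (finite mailboxes, finite patience)."""
    total = sum(counts.values())
    scale = min(1.0, cap / total)  # growth stalls once the world saturates
    return {v: counts[v] * rate * 6 * scale for v, rate in VARIANTS.items()}

counts = {"threatening": 100.0, "pleading": 100.0}
for gen in range(1, 31):
    counts = next_generation(counts)
share = counts["pleading"] / sum(counts.values())
print(f"after 30 generations, the pleading variant is {share:.0%} of circulation")
```

Neither variant “wants” anything, yet the more persuasive one ends up as nearly all of the circulating population: differential replication is the whole mechanism.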

Consider this analogy an intermediate mental “step” towards the final goal. Because we know and appreciate the power of reasoning by analogy and metaphor, we can deduce that finding an appropriate analogy is one of the best ways to pound an idea into your head–assuming it is a correct idea that should be pounded in.

And because evolution through natural selection is one of the more powerful ideas a human being has ever had, it seems worth our time to pound this one in for good and start applying it elsewhere if possible. (For example, Munger has talked about how business evolves in a manner such that competitive results are frequently similar to biological outcomes.)

Read Dawkins’ book in full for a deeper look at his views on replication and natural selection. It’s shorter than some of his other works, but worth the time.

How Darwin Thought: The Golden Rule of Thinking

In his 1986 commencement speech at the Harvard School in Los Angeles (found in Poor Charlie’s Almanack), Charlie Munger gave a short, Johnny Carson-style speech on the things to avoid in order to end up with a happy and successful life. One of his most salient prescriptions comes from the life of Charles Darwin:

It is my opinion, as a certified biography nut, that Charles Robert Darwin would have ranked in the middle of the Harvard School graduating class of 1986. Yet he is now famous in the history of science. This is precisely the type of example you should learn nothing from if bent on minimizing your results from your own endowment.

Darwin’s result was due in large measure to his working method, which violated all my rules for misery and particularly emphasized a backward twist in that he always gave priority attention to evidence tending to disconfirm whatever cherished and hard-won theory he already had. In contrast, most people early achieve and later intensify a tendency to process new and disconfirming information so that any original conclusion remains intact. They become people of whom Philip Wylie observed: “You couldn’t squeeze a dime between what they already know and what they will never learn.”

The life of Darwin demonstrates how a turtle may outrun a hare, aided by extreme objectivity, which helps the objective person end up like the only player without a blindfold in a game of Pin the Tail on the Donkey.


The great Harvard biologist E.O. Wilson agreed. In his book, Letters to a Young Scientist, Wilson argued that Darwin would have probably scored in the 130 range on a standard IQ test. And yet there he is, buried next to the calculus-inventing genius Isaac Newton in Westminster Abbey. (As Munger often notes.)

Darwin’s own account, from his autobiography, shows the “backward twist” Munger admired:

I had, also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favorable ones.

What can we learn from the working and thinking habits of Darwin?

Extreme Focus Combined with Attentive Energy

The first clue comes from his own autobiography. Darwin was a hoover of information on any topic he was interested in. After describing some of his specific areas of study while aboard the H.M.S. Beagle, Darwin concludes in his Autobiography:

The above various special studies were, however, of no importance compared with the habit of energetic industry and of concentrated attention to whatever I was engaged in, which I then acquired. Everything about which I thought or read was made to bear directly on what I had seen and was likely to see; and this habit of mind was continued during the five years of the voyage. I feel sure that it was this training which has enabled me to do whatever I have done in science.

This habit of pure and attentive focus on the task at hand is, of course, echoed in many of our favorite thinkers, from Sherlock Holmes to E.O. Wilson, Feynman, Einstein, and others. Munger himself remarked, “I did not succeed in life by intelligence. I succeeded because I have a long attention span.”

In Darwin’s quest, almost nothing relevant to his task at hand — the problem of understanding the origin and development of species — escaped his attention. He had an extremely broad antenna. Says David Quammen in his fabulous The Reluctant Mr. Darwin:

One of Darwin’s great strengths as a scientist was also, in some ways, a disadvantage: his extraordinary breadth of curiosity. From his study at Down House he ranged widely and greedily, in his constant search for data, across distances (by letter) and scientific fields. He read eclectically and kept notes like a pack rat. Over the years he collected an enormous quantity of interconnected facts. He looked for patterns but was intrigued equally by exceptions to the patterns, and exceptions to the exceptions. He tested his ideas against complicated groups of organisms with complicated stories, such as the barnacles, the orchids, the social insects, the primroses, and the hominids.

Not only was Darwin thinking broadly, taking in facts at all turns and on many subjects, but he was also thinking carefully. This is where Munger’s admiration comes in: Darwin wanted to look at the exceptions, and the exceptions to the exceptions. He was on the hunt for truth, not merely confirmation of some well-loved idea. Simply put, he didn’t want to be wrong about the nature of reality. Getting the theory whole and correct would take lots of detail and time, as we will see.

***

The habit of study and observation didn’t stop at the plant and animal kingdom for Darwin. In a move that might seem strange by today’s standards, Darwin even opened a notebook to study the development of his own newborn son, William. This is from one of his notebooks:

Natural History of Babies

Do babies start (i.e., useless sudden movement of muscles) very early in life. Do they wink, when anything placed before their eyes, very young, before experience can have taught them to avoid danger. Do they know frown when they first see it?

From there, as his child grew and developed, Darwin took close notes. How did he figure out that the reflection in the mirror was him? How did he then figure out it was only an image of him, and that any other images that showed up (say, Dad standing behind him) were mere images too – not reality? These were further data in Darwin’s mental model of the accumulation of gradual changes, but more importantly, displayed his attention to detail. Everything eventually came to “bear directly on what I had seen and what I was likely to see.”

And in a practical sense, Darwin was a relentless note-taker. Notebook A, Notebook B, Notebook C, Notebook M, Notebook N…all filled with observations from his study of journals and texts, his own scientific work, his travels, and his life. Once he sat down to write, he had an enormous amount of prior written thought to draw on. He could also see gaps in his understanding, which he diligently filled in.

Become an Expert

You can learn much about Darwin (and, truthfully, about anyone) by looking at whom he studied and admired. If Darwin held anyone in high esteem, it was Charles Lyell, whose Principles of Geology was his faithful companion aboard the H.M.S. Beagle. Here is his description of Lyell from his autobiography, which tells us something of the traits Darwin valued and sought to emulate:

I saw more of Lyell than of any other man before and after my marriage. His mind was characterized, as it appeared to me, by clearness, caution, sound judgment and a good deal of originality. When I made any remark to him on Geology, he never rested until he saw the whole case clearly and often made me see it more clearly than I had done before. He would advance all possible objections to my suggestions, and even after these were exhausted would long remain dubious. A second characteristic was his hearty sympathy with the work of other scientific men.

Studying Lyell and geology reinforced Darwin’s (probably natural) suspicion that careful, detailed, and objective work was required to create scientific breakthroughs. And once Darwin was grounded at the level of expertise Lyell demanded in order to understand and explain the theory of geology, he had a basis for the rest of his scientific work. From his autobiography:

After my return to England, it appeared to me that by following the example of Lyell in Geology, and by collecting all facts which bore in any way on the variation of animals and plants under domestication and nature, some light might perhaps be thrown on the whole subject.

In fact, it was Darwin’s study and understanding of geology itself that gave him something to lean on conceptually. Lyell’s theory of geology, and his own, described a slow-moving process that accumulated massive changes gradually over time. This seems like common knowledge today, but at the time, people weren’t so sure that mountains and islands could have been created by such slow-moving, incremental processes.

Wallace & Gruber’s book Creative People at Work, an analysis of a variety of thinkers and artists, argues that this basic mental model carried Darwin pretty far:

Why was the acquisition of expert knowledge in geology so important to the development of Darwin’s overall thinking? Because in learning geology Darwin ground a conceptual lens — a device for bringing into focus and clarifying the problems to which he turned his attention. When his attention shifted to problems beyond geology, the lens remained and Darwin used it in exploring new problems.

[…]

(Darwin’s) coral reef theory shows that he had become an expert in one field…(and) the central idea in Darwin’s understanding of geology was “gradualism” — that great things could be produced by long, continued accumulation of very small effects. The next phase in the development of this thought-form would involve his use of it as the basis for the construction of analogies between geology and new, unfamiliar subjects.

[…]

Darwin wrote his most explicit and concise statement of the nature and utility of his gradualism thought-form: “This multiplication of little means and bringing the mind to grapple with great effect produced is a most laborious & painful effort of the mind.” He recognized that it took patience and discipline to discover the “little means” that were responsible for great effects. With the necessary effort, however, this gradualism thought-form could become the vehicle for explaining many remarkable phenomena in geology, biology, and even psychology.

It is amazing to note that Darwin did not publish The Origin of Species until 1859, even though his notebooks show he had been close to the correct idea at least 15 or 20 years prior. What was he doing in all that time? Well, for eight years at least, he was studying barnacles.

***

One of the reasons Darwin went on a crusade of classifying and studying the barnacles in minute detail was his concern that if he wasn’t a primary expert on some portion of the natural world, his work on a larger and more general thesis would not be taken seriously, and would probably have holes. He said as much to his friend Frederic Gerard, a French botanist, before he had begun his barnacle work: “How painfully (to me) true is your remark that no one has hardly a right to examine the question of species who has not minutely described many.” And, of course, Darwin being Darwin, he spent eight years remedying that situation.

It seemed like extraordinarily tedious work, unrelated to anything a scientist would consider important on a grand scale. It was taxonomy. Classification. Even Darwin admitted later on that he doubted it was worth the years he spent on it. Yet in his detail-oriented pursuit of expertise on barnacles, he hit upon some key ideas that would make his theory of natural selection complete. Says Quammen:

He also found notable differences on another categorical level: within species. Contrary to what he’d believed all along about the rarity of variation in the wild, barnacles turned out to be highly variable. A species wasn’t a Platonic essence or a metaphysical type. A species was a population of differing individuals.

He wouldn’t have seen that if he hadn’t assigned himself the tricky job of drawing lines between one species and another. He wouldn’t have seen it if he hadn’t used his network of contacts and his good reputation as a naturalist to gather barnacle specimens, in quantity, from all over the world. The truth of variation only reveals itself in crowds. He wouldn’t have seen it if he hadn’t examined multiple individuals, not just single representatives, of as many species as possible….Abundant variation among barnacles filled a crucial role in his theory. Here they were, the minor differences on which natural selection works.

Darwin was so diligent it could be breathtaking at times. Quammen describes him gathering up various species to assess the data about their development and their variation. Birds, dead or alive, as many as possible. Foxes, dogs, ducks, pigeons, rabbits, cats…nothing escaped his purview. As many specimens as he could get his hands on. All while living in a secluded house in Victorian England, beset by constant illness. He was Big Data before Big Data was a thing, trying to suss out conclusions from a mass of observation.

The Golden Rule

Eventually, his work led him to something new: Species are not immutable; they are all part of the same family tree. They evolve through a process of variation — he didn’t know how; that took years for others to figure out through the study of genetics — and differential survival through natural selection.

Darwin was able to put his finger on why it took humanity so long to arrive at this correct theory: it is extremely counter-intuitive, running against how one would naturally see the world. He admitted as much in the Origin of Species’ concluding chapter:

The chief cause of our natural unwillingness to admit that one species has given birth to other and distinct species, is that we are always slow in admitting any great changes of which we do not see the steps. The difficulty is the same as that felt by so many geologists, when Lyell first insisted that long lines of inland cliffs had been formed, and great valleys excavated, by the agencies which we still see at work. The mind cannot possibly grasp the full meaning of the term of even a million years; it cannot add up and perceive the full effects of many slight variations, accumulated during an almost infinite number of generations.
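
Darwin’s point about the mind failing to “add up” slight variations is easy to check with a toy calculation. The numbers below are mine, purely for illustration, and not anything Darwin or Quammen computed: suppose a trait shifts by a mere 0.01% per generation, far too small for any observer to notice, and let that compound over 100,000 generations.

```python
# A toy illustration (hypothetical numbers, not Darwin's or Quammen's):
# how per-generation changes too small to notice compound into huge ones.
per_generation = 0.0001  # assumed 0.01% shift per generation
generations = 100_000    # a modest span on geological timescales

factor = (1 + per_generation) ** generations
print(f"Cumulative change after {generations:,} generations: x{factor:,.0f}")
# -> Cumulative change after 100,000 generations: x22,015
```

The exact model hardly matters; the point is that effects invisible on the scale of a lifetime become overwhelming on the scale of geology.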

Counter-intuition was Darwin’s specialty. And the reason he was so good at it was that he had a very simple habit of thought, described in his autobiography and so cherished by Charlie Munger: he paid special attention to collecting facts which did not agree with his prior conceptions. He called this a golden rule.

I had, also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favorable ones. Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer.

So we see that Darwin’s great success, by his own analysis, owed much to his ability to see, note, and learn from objections to his cherished ideas. The Origin of Species has stood up to 157 years of subsequent biological research because Darwin was so careful to make the theory nearly impossible to refute. Later scientists would find the book slightly incomplete, but not incorrect.

This passage reminds one of, and probably influenced, Charlie Munger’s prescription on the work required to hold an opinion: you must understand the opposite side of the argument better than the person holding that side does. It’s a very difficult way to think, tremendously unnatural given our genetic makeup (the more typical response is to look for as much confirming evidence as possible). Harnessed properly, though, it is a powerful way to overcome your own shortcomings and become a seeing man amongst the blind.

Thus we can deduce that, in addition to good luck and good timing, it was Darwin’s habits of completeness, diligence, accuracy, and objectivity that ultimately led him to his greatest breakthroughs. It was tedious work. There was no spark of divine insight that gave him his edge. He simply started with the right basic ideas and the right heroes, and then worked for a long time, with extreme focus and objectivity, always keeping his eye on reality.

In the end, you can do worse than to read all you can find on Charles Darwin and try to copy his mental habits. They will serve you well over a long life.