Category: Science

Survival of the Fittest: Groups versus Individuals

If ‘survival of the fittest’ is the prime evolutionary tenet, then why do some behaviors that lead to winning or success, seemingly justified by this concept, ultimately leave us cold?

Taken from Darwin’s theory of evolution, survival of the fittest is often conceptualized as the advantage that accrues with certain traits, allowing an individual to both thrive and survive in their environment by out-competing others for limited resources. Qualities such as strength and speed were beneficial to our ancestors, allowing them to survive in demanding environments, and thus our general admiration for these qualities is now understood through this evolutionary lens.

However, in humans this evolutionary concept is often co-opted to defend a wide range of behaviors, not all of them good: winning by cheating, say, or stepping on others to achieve our goals.

Why is this?

One answer is that humans are not only concerned with our individual survival, but the survival of our group. (Which, of course, leads to improved individual survival, on average.) This relationship between individual and group survival is subject to intense debate among biologists.

Selecting for Unselfishness?

Humans display a wide range of behavior that seems counter-intuitive to the survival of the fittest mentality until you consider that we are an inherently social species, and that keeping our group fit is a wise investment of our time and energy.

One behavior that humans frequently display is “indirect reciprocity”. Distinguished from “direct reciprocity”, in which I help you and you help me, indirect reciprocity confers no immediate benefit on the one doing the helping. Either I help you and you later help someone else, or I help you and someone else, some time in the future, helps me.

Martin A. Nowak and Karl Sigmund have studied this phenomenon in humans for many years. Essentially, they ask the question “How can natural selection promote unselfish behavior?”

Many of their studies have shown that “propensity for indirect reciprocity is widespread. A lot of people choose to do it.”


Humans are the champions of reciprocity. Experiments and everyday experience alike show that what Adam Smith called ‘our instinct to trade, barter and truck’ relies to a considerable extent on the widespread tendency to return helpful and harmful acts in kind. We do so even if these acts have been directed not to us but to others.

We care about what happens to others, even if the entire event is one that we have no part in. If you consider evolution in terms of survival of the fittest group, rather than individual, this makes sense.

Supporting those who harm others can breed mistrust and instability. And if we don’t trust each other, day-to-day transactions in our world will be completely undermined. Sending your kids to school, banking, online shopping: we place a huge amount of trust in our fellow humans every day.

If we consider this idea of group survival, we can also see value in a wider range of human attributes and behaviors. It is now not about “I have to be the fittest in every possible way in order to survive“, but recognizing that I want fit people in my group.

In her excellent book, Quiet: The Power of Introverts in a World That Can’t Stop Talking, author Susan Cain explores, among other things, the relevance of introverts to social function. How their contributions benefit the group as a whole. Introverts are people who “like to focus on one task at a time, … listen more than they talk, think before they speak, … [and] tend to dislike conflict.”

Though out of step with the culture of “the extrovert ideal” we are currently living in, introverts contribute significantly to our group fitness. Without them we would be deprived of much of our art and scientific progress.

Cain argues:

Among evolutionary biologists, who tend to subscribe to the vision of lone individuals hell-bent on reproducing their own DNA, the idea that species include individuals whose traits promote group survival is hotly debated and, not long ago, could practically get you kicked out of the academy.

But the idea makes sense. If personality types such as introverts aren’t the fittest for survival, why do they persist? Possibly because of their value to the group.

Cain looks at the work of Dr. Elaine Aron, who has spent years studying introverts, and is one herself. In explaining the idea of different personality traits as part of group selection in evolution, Aron offers this story in an article posted on her website:

I used to joke that when a group of prehistoric humans were sitting around the campfire and a lion was creeping up on them all, the sensitive ones [introverts] would alert the others to the lion’s prowling and insist that something be done. But the non-sensitive ones [extroverts] would be the ones more likely to go out and face the lion. Hence there are more of them than there are of us, since they are willing and even happy to do impulsive, dangerous things that will kill many of us. But also, they are willing to protect us and hunt for us, if we are not as good at killing large animals, because the group needs us. We have been the healers, trackers, shamans, strategists, and of course the first to sense danger. So together the two types survive better than a group of just one type or the other.

The lesson is this: Groups survive better if they have individuals with different strengths to draw on. The more tools you have, the more likely you are to complete a job. The more people you have with different strengths, the more likely you are to survive the unexpected.

Which Group?

How then, does one define the group? Who am I willing to help? Arguably, I’m most willing to sacrifice for my children, or family. My immediate little group. But history is full of examples of those who sacrificed significantly for their tribes or sports teams or countries.

We can’t argue that it is just about the survival of our own DNA. That may explain why I will throw myself in front of a speeding car to protect my child, but the beaches of Normandy were stormed by thousands of young, childless men. When soldiers from World War I were interviewed about why they would jump out of a trench to try to take a slice of no man’s land, they most often said they did it “for the guy next to them”. They initially joined the military out of a sense of “national pride”, or other very non-DNA reasons.

Clearly, human culture is capable of defining “groups” very broadly through a complex system of mythology, creating deep loyalty to “imaginary” groups like sports teams, corporations, nations, or religions.

As technology shrinks our world, our group expands. Technological advancement pushes us into higher degrees of specialization, so that individual survival becomes clearly linked with group survival.

I know that I have a vested interest in doing my part to maintain the health of my group. I am very attached to indoor plumbing and grocery stores, yet don’t participate at all in the giant webs that allow those things to exist in my life. I don’t know anything about the configuration of the municipal sewer system or how to grow raspberries. (Of course, Adam Smith called this process of the individual benefitting the group through specialization the Invisible Hand.)

When we see ourselves as part of a group, we want the group to survive and even thrive. Yet how big can our group be? Is there always an us vs. them? Does our group’s survival always have to come at the expense of others? We leave you to ponder these questions.


Principles for an Age of Acceleration

We live in an age where technology is developing at a rate faster than what any individual can keep up with. To survive in an age of acceleration, we need a new way of thinking about technology.


MIT Media Lab is a creative nerve center where great ideas like One Laptop per Child, LEGO Mindstorms, and Scratch programming language have emerged.

Its director, Joi Ito, has done a lot of thinking about how prevailing systems of thought will not be the ones to see us through the coming decades. In his book Whiplash: How to Survive Our Faster Future, he notes that sometime late in the last century, technology began to outpace our ability to understand it.

We are blessed (or cursed) to live in interesting times, where high school students regularly use gene editing techniques to invent new life forms, and where advancements in artificial intelligence force policymakers to contemplate widespread, permanent unemployment. Small wonder our old habits of mind—forged in an era of coal, steel, and easy prosperity—fall short. The strong no longer necessarily survive; not all risk needs to be mitigated; and the firm is no longer the optimum organizational unit for our scarce resources.

Ito’s ideas are not specific to our moment in history, but adaptive responses to a world with certain characteristics:

1. Asymmetry
In our era, effects are no longer proportional to the size of their source. The biggest change-makers of the future are the small players: “start-ups and rogues, breakaways and indie labs.”

2. Complexity
The level of complexity is shaped by four inputs, all of which are extraordinarily high in today’s world: heterogeneity, interconnection, interdependency and adaptation.

3. Uncertainty
Not knowing is okay. In fact, we’ve entered an age where the admission of ignorance offers strategic advantages over expending resources (subcommittees, think tanks, sales forecasts) toward the increasingly futile goal of forecasting future events.

When these three conditions are in place, certain guiding principles serve us best. In his book, Ito shares some of the maxims that organize his “anti-disciplinary” Media Lab in a complex and uncertain world.

Emergence over Authority

Complex systems show properties that their individual parts don’t possess, and we call this process “emergence”. For example, life is an emergent property of chemistry. Groups of people also produce a wondrous variety of emergent behaviors—languages, economies, scientific revolutions—when each intellect contributes to a whole that is beyond the abilities of any one person.

Some organizational structures encourage this kind of creativity more than others. Authoritarian systems only allow for incremental changes, whereas nonlinear innovation emerges from decentralized networks with a low barrier to entry. As Stephen Johnson describes in Emergence, when you plug more minds into the system, “isolated hunches and private obsessions coalesce into a new way of looking at the world, shared by thousands of individuals.”

Synthetic biology best exemplifies the type of new field that can arise from emergence. Not to be confused with genetic engineering, which modifies existing organisms, synthetic biology aims to create entirely new forms of life.

Having emerged in the era of open-source software, synthetic biology is becoming an exercise in radical collaboration between students, professors, and a legion of citizen scientists who call themselves biohackers. Emergence has made its way into the lab.

As a result, the cost of sequencing DNA is plummeting at six times the rate of Moore’s Law, and a large Registry of Standard Biological Parts, or BioBricks, now offers genetic components that perform well-understood functions in whatever organism is being created, like a block of Lego.

There is still a place for leaders in an organization that fosters emergence, but the role may feel unfamiliar to a manager from a traditional hierarchy. The new leader spends less time leading and more time “gardening”—pruning the hedges, watering the flowers, and otherwise getting out of the way. (As biologist Lewis Thomas puts it, a great leader must get the air right.)

Pull over Push

“Push” strategies involve directing resources from a central source to sites where, in the leader’s estimation, they are likely to be needed or useful. In contrast, projects that use “pull” strategies attract intellectual, financial and physical resources to themselves just as they are needed, rather than stockpiling them.

Ito is a proponent of the sharing economy, through which a startup might tap into the global community of freelancers and volunteers for a custom-made task force instead of hiring permanent teams of designers, programmers or engineers.

Here’s a great example:

When the Fukushima nuclear meltdown happened, Ito was living just outside of Tokyo. The Japanese government took a command-and-control (“push”) approach to the disaster, in which information would slowly climb up the hierarchy, and decisions would then be passed down stepwise to the ground-level workers.

It soon became clear that the government was not equipped to assess or communicate the radioactivity levels of each neighborhood, so Ito and his friends took the problem into their own hands. Pulling in expertise and money from far-flung scientists and entrepreneurs, they formed a citizen science group called Safecast, which built its own GPS-equipped Geiger counters and strapped them to cars for faster monitoring. They launched a website that continues to share data – more than 50 million data points so far – about local environments.

To benefit from these kinds of “pull” strategies, it pays to foster an environment that is rich with weak ties – a wide network of acquaintances from which to draw just-in-time knowledge and resources, as Ito did with Safecast.

Compasses over Maps

Detailed maps can be more misleading than useful in a fast-changing world, where a compass is the tool of choice. In the same way, organizations that plan exhaustively will be outpaced in an accelerating world by ones that are guided by a more encompassing mission.

A map implies a straightforward knowledge of the terrain, and the existence of an optimum route; the compass is a far more flexible tool and requires the user to employ creativity and autonomy in discovering his or her own path.

One advantage to the compass approach is that when a roadblock inevitably crops up, there is no need to go back to the beginning to form another plan or draw up multiple plans for each contingency. You simply navigate around the obstacle and continue in your chosen direction.

It is impossible, in any case, to make detailed plans for a complex and creative organization. The way to set a compass direction for a company is by creating a culture—or set of mythologies—that animates the parts in a common worldview.

In the case of the MIT Media Lab, that compass heading is described in three values: “Uniqueness, Impact, and Magic”. Uniqueness means that if someone is working on a similar project elsewhere, the lab moves on.

Rather than working to discover knowledge for its own sake, the lab works in the service of Impact, through start-ups and physical creations. That value was once expressed in the lab’s motto “Deploy or die”, but after Barack Obama suggested they work on their messaging, Ito shortened it to “Deploy.”

The Magic element, though hard to define, speaks to the delight that playful originality so often awakens.

Both students and faculty at the lab are there to learn, but not necessarily to be “educated”. Learning is something you pursue for yourself, after all, whereas education is something that’s done to you. The result is “agile, scrappy, permissionless innovation”.

The new job landscape requires more creativity from everybody. The people who will be most successful in this environment will be the ones who ask questions, trust their instincts, and refuse to follow the rules when the rules get in their way.

Other principles discussed in Whiplash include Risk over Safety, Disobedience over Compliance, Practice over Theory, Diversity over Ability, Resilience over Strength, and Systems over Objects.

The Founder Principle: A Wonderful Idea from Biology

We’ve all been taught natural selection: the mechanism by which species evolve through differential reproductive success. Most of us are familiar with the idea that random mutations in DNA cause variances in offspring, some of which survive more frequently than others. However, this is only part of the story.

Sometimes other situations cause massive changes in species populations, and they’re often more nuanced and tough to spot.

One such concept comes from one of the most influential biologists in history, Ernst Mayr. He called it the Founder Principle: a mechanism by which new species arise from a splintered population, often with lower genetic diversity and an increased risk of extinction.

In the brilliant The Song of the Dodo: Island Biogeography in an Age of Extinction, David Quammen gives us not only the stories of many brilliant naturalists, including Mayr, but also a deep dive into the core concepts of evolution and extinction, including the founder principle.

Quammen begins by outlining the basic idea:

When a new population is founded in an isolated place, the founders usually constitute a numerically tiny group – a handful of lonely pioneers, or just a pair, or maybe no more than one pregnant female. Descending from such a small number of founders, the new population will carry only a minuscule and to some extent random sample of the gene pool of the base population. The sample will most likely be unrepresentative, encompassing less genetic diversity than the larger pool. This effect shows itself whenever a small sample is taken from a large aggregation of diversity; whether the aggregation consists of genes, colored gum balls, M&M’s, the cards of a deck, or any other collection of varied items, a small sample will usually contain less diversity than the whole.

Why does the founder principle happen? It’s basically applied probability. Perhaps an example will help illuminate the concept.

Think of yourself playing a game of five-card-draw poker with a friend. The deck is separated into four suits: diamonds, hearts, clubs, and spades. Each suit has 13 cards, for a total of 52 cards.

Now look at your hand of five cards. Do you have one card from each suit? Maybe. Are all five cards from the same suit? Probably not, but it is possible. Will you get the ace of spades? Maybe, but not likely.

This is a good metaphor for how the founder principle works. The gene pool carried by a small group of founders is unlikely to be precisely representative of the gene pool of the larger group. In some rare cases it will be very unrepresentative, like you getting dealt a straight flush.
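The card metaphor lends itself to a quick simulation. Here is a minimal sketch (the trial count and seed are arbitrary choices) showing that a five-card hand almost never carries the full suit diversity of the deck it was drawn from:

```python
import random

# Build a standard 52-card deck: 4 suits x 13 ranks.
suits = ["diamonds", "hearts", "clubs", "spades"]
ranks = list(range(1, 14))  # 1 = ace ... 13 = king
deck = [(rank, suit) for suit in suits for rank in ranks]

def average_distinct_suits(hand_size=5, trials=100_000, seed=42):
    """Average number of distinct suits in a random hand of hand_size cards."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hand = rng.sample(deck, hand_size)
        total += len({suit for _, suit in hand})
    return total / trials

# The full deck always contains all 4 suits; a 5-card sample usually
# carries less suit diversity than the deck it came from.
print(f"Average distinct suits in a 5-card hand: {average_distinct_suits():.2f}")
```

On average a five-card hand contains roughly three of the four suits: the sample is almost always less diverse than the whole, which is the founder effect in miniature.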

It starts to get interesting when this founder population starts to reproduce, and genetic drift causes the new population to diverge significantly from its ancestors. Quammen explains:

Already isolated geographically from its base population, the pioneer population now starts drifting away genetically. Over the course of generations, its gene pool becomes more and more different from the gene pool of the base population – different both as to the array of alleles (that is, the variant forms of a given gene) and as to the commonness of each allele.

The founder population, in some cases, will become so different that it can no longer mate with the original population. This new species may even be a competitor for resources if the two populations are ever reintroduced. (Say, if a land bridge is created between two islands, or humans bring two species back in contact.)

Going back to our card metaphor, let’s pretend that you and your friend are playing with four decks of cards: 208 total cards. Say you randomly pull out forty cards from those decks and play with only those. If there are absolutely no kings among the forty cards, you will never be able to make a royal flush (ace + king + queen + jack + 10 of the same suit). It doesn’t matter how the cards are dealt; you can never make a royal flush with no kings.
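The chance of that happening isn’t negligible. A short calculation (using the card counts from the example above) gives the hypergeometric probability that a random forty-card sample from four decks contains no kings at all:

```python
from math import comb

# Four standard decks shuffled together: 208 cards, 16 of them kings.
TOTAL_CARDS = 208
KINGS = 16
SAMPLE_SIZE = 40

# Hypergeometric probability that a random 40-card sample contains
# zero kings: choose all 40 cards from the 192 non-kings.
p_no_kings = comb(TOTAL_CARDS - KINGS, SAMPLE_SIZE) / comb(TOTAL_CARDS, SAMPLE_SIZE)
print(f"P(no kings among the forty cards) = {p_no_kings:.4f}")
```

The probability works out to a few percent: rare, but it only has to happen once for the “king allele” to vanish from the new population for good.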

Thus it is with species: If a splintered-off population isn’t carrying a specific gene variant (allele), that variant can never be represented in the newly created population, no matter how prolific that gene may have been in the original population. It’s gone. And as the rarest variants disappear, the new population becomes increasingly unlike the old one, especially if the new population is small.

Some alleles are common within a population, some are rare. If the population is large, with thousands or millions of parents producing thousands or millions of offspring, the rare alleles as well as the common ones will usually be passed along. Chance operation at high numbers tends to produce stable results, and the proportions of rarity and commonness will hold steady. If the population is small, though, the rare alleles will most likely disappear […] As it loses its rare alleles by the wayside, a small pioneer population will become increasingly unlike the base population from which it derived.
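Quammen’s point about population size can be sketched numerically. Below is a minimal Wright-Fisher-style drift simulation; the population sizes, starting allele frequency, and time window are illustrative assumptions, not figures from the book:

```python
import numpy as np

def fraction_losing_rare_allele(pop_size, start_freq=0.05,
                                generations=100, trials=1000, seed=1):
    """Fraction of simulated populations that lose a rare allele
    entirely to genetic drift within `generations` generations."""
    rng = np.random.default_rng(seed)
    freq = np.full(trials, start_freq)  # one allele frequency per population
    for _ in range(generations):
        # Wright-Fisher step: each of the pop_size gene copies in the
        # next generation is drawn at random from the current frequency.
        carriers = rng.binomial(pop_size, freq)
        freq = carriers / pop_size
    return float(np.mean(freq == 0.0))

# Over the same stretch of time, drift strips the rare allele from a
# small founder group far more often than from a large base population.
print(fraction_losing_rare_allele(pop_size=20))
print(fraction_losing_rare_allele(pop_size=2000))
```

Run over the same number of generations, the small population loses the rare allele most of the time, while the large one loses it far less often; that is drift compounding the founder effect.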

Some of this genetic loss may be positive (a gene that causes a rare disease may be missing), some may be negative (a gene for a useful attribute may be missing) and some may be neutral.

The neutral ones are the most interesting: A neutral gene at one point in time may become a useful gene at another point. It’s like playing a round of poker where 8’s are suddenly declared “wild,” and that card suddenly becomes much more important than it was the hand before. The same goes for animal traits.

Take a mammal population living on an island that has lost all of its ability to swim. That won’t mean much so long as all is well and it is never required to swim. But the moment there is a natural disaster such as a fire, the ability to swim the short distance to the mainland could be the difference between survival and extinction.

That’s why the founder principle is so dangerous: The loss of genetic diversity often means losing valuable survival traits. Quammen explains:

Genetic drift compounds the founder-effect problem, stripping a small population of the genetic variation that it needs to continue evolving. Without that variation, the population stiffens toward uniformity. It becomes less capable of adaptive response. There may be no manifest disadvantages in uniformity so long as environmental circumstances remain stable; but when circumstances are disrupted, the population won’t be capable of evolutionary adjustment. If the disruption is drastic, the population may go extinct.

This loss of adaptability is one of the two major issues caused by the founder principle, the second being inbreeding depression. A founder population may have no choice but to breed within itself, and a symptom of too much inbreeding is the manifestation of harmful genetic variants among inbred individuals. (This is one reason humans consider incest dangerous.) Inbreeding, too, increases the fragility of a species and decreases its ability to evolve.

The founder principle is just one of many amazing ideas in The Song of the Dodo. In fact, we at Farnam Street feel the book is so important that it made our list of books we recommend to improve your general knowledge of the world and it was the first book we picked for our members-only reading group.

If you have already read this book and want more we suggest Quammen’s The Reluctant Mr. Darwin or his equally thought provoking Spillover: Animal Infections and the Next Human Pandemic. Another wonderful and readable book on species evolution is The Beak of the Finch, by Jonathan Weiner.

The Island of Knowledge: Science and the Meaning of Life

“As the Island of Knowledge grows, so do the shores of our ignorance—the boundary between the known and unknown. Learning more about the world doesn’t lead to a point closer to a final destination—whose existence is nothing but a hopeful assumption anyways—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more we know to ask.”


Common across human history is our longing to better understand the world we live in, and how it works. But how much can we actually know about the world?

In his book The Island of Knowledge: The Limits of Science and the Search for Meaning, physicist Marcelo Gleiser traces the progress of modern science in its pursuit of the most fundamental questions: existence, the origin of the universe, and the limits of knowledge.

What we know of the world is limited by what we can see and what we can describe, but our tools have evolved over the years to reveal ever more pleats in the fabric of our knowledge. Gleiser celebrates this persistent struggle to understand our place in the world and travels our history from ancient knowledge to our current understanding.

While science is not the only way to see and describe the world we live in, it is a response to the questions of who we are, where we are, and how we got here. “Science speaks directly to our humanity, to our quest for light, ever more light.”

To move forward, science needs to fail, which runs counter to our human desire for certainty. “We are surrounded by horizons, by incompleteness.” Rather than give up, we struggle along a scale of progress. What makes us human is this journey to understand more about the mysteries of the world and explain them with reason. This is the core of our nature.

While the pursuit is never ending, the curious journey offers insight not just into the natural world, but insight into ourselves.

“What I see in Nature is a magnificent structure that we can comprehend only very imperfectly, and that must fill a thinking person with a feeling of humility.”
— Albert Einstein

We tend to think that what we see is all there is — that there is nothing we cannot see. We know it isn’t true when we stop and think, yet we still get lulled into a trap of omniscience.

Science is thus limited, offering only part of the story — the part we can see and measure. The other part remains beyond our immediate reach.

“What we see of the world,” Gleiser begins, “is only a sliver of what’s out there.”

There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story. … We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery. This view is neither antiscientific nor defeatist. … Quite the contrary, it is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

While we may broadly understand the map of what we call reality, we fail to understand its terrain. Reality, Gleiser argues, “is an ever-shifting mosaic of ideas.”


The incompleteness of knowledge and the limits of our scientific worldview only add to the richness of our search for meaning, as they align science with our human fallibility and aspirations.

What we call reality is a (necessarily) limited synthesis. It is certainly our reality, as it must be, but it is not the entire reality itself:

My perception of the world around me, as cognitive neuroscience teaches us, is synthesized within different regions of my brain. What I call reality results from the integrated sum of countless stimuli collected through my five senses, brought from the outside into my head via my nervous system. Cognition, the awareness of being here now, is a fabrication of a vast set of chemicals flowing through myriad synaptic connections between my neurons. … We have little understanding as to how exactly this neuronal choreography engenders us with a sense of being. We go on with our everyday activities convinced that we can separate ourselves from our surroundings and construct an objective view of reality.

The brain is a great filtering tool, deaf and blind to vast amounts of information around us that offer no evolutionary advantage. Part of it we can see and simply ignore. Other parts, like dust particles and bacteria, go unseen because of limitations of our sensory tools.

As the Fox said to the Little Prince in Antoine de Saint-Exupery’s fable, “What is essential is invisible to the eye.” There is no better example than oxygen.

Science has increased our view. Our measurement tools and instruments can see bacteria and radiation, subatomic particles and more. However precise these tools have become, their view is still limited.

There is no such thing as an exact measurement. Every measurement must be stated within its precision and quoted together with “error bars” estimating the magnitude of errors. High-precision measurements are simply measurements with small error bars or high confidence levels; there are no perfect, zero-error measurements.
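Quoting a measurement together with its error bars can be made concrete in a few lines. A minimal sketch, with hypothetical repeated readings of the gravitational acceleration g:

```python
import statistics

# Repeated measurements of the same quantity (hypothetical readings, m/s^2).
readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.84, 9.80]

mean = statistics.mean(readings)
# Standard error of the mean: the "error bar" shrinks as more readings
# are taken, but it never reaches zero -- no measurement is exact.
sem = statistics.stdev(readings) / len(readings) ** 0.5

print(f"g = {mean:.3f} +/- {sem:.3f} m/s^2")
```

More readings narrow the error bar, but no number of readings drives it to zero; precision is always finite.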


Technology limits how deeply experiments can probe into physical reality. That is to say, machines determine what we can measure and thus what scientists can learn about the Universe and ourselves. Being human inventions, machines depend on our creativity and available resources. When successful, they measure with ever-higher accuracy and on occasion may also reveal the unexpected.

“All models are wrong, some are useful.”
— George Box

What we know about the world is only what we can detect and measure — even if we improve our “detecting and measuring” as time goes along. And thus we make our conclusions of reality on what we can currently “see.”

We see much more than Galileo, but we can’t see it all. And this restriction is not limited to measurements: speculative theories and models that extrapolate into unknown realms of physical reality must also rely on current knowledge. When there is no data to guide intuition, scientists impose a “compatibility” criterion: any new theory attempting to extrapolate beyond tested ground should, in the proper limit, reproduce current knowledge.


If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.


We thus must ask whether grasping reality’s most fundamental nature is just a matter of pushing the limits of science or whether we are being quite naive about what science can and can’t do.

Here is another way of thinking about this: if someone perceives the world through her senses only (as most people do), and another amplifies her perception through the use of instrumentation, who can legitimately claim to have a truer sense of reality? One “sees” microscopic bacteria, faraway galaxies, and subatomic particles, while the other is completely blind to such entities. Clearly they “see” different things and—if they take what they see literally—will conclude that the world, or at least the nature of physical reality, is very different.

Asking who is right misses the point, although surely the person using tools can see further into the nature of things. Indeed, to see more clearly what makes up the world and, in the process to make more sense of it and ourselves is the main motivation to push the boundaries of knowledge. … What we call “real” is contingent on how deeply we are able to probe reality. Even if there is such thing as the true or ultimate nature of reality, all we have is what we can know of it.


Our perception of what is real evolves with the instruments we use to probe Nature. Gradually, some of what was unknown becomes known. For this reason, what we call “reality” is always changing. … The version of reality we might call “true” at one time will not remain true at another. … Given that our instruments will always evolve, tomorrow’s reality will necessarily include entities not known to exist today. … More to the point, as long as technology advances—and there is no reason to suppose that it will ever stop advancing for as long as we are around—we cannot foresee an end to this quest. The ultimate truth is elusive, a phantom.

Gleiser makes his point with a beautiful metaphor: the Island of Knowledge.

Consider, then, the sum total of our accumulated knowledge as constituting an island, which I call the “Island of Knowledge.” … A vast ocean surrounds the Island of Knowledge, the unexplored ocean of the unknown, hiding countless tantalizing mysteries.

The Island of Knowledge grows as we learn more about the world and ourselves. And as the island grows, so too “do the shores of our ignorance—the boundary between the known and unknown.”

Learning more about the world doesn’t bring us closer to a final destination—whose existence is nothing but a hopeful assumption anyway—but to more questions and mysteries. The more we know, the more exposed we are to our ignorance, and the more questions we know to ask.

As we move forward we must remember that despite our quest, the shores of our ignorance grow as the Island of Knowledge grows. And while we will struggle with the fact that not all questions will have answers, we will continue to progress. “It is also good to remember,” Gleiser writes, “that science only covers part of the Island.”

Richard Feynman pointed out that science can only answer the subset of questions that go, roughly, “If I do this, what will happen?” Answers to questions like Why do the rules operate that way? and Should I do it? are not really questions of a scientific nature — they are moral, human questions, if they are knowable at all.

There are many ways of understanding and knowing that should, ideally, feed each other. “We are,” Gleiser concludes, “multidimensional creatures and search for answers in many, complementary ways. Each serves a purpose and we need them all.”

“The quest must go on. The quest is what makes us matter: to search for more answers, knowing that the significant ones will often generate surprising new questions.”

The Island of Knowledge is a wide-ranging tour through scientific history from planetary motions to modern scientific theories and how they affect our ideas on what is knowable.

Merchants Of Doubt: How The Tobacco Strategy Obscures the Realities of Global Warming

There will always be those who challenge a growing scientific consensus — indeed, such challenges are fundamental to science. Motives, however, matter, and not everyone has good intentions.


Naomi Oreskes and Erik Conway’s masterful work Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming was recommended by Elon Musk.

The book illuminates how the tobacco industry created doubt and kept the controversy alive well past the point of scientific consensus. Oreskes and Conway call this the Tobacco Strategy. And the same playbook is being run all over again, this time with global warming.


The goal of the Tobacco Strategy is to create doubt about a causal link — between smoking and cancer, or emissions and warming — in order to protect the interests of incumbents.

Millions of pages of documents released during tobacco litigation demonstrate these links. They show the crucial role that scientists played in sowing doubt about the links between smoking and health risks. These documents— which have scarcely been studied except by lawyers and a handful of academics— also show that the same strategy was applied not only to global warming, but to a laundry list of environmental and health concerns, including asbestos, secondhand smoke, acid rain, and the ozone hole.

Interestingly, not only are the tactics the same when it comes to Global Warming, but so are the people.

They used their scientific credentials to present themselves as authorities, and they used their authority to try to discredit any science they didn’t like.

Over the course of more than twenty years, these men did almost no original scientific research on any of the issues on which they weighed in. Once they had been prominent researchers, but by the time they turned to the topics of our story, they were mostly attacking the work and the reputations of others. In fact, on every issue, they were on the wrong side of the scientific consensus. Smoking does kill— both directly and indirectly. Pollution does cause acid rain. Volcanoes are not the cause of the ozone hole. Our seas are rising and our glaciers are melting because of the mounting effects of greenhouse gases in the atmosphere, produced by burning fossil fuels. Yet, for years the press quoted these men as experts, and politicians listened to them, using their claims as justification for inaction.

December 15, 1953, was a fateful day. A few months earlier, researchers at the Sloan-Kettering Institute in New York City had demonstrated that cigarette tar painted on the skin of mice caused fatal cancers. This work had attracted an enormous amount of press attention: the New York Times and Life magazine had both covered it, and Reader’s Digest— the most widely read publication in the world— ran a piece entitled “Cancer by the Carton.” Perhaps the journalists and editors were impressed by the scientific paper’s dramatic concluding sentences: “Such studies, in view of the corollary clinical data relating smoking to various types of cancer, appear urgent. They may not only result in furthering our knowledge of carcinogens, but in promoting some practical aspects of cancer prevention.”

These findings, however, shouldn’t have been a surprise. We’re often blinded by a ‘bad people can do no right’ line of thought.

German scientists had shown in the 1930s that cigarette smoking caused lung cancer, and the Nazi government had run major antismoking campaigns; Adolf Hitler forbade smoking in his presence. However, the German scientific work was tainted by its Nazi associations, and to some extent ignored, if not actually suppressed, after the war; it had taken some time to be rediscovered and independently confirmed. Now, however, American researchers— not Nazis— were calling the matter “urgent,” and the news media were reporting it.  “Cancer by the carton” was not a slogan the tobacco industry would embrace.


With the mounting evidence, the tobacco industry was thrown into a panic.


So industry executives made a fateful decision, one that would later become the basis on which a federal judge would find the industry guilty of conspiracy to commit fraud— a massive and ongoing fraud to deceive the American public about the health effects of smoking. The decision was to hire a public relations firm to challenge the scientific evidence that smoking could kill you.

On that December morning (December 15th), the presidents of four of America’s largest tobacco companies— American Tobacco, Benson and Hedges, Philip Morris, and U.S. Tobacco— met at the venerable Plaza Hotel in New York City. The French Renaissance chateau-style building— in which unaccompanied ladies were not permitted in its famous Oak Room bar— was a fitting place for the task at hand: the protection of one of America’s oldest and most powerful industries. The man they had come to meet was equally powerful: John Hill, founder and CEO of one of America’s largest and most effective public relations firms, Hill and Knowlton.

The four company presidents— as well as the CEOs of R. J. Reynolds and Brown and Williamson— had agreed to cooperate on a public relations program to defend their product. They would work together to convince the public that there was “no sound scientific basis for the charges,” and that the recent reports were simply “sensational accusations” made by publicity-seeking scientists hoping to attract more funds for their research. They would not sit idly by while their product was vilified; instead, they would create a Tobacco Industry Committee for Public Information to supply a “positive” and “entirely ‘pro-cigarette’” message to counter the anti-cigarette scientific one. As the U.S. Department of Justice would later put it, they decided “to deceive the American public about the health effects of smoking.”

At first, the companies didn’t think they needed to fund new scientific research, thinking it would be sufficient to “disseminate information on hand.” John Hill disagreed, “emphatically warn[ing] … that they should … sponsor additional research,” and that this would be a long-term project. He also suggested including the word “research” in the title of their new committee, because a pro-cigarette message would need science to back it up. At the end of the day, Hill concluded, “scientific doubts must remain.” It would be his job to ensure it.

Over the next half century, the industry did what Hill and Knowlton advised. They created the “Tobacco Industry Research Committee” to challenge the mounting scientific evidence of the harms of tobacco. They funded alternative research to cast doubt on the tobacco-cancer link. They conducted polls to gauge public opinion and used the results to guide campaigns to sway it. They distributed pamphlets and booklets to doctors, the media, policy makers, and the general public insisting there was no cause for alarm.

The industry’s position was that there was “no proof” that tobacco was bad, and they fostered that position by manufacturing a “debate,” convincing the mass media that responsible journalists had an obligation to present “both sides” of it.

Of course there was more to it than that.

The industry did not leave it to journalists to seek out “all the facts.” They made sure they got them. The so-called balance campaign involved aggressive dissemination and promotion to editors and publishers of “information” that supported the industry’s position. But if the science was firm, how could they do that? Was the science firm?

The answer is yes, but. A scientific discovery is not an event; it’s a process, and often it takes time for the full picture to come into clear focus.  By the late 1950s, mounting experimental and epidemiological data linked tobacco with cancer— which is why the industry took action to oppose it. In private, executives acknowledged this evidence. In hindsight it is fair to say— and science historians have said— that the link was already established beyond a reasonable doubt. Certainly no one could honestly say that science showed that smoking was safe.

But science involves many details, many of which remained unclear, such as why some smokers get lung cancer and others do not (a question that remains incompletely answered today). So some scientists remained skeptical.


The industry made its case in part by cherry-picking data and focusing on unexplained or anomalous details. No one in 1954 would have claimed that everything that needed to be known about smoking and cancer was known, and the industry exploited this normal scientific honesty to spin unreasonable doubt.


The industry had realized that you could create the impression of controversy simply by asking questions, even if you actually knew the answers and they didn’t help your case. And so the industry began to transmogrify emerging scientific consensus into raging scientific “debate.”

Merchants of Doubt is a fascinating look at how the process for sowing doubt in the minds of people remains the same today as it was in the 1950s. After all, if it ain’t broke, don’t fix it.

Karl Popper on The Line Between Science and Pseudoscience

It’s not immediately clear, to the layman, what the essential difference is between science and something masquerading as science: pseudoscience. The distinction gets at the core of what comprises human knowledge: How do we actually know something to be true? Is it simply because our powers of observation tell us so? Or is there more to it?

Sir Karl Popper (1902–1994), the philosopher of science, was interested in the same problem. How do we actually define the scientific process? How do we know which theories can be said to be truly explanatory?


He began addressing it in a lecture, which is printed in the book Conjectures and Refutations: The Growth of Scientific Knowledge (also available online):

When I received the list of participants in this course and realized that I had been asked to speak to philosophical colleagues I thought, after some hesitation and consultation, that you would probably prefer me to speak about those problems which interest me most, and about those developments with which I am most intimately acquainted. I therefore decided to do what I have never done before: to give you a report on my own work in the philosophy of science, since the autumn of 1919 when I first began to grapple with the problem, ‘When should a theory be ranked as scientific?’ or ‘Is there a criterion for the scientific character or status of a theory?’

Popper saw a problem with the number of theories he considered non-scientific that, on their surface, seemed to have a lot in common with good, hard, rigorous science. But the question of how we decide which theories are compatible with the scientific method, and which are not, was harder than it seemed.


It is most common to say that science is done by collecting observations and grinding out theories from them. Charles Darwin once said, after working long and hard on the problem of the Origin of Species,

My mind seems to have become a kind of machine for grinding general laws out of large collections of facts.

This is a popularly accepted notion. We observe, observe, and observe, and we look for theories to best explain the mass of facts. (Although even this is not really true: Popper points out that we must start with some a priori knowledge to be able to generate new knowledge. Observation is always done with some hypotheses in mind–we can’t understand the world from a totally blank slate. More on that another time.)

The problem, as Popper saw it, is that some bodies of knowledge more properly named pseudosciences would be considered scientific if the “Observe & Deduce” operating definition were left alone. For example, a believing astrologist can ably provide you with “evidence” that their theories are sound. The biographical information of a great many people can be explained this way, they’d say.

The astrologist would tell you, for example, about how “Leos” seek to be the centre of attention; ambitious, strong, seeking the limelight. As proof, they might follow up with a host of real-life Leos: World-leaders, celebrities, politicians, and so on. In some sense, the theory would hold up. The observations could be explained by the theory, which is how science works, right?

Sir Karl ran into this problem in a concrete way because he lived during a time when psychoanalytic theories were all the rage at just the same time Einstein was laying out a new foundation for the physical sciences with the concept of relativity. What made Popper uncomfortable were comparisons between the two. Why did he feel so uneasy putting Marxist theories and Freudian psychology in the same category of knowledge as Einstein’s Relativity? Did all three not have vast explanatory power in the world? Each theory’s proponents certainly believed so, but Popper was not satisfied.

It was during the summer of 1919 that I began to feel more and more dissatisfied with these three theories–the Marxist theory of history, psychoanalysis, and individual psychology; and I began to feel dubious about their claims to scientific status. My problem perhaps first took the simple form, ‘What is wrong with Marxism, psycho-analysis, and individual psychology? Why are they so different from physical theories, from Newton’s theory, and especially from the theory of relativity?’

I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory.

Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still ‘un-analysed’ and crying aloud for treatment.

Here was the salient problem: The proponents of these new sciences saw validations and verifications of their theories everywhere. If you were having trouble as an adult, it could always be explained by something your mother or father had done to you when you were young, some repressed something-or-other that hadn’t been analysed and solved. They were confirmation bias machines.

What was the missing element? Popper had figured it out before long: The non-scientific theories could not be falsified. They were not testable in a legitimate way. There was no possible objection that could be raised which would show the theory to be wrong.

In a true science, the following statement can be easily made: “If X happens, it would show demonstrably that theory Y is not true.” We can then design an experiment, a physical one or sometimes a simple thought experiment, to figure out whether X actually does happen. It’s the opposite of looking for verification; you must try to show the theory is incorrect, and if you fail to do so, you thereby strengthen it.

Pseudosciences cannot and do not do this–they are not strong enough to hold up. As an example, Popper discussed Freud’s theories of the mind in relation to Alfred Adler’s so-called “individual psychology,” which was popular at the time:

I may illustrate this by two very different examples of human behaviour: that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child. Each of these two cases can be explained with equal ease in Freudian and in Adlerian terms. According to Freud the first man suffered from repression (say, of some component of his Oedipus complex), while the second man had achieved sublimation. According to Adler the first man suffered from feelings of inferiority (producing perhaps the need to prove to himself that he dared to commit some crime), and so did the second man (whose need was to prove to himself that he dared to rescue the child). I could not think of any human behaviour which could not be interpreted in terms of either theory. It was precisely this fact–that they always fitted, that they were always confirmed–which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness.

Popper contrasted these theories against Relativity, which made specific, verifiable predictions, giving the conditions under which the predictions could be shown false. It turned out that Einstein’s predictions came to be true when tested, thus verifying the theory through attempts to falsify it. But the essential nature of the theory gave grounds under which it could have been wrong. To this day, physicists seek to figure out where Relativity breaks down in order to come to a more fundamental understanding of physical reality. And while the theory may eventually be proven incomplete or a special case of a more general phenomenon, it has still made accurate, testable predictions that have led to practical breakthroughs.

Thus, in Popper’s words, science requires testability: “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.”  This means a good theory must have an element of risk to it. It must be able to be proven wrong under stated conditions.

From there, Popper laid out his essential conclusions, which are useful to any thinker trying to figure out if a theory they hold dear is something that can be put in the scientific realm:

1. It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

3. Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

4. A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence’.)

7. Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a ‘conventionalist twist’ or a ‘conventionalist stratagem’.)

One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.
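Popper’s criterion can be caricatured in a few lines of code (a toy illustration of my own, not from Popper): treat a theory as a rule that says which observations it permits. A theory is falsifiable only if some conceivable observation would be forbidden by it — a theory that “explains everything” forbids nothing, and so fails the test.

```python
# Toy model: a theory is a predicate over observations, returning True
# if the observation is permitted by the theory.

def falsifiable(theory, conceivable_observations):
    """A theory is falsifiable if it forbids at least one conceivable outcome."""
    return any(not theory(obs) for obs in conceivable_observations)

# A "risky" theory: all swans are white. It forbids non-white swans.
def all_swans_white(obs):
    return not (obs["swan"] and obs["color"] != "white")

# An unfalsifiable "theory": compatible with every possible observation.
def explains_everything(obs):
    return True

observations = [
    {"swan": True, "color": "white"},
    {"swan": True, "color": "black"},   # this one would refute the swan theory
    {"swan": False, "color": "grey"},
]

print(falsifiable(all_swans_white, observations))      # True: a black swan refutes it
print(falsifiable(explains_everything, observations))  # False: nothing could refute it
```

The point of the sketch is Popper’s item 4 above: the swan theory is “better” precisely because it sticks its neck out and prohibits something; the all-accommodating theory can never be caught being wrong, which is a vice, not a virtue.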

Finally, Popper was careful to say that it is not possible to prove that Freudianism was not true, at least in part. But we can say that we simply don’t know whether it’s true because it does not make specific testable predictions. It may have many kernels of truth in it, but we can’t tell. The theory would have to be restated.

This is the essential “line of demarcation,” as Popper called it, between science and pseudoscience.