
Towards a Greater Synthesis: Steven Pinker on How to Apply Science to the Humanities

The fundamental idea behind Farnam Street is to learn to think across disciplines and synthesize, using ideas in combination to solve problems in novel ways.

An easy example would be to take a fundamental idea of psychology like the concept of a near-miss (deprival super-reaction) and use it to help explain the success of a gambling enterprise. Or, similarly, using the idea of the endowment effect to help explain why lotteries are a lot more successful if you allow people to choose their own numbers. Sometimes we take ideas from hard science, like the idea of runaway feedback (think of a nuclear reaction gaining steam), to explain why small problems can become large problems or small advantages can become large ones.
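To make the runaway-feedback example a little more concrete, here is one simple way to write it down (my own illustration, not from the article): if each period's change is proportional to the current size, then

    x_{t+1} = (1 + r) \, x_t \quad\Rightarrow\quad x_t = x_0 (1 + r)^t

so for any positive feedback rate r, a small initial advantage or problem x_0 grows geometrically rather than linearly, which is why it can snowball.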

This kind of reductionism and synthesis helps one understand the world at a fundamental level and solve new problems.

We’re sometimes asked about untapped ways that this thinking can be applied. In hearing this, it occasionally seems that people fall into the trap of believing all of the great cross-disciplinary thinking has been done. Or maybe even that all of the great thinking has been done, period.


Harvard psychologist Steven Pinker is here to say we have a long way to go.

We’ve written before about Pinker’s ideas on a broad education and on writing, but he’s also got a great essay on Edge.org called Writing in the 21st Century wherein he addresses some of the central concepts of his book on writing — The Sense of Style. While the book’s ideas are wonderful, later in the article he moves to a more general point useful for our purposes: Systematic application of the “harder” sciences to the humanities is a huge untapped source of knowledge.

He provides some examples that are fascinating in their potential:

This combination of science and letters is emblematic of what I hope to be a larger trend we spoke of earlier, namely the application of science, particularly psychology and cognitive science, to the traditional domains of humanities. There’s no aspect of human communication and cultural creation that can’t benefit from a greater application of psychology and the other sciences of mind. We would have an exciting addition to literary studies, for example, if literary critics knew more about linguistics. Poetry analysts could apply phonology (the study of sound structure) and the cognitive psychology of metaphor. An analysis of plot in fiction could benefit from a greater understanding of the conflicts and confluences of ultimate interests in human social relationships. The genre of biography would be deepened by an understanding of the nature of human memory, particularly autobiographical memory. How much of the memory of our childhood is confabulated? Memory scientists have a lot to say about that. How much do we polish our image of ourselves in describing ourselves to others, and more importantly, recollecting our own histories? Do we edit our memories in an Orwellian manner to make ourselves more coherent in retrospect? Syntax and semantics are relevant as well. How does a writer use the tense system of English to convey a sense of immediacy or historical distance?

In music the sciences of auditory and speech perception have much to contribute to understanding how musicians accomplish their effects. The visual arts could revive an old method of analysis going back to Ernst Gombrich and Rudolf Arnheim in collaboration with the psychologist Richard Gregory. Indeed, even the art itself in the 1920s was influenced by psychology, thanks in part to Gertrude Stein, who as an undergraduate student of William James did a wonderful thesis on divided attention, and then went to Paris and brought the psychology of perception to the attention of artists like Picasso and Braque. Gestalt psychology may have influenced Paul Klee and the expressionists. Since then we have lost that wonderful synergy between the science of visual perception and the creation of visual art.

Going beyond the arts, the social sciences, such as political science could benefit from a greater understanding of human moral and social instincts, such as the psychology of dominance, the psychology of revenge and forgiveness, and the psychology of gratitude and social competition. All of them are relevant, for example, to international negotiations. We talk about one country being friendly to another or allying or competing, but countries themselves don’t have feelings. It’s the elites and leaders who do, and a lot of international politics is driven by the psychology of its leaders.

In this short section alone, Pinker realistically suggests that we could apply:

  • Linguistics to literature
  • Phonology and psychology to poetry
  • The biology of groups to understand fiction
  • The biology of memory to understand biography
  • Semantics to understand historical writing
  • Psychology and biology to understand art and music
  • Psychology and biology to understand politics

Turns out, there’s a huge amount of thinking left to be done. Effectively, Pinker is asking us to imitate the scientist Linus Pauling, who sought to systematically understand chemistry by using the next most fundamental discipline, physics, an approach that led to great breakthroughs and to a consilience of knowledge between the two fields that is now taken for granted in modern science.

Towards a Greater Synthesis

Even if we’re not trying to make great scientific advances, think about how we could apply this idea to all of our lives. Fields like basic mathematics, statistics, biology, physics, and psychology provide deep insight into the “higher level” functions of humanity like law, medicine, politics, business, and social groups. Or, as Munger has put it, “When you get down to it, you’ll find worldly wisdom looks pretty darn academic.” And it isn’t as hard as it sounds: We don’t need to understand the deep math of relativity to grasp the idea that two observers can see the same event in a different way depending on perspective. The rest of the world’s models are similar, although having some mathematical fluency is necessary.

Pinker, like Munger, doesn’t stop there. He also believes in what Munger calls the ethos of hard science, which is a way of rigorously considering the problems of the practical world.

Even beyond applying the findings of psychology and cognitive science and social and affective neuroscience, it’s the mindset of science that ought to be exported to cultural and intellectual life as a whole. That consists in increased skepticism and scrutiny about factual conventional wisdom: How much of what you think is true really is true if you go to the numbers? For me this has been a salient issue in analyzing violence, because the conventional wisdom is that we’re living in extraordinarily violent times.

But if you take into account the psychology of risk perception, as pioneered by Daniel Kahneman, Amos Tversky, Paul Slovic, Gerd Gigerenzer, and others, you realize that the conventional wisdom is systematically distorted by the source of our information about the world, namely the news. News is about the stuff that happens; it’s not about the stuff that doesn’t happen. Human risk perception is affected by memorable examples, according to Tversky and Kahneman’s availability heuristic. No matter what the rate of violence is objectively, there are always enough examples to fill the news. And since our perception of risk is influenced by memorable examples, we’ll always think we’re living in violent times. It’s only when you apply the scientific mindset to world events, to political science and history, and try to count how many people are killed now as opposed to ten years ago, a hundred years ago, or a thousand years ago that you get an accurate picture about the state of the world and the direction that it’s going, which is largely downward. That conclusion only came from applying an empirical mindset to the traditional subject matter of history and political science.

Nassim Taleb has been on a similar hunt for a long time (although, amusingly, he doesn’t like Pinker’s book on violence at all). The question is relatively straightforward: How do we know what we know? Traditionally, what we know has simply been based on what we can see, something now called the availability bias. In other words, because we see our grandmother live to 95 years old while eating carrots every day, we think carrots prevent cancer. (A conflation of correlation and causation.)

But Pinker and Taleb call for a higher standard called empiricism, which requires pushing beyond anecdote into an accumulation of sound data to support a theory, with disconfirming examples weighted as heavily as confirming ones. This shift from anecdote to empiricism led humanity to make some of its greatest leaps of understanding, yet we’re still falling into the trap regularly, an outcome which itself can be explained by evolutionary biology and modern psychology. (Hint: It’s in the deep structure of our minds to extrapolate.)

Learning to Ask Why

Pinker continues with a claim that Munger would dearly appreciate: The search for explanations is how we push into new ideas. The deeper we push, the better we understand.

The other aspect of the scientific mindset that ought to be exported to the rest of intellectual life is the search for explanations. That is, not to just say that history is one damn thing after another, that stuff happens, and there’s nothing we can do to explain why, but to relate phenomena to more basic or general phenomena … and to try to explain those phenomena with still more basic phenomena. We’ve repeatedly seen that happen in the sciences, where, for example, biological phenomena were explained in part at the level of molecules, which were explained by chemistry, which was explained by physics.

There’s no reason that this process of explanation can’t continue. Biology gives us a grasp of the brain, and human nature is a product of the organization of the brain, and societies unfold as they do because they consist of brains interacting with other brains and negotiating arrangements to coordinate their behavior, and so on.

This idea certainly takes heat. The biologist E.O. Wilson calls it Consilience, and has gone as far as saying that all human knowledge can eventually be reduced to extreme fundamentals like mathematics and particle physics. (Leading to something like The Atomic Explanation of the Civil War.)

Whether or not you take it to such an extreme depends on your boldness and your confidence in the mental acuity of human beings. But even if you think Wilson is crazy, you can still learn deeply from the more fundamental knowledge in the world. This push to reduce things to their simplest explanations (but not simpler) is how we array all new knowledge and experience on a latticework of mental models.

For example, instead of taking Warren Buffett’s dictum that markets are irrational on its face, try to understand why. What about human nature and the dynamics of human groups leads to that outcome? What about biology itself leads to human nature? And so on. You’ll eventually hit a wall, that’s a certainty, but the further you push, the more fundamentally you understand the world. Elon Musk calls this first principles thinking and credits it with helping him do things in engineering and business that almost everyone considered impossible.

***

From there, Pinker concludes with a thought that hits near and dear to our hearts:

There is no “conflict between the sciences and humanities,” or at least there shouldn’t be. There should be no turf battle as to who gets to speak about what matters. What matters are ideas. We should seek the ideas that give us the deepest, richest, best-informed understanding of the human condition, regardless of which people or what discipline originates them. That has to include the sciences, but it can’t come only from the sciences. The focus should be on ideas, not on people, disciplines, or academic traditions.


Still Interested?
Start building your mental models and read some more Pinker for more goodness.

The Central Mistake of Historicism: Karl Popper on Why Trend is Not Destiny

Philosophy can be a little dry in concept. The word itself conjures up images of thinking about thought, why we exist, and other metaphysical ideas that seem a little divorced from the everyday world.

One true philosopher who bucked the trend was the genius Austrian philosopher of science, Karl Popper.

Popper had at least three important lines of inquiry:

  1. How does progressive scientific thought actually happen?
  2. What type of society do we need to allow for scientific progress to be made?
  3. What can we say we actually know about the world?

Popper’s work led to his idea of falsifiability as the main criterion of a scientific theory. Simply put, an idea or theory doesn’t enter the realm of science until we can state it in such a way that a test could prove it wrong. This important identifier allowed him to distinguish between science and pseudoscience.

An interesting piece of Popper’s work was an attack on what he called historicism — the idea that history has fixed laws or trends that inevitably lead to certain outcomes. Included would be the Marxist interpretation of human history as a push and pull between classes, the Platonic ideals of the systemic “rise and fall” of cities and societies in a fundamentally predictable way, John Stuart Mill’s laws of succession, and even the theory that humanity inevitably progresses towards a “better” and happier outcome, however defined. Modern ideas in this category might well include Thomas Piketty’s theory of how capitalism leads to an accumulation of dangerous inequality, the “inevitability” of America’s fall from grace in the fashion of the Roman empire, or even Russell Brand’s popular diatribe on utopian upheaval from a few years back.

Popper considered this kind of thinking pseudoscience, or worse — a dangerous ideology that tempts wannabe state planners and utopians to control society. (Perhaps through violent revolution, for example.) He did not consider such historicist doctrines falsifiable. There is no way, for example, to test whether Marxist theory is actually true or not, even in a thought experiment. We must simply take it on faith, based on a certain interpretation of history, that the bourgeoisie and the proletariat are at odds, and that the latter is destined to create uprisings. (Destined being the operative word — it implies inevitability.) If we’re to assert that there is a Law of Increasing Technological Complexity in human society, which many are tempted to do these days, is that actually a testable hypothesis? Too frequently, these Laws become immune to falsifying evidence — any new evidence is interpreted through the lens of the theory. Instead of calling them interpretations, we call them Laws, or some similarly connotative word.

More deeply, Popper realized the important point that history is a unique process — it only gets run once. We can’t derive Laws of History that predict the future the way we can with, say, a law of physics that carries predictive capability under stated conditions. (e.g., If I drop a ceramic coffee cup more than 2 feet, it will shatter.) We can merely deduce some tendencies of human nature, laws of the physical world, and so on, and generate some reasonable expectation that if X happens, Y is somewhat likely to follow. But viewing the process of human or organic history as possessing the regularity of a solar system is folly.

He discusses this in his book The Poverty of Historicism.

The evolution of life on earth, or of a human society, is a unique historical process. Such a process, we may assume, proceeds in accordance with all kinds of causal laws, for example, the laws of mechanics, of chemistry, of heredity and segregation, of natural selection, etc. Its description, however, is not a law, but only a single historical statement. Universal laws make assertions concerning some unvarying order[…] and although there is no reason why the observation of one single instance should not incite us to formulate a universal law, nor why, if we are lucky, we should not even hit upon the truth, it is clear that any law, formulated in this or in any other way, must be tested by new instances before it can be taken seriously by science. But we cannot hope to test a universal hypothesis nor to find a natural law acceptable to science if we are ever confined to the observation of one unique process. Nor can the observation of one unique process help us to foresee its future development. The most careful observation of one developing caterpillar will not help us to predict its transformation into a butterfly.

Popper realized that once we deduce a theory of the Laws of Human Development, carried into the ever-after, we are led into a gigantic confirmation bias problem. For example, we can certainly find confirmations for the idea that humans have progressed, in a specifically defined way, towards increasing technological complexity. But is that a Law of history, in the inviolable sense? For that, we really can’t say.

The problem is that to establish cause-and-effect, in a scientific sense, requires two things: A universal law (or a set of them) and some initial conditions (and ideally these are played out over a really large sample size to give us confidence). Popper explains:

I suggest that to give a causal explanation of a certain specific event means deducing a statement describing this event from two kinds of premises: from some universal laws, and from some singular or specific statements which we may call specific initial conditions.

For example, we can say that we have given a causal explanation of the breaking of a certain thread if we find this thread could carry a weight of only one pound, and that a weight of two pounds was put on it. If we analyze this causal explanation, then we find that two different constituents are involved. (1) Some hypotheses of the character of universal laws of nature; in this case, perhaps: ‘For every thread of a given structure s (determined by material, thickness, etc.) there is a characteristic weight w such that the thread will break if any weight exceeding w is suspended on it’ and ‘For every thread of the structure s, the characteristic weight w equals one pound.’ (2) Some specific statements—the initial conditions—pertaining to the particular event in question; in this case we may have two such statements: ‘This is a thread of structure s’, and ‘The weight put on this thread was a weight of two pounds’.
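To make the structure of that explanation explicit, here is a minimal sketch in code (my own illustration of Popper's two-part schema, not anything from the book; the function name and values simply mirror his thread example), in which a universal law plus singular initial conditions deduce the specific event:

    # A toy rendering of Popper's schema: universal law + initial conditions => event.

    def breaks(characteristic_weight_lb: float, applied_weight_lb: float) -> bool:
        """Universal law (hypothetical): a thread of structure s breaks whenever the
        weight suspended on it exceeds its characteristic weight w."""
        return applied_weight_lb > characteristic_weight_lb

    # Initial conditions: singular statements about this particular thread and event.
    w = 1.0               # 'for every thread of structure s, the characteristic weight w equals one pound'
    applied_weight = 2.0  # 'the weight put on this thread was a weight of two pounds'

    # The event to be explained follows deductively from the law plus the conditions.
    print(breaks(w, applied_weight))  # True -> the thread breaks

Change either the law or the initial conditions and the deduced event changes with them, which is exactly Popper's point: a trend that quietly assumes fixed conditions is not a law.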

The Trend Is Not Destiny

Here we hit on the problem of trying to assert any fundamental laws by which human history must inevitably progress. Trend is not destiny. Even if we can derive and understand certain laws of human biological nature, the trends of history itself depend on conditions, and conditions change.

Explained trends do exist, but their persistence depends on the persistence of certain specific initial conditions (which in turn may sometimes be trends).

Mill and his fellow historicists overlook the dependence of trends on initial conditions. They operate with trends as if they were unconditional, like laws. Their confusion of laws with trends makes them believe in trends which are unconditional (and therefore general); or, as we may say, in ‘absolute trends’; for example a general historical tendency towards progress—‘a tendency towards a better and happier state’. And if they at all consider a ‘reduction’ of their tendencies to laws, they believe that these tendencies can be immediately derived from universal laws alone, such as the laws of psychology (or dialectical materialism, etc.).

This, we may say, is the central mistake of historicism. Its “laws of development” turn out to be absolute trends; trends which, like laws, do not depend on initial conditions, and which carry us irresistibly in a certain direction into the future. They are the basis of unconditional prophecies, as opposed to conditional scientific predictions.

[…]

The point is that these (initial) conditions are so easily overlooked. There is, for example, a trend towards an ‘accumulation of means of production’ (as Marx puts it). But we should hardly expect it to persist in a population which is rapidly decreasing; and such a decrease may in turn depend on extra-economic conditions, for example, on chance interventions, or conceivably on the direct physiological (perhaps bio-chemical) impact of an industrial environment. There are, indeed, countless possible conditions; and in order to be able to examine these possibilities in our search for the true conditions of the trend, we have all the time to try to imagine conditions under which the trend in question would disappear. But this is just what the historicist cannot do. He firmly believes in his favorite trend, and conditions under which it would disappear to him are unthinkable. The poverty of historicism, we might say, is a poverty of imagination. The historicist continuously upbraids those who cannot imagine a change in their little worlds; yet it seems that the historicist is himself deficient in imagination, for he cannot imagine a change in the conditions of change.

Still interested? Check out our previous post on Popper’s theory of falsification, or check out The Poverty of Historicism to explore his idea more deeply. A warning: It’s not a beach read. I had to read it twice to get the basic idea. But, once grasped, it’s well worth the time.

The Boundaries Between Science and Religion: Alan Lightman on Different Kinds of Knowledge

“The physical universe is subject to rational analysis and the methods of science. The spiritual universe is not. All of us have had experiences that are not subject to rational analysis. Besides religion, much of our art and our values and our personal relationships with other people spring from such experiences.”

***

Alan Lightman, who wrote a beautiful meditation on our yearning for permanence in a universe that offers none, looks at the tension between science and religion in The Accidental Universe: The World You Thought You Knew.

In the essay “The Spiritual Universe,” Lightman attempts to reconcile his personal struggle between religion and science. In so doing, he sets out the criteria necessary for science to be compatible with religion:

The first step in this journey is to state what I will call the central doctrine of science: All properties and events in the physical universe are governed by laws, and those laws are true at every time and place in the universe. Although scientists do not talk explicitly about this doctrine, and my doctoral thesis adviser never mentioned it once to his graduate students, the central doctrine is the invisible oxygen that most scientists breathe. We do not, of course, know all the fundamental laws at the present time. But most scientists believe that a complete set of such laws exists and, in principle, that it is discoverable by human beings, just as nineteenth-century explorers believed in the North Pole although no one had yet reached it.

Our knowledge of scientific laws is provisional. We do not know all the laws but we believe in a complete set of them. We further believe, in principle anyway, that humans will uncover these laws. An example of a scientific law is the conservation of energy.

The total amount of energy in a closed system remains constant. The energy in an isolated container may change form, as when the chemical energy latent in a fresh match changes into the heat and light energy of a burning flame— but, according to the law of the conservation of energy, the total amount of energy does not change.
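Stated symbolically (my own gloss, not Lightman's), the law says that for an isolated system the sum of all forms of energy is fixed:

    E_{chemical} + E_{heat} + E_{light} + \dots = E_{total} = \text{constant}

The individual terms can trade off against one another, as with the burning match, but their total cannot change.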

Even scientific laws that we already know about are updated and refined over time. Lightman offers as an example the replacement of Newton’s law of gravity (1687) by Einstein’s deeper and more accurate law of gravity (1915). These revisions are part of the very fabric of science.

Next, Lightman provides a working definition of God.

I would not pretend to know the nature of God, if God does indeed exist, but for the purposes of this discussion, and in agreement with almost all religions, I think we can safely say that God is understood to be a Being not restricted by the laws that govern matter and energy in the physical universe. In other words, God exists outside matter and energy. In most religions, this Being acts with purpose and will, sometimes violating existing physical law (that is, performing miracles), and has additional qualities such as intelligence, compassion, and omniscience.

Lightman then offers a continuum of religious beliefs based on the degree to which God acts in the world. At one end is atheism — denying the existence of God altogether. Moving along the spectrum, we find deism, a view prominent in the seventeenth and eighteenth centuries, which holds that God created the universe but has not acted since that initial spark.

Voltaire was a deist. As God’s role expands we find immanentism, which holds that God created the universe and its scientific laws. Under this view, God continues to act through the repeated application of those laws. We can probably put Einstein in the immanentism camp. (Philosophically both deism and immanentism are similar because God does not perform miracles.)

Opposite atheism lies interventionism. Most religions, including Christianity, Judaism, Islam, and Hinduism, subscribe to this view, which is that God created the universe and its laws and occasionally violates the laws to create unpredictable results.

Lightman argues that all of these views, except interventionism, agree with science.

Starting with these axioms, we can say that science and God are compatible as long as the latter is content to stand on the sidelines once the universe has begun. A God that intervenes after the cosmic pendulum has been set into motion, violating the physical laws, would clearly upend the central doctrine of science.

Lightman cites Francis Collins, who offers some thoughtful advice on reconciling a belief in an interventionist God and science, or at least, deciding which to turn to for answers to the right kinds of questions. They are often very different.

“I’ve not had a problem reconciling science and faith since I became a believer at age 27 … if you limit yourself to the kinds of questions that science can ask, you’re leaving out some other things that I think are also pretty important, like why are we here and what’s the meaning of life and is there a God? Those are not scientific questions.”

Under this reconciliation, miracles cannot be analyzed by the methods of science. This is an echo of Richard Feynman, who put it most clearly in one of his letters: science only tells us what will happen if we do something. Cause and effect. It doesn’t give us any guidance on the question of whether we should do it.

Lightman himself falls in the atheist camp.

I am an atheist myself. I completely endorse the central doctrine of science. And I do not believe in the existence of a Being who lives beyond matter and energy, even if that Being refrains from entering the fray of the physical world. However, I certainly agree with (Other Scientists) that science is not the only avenue for arriving at knowledge, that there are interesting and vital questions beyond the reach of test tubes and equations. Obviously, vast territories of the arts concern inner experiences that cannot be analyzed by science. The humanities, such as history and philosophy, raise questions that do not have definite or unanimously accepted answers.

And yet we must believe in things we cannot (yet) prove. Lightman himself believes in the central doctrine which cannot be proven. At most we can only say there is no evidence to contradict it. This is what Karl Popper called real science – a process by which we hypothesize and then attack our hypotheses. A scientific “fact” is one that has stood up to extraordinary scrutiny.

Much of life, and much of the meaning in the world, lies outside the scientific realm. These things are worth considering too.

I believe there are things we take on faith, without physical proof and even sometimes without any methodology for proof. We cannot clearly show why the ending of a particular novel haunts us. We cannot prove under what conditions we would sacrifice our own life in order to save the life of our child. We cannot prove whether it is right or wrong to steal in order to feed our family, or even agree on a definition of “right” and “wrong.” We cannot prove the meaning of our life, or whether life has any meaning at all. For these questions, we can gather evidence and debate, but in the end we cannot arrive at any system of analysis akin to the way in which a physicist decides how many seconds it will take a one-foot-long pendulum to make a complete swing. The previous questions are questions of aesthetics, morality, philosophy. These are questions for the arts and the humanities. These are also questions aligned with some of the intangible concerns of traditional religion.
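The pendulum calculation Lightman mentions is a good example of the kind of question science can settle outright. Using the standard small-angle formula (my worked example; the numbers are not Lightman's), the period of a simple pendulum is

    T = 2\pi \sqrt{L / g} \approx 2\pi \sqrt{0.305 \ \text{m} / 9.8 \ \text{m/s}^2} \approx 1.1 \ \text{seconds}

for a one-foot (0.305 m) pendulum: a definite answer anyone can derive and check.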

Lightman recalls his time as a grad student in physics and the concept of a “well-posed problem” — a question with “enough clarity and precision that it is guaranteed an answer.” Put another way, scientists are trained not to “waste time on questions that do not have clear and definite answers.” And yet questions without clear and definite answers are sometimes just as important. Just because we can’t apply the scientific method to them doesn’t mean we shouldn’t consider them.

[A]rtists and humanists often don’t care what the answer is because definite answers don’t exist to all interesting and important questions. Ideas in a novel or emotion in a symphony are complicated with the intrinsic ambiguity of human nature. That is why we can never fully understand why the highly sensitive Raskolnikov brutally murdered the old pawnbroker in Crime and Punishment, whether Plato’s ideal form of government could ever be realized in human society, whether we would be happier if we lived to be a thousand years old. For many artists and humanists, the question is more important than the answer.

The question is more important than the answer — just as the journey is more important than the destination and the process is more important than the outcome.

As the poet Rainer Maria Rilke put it a century ago: “We should try to love the questions themselves, like locked rooms and like books that are written in a very foreign tongue.”

“As human beings,” Lightman argues, “don’t we need questions without answers as well as questions with answers?”

The God Delusion, a widely read book by Richard Dawkins, uses modern tools to attack two common arguments for the existence of God: Intelligent Design (only an intelligent and powerful being could have designed the universe) and the claim that only the action and will of God can explain our morality and desire to help others. Dawkins convincingly shows that Earth could have arisen from the laws of nature and random processes, without the intervention of a supernatural and intelligent Designer. Our sense of morality and altruism could be a logical derivative of natural selection.

However, as Lightman reminds us, refuting or falsifying the arguments put forward to support a proposition does not necessarily falsify the proposition itself.

Science can never know what created our universe. Even if tomorrow we observed another universe spawned from our universe, as could hypothetically happen in certain theories of cosmology, we could not know what created our universe. And as long as God does not intervene in the contemporary universe in such a way as to violate physical laws, science has no way of knowing whether God exists or not. The belief or disbelief in such a Being is therefore a matter of faith.

Lightman is troubled by Dawkins’ wholesale dismissal of religion.

Faith, in its broadest sense, is about far more than belief in the existence of God or the disregard of scientific evidence. Faith is the willingness to give ourselves over, at times, to things we do not fully understand. Faith is the belief in things larger than ourselves. Faith is the ability to honor stillness at some moments and at others to ride the passion and exuberance that is the artistic impulse, the flight of the imagination, the full engagement with this strange and shimmering world.

Indeed, William & Ariel Durant have argued that we need religion; it is part of our fabric of understanding and living in the world.

***

With that, Lightman brings the essay to a beautiful conclusion.

The physical and spiritual universes each have their own domains and their own limitations. The question of the age of planet Earth, for example, falls squarely in the domain of science, since there are reliable tests we can perform, such as using the rate of disintegration of radioactive rocks, to determine a definitive answer. Such questions as “What is the nature of love?” or “Is it moral to kill another person in time of war?” or “Does God exist?” lie outside the bounds of science but fall well within the realm of religion. I am impatient with people who, like Richard Dawkins, try to disprove the existence of God with scientific arguments. Science can never prove or disprove the existence of God, because God, as understood by most religions, is not subject to rational analysis. I am equally impatient with people who make statements about the physical universe that violate physical evidence and the known laws of nature. Within the domain of the physical universe, science cannot hold sway on some days but not on others. Knowingly or not, we all depend on the consistent operation of the laws of nature in the physical universe day after day— for example, when we board an airplane, allow ourselves to be lofted thousands of feet in the air, and hope to land safely at the other end. Or when we stand in line to receive a vaccination against the next season’s influenza.

Some people believe that there is no distinction between the spiritual and physical universes, no distinction between the inner and the outer, between the subjective and the objective, between the miraculous and the rational. I need such distinctions to make sense of my spiritual and scientific lives. For me, there is room for both a spiritual universe and a physical universe, just as there is room for both religion and science. Each universe has its own power. Each has its own beauty, and mystery. A Presbyterian minister recently said to me that science and religion share a sense of wonder. I agree.

The Accidental Universe is a mind-bending read on the known and unknowable, offering a window into our universe and some of the profound questions of our time.

Merchants of Doubt: How the Tobacco Strategy Obscures the Realities of Global Warming

There will always be those who try to challenge a growing scientific consensus — indeed, such challenges are fundamental to science. Motives, however, matter, and not everyone has good intentions.

***

Naomi Oreskes and Erik Conway’s masterful work Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming was recommended by Elon Musk.

The book illuminates how the tobacco industry created doubt and kept the controversy alive well past scientific consensus. They call this the Tobacco Strategy. And the same playbook is being run all over again, this time with Global Warming.


The goal of the Tobacco Strategy is to create doubt about the causal link to protect the interests of incumbents.

Millions of pages of documents released during tobacco litigation demonstrate these links. They show the crucial role that scientists played in sowing doubt about the links between smoking and health risks. These documents— which have scarcely been studied except by lawyers and a handful of academics— also show that the same strategy was applied not only to global warming, but to a laundry list of environmental and health concerns, including asbestos, secondhand smoke, acid rain, and the ozone hole.

Interestingly, not only are the tactics the same when it comes to Global Warming, but so are the people.

They used their scientific credentials to present themselves as authorities, and they used their authority to try to discredit any science they didn’t like.

Over the course of more than twenty years, these men did almost no original scientific research on any of the issues on which they weighed in. Once they had been prominent researchers, but by the time they turned to the topics of our story, they were mostly attacking the work and the reputations of others. In fact, on every issue, they were on the wrong side of the scientific consensus. Smoking does kill— both directly and indirectly. Pollution does cause acid rain. Volcanoes are not the cause of the ozone hole. Our seas are rising and our glaciers are melting because of the mounting effects of greenhouse gases in the atmosphere, produced by burning fossil fuels. Yet, for years the press quoted these men as experts, and politicians listened to them, using their claims as justification for inaction.

December 15, 1953, was a fateful day. A few months earlier, researchers at the Sloan-Kettering Institute in New York City had demonstrated that cigarette tar painted on the skin of mice caused fatal cancers. This work had attracted an enormous amount of press attention: the New York Times and Life magazine had both covered it, and Reader’s Digest— the most widely read publication in the world— ran a piece entitled “Cancer by the Carton.” Perhaps the journalists and editors were impressed by the scientific paper’s dramatic concluding sentences: “Such studies, in view of the corollary clinical data relating smoking to various types of cancer, appear urgent. They may not only result in furthering our knowledge of carcinogens, but in promoting some practical aspects of cancer prevention.”

These findings, however, shouldn’t have been a surprise. We’re often blinded by a ‘bad people can do no right’ line of thought.

German scientists had shown in the 1930s that cigarette smoking caused lung cancer, and the Nazi government had run major antismoking campaigns; Adolf Hitler forbade smoking in his presence. However, the German scientific work was tainted by its Nazi associations, and to some extent ignored, if not actually suppressed, after the war; it had taken some time to be rediscovered and independently confirmed. Now, however, American researchers— not Nazis— were calling the matter “urgent,” and the news media were reporting it.  “Cancer by the carton” was not a slogan the tobacco industry would embrace.

 

With the mounting evidence, the tobacco industry was thrown into a panic.

 

So industry executives made a fateful decision, one that would later become the basis on which a federal judge would find the industry guilty of conspiracy to commit fraud— a massive and ongoing fraud to deceive the American public about the health effects of smoking. The decision was to hire a public relations firm to challenge the scientific evidence that smoking could kill you.

On that December morning (December 15th), the presidents of four of America’s largest tobacco companies— American Tobacco, Benson and Hedges, Philip Morris, and U.S. Tobacco— met at the venerable Plaza Hotel in New York City. The French Renaissance chateau-style building— in which unaccompanied ladies were not permitted in its famous Oak Room bar— was a fitting place for the task at hand: the protection of one of America’s oldest and most powerful industries. The man they had come to meet was equally powerful: John Hill, founder and CEO of one of America’s largest and most effective public relations firms, Hill and Knowlton.

The four company presidents— as well as the CEOs of R. J. Reynolds and Brown and Williamson— had agreed to cooperate on a public relations program to defend their product. They would work together to convince the public that there was “no sound scientific basis for the charges,” and that the recent reports were simply “sensational accusations” made by publicity-seeking scientists hoping to attract more funds for their research. They would not sit idly by while their product was vilified; instead, they would create a Tobacco Industry Committee for Public Information to supply a “positive” and “entirely ‘pro-cigarette’” message to counter the anti-cigarette scientific one. As the U.S. Department of Justice would later put it, they decided “to deceive the American public about the health effects of smoking.”

At first, the companies didn’t think they needed to fund new scientific research, thinking it would be sufficient to “disseminate information on hand.” John Hill disagreed, “emphatically warn[ing] … that they should … sponsor additional research,” and that this would be a long-term project. He also suggested including the word “research” in the title of their new committee, because a pro-cigarette message would need science to back it up. At the end of the day, Hill concluded, “scientific doubts must remain.” It would be his job to ensure it.

Over the next half century, the industry did what Hill and Knowlton advised. They created the “Tobacco Industry Research Committee” to challenge the mounting scientific evidence of the harms of tobacco. They funded alternative research to cast doubt on the tobacco-cancer link. They conducted polls to gauge public opinion and used the results to guide campaigns to sway it. They distributed pamphlets and booklets to doctors, the media, policy makers, and the general public insisting there was no cause for alarm.

The industry’s position was that there was “no proof” that tobacco was bad, and they fostered that position by manufacturing a “debate,” convincing the mass media that responsible journalists had an obligation to present “both sides” of it.

Of course there was more to it than that.

The industry did not leave it to journalists to seek out “all the facts.” They made sure they got them. The so-called balance campaign involved aggressive dissemination and promotion to editors and publishers of “information” that supported the industry’s position. But if the science was firm, how could they do that? Was the science firm?

The answer is yes, but. A scientific discovery is not an event; it’s a process, and often it takes time for the full picture to come into clear focus.  By the late 1950s, mounting experimental and epidemiological data linked tobacco with cancer— which is why the industry took action to oppose it. In private, executives acknowledged this evidence. In hindsight it is fair to say— and science historians have said— that the link was already established beyond a reasonable doubt. Certainly no one could honestly say that science showed that smoking was safe.

But science involves many details, many of which remained unclear, such as why some smokers get lung cancer and others do not (a question that remains incompletely answered today). So some scientists remained skeptical.

[…]

The industry made its case in part by cherry-picking data and focusing on unexplained or anomalous details. No one in 1954 would have claimed that everything that needed to be known about smoking and cancer was known, and the industry exploited this normal scientific honesty to spin unreasonable doubt.

[…]

The industry had realized that you could create the impression of controversy simply by asking questions, even if you actually knew the answers and they didn’t help your case. And so the industry began to transmogrify emerging scientific consensus into raging scientific “debate.”

Merchants of Doubt is a fascinating look at how the process for sowing doubt in the minds of people remains the same today as it was in the 1950s. After all, if it ain’t broke, don’t fix it.

Karl Popper on The Line Between Science and Pseudoscience

It’s not immediately clear, to the layman, what the essential difference is between science and something masquerading as science: pseudoscience. The distinction gets at the core of what comprises human knowledge: How do we actually know something to be true? Is it simply because our powers of observation tell us so? Or is there more to it?

Sir Karl Popper (1902-1994), the philosopher of science, was interested in the same problem. How do we actually define the scientific process? How do we know which theories can be said to be truly explanatory?


He began addressing it in a lecture, which is printed in the book Conjectures and Refutations: The Growth of Scientific Knowledge (also available online):

When I received the list of participants in this course and realized that I had been asked to speak to philosophical colleagues I thought, after some hesitation and consultation, that you would probably prefer me to speak about those problems which interest me most, and about those developments with which I am most intimately acquainted. I therefore decided to do what I have never done before: to give you a report on my own work in the philosophy of science, since the autumn of 1919 when I first began to grapple with the problem, ‘When should a theory be ranked as scientific?’ or ‘Is there a criterion for the scientific character or status of a theory?’

Popper saw a problem: a number of theories he considered non-scientific seemed, on their surface, to have a lot in common with good, hard, rigorous science. But the question of how we decide which theories are compatible with the scientific method, and which are not, was harder than it seemed.

***

It is most common to say that science is done by collecting observations and grinding out theories from them. Charles Darwin once said, after working long and hard on the problem of the Origin of Species,

My mind seems to have become a kind of machine for grinding general laws out of large collections of facts.

This is a popularly accepted notion. We observe, observe, and observe, and we look for theories to best explain the mass of facts. (Although even this is not really true: Popper points out that we must start with some a priori knowledge to be able to generate new knowledge. Observation is always done with some hypotheses in mind–we can’t understand the world from a totally blank slate. More on that another time.)

The problem, as Popper saw it, is that some bodies of knowledge more properly named pseudosciences would be considered scientific if the “Observe & Deduce” operating definition were left alone. For example, a believing astrologist can ably provide you with “evidence” that their theories are sound. The biographical information of a great many people can be explained this way, they’d say.

The astrologist would tell you, for example, about how “Leos” seek to be the centre of attention; ambitious, strong, seeking the limelight. As proof, they might follow up with a host of real-life Leos: World-leaders, celebrities, politicians, and so on. In some sense, the theory would hold up. The observations could be explained by the theory, which is how science works, right?

Sir Karl ran into this problem in a concrete way because he lived at a time when psychoanalytic theories were all the rage, just as Einstein was laying out a new foundation for the physical sciences with the concept of relativity. What made Popper uncomfortable were comparisons between the two. Why did he feel so uneasy putting Marxist theories and Freudian psychology in the same category of knowledge as Einstein’s Relativity? Did all three not have vast explanatory power in the world? Each theory’s proponents certainly believed so, but Popper was not satisfied.

It was during the summer of 1919 that I began to feel more and more dissatisfied with these three theories–the Marxist theory of history, psychoanalysis, and individual psychology; and I began to feel dubious about their claims to scientific status. My problem perhaps first took the simple form, ‘What is wrong with Marxism, psycho-analysis, and individual psychology? Why are they so different from physical theories, from Newton’s theory, and especially from the theory of relativity?’

I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory.

Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still ‘un-analysed’ and crying aloud for treatment.

Here was the salient problem: The proponents of these new sciences saw validations and verifications of their theories everywhere. If you were having trouble as an adult, it could always be explained by something your mother or father had done to you when you were young, some repressed something-or-other that hadn’t been analysed and solved. They were confirmation bias machines.

What was the missing element? Popper had figured it out before long: The non-scientific theories could not be falsified. They were not testable in a legitimate way. There was no possible objection that could be raised which would show the theory to be wrong.

In a true science, the following statement can be easily made: “If X happens, it would show demonstrably that theory Y is not true.” We can then design an experiment, a physical one or sometimes a simple thought experiment, to figure out if X actually does happen. It’s the opposite of looking for verification; you must try to show the theory is incorrect, and if you fail to do so, thereby strengthen it.

Pseudosciences cannot and do not do this–they are not strong enough to hold up. As an example, Popper discussed Freud’s theories of the mind in relation to Alfred Adler’s so-called “individual psychology,” which was popular at the time:

I may illustrate this by two very different examples of human behaviour: that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child. Each of these two cases can be explained with equal ease in Freudian and in Adlerian terms. According to Freud the first man suffered from repression (say, of some component of his Oedipus complex), while the second man had achieved sublimation. According to Adler the first man suffered from feelings of inferiority (producing perhaps the need to prove to himself that he dared to commit some crime), and so did the second man (whose need was to prove to himself that he dared to rescue the child). I could not think of any human behaviour which could not be interpreted in terms of either theory. It was precisely this fact–that they always fitted, that they were always confirmed–which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness.

Popper contrasted these theories against Relativity, which made specific, verifiable predictions, giving the conditions under which the predictions could be shown false. It turned out that Einstein’s predictions came to be true when tested, thus verifying the theory through attempts to falsify it. But the essential nature of the theory gave grounds under which it could have been wrong. To this day, physicists seek to figure out where Relativity breaks down in order to come to a more fundamental understanding of physical reality. And while the theory may eventually be proven incomplete or a special case of a more general phenomenon, it has still made accurate, testable predictions that have led to practical breakthroughs.

Thus, in Popper’s words, science requires testability: “If observation shows that the predicted effect is definitely absent, then the theory is simply refuted.”  This means a good theory must have an element of risk to it. It must be able to be proven wrong under stated conditions.
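To see what a risky prediction looks like in practice, here is a minimal sketch (my own illustration, not anything Popper wrote, using the approximate figures from the 1919 eclipse test of general relativity): Einstein's theory predicted roughly 1.75 arcseconds of starlight deflection at the Sun's limb, the Newtonian calculation gave about half that, and the eclipse measurements landed near the relativistic value.

    # A toy falsification check: a theory is exposed to refutation when its prediction
    # must land inside the measurement's error bars. Figures are approximate values
    # from the 1919 eclipse test of general relativity.

    def refuted(predicted: float, measured: float, uncertainty: float) -> bool:
        """Treat a prediction as refuted if it falls outside the measured value's
        quoted uncertainty."""
        return abs(predicted - measured) > uncertainty

    einstein = 1.75               # predicted deflection at the Sun's limb, arcseconds
    newton = 0.87                 # the Newtonian calculation, roughly half as large
    measured, error = 1.61, 0.30  # one of the 1919 expedition results (approximate)

    print(refuted(einstein, measured, error))  # False -> corroborated, not proven
    print(refuted(newton, measured, error))    # True  -> refuted

The point is not the particular numbers but the shape of the test: the theory forbade certain outcomes, and the measurement could have landed in the forbidden region.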

From there, Popper laid out his essential conclusions, which are useful to any thinker trying to figure out if a theory they hold dear is something that can be put in the scientific realm:

1. It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.

3. Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.

4. A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.

5. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.

6. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of ‘corroborating evidence’.)

7. Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a ‘conventionalist twist’ or a ‘conventionalist stratagem’.)

One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.

Finally, Popper was careful to say that it is not possible to prove that Freudianism was not true, at least in part. But we can say that we simply don’t know whether it’s true because it does not make specific testable predictions. It may have many kernels of truth in it, but we can’t tell. The theory would have to be restated.

This is the essential “line of demarcation,” as Popper called it, between science and pseudoscience.

How Darwin Thought: Follow the Golden Rule of Thinking

In his 1986 speech at the commencement of the Harvard-Westlake School in Los Angeles (found in Poor Charlie’s Almanack), Charlie Munger gave a short, Johnny Carson-like talk on the things to avoid in order to end up with a happy and successful life. One of his most salient prescriptions comes from the life of Charles Darwin:

It is my opinion, as a certified biography nut, that Charles Robert Darwin would have ranked in the middle of the Harvard School graduating class of 1986. Yet he is now famous in the history of science. This is precisely the type of example you should learn nothing from if bent on minimizing your results from your own endowment.

Darwin’s result was due in large measure to his working method, which violated all my rules for misery and particularly emphasized a backward twist in that he always gave priority attention to evidence tending to disconfirm whatever cherished and hard-won theory he already had. In contrast, most people early achieve and later intensify a tendency to process new and disconfirming information so that any original conclusion remains intact. They become people of whom Philip Wylie observed: “You couldn’t squeeze a dime between what they already know and what they will never learn.”

The life of Darwin demonstrates how a turtle may outrun a hare, aided by extreme objectivity, which helps the objective person end up like the only player without a blindfold in a game of Pin the Tail on the Donkey.


The great Harvard biologist E.O. Wilson agreed. In his book, Letters to a Young Scientist, Wilson argued that Darwin would have probably scored in the 130 range on a standard IQ test. And yet there he is, buried next to the calculus-inventing genius Isaac Newton in Westminster Abbey. (As Munger often notes.)

I had, also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favorable ones.

What can we learn from the working and thinking habits of Darwin?

Extreme Focus Combined with Attentive Energy

The first clue comes from his own account. Darwin was a hoover of information on any topic that interested him. After describing some of his specific areas of study aboard the H.M.S. Beagle, he concludes in his Autobiography:

The above various special studies were, however, of no importance compared with the habit of energetic industry and of concentrated attention to whatever I was engaged in, which I then acquired. Everything about which I thought or read was made to bear directly on what I had seen and was likely to see; and this habit of mind was continued during the five years of the voyage. I feel sure that it was this training which has enabled me to do whatever I have done in science.

This habit of pure, attentive focus on the task at hand is, of course, echoed in many of our favorite thinkers, from Sherlock Holmes to E.O. Wilson, Feynman, Einstein, and others. Munger himself remarked that “I did not succeed in life by intelligence. I succeeded because I have a long attention span.”

In Darwin’s quest, almost nothing relevant to his task at hand — the problem of understanding the origin and development of species — escaped his attention. He had an extremely broad antenna. Says David Quammen in his fabulous The Reluctant Mr. Darwin:

One of Darwin’s great strengths as a scientist was also, in some ways, a disadvantage: his extraordinary breadth of curiosity. From his study at Down House he ranged widely and greedily, in his constant search for data, across distances (by letter) and scientific fields. He read eclectically and kept notes like a pack rat. Over the years he collected an enormous quantity of interconnected facts. He looked for patterns but was intrigued equally by exceptions to the patterns, and exceptions to the exceptions. He tested his ideas against complicated groups of organisms with complicated stories, such as the barnacles, the orchids, the social insects, the primroses, and the hominids.

Not only was Darwin thinking broadly, taking in facts at all turns and on many subjects, but he was thinking carefully. This is where Munger’s admiration comes in: Darwin wanted to look at the exceptions. The exceptions to the exceptions. He was on the hunt for truth, not merely for confirmation of some cherished idea. Simply put, he didn’t want to be wrong about the nature of reality. To get the theory whole and correct would take lots of detail and time, as we will see.

***

The habit of study and observation didn’t stop at the plant and animal kingdom for Darwin. In a move that might seem strange by today’s standards, Darwin even opened a notebook to study the development of his own newborn son, William. This is from one of his notebooks:

Natural History of Babies

Do babies start (i.e., useless sudden movement of muscles) very early in life. Do they wink, when anything placed before their eyes, very young, before experience can have taught them to avoid danger. Do they know frown when they first see it?

From there, as his child grew and developed, Darwin took close notes. How did the boy figure out that the reflection in the mirror was him? How did he then figure out it was only an image of him, and that any other images that showed up (say, Dad standing behind him) were mere images too – not reality? These were further data in Darwin’s mental model of the accumulation of gradual changes, but more importantly, they displayed his attention to detail. Everything eventually came to “bear directly on what I had seen and was likely to see.”

And in a practical sense, Darwin was a relentless note-taker. Notebook A, Notebook B, Notebook C, Notebook M, Notebook N…all filled with observations from his study of journals and texts, his own scientific work, his travels, and his life. Once he sat down to write, he had an enormous amount of prior written thought to draw on. He could also see gaps in his understanding, which he diligently filled in.

Become an Expert

You can learn much about Darwin (and truthfully about anyone) by looking at whom he studied and admired. If Darwin held anyone in high esteem, it was Charles Lyell, whose Principles of Geology was his faithful companion on the H.M.S. Beagle. Here is his description of Lyell from his autobiography, which tells us something of the traits Darwin valued and sought to emulate:

I saw more of Lyell than of any other man before and after my marriage. His mind was characterized, as it appeared to me, by clearness, caution, sound judgment and a good deal of originality. When I made any remark to him on Geology, he never rested until he saw the whole case clearly and often made me see it more clearly than I had done before. He would advance all possible objections to my suggestions, and even after these were exhausted would long remain dubious. A second characteristic was his hearty sympathy with the work of other scientific men.

Studying Lyell and geology reinforced Darwin’s (probably natural) suspicion that careful, detailed, and objective work was required to create scientific breakthroughs. And once Darwin had acquired the kind of expertise Lyell brought to understanding and explaining geology, he had a basis for the rest of his scientific work. From his autobiography:

After my return to England, it appeared to me that by following the example of Lyell in Geology, and by collecting all facts which bore in any way on the variation of animals and plants under domestication and nature, some light might perhaps be thrown on the whole subject.

In fact, it was Darwin’s study and understanding of geology itself that gave him something to lean on conceptually. Lyell’s, and his own, theory of geology was of a slow-moving process that accumulated massive gradual changes over time. This seems like common knowledge today, but at the time, people weren’t so sure that mountains and islands could have been created by such slow-moving, incremental processes.

Wallace & Gruber’s book Creative People at Work, an analysis of a variety of thinkers and artists, argues that this basic mental model carried Darwin pretty far:

Why was the acquisition of expert knowledge in geology so important to the development of Darwin’s overall thinking? Because in learning geology Darwin ground a conceptual lens — a device for bringing into focus and clarifying the problems to which he turned his attention. When his attention shifted to problems beyond geology, the lens remained and Darwin used it in exploring new problems.

(Darwin’s) coral reef theory shows that he had become an expert in one field…(and) the central idea in Darwin’s understanding of geology was “gradualism” — that great things could be produced by long, continued accumulation of very small effects. The next phase in the development of this thought-form would involve his use of it as the basis for the construction of analogies between geology and new, unfamiliar subjects.

Darwin wrote his most explicit and concise statement of the nature and utility of his gradualism thought-form: “This multiplication of little means and bringing the mind to grapple with great effect produced is a most laborious & painful effort of the mind.” He recognized that it took patience and discipline to discover the “little means” that were responsible for great effects. With the necessary effort, however, this gradualism thought-form could become the vehicle for explaining many remarkable phenomena in geology, biology, and even psychology.

It is amazing to note that Darwin did not publish The Origin of Species until 1859, even though his notebooks show he had been pretty close to the correct idea at least 15 or 20 years prior. What was he doing in all that time? Well, for eight years at least, he was studying barnacles.

***

One of the reasons Darwin went on a crusade of classifying and studying the barnacles in minute detail was his concern that if he wasn’t a primary expert on some portion of the natural world, his work on a larger and more general thesis would not be taken seriously, and that it would probably have holes. He said as much to his friend Frederic Gerard, a French botanist, before he had begun his barnacle work: “How painfully (to me) true is your remark that no one has hardly a right to examine the question of species who has not minutely described many.” And, of course, Darwin being Darwin, he spent eight years remedying that unfathomable situation.

It seemed like extraordinarily tedious work, unrelated to anything a scientist would consider important on a grand scale. It was taxonomy. Classification. Even Darwin admitted later on that he doubted it was worth the years he spent on it. Yet, in his detail-oriented quest for expertise on barnacles, he hit upon some key ideas that would make his theory of natural selection complete. Says Quammen:

He also found notable differences on another categorical level: within species. Contrary to what he’d believed all along about the rarity of variation in the wild, barnacles turned out to be highly variable. A species wasn’t a Platonic essence or a metaphysical type. A species was a population of differing individuals.

He wouldn’t have seen that if he hadn’t assigned himself the tricky job of drawing lines between one species and another. He wouldn’t have seen it if he hadn’t used his network of contacts and his good reputation as a naturalist to gather barnacle specimens, in quantity, from all over the world. The truth of variation only reveals itself in crowds. He wouldn’t have seen it if he hadn’t examined multiple individuals, not just single representatives, of as many species as possible….Abundant variation among barnacles filled a crucial role in his theory. Here they were, the minor differences on which natural selection works.

Darwin was so diligent it could be breathtaking at times. Quammen describes him gathering up various species to assess the data about their development and their variation. Birds, dead or alive, as many as possible. Foxes, dogs, ducks, pigeons, rabbits, cats…nothing escaped his purview. As many specimens as he could get his hands on. All while living in a secluded house in Victorian England, beset by constant illness. He was Big Data before Big Data was a thing, trying to suss out conclusions from a mass of observation.

Follow the Golden Rule

Eventually, his work led him to something new: Species are not immutable; they are all part of the same family tree. They evolve through a process of variation — he didn’t know how; that took years for others to figure out through the study of genetics — and differential survival through natural selection.

Darwin was able to put his finger on why it took so long for humanity to come to this correct theory: It was extremely counter-intuitive to how one would naturally see the world. He admitted as much in the Origin of Species‘ concluding chapter:

The chief cause of our natural unwillingness to admit that one species has given birth to other and distinct species, is that we are always slow in admitting any great changes of which we do not see the steps. The difficulty is the same as that felt by so many geologists, when Lyell first insisted that long lines of inland cliffs had been formed, and great valleys excavated, by the agencies which we still see at work. The mind cannot possibly grasp the full meaning of the term of even a million years; it cannot add up and perceive the full effects of many slight variations, accumulated during an almost infinite number of generations.

Counter-intuition was Darwin’s specialty. And the reason he was so good at it was that he had a very simple habit of thought, described in his autobiography and so cherished by Charlie Munger: He paid special attention to collecting facts which did not agree with his prior conceptions. He called this a golden rule.

I had, also, during many years, followed a golden rule, namely, that whenever a published fact, a new observation or thought came across me, which was opposed to my general results, to make a memorandum of it without fail and at once; for I had found by experience that such facts and thoughts were far more apt to escape from memory than favorable ones. Owing to this habit, very few objections were raised against my views which I had not at least noticed and attempted to answer.

So we see that Darwin’s great success, by his own analysis, owed much to his ability to see, note, and learn from objections to his cherished ideas. The Origin of Species has stood up to 157 years of subsequent biological research because Darwin took such care to seek out, record, and answer every objection he could find before publishing. Later scientists would find the book slightly incomplete, but not incorrect.

This passage reminds one of, and probably influenced, Charlie Munger’s prescription on the work required to hold an opinion: You must understand the opposite side of the argument better than the person holding that side does. It’s a very difficult way to think, tremendously unnatural in the face of our genetic makeup (the more typical response is to look for as much confirming evidence as possible). Harnessed properly, though, it is a powerful way to beat your own shortcomings and become a seeing man amongst the blind.

Thus, we can deduce that, in addition to good luck and good timing, it was Darwin’s habits of completeness, diligence, accuracy, and objectivity that ultimately led him to his greatest breakthroughs. It was tedious. There was no spark of divine insight that gave him his edge. He just started with the right basic ideas and the right heroes, and then worked for a long time with extreme focus and objectivity, always keeping his eye on reality.

In the end, you can do worse than to read all you can find on Charles Darwin and try to copy his mental habits. They will serve you well over a long life.