
Good Science, Bad Science, Pseudoscience: How to Tell the Difference

In a digital world that clamors for clicks, news is sensationalized and “facts” change all the time. Here’s how to discern what is trustworthy and what is hogwash.

***

Unless we’ve studied it, most of us are never taught how to evaluate science or how to parse the good from the bad. Yet science dictates every area of our lives and is vital for helping us understand how the world works. Appraising research for yourself takes time and effort, however. Often, it can be enough to consult an expert or read a trustworthy source.

But some decisions require us to understand the underlying science. There is no way around it. Many of us hear about scientific developments from news articles and blog posts. Some sources put the work into presenting useful information. Others manipulate or misinterpret results to get more clicks. So we need the thinking tools necessary to know what to listen to and what to ignore. When it comes to important decisions, like knowing what individual action to take to minimize your contribution to climate change or whether to believe the friend who cautions against vaccinating your kids, being able to assess the evidence is vital.

Much of the growing (and concerning) mistrust of scientific authority is based on a misunderstanding of how it works and a lack of awareness of how to evaluate its quality. Science is not some big immovable mass. It is not infallible. It does not pretend to be able to explain everything or to know everything. Furthermore, there is no such thing as “alternative” science. Science does involve mistakes. But we have yet to find a system of inquiry capable of achieving what it does: move us closer and closer to truths that improve our lives and understanding of the universe.

“Rather than love, than money, than fame, give me truth.”

— Henry David Thoreau

There is a difference between bad science and pseudoscience. Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases. Often, it’s produced with the best of intentions, just by researchers who are responding to skewed incentives.

Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove. Pseudoscience focuses on finding evidence to confirm its claims while disregarding disconfirmation. Practitioners invent narratives to preemptively dismiss any actual science contradicting their views. It may adopt the appearance of actual science to look more persuasive.

While the tools and pointers in this post are geared towards identifying bad science, they will also help with easily spotting pseudoscience.

Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis. It takes many repetitions of applying this method to build reasonable support for a hypothesis.

In order for a hypothesis to count as such, there must be evidence that, if collected, would disprove it.

In this post, we’ll talk you through two examples of bad science to point out some of the common red flags. Then we’ll look at some of the hallmarks of good science you can use to sort the signal from the noise. We’ll focus on the type of research you’re likely to encounter on a regular basis, including medicine and psychology, rather than areas less likely to be relevant to your everyday life.

[Note: we will use the terms “research” and “science” and “researcher” and “scientist” interchangeably here.]

Power Posing

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” ―Isaac Asimov

First, here’s an example of flawed science from psychology: power posing. A 2010 study by Dana Carney, Andy J. Yap, and Amy Cuddy entitled “Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance” claimed “open, expansive” poses caused participants to experience elevated testosterone levels, reduced cortisol levels, and greater risk tolerance. These are all excellent things in a high-pressure situation, like a job interview. The abstract concluded that “a person can, via a simple two-minute pose, embody power and instantly become more powerful.” The idea took off. It spawned hundreds of articles, videos, and tweets espousing the benefits of including a two-minute power pose in your day.

Yet at least eleven follow-up studies, many led by Joseph Cesario of Michigan State University, including “‘Power Poses’ Don’t Work, Eleven New Studies Suggest,” failed to replicate the results. None found that power posing has a measurable impact on people’s performance in tasks or on their physiology. While subjects did report a subjective feeling of increased powerfulness, their performance did not differ from that of subjects who did not strike a power pose.

One of the original study’s researchers, Carney, has since changed her mind about the effect, stating that she no longer believes the results. Unfortunately, this isn’t always how researchers respond when confronted with evidence discrediting their prior work. We all know how uncomfortable changing our minds is.

The notion of power posing is exactly the kind of nugget that spreads fast online. It’s simple, free, promises dramatic benefits with minimal effort, and is intuitive. We all know posture is important. It has a catchy, memorable name. Yet examining the details of the original study reveals a whole parade of red flags. The study had 42 participants. That might be reasonable for a preliminary or pilot study, but it is in no way sufficient to “prove” anything. It was not blinded. Feedback from participants was self-reported, a method notorious for being biased and inaccurate.

There is also a clear correlation/causation issue. Powerful, dominant animals tend to use expansive body language that exaggerates their size. Humans often do the same. But that doesn’t mean it’s the pose making them powerful. Being powerful could make them pose that way.

A TED Talk in which Amy Cuddy, the study’s co-author, claimed power posing could “significantly change the way your life unfolds” is one of the most popular to date, with tens of millions of views. The presentation of the science in the talk is also suspect. Cuddy makes strong claims with a single, small study as justification. She portrays power posing as a panacea. Likewise, the original study’s claim that a power pose makes someone “instantly become more powerful” is suspiciously strong.

This is one of many psychological studies about small tweaks to our behavior that have not stood up to scrutiny. We’re not singling out the power-pose study as unusually flawed or in any way fraudulent. The researchers had clear good intentions and a sincere belief in their work. It’s a strong example of why we should go straight to the source if we want to understand research. Coverage elsewhere is unlikely to even mention methodological details or acknowledge any shortcomings. It would ruin the story. We even covered power posing on Farnam Street in 2016—we’re all susceptible to taking these ‘scientific’ results seriously without checking the validity of the underlying science.

It is a good idea to be skeptical of research promising anything too dramatic or extreme with minimal effort, especially without substantial evidence. If it seems too good to be true, it most likely is.

Green Coffee Beans

“An expert is a person who has made all the mistakes that can be made in a very narrow field.” ―Niels Bohr

The world of weight-loss science is one where bad science is rampant. We all know, deep down, that we cannot circumvent the need for healthy eating and exercise. Yet the search for a magic bullet, offering results without effort or risks, continues. Let’s take a look at one study that is a masterclass in bad science.

Entitled “Randomized, Double-Blind, Placebo-Controlled, Linear Dose, Crossover Study to Evaluate the Efficacy and Safety of a Green Coffee Bean Extract in Overweight Subjects,” it was published in 2012 in the journal Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy. On the face of it, and to the untrained eye, the study may appear legitimate, but it is rife with serious problems, as Scott Gavura explained in the article “Dr. Oz and Green Coffee Beans – More Weight Loss Pseudoscience” in the publication Science-Based Medicine. The original paper was later retracted by its authors. The Federal Trade Commission (FTC), which described the study as “botched,” ordered the supplement manufacturer that funded it to pay a $3.5 million fine for using it in marketing materials.

The Food and Drug Administration (FDA) recommends that studies relating to weight loss include at least 3,000 participants receiving the active medication and at least 1,500 receiving a placebo, all for a minimum period of 12 months. This study used a mere 16 subjects, with no clear selection criteria or explanation. None of the researchers involved had medical experience or had published related research. They did not disclose the conflict of interest inherent in the funding source. The paper did not describe efforts to avoid confounding factors, and it is vague, and inconsistent, about whether subjects changed their diet and exercise. The study was not double-blinded, despite claiming to be. It has not been replicated.

The FTC reported that the study’s lead investigator “repeatedly altered the weights and other key measurements of the subjects, changed the length of the trial, and misstated which subjects were taking the placebo or GCA during the trial.” A meta-analysis by Rachel Buchanan and Robert D. Beckett, “Green Coffee for Pharmacological Weight Loss” published in the Journal of Evidence-Based Complementary & Alternative Medicine, failed to find evidence for green coffee beans being safe or effective; all the available studies had serious methodological flaws, and most did not comply with FDA guidelines.

Signs of Good Science

“That which can be asserted without evidence can be dismissed without evidence.” ―Christopher Hitchens

We’ve inverted the problem and considered some of the signs of bad science. Now let’s look at some of the indicators a study is likely to be trustworthy. Unfortunately, there is no single sign a piece of research is good science. None of the signs mentioned here are, alone, in any way conclusive. There are caveats and exceptions to all. These are simply factors to evaluate.

It’s Published by a Reputable Journal

“The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations.” —Karl Popper

A journal, any journal, publishing a study says little about its quality. Some will publish any research they receive in return for a fee. A few so-called “vanity publishers” claim to have a peer-review process, yet they typically have a short gap between receiving a paper and publishing it. We’re talking days or weeks, not the expected months or years. Many predatory publishers do not even make any attempt to verify quality.

No journal is perfect. Even the most respected journals make mistakes and publish low-quality work sometimes. However, anything that is not published research or based on published research in a journal is not worth consideration. Not as science. A blog post saying green smoothies cured someone’s eczema is not comparable to a published study. The barrier is too low. If someone cared enough about using a hypothesis or “finding” to improve the world and educate others, they would make the effort to get it published. The system may be imperfect, but reputable researchers will generally make the effort to play within it to get their work noticed and respected.

It’s Peer Reviewed

Peer review is a standard process in academic publishing. It’s intended as an objective means of assessing the quality and accuracy of new research. Uninvolved researchers with relevant experience evaluate papers before publication. They consider factors like how well it builds upon pre-existing research or if the results are statistically significant. Peer review should be double-blinded. This means the researcher doesn’t know who is reviewing their work and the reviewer doesn’t know who the researcher is.

Publishers only perform a cursory “desk check” before moving on to peer review. This is to check for major errors, nothing more. They cannot be expected to have the expertise necessary to vet the quality of every paper they handle—hence the need for external experts. The number of reviewers and the strictness of the process depend on the journal. Reviewers either declare a paper unpublishable or suggest improvements. It is rare for them to suggest publishing without modifications.

Sometimes several rounds of modifications prove necessary. It can take years for a paper to see the light of day, which is no doubt frustrating for the researcher. But the process helps ensure that the published version has fewer mistakes and weak areas.

Pseudoscientific practitioners will often claim they cannot get their work published because peer reviewers suppress anything contradicting prevailing doctrines. Good researchers know having their work challenged and argued against is positive. It makes them stronger. They don’t shy away from it.

Peer review is not a perfect system. Seeing as it involves humans, there is always room for bias and manipulation. In a small field, it may be easy for a reviewer to get past the double-blinding. However, as it stands, peer review seems to be the best available system. In isolation, it’s not a guarantee that research is perfect, but it’s one factor to consider.

The Researchers Have Relevant Experience and Qualifications

One of the red flags in the green coffee bean study was that the researchers involved had no medical background or experience publishing obesity-related research.

While outsiders can sometimes make important advances, researchers should have relevant qualifications and a history of working in that field. It is too difficult to make scientific advancements without the necessary background knowledge and expertise. If someone cares enough about advancing a given field, they will study it. If it’s important, verify their backgrounds.

It’s Part of a Larger Body of Work

“Science, my lad, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.” ―Jules Verne

We all like to stand behind the maverick. But we should be cautious of doing so when it comes to evaluating the quality of science. On the whole, science does not progress in great leaps. It moves along millimeter by millimeter, gaining evidence in increments. Even if a piece of research is presented as groundbreaking, it has years of work behind it.

Researchers do not work in isolation. Good science is rarely, if ever, the result of one person or even one organization. It comes from a monumental collective effort. So when evaluating research, it is important to see if other studies point to similar results and if it is an established field of work. For this reason, meta-analyses, which analyze the combined results of many studies on the same topic, are often far more useful to the public than individual studies. Scientists are humans and they all make mistakes. Looking at a collective body of work helps smooth out any problems. Individual studies are valuable in that they further the field as a whole, allowing for the creation of meta-studies.

Science is about evidence, not reputation. Sometimes well-respected researchers, for whatever reason, produce bad science. Sometimes outsiders produce amazing science. What matters is the evidence they have to support it. While an established researcher may have an easier time getting support for their work, the overall community accepts work on merit. When we look to examples of unknowns who made extraordinary discoveries out of the blue, they always had extraordinary evidence for it.

Questioning the existing body of research is not inherently bad science or pseudoscience. Doing so without a remarkable amount of evidence is.

It Doesn’t Promise a Panacea or Miraculous Cure

Studies that promise anything a bit too amazing can be suspect. This is more common in media reporting of science or in research used for advertising.

In medicine, a panacea is something that can supposedly solve all, or many, health problems. These claims are rarely substantiated by anything even resembling evidence. The more outlandish the claim, the less likely it is to be true. Occam’s razor teaches us that the simplest explanation with the fewest inherent assumptions is most likely to be true. This is a useful heuristic for evaluating potential magic bullets.

It Avoids or at Least Discloses Potential Conflicts of Interest

A conflict of interest is anything that incentivizes producing a particular result. It distorts the pursuit of truth. A government study into the health risks of recreational drug use will be biased towards finding evidence of negative risks. A study of the benefits of breakfast cereal funded by a cereal company will be biased towards finding plenty of benefits. Researchers do have to get funding from somewhere, so this does not automatically make a study bad science. But research without conflicts of interest is more likely to be good science.

High-quality journals require researchers to disclose any potential conflicts of interest. But not all journals do. Media coverage of research may not mention this (another reason to go straight to the source). And people do sometimes lie. We don’t always know how unconscious biases influence us.

It Doesn’t Claim to Prove Anything Based on a Single Study

In the vast majority of cases, a single study is a starting point, not proof of anything. The results could be random chance, or the result of bias, or even outright fraud. Only once other researchers replicate the results can we consider a study persuasive. The more replications, the more reliable the results are. If attempts at replication fail, this can be a sign the original research was biased or incorrect.

A note on anecdotes: they’re not science. Anecdotes, especially from people close to us or those with a lot of letters behind their name, carry disproportionate clout. But hearing something from one person, no matter how persuasive, should not be enough to discredit published research.

Science is about evidence, not proof. And evidence can always be discredited.

It Uses a Reasonable, Representative Sample Size

A representative sample represents the wider population, not one segment of it. If it does not, then the results may only be relevant for people in that demographic, not everyone. Bad science will often also use very small sample sizes.

There is no set target for what makes a large enough sample size; it all depends on the nature of the research. In general, the larger, the better. The exception is in studies that may put subjects at risk, which use the smallest possible sample to achieve usable results.

In areas like nutrition and medicine, it’s also important for a study to last a long time. A study looking at the impact of a supplement on blood pressure over a week is far less useful than one over a decade. Long-term data smooths out fluctuations and offers a more comprehensive picture.
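To make the “larger is better” point concrete, here is a minimal sketch in Python using the textbook normal-approximation formula for the margin of error of a proportion estimated from a simple random sample. The sample sizes echo the 16- and 42-person studies above purely for comparison; the formula and numbers are illustrative, not drawn from any study discussed here.

```python
# Approximate 95% margin of error for an estimated proportion from a
# simple random sample: it shrinks roughly as 1 / sqrt(n).
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n people."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (16, 42, 100, 1000, 3000):
    print(f"n = {n:5d}  margin of error ~ +/-{margin_of_error(n) * 100:.1f} points")
```

With 16 or 42 subjects, the uncertainty is measured in tens of percentage points; with thousands, it drops to a point or two, which is why regulators ask for large trials.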

The Results Are Statistically Significant

Statistical significance is usually assessed with a p-value: the probability of seeing results at least as extreme as those observed if, in reality, there were no effect and only random chance were at work. The threshold for statistical significance varies between fields; 0.05 is a common cutoff. Check whether the reported result clears the accepted threshold. If it doesn’t, it’s not worth paying attention to on its own.
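As a rough illustration, here is how a simple significance test might look in Python. The data, the 0.05 threshold, and the choice of SciPy’s independent-samples t-test are all assumptions made for this sketch, not details of any study mentioned above.

```python
# Comparing two hypothetical groups with an independent-samples t-test.
# The p-value estimates how likely a difference at least this large would be
# if there were no real effect.
from scipy import stats

control = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
treatment = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0, 13.2, 12.6]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```

A p-value below the threshold is evidence worth weighing, not proof; replication still matters.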

It Is Well Presented and Formatted

“When my information changes, I alter my conclusions. What do you do, sir?” ―John Maynard Keynes

As basic as it sounds, we can expect good science to be well presented and carefully formatted, without prominent typos or sloppy graphics.

It’s not that bad presentation makes something bad science. It’s more the case that researchers producing good science have an incentive to make it look good. As Michael J. I. Brown of Monash University explains in “How to Quickly Spot Dodgy Science,” this is far more than a matter of aesthetics. The way a paper looks can be a useful heuristic for assessing its quality. Researchers who are dedicated to producing good science can spend years on a study, fretting over its results and investing in gaining support from the scientific community. This means they are less likely to present work looking bad. Brown gives an example of looking at an astrophysics paper and seeing blurry graphs and misplaced image captions—then finding more serious methodological issues upon closer examination. In addition to other factors, sloppy formatting can sometimes be a red flag. At the minimum, a thorough peer-review process should eliminate glaring errors.

It Uses Control Groups and Double-Blinding

A control group serves as a point of comparison in a study. The control group should be people as similar as possible to the experimental group, except they’re not subject to whatever is being tested. The control group may also receive a placebo to see how the outcome compares.

Blinding refers to the practice of obscuring which group participants are in. For a single-blind experiment, the participants do not know if they are in the control or the experimental group. In a double-blind experiment, neither the participants nor the researchers know. This is the gold standard and is essential for trustworthy results in many types of research. If people know which group they are in, the results are not trustworthy. If researchers know, they may (unintentionally or not) nudge participants towards the outcomes they want or expect. So a double-blind study with a control group is far more likely to be good science than one without.
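As a simplified sketch (not a real trial protocol), the code below randomly assigns hypothetical participant IDs to treatment and control and keeps the allocation key separate from the people running the study, which is the basic mechanics behind blinding. All IDs and numbers are invented.

```python
# Randomized assignment for a (hypothetical) double-blind trial.
# The key linking IDs to groups is held apart from participants and the
# researchers doing the assessments until the data are analyzed.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]  # hypothetical IDs
random.shuffle(participants)

half = len(participants) // 2
allocation_key = {pid: "treatment" for pid in participants[:half]}
allocation_key.update({pid: "control" for pid in participants[half:]})

# During the trial, everyone works only with blinded IDs; the allocation_key
# stays "sealed" until the end.
print(sorted(allocation_key)[:5], "...")
```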

It Doesn’t Confuse Correlation and Causation

In the simplest terms, two things are correlated if they happen at the same time. Causation is when one thing causes another thing to happen. For example, one large-scale study entitled “Are Non-Smokers Smarter than Smokers?” found that people who smoke tobacco tend to have lower IQs than those who don’t. Does this mean smoking lowers your IQ? It might, but there is also a strong link between socio-economic status and smoking. People of low income are, on average, likely to have lower IQ than those with higher incomes due to factors like worse nutrition, less access to education, and sleep deprivation. According to a report by the Centers for Disease Control and Prevention entitled “Cigarette Smoking and Tobacco Use Among People of Low Socioeconomic Status,” people of low socio-economic status are also more likely to smoke and to do so from a young age. There might be a correlation between smoking and IQ, but that doesn’t mean causation.

Disentangling correlation and causation can be difficult, but good science will take this into account and may detail potential confounding factors or the efforts made to avoid them.
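A toy simulation can make the trap concrete: below, a hidden confounder drives both an “exposure” and an “outcome” (loosely analogous to socio-economic status, smoking, and IQ), so the two correlate even though neither causes the other. All coefficients are invented for illustration.

```python
# Two variables that correlate only because both depend on a hidden confounder.
import random
import statistics

random.seed(0)
n = 5000
confounder = [random.gauss(0, 1) for _ in range(n)]            # hidden driver
exposure = [0.6 * c + random.gauss(0, 1) for c in confounder]  # e.g., "smoking"
outcome = [-0.6 * c + random.gauss(0, 1) for c in confounder]  # e.g., "test score"

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Clearly nonzero, even though neither variable appears in the other's formula.
print(f"correlation(exposure, outcome) ~ {pearson(exposure, outcome):.2f}")
```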

Conclusion

“The scientist is not a person who gives the right answers, he’s one who asks the right questions.” ―Claude Lévi-Strauss

The points raised in this article are all aimed at the linchpin of the scientific method—we cannot necessarily prove anything; we must consider the most likely outcome given the information we have. Bad science is generated by those who are willfully ignorant or are so focused on trying to “prove” their hypotheses that they fudge results and cherry-pick to shape their data to their biases. The problem with this approach is that it transforms what could be empirical and scientific into something subjective and ideological.

When we look to disprove what we know, we are able to approach the world with a more flexible way of thinking. If we are unable to defend what we know with reproducible evidence, we may need to reconsider our ideas and adjust our worldviews accordingly. Only then can we properly learn and begin to make forward steps. Through this lens, bad science and pseudoscience are simply the intellectual equivalent of treading water, or even sinking.

Article Summary

  • Most of us are never taught how to evaluate science or how to parse the good from the bad. Yet it is something that dictates every area of our lives.
  • Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases.
  • Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove.
  • Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis.
  • Science is about evidence, not proof. And evidence can always be discredited.
  • In science, if it seems too good to be true, it most likely is.

Signs of good science include:

  • It’s Published by a Reputable Journal
  • It’s Peer Reviewed
  • The Researchers Have Relevant Experience and Qualifications
  • It’s Part of a Larger Body of Work
  • It Doesn’t Promise a Panacea or Miraculous Cure
  • It Avoids or at Least Discloses Potential Conflicts of Interest
  • It Doesn’t Claim to Prove Anything Based on a Single Study
  • It Uses a Reasonable, Representative Sample Size
  • The Results Are Statistically Significant
  • It Is Well Presented and Formatted
  • It Uses Control Groups and Double-Blinding
  • It Doesn’t Confuse Correlation and Causation

The Disproportional Power of Anecdotes

Anecdotes tend to not be statistically significant, but their added emotional significance leads us to place additional weight on them.

***

Humans, it seems, have an innate tendency to overgeneralize from small samples. How many times have you been caught in an argument where the only proof offered is anecdotal? Perhaps your co-worker saw this bratty kid make a mess in the grocery store while the parents appeared to do nothing. “They just let that child pull things off the shelves and create havoc! My parents would never have allowed that. Parents are so permissive now.” Hmm. Is it true that most parents commonly allow young children to cause trouble in public? It would be a mistake to assume so based on the evidence presented, but a lot of us would go with it anyway. Your co-worker did.

Our propensity to confuse the “now” with “what always is,” as if the immediate world before our eyes consistently represents the entire universe, leads us to bad conclusions and bad decisions. We don’t bother asking questions and verifying validity. So we make mistakes and allow ourselves to be easily manipulated.

Political polling is a good example. It’s actually really hard to design and conduct a good poll. Matthew Mendelsohn and Jason Brent, in their article “Understanding Polling Methodology,” say:

Public opinion cannot be understood by using only a single question asked at a single moment. It is necessary to measure public opinion along several different dimensions, to review results based on a variety of different wordings, and to verify findings on the basis of repetition. Any one result is filled with potential error and represents one possible estimation of the state of public opinion.

This makes sense. But it’s amazing how often we forget.

We see a headline screaming out about the state of affairs and we dive right in, instant believers, without pausing to question the validity of the methodology. How many people did they sample? How did they select them? Most polling aims for random sampling, but there is pre-selection at work immediately, depending on the medium the pollsters use to reach people.

Truly random samples of people are hard to come by. In order to poll people, you have to be able to reach them. The more complicated this is, the more expensive the poll becomes, which acts as a deterrent to thoroughness. The internet can offer high accessibility for a relatively low cost, but it’s a lot harder to verify the integrity of the demographics. And if you go the telephone route, as a lot of polling does, are you already distorting the true randomness of your sample? Are the people who answer “unknown” numbers already different from those who ignore them?

Polls are meant to generalize larger patterns of behavior based on small samples. You need to put a lot of effort in to make sure that sample is truly representative of the population you are trying to generalize about. Otherwise, erroneous information is presented as truth.
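A small, invented simulation shows why representativeness matters more than raw size: if the people a pollster can reach differ systematically from those they can’t, even a large poll lands on the wrong answer. The population, support rates, and poll size below are all made up.

```python
# A hypothetical population of 2,000 people: overall support for a policy is 50%,
# but people who answer unknown numbers support it at 60% and people who ignore
# such calls support it at only 40%.
import random

random.seed(1)
population = ([("answers", 1)] * 600 + [("answers", 0)] * 400 +
              [("ignores", 1)] * 400 + [("ignores", 0)] * 600)

true_support = sum(vote for _, vote in population) / len(population)

# A phone poll can only sample people who pick up.
reachable = [vote for group, vote in population if group == "answers"]
poll = random.sample(reachable, 500)  # a big poll, but only of reachable people

print(f"true support: {true_support:.0%}")
print(f"phone-poll estimate: {sum(poll) / len(poll):.0%}")
```

The poll overshoots by about ten points no matter how many reachable people it samples, because the sampling frame itself is skewed.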

Why does this matter?

It matters because generalization is a widespread human bias, which means a lot of our understanding of the world actually is based on extrapolations made from relatively small sample sizes. Consequently, our individual behavior is shaped by potentially incomplete or inadequate facts that we use to make the decisions that are meant to lead us to success. This bias also shapes a fair degree of public policy and government legislation. We don’t want people who make decisions that affect millions to be dependent on captivating bullshit. (A further concern is that once you are invested, other biases kick in).

Some really smart people are perpetual victims of the problem.

Joseph Henrich, Steven J. Heine, and Ara Norenzayan wrote an article called “The weirdest people in the world?” It’s about how many scientific psychology studies use college students who are predominantly Western, Educated, Industrialized, Rich, and Democratic (WEIRD), and then draw conclusions about the entire human race from these outliers. They reviewed scientific literature from domains such as “visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans.”

Uh-oh. This is a double whammy. “It’s not merely that researchers frequently make generalizations from a narrow subpopulation. The concern is that this particular subpopulation is highly unrepresentative of the species.”

This is why it can be dangerous to make major life decisions based on small samples, like anecdotes or a one-off experience. The small sample may be an outlier in the greater range of possibilities. You could be correcting for a problem that doesn’t exist or investing in an opportunity that isn’t there.

This tendency of mistaken extrapolation from small samples can have profound consequences.

Are you a fan of the San Francisco 49ers? They exist, in part, because of our tendency to over-generalize. In the 19th century in Western America and Canada, a few findings of gold along some creek beds led to a massive rush as entire populations flocked to these regions in the hope of getting rich. San Francisco grew from 200 residents in 1846 to about 36,000 only six years later. The gold rush provided enormous impetus toward California becoming a state, and the corresponding infrastructure developments touched off momentum that long outlasted the mining of gold.

But for most of the actual rushers, those hoping for gold based on the anecdotes that floated east, there wasn’t much to show for their decision to head west. The Canadian Encyclopedia states, “If the nearly $29 million (figure unadjusted) in gold that was recovered during the heady years of 1897 to 1899 [in the Klondike] was divided equally among all those who participated in the gold rush, the amount would fall far short of the total they had invested in time and money.”

How did this happen? Because those miners took anecdotes as being representative of a broader reality. Quite literally, they learned mining from rumor, and didn’t develop any real knowledge. Most people fought for claims along the creeks, where easy gold had been discovered, while rejecting the bench claims on the hillsides above, which often had just as much gold.

You may be thinking that these men must have been desperate if they packed themselves up, heading into unknown territory, facing multiple dangers along the way, to chase a dream of easy money. But most of us aren’t that different. How many times have you invested in a “hot stock” on a tip from one person, only to have the company go under within a year? Ultimately, the smaller the sample size, the greater role the factors of chance play in determining an outcome.

If you want to limit the capriciousness of chance in your quest for success, increase your sample size when making decisions. You need enough information to be able to plot the range of possibilities, identify the outliers, and define the average.

So next time you hear the words “the polls say,” “studies show,” or “you should buy this,” ask questions before you take action. Think about the population that is actually being represented before you start modifying your understanding. Accept the limits of small sample sizes from large populations. And don’t give power to anecdotes.

Half Life: The Decay of Knowledge and What to Do About It

Understanding the concept of a half-life will change what you read and how you invest your time. It will explain why our careers are increasingly specialized and offer a look into how we can compete more effectively in a very crowded world.

The Basics

A half-life is the time taken for something to halve in quantity. The term is most often used in the context of radioactive decay, which occurs when unstable atomic nuclei lose energy; unstable isotopes of many elements undergo this process. Information also has a half-life, as do drugs, marketing campaigns, and all sorts of other things. We see the concept in any area where the quantity or strength of something decreases over time.

Radioactive decay is random, and measured half-lives are based on the most probable rate. We know that a given nucleus will decay at some point; we just cannot predict when. It could happen anywhere between the next instant and a span longer than the age of the universe. Although scientists have measured half-lives for different isotopes, the moment at which any individual nucleus decays is completely random.

Half-lives vary tremendously, even between isotopes of the same element. Carbon-12, the most common isotope of carbon and a major component of living organisms, is stable and does not decay at all, while radioactive carbon-14 has a half-life of about 5,730 years.

Three main types of nuclear decay have been identified: alpha, beta, and gamma. Alpha decay occurs when a nucleus splits into two parts: a helium nucleus and the remainder of the original nucleus. Beta decay occurs when a neutron in the nucleus changes into a proton, emitting an electron and an antineutrino (a particle with virtually no mass). The result is a different element, as when potassium-40 decays into calcium-40. If a nucleus emits radiation without a change in its composition, it is undergoing gamma decay. Gamma radiation carries an enormous amount of energy.

The Discovery of Half-Lives

The discovery of half-lives (and of alpha and beta radiation) is credited to Ernest Rutherford, one of the most influential physicists of his time. Rutherford was at the forefront of this major discovery when he worked with physicist Joseph John Thomson on complementary experiments leading to the discovery of electrons. Rutherford recognized the potential of what he was observing and began researching radioactivity. Two years later, he identified the distinction between alpha and beta rays. This led to his discovery of half-lives, when he noticed that samples of radioactive materials took the same amount of time to decay by half. By 1902, Rutherford and his collaborators had a coherent theory of radioactive decay (which they called “atomic disintegration”). They demonstrated that radioactive decay enabled one element to turn into another — research which would earn Rutherford a Nobel Prize. A year later, he spotted the missing piece in the work of the chemist Paul Villard and named the third type of radiation gamma.

Half-lives are based on probabilistic thinking. If the half-life of an element is seven days, it is most probable that half of the atoms will have decayed in that time. For a large number of atoms, we can expect half-lives to be fairly consistent. It’s important to note that radioactive decay is based on the element itself, not the quantity of it. By contrast, in other situations, the half-life may vary depending on the amount of material. For example, the half-life of a chemical someone ingests might depend on the quantity.

In biology, a half-life is the time taken for a substance to lose half its effects. The most obvious instance is drugs; the half-life is the time it takes for their effect to halve, or for half of the substance to leave the body. The half-life of caffeine is around 6 hours, but (as with most biological half-lives) numerous factors can alter that number. People with compromised liver function or certain genes will take longer to metabolize caffeine. Consumption of grapefruit juice has been shown in some studies to slow caffeine metabolism. It takes around 24 hours for a dose of caffeine to fully leave the body.
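For a concrete sense of the arithmetic, exponential decay follows N(t) = N0 × 0.5^(t / half-life). Here is a quick sketch using the approximate six-hour caffeine figure above; the dose is hypothetical and, as noted, individual half-lives vary.

```python
# Amount remaining after time t, given an initial amount and a half-life:
#   N(t) = N0 * 0.5 ** (t / half_life)

def remaining(initial, half_life, elapsed):
    return initial * 0.5 ** (elapsed / half_life)

dose_mg = 200          # hypothetical caffeine dose
half_life_hours = 6    # approximate figure from the text
for hours in (0, 6, 12, 24):
    print(f"after {hours:2d} h: ~{remaining(dose_mg, half_life_hours, hours):.0f} mg")
# After 24 hours roughly 1/16 of the dose remains, consistent with the
# "around 24 hours to leave the body" figure above.
```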

The half-lives of drugs vary from a few seconds to several weeks. To complicate matters, biological half-lives vary for different parts of the body. Lead has a half-life of around a month in the blood, but a decade in bone. Plutonium in bone has a half-life of a century — more than double the time for the liver.

Marketers refer to the half-life of a campaign — the time taken to receive half the total responses. Unsurprisingly, this time varies among media. A paper catalog may have a half-life of about three weeks, whereas a tweet might have a half-life of a few minutes. Calculating this time is important for establishing how frequently a message should be sent.

“Every day that we read the news we have the possibility of being confronted with a fact about our world that is wildly different from what we thought we knew.”

— Samuel Arbesman

The Half-Life of Facts

In The Half-Life of Facts: Why Everything We Know Has an Expiration Date, Samuel Arbesman (see our Knowledge Project interview) posits that facts decay over time until they are no longer facts or perhaps no longer complete. According to Arbesman, information has a predictable half-life: the time taken for half of it to be replaced or disproved. Over time, one group of facts replaces another. As our tools and knowledge become more advanced, we can discover more — sometimes new things that contradict what we thought we knew, sometimes nuances about old things. Sometimes we discover a whole area that we didn’t know about.

The rate of these discoveries varies. Our body of engineering knowledge changes more slowly, for example, than does our body of psychological knowledge.

Arbesman studied the nature of facts through scientometrics, the quantitative study of science itself. The field was born in 1947, when mathematician Derek J. de Solla Price was arranging a set of philosophical books on his shelf. Price noted something surprising: the sizes of the books fit an exponential curve. His curiosity piqued, he began to see whether the same curve applied to science as a whole. Price established that the quantity of scientific data available was doubling every 15 years. This meant that some of the information had to be rendered obsolete with time.

Scientometrics shows us that facts are always changing, and much of what we know is (or soon will be) incorrect. Indeed, much of the available published research, however often it is cited, has never been reproduced and cannot be considered true. In a controversial paper entitled “Why Most Published Research Findings Are False,” John Ioannidis covers the rampant nature of poor science. Many researchers are incentivized to find results that will please those giving them funding. Intense competition makes it essential to find new information, even if it is found in a dubious manner. Yet we all have a tendency to turn a blind eye when beliefs we hold dear are disproved and to pay attention only to information confirming our existing opinions.

As an example, Arbesman points to the number of chromosomes in a human cell. Up until 1965, 48 was the accepted number that medical students were taught. (In 1953, it had been declared an established fact by a leading cytologist). Yet in 1956, two researchers, Joe Hin Tjio and Albert Levan, made a bold assertion. They declared the true number to be 46. During their research, Tjio and Levan could never find the number of chromosomes they expected. Discussing the problem with their peers, they discovered they were not alone. Plenty of other researchers found themselves two chromosomes short of the expected 48. Many researchers even abandoned their work because of this perceived error. But Tjio and Levan were right (for now, anyway). Although an extra two chromosomes seems like a minor mistake, we don’t know the opportunity costs of the time researchers invested in faulty hypotheses or the value of the work that was abandoned. It was an emperor’s-new-clothes situation, and anyone counting 46 chromosomes assumed they were the ones making the error.

As Arbesman puts it, facts change incessantly. Many of us have seen the ironic (in hindsight) doctor-endorsed cigarette ads from the past. A glance at a newspaper will doubtless reveal that meat or butter or sugar has gone from deadly to saintly, or vice versa. We forget that laughable, erroneous beliefs people once held are not necessarily any different from those we now hold. The people who believed that the earth was the center of the universe, or that some animals appeared out of nowhere or that the earth was flat, were not stupid. They just believed facts that have since decayed. Arbesman gives the example of a dermatology test that had the same question two years running, with a different answer each time. This is unsurprising considering the speed at which our world is changing.

As Arbesman points out, in the last century the world’s population has swelled from 2 billion to 7 billion, we have taken on space travel, and we have altered the very definition of science.

Our world seems to be in constant flux. With our knowledge changing all the time, even the most informed people can barely keep up. All this change may seem random and overwhelming (Dinosaurs have feathers? When did that happen?), but it turns out there is actually order within the shifting noise. This order is regular and systematic and is one that can be described by science and mathematics.

The order Arbesman describes mimics the decay of radioactive elements. Whenever new information is discovered, we can be sure it will break down and be proved wrong at some point. As with a radioactive atom, we don’t know precisely when that will happen, but we know it will occur at some point.

If we zoom out and look at a particular body of knowledge, the random decay becomes orderly. Through probabilistic thinking, we can predict the half-life of a group of facts with the same certainty with which we can predict the half-life of a radioactive atom. The problem is that we rarely consider the half-life of information. Many people assume that whatever they learned in school remains true years or decades later. Medical students who learned in university that cells have 48 chromosomes would not learn later in life that this is wrong unless they made an effort to do so.

OK, so we know that our knowledge will decay. What do we do with this information? Arbesman says,

… simply knowing that knowledge changes like this isn’t enough. We would end up going a little crazy as we frantically tried to keep up with the ever changing facts around us, forever living on some sort of informational treadmill. But it doesn’t have to be this way because there are patterns. Facts change in regular and mathematically understandable ways. And only by knowing the pattern of our knowledge evolution can we be better prepared for its change.

Recent initiatives have sought to calculate the half-life of an academic paper. Ironically, academic journals have largely neglected research into how people use them and how best to fund the efforts of researchers. Research by Philip Davis shows the time taken for a paper to receive half of its total downloads. Davis’s results are compelling. While most forms of media have a half-life measured in days or even hours, 97 percent of academic papers have a half-life longer than a year. Engineering papers have a slightly shorter half-life than other fields of research, with double the average (6 percent) having a half-life of under a year. This makes sense considering what we looked at earlier in this post. Health and medical publications have the shortest overall half-life: two to three years. Physics, mathematics, and humanities publications have the longest half-lives: two to four years.

The Half-Life of Secrets

According to Peter Swire, writing in “The Declining Half-Life of Secrets,” the half-life of secrets (by which Swire generally means classified information) is shrinking. In the past, a government secret could be kept for over 25 years. Nowadays, hacks and leaks have shrunk that time considerably. Swire writes:

During the Cold War, the United States developed the basic classification system that exists today. Under Executive Order 13526, an executive agency must declassify its documents after 25 years unless an exception applies, with stricter rules if documents stay classified for 50 years or longer. These time frames are significant, showing a basic mind-set of keeping secrets for a time measured in decades.

Swire notes that there are three main causes: “the continuing effects of Moore’s Law — or the idea that computing power doubles every two years, the sociology of information technologists, and the different source and methods for signals intelligence today compared with the Cold War.” One factor is that spreading leaked information is easier than ever. In the past, it was often difficult to get information published. Newspapers feared legal repercussions if they shared classified information. Anyone can now release secret information, often anonymously, as with WikiLeaks. Governments cannot as easily rely on media gatekeepers to cover up leaks.

Rapid changes in technology or geopolitics often reduce the value of classified information, so the value of some, but not all, classified information also has a half-life. Sometimes it’s days or weeks, and sometimes it’s years. For some secrets, it’s not worth investing the massive amount of computer time that would be needed to break them because by the time you crack the code, the information you wanted to know might have expired.

(As an aside, if you were to invert the problem of all these credit card and SSN leaks, you might conclude that reducing the value of possessing this information would be more effective than spending money to secure it.)

“Our policy (at Facebook) is literally to hire as many talented engineers as we can find. The whole limit in the system is that there are not enough people who are trained and have these skills today.”

— Mark Zuckerberg

The Half-Lives of Careers and Business Models

The issue with information having a half-life should be obvious. Many fields depend on individuals with specialized knowledge, learned through study or experience or both. But what if those individuals are failing to keep up with changes and clinging to outdated facts? What if your doctor is offering advice that has been rendered obsolete since they finished medical school? What if your own degree or qualifications are actually useless? These are real problems, and knowing about half-lives will help you make yourself more adaptable.

While figures for the half-lives of most knowledge-based careers are hard to find, we do know the half-life of an engineering career. A century ago, it would take 35 years for half of what an engineer learned when earning their degree to be disproved or replaced. By the 1960s, that time span shrank to a mere decade. Today that figure is probably even lower.

In a 1966 paper entitled “The Dollars and Sense of Continuing Education,” Thomas Jones calculated the effort that would be required for an engineer to stay up to date, assuming a 10-year half-life. According to Jones, an engineer would need to devote at least five hours per week, 48 weeks a year, to stay up to date with new advancements. A typical degree requires about 4,800 hours of work. Within 10 years, the information learned during 2,400 of those hours would be obsolete. The five-hour figure does not include the time necessary to revise forgotten information that is still relevant. A 40-year career as an engineer would require 9,600 hours of independent study.
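Jones’s figures are easy to check; a few lines reproduce the arithmetic under his stated 1960s assumptions.

```python
# Reproducing Jones's arithmetic (using his assumed figures).
degree_hours = 4800        # study hours in a typical engineering degree
half_life_years = 10       # assumed half-life of that knowledge
hours_per_year = 5 * 48    # five hours per week, 48 weeks per year

print(degree_hours / 2)                  # 2400.0 hours obsolete within 10 years
print(hours_per_year * half_life_years)  # 2400 hours of updating over those 10 years
print(hours_per_year * 40)               # 9600 hours over a 40-year career
```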

Keep in mind that Jones made his calculations in the 1960s. Modern estimates place the half-life of an engineering degree at between 2.5 and 5 years, requiring between 10 and 20 hours of study per week. Welcome to the treadmill, where you have to run faster and faster so that you don’t fall behind.

Unsurprisingly, putting in this kind of time is simply impossible for most people. The result is an ever-shrinking length of a typical engineer’s career and a bias towards hiring recent graduates. A partial escape from this time-consuming treadmill that offers little progress is to recognize the continuous need for learning. If you agree with that, it becomes easier to place time and emphasis on developing heuristics and systems to foster learning. The faster the pace of knowledge change, the more valuable the skill of learning becomes.

A study by PayScale found that the median age of workers in most successful technology companies is substantially lower than that of other industries. Of 32 companies, just six had a median worker age above 35, despite the average across all workers being just over 42. Eight of the top companies had a median worker age of 30 or below — 28 for Facebook, 29 for Google, and 26 for Epic Games. The upshot is that salaries are high for those who can stay current while gaining years of experience.

In a similar vein, business models have ever-shrinking half-lives. The nature of capitalism is that you have to be better this year than you were last year — not to gain market share but to maintain what you already have. If you want to get ahead, you need asymmetry; otherwise, you get lost in trench warfare. How long would it take for half of Uber’s or Facebook’s business model to become irrelevant? It’s hard to imagine it being more than a couple of years or even months.

In The Business Model Innovation Factory: How to Stay Relevant When the World Is Changing, Saul Kaplan highlights the changing half-lives of business models. In the past, models could last for generations. The majority of CEOs oversaw a single business for their entire careers. Business schools taught little about agility or pivoting. Kaplan writes:

During the industrial era once the basic rules for how a company creates, delivers, and captures value were established[,] they became etched in stone, fortified by functional silos, and sustained by reinforcing company cultures. All of a company’s DNA, energy, and resources were focused on scaling the business model and beating back competition attempting to do a better job executing the same business model. Companies with nearly identical business models slugged it out for market share within well-defined industry sectors.

[…]

Those days are over. The industrial era is not coming back. The half-life of a business model is declining. Business models just don’t last as long as they used to. In the twenty-first century business leaders are unlikely to manage a single business for an entire career. Business leaders are unlikely to hand down their businesses to the next generation of leaders with the same business model they inherited from the generation before.

The Burden of Knowledge

The flip side of a half-life is the time it takes for something to double. A useful guideline for calculating doubling time is to divide 70 by the percentage growth rate per period. This formula isn’t perfect, but it gives a good indication. Known as the Rule of 70, it applies only to exponential growth, where the relative growth rate remains consistent, such as with compound interest.

The higher the rate of growth, the shorter the doubling time. For example, if the population of a city is increasing by 2 percent per year, we divide 70 by 2 to get a doubling time of 35 years. The rule of 70 is a useful heuristic; population growth of 2 percent might seem low, but your perspective might change when you consider that the city’s population could double in just 35 years. The Rule of 70 can also be used to calculate the time for an investment to double in value; for example, $100 at 7 percent compound interest will double in just a decade and quadruple in 20 years. The average newborn baby doubles its birth weight in under four months. The average doubling time for a tumor is also four months.
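For comparison, here is the Rule of 70 next to the exact doubling time, ln(2) / ln(1 + r), for a few growth rates. This is a quick sketch, not tied to any particular dataset.

```python
# Rule of 70 vs. the exact doubling time for a steady percentage growth rate.
import math

def rule_of_70(growth_pct):
    """Approximate doubling time, in periods, for a steady growth rate."""
    return 70 / growth_pct

def exact_doubling_time(growth_pct):
    return math.log(2) / math.log(1 + growth_pct / 100)

for rate in (1, 2, 7):
    print(f"{rate}% growth: rule of 70 ~ {rule_of_70(rate):.1f} periods, "
          f"exact ~ {exact_doubling_time(rate):.1f} periods")
```

At 2 percent the rule gives 35 years, matching the city example above; at 7 percent it gives 10 years, matching the investment example.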

We can see how information changes in the figures for how long it takes for a body of knowledge to double in size. The figures quoted by Arbesman (drawn from Little Science, Big Science … and Beyond by Derek J. de Solla Price) are compelling, including:

  • Time for the number of entries in a dictionary of national biographies to double: 100 years
  • Time for the number of universities to double: 50 years
  • Time for the number of known chemical compounds to double: 15 years
  • Time for the number of known asteroids to double: 10 years

Arbesman also gives figures for the time taken for the available knowledge in a particular field to double, including:

  • Medicine: 87 years
  • Mathematics: 63 years
  • Chemistry: 35 years
  • Genetics: 32 years

The doubling of knowledge increases the learning load over time. As a body of knowledge doubles so does the cost of wrapping your head around what we already know. This cost is the burden of knowledge. To be the best in a general field today requires that you know more than the person who was the best only 20 years ago. Not only do you have to be better to be the best, but you also have to be better just to stay in the game.

The corollary is that because there is so much to know, we specialize in very niche areas. This makes it easier to grasp the existing body of facts, keep up to date on changes, and rise to the level of expert. The problem is that specializing also makes it easier to see the world through the narrow focus of your specialty, makes it harder to work with other people (as niches are often dominated by jargon), and makes you prone to overvalue the new and novel.

Conclusion

As we have seen, understanding how half-lives work has numerous practical applications, from determining when radioactive materials will become safe to figuring out effective drug dosages. Half-lives also show us that if we spend time learning something that changes quickly, we might be wasting our time. Like Alice in Wonderland — and a perfect example of the Red Queen Effect — we have to run faster and faster just to keep up with where we are. So if we want our knowledge to compound, we’ll need to focus on the invariant general principles.

***


Activation Energy: Why Getting Started Is the Hardest Part

The beginning of any complex or challenging endeavor is always the hardest part. Not all of us wake up and jump out of bed ready for the day. Some of us, like me, need a little extra energy to transition out of sleep and into the day. Once I’ve had a cup of coffee, my energy level jumps and I’m good for the rest of the day. Chemical reactions work in much the same way. They need their coffee, too. We call this activation energy.

Understanding how this works can be a useful perspective as part of our latticework of mental models.

Whether you use chemistry in your everyday work or have tried your best not to think about it since school, the ideas behind activation energy are simple and useful outside of chemistry. Understanding the principle can, for example, help you get kids to eat their vegetables, motivate yourself and others, and overcome inertia.

How Activation Energy Works in Chemistry

Chemical reactions need a certain amount of energy to begin working. Activation energy is the minimum energy required to cause a reaction to occur.

To understand activation energy, we must first think about how a chemical reaction occurs.

Anyone who has ever lit a fire will have an intuitive understanding of the process, even if they have not connected it to chemistry.

Most of us have a general feel for the heat necessary to start flames. We know that putting a single match to a large log will not be sufficient and a flame thrower would be excessive. We also know that damp or dense materials will require more heat than dry ones. The imprecise amount of energy we know we need to start a fire is representative of the activation energy.

For a reaction to occur, existing bonds must break and new ones form. A reaction will only proceed if the products are more stable than the reactants. In a fire, we convert the carbon in wood into CO2, a more stable form of carbon, so the reaction proceeds and produces heat in the process. In this example, the activation energy is the initial heat required to get the fire started. Our effort and spent matches represent that input.

We can think of activation energy as the energy barrier that sits between the energy minima (the lowest-energy states) of the reactants and the products in a chemical reaction.

The Arrhenius Equation

Svante Arrhenius, a Swedish scientist, established the existence of activation energy in 1889.

Arrhenius developed his eponymous equation to describe the correlation between temperature and reaction rate.

The Arrhenius equation is crucial for calculating the rates of chemical reactions and, importantly, the quantity of energy necessary to start them.

In the Arrhenius equation, k = A·e^(-Ea/RT), k is the rate constant (the reaction rate coefficient), A is the frequency factor (how often molecules collide), R is the universal gas constant (with units of energy per temperature increment per mole), T is the absolute temperature (usually measured in kelvins), and Ea is the activation energy.

It is not necessary to know the value of A to calculate Ea, because Ea can be determined from how the rate constant varies with temperature. Like many equations, it can be rearranged to calculate different values. The Arrhenius equation is used in many branches of chemistry.
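As a minimal sketch of that idea, the two-point form of the Arrhenius equation lets you estimate Ea from rate constants measured at two temperatures, with no need to know A. The function name and the measurements below are made up purely for illustration:

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def activation_energy(k1, T1, k2, T2):
    """Estimate Ea (in J/mol) from rate constants k1 and k2 measured at
    absolute temperatures T1 and T2, assuming the frequency factor A is constant."""
    return R * math.log(k2 / k1) / (1 / T1 - 1 / T2)

# Hypothetical data: the rate constant doubles as temperature rises from 300 K to 310 K.
Ea = activation_energy(k1=1.0, T1=300.0, k2=2.0, T2=310.0)
print(f"Ea is roughly {Ea / 1000:.0f} kJ/mol")  # roughly 54 kJ/mol
```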

Why Activation Energy Matters

Understanding the energy necessary for a reaction to occur gives us control over our surroundings.

Returning to the example of fire, our intuitive knowledge of activation energy keeps us safe. Many chemical reactions have high activation energy requirements, so they do not proceed without an additional input of energy. We all know that a book on a desk is flammable but will not combust without the application of heat. At room temperature, we need not see the book as a fire hazard. If we light a candle on the desk, we know to move the book away.

If chemical reactions did not have reliable activation energy requirements, we would live in a dangerous world.

Catalysts

Chemical reactions which require substantial amounts of energy can be difficult to control.

Increasing temperature is not always a viable source of energy due to costs, safety issues, or simple impracticality. Chemical reactions that occur within our bodies, for example, cannot use high temperatures as a source of activation energy. Consequently, it is sometimes necessary to reduce the activation energy required.

Speeding up a reaction by lowering the activation energy required is called catalysis. This is done with an additional substance known as a catalyst, which is generally not consumed in the reaction. In principle, you only need a tiny amount of catalyst to cause catalysis.

Catalysts work by providing an alternative pathway with lower activation energy requirements. Consequently, more of the particles have sufficient energy to react. Catalysts are used in industrial scale reactions to lower costs.
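A back-of-the-envelope sketch shows why this matters: if a catalyst lowers the activation energy while the frequency factor A stays roughly the same, the Arrhenius equation predicts the rate increases by a factor of exp((Ea_old - Ea_new) / RT). The numbers below are illustrative, not taken from any real reaction:

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def catalytic_speedup(Ea_uncatalyzed, Ea_catalyzed, T):
    """Factor by which the rate increases when a catalyst lowers the activation
    energy (J/mol) at absolute temperature T, assuming A is unchanged."""
    return math.exp((Ea_uncatalyzed - Ea_catalyzed) / (R * T))

# Illustrative: dropping Ea from 75 kJ/mol to 50 kJ/mol at body temperature (310 K).
print(f"About {catalytic_speedup(75_000, 50_000, 310):,.0f}x faster")  # roughly 16,000x faster
```

Even a modest reduction in the barrier produces an enormous speedup, which is why enzymes let reactions run at body temperature that would otherwise be hopelessly slow.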

Returning to the fire example, we know that attempting to light a large log with a match is rarely effective. Adding some paper provides an easier pathway for the fire to get going. Paper is not a true catalyst, since it is consumed, but it plays a similar role; commercial firestarters do the same.

Within our bodies, enzymes serve as catalysts in vital reactions (such as building DNA).

“Energy can have two dimensions. One is motivated, going somewhere, a goal somewhere, this moment is only a means and the goal is going to be the dimension of activity, goal oriented-then everything is a means, somehow it has to be done and you have to reach the goal, then you will relax. But for this type of energy, the goal never comes because this type of energy goes on changing every present moment into a means for something else, into the future. The goal always remains on the horizon. You go on running, but the distance remains the same.
No, there is another dimension of energy: that dimension is unmotivated celebration. The goal is here, now; the goal is not somewhere else. In fact, you are the goal. In fact, there is no other fulfillment than that of this moment–consider the lilies. When you are the goal and when the goal is not in the future, when there is nothing to be achieved, rather you are just celebrating it, then you have already achieved it, it is there. This is relaxation, unmotivated energy.”

— Osho, Tantra

Applying the Concept of Activation Energy to Our Daily Lives

Although activation energy is a scientific concept, we can use it as a practical mental model.

Returning to the morning coffee example, many of the things we do each day depend upon an initial push.

Take the example of a class of students assigned an essay for their coursework. Each student requires a different sort of activation energy to get started. For one student, it might be hearing a friend say she has already finished hers. For another, it might be blocking social media and turning off their phone. A third might need a few cans of Red Bull and an impending deadline, while yet another needs only to read an interesting article on the topic that provides a spark of inspiration. The act of writing an essay requires a certain amount of activation energy, and the source of that energy differs from student to student.

Getting kids to eat their vegetables can be a difficult process. In this case, incentives can act as a catalyst. “You can’t have your dessert until you eat your vegetables” is not only a psychological play on incentives; it also often requires less energy than repeatedly fighting with the kids over their vegetables. Once kids eat one carrot, they generally eat another, and then another. They still want dessert, but you won’t have to remind them each time, so you’ll save a lot of energy.

The concept of activation energy can also apply to making drastic life changes. Anyone who has ever done something dramatic and difficult (such as quitting an addiction, leaving an abusive relationship, quitting a long-term job, or making crucial lifestyle changes) knows that it is necessary to reach a breaking point first. The bigger and more challenging an action is, the more activation energy we require to do it.

Our coffee drinker might need only a little activation energy (a cup or two) to begin the day if they are well rested. It will take a whole lot more coffee to get going if they slept badly and have a dull day to get through.

Conclusion

To understand and use the concept of activation energy in our lives does not require a degree in chemistry. While the concept as used by scientists is complex, we can use the basic idea.

It is no coincidence that many of the most useful mental models in our latticework originate from science. There is something quite poetic about the way in which human behavior mirrors what occurs at a microscopic level.

For other examples, look to Occam’s Razor, falsification, feedback loops, and equilibrium.

Alexander von Humboldt and the Invention of Nature: Creating a Holistic View of the World Through A Web of Interdisciplinary Knowledge

In his piece in 2014’s Edge collection This Idea Must Die: Scientific Theories That Are Blocking Progress, dinosaur paleontologist Scott Sampson writes that science needs to “subjectify” nature. By “subjectify”, he essentially means to see ourselves as connected with nature, and therefore to care about it the same way we care about the people with whom we are connected.

That’s not the current approach. He argues: “One of the most prevalent ideas in science is that nature consists of objects. Of course, the very practice of science is grounded in objectivity. We objectify nature so that we can measure it, test it, and study it, with the ultimate goal of unraveling its secrets. Doing so typically requires reducing natural phenomena to their component parts.”

But this approach is ultimately failing us.

Why? Because much of our unsustainable behavior can be traced to a broken relationship with nature, a perspective that treats the nonhuman world as a realm of mindless, unfeeling objects. Sustainability will almost certainly depend upon developing mutually enhancing relations between humans and nonhuman nature.

This isn’t a new plea, though. Over 200 years ago, the famous naturalist Alexander von Humboldt (1769-1859) was facing the same challenges.

In her compelling book The Invention of Nature: Alexander von Humboldt’s New World, Andrea Wulf explores Humboldt as the first person to publish works promoting a holistic view of nature, arguing that nature could only be understood in relation to the subjectivity of experiencing it.

Fascinated by scientific instruments, measurements and observations, he was driven by a sense of wonder as well. Of course nature had to be measured and analyzed, but he also believed that a great part of our response to the natural world should be based on the senses and emotions.

Humboldt was a rock star scientist who ignored conventional boundaries in his exploration of nature. Humboldt’s desire to know and understand the world led him to investigate discoveries in all scientific disciplines, and to see the interwoven patterns embedded in this knowledge — mental models anyone?

If nature was a web of life, he couldn’t look at it just as a botanist, a geologist or a zoologist. He required information about everything from everywhere.

Humboldt grew up in a world where science was dry, nature mechanical, and man an aloof and separate chronicler of what was before him. Not only did Humboldt have a new vision of what our understanding of nature could be, but he put humans in the middle of it.

Humboldt’s Essay on the Geography of Plants promoted an entirely different understanding of nature. Instead of only looking at an organism, … Humboldt now presented relationships between plants, climate and geography. Plants were grouped into zones and regions rather than taxonomic units. … He gave western science a new lens through which to view the natural world.

Revolutionary for his time, Humboldt rejected the Cartesian ideas of animals as mechanical objects. He also argued passionately against the growing approach in the sciences that put man atop and separate from the rest of the natural world. Promoting a concept of unity in nature, Humboldt saw nature as a “reflection of the whole … an organism in which the parts only worked in relation to each other.”

Furthermore, he believed that “poetry was necessary to comprehend the mysteries of the natural world.”

Wulf paints one of Humboldt’s greatest achievements as his ability and desire to make science available to everyone. No one before him had “combined exact observation with a ‘painterly description of the landscape’”.

By contrast, Humboldt took his readers into the crowded streets of Caracas, across the dusty plains of the Llanos and deep into the rainforest along the Orinoco. As he described a continent that few British had ever seen, Humboldt captured their imagination. His words were so evocative, the Edinburgh Review wrote, that ‘you partake in his dangers; you share his fears, his success and his disappointment.’

In a time when travel was precarious, expensive and unavailable to most people, Humboldt brought his experiences to anyone who could read or listen.

On 3 November 1827, … Humboldt began a series of sixty-one lectures at the university. These proved so popular that he added another sixteen at Berlin’s music hall from 6 December. For six months he delivered lectures several days a week. Hundreds of people attended each talk, which Humboldt presented without reading from his notes. It was lively, exhilarating and utterly new. By not charging any entry fee, Humboldt democratized science: his packed audiences ranged from the royal family to coachmen, from students to servants, from scholars to bricklayers – and half of those attending were women. Berlin had never seen anything like it.

The subjectification of nature is about seeing nature, experiencing it. Humboldt was a master of bringing people to worlds they couldn’t visit, allowing them to feel a part of it. In doing so, he wanted to force humanity to see itself in nature. If we were all part of the giant web, then we all had a responsibility to understand it.

When he listed the three ways in which the human species was affecting the climate, he named deforestation, ruthless irrigation and, perhaps most prophetically, the ‘great masses of steam and gas’ produced in the industrial centres. No one but Humboldt had looked at the relationship between humankind and nature like this before.

His final opus, a series of books called Cosmos, was the culmination of everything that Humboldt had learned and discovered.

Cosmos was unlike any previous book about nature. Humboldt took his readers on a journey from outer space to earth, and then from the surface of the planet into its inner core. He discussed comets, the Milky Way and the solar system as well as terrestrial magnetism, volcanoes and the snow line of mountains. He wrote about the migration of the human species, about plants and animals and the microscopic organisms that live in stagnant water or on the weathered surface of rocks. Where others insisted that nature was stripped of its magic as humankind penetrated into its deepest secrets, Humboldt believed exactly the opposite. How could this be, Humboldt asked, in a world in which the coloured rays of an aurora ‘unite in a quivering sea flame’, creating a sight so otherworldly ‘the splendour of which no description can reach’? Knowledge, he said, could never ‘kill the creative force of imagination’ – instead it brought excitement, astonishment and wondrousness.

This is the ultimate subjectivity of nature: being inspired by its beauty to try to understand how it works. Humboldt respected nature both for the wonders it contains and as the system of which we ourselves are an inseparable part.

Wulf concludes that Humboldt

…was one of the last polymaths, and died at a time when scientific disciplines were hardening into tightly fenced and more specialized fields. Consequently his more holistic approach – a scientific method that included art, history, poetry and politics alongside hard data – has fallen out of favour.

Maybe this is where the subjectivity of nature has gone. But we can learn from Humboldt the value of bringing it back.

In a world where we tend to draw a sharp line between the sciences and the arts, between the subjective and the objective, Humboldt’s insight that we can only truly understand nature by using our imagination makes him a visionary.

A little imagination is all it takes.

How To Mentally Overachieve — Charles Darwin’s Reflections On His Own Mind

We’ve written quite a bit about the marvelous British naturalist Charles Darwin, who with his Origin of Species created perhaps the most intense intellectual debate in human history, one which continues up to this day.

Darwin’s Origin was a courageous and detailed thought piece on the nature and development of biological species. It’s the starting point for nearly all of modern biology.

But, as we’ve noted before, Darwin was not a man of pure IQ. He was not Isaac Newton, or Richard Feynman, or Albert Einstein, breezing through complex mathematical physics at a young age.

Charlie Munger thinks Darwin would have placed somewhere in the middle of a good private high school class. He was also in notoriously bad health for most of his adult life and, by his son’s estimation, a terrible sleeper. He really only worked a few hours a day in the many years leading up to the Origin of Species.

Yet his “thinking work” outclassed almost everyone else’s. An incredible story.

In his autobiography, Darwin reflected on this peculiar state of affairs. What was he good at that led to the result? What was he so weak at? Why did he achieve better thinking outcomes? As he put it, his goal was to:

“Try to analyse the mental qualities and the conditions on which my success has depended; though I am aware that no man can do this correctly.”

In studying Darwin ourselves, we hope to better appreciate our own strengths and weaknesses and to understand the working methods of a “mental overachiever.”

Let’s explore what Darwin saw in himself.

***

1. He did not have a quick intellect or an ability to follow long, complex, or mathematical reasoning. He may have been a bit hard on himself, but Darwin realized that he wasn’t a “5 second insight” type of guy (and let’s face it, most of us aren’t). His life also proves how little that trait matters if you’re aware of it and counter-weight it with other methods.

I have no great quickness of apprehension or wit which is so remarkable in some clever men, for instance, Huxley. I am therefore a poor critic: a paper or book, when first read, generally excites my admiration, and it is only after considerable reflection that I perceive the weak points. My power to follow a long and purely abstract train of thought is very limited; and therefore I could never have succeeded with metaphysics or mathematics. My memory is extensive, yet hazy: it suffices to make me cautious by vaguely telling me that I have observed or read something opposed to the conclusion which I am drawing, or on the other hand in favour of it; and after a time I can generally recollect where to search for my authority. So poor in one sense is my memory, that I have never been able to remember for more than a few days a single date or a line of poetry.

2. He did not feel easily able to write clearly and concisely. He compensated by getting things down quickly and then coming back to them later, thinking them through again and again. Slow, methodical … and ridiculously effective: for those who haven’t read it, the Origin of Species is extremely readable and clear, even now, 150 years later.

I have as much difficulty as ever in expressing myself clearly and concisely; and this difficulty has caused me a very great loss of time; but it has had the compensating advantage of forcing me to think long and intently about every sentence, and thus I have been led to see errors in reasoning and in my own observations or those of others.

There seems to be a sort of fatality in my mind leading me to put at first my statement or proposition in a wrong or awkward form. Formerly I used to think about my sentences before writing them down; but for several years I have found that it saves time to scribble in a vile hand whole pages as quickly as I possibly can, contracting half the words; and then correct deliberately. Sentences thus scribbled down are often better ones than I could have written deliberately.

3. He forced himself to be an incredibly effective and organized collector of information. Darwin’s system of reading and indexing facts in large portfolios is worth emulating, as is the habit of taking down conflicting ideas immediately.

As in several of my books facts observed by others have been very extensively used, and as I have always had several quite distinct subjects in hand at the same time, I may mention that I keep from thirty to forty large portfolios, in cabinets with labelled shelves, into which I can at once put a detached reference or memorandum. I have bought many books, and at their ends I make an index of all the facts that concern my work; or, if the book is not my own, write out a separate abstract, and of such abstracts I have a large drawer full. Before beginning on any subject I look to all the short indexes and make a general and classified index, and by taking the one or more proper portfolios I have all the information collected during my life ready for use.

4. He had possibly the most valuable trait in any sort of thinker: a passionate interest in understanding reality and putting it in useful order in his head. This “Reality Orientation” is hard to measure and certainly does not show up on IQ tests, but it probably determines, to some extent, success in life.

On the favourable side of the balance, I think that I am superior to the common run of men in noticing things which easily escape attention, and in observing them carefully. My industry has been nearly as great as it could have been in the observation and collection of facts. What is far more important, my love of natural science has been steady and ardent.

This pure love has, however, been much aided by the ambition to be esteemed by my fellow naturalists. From my early youth I have had the strongest desire to understand or explain whatever I observed,–that is, to group all facts under some general laws. These causes combined have given me the patience to reflect or ponder for any number of years over any unexplained problem. As far as I can judge, I am not apt to follow blindly the lead of other men. I have steadily endeavoured to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject), as soon as facts are shown to be opposed to it.

Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified. This has naturally led me to distrust greatly deductive reasoning in the mixed sciences. On the other hand, I am not very sceptical—a frame of mind which I believe to be injurious to the progress of science. A good deal of scepticism in a scientific man is advisable to avoid much loss of time, but I have met with not a few men, who, I feel sure, have often thus been deterred from experiment or observations, which would have proved directly or indirectly serviceable.

[…]

Therefore my success as a man of science, whatever this may have amounted to, has been determined, as far as I can judge, by complex and diversified mental qualities and conditions. Of these, the most important have been—the love of science—unbounded patience in long reflecting over any subject—industry in observing and collecting facts—and a fair share of invention as well as of common sense.

5. Most inspirational to us of average intellect, he outperformed his own mental aptitude with these good habits, surprising even himself with the results.

With such moderate abilities as I possess, it is truly surprising that I should have influenced to a considerable extent the belief of scientific men on some important points.

***

Still Interested? Read his autobiography, read The Origin of Species, or check out David Quammen’s wonderful short biography of the most important period of Darwin’s life. Also, if you missed it, check out our prior post on Darwin’s Golden Rule.