Category: Science

How To Spot Bad Science

In a digital world that clamors for clicks, news is sensationalized and “facts” change all the time. Here’s how to discern what is trustworthy and what is hogwash.

***

Unless we’ve studied it, most of us are never taught how to evaluate science or how to parse the good from the bad. Yet science dictates every area of our lives and is vital for helping us understand how the world works. Appraising research for yourself takes time and effort, however. Often, it can be enough to consult an expert or read a trustworthy source.

But some decisions require us to understand the underlying science. There is no way around it. Many of us hear about scientific developments from news articles and blog posts. Some sources put the work into presenting useful information. Others manipulate or misinterpret results to get more clicks. So we need the thinking tools necessary to know what to listen to and what to ignore. When it comes to important decisions, like knowing what individual action to take to minimize your contribution to climate change or whether to believe the friend who cautions against vaccinating your kids, being able to assess the evidence is vital.

Much of the growing (and concerning) mistrust of scientific authority is based on a misunderstanding of how it works and a lack of awareness of how to evaluate its quality. Science is not some big immovable mass. It is not infallible. It does not pretend to be able to explain everything or to know everything. Furthermore, there is no such thing as “alternative” science. Science does involve mistakes. But we have yet to find a system of inquiry capable of achieving what it does: move us closer and closer to truths that improve our lives and understanding of the universe.

“Rather than love, than money, than fame, give me truth.”

— Henry David Thoreau

There is a difference between bad science and pseudoscience. Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases. Often, it’s produced with the best of intentions, just by researchers who are responding to skewed incentives.

Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove. Pseudoscience focuses on finding evidence to confirm it, disregarding disconfirmation. Practitioners invent narratives to preemptively ignore any actual science contradicting their views. It may adopt the appearance of actual science to look more persuasive.

While the tools and pointers in this post are geared towards identifying bad science, they will also make pseudoscience easier to spot.

Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis. It takes many repetitions of applying this method to build reasonable support for a hypothesis.

For a hypothesis to count as scientific, it must be falsifiable: there must be some conceivable evidence that, if collected, would disprove it.

In this post, we’ll talk you through two examples of bad science to point out some of the common red flags. Then we’ll look at some of the hallmarks of good science you can use to sort the signal from the noise. We’ll focus on the type of research you’re likely to encounter on a regular basis, including medicine and psychology, rather than areas less likely to be relevant to your everyday life.

[Note: we will use the terms “research” and “science” and “researcher” and “scientist” interchangeably here.]

Power Posing

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” ―Isaac Asimov

First, here’s an example of flawed science from psychology: power posing. A 2010 study by Dana Carney, Andy J. Yap, and Amy Cuddy entitled “Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance” claimed “open, expansive” poses caused participants to experience elevated testosterone levels, reduced cortisol levels, and greater risk tolerance. These are all excellent things in a high-pressure situation, like a job interview. The abstract concluded that “a person can, via a simple two-minute pose, embody power and instantly become more powerful.” The idea took off. It spawned hundreds of articles, videos, and tweets espousing the benefits of including a two-minute power pose in your day.

Yet at least eleven follow-up studies, many led by Joseph Cesario of Michigan State University, including “‘Power Poses’ Don’t Work, Eleven New Studies Suggest,” failed to replicate the results. None found that power posing has a measurable impact on people’s performance in tasks or on their physiology. While subjects did report a subjective feeling of increased powerfulness, their performance did not differ from that of subjects who did not strike a power pose.

One of the researchers of the original study, Carney, has since changed her mind about the effect. Carney stated she no longer believes the results of the original study. Unfortunately, this isn’t always how researchers respond when confronted with evidence discrediting their prior work. We all know how uncomfortable changing our minds is.

The notion of power posing is exactly the kind of nugget that spreads fast online. It’s simple, free, intuitive, and promises dramatic benefits with minimal effort. We all know posture is important. It has a catchy, memorable name. Yet examining the details of the original study reveals a whole parade of red flags. The study had just 42 participants. That might be reasonable for a preliminary or pilot study, but it is in no way sufficient to “prove” anything. It was not blinded. Feedback from participants was self-reported, a method notorious for being biased and inaccurate.

There is also a clear correlation/causation issue. Powerful, dominant animals tend to use expansive body language that exaggerates their size. Humans often do the same. But that doesn’t mean it’s the pose making them powerful. Being powerful could make them pose that way.

A TED Talk in which Amy Cuddy, the study’s co-author, claimed power posing could “significantly change the way your life unfolds” is one of the most popular to date, with tens of millions of views. The presentation of the science in the talk is also suspect. Cuddy makes strong claims with a single, small study as justification. She portrays power posing as a panacea. Likewise, the original study’s claim that a power pose makes someone “instantly become more powerful” is suspiciously strong.

This is one of many examples of psychological studies about small tweaks in our behavior that have not stood up to scrutiny. We’re not singling out the power pose study as being unusually flawed or in any way fraudulent. The researchers had clear good intentions and a sincere belief in their work. It’s a strong example of why we should go straight to the source if we want to understand research. Coverage elsewhere is unlikely to even mention methodological details or acknowledge any shortcomings. It would ruin the story. We even covered power posing on Farnam Street in 2016—we’re all susceptible to taking these ‘scientific’ results seriously without checking the validity of the underlying science.

It is a good idea to be skeptical of research promising anything too dramatic or extreme with minimal effort, especially without substantial evidence. If it seems too good to be true, it most likely is.

Green Coffee Beans

“An expert is a person who has made all the mistakes that can be made in a very narrow field.” ―Niels Bohr

The world of weight-loss science is one where bad science is rampant. We all know, deep down, that we cannot circumvent the need for healthy eating and exercise. Yet the search for a magic bullet, offering results without effort or risks, continues. Let’s take a look at one study that is a masterclass in bad science.

Entitled “Randomized, Double-Blind, Placebo-Controlled, Linear Dose, Crossover Study to Evaluate the Efficacy and Safety of a Green Coffee Bean Extract in Overweight Subjects,” it was published in 2012 in the journal Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy. On the face of it, and to the untrained eye, the study may appear legitimate, but it is rife with serious problems, as Scott Gavura explained in the article “Dr. Oz and Green Coffee Beans – More Weight Loss Pseudoscience” in the publication Science-Based Medicine. The original paper was later retracted by its authors. The Federal Trade Commission (FTC) ordered the supplement manufacturer who funded the study to pay a $3.5 million fine for using it in their marketing materials, describing it as “botched.”

The Food and Drug Administration (FDA) recommends that studies relating to weight loss consist of at least 3,000 participants receiving the active medication and at least 1,500 receiving a placebo, all for a minimum period of 12 months. This study used a mere 16 subjects, with no clear selection criteria or explanation. None of the researchers involved had medical experience or had published related research. They did not disclose the conflict of interest inherent in the funding source. The paper didn’t describe any efforts to avoid confounding factors, and it is vague and inconsistent about whether subjects changed their diet and exercise. The study was not double-blinded, despite claiming to be. It has not been replicated.

The FTC reported that the study’s lead investigator “repeatedly altered the weights and other key measurements of the subjects, changed the length of the trial, and misstated which subjects were taking the placebo or GCA during the trial.” A meta-analysis by Rachel Buchanan and Robert D. Beckett, “Green Coffee for Pharmacological Weight Loss” published in the Journal of Evidence-Based Complementary & Alternative Medicine, failed to find evidence for green coffee beans being safe or effective; all the available studies had serious methodological flaws, and most did not comply with FDA guidelines.

Signs of Good Science

“That which can be asserted without evidence can be dismissed without evidence.” ―Christopher Hitchens

We’ve inverted the problem and considered some of the signs of bad science. Now let’s look at some of the indicators a study is likely to be trustworthy. Unfortunately, there is no single sign a piece of research is good science. None of the signs mentioned here are, alone, in any way conclusive. There are caveats and exceptions to all. These are simply factors to evaluate.

It’s Published by a Reputable Journal

“The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations.” —Karl Popper

A journal, any journal, publishing a study says little about its quality. Some will publish any research they receive in return for a fee. A few so-called “vanity publishers” claim to have a peer-review process, yet they typically have a short gap between receiving a paper and publishing it. We’re talking days or weeks, not the expected months or years. Many predatory publishers do not even make any attempt to verify quality.

No journal is perfect. Even the most respected journals make mistakes and publish low-quality work sometimes. However, anything that is not published research or based on published research in a journal is not worth consideration. Not as science. A blog post saying green smoothies cured someone’s eczema is not comparable to a published study. The barrier is too low. If someone cared enough about using a hypothesis or “finding” to improve the world and educate others, they would make the effort to get it published. The system may be imperfect, but reputable researchers will generally make the effort to play within it to get their work noticed and respected.

It’s Peer Reviewed

Peer review is a standard process in academic publishing. It’s intended as an objective means of assessing the quality and accuracy of new research. Uninvolved researchers with relevant experience evaluate papers before publication. They consider factors like how well it builds upon pre-existing research or if the results are statistically significant. Peer review should be double-blinded. This means the researcher doesn’t know who is reviewing their work and the reviewer doesn’t know who the researcher is.

Publishers only perform a cursory “desk check” before moving on to peer review. This is to check for major errors, nothing more. They cannot have the expertise necessary to vet the quality of every paper they handle—hence the need for external experts. The number of reviewers and strictness of the process depends on the journal. Reviewers either declare a paper unpublishable or suggest improvements. It is rare for them to suggest publishing without modifications.

Sometimes several rounds of modifications prove necessary. It can take years for a paper to see the light of day, which is no doubt frustrating for the researcher. But the process helps ensure fewer mistakes and weak areas make it into print.

Pseudoscientific practitioners will often claim they cannot get their work published because peer reviewers suppress anything contradicting prevailing doctrines. Good researchers know having their work challenged and argued against is positive. It makes them stronger. They don’t shy away from it.

Peer review is not a perfect system. Seeing as it involves humans, there is always room for bias and manipulation. In a small field, it may be easy for a reviewer to get past the double-blinding. However, as it stands, peer review seems to be the best available system. In isolation, it’s not a guarantee that research is perfect, but it’s one factor to consider.

The Researchers Have Relevant Experience and Qualifications

One of the red flags in the green coffee bean study was that the researchers involved had no medical background or experience publishing obesity-related research.

While outsiders can sometimes make important advances, researchers should have relevant qualifications and a history of working in that field. It is too difficult to make scientific advancements without the necessary background knowledge and expertise. If someone cares enough about advancing a given field, they will study it. If it’s important, verify their backgrounds.

It’s Part of a Larger Body of Work

“Science, my lad, is made up of mistakes, but they are mistakes which it is useful to make, because they lead little by little to the truth.” ―Jules Verne

We all like to stand behind the maverick. But we should be cautious of doing so when it comes to evaluating the quality of science. On the whole, science does not progress in great leaps. It moves along millimeter by millimeter, gaining evidence in increments. Even if a piece of research is presented as groundbreaking, it has years of work behind it.

Researchers do not work in isolation. Good science is rarely, if ever, the result of one person or even one organization. It comes from a monumental collective effort. So when evaluating research, it is important to see if other studies point to similar results and if it is an established field of work. For this reason, meta-analyses, which analyze the combined results of many studies on the same topic, are often far more useful to the public than individual studies. Scientists are humans and they all make mistakes. Looking at a collective body of work helps smooth out any problems. Individual studies are valuable in that they further the field as a whole, allowing for the creation of meta-studies.

Science is about evidence, not reputation. Sometimes well-respected researchers, for whatever reason, produce bad science. Sometimes outsiders produce amazing science. What matters is the evidence they have to support it. While an established researcher may have an easier time getting support for their work, the overall community accepts work on merit. When we look to examples of unknowns who made extraordinary discoveries out of the blue, they always had extraordinary evidence for it.

Questioning the existing body of research is not inherently bad science or pseudoscience. Doing so without a remarkable amount of evidence is.

It Doesn’t Promise a Panacea or Miraculous Cure

Studies that promise anything a bit too amazing can be suspect. This is more common in media reporting of science or in research used for advertising.

In medicine, a panacea is something that can supposedly solve all, or many, health problems. These claims are rarely substantiated by anything even resembling evidence. The more outlandish the claim, the less likely it is to be true. Occam’s razor teaches us to prefer the explanation that requires the fewest assumptions. This is a useful heuristic for evaluating potential magic bullets.

It Avoids or at Least Discloses Potential Conflicts of Interest

A conflict of interest is anything that incentivizes producing a particular result. It distorts the pursuit of truth. A government study into the health risks of recreational drug use will be biased towards finding evidence of negative risks. A study of the benefits of breakfast cereal funded by a cereal company will be biased towards finding plenty of benefits. Researchers do have to get funding from somewhere, so this does not automatically make a study bad science. But research without conflicts of interest is more likely to be good science.

High-quality journals require researchers to disclose any potential conflicts of interest. But not all journals do. Media coverage of research may not mention this (another reason to go straight to the source). And people do sometimes lie. We don’t always know how unconscious biases influence us.

It Doesn’t Claim to Prove Anything Based on a Single Study

In the vast majority of cases, a single study is a starting point, not proof of anything. The results could be random chance, or the result of bias, or even outright fraud. Only once other researchers replicate the results can we consider a study persuasive. The more replications, the more reliable the results are. If attempts at replication fail, this can be a sign the original research was biased or incorrect.

A note on anecdotes: they’re not science. Anecdotes, especially from people close to us or those who have a lot of letters behind their name, carry disproportionate clout. But hearing something from one person, no matter how persuasive, should not be enough to discredit published research.

Science is about evidence, not proof. And evidence can always be discredited.

It Uses a Reasonable, Representative Sample Size

A representative sample mirrors the wider population rather than a single segment of it. If it does not, the results may only apply to people in that demographic, not everyone. Bad science will often also use very small sample sizes.

There is no set target for what makes a large enough sample size; it all depends on the nature of the research. In general, the larger, the better. The exception is in studies that may put subjects at risk, which use the smallest possible sample to achieve usable results.
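
To see why size matters, here is a quick simulation using only the standard library (all numbers are invented for illustration): we pretend a supplement has no effect at all, then watch how wildly tiny trials can swing compared with large ones.

```python
import random
import statistics

random.seed(42)

def trial_estimate(n):
    """Simulate one trial of n subjects whose true average effect is
    zero (the treatment does nothing), with individual noise of sd 4."""
    return statistics.mean(random.gauss(0.0, 4.0) for _ in range(n))

# Run 1,000 simulated trials at two sample sizes. With no real effect,
# a trustworthy trial should report an estimate near zero.
small = [trial_estimate(16) for _ in range(1000)]     # green-coffee-sized
large = [trial_estimate(3000) for _ in range(1000)]   # FDA-recommended size

print(f"n=16:   spread of estimates sd={statistics.stdev(small):.2f}")
print(f"n=3000: spread of estimates sd={statistics.stdev(large):.2f}")
```

The 16-subject trials routinely report sizable “effects” of a treatment that does nothing, purely through sampling noise; the 3,000-subject trials cluster tightly around zero.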

In areas like nutrition and medicine, it’s also important for a study to last a long time. A study looking at the impact of a supplement on blood pressure over a week is far less useful than one over a decade. Long-term data smooths out fluctuations and offers a more comprehensive picture.

The Results Are Statistically Significant

Statistical significance is about whether a result is bigger than chance alone could plausibly produce. It is usually measured with a p-value: the probability of seeing an effect at least as large as the one observed if there were, in fact, no real effect. The threshold for significance varies between fields. Check whether the reported p-value falls below the field’s accepted threshold and whether the confidence interval excludes “no effect.” If not, the result is not worth paying attention to.
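
To make the idea concrete, here is a minimal permutation test on made-up numbers (the groups and values are invented for illustration): it estimates how often randomly relabeling who was in which group would produce a difference as extreme as the one observed.

```python
import random

random.seed(0)

# Hypothetical data: weight change (kg) in a tiny treatment group and
# control group.
treatment = [-2.1, -1.4, -3.0, -0.5, -2.6, -1.9]
control   = [-0.3,  0.4, -1.1,  0.2, -0.8,  0.1]

observed = sum(treatment) / len(treatment) - sum(control) / len(control)

# Shuffle the group labels many times; count how often chance alone
# produces a difference at least as extreme as the observed one.
pooled = treatment + control
trials = 10_000
n_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    relabeled = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    if relabeled <= observed:  # at least as extreme (more negative)
        n_extreme += 1

p_value = n_extreme / trials
print(f"observed difference: {observed:.2f} kg, p ≈ {p_value:.4f}")
```

Here the labels almost never reproduce the observed gap by chance, so the difference is statistically significant at the conventional 0.05 threshold; with noisier data, the same procedure would report a p-value too large to take seriously.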

It Is Well Presented and Formatted

“When my information changes, I alter my conclusions. What do you do, sir?” ―John Maynard Keynes

As basic as it sounds, we can expect good science to be well presented and carefully formatted, without prominent typos or sloppy graphics.

It’s not that bad presentation makes something bad science. It’s more the case that researchers producing good science have an incentive to make it look good. As Michael J. I. Brown of Monash University explains in “How to Quickly Spot Dodgy Science,” this is far more than a matter of aesthetics. The way a paper looks can be a useful heuristic for assessing its quality. Researchers who are dedicated to producing good science can spend years on a study, fretting over its results and investing in gaining support from the scientific community. This means they are less likely to present work looking bad. Brown gives an example of looking at an astrophysics paper and seeing blurry graphs and misplaced image captions—then finding more serious methodological issues upon closer examination. Alongside other factors, sloppy formatting can sometimes be a red flag. At the minimum, a thorough peer-review process should eliminate glaring errors.

It Uses Control Groups and Double-Blinding

A control group serves as a point of comparison in a study. The control group should be people as similar as possible to the experimental group, except they’re not subject to whatever is being tested. The control group may also receive a placebo to see how the outcome compares.

Blinding refers to the practice of obscuring which group participants are in. For a single-blind experiment, the participants do not know if they are in the control or the experimental group. In a double-blind experiment, neither the participants nor the researchers know. This is the gold standard and is essential for trustworthy results in many types of research. If people know which group they are in, the results are not trustworthy. If researchers know, they may (unintentionally or not) nudge participants towards the outcomes they want or expect. So a double-blind study with a control group is far more likely to be good science than one without.
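
As a toy sketch of the mechanics (the participant IDs and kit codes are invented), here is how a double-blind allocation might be organized: assignment to arms is randomized, and everyone involved in running the trial sees only opaque kit codes until the analysis is locked.

```python
import random

random.seed(7)

participants = [f"P{i:02d}" for i in range(1, 21)]

# Randomize: shuffle the participants, then split into two equal arms.
shuffled = participants[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
allocation = {pid: ("treatment" if i < half else "placebo")
              for i, pid in enumerate(shuffled)}

# Blinding: each participant receives a kit labeled only with a code.
# The code-to-arm key is sealed away; the researchers dispensing kits
# and the participants taking them see only the codes.
kit_codes = dict(zip(participants,
                     random.sample(range(1000, 10000), len(participants))))

treatment_arm = [p for p, arm in allocation.items() if arm == "treatment"]
print(f"{len(treatment_arm)} of {len(participants)} assigned to treatment")
```

The key design point is that the `allocation` mapping is generated once, by chance, and kept away from everyone who interacts with participants, so neither hopes nor expectations can nudge the outcome.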

It Doesn’t Confuse Correlation and Causation

In the simplest terms, two things are correlated if they tend to happen together. Causation is when one thing causes another thing to happen. For example, one large-scale study entitled “Are Non-Smokers Smarter than Smokers?” found that people who smoke tobacco tend to have lower IQs than those who don’t. Does this mean smoking lowers your IQ? It might, but there is also a strong link between socio-economic status and smoking. People of low income are, on average, likely to have lower IQs than those with higher incomes due to factors like worse nutrition, less access to education, and sleep deprivation. According to a study by the Centers for Disease Control and Prevention entitled “Cigarette Smoking and Tobacco Use Among People of Low Socioeconomic Status,” people of low socio-economic status are also more likely to smoke and to do so from a young age. There might be a correlation between smoking and IQ, but that doesn’t mean causation.

Disentangling correlation and causation can be difficult, but good science will take this into account and may detail potential confounding factors or the efforts made to avoid them.
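
The smoking example can be sketched as a simulation (every number here is invented): socioeconomic status drives both smoking and measured IQ, smoking has no direct effect on IQ anywhere in the model, and yet smokers still score lower.

```python
import math
import random

random.seed(1)

n = 5000
# Confounder: socioeconomic status (standardized score).
ses = [random.gauss(0, 1) for _ in range(n)]
# Lower SES -> higher probability of smoking (logistic link).
smokes = [random.random() < 1 / (1 + math.exp(2 * s)) for s in ses]
# IQ depends on SES plus noise. Note: smoking does NOT appear here.
iq = [100 + 7 * s + random.gauss(0, 10) for s in ses]

smoker_iq = [q for q, sm in zip(iq, smokes) if sm]
nonsmoker_iq = [q for q, sm in zip(iq, smokes) if not sm]
gap = (sum(nonsmoker_iq) / len(nonsmoker_iq)
       - sum(smoker_iq) / len(smoker_iq))
print(f"IQ gap (non-smokers minus smokers): {gap:.1f} points")
```

A naive reading of the output would conclude smoking lowers IQ by several points; in the model, the entire gap is manufactured by the shared upstream cause.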

Conclusion

“The scientist is not a person who gives the right answers, he’s one who asks the right questions.” ―Claude Lévi-Strauss

The points raised in this article are all aimed at the linchpin of the scientific method—we cannot necessarily prove anything; we must consider the most likely outcome given the information we have. Bad science is generated by those who are willfully ignorant or are so focused on trying to “prove” their hypotheses that they fudge results and cherry-pick to shape their data to their biases. The problem with this approach is that it transforms what could be empirical and scientific into something subjective and ideological.

When we look to disprove what we know, we are able to approach the world with a more flexible way of thinking. If we are unable to defend what we know with reproducible evidence, we may need to reconsider our ideas and adjust our worldviews accordingly. Only then can we properly learn and begin to make forward steps. Through this lens, bad science and pseudoscience are simply the intellectual equivalent of treading water, or even sinking.

Article Summary

  • Most of us are never taught how to evaluate science or how to parse the good from the bad. Yet it is something that dictates every area of our lives.
  • Bad science is a flawed version of good science, with the potential for improvement. It follows the scientific method, only with errors or biases.
  • Pseudoscience has no basis in the scientific method. It does not attempt to follow standard procedures for gathering evidence. The claims involved may be impossible to disprove.
  • Good science is science that adheres to the scientific method, a systematic method of inquiry involving making a hypothesis based on existing knowledge, gathering evidence to test if it is correct, then either disproving or building support for the hypothesis.
  • Science is about evidence, not proof. And evidence can always be discredited.
  • In science, if it seems too good to be true, it most likely is.

Signs of good science include:

  • It’s Published by a Reputable Journal
  • It’s Peer Reviewed
  • The Researchers Have Relevant Experience and Qualifications
  • It’s Part of a Larger Body of Work
  • It Doesn’t Promise a Panacea or Miraculous Cure
  • It Avoids or at Least Discloses Potential Conflicts of Interest
  • It Doesn’t Claim to Prove Anything Based on a Single Study
  • It Uses a Reasonable, Representative Sample Size
  • The Results Are Statistically Significant
  • It Is Well Presented and Formatted
  • It Uses Control Groups and Double-Blinding
  • It Doesn’t Confuse Correlation and Causation

The Stormtrooper Problem: Why Thought Diversity Makes Us Better

Diversity of thought makes us stronger, not weaker. Without diversity, we die off as a species. We can no longer adapt to changes in the environment. We need each other to survive.

***

Diversity is how we survive as a species. This is a quantifiable fact easily observed in the biological world. From niches to natural selection, diversity is the common theme of success for both the individual and the group.

Take the central idea of natural selection: The genes, individuals, groups, and species with the most advantageous traits in a given environment survive and reproduce in greater numbers. Eventually, those advantageous traits spread. The overall population becomes more suited to that environment. This occurs at multiple levels, from single genes to entire ecosystems.

That said, natural selection cannot operate without a diverse set of traits to select from! Without variation, selection cannot improve the lot of the higher-level group.
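
The point can be sketched with a toy simulation (all parameters arbitrary): selection raises a population's average trait only when there is variation to select from, while a uniform population barely moves, creeping along on mutation alone.

```python
import random

random.seed(3)

def next_generation(pop, keep=0.5, mutation_sd=0.05):
    """Keep the fittest half (higher trait = fitter here) and refill
    the population with slightly mutated copies of the survivors."""
    survivors = sorted(pop, reverse=True)[: int(len(pop) * keep)]
    offspring = [t + random.gauss(0, mutation_sd) for t in survivors]
    return survivors + offspring

diverse = [random.gauss(0, 1) for _ in range(100)]  # varied traits
uniform = [0.0] * 100                               # identical traits

for _ in range(20):
    diverse = next_generation(diverse)
    uniform = next_generation(uniform)

print(f"diverse population mean trait: {sum(diverse) / len(diverse):.2f}")
print(f"uniform population mean trait: {sum(uniform) / len(uniform):.2f}")
```

The same selection rule is applied to both populations; only the one that started with variation improves substantially, because in the first generations the uniform population gives selection nothing to choose between.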

Diversity of Thought

We often seem to struggle with diversity of thought. This type of diversity shouldn’t threaten us. It should energize us. It means we have a wider variety of resources to deal with the inevitable challenges we face as a species.

Imagine that a meteor is on its way to earth. A crash would be the end of everyone. No matter how self-involved we are, no one wants to see humanity wiped out. So what do we do? Wouldn’t you hope that we could call on more than three people to help find a solution?

Ideally there would be thousands of people with different skills and backgrounds tackling this meteor problem, many minds and lots of options for changing the rock’s course and saving life as we know it. The diversity of backgrounds—variations in skills, knowledge, ways of looking at and understanding the problem—might be what saves the day. But why wait for the threat? A smart species would recognize that if diversity of knowledge and skills would be useful for dealing with a meteor, then diversity would probably be useful in a whole set of other situations.

For example, very few businesses can get by with one knowledge set that will take their product from concept to the homes of customers. You would never imagine that a business could be staffed with clones and be successful. It would be the ultimate in social proof. Everyone would literally be saying the same thing.

The Stormtrooper Problem

Intelligence agencies face a unique set of problems that require creative, un-googleable solutions to one-off problems.

You’d naturally think they would value and seek out diversity in order to solve those problems. And you’d be wrong. It is increasingly hard to get a security clearance.

Do you have a lot of debt? That might make you susceptible to blackmail. Divorced? You might be an emotional wreck, which could mean you’ll make emotional decisions and not rational ones. Do something as a youth that you don’t want anyone to know? That makes it harder to trust you. Gay but haven’t told anyone? Blackmail risk. Independently wealthy? That means you don’t need our paycheck, which means you might be harder to work with. Do you have a nuanced opinion of politics? What about Edward Snowden? Yikes. The list goes on.

As the process gets harder and harder (trying to reduce risk), there is less and less diversity in the door. The people that make it through the door are Stormtroopers.

And if you’re one of the lucky Stormtroopers to make it in, you’re given a checklist career development path. If you want a promotion, you know the exact experience and training you need to receive one. It’s simple. It doesn’t require much thought on your part.

The combination of these two things means that employees increasingly look at—and attempt to solve—problems the same way. The workforce is less effective than it used to be. This means you have to hire more people to do the same thing or outsource more work to people that hire misfits. This is the Stormtrooper problem.

Creativity and Innovation

Diversity is necessary in the workplace to generate creativity and innovation. It’s also necessary to get the job done. Teams with members from different backgrounds can attack problems from all angles and identify more possible solutions than teams whose members think alike. Companies also need diverse skills and knowledge to keep the business functioning. Finance superstars may not be the same people who will rock marketing. And the faster things change, the more valuable diversity becomes for allowing us to adapt and seize opportunity.

We all know that any one person doesn’t have it all figured out and cannot possibly do it all. We can all recognize that we rely on thousands of other people every day just to live. We interact with the world through the products we use, the entertainment we consume, the services we provide. So why do differences often unsettle us?

Any difference can raise this reaction: gender, race, ethnic background, sexual orientation. Often, we hang out with others like us because, let’s face it, communicating is easier with people who are having a similar life experience. And most of us like to feel that we belong. But a sense of belonging should not come at the cost of diversity.

Where Birds Got Feathers

Consider this: Birds did not get their feathers for flying. They originally developed them for warmth, or for being more attractive to potential mates. It was only after feathers started appearing that birds eventually began to fly. Feathers are considered an exaptation, something that evolved for one purpose but then became beneficial for other reasons. When the environment changes, which it inevitably does, a species has a significantly increased chance of survival if it has a diversity of traits that it can re-purpose. What can we re-purpose if everyone looks, acts, and thinks the same?

Further, a genetically homogeneous population is easy to wipe out. It baffles me that anyone thinks homogeneity is a good idea. Consider the Irish Potato Famine. In the mid-19th century a potato disease made its way around much of the world. Although it devastated potato crops everywhere, only in Ireland did it result in widespread devastation and death. About one quarter of Ireland’s population died or emigrated to avoid starvation over just a few years. Why did this potato disease have such significant consequences there and not anywhere else?

The short answer is a lack of diversity. The potato was the staple crop for Ireland’s poor. Tenant farms were so small that only potatoes could be grown in sufficient quantity to—barely—feed a family. Too many people depended on this one crop to meet their nutritional needs. In addition, the Irish primarily grew one type of potato, so most of the crops were vulnerable to the same disease. Once the blight hit, it easily infected potato fields all over Ireland, because they were all the same.

You can’t adapt if you have nothing to adapt with. If we are all the same, if we’ve wiped out every difference because we find difference less challenging, then we increase our vulnerability to complete extinction. Are we too much alike to survive unforeseen challenges?

Even the reproductive process is, at its core, about diversity. You get half your genes from your mother and half from your father. These can be combined in so many different ways that even siblings are genetically unique.
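The scale of that “so many different ways” can be sketched with back-of-the-envelope arithmetic. Humans have 23 chromosome pairs; ignoring crossover (which makes the true number far larger), each parent can pass on any of 2^23 chromosome combinations:

```python
# Back-of-the-envelope: ignoring crossover, each parent passes one
# chromosome from each of their 23 pairs to a child, giving 2**23
# possible gametes per parent.
CHROMOSOME_PAIRS = 23
gametes_per_parent = 2 ** CHROMOSOME_PAIRS    # 8,388,608 combinations
distinct_offspring = gametes_per_parent ** 2  # ~70 trillion pairings

print(f"{gametes_per_parent:,} possible gametes per parent")
print(f"{distinct_offspring:,} genetically distinct children possible")
```

Roughly 70 trillion distinct children from a single couple, before crossover and mutation are even counted, which is why two siblings drawing from the same two parents still come out unique.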

Why is this important? Without this diversity, we never would have made it this far. It’s this newness, each time life is started, that has given us options in the form of mutations. They’re like unexpected scientific breakthroughs. Some of these drove our species to awesome new capabilities; the ones that resulted in lower fitness weren’t likely to survive. Success in life, survival on the large scale, has a lot to do with the potential benefits created by the diversity inherent in the reproductive process.

Diversity is what makes us stronger, not weaker. Biologically, without diversity we die off as a species. We can no longer adapt to changes in the environment. This is true of social diversity as well. Without diversity, we have no resources to face the inevitable challenges, no potential for beneficial mutations or breakthroughs that may save us. Yet we continue to have such a hard time with that. We’re still trying to figure out how to live with each other. We’re nowhere near ready for that meteor.

Article Summary

  • Visible diversity is not the same as cognitive diversity.
  • Cognitive diversity comes from thinking about problems differently, not from race, gender, or sexual orientation.
  • Cognitive diversity helps us avoid blind spots and adapt to changing environments.
  • You can’t have selection without variation.
  • The Stormtrooper problem is when everyone working on a problem thinks about it in the same way.

Alexander von Humboldt and the Invention of Nature: Creating a Holistic View of the World Through a Web of Interdisciplinary Knowledge

In his piece in 2014’s Edge collection This Idea Must Die: Scientific Theories That Are Blocking Progress, dinosaur paleontologist Scott Sampson writes that science needs to “subjectify” nature. By “subjectify”, he essentially means to see ourselves as connected with nature, and therefore to care about it the same way we do the people with whom we are connected.

That’s not the current approach. He argues: “One of the most prevalent ideas in science is that nature consists of objects. Of course, the very practice of science is grounded in objectivity. We objectify nature so that we can measure it, test it, and study it, with the ultimate goal of unraveling its secrets. Doing so typically requires reducing natural phenomena to their component parts.”

But this approach is ultimately failing us.

Why? Because much of our unsustainable behavior can be traced to a broken relationship with nature, a perspective that treats the nonhuman world as a realm of mindless, unfeeling objects. Sustainability will almost certainly depend upon developing mutually enhancing relations between humans and nonhuman nature.

This isn’t a new plea, though. Over 200 years ago, the famous naturalist Alexander von Humboldt (1769-1859) was facing the same challenges.

In her compelling book The Invention of Nature: Alexander von Humboldt’s New World, Andrea Wulf explores Humboldt as the first person to publish works promoting a holistic view of nature, arguing that nature could only be understood in relation to the subjectivity of experiencing it.

Fascinated by scientific instruments, measurements and observations, he was driven by a sense of wonder as well. Of course nature had to be measured and analyzed, but he also believed that a great part of our response to the natural world should be based on the senses and emotions.

Humboldt was a rock star scientist who ignored conventional boundaries in his exploration of nature. Humboldt’s desire to know and understand the world led him to investigate discoveries in all scientific disciplines, and to see the interwoven patterns embedded in this knowledge — mental models anyone?

If nature was a web of life, he couldn’t look at it just as a botanist, a geologist or a zoologist. He required information about everything from everywhere.

Humboldt grew up in a world where science was dry, nature mechanical, and man an aloof and separate chronicler of what was before him. Not only did Humboldt have a new vision of what our understanding of nature could be, but he put humans in the middle of it.

Humboldt’s Essay on the Geography of Plants promoted an entirely different understanding of nature. Instead of only looking at an organism, … Humboldt now presented relationships between plants, climate and geography. Plants were grouped into zones and regions rather than taxonomic units. … He gave western science a new lens through which to view the natural world.

Revolutionary for his time, Humboldt rejected the Cartesian ideas of animals as mechanical objects. He also argued passionately against the growing approach in the sciences that put man atop and separate from the rest of the natural world. Promoting a concept of unity in nature, Humboldt saw nature as a “reflection of the whole … an organism in which the parts only worked in relation to each other.”

Furthermore, that “poetry was necessary to comprehend the mysteries of the natural world.”

Wulf paints one of Humboldt’s greatest achievements as his ability and desire to make science available to everyone. No one before him had “combined exact observation with a ‘painterly description of the landscape’”.

By contrast, Humboldt took his readers into the crowded streets of Caracas, across the dusty plains of the Llanos and deep into the rainforest along the Orinoco. As he described a continent that few British had ever seen, Humboldt captured their imagination. His words were so evocative, the Edinburgh Review wrote, that ‘you partake in his dangers; you share his fears, his success and his disappointment.’

In a time when travel was precarious, expensive and unavailable to most people, Humboldt brought his experiences to anyone who could read or listen.

On 3 November 1827, … Humboldt began a series of sixty-one lectures at the university. These proved so popular that he added another sixteen at Berlin’s music hall from 6 December. For six months he delivered lectures several days a week. Hundreds of people attended each talk, which Humboldt presented without reading from his notes. It was lively, exhilarating and utterly new. By not charging any entry fee, Humboldt democratized science: his packed audiences ranged from the royal family to coachmen, from students to servants, from scholars to bricklayers – and half of those attending were women. Berlin had never seen anything like it.

The subjectification of nature is about seeing nature, experiencing it. Humboldt was a master of bringing people to worlds they couldn’t visit, allowing them to feel a part of it. In doing so, he wanted to force humanity to see itself in nature. If we were all part of the giant web, then we all had a responsibility to understand it.

When he listed the three ways in which the human species was affecting the climate, he named deforestation, ruthless irrigation and, perhaps most prophetically, the ‘great masses of steam and gas’ produced in the industrial centres. No one but Humboldt had looked at the relationship between humankind and nature like this before.

His final opus, a series of books called Cosmos, was the culmination of everything that Humboldt had learned and discovered.

Cosmos was unlike any previous book about nature. Humboldt took his readers on a journey from outer space to earth, and then from the surface of the planet into its inner core. He discussed comets, the Milky Way and the solar system as well as terrestrial magnetism, volcanoes and the snow line of mountains. He wrote about the migration of the human species, about plants and animals and the microscopic organisms that live in stagnant water or on the weathered surface of rocks. Where others insisted that nature was stripped of its magic as humankind penetrated into its deepest secrets, Humboldt believed exactly the opposite. How could this be, Humboldt asked, in a world in which the coloured rays of an aurora ‘unite in a quivering sea flame’, creating a sight so otherworldly ‘the splendour of which no description can reach’? Knowledge, he said, could never ‘kill the creative force of imagination’ – instead it brought excitement, astonishment and wondrousness.

This is the ultimate subjectivity of nature. Being inspired by its beauty to try and understand how it works. Humboldt had respect for nature, for the wonders it contained, but also as the system in which we ourselves are an inseparable part.

Wulf concludes that Humboldt,

…was one of the last polymaths, and died at a time when scientific disciplines were hardening into tightly fenced and more specialized fields. Consequently his more holistic approach – a scientific method that included art, history, poetry and politics alongside hard data – has fallen out of favour.

Maybe this is where the subjectivity of nature has gone. But we can learn from Humboldt the value of bringing it back.

In a world where we tend to draw a sharp line between the sciences and the arts, between the subjective and the objective, Humboldt’s insight that we can only truly understand nature by using our imagination makes him a visionary.

A little imagination is all it takes.

Warnings From Sleep: Nightmares and Protecting The Self

“All of this is evidence that the mind, although asleep,
is constantly concerned about the safety and integrity of the self.”

***

Rosalind Cartwright — also known as the Queen of Dreams — is a leading sleep researcher. In The Twenty-four Hour Mind: The Role of Sleep and Dreaming in Our Emotional Lives, she explores the role of nightmares and how we use sleep to protect ourselves.

When our time awake is frightening or remains unprocessed, the sleeping brain “may process horrible images with enough raw fear attached to awaken a sleeper with a horrendous nightmare.” The more trauma we have in our lives, the more likely we are to experience anxiety and nightmares after a horrific event.

The common feature is a threat of harm, accompanied by a lack of ability to control the circumstances of the threat, and the lack of or inability to develop protective behaviors.

The strategies we use for coping effectively with extreme stress and fear are controversial. Is it better to deny the threatening event and avoid thinking about it, or to think it through and become desensitized to it?

One clear principle that comes out of this work is that the effects of trauma on sleep and dreaming depend on the nature of the threat. If direct action against the threat is irrelevant or impossible (as it would be if the trauma was well in the past), then denial may be helpful in reducing stress so that the person can get on with living as best they can. However, if the threat will be encountered over and over (such as with spousal abuse), and direct action would be helpful in addressing the threat, then denial by avoiding thinking about the danger (which helps in the short-term) will undermine problem-solving efforts and mastery in the long run. In other words, if nothing can be done, emotion-coping efforts to regulate the distress (dreaming) is a good strategy; but if constructive actions can be taken, waking problem-solving action is more adaptive.

What about nightmares?

Nightmares are defined as frightening dreams that wake the sleeper into full consciousness and with a clear memory of the dream imagery. These are not to be confused with sleep terrors. There are three main differences between these two. First, nightmare arousals are more often from late in the night’s sleep, when dreams are longest and the content is most bizarre and affect-laden (emotional); sleep terrors occur early in sleep. Second, nightmares are REM sleep-related, while sleep terrors come out of non-REM (NREM) slow-wave sleep (SWS). Third, sleepers experience vivid recall of nightmares, whereas with sleep terrors the experience is of full or partial amnesia for the episode itself, and only rarely is a single image recalled.

Nightmares abort REM sleep, a critical component of our always-on brain. Cartwright explains:

If we are right that the mind is continuously active throughout sleep—reviewing emotion-evoking new experiences from the day, scanning memory networks for similar experiences (which will defuse immediate emotional impact), revising by updating our organized sense of ourselves, and rehearsing new coping behaviors—nightmares are an exception and fail to perform these functions.

The impact is to temporarily relieve the negative emotion. The example Cartwright gives is “I am not about to be eaten by a monster. I am safe in my own bed.” But because the nightmare has woken the sleeper, it is of no help in regulating emotions (a critical role of sleep). As we learn to manage negative emotions while we are awake (that is, as we grow up), nightmares reduce in frequency and we develop skills for resolving fears.

It’s not always fear that wakes us from a nightmare. We can also be woken by anger, disgust, and grief.

Cartwright concludes with an interesting insight on the role of sleep in consolidating and protecting “the self”:

[N]ightmares appear to be more common in those who have intense reactions to stress. The criteria cited for nightmare disorder in the diagnostic manual for psychiatric disorders, the Diagnostic and Statistical Manual IV-TR (DSM IV-TR), include this phrase “frightening dreams usually involving threats to survival, security, or self-esteem.” This theme may sound familiar: Remember that threats to self-esteem seem to precede NREM parasomnia awakenings. All of this is evidence that the mind, although asleep, is constantly concerned about the safety and integrity of the self.

The Twenty-four Hour Mind goes on to explore the history of sleep research through case studies and synthesis.

The Science of Sleep: Regulating Emotions and the 24 Hour Mind

Even if we often think of sleeping as ‘switching off’, it’s a complex state during which a lot of important things occur in our bodies. In particular, dreams are vital for helping our brains to process emotions and encode new learning.

***

“Memory is never a precise duplicate of the original; instead, it is a continuing act of creation.”

— Rosalind Cartwright

Rosalind Cartwright is one of the leading sleep researchers in the world. Her unofficial title is Queen of Dreams.

In The Twenty-four Hour Mind: The Role of Sleep and Dreaming in Our Emotional Lives, she looks back on the progress of sleep research and reminds us there is much left in the black box of sleep that we have yet to shine light on.

In the introduction she underscores the elusive nature of sleep:

The idea that sleep is good for us, beneficial to both mind and body, lies behind the classic advice from the busy physician: “Take two aspirins and call me in the morning.” But the meaning of this message is somewhat ambiguous. Will a night’s sleep plus the aspirin be of help no matter what ails us, or does the doctor himself need a night’s sleep before he is able to dispense more specific advice? In either case, the presumption is that there is some healing power in sleep for the patient or better insight into the diagnosis for the doctor, and that the overnight delay allows time for one or both of these natural processes to take place. Sometimes this happens, but unfortunately sometimes it does not. Sometimes it is sleep itself that is the problem.

Cartwright underscores that our brains like to run on “automatic pilot” mode, which is one of the reasons that getting better at things requires concentrated and focused effort. She explains:

We do not always use our highest mental abilities, but instead run on what we could call “automatic pilot”; once learned, many of our daily cognitive behaviors are directed by habit, those already-formed points of view, attitudes, and schemas that in part make us who we are. The formation of these habits frees us to use our highest mental processes for those special instances when a prepared response will not do, when circumstances change and attention must be paid, choices made or a new response developed. The result is that much of our baseline thoughts and behavior operate unconsciously.

Relating this back to dreams: one of the more fascinating parts of Cartwright’s research is the role sleep and dreams play in regulating emotions. She explains:

When emotions evoked by a waking experience are strong, or more often were under-attended at the time they occurred, they may not be fully resolved by nighttime. In other words, it may take us a while to come to terms with strong or neglected emotions. If, during the day, some event challenges a basic, habitual way in which we think about ourselves (such as the comment from a friend, “Aren’t you putting on weight?”) it may be a threat to our self-concepts. It will probably be brushed off at the time, but that question, along with its emotional baggage, will be carried forward in our minds into sleep. Nowadays, researchers do not stop our investigations at the border of sleep but continue to trace mental activity from the beginning of sleep on into dreaming. All day, the conscious mind goes about its work planning, remembering, and choosing, or just keeping the shop running as usual. On balance, we humans are more action oriented by day. We stay busy doing, but in the inaction of sleep we turn inward to review and evaluate the implications of our day, and the input of those new perceptions, learnings, and—most important—emotions about what we have experienced.

What we experience as a dream is the result of our brain’s effort to match recent, emotion-evoking events to other similar experiences already stored in long-term memory. One purpose of this sleep-related matching process, this putting of similar memory experiences together, is to defuse the impact of those feelings that might otherwise linger and disrupt our moods and behaviors the next day. The various ways in which this extraordinary mind of ours works—the top-level rational thinking and executive deciding functions, the middle management of routine habits of thought, and the emotional relating and updating of the organized schemas of our self-concept—are not isolated from each other. They interact. The emotional aspect, which is often not consciously recognized, drives the not-conscious mental activity of sleep.

Later in the book, she writes more about how dreams regulate emotions:

Despite differences in terminology, all the contemporary theories of dreaming have a common thread — they all emphasize that dreams are not about prosaic themes, not about reading, writing, and arithmetic, but about emotion, or what psychologists refer to as affect. What is carried forward from waking hours into sleep are recent experiences that have an emotional component, often those that were negative in tone but not noticed at the time or not fully resolved. One proposed purpose of dreaming, of what dreaming accomplishes (known as the mood regulatory function of dreams theory) is that dreaming modulates disturbances in emotion, regulating those that are troublesome. My research, as well as that of other investigators in this country and abroad, supports this theory. Studies show that negative mood is down-regulated overnight. How this is accomplished has had less attention.

I propose that when some disturbing waking experience is reactivated in sleep and carried forward into REM, where it is matched by similarity in feeling to earlier memories, a network of older associations is stimulated and is displayed as a sequence of compound images that we experience as dreams. This melding of new and old memory fragments modifies the network of emotional self-defining memories, and thus updates the organizational picture we hold of “who I am and what is good for me and what is not.” In this way, dreaming diffuses the emotional charge of the event and so prepares the sleeper to wake ready to see things in a more positive light, to make a fresh start. This does not always happen over a single night; sometimes a big reorganization of the emotional perspective of our self-concept must be made—from wife to widow or married to single, say, and this may take many nights. We must look for dream changes within the night and over time across nights to detect whether a productive change is under way. In very broad strokes, this is the definition of the mood-regulatory function of dreaming, one basic to the new model of the twenty-four hour mind I am proposing.

In another fascinating part of her research, Cartwright outlines the role of sleep in skill enhancement. In short, “sleeping on it” is wise advice.

Think back to “take two aspirins and call me in the morning.” Want to improve your golf stroke? Concentrate on it before sleeping. An interval of sleep has been proven to bestow a real benefit for both laboratory animals and humans when they are tested on many different types of newly learned tasks. You will remember more items or make fewer mistakes if you have had a period of sleep between learning something new and the test of your ability to recall it later than you would if you spent the same amount of time awake.

Most researchers agree “with the overall conclusion that one of the ways sleep works is by enhancing the memory of important bits of new information and clearing out unnecessary or competing bits, and then passing the good bits on to be integrated into existing memory circuits.” This happens in two steps.

The first is in early NREM sleep when the brain circuits that were active while we were learning something new, a motor skill, say, or a new language, are reactivated and stay active until REM sleep occurs. In REM sleep, these new bits of information are then matched to older related memories already stored in long-term memory networks. This causes the new learning to stick (to be consolidated) and to remain accessible for when we need it later in waking.

As for the effect alcohol has before sleep, Carlyle Smith, a Canadian psychologist, found that it reduces memory formation, “reducing the number of rapid eye movements” in REM sleep. The eye movements, similar to the ones we make while reading, are how we scan visual information.

The mind is active 24 hours a day:

If the mind is truly working continuously, during all 24 hours of the day, it is not in its conscious mode during the time spent asleep. That time belongs to the unconscious. In waking, the two types of cognition, conscious and unconscious, are working sometimes in parallel, but also often interacting. They may alternate, depending on our focus of attention and the presence of an explicit goal. If we get bored or sleepy, we can slip into a third mode of thought, daydreaming. These thoughts can be recalled when we return to conscious thinking, which is not generally true of unconscious cognition unless we are caught in the act in the sleep lab. This third in-between state is variously called the preconscious or subconscious, and has been studied in a few investigations of what is going on in the mind during the transition before sleep onset.

Toward the end of the book, Cartwright sums up the role of sleep:

[I]n good sleepers, the mind is continuously active, reviewing experience from yesterday, sorting which new information is relevant and important to save due to its emotional saliency. Dreams are not without sense, nor are they best understood to be expressions of infantile wishes. They are the result of the interconnectedness of new experience with that already stored in memory networks. But memory is never a precise duplicate of the original; instead, it is a continuing act of creation. Dream images are the product of that creation. They are formed by pattern recognition between some current emotionally valued experience matching the condensed representation of similarly toned memories. Networks of these become our familiar style of thinking, which gives our behavior continuity and us a coherent sense of who we are. Thus, dream dimensions are elements of the schemas, and both represent accumulated experience and serve to filter and evaluate the new day’s input.

Sleep is a busy time, interweaving streams of thought with emotional values attached, as they fit or challenge the organizational structure that represents our identity. One function of all this action, I believe, is to regulate disturbing emotion in order to keep it from disrupting our sleep and subsequent waking functioning. In this book, I have offered some tests of that hypothesis by considering what happens to this process of down-regulation within the night when sleep is disordered in various ways.

Cartwright develops several themes throughout The Twenty-four Hour Mind. First is that the mind is continuously active. Second is the role of emotion in “carrying out the collaboration of the waking and sleeping mind.” This includes exploring whether the sleeping mind “contributes to resolving emotional turmoil stirred up by some real anxiety inducing circumstance.” Third is how sleeping contributes to how new learning is retained. Accumulated experiences serve to filter and evaluate the new day’s input.

Competition, Cooperation, and the Selfish Gene

Richard Dawkins wrote one of the best-selling books of all time for a serious piece of scientific writing.

Often labeled “pop science”, The Selfish Gene pulls together the “gene-centered” view of evolution: It is not really individuals being selected for in the competition for life, but their genes. The individual bodies (phenotypes) are simply carrying out the instructions of the genes. This leads most people to a very “competition focused” view of life. But is that all?

***

More than 100 years before The Selfish Gene, Charles Darwin had famously outlined his Theory of Natural Selection in The Origin of Species.

We’re all hopefully familiar with this concept: species evolve over long periods of time through a process of heredity, variation, competition, and differential survival.

The mechanism of heredity was invisible to Darwin, but a series of scientists, not without a little argument, had figured it out by the 1970s: strands of DNA (“genes”) encoded instructions for the building of physical structures. These genes were passed on to offspring in a particular way – the process of heredity. Advantageous genes were propagated in greater numbers. Disadvantageous genes, vice versa.

The Selfish Gene makes a particular kind of case: Specific gene variants grow in proportion to a gene pool by, on average, creating advantaged physical bodies and brains. The genes do their work through “phenotypes” – the physical representation of their information. As Helena Cronin would put it in her book The Ant and the Peacock, “It is the net selective value of a gene’s phenotypic effect that determines the fate of the gene.”

This take on the evolutionary process became influential because of the range of hard-to-explain behavior that it illuminated.

Why do we see altruistic behavior? Because copies of genes are present throughout a population, not just in single individuals, and altruism can cause great advantages in those gene variants surviving and thriving. (In other words, genes that cause individuals to sacrifice themselves for other copies of those same genes will tend to thrive.)

Why do we see more altruistic behavior among family members? Because they are closely related, and share more genes!

Many problems seemed to be solved here, and the Selfish Gene model became one for the ages, worth having in your head.

However, buried in the logic of the gene-centered view of evolution is a statistical argument. Gene variants rapidly grow in proportion to the rest of the gene pool because they provide survival advantages in the average environment that the gene will experience over its existence. Thus, advantageous genes “selfishly” dominate their environment before long. It’s all about gene competition.
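That statistical argument can be sketched with a minimal selection model: a variant’s frequency each generation is its old frequency weighted by its fitness, divided by the population’s mean fitness. The 5% fitness edge, 1% starting frequency, and 300 generations below are illustrative numbers, not figures from the text:

```python
# Minimal sketch of the statistical argument: even a small average
# fitness edge lets a gene variant come to dominate the gene pool.
def next_frequency(p, w_variant=1.05, w_rest=1.00):
    """One generation of selection: weight frequencies by fitness."""
    mean_fitness = p * w_variant + (1 - p) * w_rest
    return p * w_variant / mean_fitness

p = 0.01  # the variant starts as 1% of the gene pool
for generation in range(300):
    p = next_frequency(p)

print(f"frequency after 300 generations: {p:.3f}")  # close to 1.0
```

A mere 5% per-generation advantage takes the variant from a 1-in-100 rarity to near fixation: this is the sense in which advantageous genes “selfishly” dominate before long.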

This has led many people, some biologists especially, to view evolution solely through the lens of competition. Unsurprisingly, this also led to some false paradigms about a strictly “dog eat dog” world where unrestricted and ruthless individual competition is deemed “natural”.

But what about cooperation?

***

The complex systems researcher Yaneer Bar-Yam argues that not only is the Selfish Gene a limiting concept biologically and possibly wrong mathematically (too complex to address here, but if you want to read about it, check out these pieces), but that there are more nuanced ways to understand the way competition and cooperation comfortably coexist. Not only that, but Bar-Yam argues that this has implications for optimal team formation.

In his book Making Things Work, Bar-Yam lays out a basic message: Even in the biological world, competition is a limited lens through which to see evolution. There’s always a counterbalance of cooperation.

Counter to the traditional perspective, the basic message of this and the following chapter is that competition and cooperation always coexist. People see them as opposing and incompatible forces. I think that this is a result of an outdated and one-sided understanding of evolution…This is extremely useful in describing nature and society; the basic insight that “what works, works” still holds. It turns out, however, that what works is a combination of competition and cooperation.

Bar-Yam uses the analogy of a sports team which exists in context of a sports league – let’s say the NBA. Through this lens we can see why players, teams, and leagues compete and cooperate. (The obvious analogy is that genes, individuals, and groups compete and cooperate in the biological world.)

In general, when we think about the conflict between cooperation and competition in team sports, we tend to think about the relationships between the players on a team. We care deeply about their willingness to cooperate and we distinguish cooperative “team players” from selfish non-team players, complaining about the latter even when their individual skill is formidable.

The reason we want players to cooperate is so that they can compete better as a team. Cooperation at the level of the individual enables effective competition at the level of the group, and conversely, the competition between teams motivates cooperation between players. There is a constructive relationship between cooperation and competition when they operate at different levels of organization.

The interplay between levels is a kind of evolutionary process where competition at the team level improves the cooperation between players. Just as in biological evolution, in organized team sports there is a process of selection of winners through competition of teams. Over time, the teams will change how they behave; the less successful teams will emulate strategies of teams that are doing well.

At every level then, there is an interplay between cooperation and competition. Players compete for playing time, and yet must be intensively cooperative on the court to compete with other teams. At the next level up, teams compete with each other for victories, and yet must cooperate intensively to sustain a league at all.

They create agreed-upon rules, schedule times to play, negotiate television contracts, and so on. This allows the league itself to compete with other leagues for scarce attention from sports fans. And so on, up and down the ladder.

Competition among players, teams, and leagues is certainly a crucial dynamic. But it isn’t all that’s going on: They’re cooperating intensely at every level, because a group of selfish individuals loses to a group of cooperative ones.

And it is the same among biological species. Genes are competing with each other, as are individuals, tribes, and species. Yet at every level, they are also cooperating. The success of the human species is clearly due to its ability to cooperate in large numbers; and yet any student of war can attest to its deadly competitive nature. Similar dynamics are at play with ants, rats, and chimpanzees, among other species of insect and animal. It’s a yin and yang world.

Bar-Yam thinks this has great implications for how to build successful teams.

Teams will improve naturally – in any organization – when they are involved in a competition that is structured to select those teams that are better at cooperation. Winners of a competition become successful models of behavior for less successful teams, who emulate their success by learning their strategies and by selecting and trading team members.
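This selection-and-emulation dynamic can be sketched as a toy simulation. The sketch below is an illustrative assumption, not anything from Bar-Yam’s book: it assumes each player has a simple “cooperativeness” trait, that a team’s strength is just the average of its players’ traits, and that after each round the weakest team imitates the strongest one (with some noise for imperfect imitation).

```python
import random

random.seed(42)

def simulate(n_teams=8, team_size=5, rounds=30):
    # Hypothetical model: each player has a "cooperativeness" trait in [0, 1].
    teams = [[random.random() for _ in range(team_size)]
             for _ in range(n_teams)]
    for _ in range(rounds):
        # Assumed: a team's strength is how well its players cooperate on average.
        strengths = [sum(team) / len(team) for team in teams]
        winner = max(range(n_teams), key=lambda i: strengths[i])
        loser = min(range(n_teams), key=lambda i: strengths[i])
        # The losing team emulates the winner: it copies the winner's
        # traits, with a little noise from imperfect imitation.
        teams[loser] = [min(1.0, max(0.0, t + random.gauss(0, 0.05)))
                       for t in teams[winner]]
    # Return the league-wide average cooperativeness.
    return sum(sum(t) for t in teams) / (n_teams * team_size)

# Team-level competition repeatedly selects for the most cooperative
# teams, so average cooperativeness drifts upward over the rounds.
print(simulate())
```

Under these assumptions, the league’s average cooperativeness rises over time even though no individual player is ever told to cooperate – the competition between teams does the selecting, which is the point of the passage above.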

For a business, a society, or any other complex system made up of many individuals, this means that improvement will come when the system’s structure involves a competition that rewards successful groups. The idea here is not a cutthroat competition of teams (or individuals) but a competition with rules that incorporate some cooperative activity with a mutual goal.

The dictum that “politics is the art of marshaling hatreds” would seem to reflect this notion: a non-violent way for cooperative groups to compete for dominance. As would the incentive systems of highly successful corporations like Nucor and the best hospital systems, like the Mayo Clinic. Even modern business books are picking up on it.

Individual competition is important and drives excellence. Yet, as Bar-Yam points out, it’s ultimately not a complete formula. Having teams compete is more effective: you need to harness competition and cooperation at every level. You want groups pulling together, creating emergent effects where the whole is greater than the sum of the parts (a recurrent theme throughout nature).

You should read his book for more details on both this idea and the concept of complex systems in general. Bar-Yam also elaborated on his sports analogy in a white paper here. If you’re interested in complex systems, check out this post on frozen accidents. Also, for more on creating better groups, check out how Steve Jobs did it.