Tag: Biology

Advice for Young Scientists—and Curious People in General

The Nobel Prize-winning biologist Peter Medawar (1915–1987) is best known for work that made the first organ transplants and skin grafts possible. Medawar was also a lively, witty writer who penned numerous books on science and philosophy.

In 1979, he published Advice to a Young Scientist, a book brimming with both practical advice and philosophical guidance for anyone “engaged in exploratory activities.” Here, we summarize some of Medawar’s key insights from the book.

***

Application, diligence, a sense of purpose

“There is no certain way of telling in advance if the daydreams of a life dedicated to the pursuit of truth will carry a novice through the frustration of seeing experiments fail and of making the dismaying discovery that some of one’s favourite ideas are groundless.”

If you want to make progress in any area, you need to be willing to give up your best ideas from time to time. Science proceeds because researchers do all they can to disprove their hypotheses rather than prove them right. Medawar notes that he twice spent two whole years trying to corroborate groundless hypotheses. The key to being a good scientist is the capacity to take no for an answer—when necessary. Additionally:

“…one does not need to be terrifically brainy to be a good scientist…there is nothing in experimental science that calls for great feats of ratiocination or a preternatural gift for deductive reasoning. Common sense one cannot do without, and one would be the better for owning some of those old-fashioned virtues which have fallen into disrepute. I mean application, diligence, a sense of purpose, the power to concentrate, to persevere and not be cast down by adversity—by finding out after long and weary inquiry, for example, that a dearly loved hypothesis is in large measure mistaken.”

The truth is, any measure of risk-taking comes with the possibility of failure. Learning from failure to continue exploring the unknown is a broadly useful mindset.

***

How to make important discoveries

“It can be said with marked confidence that any scientist of any age who wants to make important discoveries must study important problems. Dull or piffling problems yield dull or piffling answers.”

A common piece of advice for people early on in their careers is to pursue what they find most interesting. Medawar disagrees, explaining that “almost any problem is interesting if it is studied in sufficient depth.” He advises scientists to look for important problems, meaning ones with answers that matter to humankind.

When choosing an area of research, Medawar cautions against mistaking a fashion (“some new histochemical procedure or technical gimmick”) for a movement (“such as molecular genetics or cellular immunology”). Movements lead somewhere; fashions generally don’t.

***

Getting started

Whenever we begin some new endeavor, it can be tempting to think we need to know everything there is to know about it before we even begin. Often, this becomes a form of procrastination. Only once we try something and our plans make contact with reality can we know what we need to know. Medawar believes it’s unnecessary for scientists to spend an enormous amount of time learning techniques and supporting disciplines before beginning research:

“As there is no knowing in advance where a research enterprise may lead and what kind of skills it will require as it unfolds, this process of ‘equipping oneself’ has no predeterminable limits and is bad psychological policy….The great incentive to learning a new skill or supporting discipline is needing to use it.”

The best way to learn what we need to know is by getting started, then picking up new knowledge as it proves itself necessary. When there’s an urgent need, we learn faster and avoid unnecessary learning. The same can be true for too much reading:

“Too much book learning may crab and confine the imagination, and endless poring over the research of others is sometimes psychologically a research substitute, much as reading romantic fiction may be a substitute for real-life romance….The beginner must read, but intently and choosily and not too much.”

We don’t talk about this much at Farnam Street, but it is entirely possible to read too much. Reading becomes counterproductive when it serves as a substitute for doing the very thing it is meant to prepare us for. Medawar explains that it is “psychologically most important to get results, even if they are not original.” It’s important to build confidence by doing something concrete and seeing a visible manifestation of our labors. For Medawar, the best scientists begin with the understanding that they can never know everything, and that learning must be a lifelong process.

***

The secrets to effective collaboration

“Scientific collaboration is not at all like cooks elbowing each other from the pot of broth; nor is it like artists working on the same canvas, or engineers working out how to start a tunnel simultaneously from both sides of a mountain in such a way that the contractors do not miss each other in the middle and emerge independently at opposite ends.”

Instead, scientific collaboration is about researchers creating the right environment to develop and expand upon each other’s ideas. A good collaboration is greater than the sum of its parts and results in work that isn’t attributable to a single person.

For scientists who find their collaborators infuriating from time to time, Medawar advises being self-aware. We all have faults, and we too are probably almost intolerable to work with sometimes.

When collaboration becomes contentious, Medawar maintains that we should give away our best ideas.

Scientists sometimes face conflict over the matter of credit. If several researchers are working on the same problem, whichever one finds the solution (or a solution) first gets the credit, no matter how close the others were. This is a problem most creative fields don’t face: “The twenty years Wagner spent on composing the first three operas of The Ring were not clouded by the fear that someone else might nip ahead of him with Götterdämmerung.” Once a scientific idea becomes established, it becomes public property. So the only chance of ownership a researcher has comes by being the first.

However, Medawar advocates for being open about ideas and doing away with secrecy because “anyone who shuts his door keeps out more than he lets out.” He goes on to write, “The agreed house rule of the little group of close colleagues I have always worked with has always been ‘Tell everyone everything you know,’ and I don’t know anyone who came to any harm by falling in with it.”

***

How to handle moral dilemmas

“A scientist will normally have contractual obligations to his employer and has always a special and unconditionally binding obligation to the truth.”

Medawar writes that many scientists, at some point in their career, find themselves grappling with the conflict between a contractual obligation and their own conscience. However, the “time to grapple is before a moral dilemma arises.” If we think an enterprise might lead somewhere damaging, we shouldn’t start on it in the first place.

We should know our values and aim to do work in accordance with them.

***

The first rule is never to fool yourself

“I cannot give any scientist of any age better advice than this: the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not.”

Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” All scientists make mistakes sometimes. Medawar advises, when this happens, to issue a swift correction. To do so is far more respectable and beneficial for the field than trying to cover it up. Echoing the previous advice to always be willing to take no for an answer, Medawar warns about falling in love with a hypothesis and believing it is true without evidence.

“A scientist who habitually deceives himself is well on the way toward deceiving others.”

***

The best creative environment

“To be creative, scientists need libraries and laboratories and the company of other scientists; certainly a quiet and untroubled life is a help. A scientist’s work is in no way deepened or made more cogent by privation, anxiety, distress, or emotional harassment. To be sure, the private lives of scientists may be strangely and comically mixed up, but not in ways that have any special bearing on the nature and quality of their work.”

Creativity rises from tranquility, not from disarray. Creativity is supported by a safe environment, one in which you can share and question openly and be heard with compassion and a desire to understand.

***

A final piece of advice:

“A scientist who wishes to keep his friends and not add to the number of his enemies must not be forever scoffing and criticizing and so earn a reputation for habitual disbelief; but he owes it to his profession not to acquiesce in or appear to condone folly, superstition, or demonstrably unsound belief. The recognition and castigation of folly will not win him friends, but it may gain him some respect.”

Focused and Diffuse: Two Modes of Thinking

Our brains employ two modes of thinking to tackle any large task: focused and diffuse. Both are equally valuable but serve very different purposes. To do your best work, you need to master both.

***

As she lost consciousness of outer things…her mind kept throwing up from its depths, scenes, and names, and sayings, and memories and ideas, like a fountain spurting. — Virginia Woolf, To the Lighthouse

Professor and former Knowledge Project Podcast guest Barbara Oakley is credited with popularizing the concept of focused and diffuse forms of thinking. In A Mind for Numbers, Oakley explains how distinct these modes are and how we switch between the two throughout the day. We are constantly in pursuit of true periods of focus – deep work, flow states, and highly productive sessions where we see tangible results. Much of the learning process occurs during the focused mode of thinking. The diffuse mode is equally important to understand and pursue.

When our minds are free to wander, we shift into a diffuse mode of thinking. This is sometimes referred to as our natural mode of thinking, or the daydream mode; it’s when we form connections and subconsciously mull over problems. Although diffuse thinking comes in the guise of a break from focus, our minds are still working. Often, it’s only after we switch away from this mode that we realize our brains were indeed working for us. Moving into diffuse mode can be a very brief phenomenon, such as when we stare into the distance for a moment before returning to work.

Oakley uses evolutionary biology to explain why we have these two distinct modes. Vertebrates need both focused and diffuse modes to survive. The focused mode is useful for vital tasks like foraging for food or caring for offspring. On the other hand, the diffuse mode is useful for scanning the area for predators and other threats. She explains: “A bird, for example, needs to focus carefully so it can pick up tiny pieces of grain as it pecks the ground for food, and at the same time, it must scan the horizon for predators such as hawks…. If you watch birds, they’ll first peck, and then pause to scan the horizon—almost as if they are alternating between focused and diffuse modes.”

Both modes of thinking are equally valuable, but it’s the harmony between them which matters. We can’t maintain the effort of the focused mode for long. At some point, we need to relax and slip into the diffuse mode. Learning a complex skill—a language, a musical instrument, chess, a mental model—requires both modes to work together. We master the details in focused mode, then comprehend how everything fits together in diffuse mode. It’s about combining creativity with execution.

Think of how your mind works when you read. As you read a particular sentence of a book, you can’t simultaneously step back to ponder the entire work. Only when you put the book down can you develop a comprehensive picture, drawing connections between concepts and making sense of it all.

In a journal article entitled “The Middle Way: Finding the Balance between Mindfulness and Mind-Wandering” the authors write that “consciousness… ebbs like a breaking wave, outwardly expanding and then inwardly retreating. This perennial rhythm of the mind—extracting information from the external world, withdrawing to inner musings, and then returning to the outer realm—defines mental life.” This mental oscillation is important. If we stay in a focused mode too long, diminishing returns set in and our thinking stagnates. We stop getting new ideas and can experience cognitive tunnelling. It’s also tiring, and we become less productive. This can also set the conditions for us to fall victim to counter-productive cognitive biases and risky shortcuts, as we lose context and the bigger picture.

History is peppered with examples of serendipitous discoveries and ideas that combined diffuse and focused thinking. In many cases, the broad insight came during diffuse thinking periods, while the concrete development work was accomplished in focused mode.

Einstein figured out relativity during an argument with a friend. He then spent decades refining and clarifying his theories for publication, working until the day before his death. Many of Stephen King’s books begin as single sentences scribbled in a notebook or on a napkin after showering, driving, or walking. To turn these ideas into books, he then sticks to a focused schedule, writing 2000 words each morning. Jack Kerouac wrote On the Road following seven years of travel and drawing links between his experiences. After years of planning and drafting, he wrote his masterpiece in just three weeks using a 120-foot roll of tracing paper to avoid having to change the sheets in his typewriter. Both Thomas Edison and Salvador Dali took advantage of micro-naps lasting less than a second to generate ideas. Take a look at the recorded schedule of any great mind and you will see a careful balance between activities chosen to facilitate both focused and diffuse modes of thinking.

Studies exploring creative thinking have supported the idea that we need both types of thinking. In a paper entitled “The Richness of Inner Experience: Relating Styles of Daydreaming to Creative Processes,” Zedelius and Schooler write that “Research has supported the theorized benefit of stimulus independent thought for creativity. It was found that taking a break from consciously working on a creative problem and engaging in an unrelated task improves subsequent creativity, a phenomenon termed incubation.” When asked to generate novel uses for common objects such as a brick or paperclip, a useful test of creativity, individuals who are given breaks to engage in tasks which facilitate diffuse thinking tend to come up with more ideas. So how can we better fit the two modes together?

One way is to work in intense, focused bursts. When the ideas stop flowing and diminishing returns set in, do something which is conducive to mind-wandering. Exercise, walk, read, or listen to music. We veer naturally toward this diffuse state—gazing out of windows, walking around the room or making coffee when focusing gets too hard. The problem is that activities which encourage diffuse thinking can make us feel lazy and guilty. Instead, we often opt for mediocre substitutes, like social media, which give our mind a break without really allowing for true mind-wandering.

Our minds are eventually going to beg for a diffuse mode break no matter how much focus we try to maintain. Entering the diffuse mode requires stepping away and doing something which ideally is physically absorbing and mentally freeing. It might feel like taking a break or wasting time, but it’s a necessary part of creating something valuable.

The Stormtrooper Problem: Why Thought Diversity Makes Us Better

Diversity of thought makes us stronger, not weaker. Without diversity, we die off as a species. We can no longer adapt to changes in the environment. We need each other to survive.

***

Diversity is how we survive as a species. This is a quantifiable fact easily observed in the biological world. From niches to natural selection, diversity is the common theme of success for both the individual and the group.

Take the central idea of natural selection: The genes, individuals, groups, and species with the most advantageous traits in a given environment survive and reproduce in greater numbers. Eventually, those advantageous traits spread. The overall population becomes more suited to that environment. This occurs at multiple levels, from single genes to entire ecosystems.

That said, natural selection cannot operate without a diverse set of traits to select from! Without variation, selection cannot improve the lot of the higher-level group.

Diversity of Thought

We often seem to struggle with diversity of thought. This type of diversity shouldn’t threaten us. It should energize us. It means we have a wider variety of resources to deal with the inevitable challenges we face as a species.

Imagine that a meteor is on its way to earth. A crash would be the end of everyone. No matter how self-involved we are, no one wants to see humanity wiped out. So what do we do? Wouldn’t you hope that we could call on more than three people to help find a solution?

Ideally there would be thousands of people with different skills and backgrounds tackling this meteor problem, many minds and lots of options for changing the rock’s course and saving life as we know it. The diversity of backgrounds—variations in skills, knowledge, ways of looking at and understanding the problem—might be what saves the day. But why wait for the threat? A smart species would recognize that if diversity of knowledge and skills would be useful for dealing with a meteor, then diversity would probably be useful in a whole set of other situations.

For example, very few businesses can get by with one knowledge set that will take their product from concept to the homes of customers. You would never imagine that a business could be staffed with clones and be successful. It would be the ultimate in social proof. Everyone would literally be saying the same thing.

The Stormtrooper Problem

Intelligence agencies face a unique set of problems, ones that require creative, un-googleable solutions to one-off challenges.

You’d naturally think they would value and seek out diversity in order to solve those problems. And you’d be wrong. Increasingly it’s harder and harder to get a security clearance.

Do you have a lot of debt? That might make you susceptible to blackmail. Divorced? You might be an emotional wreck, which could mean you’ll make emotional decisions and not rational ones. Do something as a youth that you don’t want anyone to know? That makes it harder to trust you. Gay but haven’t told anyone? Blackmail risk. Independently wealthy? That means you don’t need our paycheck, which means you might be harder to work with. Do you have a nuanced opinion of politics? What about Edward Snowden? Yikes. The list goes on.

As the process gets harder and harder (in an attempt to reduce risk), less and less diversity makes it through the door. The people who do make it through are Stormtroopers.

And if you’re one of the lucky Stormtroopers to make it in, you’re given a checklist career development path. If you want a promotion, you know the exact experience and training you need to receive one. It’s simple. It doesn’t require much thought on your part.

The combination of these two things means that employees increasingly look at—and attempt to solve—problems the same way. The workforce is less effective than it used to be. This means you have to hire more people to do the same thing or outsource more work to people that hire misfits. This is the Stormtrooper problem.

Creativity and Innovation

Diversity is necessary in the workplace to generate creativity and innovation. It’s also necessary to get the job done. Teams with members from different backgrounds can attack problems from all angles and identify more possible solutions than teams whose members think alike. Companies also need diverse skills and knowledge to keep functioning. Finance superstars may not be the same people who will rock marketing. And the faster things change, the more valuable diversity becomes for allowing us to adapt and seize opportunity.

We all know that any one person doesn’t have it all figured out and cannot possibly do it all. We can all recognize that we rely on thousands of other people every day just to live. We interact with the world through the products we use, the entertainment we consume, the services we provide. So why do differences often unsettle us?

Any difference can raise this reaction: gender, race, ethnic background, sexual orientation. Often, we hang out with others like us because, let’s face it, communicating is easier with people who are having a similar life experience. And most of us like to feel that we belong. But a sense of belonging should not come at the cost of diversity.

Where Birds Got Feathers

Consider this: Birds did not get their feathers for flying. They originally developed them for warmth, or for being more attractive to potential mates. It was only after feathers started appearing that birds eventually began to fly. Feathers are considered an exaptation, something that evolved for one purpose but then became beneficial for other reasons. When the environment changes, which it inevitably does, a species has a significantly increased chance of survival if it has a diversity of traits that it can re-purpose. What can we re-purpose if everyone looks, acts, and thinks the same?

Further, a genetically homogeneous population is easy to wipe out, which makes it baffling that anyone thinks homogeneity is a good idea. Consider the Irish Potato Famine. In the mid-19th century a potato disease made its way around much of the world. Although it damaged potato crops everywhere, only in Ireland did it result in widespread devastation and death. About one quarter of Ireland’s population died or emigrated to avoid starvation over just a few years. Why did this potato disease have such significant consequences there and not anywhere else?

The short answer is a lack of diversity. The potato was the staple crop for Ireland’s poor. Tenant farms were so small that only potatoes could be grown in sufficient quantity to—barely—feed a family. Too many people depended on this one crop to meet their nutritional needs. In addition, the Irish primarily grew one type of potato, so most of the crops were vulnerable to the same disease. Once the blight hit, it easily infected potato fields all over Ireland, because they were all the same.

You can’t adapt if you have nothing to adapt with. If we are all the same, if we’ve wiped out every difference because we find it less challenging, then we increase our vulnerability to complete extinction. Are we too much alike to survive unforeseen challenges?

Even the reproductive process is, at its core, about diversity. You get half your genes from your mother and half from your father. These can be combined in so many different ways that siblings from the same parents are all going to be genetically unique.

Why is this important? Without this diversity we never would have made it this far. It’s this newness, each time life is started, that has given us options in the form of mutations. They’re like unexpected scientific breakthroughs. Some of these drove our species to awesome new capabilities. The ones that resulted in less fitness? These weren’t likely to survive. Success in life, survival on the large scale, has a lot to do with the potential benefits created by the diversity inherent in the reproductive process.

Diversity is what makes us stronger, not weaker. Biologically, without diversity we die off as a species. We can no longer adapt to changes in the environment. This is true of social diversity as well. Without diversity, we have no resources to face the inevitable challenges, no potential for beneficial mutations or breakthroughs that may save us. Yet we continue to have such a hard time with that. We’re still trying to figure out how to live with each other. We’re nowhere near ready for that meteor.

Article Summary

  • Visible diversity is not the same as cognitive diversity.
  • Cognitive diversity comes from thinking about problems differently, not from race, gender, or sexual orientation.
  • Cognitive diversity helps us avoid blind spots and adapt to changing environments.
  • You can’t have selection without variation.
  • The Stormtrooper problem is when everyone working on a problem thinks about it in the same way.

Half Life: The Decay of Knowledge and What to Do About It

Understanding the concept of a half-life will change what you read and how you invest your time. It will explain why our careers are increasingly specialized and offer a look into how we can compete more effectively in a very crowded world.

The Basics

A half-life is the time taken for something to halve its quantity. The term is most often used in the context of radioactive decay, which occurs when unstable atomic nuclei lose energy by emitting radiation. Twenty-nine elements are known to be capable of undergoing this process. Information also has a half-life, as do drugs, marketing campaigns, and all sorts of other things. We see the concept in any area where the quantity or strength of something decreases over time.

Radioactive decay is random, and measured half-lives are based on the most probable rate. We know that a nucleus will decay at some point; we just cannot predict when. It could happen almost instantly or take as long as the age of the universe. Although scientists have defined half-lives for different elements, the exact moment at which any individual nucleus decays is completely random.

Half-lives of elements vary tremendously, from fractions of a second to billions of years. Different isotopes of the same element can also have very different half-lives: the most common isotopes of carbon are stable, for example, while radioactive carbon-14 has a half-life of about 5,730 years. That stability is part of why carbon can serve as a building block of living organisms.

Three main types of nuclear decay have been identified: alpha, beta, and gamma. Alpha decay occurs when a nucleus splits into two parts: a helium nucleus and the remainder of the original nucleus. Beta decay occurs when a neutron in the nucleus of an element changes into a proton. The result is that it turns into a different element, such as when potassium decays into calcium. Beta decay also releases an electron and an antineutrino, a particle with virtually no mass. If a nucleus emits radiation without experiencing a change in its composition, it is subject to gamma decay. Gamma radiation contains an enormous amount of energy.

The Discovery of Half-Lives

The discovery of half-lives (and alpha and beta radiation) is credited to Ernest Rutherford, one of the most influential physicists of his time. Rutherford was at the forefront of this major discovery when he worked with physicist Joseph John Thomson on complementary experiments leading to the discovery of electrons. Rutherford recognized the potential of what he was observing and began researching radioactivity. Two years later, he identified the distinction between alpha and beta rays. This led to his discovery of half-lives, when he noticed that samples of radioactive materials took the same amount of time to decay by half. By 1902, Rutherford and his collaborators had a coherent theory of radioactive decay (which they called “atomic disintegration”). They demonstrated that radioactive decay enabled one element to turn into another — research which would earn Rutherford a Nobel Prize. A year later, he spotted the missing piece in the work of the chemist Paul Villard and named the third type of radiation gamma.

Half-lives are based on probabilistic thinking. If the half-life of an element is seven days, it is most probable that half of the atoms will have decayed in that time. For a large number of atoms, we can expect half-lives to be fairly consistent. It’s important to note that radioactive decay is based on the element itself, not the quantity of it. By contrast, in other situations, the half-life may vary depending on the amount of material. For example, the half-life of a chemical someone ingests might depend on the quantity.
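
A small simulation can make this point concrete: each individual decay is unpredictable, yet a large sample still shows a stable half-life in aggregate. This is only an illustrative sketch; the seven-day half-life and the sample size are arbitrary assumptions.

```python
import random

HALF_LIFE_DAYS = 7  # assumed half-life, matching the seven-day example above
# Daily decay probability consistent with that half-life:
DECAY_PROB_PER_DAY = 1 - 0.5 ** (1 / HALF_LIFE_DAYS)

def simulate(num_atoms: int = 100_000, days: int = 7) -> int:
    """Each atom decays at an unpredictable moment; return how many survive."""
    surviving = num_atoms
    for _ in range(days):
        # Every surviving atom independently "decides" whether to decay today.
        surviving = sum(
            1 for _ in range(surviving) if random.random() > DECAY_PROB_PER_DAY
        )
    return surviving

remaining = simulate()
print(f"{remaining / 100_000:.1%} of the sample remains after one half-life")
# With a large sample this prints very close to 50%, even though no single
# atom's decay time can be predicted.
```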

In biology, a half-life is the time taken for a substance to lose half its effects. The most obvious instance is drugs; the half-life is the time it takes for their effect to halve, or for half of the substance to leave the body. The half-life of caffeine is around 6 hours, but (as with most biological half-lives) numerous factors can alter that number. People with compromised liver function or certain genes will take longer to metabolize caffeine. Consumption of grapefruit juice has been shown in some studies to slow caffeine metabolism. It takes around 24 hours for a dose of caffeine to fully leave the body.
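
As a rough check on that 24-hour figure, here is the standard exponential-decay formula applied to caffeine, assuming a fixed 6-hour half-life (in reality, as noted above, the number varies from person to person).

```python
def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of a substance remaining after a given time, assuming exponential decay."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# A 6-hour half-life means four halvings in 24 hours: 0.5 ** 4 = 6.25%.
print(f"{remaining_fraction(24, 6):.1%} of the original dose remains after a day")
```

After four half-lives only about 6 percent of a dose is left, which is why “fully leaves the body” is a reasonable shorthand at roughly the 24-hour mark.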

The half-lives of drugs vary from a few seconds to several weeks. To complicate matters, biological half-lives vary for different parts of the body. Lead has a half-life of around a month in the blood, but a decade in bone. Plutonium in bone has a half-life of a century — more than double the time for the liver.

Marketers refer to the half-life of a campaign — the time taken to receive half the total responses. Unsurprisingly, this time varies among media. A paper catalog may have a half-life of about three weeks, whereas a tweet might have a half-life of a few minutes. Calculating this time is important for establishing how frequently a message should be sent.

“Every day that we read the news we have the possibility of being confronted with a fact about our world that is wildly different from what we thought we knew.”

— Samuel Arbesman

The Half-Life of Facts

In The Half-Life of Facts: Why Everything We Know Has an Expiration Date, Samuel Arbesman (see our Knowledge Project interview) posits that facts decay over time until they are no longer facts or perhaps no longer complete. According to Arbesman, information has a predictable half-life: the time taken for half of it to be replaced or disproved. Over time, one group of facts replaces another. As our tools and knowledge become more advanced, we can discover more — sometimes new things that contradict what we thought we knew, sometimes nuances about old things. Sometimes we discover a whole area that we didn’t know about.

The rate of these discoveries varies. Our body of engineering knowledge changes more slowly, for example, than does our body of psychological knowledge.

Arbesman studied the nature of facts through scientometrics, the quantitative study of science itself. The field was born in 1947, when mathematician Derek J. de Solla Price was arranging a set of philosophical books on his shelf. Price noted something surprising: the sizes of the books fit an exponential curve. His curiosity piqued, he began to see whether the same curve applied to science as a whole. Price established that the quantity of scientific data available was doubling every 15 years. This meant that some of the information had to be rendered obsolete with time.

Scientometrics shows us that facts are always changing, and much of what we know is (or soon will be) incorrect. Indeed, much of the available published research, however often it is cited, has never been reproduced and cannot be considered true. In a controversial paper entitled “Why Most Published Research Findings Are False,” John Ioannidis covers the rampant nature of poor science. Many researchers are incentivized to find results that will please those giving them funding. Intense competition makes it essential to find new information, even if it is found in a dubious manner. Yet we all have a tendency to turn a blind eye when beliefs we hold dear are disproved and to pay attention only to information confirming our existing opinions.

As an example, Arbesman points to the number of chromosomes in a human cell. Up until 1965, 48 was the accepted number that medical students were taught. (In 1953, it had been declared an established fact by a leading cytologist). Yet in 1956, two researchers, Joe Hin Tjio and Albert Levan, made a bold assertion. They declared the true number to be 46. During their research, Tjio and Levan could never find the number of chromosomes they expected. Discussing the problem with their peers, they discovered they were not alone. Plenty of other researchers found themselves two chromosomes short of the expected 48. Many researchers even abandoned their work because of this perceived error. But Tjio and Levan were right (for now, anyway). Although an extra two chromosomes seems like a minor mistake, we don’t know the opportunity costs of the time researchers invested in faulty hypotheses or the value of the work that was abandoned. It was an emperor’s-new-clothes situation, and anyone counting 46 chromosomes assumed they were the ones making the error.

As Arbesman puts it, facts change incessantly. Many of us have seen the ironic (in hindsight) doctor-endorsed cigarette ads from the past. A glance at a newspaper will doubtless reveal that meat or butter or sugar has gone from deadly to saintly, or vice versa. We forget that laughable, erroneous beliefs people once held are not necessarily any different from those we now hold. The people who believed that the earth was the center of the universe, or that some animals appeared out of nowhere or that the earth was flat, were not stupid. They just believed facts that have since decayed. Arbesman gives the example of a dermatology test that had the same question two years running, with a different answer each time. This is unsurprising considering the speed at which our world is changing.

As Arbesman points out, in the last century the world’s population has swelled from 2 billion to 7 billion, we have taken on space travel, and we have altered the very definition of science.

Our world seems to be in constant flux. With our knowledge changing all the time, even the most informed people can barely keep up. All this change may seem random and overwhelming (Dinosaurs have feathers? When did that happen?), but it turns out there is actually order within the shifting noise. This order is regular and systematic, and it can be described by science and mathematics.

The order Arbesman describes mimics the decay of radioactive elements. Whenever new information is discovered, we can be sure it will break down and be proved wrong at some point. As with a radioactive atom, we don’t know precisely when that will happen, but we know it will occur at some point.

If we zoom out and look at a particular body of knowledge, the random decay becomes orderly. Through probabilistic thinking, we can predict the half-life of a group of facts with the same certainty with which we can predict the half-life of a radioactive atom. The problem is that we rarely consider the half-life of information. Many people assume that whatever they learned in school remains true years or decades later. Medical students who learned in university that cells have 48 chromosomes would not learn later in life that this is wrong unless they made an effort to do so.

OK, so we know that our knowledge will decay. What do we do with this information? Arbesman says,

… simply knowing that knowledge changes like this isn’t enough. We would end up going a little crazy as we frantically tried to keep up with the ever changing facts around us, forever living on some sort of informational treadmill. But it doesn’t have to be this way because there are patterns. Facts change in regular and mathematically understandable ways. And only by knowing the pattern of our knowledge evolution can we be better prepared for its change.

Recent initiatives have sought to calculate the half-life of an academic paper. Ironically, academic journals have largely neglected research into how people use them and how best to fund the efforts of researchers. Research by Philip Davis measured the time taken for a paper to receive half of its total downloads. Davis’s results are compelling. While most forms of media have a half-life measured in days or even hours, 97 percent of academic papers have a half-life longer than a year. Engineering papers have a slightly shorter half-life than those in other fields: 6 percent of them (double the overall average of 3 percent) have a half-life of under a year. This makes sense considering what we looked at earlier in this post. Health and medical publications have the shortest overall half-life: two to three years. Physics, mathematics, and humanities publications have the longest half-lives: two to four years.

The Half-Life of Secrets

According to Peter Swire, writing in “The Declining Half-Life of Secrets,” the half-life of secrets (by which Swire generally means classified information) is shrinking. In the past, a government secret could be kept for over 25 years. Nowadays, hacks and leaks have shrunk that time considerably. Swire writes:

During the Cold War, the United States developed the basic classification system that exists today. Under Executive Order 13526, an executive agency must declassify its documents after 25 years unless an exception applies, with stricter rules if documents stay classified for 50 years or longer. These time frames are significant, showing a basic mind-set of keeping secrets for a time measured in decades.

Swire notes that there are three main causes: “the continuing effects of Moore’s Law — or the idea that computing power doubles every two years, the sociology of information technologists, and the different source and methods for signals intelligence today compared with the Cold War.” One factor is that spreading leaked information is easier than ever. In the past, it was often difficult to get information published. Newspapers feared legal repercussions if they shared classified information. Anyone can now release secret information, often anonymously, as with WikiLeaks. Governments cannot as easily rely on media gatekeepers to cover up leaks.

Rapid changes in technology or geopolitics often reduce the value of classified information, so the value of some, but not all, classified information also has a half-life. Sometimes it’s days or weeks, and sometimes it’s years. For some secrets, it’s not worth investing the massive amount of computer time that would be needed to break them because by the time you crack the code, the information you wanted to know might have expired.

(As an aside, if you were to invert the problem of all these credit card and SSN leaks, you might conclude that reducing the value of possessing this information would be more effective than spending money to secure it.)

“Our policy (at Facebook) is literally to hire as many talented engineers as we can find. The whole limit in the system is that there are not enough people who are trained and have these skills today.”

— Mark Zuckerberg

The Half-Lives of Careers and Business Models

The issue with information having a half-life should be obvious. Many fields depend on individuals with specialized knowledge, learned through study or experience or both. But what if those individuals are failing to keep up with changes and clinging to outdated facts? What if your doctor is offering advice that has been rendered obsolete since they finished medical school? What if your own degree or qualifications are actually useless? These are real problems, and knowing about half-lives will help you make yourself more adaptable.

While figures for the half-lives of most knowledge-based careers are hard to find, we do know the half-life of an engineering career. A century ago, it would take 35 years for half of what an engineer learned when earning their degree to be disproved or replaced. By the 1960s, that time span shrank to a mere decade. Today that figure is probably even lower.

In a 1966 paper entitled “The Dollars and Sense of Continuing Education,” Thomas Jones calculated the effort that would be required for an engineer to stay up to date, assuming a 10-year half-life. According to Jones, an engineer would need to devote at least five hours per week, 48 weeks a year, to stay up to date with new advancements. A typical degree requires about 4800 hours of work. Within 10 years, the information learned during 2400 of those hours would be obsolete. The five-hour figure does not include the time necessary to revise forgotten information that is still relevant. A 40-year career as an engineer would require 9600 hours of independent study.
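
Jones’s figures are easy to verify. The sketch below simply restates the arithmetic from the paragraph above (five hours a week for 48 weeks a year over a 40-year career, against a 4,800-hour degree with a 10-year half-life).

```python
HOURS_PER_WEEK = 5
WEEKS_PER_YEAR = 48
CAREER_YEARS = 40
DEGREE_HOURS = 4_800

yearly_study = HOURS_PER_WEEK * WEEKS_PER_YEAR   # 240 hours of updating per year
career_study = yearly_study * CAREER_YEARS       # 9,600 hours over a 40-year career
obsolete_in_ten_years = DEGREE_HOURS // 2        # 2,400 degree hours obsolete per half-life

print(yearly_study, career_study, obsolete_in_ten_years)  # 240 9600 2400
```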

Keep in mind that Jones made his calculations in the 1960s. Modern estimates place the half-life of an engineering degree at between 2.5 and 5 years, requiring between 10 and 20 hours of study per week. Welcome to the treadmill, where you have to run faster and faster so that you don’t fall behind.

Unsurprisingly, putting in this kind of time is simply impossible for most people. The result is an ever-shrinking length of a typical engineer’s career and a bias towards hiring recent graduates. A partial escape from this time-consuming treadmill that offers little progress is to recognize the continuous need for learning. If you agree with that, it becomes easier to place time and emphasis on developing heuristics and systems to foster learning. The faster the pace of knowledge change, the more valuable the skill of learning becomes.

A study by PayScale found that the median age of workers in most successful technology companies is substantially lower than that of other industries. Of 32 companies, just six had a median worker age above 35, despite the average across all workers being just over 42. Eight of the top companies had a median worker age of 30 or below — 28 for Facebook, 29 for Google, and 26 for Epic Games. The upshot is that salaries are high for those who can stay current while gaining years of experience.

In a similar vein, business models have ever-shrinking half-lives. The nature of capitalism is that you have to be better this year than you were last year — not to gain market share but to maintain what you already have. If you want to get ahead, you need asymmetry; otherwise, you get lost in trench warfare. How long would it take for half of Uber or Facebook’s business models to be irrelevant? It’s hard to imagine it being more than a couple of years or even months.

In The Business Model Innovation Factory: How to Stay Relevant When the World Is Changing, Saul Kaplan highlights the changing half-lives of business models. In the past, models could last for generations. The majority of CEOs oversaw a single business for their entire careers. Business schools taught little about agility or pivoting. Kaplan writes:

During the industrial era once the basic rules for how a company creates, delivers, and captures value were established[,] they became etched in stone, fortified by functional silos, and sustained by reinforcing company cultures. All of a company’s DNA, energy, and resources were focused on scaling the business model and beating back competition attempting to do a better job executing the same business model. Companies with nearly identical business models slugged it out for market share within well-defined industry sectors.

[…]

Those days are over. The industrial era is not coming back. The half-life of a business model is declining. Business models just don’t last as long as they used to. In the twenty-first century business leaders are unlikely to manage a single business for an entire career. Business leaders are unlikely to hand down their businesses to the next generation of leaders with the same business model they inherited from the generation before.

The Burden of Knowledge

The flip side of a half-life is the time it takes to double something. A useful guideline to calculate the time it takes for something to double is to divide 70 by the rate of growth. This formula isn’t perfect, but it gives a good indication. Known as the Rule of 70, it applies only to exponential growth when the relative growth rate remains consistent, such as with compound interest.

The higher the rate of growth, the shorter the doubling time. For example, if the population of a city is increasing by 2 percent per year, we divide 70 by 2 to get a doubling time of 35 years. The Rule of 70 is a useful heuristic; population growth of 2 percent might seem low, but your perspective might change when you consider that the city’s population could double in just 35 years. The Rule of 70 can also be used to calculate the time for an investment to double in value; for example, $100 at 7 percent compound interest will double in just a decade and quadruple in 20 years. The average newborn baby doubles its birth weight in under four months. The average doubling time for a tumor is also four months.
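
Here is a minimal sketch of the Rule of 70 applied to the examples above, alongside the exact doubling-time formula it approximates, ln(2) / ln(1 + r).

```python
import math

def doubling_time_rule_of_70(growth_rate_percent: float) -> float:
    """Approximate number of periods for a quantity to double."""
    return 70 / growth_rate_percent

def doubling_time_exact(growth_rate_percent: float) -> float:
    """Exact doubling time for steady compound growth."""
    return math.log(2) / math.log(1 + growth_rate_percent / 100)

# A city growing at 2% per year and an investment earning 7% per year:
print(doubling_time_rule_of_70(2), round(doubling_time_exact(2), 1))  # 35.0, 35.0
print(doubling_time_rule_of_70(7), round(doubling_time_exact(7), 1))  # 10.0, 10.2
```

The approximation is close enough for mental arithmetic, which is the point of the heuristic.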

We can see how information changes in the figures for how long it takes for a body of knowledge to double in size. The figures quoted by Arbesman (drawn from Little Science, Big Science … and Beyond by Derek J. de Solla Price) are compelling, including:

  • Time for the number of entries in a dictionary of national biographies to double: 100 years
  • Time for the number of universities to double: 50 years
  • Time for the number of known chemical compounds to double: 15 years
  • Time for the number of known asteroids to double: 10 years

Arbesman also gives figures for the time taken for the available knowledge in a particular field to double, including:

  • Medicine: 87 years
  • Mathematics: 63 years
  • Chemistry: 35 years
  • Genetics: 32 years

The doubling of knowledge increases the learning load over time. As a body of knowledge doubles, so does the cost of wrapping your head around what we already know. This cost is the burden of knowledge. To be the best in a general field today requires that you know more than the person who was the best only 20 years ago. Not only do you have to be better to be the best, but you also have to be better just to stay in the game.

The corollary is that because there is so much to know, we specialize in very niche areas. This makes it easier to grasp the existing body of facts, keep up to date on changes, and rise to the level of expert. The problem is that specializing also makes it easier to see the world through the narrow focus of your specialty, makes it harder to work with other people (as niches are often dominated by jargon), and makes you prone to overvalue the new and novel.

Conclusion

As we have seen, understanding how half-lives work has numerous practical applications, from determining when radioactive materials will become safe to figuring out effective drug dosages. Half-lives also show us that if we spend time learning something that changes quickly, we might be wasting our time. Like Alice in Wonderland — and a perfect example of the Red Queen Effect — we have to run faster and faster just to keep up with where we are. So if we want our knowledge to compound, we’ll need to focus on the invariant general principles.

Activation Energy: Why Getting Started Is the Hardest Part

The beginning of any complex or challenging endeavor is always the hardest part. Not all of us wake up and jump out of bed ready for the day. Some of us, like me, need a little extra energy to transition out of sleep and into the day. Once I’ve had a cup of coffee, my energy level jumps and I’m good for the rest of the day. Chemical reactions work in much the same way. They need their coffee, too. We call this activation energy.

Understanding how this works can be a useful perspective as part of our latticework of mental models.

Whether you use chemistry in your everyday work or have tried your best not to think about it since school, the ideas behind activation energy are simple and useful outside of chemistry. Understanding the principle can, for example, help you get kids to eat their vegetables, motivate yourself and others, and overcome inertia.

How Activation Energy Works in Chemistry

Chemical reactions need a certain amount of energy to begin working. Activation energy is the minimum energy required to cause a reaction to occur.

To understand activation energy, we must first think about how a chemical reaction occurs.

Anyone who has ever lit a fire will have an intuitive understanding of the process, even if they have not connected it to chemistry.

Most of us have a general feel for the heat necessary to start flames. We know that putting a single match to a large log will not be sufficient and a flame thrower would be excessive. We also know that damp or dense materials will require more heat than dry ones. The imprecise amount of energy we know we need to start a fire is representative of the activation energy.

For a reaction to occur, existing bonds must break and new ones form. A reaction will only proceed if the products are more stable than the reactants. In a fire, the carbon in wood is converted into CO2, a more stable form of carbon, so the reaction proceeds and gives off heat in the process. In this example, the activation energy is the initial heat required to get the fire started; our effort and spent matches are representative of it.

We can think of activation energy as the energy barrier separating the minima (the lowest-energy states) of the reactants and products in a chemical reaction.

The Arrhenius Equation

Svante Arrhenius, a Swedish scientist, established the existence of activation energy in 1889.

Arrhenius developed his eponymous equation to describe the correlation between temperature and reaction rate.

The Arrhenius equation is crucial for calculating the rates of chemical reactions and, importantly, the quantity of energy necessary to start them.
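
For reference, the equation in its standard form (using the symbols explained in the next paragraph) is:

```latex
k = A \, e^{-E_a / (R T)}
```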

In the Arrhenius equation, k is the reaction rate coefficient (the rate of reaction), A is the frequency factor (how often molecules collide), R is the universal gas constant (units of energy per temperature increment per mole), T is the absolute temperature (usually measured in kelvins), and Ea is the activation energy.

It is not necessary to know the value of A to calculate Ea as this can be figured out from the variation in reaction rate coefficients in relation to temperature. Like many equations, it can be rearranged to calculate different values. The Arrhenius equation is used in many branches of chemistry.
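
As a sketch of that rearrangement, the activation energy can be estimated from the rate coefficient measured at two temperatures, because dividing the two Arrhenius equations cancels the frequency factor A. The rate values below are made-up numbers, purely for illustration.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def activation_energy(k1: float, t1: float, k2: float, t2: float) -> float:
    """Estimate Ea in J/mol from rate coefficients k1, k2 measured at temperatures t1, t2 (kelvins)."""
    # ln(k2 / k1) = (Ea / R) * (1/t1 - 1/t2), so:
    return R * math.log(k2 / k1) / (1 / t1 - 1 / t2)

# Hypothetical example: the rate coefficient doubles between 300 K and 310 K.
ea = activation_energy(k1=1.0, t1=300, k2=2.0, t2=310)
print(f"Ea is roughly {ea / 1000:.0f} kJ/mol")  # roughly 54 kJ/mol
```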

Why Activation Energy Matters

Understanding the energy necessary for a reaction to occur gives us control over our surroundings.

Returning to the example of fire, our intuitive knowledge of activation energy keeps us safe. Many chemical reactions have high activation energy requirements, so they do not proceed without an additional input. We all know that a book on a desk is flammable, but will not combust without heat application. At room temperature, we need not see the book as a fire hazard. If we light a candle on the desk, we know to move the book away.

If chemical reactions did not have reliable activation energy requirements, we would live in a dangerous world.

Catalysts

Chemical reactions which require substantial amounts of energy can be difficult to control.

Increasing temperature is not always a viable source of energy due to costs, safety issues, or simple impracticality. Chemical reactions that occur within our bodies, for example, cannot use high temperatures as a source of activation energy. Consequently, it is sometimes necessary to reduce the activation energy required.

Speeding up a reaction by lowering the activation energy required is called catalysis. This is done with an additional substance known as a catalyst, which is generally not consumed in the reaction. In principle, you only need a tiny amount of catalyst to cause catalysis.

Catalysts work by providing an alternative pathway with lower activation energy requirements. Consequently, more of the particles have sufficient energy to react. Catalysts are used in industrial scale reactions to lower costs.
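
To make “lowering the activation energy” concrete, the Arrhenius equation lets us estimate how much a reduced barrier speeds up a reaction at a fixed temperature. The activation energies below are assumptions chosen only to show the scale of the effect.

```python
import math

R = 8.314                 # universal gas constant, J/(mol*K)
T = 310                   # roughly body temperature, in kelvins
EA_UNCATALYZED = 75_000   # assumed barrier without a catalyst, J/mol
EA_CATALYZED = 50_000     # assumed lower barrier with a catalyst, J/mol

# With the frequency factor A unchanged, the rate ratio depends only on the drop in Ea.
speedup = math.exp((EA_UNCATALYZED - EA_CATALYZED) / (R * T))
print(f"Roughly {speedup:,.0f} times faster with the catalyst")
```

Even a modest reduction in the barrier produces a dramatic speedup, which is why enzymes and industrial catalysts are so valuable.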

Returning to the fire example, we know that attempting to light a large log with a match is rarely effective. Adding some paper will provide an alternative pathway and serve as a catalyst — firestarters do the same.

Within our bodies, enzymes serve as catalysts in vital reactions (such as building DNA).

“Energy can have two dimensions. One is motivated, going somewhere, a goal somewhere, this moment is only a means and the goal is going to be the dimension of activity, goal oriented-then everything is a means, somehow it has to be done and you have to reach the goal, then you will relax. But for this type of energy, the goal never comes because this type of energy goes on changing every present moment into a means for something else, into the future. The goal always remains on the horizon. You go on running, but the distance remains the same.
No, there is another dimension of energy: that dimension is unmotivated celebration. The goal is here, now; the goal is not somewhere else. In fact, you are the goal. In fact, there is no other fulfillment than that of this moment–consider the lilies. When you are the goal and when the goal is not in the future, when there is nothing to be achieved, rather you are just celebrating it, then you have already achieved it, it is there. This is relaxation, unmotivated energy.”

— Osho, Tantra

Applying the Concept of Activation Energy to Our Daily Lives

Although activation energy is a scientific concept, we can use it as a practical mental model.

Returning to the morning coffee example, many of the things we do each day depend upon an initial push.

Take the example of a class of students set an essay for their coursework. Each student requires a different sort of activation energy to get started. For one student, it might be hearing a friend say she has already finished hers. For another, it might be blocking social media and turning off their phone. A different student might need a few cans of Red Bull and an impending deadline. For yet another, it might be reading an interesting article on the topic that provides a spark of inspiration. The act of writing an essay necessitates a certain sort of energy.

Getting kids to eat their vegetables can be a difficult process. In this case, incentives can act as a catalyst. “You can’t have your dessert until you eat your vegetables” is not only a psychological play on incentives; it also often requires less energy than constantly fighting with the kids to eat their vegetables. Once kids eat a carrot, they generally eat another one and another one. While they still want dessert, you won’t have to remind them each time, so you’ll save a lot of energy.

The concept of activation energy can also apply to making drastic life changes. Anyone who has ever done something dramatic and difficult (such as quitting an addiction, leaving an abusive relationship, quitting a long-term job, or making crucial lifestyle changes) knows that it is necessary to reach a breaking point first. The bigger and more challenging an action is, the more activation energy we require to do it.

Our coffee drinker might need only a little activation energy (a cup or two) to begin their day if they are well rested. Meanwhile, it will take a whole lot more coffee to get them going if they slept badly and have a dull day to get through.

Conclusion

To understand and use the concept of activation energy in our lives does not require a degree in chemistry. While the concept as used by scientists is complex, we can use the basic idea.

It is no coincidence that many of the most useful mental models in our latticework originate from science. There is something quite poetic about the way in which human behavior mirrors what occurs at a microscopic level.

For other examples, look to Occam’s Razor, falsification, feedback loops, and equilibrium.

Survival of the Fittest: Groups versus Individuals

If ‘survival of the fittest’ is the prime evolutionary tenet, then why do some behaviors that lead to winning or success, seemingly justified by this concept, ultimately leave us cold?

Taken from Darwin’s theory of evolution, survival of the fittest is often conceptualized as the advantage that accrues with certain traits, allowing an individual to both thrive and survive in their environment by out-competing others for limited resources. Qualities such as strength and speed were beneficial to our ancestors, allowing them to survive in demanding environments, and thus our general admiration for these qualities is now understood through this evolutionary lens.

However, in humans this evolutionary concept is often co-opted to defend a wide range of behaviors, not all of them good: winning by cheating, or stepping on others to achieve our goals.

Why is this?

One answer is that humans are not only concerned with our individual survival, but the survival of our group. (Which, of course, leads to improved individual survival, on average.) This relationship between individual and group survival is subject to intense debate among biologists.

Selecting for Unselfishness?

Humans display a wide range of behavior that seems counter-intuitive to the survival of the fittest mentality until you consider that we are an inherently social species, and that keeping our group fit is a wise investment of our time and energy.

One behavior humans frequently display is “indirect reciprocity”. Unlike “direct reciprocity”, in which I help you and you help me, indirect reciprocity confers no immediate benefit on the one doing the helping. Either I help you and you later help someone else, or I help you and then someone else, some time in the future, helps me.

Martin A. Nowak and Karl Sigmund have studied this phenomenon in humans for many years. Essentially, they ask the question “How can natural selection promote unselfish behavior?”

Many of their studies have shown that “propensity for indirect reciprocity is widespread. A lot of people choose to do it.”

Furthermore:

Humans are the champions of reciprocity. Experiments and everyday experience alike show that what Adam Smith called ‘our instinct to trade, barter and truck’ relies to a considerable extent on the widespread tendency to return helpful and harmful acts in kind. We do so even if these acts have been directed not to us but to others.

We care about what happens to others, even if the entire event is one that we have no part in. If you consider evolution in terms of survival of the fittest group, rather than individual, this makes sense.

Supporting those who harm others can breed mistrust and instability. And if we don’t trust each other, day to day transactions in our world will be completely undermined. Sending your kids to school, banking, online shopping: We place a huge amount of trust in our fellow humans every day.

If we consider this idea of group survival, we can also see value in a wider range of human attributes and behaviors. It is now not about “I have to be the fittest in every possible way in order to survive“, but recognizing that I want fit people in my group.

In her excellent book, Quiet: The Power of Introverts in a World That Can’t Stop Talking, author Susan Cain explores, among other things, the relevance of introverts to social function. How their contributions benefit the group as a whole. Introverts are people who “like to focus on one task at a time, … listen more than they talk, think before they speak, … [and] tend to dislike conflict.”

Though out of step with the culture of “the extrovert ideal” we are currently living in, introverts contribute significantly to our group fitness. Without them we would be deprived of much of our art and scientific progress.

Cain argues:

Among evolutionary biologists, who tend to subscribe to the vision of lone individuals hell-bent on reproducing their own DNA, the idea that species include individuals whose traits promote group survival is hotly debated and, not long ago, could practically get you kicked out of the academy.

But the idea makes sense. If personality types such as introverts aren’t the fittest for survival, then why did they persist? Possibly because of their value to the group.

Cain looks at the work of Dr. Elaine Aron, who has spent years studying introverts, and is one herself. In explaining the idea of different personality traits as part of group selection in evolution, Aron offers this story in an article posted on her website:

I used to joke that when a group of prehistoric humans were sitting around the campfire and a lion was creeping up on them all, the sensitive ones [introverts] would alert the others to the lion’s prowling and insist that something be done. But the non-sensitive ones [extroverts] would be the ones more likely to go out and face the lion. Hence there are more of them than there are of us, since they are willing and even happy to do impulsive, dangerous things that will kill many of us. But also, they are willing to protect us and hunt for us, if we are not as good at killing large animals, because the group needs us. We have been the healers, trackers, shamans, strategists, and of course the first to sense danger. So together the two types survive better than a group of just one type or the other.

The lesson is this: Groups survive better if they have individuals with different strengths to draw on. The more tools you have, the more likely you can complete a job. The more people you have who are different, the more likely you are to survive the unexpected.

Which Group?

How, then, does one define the group? Who am I willing to help? Arguably, I’m most willing to sacrifice for my children or family: my immediate little group. But history is full of examples of those who sacrificed significantly for their tribes or sports teams or countries.

We can’t argue that it is just about the survival of our own DNA. That may explain why I will throw myself in front of a speeding car to protect my child, but the beaches of Normandy were stormed by thousands of young, childless men. When soldiers from World War I were interviewed about why they would jump out of a trench to try to take a slice of no man’s land, they most often said they did it “for the guy next to them”. They had initially joined the military out of a sense of “national pride” or other decidedly non-DNA reasons.

Clearly, human culture is capable of defining “groups” very broadly through a complex system of mythology, creating deep loyalty to “imaginary” groups like sports teams, corporations, nations, or religions.

As technology shrinks our world, our group expands. Technological advancement pushes us into higher degrees of specialization, so that individual survival becomes clearly linked with group survival.

I know that I have a vested interest in doing my part to maintain the health of my group. I am very attached to indoor plumbing and grocery stores, yet don’t participate at all in the giant webs that allow those things to exist in my life. I don’t know anything about the configuration of the municipal sewer system or how to grow raspberries. (Of course, Adam Smith called this process of the individual benefitting the group through specialization the Invisible Hand.)

When we see ourselves as part of a group, we want the group to survive and even thrive. Yet how big can our group be? Is there always an us vs. them? Does our group surviving always have to be at the expense of others? We leave you to speculate.