Blog

Gates’ Law: How Progress Compounds and Why It Matters

“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

It’s unclear exactly who first made that statement, when they said it, or how it was phrased. The most probable source is Roy Amara, a Stanford computer scientist. In the 1960s, Amara told colleagues that he believed that “we overestimate the impact of technology in the short-term and underestimate the effect in the long run.” For this reason, variations on that phrase are often known as Amara’s Law. However, Bill Gates made a similar statement (possibly paraphrasing Amara), so it’s also known as Gates’s Law.

You may have seen the same phrase attributed to Arthur C. Clarke, Tony Robbins, or Peter Drucker. There’s a good reason why Amara’s words have been appropriated by so many thinkers—they apply to so much more than technology. Almost universally, we tend to overestimate what can happen in the short term and underestimate what can happen in the long term.

Thinking about the future does not require endless hyperbole or even forecasting, which is usually pointless anyway. Instead, there are patterns we can identify if we take a long-term perspective.

Let’s look at what Bill Gates meant and why it matters.

Moore’s Law

Gates’s Law is often mentioned in conjunction with Moore’s Law. This is generally quoted as some variant of “the number of transistors on an inch of silicon doubles every eighteen months.” However, calling it Moore’s Law is misleading—at least if you think of laws as invariant. It’s more of an observation of a historical trend.

When Gordon Moore, co-founder of Fairchild Semiconductor and Intel, noticed in 1965 that the number of transistors on a chip doubled every year, he was not predicting that would continue in perpetuity. Indeed, Moore revised the doubling time to two years a decade later. But the world latched onto his words. Moore’s Law has been variously treated as a target, a limit, a self-fulfilling prophecy, and a physical law as certain as the laws of thermodynamics.

Moore’s Law is now considered to be outdated, after holding true for several decades. That doesn’t mean the concept has gone anywhere. Moore’s Law is often regarded as a general principle in technological development. Certain performance metrics have a defined doubling time, the opposite of a half-life.

Why is Moore’s Law related to Amara’s Law?

Exponential growth is a concept we struggle to grasp. As University of Colorado physics professor Albert Allen Bartlett famously put it, “The greatest shortcoming of the human race is our inability to understand the exponential function.”

When we talk about Moore’s Law, we easily underestimate what happens when a value keeps doubling. Sure, it’s not that hard to imagine your laptop getting twice as fast in a year, for instance. Where it gets tricky is when we try to imagine what that means on a longer timescale. What does that mean for your laptop in 10 years? There is a reason your iPhone has more processing power than the first space shuttle.
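To make the arithmetic concrete, here is a minimal Python sketch of what a fixed doubling time implies over a decade (the doubling times below are illustrative assumptions, not measured figures):

```python
# Illustration only: how a fixed doubling time compounds over a decade.
def growth_multiple(years: float, doubling_time_years: float) -> float:
    """How many times larger a quantity becomes after `years`,
    if it doubles every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

print(growth_multiple(10, 1.0))  # doubling every year: 1024x after ten years
print(growth_multiple(10, 1.5))  # the popular "18 months" framing: roughly 102x
```

A steady two-year doubling time, the figure Moore eventually settled on, still works out to roughly a 32-fold gain per decade. The year-to-year change feels modest; the decade-to-decade change does not.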

One of the best illustrations of exponential growth is the legend of a peasant and the emperor of China. In the story, the peasant (sometimes said to be the inventor of chess) visits the emperor with a seemingly modest request: a chessboard with one grain of rice on the first square, two on the second, four on the third, and so on, doubling with each square. The emperor agrees to this idiosyncratic request and orders his men to start counting out rice grains.

“Every fact of science was once damned. Every invention was considered impossible. Every discovery was a nervous shock to some orthodoxy. Every artistic innovation was denounced as fraud and folly. We would own no more, know no more, and be no more than the first apelike hominids if it were not for the rebellious, the recalcitrant, and the intransigent.”

— Robert Anton Wilson

If you haven’t heard this story before, it might seem like the peasant would end up with, at best, enough rice to feed their family that evening. In reality, the request was impossible to fulfill. Doubling the grains 63 times, once for each remaining square on the chessboard, would have left the emperor owing the peasant over 18 million trillion grains of rice across the whole board. To grow just half of that amount, he would have needed to drain the oceans and convert every bit of land on this planet into rice fields. And that’s for half.
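For the chessboard itself, a couple of lines of Python (a sketch added here for illustration, not part of the original legend) confirm the scale of the emperor’s debt:

```python
# One grain on the first square, doubling on each of the chessboard's 64 squares.
total_grains = sum(2 ** square for square in range(64))  # equals 2**64 - 1

print(f"{total_grains:,}")    # 18,446,744,073,709,551,615
print(f"{total_grains:.2e}")  # ~1.84e+19, i.e. over 18 million trillion grains
```

The last square alone holds 2**63 grains, more than all of the preceding squares combined.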

In his essay “The Law of Accelerating Returns,” author and inventor Ray Kurzweil uses this story to show how we misunderstand the meaning of exponential growth in technology. For the first few squares, the growth was inconsequential, especially in the eyes of an emperor. It was only once they reached the halfway point that the rate began to snowball dramatically. (It’s no coincidence that Warren Buffett’s authorized biography is called The Snowball, and few people understand exponential growth better than Warren Buffett). It just so happens that by Kurzweil’s estimation, we’re at that inflection point in computing. Since the creation of the first computers, computation power has doubled roughly 32 times. We may underestimate the long-term impact because the idea of this continued doubling is so tricky to imagine.

The Technology Hype Cycle

To understand how this plays out, let’s take a look at the cycle innovations go through after their invention. Known as the Gartner hype cycle, it primarily concerns our perception of technology—not its actual value in our lives.

Hype cycles are obvious in hindsight, but fiendishly difficult to spot while they are happening. It’s important to bear in mind that this model is one way of looking at reality and is not a prediction or a template. Sometimes a step gets missed, sometimes there is a substantial gap between steps, sometimes a step is deceptive.

The hype cycle happens like this:

  • New technology: The media picks up on a new technology that may not yet exist in a usable form. Nonetheless, the publicity leads to significant interest. At this point, people working on research and development are probably not making any money from it. Lots of mistakes are made. In Everett Rogers’s diffusion of innovations theory, this is known as the innovation stage. If it seems like something new will have a dramatic payoff, it probably won’t last. If it seems we have found the perfect use for a brand-new technology, we may be wrong.
  • The peak of inflated expectations: A few well-publicized success stories lead to inflated expectations. Hype builds and new companies pop up to anticipate the demand. There may be a burst of funding for research and development. Scammers looking to make a quick buck may move into the area. Rogers calls this the syndication stage. It’s here that we overestimate the future applications and impact of the technology.
  • The trough of disillusionment: Prominent failures or a lack of progress break through the hype and lead to disillusionment. People become pessimistic about the technology’s potential and mostly lose interest. Reports of scams may contribute to this, with the media citing them as proof that the technology is a fraud. If it seems like a new technology is dying, it may just be that its public perception has changed while the technology itself is still developing. Hype does not correlate directly with functionality.
  • The slope of enlightenment: As time passes, people continue to improve technology and find better uses for it. Eventually, it’s clear how it can improve our lives, and mainstream adoption begins. Mechanisms for preventing scams or lawbreaking emerge.
  • The plateau of productivity: The technology becomes mainstream. Development slows. It becomes part of our lives and ceases to seem novel. Those who move into the now saturated market tend to struggle, as a few dominant players take the lion’s share of the available profits. Rogers calls this the diffusion stage.

When we are cresting the peak of inflated expectations, we imagine that the new development will transform our lives within months. In the depths of the trough of disillusionment, we don’t expect it to get anywhere, even allowing years for it to improve. We typically fail to anticipate the significance of the plateau of productivity, even though it may ultimately exceed our initial expectations.

Smart people can usually see through the initial hype. But only a handful of people can—through foresight, stubbornness, or perhaps pure luck—see past the trough of disillusionment. Most of the initial skeptics feel vindicated by the dramatic drop in interest and expect the innovation to disappear. It takes far greater expertise to support an unpopular technology than to deride a popular one.

Correctly spotting the cycle as it unfolds can be immensely profitable. Misreading it can be devastating. First movers in a new area often struggle to survive the trough, even if they are the ones who do the essential research and development. We tend to assume current trends will continue, so we expect sustained growth during the peak and expect linear decline during the trough.

If we are trying to assess the future impact of a new technology, we need to separate its true value from its public perception. When something is new, the mainstream hype is likely to be more noise than signal. After all, the peak of inflated expectations often happens before the technology is available in a usable form. It’s almost always before the public has access to it. Hype serves a real purpose in the early days: it draws interest, secures funding, attracts people with the right talents to move things forward and generates new ideas. Not all hype is equally important, because not all opinions are equally important. If there’s intense interest within a niche group with relevant expertise, that’s more telling than a general enthusiasm.

The hype cycle doesn’t just happen with technology. It plays out all over the place, and we’re usually fooled by it. Discrepancies between our short- and long-term estimates of achievement are everywhere. Consider the following situations. They’re hypothetical, but similar situations are common.

  • A musician releases an acclaimed debut album which creates enormous interest in their work. When their second album proves disappointing (or never materializes), most people lose interest. Over time, the performer develops a loyal, sustained following of people who accurately assess the merits of their music, not the hype.
  • A promising new pharmaceutical receives considerable attention—until it becomes apparent that there are unexpected side effects, or it isn’t as powerful as expected. With time, clinical trials find alternate uses which may prove even more beneficial. For example, a side effect could be helpful for another use. It’s estimated that over 20% of pharmaceuticals are prescribed for a different purpose than they were initially approved for, with that figure rising as high as 60% in some areas.
  • A propitious start-up receives an inflated valuation after a run of positive media attention. Its founders are lauded and extensively profiled and investors race to get involved. Then there’s an obvious failure—perhaps due to the overconfidence caused by hype—or early products fall flat or take too long to create. Interest wanes. The media gleefully dissects the company’s apparent demise. But the product continues to improve and ultimately becomes a part of our everyday lives.

In the short run, the world is a voting machine affected by whims and marketing. In the long run, it’s a weighing machine where quality and product matter.

The Adjacent Possible

Now that we know how Amara’s Law plays out in real life, the next question is: why does this happen? Why does technology grow in complexity at an exponential rate? And why don’t we see it coming?

One explanation is what Stuart Kauffman describes as “the adjacent possible.” Each new innovation expands the set of future innovations that are within reach. It opens up adjacent possibilities that didn’t exist before, because better tools can be used to make even better tools.

Humanity is about expanding the realm of the possible. Discovering fire meant our ancestors could use the heat to soften or harden materials and make better tools. Inventing the wheel meant the ability to move resources around, which meant new possibilities such as the construction of more advanced buildings using materials from other areas. Domesticating animals meant a way to pull wheeled vehicles with less effort, meaning heavier loads, greater distances and more advanced construction. The invention of writing led to new ways of recording, sharing and developing knowledge which could then foster further innovation. The internet continues to give us countless new opportunities for innovation. Anyone with a new idea can access endless free information, find supporters, discuss their ideas and obtain resources. New doors to the adjacent possible open every day as we find different uses for technology.

“We like to think of our ideas as $40,000 incubators shipped directly from the factory, but in reality, they’ve been cobbled together with spare parts that happened to be sitting in the garage.”

— Steven Johnson, Where Good Ideas Come From

Take the case of GPS, an invention that was itself built out of the debris of its predecessors. In recent years, GPS has opened up new possibilities that didn’t exist before. The system was developed by the US government for military usage. In the 1980s, they decided to start allowing other organizations and individuals to use it. Civilian access to GPS gave us new options. Since then, it has led to numerous innovations that incorporate the system into old ideas: self-driving cars, mobile phone tracking (very useful for solving crime or finding people in emergency situations), tectonic plate trackers that help predict earthquakes, personal navigation systems, self-navigating robots, and many others. None of these would have been possible without some sort of global positioning system. With the invention of GPS, human innovation sped up a little more.

Steven Johnson gives one example of how this happens in Where Good Ideas Come From. In 2008, MIT professor Timothy Prestero visited a hospital in Indonesia and found that all eight of the incubators for newborn babies were broken. The incubators had been donated to the hospital by relief organizations, but the staff didn’t know how to fix them. Plus, the incubators were poorly suited to the humid climate, and the repair instructions only came in English. Prestero realized that donating medical equipment was pointless if local people couldn’t fix it. He and his team began designing an incubator that would keep working, and keep saving babies’ lives, for more than a couple of months.

Instead of continuing to tweak existing designs, Prestero and his team devised a completely new incubator that used car parts. While the local people didn’t know how to fix an incubator, they were extremely adept at keeping their cars working no matter what. Named the NeoNurture, it used headlights for warmth, dashboard fans for ventilation, and a motorcycle battery for power. Hospital staff just needed to find someone who was good with cars to fix it—the principles were the same.

Even more telling is the origin of the incubators Prestero and his team reconceptualized. The first incubator for newborn babies was designed by Stephane Tarnier in the late 19th century. While visiting a zoo on his day off, Tarnier noted that newborn chicks were kept in heated boxes. It’s not a big leap to imagine that the issue of infant mortality was permanently on his mind. Tarnier was an obstetrician, working at a time when the infant mortality rate for premature babies was about 66%. He must have been eager to try anything that could reduce that figure and its emotional toll. Tarnier’s rudimentary incubator immediately halved that mortality rate. The technology was right there, in the zoo. It just took someone to connect the dots and realize human babies aren’t that different from chicken babies.

Johnson explains the significance of this: “Good ideas are like the NeoNurture device. They are, inevitably, constrained by the parts and skills that surround them…ideas are works of bricolage; they’re built out of that detritus.” Tarnier could invent the incubator only because someone else had already invented a similar device. Prestero and his team could only invent the NeoNurture because Tarnier had come up with the incubator in the first place.

This happens in our lives, as well. If you learn a new skill, the number of skills you could potentially learn increases because some elements may be transferable. If you are introduced to a new person, the number of people you could meet grows, because they may introduce you to others. If you start learning a language, native speakers may be more willing to have conversations with you in it, meaning you can get a broader understanding. If you read a new book, you may find it easier to read other books by linking together the information in them. The list is endless. We can’t imagine what we’re capable of achieving in ten years because we forget about the adjacent possibilities that will emerge.

Accelerating Change

The adjacent possible has been expanding ever since the first person picked up a stone and started shaping it into a tool. Just look at what written and oral forms of communication made possible—no longer did each generation have to learn everything from scratch. Suddenly we could build upon what had come before us.

Some (annoying) people claim that there’s nothing new left. There are no new ideas to be had, no new creations to invent, no new options to explore. In fact, the opposite is true. Innovation is a non-zero-sum game. A crowded market actually means more opportunities to create something new than a barren one. Technology is a feedback loop. The creation of something new begets the creation of something even newer and so on.

Progress is exponential, not linear. So we overestimate the impact of a new technology during the early days when it is just finding its feet, then underestimate its impact in a decade or so when its full uses are emerging. As old limits and constraints melt away, our options explode. The exponential growth of technology is known as accelerating change. It’s a common belief among experts that the rate of change is speeding up and society will change dramatically alongside it.

“Ideas borrow, blend, subvert, develop and bounce off other ideas.”

— John Hegarty, Hegarty On Creativity

In 1999, author and inventor Ray Kurzweil posited the Law of Accelerating Returns — that evolutionary systems develop at an exponential rate. While this is most obvious for technology, Kurzweil hypothesized that the principle is relevant in numerous other areas. Moore’s Law, initially referring only to semiconductors, has wider implications.

In an essay on the topic, he writes:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth.
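Where does a figure like “20,000 years” come from? As a rough sanity check, here is a toy calculation under the simplifying assumption that the rate of progress doubles once per decade (this is not Kurzweil’s own math; in his model the doubling time itself keeps shrinking):

```python
# Toy model: if each decade delivers twice the progress of the decade before it,
# how many "years of progress at today's rate" fit into one century?
equivalent_years = sum(10 * 2 ** decade for decade in range(10))
print(equivalent_years)  # 10230
```

Even this crude version turns a single century into roughly ten thousand years of progress measured at today’s rate, the same order of magnitude as Kurzweil’s estimate.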

Progress is tricky to predict or even to notice as it happens. It’s hard to notice things in a system that we are part of. And it’s hard to notice the incremental change because it lacks stark contrast. The current pace of change is our norm, and we adjust to it. In hindsight, we can see how Amara’s Law plays out.

Look at where the internet was just twenty years ago. A report from the Pew Research Center shows us how change compounds. In 1998, a mere 41% of Americans used the internet at all—and the report expresses surprise that the users were beginning to include “people without college training, those with modest incomes, and women.” Less than a third of users had bought something online, email was predominantly just for work, and only a third of users looked at online news at least once per week. That’s a third of the 41% using the internet, by the way, not of the general population. Wikipedia and Gmail didn’t exist. Internet users in the late nineties reported that their main problem was finding what they needed online.

That is perhaps the biggest change and one we may not have anticipated: the move towards personalization. Finding what we need is no longer a problem. Most of us have the opposite problem and struggle with information overwhelm. Twenty years ago, filter bubbles were barely a problem (at least, not online). Now, almost everything we encounter online is personalized to ensure it’s ridiculously easy to find what we want. Newsletters, websites, and apps greet us by name. Newsfeeds are organized by our interests. Shopping sites recommend other products we might like. This has increased the amount the internet does for us to a level that would have been hard to imagine in the late 90s. Kevin Kelly, writing in The Inevitable, describes filtering as one of the key forces that will shape the future.

History reveals an extraordinary acceleration of technological progress. Establishing a precise timeline is tricky: some inventions occurred in several places at different times, archaeological records are inevitably incomplete, and dating methods are imperfect. Even so, accelerating change is a clear pattern, and a quick overview of the history of technology makes the principle concrete.

Early innovations happened slowly. It took us about 30,000 years to invent clothing and about 120,000 years to invent jewelry. It took us about 130,000 years to invent art and about 136,000 years to come up with the bow and arrow. But things began to speed up in the Upper Paleolithic period. Between 50,000 and 10,000 years ago, we developed more sophisticated tools with specialized uses—think harpoons, darts, fishing tools, and needles—early musical instruments, pottery, and the first domesticated animals. Between roughly 11,000 years ago and the 18th century, the pace truly accelerated. That period essentially led to the creation of civilization, with the foundations of our current world.

More recently, the Industrial Revolution changed everything because it moved us significantly further away from relying on the strength of people and domesticated animals to power means of production. Steam engines and machinery replaced backbreaking labor, meaning more production at a lower cost. The number of adjacent possibilities began to snowball. Machinery enabled mass production and interchangeable parts. Steam-powered trains meant people could move around far more easily, allowing people from different areas to mix together and share ideas. Improved communications did the same. It’s pointless to even try listing the ways technology has changed since then. Regardless of age, we’ve all lived through it and seen the acceleration. Few people dispute that the change is snowballing. The only question is how far that will go.

As Stephen Hawking put it in 1993:

For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.

But, as we saw with Moore’s Law, exponential growth cannot continue forever. Eventually, we run into fundamental constraints. Hours in the day, people on the planet, availability of a resource, the smallest possible size of a transistor, attention—there’s always a bottleneck we can’t eliminate. We reach the point of diminishing returns. Growth slows or stops altogether. We must then either look at alternative routes to improvement or leave things as they are. In Everett Rogers’s diffusion of innovation theory, this is known as the substitution stage, when usage declines and we start looking for substitutes.

This process is not linear. We can’t predict the future because there’s no way to take into account the tiny factors that will have a disproportionate impact in the long-run.


Renaissance Paragone: An Ancient Tactic for Getting the Most From People

One of the engines behind the Italian Renaissance was the concept of paragone: pitting creative efforts against one another in the belief that only through such comparison could you come to see art’s real significance.

At first, the concept drove debates in salons. Eventually, however, it shifted into discussions of art, often among the very people who selected and funded it. In the Medici palaces, for example, rooms were arranged so that paintings would face each other. The idea was that people would directly compare the works, forming and expressing opinions. These competitions shifted the focus from the art to the artist. If one painting was better than another, you needed to know who the artist was so that you could hire them again.

Artists benefited from this arrangement, even if they didn’t win. They learned where they stood in comparison to others, both artistically and socially. Not only did they understand the gap, they learned how to close it, or change the point of comparison.

Da Vinci believed artists thrived under such competition. He once wrote:

You will be ashamed to be counted among draughtsmen if your work is inadequate, and this disgrace must motivate you to profitable study. Second, a healthy envy will stimulate you to become one of those who are praised more than yourself, for the praises of others will spur you on.

Many people want to know where they stand, not only relative to the external competition but also relative to the people they work with every day. A lot of organizations make such comparisons difficult by hiding what matters. While you might know there is a gap between you and your coworker, you don’t know what the chasm looks like. And if you don’t know what it looks like, you don’t know where you stand in relation to it. And if you don’t know where you stand, you don’t know how to close the gap. It’s a weird sort of sabotage.

Not everyone responds to competition the same way. Pitting people directly against one another for a promotion might cause people to withdraw. That doesn’t mean they can’t handle it. It doesn’t mean they’re not amazing. Michelangelo once abandoned a competition with Da Vinci to flee to Rome—and we have only to look at the ceiling of the Sistine Chapel to know how he fared.

But a lack of competition can breed laziness in a lot of people. Worse still, that laziness gets rewarded. It’s not intentional. We just stop working as hard as we could. We coast.

Consider the proverbial office worker who sends out a sloppy first draft of a presentation to 15 people for them to “comment” on. What that person really wants is the work done for them. And because of the subtle messages organizations send, coworkers will often comply because they’re team players.

Consider the competition to make a sports team. The people on the bench (people who don’t start) make the starters better because the starters know they can’t get complacent or someone will take their job. Furthermore, the right to be on a team, once granted, isn’t assured. Someone is always vying to take any spot that opens up. That’s the nature of the world.

I’m not suggesting that all organizations promote a professional-sports mentality. I’m suggesting you think about how you can harness competition to give people the information they need to get better. If they don’t want to get better once they know where they stand, you now know something about them you didn’t know before. Nor am I blindly advocating the use of competition. It has limitations and drawbacks you need to consider (such as its effects on self-preservation and psychological safety).


The Lies We Tell

We make up stories in our minds and then against all evidence, defend them tooth and nail. Understanding why we do this is the key to discovering truth and making wiser decisions.

***

Our brains are quirky.

When I put my hand on a hot stove, I have instantly created awareness of a cause and effect relationship—“If I put my hand on a hot stove, it will hurt.” I’ve learned something fundamental about the world. Our brains are right to draw that conclusion. It’s a linear relationship, cause and effect are tightly coupled, feedback is near immediate, and there aren’t many other variables at play.

The world isn’t always this easy to understand. When cause and effect aren’t obvious, we still draw conclusions. Nobel Prize winning psychologist Daniel Kahneman offers an example of how our brains look for, and assume, causality:

“After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing.”

That’s all you get. No background on Jane, or any particulars about where she went. Kahneman presented this miniature story to his test subjects hidden among several other statements. When Kahneman later offered a surprise recall test, “the word pickpocket was more strongly associated with the story than the word sights, even though the latter was actually in the sentence while the former was not.” 1

What happened here?

There’s a bug in the evolutionary code that makes up our brains. We have a hard time distinguishing between situations where cause and effect are clear, as with the hot stove or a game of chess, and situations where they’re not, as in the case of Jane and her wallet. We don’t like not knowing. We also love a story.

Our minds create plausible stories. In the case of Jane, many test subjects thought a pickpocket had taken her wallet, but there are other possible scenarios. More people lose wallets than have them stolen. But our patterns of beliefs take over, such as how we feel about New York or crowds, and we construct cause and effect relationships. We tell ourselves stories that are convincing, cheap, and often wrong. We don’t think about how these stories are created, whether they’re right, or how they persist. And we’re often uncomfortable when someone asks us to explain our reasoning.

Imagine a meeting where we are discussing Jane and her wallet, not unlike any meeting you have this week to figure out what happened and what decisions your organization needs to make next.

You start the meeting by saying “Jane’s wallet was stolen. Here’s what we’re going to do in response.”

But one person in the meeting, Micky, Jane’s second cousin, asks you to explain the situation.

You volunteer what you know. “After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing.” And you quickly launch into improved security measures.

Micky, however, tells herself a different story, because just last week a friend of hers left his wallet at a store. And she knows Jane can sometimes be absentminded. The story she tells herself is that Jane probably lost her wallet in New York. So she asks you, “What makes you think the wallet was stolen?”

The answer is obvious to you. You feel your heart rate start to rise. Frustration sets in.

You tell yourself that Micky is an idiot. This is so obvious. Jane was out. In New York. In a crowd. And we need to put in place something to address this wallet issue so that it doesn’t happen again. You think to yourself that she’s slowing the group down and we need to act now.

What else is happening? It’s likely you looked at the evidence again and couldn’t really explain how you drew your conclusion. Rather than have an honest conversation about the story you told yourself and the story Micky is telling herself, the meeting gets tense and goes nowhere.

The next time you catch someone asking you about your story and you can’t explain it in a falsifiable way, pause, and hit reset. Take your ego out of it. What you really care about is finding the truth, even if that means the story you told yourself is wrong.

Footnotes
  • 1

    Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

The Anatomy of a Great Decision

Making better decisions is one of the best skills we can develop. Good decisions save time, money, and stress. Here, we break down what makes a good decision and what we can do to improve our decision-making processes.

***

Improving our decision-making abilities is a central goal at Farnam Street. Better decisions save time, money, and stress. Learning principles and developing a multidisciplinary lens we can apply throughout life takes effort now, but in the long run it’s a worthy investment.

As we have said before, a decision should not be judged solely on its outcome. Sometimes good decisions produce bad results. A recruiting process that has resulted in mostly excellent candidates will still occasionally fail to weed out a bad fit. It is impossible to have perfect and complete information for all the variables involved. So we do the best with what we have.

Using a decision journal can move us to that place where we are consistently making better decisions. At its core, the technique of identifying and reflecting on our process from beginning to end helps us achieve the two main qualities of better decisions:

  1. Using principles, not tactics
  2. Looking at a situation through a multidisciplinary lens

These qualities are what we need to improve over time. And in the same way compounding interest increases our bank balance, better decisions produce exponentially better results the more of them we make. Hard decisions today, made well, prepare us to make decisions more easily in the future.

When we look around, however, to see what we can learn from others who made great decisions, we often judge based solely on the outcomes. Whether it’s a family member’s decision to buy Coca-Cola stock in the ’80s or Caesar’s decision to cross the Rubicon, we evaluate a decision as good based on how things turned out.

Evaluating decisions on outcomes prevents us from learning. We need to dive into a decision, cut it open and examine its parts. Regardless of what happened, learning how a decision was made is the place to find knowledge. So what does the anatomy of a great decision look like?

The Marshall Plan

After WWII, Europe was in ruins. Much of the infrastructure had been destroyed. Many people were starving, and had lost everything they possessed. Those systems we take for granted, but on which we rely daily—transportation, manufacturing, agriculture—had been devastated. The economies were essentially broken, and the countries that saw a lot of fighting had much to rebuild. But with what money? Many countries were in serious debt. Continued, widespread economic hardships were on the horizon.

In 1947, Secretary of State General George Marshall put forward a plan that has since carried his name, a plan to give a massive amount of money to several European nations. Those countries accepted, the continent was rebuilt, and Marshall is credited with one of the most positive defining acts of economics, politics, and ethics in the last century. But when you look at the thinking that went into the Marshall Plan, the reasoning behind the details, you see that it would have been a great decision regardless of the outcome.

Asking the Right Questions

At the beginning, participants asked questions. What do we want to achieve? What problems are we addressing? What does a successful outcome look like?

From there came the principles, things like: strong economies minimize social unrest; countries that work toward mutual goals are less likely to fight each other; let’s not have another war in Europe anytime soon.

Starting from these principles, decision makers evaluated the situation through a multidisciplinary lens. Economics, politics, humanitarian responsibilities, historical and psychological factors—the plan sought to address issues on many fronts and took a wide perspective into account.

The plan was developed in the State Department of the United States. It was not the work of a single individual, and contributions from many people made it into the final version that Marshall fought for in Congress.

In the end, there were three key decisions made in terms of the structure of the plan:

  1. To give, versus lend, the majority of the aid
  2. To require the nations receiving the aid to work out how to allocate it
  3. To invite Russia to partake

Using Multiple Lenses

The decision to give rather than lend the majority of the aid was the result of looking at the situation through economic, political, and humanitarian lenses. It was also a win-win.

Immediately following the war, the European nations had put significant effort into restarting their economies. But they were doing it with borrowed dollars, needing to import far more than they were capable of exporting. Many economies needed modernization, which was impossible to fund while paying for imports at the same time. Without full economic recovery in Europe, there was great danger of a recession, or even a second depression. Basically, Europe needed money.

But economies are also about people. It is people who produce and consume and develop the economy. So it wasn’t just the countries that needed financial assistance, but the people in them. The designers of the plan knew that hungry, desperate people would only create more social unrest. They saw that if they didn’t give the money to Europe they might very well have to spend it on national security as Europe fell apart.

And we can’t discount the impact of the physical reality of the aftermath of the war that the liberating forces confronted—starving people, towns reduced to rubble. The case for humanitarian assistance was strong.

Letting the World Do the Work For You

The decision to have the participating nations allocate the aid among themselves was the answer to what the historical, political and psychological lenses revealed.

Many people felt that the approach to reparations after WWI was a significant impetus for WWII. The First World War had a similar effect on the economies and infrastructures of the nations involved. After the armistice in 1918, France and Britain, angry at Germany, demanded huge sums of money. The problem was that these reparations essentially crippled Germany economically and created a social and political situation that bred enmity among the European nations. Many argue that it was this series of events that produced a situation in which Hitler could come to power.

The creators of the Marshall Plan were aware of this, and it was one of the elements that influenced the design of the terms. If Germany collapsed again, they might be fighting World War III in twenty years.

By asking enemies to work together and approve each other’s share, the plan created buy-in that defused much of the anger and animosity between the nations. Just a couple of years earlier they had been at war with each other. After sacrificing so much in both lives and money, it was natural that the various peoples were angry, both over who started the war and over the many violent and destructive acts committed during those six years.

But the US decided not to take sides or extend the wartime alliances. The plan’s creators realized that doing so wouldn’t help fulfill the principles they had chosen to abide by. Europe working meant Europe working together.

Outcomes Over Optics

Inviting Russia to share in the aid was another important result of applying those political, historical, psychological, and humanitarian lenses.

The end of WWII marked the beginning of the Cold War. More nebulous by nature, starting a couple of years after the liberation of Europe and the dropping of the atomic bomb, this political climate would shape international relations for the next 40 years. The Marshall Plan took into account how best to navigate this complicated territory. Russia had been a valuable ally during the war, holding the eastern front and inflicting considerable damage on Hitler’s efforts. But immediately post-war their actions demonstrated a desire to at least influence, if not control, the political structure of the world. Their version of communism was at direct odds with US democracy, and was thus considered a legitimate threat.

Even though there was very little expectation that Russia would participate, and possibly even less desire to give them money, Russia and its allied countries were invited by both the US and the European nations to participate in the talks on implementing the plan. They chose not to and followed up by accusing the plan of being a front for American imperialist goals. This was important because it forced Russia’s hand. They could not later claim that the Iron Curtain was something thrust on them. It was, instead, something they deliberately chose to build.

The Marshall Plan is remembered as a great decision, not strictly because of its outcomes—though it did contribute to the debatably successful reconstruction of Europe, it did not succeed in preventing the deterioration of relations with Russia—but because it was firmly grounded in principles that were identified and executed through a multidisciplinary lens.


The Importance of Working With “A” Players

Stop me if this sounds familiar. There is a person who toils alone for years in relative obscurity before finally cracking the code to become a hero. The myth of the lone genius. It’s the stuff of Disney movies.

Of course, we all have moments when we’re alone and something suddenly clicks. We’d do well to remember, though, that in those moments, we are not as independent as we like to think. The people we surround ourselves with matter.

In part, because we tell ourselves the story of the lone genius, we under-appreciate the role of a team. Sure, the individual matters, no doubt. However, the individual contributions are supercharged by the team around them.

We operate in a world where it’s nearly impossible to accomplish anything great as an individual. When you think about it, you’re the product of an education system, a healthcare system, luck, roads, the internet and so much more. You may be smart but you’re not self-made. And at work, most important achievements require a team of people working together.

The leader’s job is to get the team right. Getting the team right means that people are better as a group than as individuals. Now this is important. Step back and think about that for a second — the right teams make every individual better than they would be on their own.

Another way to think about this is in terms of energy. If you have 12 people on a team and they each have 10 units of energy, you would expect to get 120 units of output. That’s what an average team will do. Worse teams will do worse. A great team will take the same inputs and get a non-linear outcome. The result won’t be 120; it’ll be 360. No matter where you’re going, great teams will get you there multiples faster than average teams.

Here is a quote by Steve Jobs on the importance of assembling “A” players.

I observed something fairly early on at Apple, which I didn’t know how to explain then, but I’ve thought a lot about it since. Most things in life have a dynamic range in which [the ratio of] “average” to “best” is at most 2:1. For example, if you go to New York City and get an average taxi cab driver, versus the best taxi cab driver, you’ll probably get to your destination with the best taxi driver 30% faster. And an automobile; what’s the difference between the average car and the best? Maybe 20%? The best CD player versus the average CD player? Maybe 20%? So 2:1 is a big dynamic range for most things in life. Now, in software, and it used to be the case in hardware, the difference between the average software developer and the best is 50:1; maybe even 100:1. Very few things in life are like this, but what I was lucky enough to spend my life doing, which is software, is like this. So I’ve built a lot of my success on finding these truly gifted people, and not settling for “B” and “C” players, but really going for the “A” players. And I found something… I found that when you get enough “A” players together, when you go through the incredible work to find these “A” players, they really like working with each other. Because most have never had the chance to do that before. And they don’t work with “B” and “C” players, so it’s self-policing. They only want to hire “A” players. So you build these pockets of “A” players and it just propagates.

Building a team is more complicated than collecting talent.1 I once tried to solve a problem by putting a bunch of PhDs in a room. While comments like that sounded good and got me a lot of projects above my level, they were rarely effective at delivering actual results.

Statements like “let’s assemble a multidisciplinary team of incredible people” are gold in meetings if you work for an organization. These statements sound intelligent. They are hard to argue with. And, most importantly, they have no accountability built in and are easy to wiggle out of. If things don’t work out, who can fault a plan that meant putting smart people in a room?

Well … I can. It’s a stupid plan.

Combining individual intelligence does not automatically produce group intelligence. Thinking about this in the context of the Jobs quote above, “A” players provide a lot more than raw intellectual horsepower. Among other things, they also bring drive, integrity, and an ability to make others better. “A” players want to work with other “A” players. Accepting that statement doesn’t mean they’re all “the best.”

In my experience solving difficult problems, the best talent available rarely led to the best solutions. You needed the best team. And the best team meant you had to exercise judgment and think about the problem. While there was often one individual with the idea that ultimately solved the problem, it wouldn’t have happened without the team.  The ideas others spark in us are more than we can spark in ourselves.

Footnotes
  • 1

    A play on a quote by Bill Belichick

In the face of adversity, are you a Guernsey or a Brahman?

If a Guernsey calf and a Brahman calf are each orphaned, one will survive and one will not. One thing makes the difference. And it is the very factor that keeps us from reaching what we want most.

***

Persistence in the face of defeat often makes the difference in outcome.

Ask any farmer, and they will tell you that orphaned Guernsey calves die. It’s not the fact that they die, so much as how it happens, that stays in the mind. An orphaned calf soon gets so hungry she picks a new mother from the herd. The cow promptly kicks the strange calf away. After all, she didn’t give birth to the calf—why should she feed it? The Guernsey calf gives up, lies down, and slowly starves to death.

The orphaned Brahman calf gets a different result. The same scenario plays out, with the calf being kicked out by the reluctant mother. However, in this case, the naturally persistent calf keeps coming, until the potential new mother acquiesces out of exhaustion. As a result of this persistence, the calf survives.

Persistence is hard. It’s hard to get kicked in the face and to keep going. It hits at your self-esteem. You begin to wonder if you have value. You begin to think you might be crazy.

So often we’re told that having a positive attitude is the important thing. You can get through the setbacks if you find the silver linings and believe in what you are doing. But it’s important to remember that persistence and a positive attitude aren’t the same thing. They differ in some pretty fundamental ways.

Positivity is fragile. If you’re positively certain that you’ll be successful, you’ll start to worry the minute things deviate from your plan. Once this worry seeps into your mind, it’s impossible to get out. You’re done. When the going gets tough, positive attitudes often vanish.

Persistence, on the other hand, anticipates roadblocks and challenges. It gears up for the fact that things never go as planned and expects goals to be hard to attain.

When you run into failure, persistence keeps you going; positivity disappears. Persistence is antifragile and benefits from setbacks, while positivity, like that Guernsey calf, crumbles when it runs into hard times.

When met with setbacks, are you a Guernsey or a Brahman?