Author: Farnam Street

The Danger of Comparing Yourself to Others

The most important things in life are internal, not external.

“The big question about how people behave,” says Warren Buffett, “is whether they’ve got an inner scorecard or an outer scorecard. It helps if you can be satisfied with an inner scorecard.” To make his point, Buffett often asks a simple question: Would you rather be the world’s greatest lover, but have everyone think you’re the world’s worst lover? Or would you rather be the world’s worst lover but have everyone think you’re the world’s greatest lover?

Comparing ourselves to others allows them to drive our behavior. This type of comparison is between you and someone else. Sometimes it’s about something genetic, like wishing to be taller, but more often it’s about something the other person is capable of doing that we wish we could do as well. Maybe Sally writes better reports than you, and maybe Bob has a happier relationship with his spouse than you do. Sometimes this comparison is motivating and sometimes it’s destructive.

You can be anything, but you can’t be everything. When we compare ourselves to others, we’re often comparing their best features against our average ones. It’s like being right-handed and trying to play an instrument with your left hand. Not only do we naturally want to be better than them, but the unconscious realization that we fall short often becomes self-destructive.

Comparisons between people are a recipe for unhappiness unless you are the best in the world. Which, let’s be honest, only one person is. Not only are we unhappy, but the other people are as well. They are probably comparing themselves to you—maybe you’re better at networking than they are and they’re jealous. At worst, when we compare ourselves to others we end up focusing our energy on bringing them down instead of raising ourselves up.

There is one thing that you’re better at than other people: being you. This is the only game you can really win.

When you start with this mindset the world starts to look better again. No longer are you focused on where you stand relative to others. Instead, your focus and energy is placed on what you’re capable of now and how you can improve yourself.

Life becomes about being a better version of yourself. And when that happens, your effort and energy go toward upgrading your personal operating system every day, not worrying about what your coworkers are doing. You become happier, free from the shackles of false comparisons and focused on the present moment.

When what you do doesn’t meet the expectations of others, too bad. The way they look at you is the same way you were looking at them, through a distorted lens shaped by experiences and expectations. What really matters is what you think about what you do, what your standards are, what you can learn today.

That’s not an excuse to ignore thoughtful opinions—other people might give you a picture of how you fall short of being your best self. Instead, it’s a reminder to compare yourself to who you were this morning. Are you better than you were when you woke up? If not, you’ve wasted a day. It’s less about others and more about how you improve relative to who you were.

When you stop comparing between people and focus internally, you start being better at what really matters: being you. It’s simple but not easy.

The most important things in life are measured internally. Thinking about what matters to you is hard. Playing to someone else’s scorecard is easy; that’s why a lot of people do it. But winning the wrong game is pointless and empty. You get one life. Play your own game.

Jeff Bezos: Big Things Start Small

An interview with Amazon.com founder Jeff Bezos touches on the timeless lessons he’s learned for business success. The three big ideas are (1) thinking on a different timescale, (2) putting the customer first, and (3) inventing.

What we’re really focused on is thinking long-term, putting the customer at the center of our universe and inventing. Those are the three big ideas: think long-term, because a lot of invention doesn’t work. If you’re going to invent, it means you’re going to experiment; you have to think long-term. These three ideas, customer-centricity, long-term thinking and a passion for invention, those go together. That’s how we do it and by the way, we have a lot of fun doing it that way.

Ballet or Rock Concert?

When asked about the pressures of running a public company and meeting quarterly earnings expectations he said:

Well, I think that if you’re straightforward and clear about the way that you’re going to operate, then you can operate in whatever way you choose. We don’t even take a position on whether our way is the right way, we just claim it’s our way, but Warren Buffett has a great saying along these lines. He says, “You can hold a ballet and that can be successful and you can hold a rock concert and that can be successful. Just don’t hold a ballet and advertise it as a rock concert. You need to be clear with all of your stakeholders about whether you’re holding a ballet or a rock concert, and then people get to self-select in.”

Big Things Start Small

While there is no one recipe that fits all, there are elements of what Amazon does that help.

[I]nside our culture, we understand that even though we have some big businesses, new businesses start out small. It would be very easy for, say, the person who runs the US books category to say, “Why are we doing these experiments with things? I mean, that generated a tiny bit of revenue last year. Why don’t we instead focus those resources and all that brain power on the books category, which is a big business for us?” That would be a natural thing to have happen, but instead, inside Amazon, when a new business reaches some small milestone of sales, email messages go around and everybody’s giving virtual high fives for reaching that milestone. I think it’s because we know from our past experiences that big things start small. The biggest oak starts from an acorn, and if you want to do anything new, you’ve got to be willing to let that acorn grow into a little sapling and then finally into a small tree, and maybe one day it will be a big business on its own.

Step By Step Ferociously

The Latin phrase gradatim ferociter is a Bezos favorite. What does it mean?

Well it means step by step ferociously and it’s the motto for Blue Origin. Basically you can’t skip steps, you have to put one foot in front of the other, things take time, there are no shortcuts but you want to do those steps with passion and ferocity.

Loving What You Do

Not every day is going to be fun and easy. That’s why they call it work.

I have a lot of passions and interests but one of them is at Amazon, the rate of change is so high and I love that. I love the pace of change. I love the fact that I get to work with these big, smart teams. The people I work with are so smart and they’re self-selected for loving to invent on behalf of customers.

It’s not, do I love every moment of every day? No, that’s why they call it work. There are things that I don’t enjoy, but if I’m really objective about it and I look at it, I’m so lucky to be working alongside all these passionate people and I love it. Why would I go sit on a beach?

 

Footnotes
  • 1

    Source of interview: https://twitter.com/producthunt/status/1125038440372932608?s=11

Resonance: How to Open Doors For Other People

It’s only polite.

Hold the door open for others, and they will open doors for you.

We are far more interdependent than we would like to admit. We biologically need to connect. “Limbic resonance” is a term used by Thomas Lewis, Fari Amini, and Richard Lannon in their book, A General Theory of Love, to express the ability to share deep emotional states. The limbic lobe of the brain is what makes a mammalian brain what it is. Without it, a mammal would be reduced to a reptilian brain with the connective capacity of a snake or lizard. This is why reptiles are often felt to be scary—unreachable and heartless.

Resonance is not only a mammalian capacity but an outright necessity. Our infants will die if not provided with the warmth of connection with another being, even when all their physiological needs are met. This was illustrated in the inhumane 13th-century human ‘experiments’ attributed to the Holy Roman Emperor Frederick II, who deprived babies of human connection, and more recently by Harry Harlow’s work with rhesus monkeys. His baby monkeys chose to spend 17 hours a day with a soft cloth mother figure that provided no food, compared to only one hour a day with a wire mother figure that actually provided milk. Connection is a far superior sustenance.


An oft-quoted study by psychologist John Gottman suggests that whether partners answer each other’s “emotional bids” strongly predicts divorce. The divorce rate is higher in couples where partners do not resonate, failing to engage and respond to requests for attention. Those who had divorced by a six-year follow-up had turned towards the other on only 30% of the occasions a bid was made, whilst couples who were still together averaged closer to 90%. Furthermore, in A General Theory of Love, the authors convincingly argue that what we are actually doing is synchronising ourselves with one another, with deep impacts on our emotional and physical health.

This would be in keeping with the results of the well-known Harvard Study of Adult Development, which followed a large cohort of people over a lifetime. These types of studies are rare because they’re expensive and hard to carry out. This study was well worth investing in, with one clear overall conclusion: good relationships keep us happier and healthier. Its director, psychiatrist Robert Waldinger, states:

Well, the lessons aren’t about wealth or fame or working harder and harder. The clearest message that we get from this 75-year study is this: Good relationships keep us happier and healthier. Period.

We’ve learned three big lessons about relationships. The first is that social connections are really good for us, and that loneliness kills. It turns out that people who are more socially connected to family, to friends, to community, are happier, they’re physically healthier, and they live longer than people who are less well connected. And the experience of loneliness turns out to be toxic. People who are more isolated than they want to be from others find that they are less happy, their health declines earlier in midlife, their brain functioning declines sooner and they live shorter lives than people who are not lonely.

So what now? Where does that leave us?

People feel connected when they are understood and appreciated. My friend’s aunt taught her this when they walked together down a busy road. Her aunt stopped to talk to a homeless man. With no money to give him, she started asking questions about his dog, chatting to him about her own dog. The interaction took 30 seconds. The man’s eyes shone back bright, engaged. As they walked away, my friend’s aunt whispered, “People want to be recognized. It reminds them they exist. Never take that away from anyone.” Lesson learned.

Listen, Summarize, Show

I work hard to live that lesson through the following: listen, summarize, show. True, sustained listening is one of the hardest skills to achieve. I’ve met only a handful of people with the ability. A simple way to focus your attention is to listen with the intention of summarizing the other person’s point of view. This stops you from using your mental energy to work out your reply, and helps store the other’s words in your memory as well as identify any gaps in your understanding so you can ask questions to clarify.

The nature of these questions will itself show the other person that they are heard and that effort is being made to take them seriously. When it comes to human relationships, just as it is not enough to know, it is not enough to understand. What is crucial is to show you understand. If empathy is recognizing another’s perspective, that consideration needs to be expressed outwardly before it can exist for the other person and build rapport.

Summarizing and asking questions is a way of feeding back your resonance. Cutting short the conversation, stating opinions, value judgements, your own solutions, or even a lazy “I see” or “interesting” does not demonstrate resonance. In fact, you can use “I understand” as a red flag for someone who does not understand. Often, this is followed by an action that shows a thorough lack of comprehension.

Connect Where It Matters

To resonate with others, we need to connect when it matters. This nurtures both us and others, and also earns trust. Just as in cooking, timing is everything.

This is where the metaphorical doors come in. How do you feel when someone holds the door open for you—especially when you’ve got your hands full? When would you hold open a door for another person?

We may kindly open a door, only to find the person has no intention of walking through it and continues down the stairwell because they’re heading to the floor below. In this case, we did not understand their needs. We may even find ourselves bending over backwards for another, to no effect. This is the equivalent of opening doors willy-nilly down a long corridor without anyone walking through them.

At worst, we might inadvertently (or dare I say, even intentionally) slam a door in someone’s face. That will hurt—even more so if we had offered to hold it for them and they were counting on it to be open. Holding a door open at the right time represents tending to a perceived need and meeting expectations.

All people want to be understood and appreciated. By connecting in this way, they trust you understand them and are actually looking out for their interests. You are attentive and willing to open doors for them. The power of resonance will keep you happy and healthy and open doors for you.

Gates’ Law: How Progress Compounds and Why It Matters

“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

It’s unclear exactly who first made that statement, when they said it, or how it was phrased. The most probable source is Roy Amara, a Stanford computer scientist. In the 1960s, Amara told colleagues that he believed that “we overestimate the impact of technology in the short-term and underestimate the effect in the long run.” For this reason, variations on that phrase are often known as Amara’s Law. However, Bill Gates made a similar statement (possibly paraphrasing Amara), so it’s also known as Gates’s Law.

You may have seen the same phrase attributed to Arthur C. Clarke, Tony Robbins, or Peter Drucker. There’s a good reason why Amara’s words have been appropriated by so many thinkers—they apply to so much more than technology. Almost universally, we tend to overestimate what can happen in the short term and underestimate what can happen in the long term.

Thinking about the future does not require endless hyperbole or even forecasting, which is usually pointless anyway. Instead, there are patterns we can identify if we take a long-term perspective.

Let’s look at what Bill Gates meant and why it matters.

Moore’s Law

Gates’s Law is often mentioned in conjunction with Moore’s Law. This is generally quoted as some variant of “the number of transistors on an inch of silicon doubles every eighteen months.” However, calling it Moore’s Law is misleading—at least if you think of laws as invariant. It’s more of an observation of a historical trend.

When Gordon Moore, co-founder of Fairchild Semiconductor and Intel, noticed in 1965 that the number of transistors on a chip doubled every year, he was not predicting that would continue in perpetuity. Indeed, Moore revised the doubling time to two years a decade later. But the world latched onto his words. Moore’s Law has been variously treated as a target, a limit, a self-fulfilling prophecy and a physical law as certain as the laws of thermodynamics.

Moore’s Law is now considered to be outdated, after holding true for several decades. That doesn’t mean the concept has gone anywhere. Moore’s Law is often regarded as a general principle in technological development. Certain performance metrics have a defined doubling time, the opposite of a half-life.

Why is Moore’s Law related to Amara’s Law?

Exponential growth is a concept we struggle to conceptualize. As University of Colorado physics professor Albert Allen Bartlett famously put it, “The greatest shortcoming of the human race is our inability to understand the exponential function.”

When we talk about Moore’s Law, we easily underestimate what happens when a value keeps doubling. Sure, it’s not that hard to imagine your laptop getting twice as fast in a year, for instance. Where it gets tricky is when we try to imagine what that means on a longer timescale. What does that mean for your laptop in 10 years? There is a reason your iPhone has more processing power than the first space shuttle.
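To make the arithmetic behind that question concrete, here is a minimal sketch in Python. It assumes a purely hypothetical metric that doubles once a year, an assumption chosen for simplicity rather than Moore's actual figures; the point is the shape of the curve, not the specific numbers.

```python
# A minimal sketch of compounding doublings (hypothetical numbers,
# not a claim about any specific chip or laptop).
speed = 1.0  # relative performance today
for year in range(1, 11):
    speed *= 2  # assume one doubling per year
    print(f"Year {year:2d}: {speed:,.0f}x today's performance")

# After 10 doublings the machine is 1,024x faster. A linear intuition
# ("twice as fast each year, so maybe 20x in a decade") badly undershoots.
```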

One of the best illustrations of exponential growth is the legend about a peasant and the emperor of China. In the story, the peasant (sometimes said to be the inventor of chess), visits the emperor with a seemingly modest request: a chessboard with one grain of rice on the first square, then two on the second, four on the third and so on, doubling each time. The emperor agreed to this idiosyncratic request and ordered his men to start counting out rice grains.

“Every fact of science was once damned. Every invention was considered impossible. Every discovery was a nervous shock to some orthodoxy. Every artistic innovation was denounced as fraud and folly. We would own no more, know no more, and be no more than the first apelike hominids if it were not for the rebellious, the recalcitrant, and the intransigent.”

— Robert Anton Wilson

If you haven’t heard this story before, it might seem like the peasant would end up with, at best, enough rice to feed their family that evening. In reality, the request was impossible to fulfil. Doubling one grain 63 times (once for each square on the chessboard after the first) puts more than nine million trillion grains on the final square alone; add up every square and the emperor owed the peasant over 18 million trillion grains of rice. To grow just half of that amount, he would have needed to drain the oceans and convert every bit of land on this planet into rice fields. And that’s for half.
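As a rough check of those figures (a sketch of the arithmetic only, not part of the original legend), the totals can be computed directly:

```python
# The chessboard legend: one grain on the first square, doubling across
# the remaining 63 squares.
last_square = 2 ** 63      # grains on the 64th square alone
total = 2 ** 64 - 1        # 1 + 2 + 4 + ... + 2^63, all squares combined

print(f"{last_square:,}")  # 9,223,372,036,854,775,808
print(f"{total:,}")        # 18,446,744,073,709,551,615
# About 18.4 quintillion grains in total -- the "over 18 million trillion"
# the emperor owed the peasant.
```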

In his essay “The Law of Accelerating Returns,” author and inventor Ray Kurzweil uses this story to show how we misunderstand the meaning of exponential growth in technology. For the first few squares, the growth was inconsequential, especially in the eyes of an emperor. It was only once they reached the halfway point that the rate began to snowball dramatically. (It’s no coincidence that Warren Buffett’s authorized biography is called The Snowball, and few people understand exponential growth better than Warren Buffett). It just so happens that by Kurzweil’s estimation, we’re at that inflection point in computing. Since the creation of the first computers, computation power has doubled roughly 32 times. We may underestimate the long-term impact because the idea of this continued doubling is so tricky to imagine.

The Technology Hype Cycle

To understand how this plays out, let’s take a look at the cycle innovations go through after their invention. Known as the Gartner hype cycle, it primarily concerns our perception of technology—not its actual value in our lives.

Hype cycles are obvious in hindsight, but fiendishly difficult to spot while they are happening. It’s important to bear in mind that this model is one way of looking at reality and is not a prediction or a template. Sometimes a step gets missed, sometimes there is a substantial gap between steps, sometimes a step is deceptive.

The hype cycle happens like this:

  • New technology: The media picks up on the existence of a new technology which may not exist in a usable form yet. Nonetheless, the publicity leads to significant interest. At this point, people working on research and development are probably not making any money from it. Lots of mistakes are made. In Everett Rogers’s diffusion of innovations theory, this is known as the innovation stage. If it seems like something new will have a dramatic payoff, it probably won’t last. If it seems we have found the perfect use for a brand-new technology, we may be wrong.
  • The peak of inflated expectations: A few well-publicized success stories lead to inflated expectations. Hype builds and new companies pop up to anticipate the demand. There may be a burst of funding for research and development. Scammers looking to make a quick buck may move into the area. Rogers calls this the syndication stage. It’s here that we overestimate the future applications and impact of the technology.
  • The trough of disillusionment: Prominent failures or a lack of progress break through the hype and lead to disillusionment. People become pessimistic about the technology’s potential and mostly lose interest. Reports of scams may contribute to this, as the media uses this as a reason to describe the technology as a fraud. If it seems like a new technology is dying, it may just be that its public perception has changed and the technology itself is still developing. Hype does not correlate directly with functionality.
  • The slope of enlightenment: As time passes, people continue to improve the technology and find better uses for it. Eventually, it’s clear how it can improve our lives and mainstream adoption begins. Mechanisms for preventing scams or lawbreaking emerge.
  • The plateau of productivity: The technology becomes mainstream. Development slows. It becomes part of our lives and ceases to seem novel. Those who move into the now saturated market tend to struggle, as a few dominant players take the lion’s share of the available profits. Rogers calls this the diffusion stage.

When we are cresting the peak of inflated expectations, we imagine that the new development will transform our lives within months. In the depths of the trough of disillusionment, we don’t expect it to get anywhere, even allowing years for it to improve. We typically fail to anticipate the significance of the plateau of productivity, even if it exceeds our initial expectations.

Smart people can usually see through the initial hype. But only a handful of people can—through foresight, stubbornness or perhaps pure luck—see through the trough of disillusionment. Most of the initial sceptics feel vindicated by the dramatic drop in interest and expect the innovation to disappear. It takes far greater expertise to support an unpopular technology than to deride a popular one.

Correctly spotting the cycle as it unfolds can be immensely profitable. Misreading it can be devastating. First movers in a new area often struggle to survive the trough, even if they are the ones who do the essential research and development. We tend to assume current trends will continue, so we expect sustained growth during the peak and expect linear decline during the trough.

If we are trying to assess the future impact of a new technology, we need to separate its true value from its public perception. When something is new, the mainstream hype is likely to be more noise than signal. After all, the peak of inflated expectations often happens before the technology is available in a usable form. It’s almost always before the public has access to it. Hype serves a real purpose in the early days: it draws interest, secures funding, attracts people with the right talents to move things forward and generates new ideas. Not all hype is equally important, because not all opinions are equally important. If there’s intense interest within a niche group with relevant expertise, that’s more telling than a general enthusiasm.

The hype cycle doesn’t just happen with technology. It plays out all over the place, and we’re usually fooled by it. Discrepancies between our short- and long-term estimates of achievement are everywhere. Consider the following situations. They’re hypothetical, but similar situations are common.

  • A musician releases an acclaimed debut album which creates enormous interest in their work. When their second album proves disappointing (or never materializes), most people lose interest. Over time, the performer develops a loyal, sustained following of people who accurately assess the merits of their music, not the hype.
  • A promising new pharmaceutical receives considerable attention—until it becomes apparent that there are unexpected side effects, or it isn’t as powerful as expected. With time, clinical trials find alternate uses which may prove even more beneficial. For example, a side effect could be helpful for another use. It’s estimated that over 20% of pharmaceuticals are prescribed for a different purpose than they were initially approved for, with that figure rising as high as 60% in some areas.
  • A propitious start-up receives an inflated valuation after a run of positive media attention. Its founders are lauded and extensively profiled and investors race to get involved. Then there’s an obvious failure—perhaps due to the overconfidence caused by hype—or early products fall flat or take too long to create. Interest wanes. The media gleefully dissects the company’s apparent demise. But the product continues to improve and ultimately becomes a part of our everyday lives.

In the short run, the world is a voting machine affected by whims and marketing. In the long run, it’s a weighing machine where quality and product matter.

The Adjacent Possible

Now that we know how Amara’s Law plays out in real life, the next question is: why does this happen? Why does technology grow in complexity at an exponential rate? And why don’t we see it coming?

One explanation is what Stuart Kauffman describes as “the adjacent possible.” Each new innovation expands the set of innovations that become achievable next. It opens up adjacent possibilities which didn’t exist before, because better tools can be used to make even better tools.

Humanity is about expanding the realm of the possible. Discovering fire meant our ancestors could use the heat to soften or harden materials and make better tools. Inventing the wheel meant the ability to move resources around, which meant new possibilities such as the construction of more advanced buildings using materials from other areas. Domesticating animals meant a way to pull wheeled vehicles with less effort, meaning heavier loads, greater distances and more advanced construction. The invention of writing led to new ways of recording, sharing and developing knowledge which could then foster further innovation. The internet continues to give us countless new opportunities for innovation. Anyone with a new idea can access endless free information, find supporters, discuss their ideas and obtain resources. New doors to the adjacent possible open every day as we find different uses for technology.

“We like to think of our ideas as $40,000 incubators shipped direct from the factory, but in reality, they’ve been cobbled together with spare parts that happened to be sitting in the garage.”

— Steven Johnson, Where Good Ideas Come From

Take the case of GPS, an invention that was itself built out of the debris of its predecessors. In recent years, GPS has opened up new possibilities that didn’t exist before. The system was developed by the US government for military usage. In the 1980s, they decided to start allowing other organizations and individuals to use it. Civilian access to GPS gave us new options. Since then, it has led to numerous innovations that incorporate the system into old ideas: self-driving cars, mobile phone tracking (very useful for solving crime or finding people in emergency situations), tectonic plate trackers that help predict earthquakes, personal navigation systems, self-navigating robots and many others. None of these would have been possible without some sort of global positioning system. With the invention of GPS, human innovation sped up a little more.

Steven Johnson gives one example of how this happens in Where Good Ideas Come From. In 2008, designer Timothy Prestero visited a hospital in Indonesia and found that all eight of the incubators for newborn babies were broken. The incubators had been donated to the hospital by relief organizations, but the staff didn’t know how to fix them. Plus, the incubators were poorly suited to the humid climate and the repair instructions only came in English. Prestero realized that donating medical equipment was pointless if local people couldn’t fix it. He and his team began working on an incubator that would keep saving babies’ lives for longer than a couple of months.

Instead of continuing to tweak existing designs, Prestero and his team devised a completely new incubator that used car parts. While the local people didn’t know how to fix an incubator, they were extremely adept at keeping their cars working no matter what. Named the NeoNurture, it used headlights for warmth, dashboard fans for ventilation, and a motorcycle battery for power. Hospital staff just needed to find someone who was good with cars to fix it—the principles were the same.

Even more telling is the origin of the incubators Prestero and his team reconceptualized. The first incubator for newborn babies was designed by Stephane Tarnier in the late 19th century. While visiting a zoo on his day off, Tarnier noted that newborn chicks were kept in heated boxes. It’s not a big leap to imagine that the issue of infant mortality was permanently on his mind. Tarnier was an obstetrician, working at a time when the infant mortality rate for premature babies was about 66%. He must have been eager to try anything that could reduce that figure and its emotional toll. Tarnier’s rudimentary incubator immediately halved that mortality rate. The technology was right there, in the zoo. It just took someone to connect the dots and realize human babies aren’t that different from chicken babies.

Johnson explains the significance of this: “Good ideas are like the NeoNurture device. They are, inevitably, constrained by the parts and skills that surround them…ideas are works of bricolage; they’re built out of that detritus.” Tarnier could invent the incubator only because someone else had already invented a similar device. Prestero and his team could only invent the NeoNurture because Tarnier had come up with the incubator in the first place.

This happens in our lives as well. If you learn a new skill, the number of skills you could potentially learn increases because some elements may be transferable. If you are introduced to a new person, the number of people you could meet grows, because they may introduce you to others. If you start learning a language, native speakers may be more willing to have conversations with you in it, meaning you can get a broader understanding. If you read a new book, you may find it easier to read other books by linking together the information in them. The list is endless. We can’t imagine what we’re capable of achieving in ten years because we forget about the adjacent possibilities that will emerge.

Accelerating Change

The adjacent possible has been expanding ever since the first person picked up a stone and started shaping it into a tool. Just look at what written and oral forms of communication made possible—no longer did each generation have to learn everything from scratch. Suddenly we could build upon what had come before us.

Some (annoying) people claim that there’s nothing new left. There are no new ideas to be had, no new creations to invent, no new options to explore. In fact, the opposite is true. Innovation is a non-zero-sum game. A crowded market actually means more opportunities to create something new than a barren one. Technology is a feedback loop. The creation of something new begets the creation of something even newer and so on.

Progress is exponential, not linear. So we overestimate the impact of a new technology during the early days when it is just finding its feet, then underestimate its impact in a decade or so when its full uses are emerging. As old limits and constraints melt away, our options explode. The exponential growth of technology is known as accelerating change. It’s a common belief among experts that the rate of change is speeding up and society will change dramatically alongside it.

“Ideas borrow, blend, subvert, develop and bounce off other ideas.”

— John Hegarty, Hegarty On Creativity

In 1999, author and inventor Ray Kurzweil posited the Law of Accelerating Returns — that evolutionary systems develop at an exponential rate. While this is most obvious for technology, Kurzweil hypothesized that the principle is relevant in numerous other areas. Moore’s Law, initially referring only to semiconductors, has wider implications.

In an essay on the topic, he writes:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth.

Progress is tricky to predict or even to notice as it happens. It’s hard to notice things in a system that we are part of. And it’s hard to notice incremental change because it lacks stark contrast. The current pace of change is our norm and we adjust to it. In hindsight, we can see how Amara’s Law plays out.

Look at where the internet was just twenty years ago. A report from the Pew Research Center shows us how change compounds. In 1998, a mere 41% of Americans used the internet at all—and the report expresses surprise that the users were beginning to include “people without college training, those with modest incomes, and women.” Less than a third of users had bought something online, email was predominantly just for work, and only a third of users looked at online news at least once per week. That’s a third of the 41% using the internet by the way, not of the general population. Wikipedia and Gmail didn’t exist. Internet users in the late nineties reported that their main problem was finding what they needed online.

That is perhaps the biggest change and one we may not have anticipated: the move towards personalization. Finding what we need is no longer a problem. Most of us have the opposite problem and struggle with information overwhelm. Twenty years ago, filter bubbles were barely a problem (at least, not online). Now, almost everything we encounter online is personalized to ensure it’s ridiculously easy to find what we want. Newsletters, websites, and apps greet us by name. Newsfeeds are organized by our interests. Shopping sites recommend other products we might like. This has increased the amount the internet does for us to a level that would have been hard to imagine in the late 90s. Kevin Kelly, writing in The Inevitable, describes filtering as one of the key forces that will shape the future.

History reveals an extraordinary acceleration of technological progress. Establishing the precise history of technology is problematic as some inventions occurred in several places at varying times, archaeological records are inevitably incomplete and dating methods are imperfect. However, accelerating change is a clear pattern. To truly understand the principle of accelerating change, we need to take a quick look at a simple overview of the history of technology.

Early innovations happened slowly. Counting from the emergence of our species, it took us about 30,000 years to invent clothing and about 120,000 years to invent jewelry. It took us about 130,000 years to invent art and about 136,000 years to come up with the bow and arrow. But things began to speed up in the Upper Paleolithic period. Between 50,000 and 10,000 years ago, we developed more sophisticated tools with specialized uses—think harpoons, darts, fishing tools and needles—early musical instruments, pottery, and the first domesticated animals. Between roughly 11,000 years ago and the 18th century, the pace truly accelerated. That time period essentially led to the creation of civilization, with the foundations of our current world.

More recently, the Industrial Revolution changed everything because it moved us significantly further away from relying on the strength of people and domesticated animals to power means of production. Steam engines and machinery replaced backbreaking labor, meaning more production at a lower cost. The number of adjacent possibilities began to snowball. Machinery enabled mass production and interchangeable parts. Steam powered trains meant people could move around far more easily, allowing people from different areas to mix together and share ideas. Improved communications did the same. It’s pointless to even try listing the ways technology has changed since then. Regardless of age, we’ve all lived through it and seen the acceleration. Few people dispute that the change is snowballing. The only question is how far that will go.

As Stephen Hawking put it in 1993:

For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.

But, as we saw with Moore’s Law, exponential growth cannot continue forever. Eventually we run into fundamental constraints. Hours in the day, people on the planet, availability of a resource, smallest possible size of a semiconductor, attention—there’s always a bottleneck we can’t eliminate. We reach the point of diminishing returns. Growth slows or stops altogether. We must then either look at alternative routes to improvement, or leave things as they are. In Everett Rogers’s diffusion of innovations theory, this is known as the substitution stage, when usage declines and we start looking for substitutes.

This process is not linear. We can’t predict the future because there’s no way to take into account the tiny factors that will have a disproportionate impact in the long-run.


Renaissance Paragone: An Ancient Tactic for Getting the Most From People

One of the engines behind the Italian Renaissance was the concept of paragone: pitting creative efforts against one another in the belief that only through such comparison could you come to see art’s real significance.

At first, the concept drove debates in salons. Eventually, however, it shifted into discussions of art, often among the very people who selected and funded it. In the Medici palaces, for example, rooms were arranged so that paintings would face each other. The idea was that people would directly compare the works, forming and expressing opinions. These competitions shifted the focus from the art to the artist. If one painting was better than another, you needed to know who the artist was so that you could hire them again.

Artists benefited from this arrangement, even if they didn’t win. They learned where they stood in comparison to others, both artistically and socially. Not only did they understand the gap, they learned how to close it, or change the point of comparison.

Da Vinci believed artists thrived under such competition. He once wrote:

You will be ashamed to be counted among draughtsmen if your work is inadequate, and this disgrace must motivate you to profitable study. Second, a healthy envy will stimulate you to become one of those who are praised more than yourself, for the praises of others will spur you on.

Many people want to know where they stand in relation to not only the external competition but to the people they work with every day. A lot of organizations make such comparisons difficult by hiding what matters. While you might know there is a gap between you and your coworker, you don’t know what the chasm looks like. And if you don’t know what it looks like, you don’t know where you are in relation. And if you don’t know where you are, you don’t know how to close the gap. It’s a weird sort of sabotage.

Not everyone responds to competition the same way. Pitting people directly against one another for a promotion might cause people to withdraw. That doesn’t mean they can’t handle it. It doesn’t mean they’re not amazing. Michelangelo once abandoned a competition with Da Vinci to flee to Rome—and we have only to look at the ceiling of the Sistine Chapel to know how he fared.

But a lack of competition can breed laziness in a lot of people. Worse still, that laziness gets rewarded. It’s not intentional. We just stop working as hard as we could. We coast.

Consider the proverbial office worker who sends out a sloppy first draft of a presentation to 15 people for them to “comment” on. What that person really wants is the work done for them. And because of the subtle messages organizations send, coworkers will often comply because they’re team players.

Consider the competition to make a sports team. The people on the bench (people who don’t start) make the starters better because the starters know they can’t get complacent or someone will take their job. Furthermore, the right to be on a team, once granted, isn’t assured. Someone is always vying to take any spot that opens up. That’s the nature of the world.

I’m not suggesting that all organizations promote a professional sport-like mentality. I’m suggesting you think about how you can harness competition to give people the information they need to get better. If they don’t want to get better after they know where they stand, you now know something about them you didn’t know before. Nor am I blindly advocating competition. It has limitations and drawbacks you need to consider (such as the effects it has on self-preservation and psychological safety).


The Lies We Tell

We make up stories in our minds and then against all evidence, defend them tooth and nail. Understanding why we do this is the key to discovering truth and making wiser decisions.

***

Our brains are quirky.

When I put my hand on a hot stove, I have instantly created awareness of a cause and effect relationship—“If I put my hand on a hot stove, it will hurt.” I’ve learned something fundamental about the world. Our brains are right to draw that conclusion. It’s a linear relationship, cause and effect are tightly coupled, feedback is near immediate, and there aren’t many other variables at play.

The world isn’t always this easy to understand. When cause and effect aren’t obvious, we still draw conclusions. Nobel Prize winning psychologist Daniel Kahneman offers an example of how our brains look for, and assume, causality:

“After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing.”

That’s all you get. No background on Jane, or any particulars about where she went. Kahneman presented this miniature story to his test subjects hidden among several other statements. When Kahneman later offered a surprise recall test, “the word pickpocket was more strongly associated with the story than the word sights, even though the latter was actually in the sentence while the former was not.” 1

What happened here?

There’s a bug in the evolutionary code that makes up our brains. We have a hard time distinguishing between when cause and effect is clear, as with the hot stove or chess, and when it’s not, as in the case of Jane and her wallet. We don’t like not knowing. We also love a story.

Our minds create plausible stories. In the case of Jane, many test subjects thought a pickpocket had taken her wallet, but there are other possible scenarios. More people lose wallets than have them stolen. But our patterns of beliefs take over, such as how we feel about New York or crowds, and we construct cause and effect relationships. We tell ourselves stories that are convincing, cheap, and often wrong. We don’t think about how these stories are created, whether they’re right, or how they persist. And we’re often uncomfortable when someone asks us to explain our reasoning.

Imagine a meeting where we are discussing Jane and her wallet, not unlike any meeting you have this week to figure out what happened and what decisions your organization needs to make next.

You start the meeting by saying “Jane’s wallet was stolen. Here’s what we’re going to do in response.”

But one person in the meeting, Micky, Jane’s second cousin, asks you to explain the situation.

You volunteer what you know. “After spending a day exploring beautiful sights in the crowded streets of New York, Jane discovered that her wallet was missing.” And you quickly launch into improved security measures.

Micky, however, tells herself a different story, because just last week a friend of hers left his wallet at a store. And she knows Jane can sometimes be absentminded. The story she tells herself is that Jane probably lost her wallet in New York. So she asks you, “What makes you think the wallet was stolen?”

The answer is obvious to you. You feel your heart rate start to rise. Frustration sets in.

You tell yourself that Micky is an idiot. This is so obvious. Jane was out. In New York. In a crowd. And we need to put in place something to address this wallet issue so that it doesn’t happen again. You think to yourself that she’s slowing the group down and we need to act now.

What else is happening? It’s likely you looked at the evidence again and couldn’t really explain how you drew your conclusion. Rather than have an honest conversation about the story you told yourself and the story Micky is telling herself, the meeting gets tense and goes nowhere.

The next time you catch someone asking you about your story and you can’t explain it in a falsifiable way, pause, and hit reset. Take your ego out of it. What you really care about is finding the truth, even if that means the story you told yourself is wrong.

Footnotes
  • 1

    Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.