
Gates’ Law: How Progress Compounds and Why It Matters

“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

It’s unclear exactly who first made that statement, when they said it, or how it was phrased. The most probable source is Roy Amara, a Stanford computer scientist. In the 1960s, Amara told colleagues that he believed that “we overestimate the impact of technology in the short-term and underestimate the effect in the long run.” For this reason, variations on that phrase are often known as Amara’s Law. However, Bill Gates made a similar statement (possibly paraphrasing Amara), so it’s also known as Gates’s Law.

You may have seen the same phrase attributed to Arthur C. Clarke, Tony Robbins, or Peter Drucker. There’s a good reason why Amara’s words have been appropriated by so many thinkers—they apply to so much more than technology. Almost universally, we tend to overestimate what can happen in the short term and underestimate what can happen in the long term.

Thinking about the future does not require endless hyperbole or even forecasting, which is usually pointless anyway. Instead, there are patterns we can identify if we take a long-term perspective.

Let’s look at what Bill Gates meant and why it matters.

Moore’s Law

Gates’s Law is often mentioned in conjunction with Moore’s Law. This is generally quoted as some variant of “the number of transistors on an inch of silicon doubles every eighteen months.” However, calling it Moore’s Law is misleading—at least if you think of laws as invariant. It’s more of an observation of a historical trend.

When Gordon Moore, co-founder of Fairchild Semiconductor and Intel, noticed in 1965 that the number of transistors on a chip doubled every year, he was not predicting that the trend would continue in perpetuity. Indeed, Moore revised the doubling time to two years a decade later. But the world latched onto his words. Moore’s Law has been variously treated as a target, a limit, a self-fulfilling prophecy, and a physical law as certain as the laws of thermodynamics.

Moore’s Law is now considered to be outdated, after holding true for several decades. That doesn’t mean the concept has gone anywhere. Moore’s Law is often regarded as a general principle in technological development. Certain performance metrics have a defined doubling time, the opposite of a half-life.

Why is Moore’s Law related to Amara’s Law?

Exponential growth is a concept we struggle to grasp. As University of Colorado physics professor Albert Allen Bartlett famously put it, “The greatest shortcoming of the human race is our inability to understand the exponential function.”

When we talk about Moore’s Law, we easily underestimate what happens when a value keeps doubling. It’s not hard to imagine your laptop getting twice as fast in a year. Where it gets tricky is imagining what that means on a longer timescale. What does it mean for your laptop in ten years? There is a reason your iPhone has more processing power than the computers that flew the first space shuttle.
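To make the arithmetic concrete, here is a minimal sketch (illustrative numbers only, not measured benchmarks) of what a fixed doubling time implies over a decade:

```python
# Growth under a fixed doubling time: N(t) = N0 * 2 ** (t / T).
def growth_factor(years: float, doubling_time: float) -> float:
    """Return how many times larger a quantity becomes after `years`."""
    return 2 ** (years / doubling_time)

# A laptop that doubles in speed every year is roughly 1,000x faster after ten years...
print(growth_factor(10, 1))   # 1024.0
# ...and about 32x faster if the doubling time is two years (Moore's revised figure).
print(growth_factor(10, 2))   # 32.0
```

Linear intuition suggests an answer like ten or twenty times faster; the exponential answer is three orders of magnitude.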

One of the best illustrations of exponential growth is the legend about a peasant and the emperor of China. In the story, the peasant (sometimes said to be the inventor of chess) visits the emperor with a seemingly modest request: a chessboard with one grain of rice on the first square, then two on the second, four on the third, and so on, doubling each time. The emperor agrees to this idiosyncratic request and orders his men to start counting out rice grains.

“Every fact of science was once damned. Every invention was considered impossible. Every discovery was a nervous shock to some orthodoxy. Every artistic innovation was denounced as fraud and folly. We would own no more, know no more, and be no more than the first apelike hominids if it were not for the rebellious, the recalcitrant, and the intransigent.”

— Robert Anton Wilson

If you haven’t heard this story before, it might seem like the peasant would end up with, at best, enough rice to feed their family that evening. In reality, the request was impossible to fulfill. Doubling one grain once for each of the 63 remaining squares on the chessboard, then adding up every square, would mean the emperor owed the peasant over 18 million trillion grains of rice. To grow just half of that amount, he would have needed to drain the oceans and convert every bit of land on this planet into rice fields. And that’s for half.

In his essay “The Law of Accelerating Returns,” author and inventor Ray Kurzweil uses this story to show how we misunderstand the meaning of exponential growth in technology. For the first few squares, the growth was inconsequential, especially in the eyes of an emperor. It was only once they reached the halfway point that the rate began to snowball dramatically. (It’s no coincidence that Warren Buffett’s authorized biography is called The Snowball, and few people understand exponential growth better than Warren Buffett). It just so happens that by Kurzweil’s estimation, we’re at that inflection point in computing. Since the creation of the first computers, computation power has doubled roughly 32 times. We may underestimate the long-term impact because the idea of this continued doubling is so tricky to imagine.
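The chessboard arithmetic is easy to check for yourself. A minimal sketch (the grain count is exact; the 32 doublings simply restate Kurzweil’s rough estimate quoted above):

```python
# Total grains on a 64-square board, starting with one grain and doubling each square:
# 1 + 2 + 4 + ... + 2**63 = 2**64 - 1.
total_grains = sum(2 ** square for square in range(64))
assert total_grains == 2 ** 64 - 1
print(f"{total_grains:,}")  # 18,446,744,073,709,551,615 -- over 18 million trillion

# Kurzweil's estimate of roughly 32 doublings of computing power so far:
print(f"{2 ** 32:,}")       # 4,294,967,296 -- about a four-billion-fold increase
```

The first half of the board yields only a few billion grains; the second half is where the numbers become unimaginable, which is Kurzweil’s point about the inflection.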

The Technology Hype Cycle

To understand how this plays out, let’s take a look at the cycle innovations go through after their invention. Known as the Gartner hype cycle, it primarily concerns our perception of technology—not its actual value in our lives.

Hype cycles are obvious in hindsight, but fiendishly difficult to spot while they are happening. It’s important to bear in mind that this model is one way of looking at reality and is not a prediction or a template. Sometimes a step gets missed, sometimes there is a substantial gap between steps, sometimes a step is deceptive.

The hype cycle happens like this:

  • New technology: The media picks up on a new technology, which may not yet exist in a usable form. Nonetheless, the publicity leads to significant interest. At this point, people working on research and development are probably not making any money from it. Lots of mistakes are made. In Everett Rogers’s diffusion of innovations theory, this is known as the innovation stage. If it seems like something new will have a dramatic payoff, it probably won’t last. If it seems we have found the perfect use for a brand-new technology, we may be wrong.
  • The peak of inflated expectations: A few well-publicized success stories lead to inflated expectations. Hype builds and new companies pop up to anticipate the demand. There may be a burst of funding for research and development. Scammers looking to make a quick buck may move into the area. Rogers calls this the syndication stage. It’s here that we overestimate the future applications and impact of the technology.
  • The trough of disillusionment: Prominent failures or a lack of progress break through the hype and lead to disillusionment. People become pessimistic about the technology’s potential and mostly lose interest. Reports of scams may contribute, with the media seizing on them to paint the technology as a fraud. If it seems like a new technology is dying, it may just be that its public perception has changed while the technology itself is still developing. Hype does not correlate directly with functionality.
  • The slope of enlightenment: As time passes, people continue to improve technology and find better uses for it. Eventually, it’s clear how it can improve our lives, and mainstream adoption begins. Mechanisms for preventing scams or lawbreaking emerge.
  • The plateau of productivity: The technology becomes mainstream. Development slows. It becomes part of our lives and ceases to seem novel. Those who move into the now saturated market tend to struggle, as a few dominant players take the lion’s share of the available profits. Rogers calls this the diffusion stage.

When we are cresting the peak of inflated expectations, we imagine that the new development will transform our lives within months. In the depths of the trough of disillusionment, we don’t expect it to get anywhere, even allowing years for it to improve. We typically fail to anticipate the significance of the plateau of productivity, even if it exceeds our initial expectations.

Smart people can usually see through the initial hype. But only a handful of people can—through foresight, stubbornness or perhaps pure luck—see through the trough of disillusionment. Most of the initial skeptics feel vindicated by the dramatic drop in interest and expect the innovation to disappear. It takes far greater expertise to support an unpopular technology than to deride a popular one.

Correctly spotting the cycle as it unfolds can be immensely profitable. Misreading it can be devastating. First movers in a new area often struggle to survive the trough, even if they are the ones who do the essential research and development. We tend to assume current trends will continue, so we expect sustained growth during the peak and expect linear decline during the trough.

If we are trying to assess the future impact of a new technology, we need to separate its true value from its public perception. When something is new, the mainstream hype is likely to be more noise than signal. After all, the peak of inflated expectations often happens before the technology is available in a usable form. It’s almost always before the public has access to it. Hype serves a real purpose in the early days: it draws interest, secures funding, attracts people with the right talents to move things forward and generates new ideas. Not all hype is equally important, because not all opinions are equally important. If there’s intense interest within a niche group with relevant expertise, that’s more telling than a general enthusiasm.

The hype cycle doesn’t just happen with technology. It plays out all over the place, and we’re usually fooled by it. Discrepancies between our short- and long-term estimates of achievement are everywhere. Consider the following situations. They’re hypothetical, but similar situations are common.

  • A musician releases an acclaimed debut album which creates enormous interest in their work. When their second album proves disappointing (or never materializes), most people lose interest. Over time, the performer develops a loyal, sustained following of people who accurately assess the merits of their music, not the hype.
  • A promising new pharmaceutical receives considerable attention—until it becomes apparent that there are unexpected side effects, or it isn’t as powerful as expected. With time, clinical trials find alternate uses which may prove even more beneficial. For example, a side effect could be helpful for another use. It’s estimated that over 20% of pharmaceuticals are prescribed for a different purpose than they were initially approved for, with that figure rising as high as 60% in some areas.
  • A propitious start-up receives an inflated valuation after a run of positive media attention. Its founders are lauded and extensively profiled and investors race to get involved. Then there’s an obvious failure—perhaps due to the overconfidence caused by hype—or early products fall flat or take too long to create. Interest wanes. The media gleefully dissects the company’s apparent demise. But the product continues to improve and ultimately becomes a part of our everyday lives.

In the short run, the world is a voting machine affected by whims and marketing. In the long run, it’s a weighing machine where quality and product matter.

The Adjacent Possible

Now that we know how Amara’s Law plays out in real life, the next question is: why does this happen? Why does technology grow in complexity at an exponential rate? And why don’t we see it coming?

One explanation is what Stuart Kauffman describes as “the adjacent possible.” Each new innovation expands the set of innovations that are achievable next. It opens up adjacent possibilities which didn’t exist before, because better tools can be used to make even better tools.

Humanity is about expanding the realm of the possible. Discovering fire meant our ancestors could use the heat to soften or harden materials and make better tools. Inventing the wheel meant the ability to move resources around, which meant new possibilities such as the construction of more advanced buildings using materials from other areas. Domesticating animals meant a way to pull wheeled vehicles with less effort, meaning heavier loads, greater distances and more advanced construction. The invention of writing led to new ways of recording, sharing and developing knowledge which could then foster further innovation. The internet continues to give us countless new opportunities for innovation. Anyone with a new idea can access endless free information, find supporters, discuss their ideas and obtain resources. New doors to the adjacent possible open every day as we find different uses for technology.

“We like to think of our ideas as $40,000 incubators shipped directly from the factory, but in reality, they’ve been cobbled together with spare parts that happened to be sitting in the garage.”

— Steven Johnson, Where Good Ideas Come From

Take the case of GPS, an invention that was itself built out of the debris of its predecessors. In recent years, GPS has opened up new possibilities that didn’t exist before. The system was developed by the US government for military use. In the 1980s, the government decided to start allowing other organizations and individuals to use it. Civilian access to GPS gave us new options. Since then, it has led to numerous innovations that incorporate the system into old ideas: self-driving cars, mobile phone tracking (very useful for solving crime or finding people in emergency situations), tectonic plate trackers that help predict earthquakes, personal navigation systems, self-navigating robots, and many others. None of these would have been possible without some sort of global positioning system. With the invention of GPS, human innovation sped up a little more.

Steven Johnson gives one example of how this happens in Where Good Ideas Come From. In 2008, MIT professor Timothy Prestero visited a hospital in Indonesia and found that all eight of the incubators for newborn babies were broken. The incubators had been donated to the hospital by relief organizations, but the staff didn’t know how to fix them. Plus, the incubators were poorly suited to the humid climate, and the repair instructions only came in English. Prestero realized that donating medical equipment was pointless if local people couldn’t fix it. He and his team began working on designing an incubator that would keep saving babies’ lives for much longer than a couple of months before breaking down.

Instead of continuing to tweak existing designs, Prestero and his team devised a completely new incubator that used car parts. While the local people didn’t know how to fix an incubator, they were extremely adept at keeping their cars working no matter what. Named the NeoNurture, it used headlights for warmth, dashboard fans for ventilation, and a motorcycle battery for power. Hospital staff just needed to find someone who was good with cars to fix it—the principles were the same.

Even more telling is the origin of the incubators Prestero and his team reconceptualized. The first incubator for newborn babies was designed by Stephane Tarnier in the late 19th century. While visiting a zoo on his day off, Tarnier noted that newborn chicks were kept in heated boxes. It’s not a big leap to imagine that the issue of infant mortality was permanently on his mind. Tarnier was an obstetrician, working at a time when the infant mortality rate for premature babies was about 66%. He must have been eager to try anything that could reduce that figure and its emotional toll. Tarnier’s rudimentary incubator immediately halved that mortality rate. The technology was right there, in the zoo. It just took someone to connect the dots and realize human babies aren’t that different from chicken babies.

Johnson explains the significance of this: “Good ideas are like the NeoNurture device. They are, inevitably, constrained by the parts and skills that surround them…ideas are works of bricolage; they’re built out of that detritus.” Tarnier could invent the incubator only because someone else had already invented a similar device. Prestero and his team could only invent the NeoNurture because Tarnier had come up with the incubator in the first place.

This happens in our lives, as well. If you learn a new skill, the number of skills you could potentially learn increases because some elements may be transferable. If you are introduced to a new person, the number of people you could meet grows, because they may introduce you to others. If you start learning a language, native speakers may be more willing to have conversations with you in it, meaning you can get a broader understanding. If you read a new book, you may find it easier to read other books by linking together the information in them. The list is endless. We can’t imagine what we’re capable of achieving in ten years because we forget about the adjacent possibilities that will emerge.

Accelerating Change

The adjacent possible has been expanding ever since the first person picked up a stone and started shaping it into a tool. Just look at what written and oral forms of communication made possible—no longer did each generation have to learn everything from scratch. Suddenly we could build upon what had come before us.

Some (annoying) people claim that there’s nothing new left. There are no new ideas to be had, no new creations to invent, no new options to explore. In fact, the opposite is true. Innovation is a non-zero-sum game. A crowded market actually means more opportunities to create something new than a barren one. Technology is a feedback loop. The creation of something new begets the creation of something even newer and so on.

Progress is exponential, not linear. So we overestimate the impact of a new technology during the early days when it is just finding its feet, then underestimate its impact in a decade or so when its full uses are emerging. As old limits and constraints melt away, our options explode. The exponential growth of technology is known as accelerating change. It’s a common belief among experts that the rate of change is speeding up and society will change dramatically alongside it.

“Ideas borrow, blend, subvert, develop and bounce off other ideas.”

— John Hegarty, Hegarty On Creativity

In 1999, author and inventor Ray Kurzweil posited the Law of Accelerating Returns: the idea that evolutionary systems develop at an exponential rate. While this is most obvious for technology, Kurzweil hypothesized that the principle is relevant in numerous other areas. Moore’s Law, initially referring only to semiconductors, has wider implications.

In an essay on the topic, he writes:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth.

Progress is tricky to predict or even to notice as it happens. It’s hard to notice things in a system that we are part of. And it’s hard to notice the incremental change because it lacks stark contrast. The current pace of change is our norm, and we adjust to it. In hindsight, we can see how Amara’s Law plays out.

Look at where the internet was just twenty years ago. A report from the Pew Research Center shows us how change compounds. In 1998, a mere 41% of Americans used the internet at all—and the report expresses surprise that the users were beginning to include “people without college training, those with modest incomes, and women.” Less than a third of users had bought something online, email was predominantly just for work, and only a third of users looked at online news at least once per week. That’s a third of the 41% using the internet, by the way, not of the general population (roughly 14% of all Americans). Wikipedia and Gmail didn’t exist. Internet users in the late nineties reported that their main problem was finding what they needed online.

That is perhaps the biggest change and one we may not have anticipated: the move towards personalization. Finding what we need is no longer a problem. Most of us have the opposite problem and struggle with information overwhelm. Twenty years ago, filter bubbles were barely a problem (at least, not online). Now, almost everything we encounter online is personalized to ensure it’s ridiculously easy to find what we want. Newsletters, websites, and apps greet us by name. Newsfeeds are organized by our interests. Shopping sites recommend other products we might like. This has increased the amount the internet does for us to a level that would have been hard to imagine in the late 90s. Kevin Kelly, writing in The Inevitable, describes filtering as one of the key forces that will shape the future.

History reveals an extraordinary acceleration of technological progress. Establishing the precise history of technology is problematic as some inventions occurred in several places at varying times, archaeological records are inevitably incomplete, and dating methods are imperfect. However, accelerating change is a clear pattern. To truly understand the principle of accelerating change, we need to take a quick look at a simple overview of the history of technology.

Early innovations happened slowly. Measured from the emergence of our species roughly 200,000 years ago, it took us about 30,000 years to invent clothing and about 120,000 years to invent jewelry. It took us about 130,000 years to invent art and about 136,000 years to come up with the bow and arrow. But things began to speed up in the Upper Paleolithic period. Between 50,000 and 10,000 years ago, we developed more sophisticated tools with specialized uses—think harpoons, darts, fishing tools, and needles—early musical instruments, pottery, and the first domesticated animals. Between roughly 11,000 years ago and the 18th century, the pace truly accelerated. That period essentially led to the creation of civilization, with the foundations of our current world.

More recently, the Industrial Revolution changed everything because it moved us significantly further away from relying on the strength of people and domesticated animals to power means of production. Steam engines and machinery replaced backbreaking labor, meaning more production at a lower cost. The number of adjacent possibilities began to snowball. Machinery enabled mass production and interchangeable parts. Steam-powered trains meant people could move around far more easily, allowing people from different areas to mix together and share ideas. Improved communications did the same. It’s pointless to even try listing the ways technology has changed since then. Regardless of age, we’ve all lived through it and seen the acceleration. Few people dispute that the change is snowballing. The only question is how far that will go.

As Stephen Hawking put it in 1993:

For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.

But, as we saw with Moore’s Law, exponential growth cannot continue forever. Eventually, we run into fundamental constraints. Hours in the day, people on the planet, availability of a resource, the smallest possible size of a transistor, attention—there’s always a bottleneck we can’t eliminate. We reach the point of diminishing returns. Growth slows or stops altogether. We must then either look at alternative routes to improvement or leave things as they are. In Everett Rogers’s diffusion of innovation theory, this is known as the substitution stage, when usage declines and we start looking for substitutes.

This process is not linear. We can’t predict the future because there’s no way to take into account the tiny factors that will have a disproportionate impact in the long-run.

Why the Printing Press and the Telegraph Were as Impactful as the Internet

What makes a communications technology revolutionary? One answer to this is to ask whether it fundamentally changes the way society is organized. This can be a very hard question to answer, because true fundamental changes alter society in such a way that it becomes difficult to speak of past society without imposing our present understanding.

In her seminal work, The Printing Press as an Agent of Change, Elizabeth Eisenstein argues just that:

When ideas are detached from the media used to transmit them, they are also cut off from the historical circumstances that shape them, and it becomes difficult to perceive the changing context within which they must be viewed.

Today we rightly think of the internet and the mobile phone as revolutionary, but long ago, the printing press and the telegraph both had just as heavy an impact on the development of society.

Printing Press

Thinking of the time before the telegraph, when communications had to be hand-delivered, seems quaint. Trying to conceive of the world before the uniformity of communication brought about by the printing press is almost unimaginable.

Eisenstein argues that the printing press “is of special historical significance because it produced fundamental alterations in prevailing patterns of continuity and change.”

Before the printing press there were no books, not in the sense that we understand them. There were manuscripts that were copied by scribes, which contained inconsistencies and embellishments, and modifications that suited who the scribe was working for. The printing press halted the evolution of symbols: For the first time maps and numbers were fixed.

Furthermore, because pre-press scholars had to go to manuscripts, Eisenstein says we should “recognize the novelty of being able to assemble diverse records and reference guides, and of being able to study them without having to transcribe them at the same time” that was afforded by the printing press.

This led to new ways of being able to compare and thus develop knowledge, by reducing the friction of getting to the old knowledge:

More abundantly stocked bookshelves obviously increased opportunities to consult and compare different texts. Merely by making more scrambled data available, by increasing the output of Aristotelian, Alexandrian and Arabic texts, printers encouraged efforts to unscramble these data.

Eisenstein argues that many of the great thinkers of the 16th century, such as Descartes and Montaigne, would have been unlikely to have produced what they did without the changes wrought by the printing press. She says of Montaigne, “that he could see more books by spending a few months in his Bordeaux tower-study than earlier scholars had seen after a lifetime of travel.”

The printing press increased the speed of communication and the spread of knowledge: Far fewer man-hours were needed to turn out 50 printed books than 50 scribed manuscripts.

Telegraph

Henry Ford famously said of life before the car, “If I had asked people what they wanted, they would have said faster horses.” This sentiment could be equally applied to the telegraph, a communications technology that came about 400 years after the printing press.

Before the telegraph, the speed of communication was dependent on the speed of the physical object doing the transporting – the horse, or the ship. Societies were thus organized around the speed of communication available to them, from the way business was conducted and wars were fought to the way people corresponded with one another.

Let’s consider, for example, the way the telegraph changed the conduct of war.

Prior to the telegraph, countries shared detailed knowledge of their plans with their citizens in order to boost morale, knowing that those plans would reach the enemy at the same time as their ships did. Post-telegraph, communications could travel far faster than soldiers: This was something to consider!

In addition, as Tom Standage considers in his book The Victorian Internet, the telegraph altered the command structure in battle. “For who was better placed to make strategic decisions: the commander at the scene or his distant superiors?”

The telegraph brought changes similar in many ways to the printing press: It allowed for an accumulation of knowledge and increased the availability of this knowledge; more people had access to more information.

And society was forever altered as the new speed of communication made it fundamentally impossible to not use the telegraph, just as it is near impossible not to use a mobile phone or the Internet today.

Once the telegraph was widespread, there was no longer a way to do business without using it. Having up-to-the-minute stock quotes changed the way businesses evaluated their holdings. Being able to communicate with various offices across the country created centralization and middle management. These elements became part of doing business, so that it became nonsensical to talk about developing any aspect of business independent of the effect of electronic communication.

A Final Thought on Technology Uptake

One can argue that the more revolutionary an invention is, the slower the initial uptake into society, as society must do a fair amount of reorganizing to integrate the invention.

Such was the case for both the telegraph and printing press, as they allowed for things that were never before possible. Not being possible, they were rarely considered. Being rarely considered, there wasn’t a large populace pining for them to happen. So when new options presented themselves, no one was rushing to embrace them, because there was no general appreciation of their potential. This is, of course, a fundamental aspect of revolutionary technology. Everyone has to figure out how (and why) to use it.

In The Victorian Internet, Standage says of William Cooke and Samuel Morse, the British and American inventors, respectively, of the telegraph:

[They] had done the impossible and constructed working telegraphs. Surely the world would fall at their feet. Building the prototypes, however, turned out to be the easy part. Convincing people of their significance was far more of a challenge.

It took years for people to see advantages with the telegraph. Even after the first lines were built, and the accuracy and speed of the communications they could carry verified, Morse realized that “everybody still thought of the telegraph as a novelty, as nothing more than an amusing subject for a newspaper article, rather than the revolutionary new form of communication that he envisaged.”

The new technology might confer great benefits, but it took a lot of work to build the infrastructure, both physical and mental, required to take advantage of them.

The printing press faced similar challenges. In fact, books printed from Gutenberg until 1501 have their own term, incunabula, which reflects the transition from manuscript to book. Eisenstein writes: “Printers and scribes copied each other’s products for several decades and duplicated the same texts for the same markets during the age of incunabula.”

The momentum took a while to build. When it did, the changes were remarkable.

But looking at these two technologies serves as a reminder of what revolutionary means in this context: The use by and value to society cannot be anticipated. Therefore, great and unpredictable shifts are caused when they are adopted and integrated into everyday life.

Don’t Let Your (Technology) Tools Use You

“In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

— Herbert Simon

***

A shovel is just a shovel. You shovel things with it. You can break up weeds and dirt. (You can also whack someone with it.) I’m not sure I’ve seen a shovel used for much else.

Modern technological tools aren’t really like that.

What is an iPhone, functionally? Sure, it’s got the phone thing down, but it’s also a GPS, a note-taker, an emailer, a text messager, a newspaper, a video-game device, a taxi-calling service, a flashlight, a web browser, a library, a book…you get the point. It does a lot.

This all seems pretty wonderful. To perform those functions 20 years ago, you needed a map and a sense of direction, a notepad, a personal computer, a cell phone, an actual newspaper, a PlayStation, a phone and the willingness to talk to a person, an actual flashlight, an actual library, an actual book…you get the point. As Marc Andreessen puts it, software is eating the world. One simple (looking) device and a host of software can perform the functions served by a bunch of big clunky tools of the past.

So far, we’ve been convinced that use of the New Tools is mostly “upside,” that our embrace of them should be wholehearted. Much of this is for good reason. Do you remember how awful using a map was? Yuck.

The problem is that our New Tools are winning the battle of attention. We’ve gotten to the point where the tools use us as much as we use them. This new reality means we need to re-examine our relationship with our New Tools.


Down the Rabbit Hole

Here’s a typical situation.

You’re on your computer finishing the client presentation you have to give in two days. Your phone lights up and makes a chiming noise — you’ve got a text message. “Hey, have you seen that new Dracula movie?” asks your friend. It only takes a few messages before the two of you begin to disagree on whether Transylvania is actually a real place. Off to Google!

After a few quick clicks, you get to Wikipedia, which tells you that yes, Transylvania is a region of Romania which the author Bram Stoker used as Count Dracula’s birthplace. Reading the Wikipedia entry costs you about 20 minutes. As you read, you find out that Bram Stoker was actually Irish. Irish! An Irish guy wrote Dracula? How did I not know this? Curiosity stoked, you look up Irish novelists, the history of Gothic literature, the original vampire stories…down and down the rabbit hole you go.

Eventually your thirst for trivia is quenched, and you close the Wikipedia tab to text your friend how wrong they are about Transylvania. You click the Home button to leave your text conversation, which lets you see the Twitter icon. I wonder how many people retweeted my awesome joke about ventriloquism? You pull it up and start “The Scroll.” Hah! Greg is hilarious. Are you serious, Bill Gates? Damn — I wish I read as much as Shane Parrish. You go and go. Your buddy tweets a link to an interesting-looking article about millennials — “10 Ways Millennials are Ruining the Workplace”. God, they are so self-absorbed. Click.

You decide to check Facebook and see if that girl from the cocktail party on Friday commented on your status. She didn’t, but Wow, Susanne went to Hawaii? You look at 35 pictures Susanne posted in her first three hours in Hawaii. Wait, who’s that guy she’s with? You click his name and go to his Facebook page. On down the rabbit hole you fall…

Now it’s been two hours since you left your presentation to respond to the text message, and you find yourself physically tired from the rapid scanning and clicking, scanning and clicking, scanning and clicking of the past two hours. Sad, you go get a coffee, go for a short walk, and decide: Now, I will focus. No more distraction.

Ten minutes in, your phone buzzes. That girl from the cocktail party commented on your status…

Attention for Sale

We’ve all been there. When we come up for air, it can feel like the aftermath of being swept up in a mob. What did I just do?

The tools we’re now addicted to have been engineered for a simple purpose: To keep us addicted to them. The service they provide is secondary to the addiction. Yes, Facebook is a networking tool. Yes, Twitter is a communication tool. Yes, Instagram is an excellent food-photography tool. But unless they get us hooked and keep us hooked, their business models are broken.

Don’t believe us?

Take stock of the metrics by which people value or assess these companies. Clicks. Views. Engagement. Return visits. Length of stay. The primary source of value for these products is how much you use them and what they can sell to you while you’re there. Increasing their value is a simple (but not easy) proposition: Either get usage up or figure out more effective ways to sell to you while you’re there.

As Herbert Simon might have predicted, our attention is for sale, and we’re ceding it a little at a time as the tools get better and better at fulfilling their function. There’s a version of natural selection going on, where the only consumer technology products that survive are the enormously addictive ones. The trait which produces maximum fitness is addictiveness itself. If you’re not using a tool constantly, it has no value to advertisers or data sellers, and thus its makers cannot raise capital to survive. And even if it’s an app or tool that you buy, one that you have to pay money for upfront, its makers must hook you on Version 1 if you’re going to be expected to buy Versions 2, 3, and 4.

This ecosystem ensures that each generation of consumer tech products – hardware or software – gets better and better at keeping you hooked. These services have learned, through a process of evolution, to drown users in positive feedback and create intense habitual usage. They must – because any other outcome is death. Facebook doesn’t want you to go on once a month to catch up on your correspondence. You must be engaged. The service does not care whether it’s unnecessarily eating into your life.

Snap Back to Reality

It’s up to us, then, to take our lives back. We must recognize that the New Tools have a tremendous downside in the focused attention they cost us, and that we’re giving it up willingly in a sort of Faustian bargain for entertainment, connectedness, and novelty.

Psychologist Mihaly Csikszentmihalyi pioneered the concept of Flow, where we enter an enjoyable state of rapt attention to our work and produce a high level of creative output. It’s a wonderful feeling, but the New Tools have learned to provide the same sensation without the actual results. We don’t end up with a book, or a presentation, or a speech, or a quilt, or a hand-crafted table. We end up two hours later in the day.

***

The first step towards a solution must be to understand the reality of this new ecosystem.

It follows Garrett Hardin’s “First Law of Ecology”: You can never merely do one thing. The New Tools are not like the Old Tools, where you pick up the shovel, do your shoveling, and then put the shovel back in the garage. The iPhone is not designed that way. It’s designed to keep you going, as are most of the other New Tools. You probably won’t send one text. You probably won’t watch one video. You probably won’t read one article. You’re not supposed to!

The rational response to this new reality depends a lot on who you are and what you need the tools for. Some people can get rid of 50% or more of their New Tools very easily. You don’t have to toss out your iPhone for a StarTAC, but because software is doing the real work, you can purposefully reduce the capability of the hardware by reducing your exposure to certain software.

As you shed certain tools, expect a homeostatic response from your network. Don’t be mistaken: If you’re a Snapchatter or an Instagrammer or simply an avid texter, getting rid of those services will give rise to consternation. They are, after all, networking tools. Your network will notice. You’ll need a bit of courage to face your friends and tell them, with a straight face, that you won’t be Instagramming anymore because you’re afraid of falling down the rabbit hole. But if you’ve got the courage, you’ll probably find that after a week or two of adjustment your life will go on just fine.

The second and more mild type of response would be to appreciate the chain-smoking nature of these products and to use them more judiciously. Understand that every time you look at your iPhone or connect to the Internet, the rabbit hole is there waiting for you to tumble down. If you can grasp that, you’ll realize that you need to be suspicious of the “quick check.” Either learn to batch Internet and phone time into concentrated blocks or slowly re-learn how to ignore the desire to follow up on every little impulse that comes to mind. (Or preferably, do both.)

A big part of this is turning off any sort of “push” notification, which must be the most effective attention-diverter ever invented by humanity. A push notification is anything that draws your attention to the tool without your conscious input. It’s when your phone buzzes for a text message, or an image comes on the screen when you get an email, or your phone tells you that you’ve got a Facebook comment. Anything that desperately induces you to engage. You need to turn them off. (Yes, including text message notifications – your friends will get used to waiting).

E-mail can be the worst offender; it’s the earliest and still one of the most effective digital rabbit holes. To push back, close your email client when you’re not using it. That way, you’ll have to open it to send or read an email. Then go ahead and change the settings on your phone’s email client so you have to “fetch” emails yourself, rather than having them pushed at you. Turn off anything that tells you an email has arrived.

Once you stop being notified by your tools, you can start to engage with them on your own terms and focus on your real work for a change; focus on the stuff actually producing some value in your life and in the world. When the big stuff is done, you can give yourself a half-hour or an hour to check your Facebook page, check your Instagram page, follow up on Wikipedia, check your emails, and respond to your text messages. This isn’t as good a solution as deleting many of the apps altogether, but it does allow you to engage with these tools on your own terms.

However you choose to address the world of New Tools, you’re way ahead if you simply recognize their power over your attention. Getting lost in hyperlinks and Facebook feeds doesn’t mean you’re weak, it just means the tools you’re using are designed, at their core, to help you get lost. Instead of allowing yourself to go to work for them, resolve to make them work for you.

Marshall McLuhan: The Here And Now


“In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message.”

***

In this passage from Understanding Media, Marshall McLuhan reminds us of the difficulty that frictionless connection brings with it and how advances in media technology have worked not to preserve but rather to ‘abolish history.’

Perfection of the means of communication has meant instantaneity. Such an instantaneous network of communication is the body-mind unity of each of us. When a city or a society achieves a diversity and equilibrium of awareness analogous to the body-mind network, it has what we tend to regard as a high culture.

But the instantaneity of communication makes free speech and thought difficult if not impossible, and for many reasons. Radio extends the range of the casual speaking voice, but it forbids that many should speak. And when what is said has such range of control, it is forbidden to speak any but the most acceptable words and notions. Power and control are in all cases paid for by loss of freedom and flexibility.

Today the entire globe has a unity in point of mutual interawareness, which exceeds in rapidity the former flow of information in a small city—say Elizabethan London with its eighty or ninety thousand inhabitants. What happens to existing societies when they are brought into such intimate contact by press, picture stories, newsreels, and jet propulsion? What happens when the Neolithic Eskimo is compelled to share the time and space arrangements of technological man? What happens in our minds as we become familiar with the diversity of human cultures which have come into existence under innumerable circumstances, historical and geographical? Is what happens comparable to that social revolution which we call the American melting pot?

When the telegraph made possible a daily cross section of the globe transferred to the page of newsprint, we already had our mental melting pot for cosmic man—the world citizen. The mere format of the page of newsprint was more revolutionary in its intellectual and emotional consequences than anything that could be said about any part of the globe.

When we juxtapose news items from Tokyo, London, New York, Chile, Africa, and New Zealand, we are not just manipulating space. The events so brought together belong to cultures widely separated in time. The modern world abridges all historical times as readily as it reduces space. Everywhere and every age have become here and now. History has been abolished by our new media.

The Glass Cage: Automation and Us

People have worried about losing their jobs to robots for decades now. But how is growing automation really going to change us? Let’s take a look at the limitations of automation and the uniquely human skills that will remain valuable.

***

The impact of technology is all around us. Maybe we’re at another Gutenberg moment and maybe we’re not.

Marshall McLuhan said it best.

When any new form comes into the foreground of things, we naturally look at it through the old stereos. We can’t help that. This is normal, and we’re still trying to see how will our previous forms of political and educational patterns persist under television. We’re just trying to fit the old things into the new form, instead of asking what is the new form going to do to all the assumptions we had before.

He also wrote that “a new medium is never an addition to an old one, nor does it leave the old one in peace.”

In The Glass Cage: Automation and Us, Nick Carr, one of my favorite writers, enters the debate about the impact automation has on us, “examining the personal as well as the economic consequences of our growing dependence on computers.”

We know that the nature of jobs is going to change in the future thanks to technology. Tyler Cowen argues “If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch.”

Carr’s book shows another side to the argument – the broader human consequences of living in a world where computers and software do the things we used to do.

Computer automation makes our lives easier, our chores less burdensome. We’re often able to accomplish more in less time—or to do things we simply couldn’t do before. But automation also has deeper, hidden effects. As aviators have learned, not all of them are beneficial. Automation can take a toll on our work, our talents, and our lives. It can narrow our perspectives and limit our choices. It can open us to surveillance and manipulation. As computers become our constant companions, our familiar, obliging helpmates, it seems wise to take a closer look at exactly how they’re changing what we do and who we are.

On the autonomous automobile, for example, Carr argues that while they have a ways to go before they start chauffeuring us around, there are broader questions that need to be answered first.

Although Google has said it expects commercial versions of its car to be on sale by the end of the decade, that’s probably wishful thinking. The vehicle’s sensor systems remain prohibitively expensive, with the roof-mounted laser apparatus alone going for eighty thousand dollars. Many technical challenges remain to be met, such as navigating snowy or leaf-covered roads, dealing with unexpected detours, and interpreting the hand signals of traffic cops and road workers. Even the most powerful computers still have a hard time distinguishing a bit of harmless road debris (a flattened cardboard box, say) from a dangerous obstacle (a nail-studded chunk of plywood). Most daunting of all are the many legal, cultural, and ethical hurdles a driverless car faces. Where, for instance, will culpability and liability reside should a computer-driven automobile cause an accident that kills or injures someone? With the car’s owner? With the manufacturer that installed the self-driving system? With the programmers who wrote the software? Until such thorny questions get sorted out, fully automated cars are unlikely to grace dealer showrooms.

Tacit and Explicit Knowledge

Self-driving cars are just one example of a technology that forces us “to change our thinking about what computers and robots can and can’t do.”

Up until that fateful October day, it was taken for granted that many important skills lay beyond the reach of automation. Computers could do a lot of things, but they couldn’t do everything. In an influential 2004 book, The New Division of Labor: How Computers Are Creating the Next Job Market, economists Frank Levy and Richard Murnane argued, convincingly, that there were practical limits to the ability of software programmers to replicate human talents, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed specifically to the example of driving a car on the open road, a talent that requires the instantaneous interpretation of a welter of visual signals and an ability to adapt seamlessly to shifting and often unanticipated situations. We hardly know how we pull off such a feat ourselves, so the idea that programmers could reduce all of driving’s intricacies, intangibilities, and contingencies to a set of instructions, to lines of software code, seemed ludicrous. “Executing a left turn across oncoming traffic,” Levy and Murnane wrote, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” It seemed a sure bet, to them and to pretty much everyone else, that steering wheels would remain firmly in the grip of human hands.

In assessing computers’ capabilities, economists and psychologists have long drawn on a basic distinction between two kinds of knowledge: tacit and explicit. Tacit knowledge, which is also sometimes called procedural knowledge, refers to all the stuff we do without actively thinking about it: riding a bike, snagging a fly ball, reading a book, driving a car. These aren’t innate skills—we have to learn them, and some people are better at them than others—but they can’t be expressed as a simple recipe, a sequence of precisely defined steps. When you make a turn through a busy intersection in your car, neurological studies have shown, many areas of your brain are hard at work, processing sensory stimuli, making estimates of time and distance, and coordinating your arms and legs. But if someone asked you to document everything involved in making that turn, you wouldn’t be able to, at least not without resorting to generalizations and abstractions. The ability resides deep in your nervous system outside the ambit of your conscious mind. The mental processing goes on without your awareness.

Much of our ability to size up situations and make quick judgments about them stems from the fuzzy realm of tacit knowledge. Most of our creative and artistic skills reside there too. Explicit knowledge, which is also known as declarative knowledge, is the stuff you can actually write down: how to change a flat tire, how to fold an origami crane, how to solve a quadratic equation. These are processes that can be broken down into well-defined steps. One person can explain them to another person through written or oral instructions: do this, then this, then this.

Because a software program is essentially a set of precise, written instructions—do this, then this, then this—we’ve assumed that while computers can replicate skills that depend on explicit knowledge, they’re not so good when it comes to skills that flow from tacit knowledge. How do you translate the ineffable into lines of code, into the rigid, step-by-step instructions of an algorithm? The boundary between the explicit and the tacit has always been a rough one—a lot of our talents straddle the line—but it seemed to offer a good way to define the limits of automation and, in turn, to mark out the exclusive precincts of the human. The sophisticated jobs Levy and Murnane identified as lying beyond the reach of computers—in addition to driving, they pointed to teaching and medical diagnosis—were a mix of the mental and the manual, but they all drew on tacit knowledge.

Google’s car resets the boundary between human and computer, and it does so more dramatically, more decisively, than have earlier breakthroughs in programming. It tells us that our idea of the limits of automation has always been something of a fiction. We’re not as special as we think we are. While the distinction between tacit and explicit knowledge remains a useful one in the realm of human psychology, it has lost much of its relevance to discussions of automation.
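Carr’s “do this, then this, then this” is almost literally a description of source code, which is why explicit knowledge automates so readily. Here is a minimal sketch of the quadratic-equation example mentioned above (a hypothetical helper written for illustration, not taken from the book):

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Explicit knowledge as a recipe: the quadratic formula, step by step.
    Assumes a != 0 and real roots (non-negative discriminant) for simplicity."""
    discriminant = b * b - 4 * a * c                      # do this
    root = math.sqrt(discriminant)                        # then this
    return (-b + root) / (2 * a), (-b - root) / (2 * a)   # then this

print(solve_quadratic(1, -3, 2))  # (2.0, 1.0): the roots of x^2 - 3x + 2 = 0
```

Making an unprotected left turn across traffic resists this kind of step-by-step decomposition, which is exactly the boundary Levy and Murnane were pointing to.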

Tomorrowland

That doesn’t mean that computers now have tacit knowledge, or that they’ve started to think the way we think, or that they’ll soon be able to do everything people can do. They don’t, they haven’t, and they won’t. Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means. When a driverless car makes a left turn in traffic, it’s not tapping into a well of intuition and skill; it’s following a program. But while the strategies are different, the outcomes, for practical purposes, are the same. The superhuman speed with which computers can follow instructions, calculate probabilities, and receive and send data means that they can use explicit knowledge to perform many of the complicated tasks that we do with tacit knowledge. In some cases, the unique strengths of computers allow them to perform what we consider to be tacit skills better than we can perform them ourselves. In a world of computer-controlled cars, you wouldn’t need traffic lights or stop signs. Through the continuous, high-speed exchange of data, vehicles would seamlessly coordinate their passage through even the busiest of intersections—just as computers today regulate the flow of inconceivable numbers of data packets along the highways and byways of the internet. What’s ineffable in our own minds becomes altogether effable in the circuits of a microchip.

Many of the cognitive talents we’ve considered uniquely human, it turns out, are anything but. Once computers get quick enough, they can begin to replicate our ability to spot patterns, make judgments, and learn from experience.

It’s not only vocations that are increasingly being computerized; avocations are too.

Thanks to the proliferation of smartphones, tablets, and other small, affordable, and even wearable computers, we now depend on software to carry out many of our daily chores and pastimes. We launch apps to aid us in shopping, cooking, exercising, even finding a mate and raising a child. We follow turn-by-turn GPS instructions to get from one place to the next. We use social networks to maintain friendships and express our feelings. We seek advice from recommendation engines on what to watch, read, and listen to. We look to Google, or to Apple’s Siri, to answer our questions and solve our problems. The computer is becoming our all-purpose tool for navigating, manipulating, and understanding the world, in both its physical and its social manifestations. Just think what happens these days when people misplace their smartphones or lose their connections to the net. Without their digital assistants, they feel helpless.

As Katherine Hayles, a literature professor at Duke University, observed in her 2012 book How We Think, “When my computer goes down or my Internet connection fails, I feel lost, disoriented, unable to work—in fact, I feel as if my hands have been amputated.”

While our dependence on computers is “disconcerting at times,” we welcome it.

We’re eager to celebrate and show off our whizzy new gadgets and apps—and not only because they’re so useful and so stylish. There’s something magical about computer automation. To watch an iPhone identify an obscure song playing over the sound system in a bar is to experience something that would have been inconceivable to any previous generation.

Miswanting

The trouble with automation is “that it often gives us what we don’t need at the cost of what we do.”

To understand why that’s so, and why we’re eager to accept the bargain, we need to take a look at how certain cognitive biases—flaws in the way we think—can distort our perceptions. When it comes to assessing the value of labor and leisure, the mind’s eye can’t see straight.

Mihaly Csikszentmihalyi, a psychology professor and author of the popular 1990 book Flow, has described a phenomenon that he calls “the paradox of work.” He first observed it in a study conducted in the 1980s with his University of Chicago colleague Judith LeFevre. They recruited a hundred workers, blue-collar and white-collar, skilled and unskilled, from five businesses around Chicago. They gave each an electronic pager (this was when cell phones were still luxury goods) that they had programmed to beep at seven random moments a day over the course of a week. At each beep, the subjects would fill out a short questionnaire. They’d describe the activity they were engaged in at that moment, the challenges they were facing, the skills they were deploying, and the psychological state they were in, as indicated by their sense of motivation, satisfaction, engagement, creativity, and so forth. The intent of this “experience sampling,” as Csikszentmihalyi termed the technique, was to see how people spend their time, on the job and off, and how their activities influence their “quality of experience.”
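
For readers curious about the mechanics of experience sampling, here is a rough sketch of the kind of schedule the pagers enforced: seven random beeps a day over a week. The 8 a.m. to 10 p.m. waking window and the code itself are assumptions made for illustration, not details from the original study.

import random

# Rough sketch of an experience-sampling schedule: seven random beep times per
# day across a seven-day week. The waking-hours window is an assumption.

def weekly_beep_schedule(beeps_per_day=7, days=7, start_hour=8, end_hour=22, seed=42):
    random.seed(seed)
    schedule = {}
    for day in range(1, days + 1):
        minutes = sorted(random.sample(range(start_hour * 60, end_hour * 60), beeps_per_day))
        schedule[day] = [f"{m // 60:02d}:{m % 60:02d}" for m in minutes]
    return schedule

for day, times in weekly_beep_schedule().items():
    print(f"Day {day}: {', '.join(times)}")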

The results were surprising. People were happier, felt more fulfilled by what they were doing, while they were at work than during their leisure hours. In their free time, they tended to feel bored and anxious. And yet they didn’t like to be at work. When they were on the job, they expressed a strong desire to be off the job, and when they were off the job, the last thing they wanted was to go back to work. “We have,” reported Csikszentmihalyi and LeFevre, “the paradoxical situation of people having many more positive feelings at work than in leisure, yet saying that they wish to be doing something else when they are at work, not when they are in leisure.” We’re terrible, the experiment revealed, at anticipating which activities will satisfy us and which will leave us discontented. Even when we’re in the midst of doing something, we don’t seem able to judge its psychic consequences accurately.

Those are symptoms of a more general affliction, on which psychologists have bestowed the poetic name miswanting. We’re inclined to desire things we don’t like and to like things we don’t desire. “When the things we want to happen do not improve our happiness, and when the things we want not to happen do,” the cognitive psychologists Daniel Gilbert and Timothy Wilson have observed, “it seems fair to say we have wanted badly.” And as slews of gloomy studies show, we’re forever wanting badly. There’s also a social angle to our tendency to misjudge work and leisure. As Csikszentmihalyi and LeFevre discovered in their experiments, and as most of us know from our own experience, people allow themselves to be guided by social conventions—in this case, the deep-seated idea that being “at leisure” is more desirable, and carries more status, than being “at work”—rather than by their true feelings. “Needless to say,” the researchers concluded, “such a blindness to the real state of affairs is likely to have unfortunate consequences for both individual wellbeing and the health of society.” As people act on their skewed perceptions, they will “try to do more of those activities that provide the least positive experiences and avoid the activities that are the source of their most positive and intense feelings.” That’s hardly a recipe for the good life.

It's not that the work we do for pay is intrinsically superior to the activities we engage in for diversion or entertainment. Far from it. Plenty of jobs are dull and even demeaning, and plenty of hobbies and pastimes are stimulating and fulfilling. But a job imposes a structure on our time that we lose when we're left to our own devices. At work, we're pushed to engage in the kinds of activities that human beings find most satisfying. We're happiest when we're absorbed in a difficult task, a task that has clear goals and that challenges us not only to exercise our talents but to stretch them. We become so immersed in the flow of our work, to use Csikszentmihalyi's term, that we tune out distractions and transcend the anxieties and worries that plague our everyday lives. Our usually wayward attention becomes fixed on what we're doing. "Every action, movement, and thought follows inevitably from the previous one," explains Csikszentmihalyi. "Your whole being is involved, and you're using your skills to the utmost." Such states of deep absorption can be produced by all manner of effort, from laying tile to singing in a choir to racing a dirt bike. You don't have to be earning a wage to enjoy the transports of flow.

More often than not, though, our discipline flags and our mind wanders when we’re not on the job. We may yearn for the workday to be over so we can start spending our pay and having some fun, but most of us fritter away our leisure hours. We shun hard work and only rarely engage in challenging hobbies. Instead, we watch TV or go to the mall or log on to Facebook. We get lazy. And then we get bored and fretful. Disengaged from any outward focus, our attention turns inward, and we end up locked in what Emerson called the jail of self-consciousness. Jobs, even crummy ones, are “actually easier to enjoy than free time,” says Csikszentmihalyi, because they have the “built-in” goals and challenges that “encourage one to become involved in one’s work, to concentrate and lose oneself in it.” But that’s not what our deceiving minds want us to believe. Given the opportunity, we’ll eagerly relieve ourselves of the rigors of labor. We’ll sentence ourselves to idleness.

Automation offers us innumerable promises. Our lives, we think, will be better if more things are automated. Yet as Carr explores in The Glass Cage, automation exacts a cost, removing "complexity from jobs, diminishing the challenge they present and hence the level of engagement they promote." This doesn't mean that Carr is anti-automation. He's not. He just wants us to see another side.

“All too often,” Carr warns, “automation frees us from that which makes us feel free.”
