Category: Technology

Why Life Can’t Be Simpler

We’d all like life to be simpler. But we also don’t want to sacrifice our options and capabilities. Tesler’s law of the conservation of complexity, a rule from design, explains why we can’t have both. Here’s how the law can help us create better products and services by rethinking simplicity.

“Why can’t life be simple?”

We’ve all likely asked ourselves that at least once. After all, life is complicated. Every day, we face processes that seem almost infinitely recursive. Each step requires the completion of a different task to make it possible, which in itself requires another task. We confront tools requiring us to memorize reams of knowledge and develop additional skills just to use them. Endeavors that seem like they should be simple, like getting utilities connected in a new home or figuring out the controls for a fridge, end up having numerous perplexing steps.

When we wish for things to be simpler, we usually mean we want products and services to have fewer steps, fewer controls, fewer options, less to learn. But at the same time, we still want all of the same features and capabilities. These two categories of desires are often at odds with each other and distort how we understand the complex.

***

Conceptual Models

In Living with Complexity, Donald A. Norman explains that complexity is all in the mind. Our perception of a product or service as simple or complex has its basis in the conceptual model we have of it. Norman writes that “A conceptual model is the underlying belief structure held by a person about how something works . . . Conceptual models are extremely important tools for organizing and understanding otherwise complex things.”

For example, on many computers, you can drag and drop a file into a folder. Both the file and the folder often have icons that represent their real-world namesakes. For the user, this process is simple; it provides a clear conceptual model. When people first started using graphical interfaces, real-world terms and icons made it easier to translate what they were doing. But the process only seems simple because of this effective conceptual model. It doesn’t represent what happens on the computer, where files and folders don’t exist. Computers store data wherever is convenient and may split files across multiple locations.

When we want something to be simpler, what we truly need is a better conceptual model of it. Once we know how to use them, complex tools end up making our lives simpler because they provide the precise functionality we want. A computer file is a great conceptual model because it hijacked something people already understood: physical files and folders. It would have been much harder for them to develop a whole new conceptual model reflecting how computers actually store files. What’s important to note is that giving users this simple conceptual model didn’t change how things work behind the scenes.
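
To make the idea concrete, here is a minimal sketch in Python. It is not how any real filesystem works; the ScatteredFile class, the block numbers, and their contents are invented purely to illustrate how a simple conceptual model (one named file, read in one piece) can sit on top of data scattered wherever storage happened to be free.

```python
# A toy illustration of a simple conceptual model hiding messy internals.
# Nothing here reflects a real filesystem; names and block layout are invented.

class ScatteredFile:
    def __init__(self, name, blocks, block_order):
        self.name = name
        self._blocks = blocks            # block number -> raw bytes, stored wherever was convenient
        self._block_order = block_order  # which blocks belong to this file, and in what order

    def read(self) -> bytes:
        """The simple model users see: one file, read in one piece."""
        return b"".join(self._blocks[i] for i in self._block_order)

# The "disk": pieces of one document happen to live in blocks 7, 2, and 19.
disk = {2: b"complexity ", 7: b"Hidden ", 19: b"behind a simple model."}
doc = ScatteredFile("notes.txt", disk, block_order=[7, 2, 19])
print(doc.read().decode())  # -> Hidden complexity behind a simple model.
```

The bookkeeping that tracks which blocks belong to which file has not disappeared; it has simply moved out of the user's view.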

Removing functionality doesn’t make something simpler, because it removes options. Simple tools have a limited ability to simplify processes. Trying to do something complex with a simple tool is more complex than doing the same thing with a more complex tool.

A useful analogy here is the hand tools used by craftspeople, such as a silversmith’s planishing hammer (a tool used to shape and smooth the surface of metal). Norman highlights that these tools seem simple to the untrained eye. But using them requires great skill and practice. A craftsperson needs to know how to select them from the whole constellation of specialized tools they possess.

In itself, a planishing hammer might seem far, far simpler than, say, a digital photo editing program. Look again, Norman says. We have to compare the photo editing tool with the silversmith’s whole workbench. Both take a lot of time and practice to master. Both consist of many tools that are individually simple. Learning how and when to use them is the complex part.

Norman writes, “Whether something is complicated is in the mind of the beholder.” Looking at a workbench of tools or a digital photo editing program, a novice sees complexity. A professional sees a range of different tools, each of which is simple to use. They know when to use each to make a process easier. Having fewer options would make their life more complex, not simpler, because they wouldn’t be able to break down what they need to do into individually simple steps. A professional’s experience-honed conceptual model helps them navigate a wide range of tools.

***

The conservation of complexity

To do difficult things in the simplest way, we need a lot of options.

Complexity is necessary because it gives us the functionality we need. A useful framework for understanding this is Tesler’s law of the conservation of complexity, which states:

The total complexity of a system is a constant. If you make a user’s interaction with a system simpler, the complexity behind the scenes increases.

The law originates from Lawrence Tesler (1945–2020), a computer scientist specializing in human-computer interactions who worked at Xerox, Apple, Amazon, and Yahoo! Tesler was influential in the development of early graphical interfaces, and he was the co-creator of the copy-and-paste functionality.

Complexity is like energy. It cannot be created or destroyed, only moved somewhere else. When a product or service becomes simpler for users, engineers and designers have to work harder. Norman writes, “With technology, simplifications at the level of usage invariably result in added complexity of the underlying mechanism.” For example, the files and folders conceptual model for computer interfaces doesn’t change how files are stored, but by putting in extra work to translate the process into something recognizable, designers make navigating them easier for users.

Whether something looks simple or is simple to use says little about its overall complexity. “What is simple on the surface can be incredibly complex inside: what is simple inside can result in an incredibly complex surface. So from whose point of view do we measure complexity?”

***

Out of control

Every piece of functionality requires a control—something that makes something happen. The more complex something is, the more controls it needs—whether they are visible to the user or not. Controls may be directly accessible to a user, as with the home button on an iPhone, or they may be behind the scenes, as with an automated thermostat.

From a user’s standpoint, the simplest products and services are those that are fully automated and do not require any intervention (unless something goes wrong).

As long as you pay your bills, the water supply to your house is probably fully automated. When you turn on a tap, you don’t need to have requested there to be water in the pipes first. The companies that manage the water supply handle the complexity.

Or, if you stay in an expensive hotel, you might find your room is always as you want it, with your minifridge fully stocked with your favorites and any toiletries you forgot provided. The staff work behind the scenes to make this happen, without you needing to make requests.

On the other end of the spectrum, we have products and services that require users to control every last step.

A professional photographer is likely to use a camera that needs them to manually set every last setting, from white balance to shutter speed. This means the camera itself doesn’t need automation, but the user needs to operate controls for everything, giving them full control over the results. An amateur photographer might use a camera that automatically chooses these settings so all they need to do is point and shoot. In this case, the complexity transfers to the camera’s inner workings.

In the restaurants inside IKEA stores, customers typically perform tasks such as filling up drinks and clearing away dishes themselves. This means less complexity for staff and much lower prices compared to restaurants where staff do these things.

***

Lessons from the conservation of complexity

The first lesson from Tesler’s law of the conservation of complexity is that how simple something looks is not a reflection of how simple it is to use. Removing controls can mean users need to learn complex sequences to use the same features—similar to how languages with fewer sounds have longer words. One way to conceptualize the movement of complexity is through the notion of trade-offs. If complexity is constant, then there are trade-offs depending on where that complexity is moved.

A very basic example of complexity trade-offs can be found in the history of arithmetic. For centuries, many counting systems all over the world employed tools using stones or beads like a tabula (the Romans) or soroban (the Japanese) to facilitate adding and subtracting numbers. They were easy to use, but not easily portable. Then the Hindu-Arabic system came along (the one we use today) and by virtue of employing columns, and thus not requiring any moving parts, offered a much more portable counting system. However, the portability came with a cost.

Paul Lockhart explains in Arithmetic, “With the Hindu-Arabic system the writing and calculating are inextricably linked. Instead of moving stones or sliding beads, our manipulations become transmutations of the symbols themselves. That means we need to know things. We need to know that one more than 2 is 3, for instance. In other words, the price we pay [for portability] is massive amounts of memorization.” Thus, there is a trade-off. The simpler arithmetic system requires more complexity in terms of the memorization required of the users. We all went through the difficult process of learning mathematical symbols early in life. Although they might seem simple to us now, that’s just because we’re so accustomed to them.
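
As a small illustration of that trade-off, here is a toy Python sketch (my own, not from Lockhart’s book; the column_add function and the ADDITION_FACTS table are invented for illustration). The procedure is compact and portable, but it leans on a memorized table of single-digit facts rather than on moving beads.

```python
# Column-by-column addition in the spirit of the Hindu-Arabic system:
# instead of moving stones, we manipulate symbols, which requires a memorized
# table of single-digit facts (stored explicitly here as a dict).
ADDITION_FACTS = {(a, b): a + b for a in range(10) for b in range(10)}  # the "memorization"

def column_add(x: str, y: str) -> str:
    """Add two non-negative integers written as digit strings, column by column."""
    digits_x, digits_y = x[::-1], y[::-1]   # work from the rightmost column leftwards
    result, carry = [], 0
    for i in range(max(len(digits_x), len(digits_y))):
        a = int(digits_x[i]) if i < len(digits_x) else 0
        b = int(digits_y[i]) if i < len(digits_y) else 0
        total = ADDITION_FACTS[(a, b)] + carry
        result.append(str(total % 10))      # write the digit for this column
        carry = total // 10                 # carry the rest to the next column
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(column_add("478", "256"))  # -> 734
```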

Although perceived simplicity may have greater appeal at first, users are soon frustrated if it means greater operational complexity. Norman writes:

Perceived simplicity is not at all the same as simplicity of usage: operational simplicity. Perceived simplicity decreases with the number of visible controls and displays. Increase the number of visible alternatives and the perceived simplicity drops. The problem is that operational simplicity can be drastically improved by adding more controls and displays. The very things that make something easier to learn and to use can also make it be perceived as more difficult.

Even if it receives a negative reaction before usage, operational simplicity is the more important goal. For example, in a company, having a clearly stated directly responsible person for each project might seem more complex than letting a project be a team effort that falls to whoever is best suited to each part. But in practice, the team-effort approach adds complexity whenever someone tries to move the project forward or needs to know who should hear feedback about problems.

A second lesson is that things don’t always need to be incredibly simple for users. People have an intuitive sense that complexity has to go somewhere. When using a product or service is too simple, users can feel suspicious or like they’ve been robbed of control. They know that a lot more is going on behind the scenes; they just don’t know what it is. Sometimes we need to preserve a minimum level of complexity so that users feel like actual participants. According to legend, cake mixes require the addition of a fresh egg because early users felt that mixes relying on dried eggs made baking seem a bit too lazy and low effort.

An example of desirable minimum complexity is help with homework. For many parents, helping their children with their homework often feels like unnecessary complexity. It is usually subjects and facts they haven’t thought about in years, and they find themselves having to relearn them in order to help their kids. It would be far simpler if the teachers could cover everything in class to a degree that each child needed no additional practice. However, the complexity created by involving parents in the homework process helps make parents more aware of what their children are learning. In addition, they often get insight into areas of both struggle and interest, can identify ways to better connect with their children, and learn where they may want to teach them some broader life skills.

When we seek to make things simpler for other people, we should recognize that there is a point beyond which further simplification makes the experience worse. Simplicity is not an end in itself—other things like speed, usability, and time-saving are. We shouldn’t simplify things from the user standpoint for the sake of it.

If changes don’t make something better for users, we’re just creating unnecessary behind-the-scenes complexity. People want to feel in control, especially when it comes to something important. We want to learn a bit about what’s happening, and an overly simple process teaches us nothing.

A third lesson is that products and services are only as good as what happens when they break. Handling a problem with something that has lots of controls on the user side may be easier for the user. They’re used to being involved in it. If something has been fully automated up until the point where it breaks, users don’t know how to react. The change is jarring, and they may freeze or overreact. Seeing as fully automated things fade into the background, this may be their most salient and memorable interaction with a product or service. If handling a problem is difficult for the user—for example, if there’s a lack of rapid support or instructions available or it’s hard to ascertain what went wrong in the first place—they may come away with a negative overall impression, even if everything worked fine for years beforehand.

A big challenge in the development of self-driving cars is that a driver needs to be able to take over if the car encounters a problem. But if someone hasn’t had to operate the car manually for a while, they may panic or forget what to do. So it’s a good idea to limit how long the car drives itself for. The same is purportedly true for airplane pilots. If the plane does too much of the work, the pilot won’t cope well in an emergency.

A fourth lesson is the importance of thinking about how the level of control you give your customers or users influences your workload. For a graphic designer, asking a client to detail exactly how they want their logo to look makes their work simpler. But it might be hard work for the client, who might not know what they want or may make poor choices. A more experienced designer might ask a client for much less information and instead put the effort into understanding their overall brand and deducing their needs from subtle clues, then figuring out the details themselves. The more autonomy a manager gives their team, the lower their workload, and vice versa.

If we accept that complexity is a constant, we need to always be mindful of who is bearing the burden of that complexity.

 

The Spiral of Silence

Our desire to fit in with others means we don’t always say what we think. We only express opinions that seem safe. Here’s how the spiral of silence works and how we can discover what people really think.

***

Be honest: How often do you feel as if you’re really able to express your true opinions without fearing judgment? How often do you bite your tongue because you know you hold an unpopular view? How often do you avoid voicing any opinion at all for fear of having misjudged the situation?

Even in societies with robust free speech protections, most people don’t often say what they think. Instead they take pains to weigh up the situation and adjust their views accordingly. This comes down to the “spiral of silence,” a human communication theory developed by German researcher Elisabeth Noelle-Neumann in the 1960s and ’70s. The theory explains how societies form collective opinions and how we make decisions surrounding loaded topics.

Let’s take a look at how the spiral of silence works and how understanding it can give us a more realistic picture of the world.

***

How the spiral of silence works

According to Noelle-Neumann’s theory, our willingness to express an opinion is a direct result of how popular or unpopular we perceive it to be. If we think an opinion is unpopular, we will avoid expressing it. If we think it is popular, we will make a point of showing we think the same as others.

Controversy is also a factor—we may be willing to express an unpopular uncontroversial opinion but not an unpopular controversial one. We perform a complex dance whenever we share views on anything morally loaded.

Our perception of how “safe” it is to voice a particular view comes from the clues we pick up, consciously or not, about what everyone else believes. We make an internal calculation based on signs like what the mainstream media reports, what we overhear coworkers discussing on coffee breaks, what our high school friends post on Facebook, or prior responses to things we’ve said.

We also weigh up the particular context, based on factors like how anonymous we feel or whether our statements might be recorded.

As social animals, we have good reason to be aware of whether voicing an opinion might be a bad idea. Cohesive groups tend to have similar views. Anyone who expresses an unpopular opinion risks social exclusion or even ostracism within a particular context or in general. This may be because there are concrete consequences, such as losing a job or even legal penalties. Or there may be less official social consequences, like people being less friendly or willing to associate with you. Those with unpopular views may suppress them to avoid social isolation.

Avoiding social isolation is an important instinct. From an evolutionary biology perspective, remaining part of a group is important for survival, hence the need to at least appear to share the same views as everyone else. The only time someone will feel safe voicing a divergent opinion is if they think the group will share it or be accepting of divergence, or if they view the consequences of rejection as low. But biology doesn’t just dictate how individuals behave—it ends up shaping communities. It’s almost impossible for us to step outside of that need for acceptance.

A feedback loop pushes minority opinions towards less and less visibility—hence why Noelle-Neumann used the word “spiral.” Each time someone voices a majority opinion, they reinforce the sense that it is safe to do so. Each time someone receives a negative response for voicing a minority opinion, it signals to anyone sharing their view to avoid expressing it.
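
To see how quickly that feedback loop can run, here is a toy simulation. It is my own sketch, not part of Noelle-Neumann’s work; the spiral_of_silence function and all its parameters are invented. Agents keep voicing their view only while the support they perceive among voiced opinions meets their personal comfort threshold.

```python
import random

# Toy model of the spiral of silence (illustrative only; parameters are invented).
# Agents speak only if the support they perceive among *voiced* opinions meets
# their personal threshold, so the minority's visibility shrinks round by round.

def spiral_of_silence(n=1000, minority_share=0.4, rounds=6, seed=42):
    random.seed(seed)
    holds_minority = [random.random() < minority_share for _ in range(n)]
    threshold = [random.uniform(0.2, 0.6) for _ in range(n)]  # visible support each agent needs
    speaking = [True] * n  # everyone starts out willing to speak

    for r in range(rounds):
        voiced = [holds_minority[i] for i in range(n) if speaking[i]]
        minority_visibility = sum(voiced) / len(voiced) if voiced else 0.0
        print(f"round {r}: the minority view makes up {minority_visibility:.0%} of what is said aloud")
        for i in range(n):
            support = minority_visibility if holds_minority[i] else 1 - minority_visibility
            speaking[i] = support >= threshold[i]

spiral_of_silence()
# Typical run: roughly 40% of voiced opinions, then ~25%, ~8%, then 0% -
# even though the true share of people holding the minority view never changed.
```

Each round, the bolder minority members who spoke previously see less apparent support and fall silent in turn, which is the spiral.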

***

An example of the spiral of silence

A 2014 Pew Research survey of 1,801 American adults examined the prevalence of the spiral of silence on social media. Researchers asked people about their opinions on one public issue: Edward Snowden’s 2013 revelations of US government surveillance of citizens’ phones and emails. They selected this issue because, while controversial, prior surveys suggested a roughly even split in public opinion surrounding whether the leaks were justified and whether such surveillance was reasonable.

Asking respondents about their willingness to share their opinions in different contexts highlighted how the spiral of silence plays out. 86% of respondents were willing to discuss the issue in person, but only about half as many were willing to post about it on social media. Of the 14% who would not consider discussing the Snowden leaks in person, almost none (0.3%) were willing to turn to social media instead.

Both in person and online, respondents reported far greater willingness to share their views with people they knew agreed with them—three times as likely in the workplace and twice as likely in a Facebook discussion.

***

The implications of the spiral of silence

The end result of the spiral of silence is a point where no one publicly voices a minority opinion, regardless of how many people believe it. The first implication of this is that the picture we have of what most people believe is not always accurate. Many people nurse opinions they would never articulate to their friends, coworkers, families, or social media followings.

A second implication is that the possibility of discord makes us less likely to voice an opinion at all, assuming we are not trying to drum up conflict. In the aforementioned Pew survey, people were more comfortable discussing a controversial story in person than online. An opinion voiced online has a much larger potential audience than one voiced face to face, and it’s harder to know exactly who will see it. Both of these factors increase the risk of someone disagreeing.

If we want to gauge what people think about something, we need to remove the possibility of negative consequences. For example, imagine a manager who often sets overly tight deadlines, causing immense stress to their team. Everyone knows this is a problem and discusses it among themselves, recognizing that more realistic deadlines would be motivating, and unrealistic ones are just demoralizing. However, no one wants to say anything because they’ve heard the manager say that people who can’t handle pressure don’t belong in that job. If the manager asks for feedback on their leadership style, they won’t hear what they need to hear unless that feedback is anonymous.

A third implication is that what seems like a sudden change in mainstream opinions can in fact be the result of a shift in what is acceptable to voice, not in what people actually think. A prominent public figure getting away with saying something controversial may make others feel safe to do the same. A change in legislation may make people comfortable saying what they already thought.

For instance, if recreational marijuana use is legalized where someone lives, they might freely remark to a coworker that they consume it and consider it harmless. Even if that was true before the legislation change, saying so would have been too fraught, so they might have lied or avoided the topic. The result is that mainstream opinions can appear to change a great deal in a short time.

A fourth implication is that highly vocal holders of a minority opinion can end up having a disproportionate influence on public discourse. This is especially true if that minority is within a group that already has a lot of power.

While this was less the case during Noelle-Neumann’s time, the internet makes it possible for a vocal minority to make their opinions seem far more prevalent than they actually are—and therefore more acceptable. Indeed, the most extreme views on any spectrum can end up seeming most normal online because people with a moderate take have less of an incentive to make themselves heard.

In anonymous environments, the spiral of silence can end up reversing itself, making the most fringe views the loudest.

When Technology Takes Revenge

While runaway cars and vengeful stitched-together humans may be the stuff of science fiction, technology really can take revenge on us. Seeing technology as part of a complex system can help us avoid costly unintended consequences. Here’s what you need to know about revenge effects.

***

By many metrics, technology keeps making our lives better. We live longer, healthier, richer lives with more options than ever before for things like education, travel, and entertainment. Yet there is often a sense that we have lost control of our technology in many ways, and thus we end up victims of its unanticipated impacts.

Edward Tenner argues in Why Things Bite Back: Technology and the Revenge of Unintended Consequences that we often have to deal with “revenge effects.” Tenner coined this term to describe the ways in which technologies can solve one problem while creating worse problems, creating new types of problems, or shifting the harm elsewhere. In short, they bite back.

Although Why Things Bite Back was written in the late 1990s and many of its specific examples and details are now dated, it remains an interesting lens for considering issues we face today. The revenge effects Tenner describes haunt us still. As the world becomes more complex and interconnected, it’s easy to see that the potential for unintended consequences will increase.

Thus, when we introduce a new piece of technology, it would be wise to consider whether we are interfering with a wider system. If that’s the case, we should consider what might happen further down the line. However, as Tenner makes clear, once the factors involved get complex enough, we cannot anticipate them with any accuracy.

Neither Luddite nor alarmist in nature, the notion of revenge effects can help us better understand the impact of intervening in complex systems. But we need to be careful. Although second-order thinking is invaluable, it cannot predict the future with total accuracy. Understanding revenge effects is primarily a reminder of the value of caution, not of specific risks.

***

Types of revenge effects

Tenner describes four types of revenge effects:

  1. Repeating effects: occur when more efficient processes end up forcing us to do the same things more often, meaning they don’t free up more of our time. Better household appliances have led to higher standards of cleanliness, meaning people end up spending the same amount of time—or more—on housework.
  2. Recomplicating effects: occur when processes become more and more complex as the technology behind them improves. Tenner gives the now-dated example of phone numbers becoming longer with the move away from rotary phones. A modern example might be lighting systems that need to be operated through an app, meaning a visitor cannot simply flip a switch.
  3. Regenerating effects: occur when attempts to solve a problem end up creating additional risks. Targeting pests with pesticides can make them increasingly resistant to harm or kill off their natural predators. Widespread use of antibiotics to control certain conditions has led to resistant strains of bacteria that are harder to treat.
  4. Rearranging effects: occur when costs are transferred elsewhere so risks shift and worsen. Air conditioning units on subways cool down the trains—while releasing extra heat and making the platforms warmer. Vacuum cleaners can throw dust mite pellets into the air, where they remain suspended and are more easily breathed in. Shielding beaches from waves transfers the water’s force elsewhere.

***

Recognizing unintended consequences

The more we try to control our tools, the more they can retaliate.

Revenge effects occur when the technology for solving a problem ends up making it worse due to unintended consequences that are almost impossible to predict in advance. A smartphone might make it easier to work from home, but always being accessible means many people end up working more.

Things go wrong because technology does not exist in isolation. It interacts with complex systems, meaning any problems spread far from where they begin. We can never merely do one thing.

Tenner writes: “Revenge effects happen because new structures, devices, and organisms react with real people in real situations in ways we could not foresee.” He goes on to add that “complexity makes it impossible for anyone to understand how the system might act: tight coupling spreads problems once they begin.”

Prior to the Industrial Revolution, technology typically consisted of tools that served as an extension of the user. They were not, Tenner argues, prone to revenge effects because they did not function as parts in an overall system like modern technology. He writes that “a machine can’t appear to have a will of its own unless it is a system, not just a device. It needs parts that interact in unexpected and sometimes unstable and unwanted ways.”

Revenge effects often involve the transformation of defined, localized risks into nebulous, gradual ones involving the slow accumulation of harm. Compared to visible disasters, these are much harder to diagnose and deal with.

Large localized accidents, like a plane crash, tend to prompt the creation of greater safety standards, making us safer in the long run. Small cumulative ones don’t.

Cumulative problems, compared to localized ones, aren’t easy to measure or even necessarily be concerned about. Tenner points to the difference between reactions in the 1990s to the risk of nuclear disasters compared to global warming. While both are revenge effects, “the risk from thermonuclear weapons had an almost built-in maintenance compulsion. The deferred consequences of climate change did not.”

Many revenge effects are the result of efforts to improve safety. “Our control of the acute has indirectly promoted chronic problems,” Tenner writes. Both X-rays and smoke alarms cause a small number of cancers each year. Although they save many more lives and avoiding them is far riskier, we don’t get the benefits without a cost. The widespread removal of asbestos has reduced fire safety, and disrupting the material is often more harmful than leaving it in place.

***

Not all effects exact revenge

A revenge effect is not a side effect—defined as a cost that goes along with a benefit. Being able to sanitize a public water supply, for example, has significant positive health outcomes. It also has a side effect of necessitating an organizational structure that can manage and monitor that supply.

Rather, a revenge effect must actually reverse the benefit for at least a small subset of users. For example, the greater ease of typing on a laptop compared to a typewriter has led to an increase in carpal tunnel syndrome and similar health consequences. It turns out that the physical effort required to press typewriter keys and move the carriage protected workers from some of the harmful effects of long periods of time spent typing.

Likewise, a revenge effect is not just a tradeoff—a benefit we forgo in exchange for some other benefit. As Tenner writes:

If legally required safety features raise airline fares, that is a tradeoff. But suppose, say, requiring separate seats (with child restraints) for infants, and charging a child’s fare for them, would lead many families to drive rather than fly. More children could in principle die from transportation accidents than if the airlines had continued to permit parents to hold babies on their laps. This outcome would be a revenge effect.

***

In support of caution

In the conclusion of Why Things Bite Back, Tenner writes:

We seem to worry more than our ancestors, surrounded though they were by exploding steamboat boilers, raging epidemics, crashing trains, panicked crowds, and flaming theaters. Perhaps this is because the safer life imposes an ever increasing burden of attention. Not just in the dilemmas of medicine but in the management of natural hazards, in the control of organisms, in the running of offices, and even in the playing of games there are, not necessarily more severe, but more subtle and intractable problems to deal with.

While Tenner does not proffer explicit guidance for dealing with the phenomenon he describes, one main lesson we can draw from his analysis is that revenge effects are to be expected, even if they cannot be predicted. This is because “the real benefits usually are not the ones that we expected, and the real perils are not those we feared.”

Chains of cause and effect within complex systems are stranger than we can often imagine. We should expect the unexpected, rather than expecting particular effects.

While we cannot anticipate all consequences, we can prepare for their existence and factor it into our estimation of the benefits of new technology. Indeed, we should avoid becoming overconfident about our ability to see the future, even when we use second-order thinking. As much as we might prepare for a variety of impacts, revenge effects may be dependent on knowledge we don’t yet possess. We should expect larger revenge effects the more we intensify something (e.g., making cars faster means worse crashes).

Before we intervene in a system, assuming it can only improve things, we should be aware that our actions can do the opposite or do nothing at all. Our estimations of benefits are likely to be more realistic if we are skeptical at first.

If we bring more caution to our attempts to change the world, we are better able to avoid being bitten.

 

A Primer on Algorithms and Bias

The growing influence of algorithms on our lives means we owe it to ourselves to better understand what they are and how they work. Understanding how the data we use to inform algorithms influences the results they give can help us avoid biases and make better decisions.

***

Algorithms are everywhere: driving our cars, designing our social media feeds, dictating which mixer we end up buying on Amazon, diagnosing diseases, and much more.

Two recent books explore algorithms and the data behind them. In Hello World: Being Human in the Age of Algorithms, mathematician Hannah Fry shows us the potential and the limitations of algorithms. And Invisible Women: Data Bias in a World Designed for Men by writer, broadcaster, and feminist activist Caroline Criado Perez demonstrates how we need to be much more conscientious of the quality of the data we feed into them.

Humans or algorithms?

First, what is an algorithm? Explanations of algorithms can be complex. Fry explains that at their core, they are defined as step-by-step procedures for solving a problem or achieving a particular end. We tend to use the term to refer to mathematical operations that crunch data to make decisions.
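
A minimal sketch of what that means in practice follows; the should_flag_transaction rule and its thresholds are invented for illustration, not taken from Fry’s book or any real system. The point is simply that an algorithm is an explicit, step-by-step procedure that turns data into a decision.

```python
# A toy "algorithm" in the step-by-step sense: the rule and numbers are invented.

def should_flag_transaction(amount: float, country_matches_home: bool, recent_count: int) -> bool:
    # Step 1: start with a score of zero.
    score = 0
    # Step 2: large amounts add to the score.
    if amount > 1000:
        score += 2
    # Step 3: spending from an unfamiliar country adds to the score.
    if not country_matches_home:
        score += 2
    # Step 4: many recent transactions add to the score.
    if recent_count > 5:
        score += 1
    # Step 5: flag the transaction if the score crosses a threshold.
    return score >= 3

print(should_flag_transaction(amount=1500.0, country_matches_home=False, recent_count=2))  # True
```

Real-world algorithms are usually far more elaborate, and their rules are often learned from data rather than written by hand, but the underlying idea is the same: a defined procedure from inputs to a decision.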

When it comes to decision-making, we don’t necessarily have to choose between doing it ourselves and relying wholly on algorithms. The best outcome may be a thoughtful combination of the two.

We all know that in certain contexts, humans are not the best decision-makers. For example, when we are tired, or when we already have a desired outcome in mind, we may ignore relevant information. In Thinking, Fast and Slow, Daniel Kahneman gave multiple examples from his research with Amos Tversky that demonstrated we are heavily influenced by cognitive biases such as availability and anchoring when making certain types of decisions. It’s natural, then, that we would want to employ algorithms that aren’t vulnerable to the same tendencies. In fact, their main appeal for use in decision-making is that they can override our irrationalities.

Algorithms, however, aren’t without their flaws. One of the obvious ones is that because algorithms are written by humans, we often code our biases right into them. Criado Perez offers many examples of algorithmic bias.

For example, an online platform designed to help companies find computer programmers looked through activity such as sharing and developing code in online communities, as well as visiting Japanese manga (comics) sites. People who visited certain sites frequently received higher scores, making them more visible to recruiters.

However, Criado Perez presents the analysis of this recruiting algorithm by Cathy O’Neil, scientist and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, who points out that “women, who do 75% of the world’s unpaid care work, may not have the spare leisure time to spend hours chatting about manga online . . . and if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of women in the industry will probably avoid it.”

Criado Perez postulates that the authors of the recruiting algorithm didn’t intend to encode a bias that discriminates against women. But, she says, “if you aren’t aware of how those biases operate, if you aren’t collecting data and taking a little time to produce evidence-based processes, you will continue to blindly perpetuate old injustices.”

Fry also covers algorithmic bias and asserts that “wherever you look, in whatever sphere you examine, if you delve deep enough into any system at all, you’ll find some kind of bias.” We aren’t perfect—and we shouldn’t expect our algorithms to be perfect, either.

In order to have a conversation about the value of an algorithm versus a human in any decision-making context, we need to understand, as Fry explains, that “algorithms require a clear, unambiguous idea of exactly what we want them to achieve and a solid understanding of the human failings they are replacing.”

Garbage in, garbage out

No algorithm is going to be successful if the data it uses is junk. And there’s a lot of junk data in the world. Far from being a new problem, Criado Perez argues that “most of recorded human history is one big data gap.” And that has a serious negative impact on the value we are getting from our algorithms.

Criado Perez explains the situation this way: We live in “a world [that is] increasingly reliant on and in thrall to data. Big data. Which in turn is panned for Big Truths by Big Algorithms, using Big Computers. But when your data is corrupted by big silences, the truths you get are half-truths, at best.”

A common human bias is one regarding the universality of our own experience. We tend to assume that what is true for us is generally true across the population. We have a hard enough time considering how things may be different for our neighbors, let alone for other genders or races. It becomes a serious problem when we gather data about one subset of the population and mistakenly assume that it represents all of the population.

For example, Criado Perez examines the data gap in relation to incorrect information being used to inform decisions about safety and women’s bodies. From personal protective equipment like bulletproof vests that don’t fit properly and thus increase the chances of the women wearing them getting killed to levels of exposure to toxins that are unsafe for women’s bodies, she makes the case that without representative data, we can’t get good outputs from our algorithms. She writes that “we continue to rely on data from studies done on men as if they apply to women. Specifically, Caucasian men aged twenty-five to thirty, who weigh 70 kg. This is ‘Reference Man’ and his superpower is being able to represent humanity as a whole. Of course, he does not.” Her book covers a wide variety of disciplines and situations where the gender gap in data leads to worse outcomes for women.

The limits of what we can do

Although there is a lot we can do better when it comes to designing algorithms and collecting the data sets that feed them, it’s also important to consider their limits.

We need to accept that algorithms can’t solve all problems, and there are limits to their functionality. In Hello World, Fry devotes a chapter to the use of algorithms in justice. Specifically, algorithms designed to provide information to judges about the likelihood of a defendant committing further crimes. Our first impulse is to say, “Let’s not rely on bias here. Let’s not have someone’s skin color or gender be a key factor for the algorithm.” After all, we can employ that kind of bias just fine ourselves. But simply writing bias out of an algorithm is not as easy as wishing it so. Fry explains that “unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at predicting across the board and makes false positive and false negative mistakes at the same rate for every group of defendants.”
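
The arithmetic behind that claim can be made concrete with a toy calculation of my own; it is not from Hello World, and the false_positive_rate helper and the numbers are assumptions. Hold the true positive rate and the precision fixed for two groups with different base rates of reoffending, and the implied false positive rates come apart.

```python
# Toy illustration (invented numbers): if two groups have different base rates,
# a predictor with the same true positive rate (TPR) and the same precision (PPV)
# for both groups must have different false positive rates.
# Derivation from the confusion matrix: FPR = TPR * (p / (1 - p)) * (1 - PPV) / PPV,
# where p is the group's base rate.

def false_positive_rate(base_rate: float, tpr: float, ppv: float) -> float:
    """False positive rate implied by a given base rate, true positive rate, and precision."""
    return tpr * (base_rate / (1 - base_rate)) * (1 - ppv) / ppv

tpr, ppv = 0.7, 0.8  # identical "accuracy" numbers for both groups (hypothetical values)
for group, base_rate in [("group A", 0.5), ("group B", 0.2)]:
    fpr = false_positive_rate(base_rate, tpr, ppv)
    print(f"{group}: base rate {base_rate:.0%} -> false positive rate {fpr:.1%}")
# group A: base rate 50% -> false positive rate 17.5%
# group B: base rate 20% -> false positive rate 4.4%
```

Whichever error rate you choose to equalize, the mismatch has to reappear somewhere else as long as the base rates differ.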

Fry comes back to such limits frequently throughout her book, exploring them in various disciplines. She demonstrates to the reader that “there are boundaries to the reach of algorithms. Limits to what can be quantified.” Perhaps a better understanding of those limits is needed to inform our discussions of where we want to use algorithms.

There are, however, other limits that we can do something about. Both authors make the case for more education about algorithms and their input data. Lack of understanding shouldn’t hold us back. Algorithms that have a significant impact on our lives specifically need to be open to scrutiny and analysis. If an algorithm is going to put you in jail or impact your ability to get a mortgage, then you ought to be able to have access to it.

Most algorithm writers and the companies they work for wave the “proprietary” flag and refuse to open themselves up to public scrutiny. Many algorithms are a black box—we don’t actually know how they reach the conclusions they do. But Fry says that shouldn’t deter us. Pursuing laws (such as the data access and protection rights being instituted in the European Union) and structures (such as an algorithm-evaluating body playing a role similar to the one the U.S. Food and Drug Administration plays in evaluating whether pharmaceuticals can be made available to the U.S. market) will help us decide as a society what we want and need our algorithms to do.

Where do we go from here?

Algorithms aren’t going away, so it’s best to acquire the knowledge needed to figure out how they can help us create the world we want.

Fry suggests that one way to approach algorithms is to “imagine that we designed them to support humans in their decisions, rather than instruct them.” She envisions a world where “the algorithm and the human work together in partnership, exploiting each other’s strengths and embracing each other’s flaws.”

Part of getting to a world where algorithms provide great benefit is to remember how diverse our world really is and make sure we get data that reflects the realities of that diversity. We can either actively change the algorithm, or we change the data set. And if we do the latter, we need to make sure we aren’t feeding our algorithms data that, for example, excludes half the population. As Criado Perez writes, “when we exclude half of humanity from the production of knowledge, we lose out on potentially transformative insights.”

Given how complex the world of algorithms is, we need all the amazing insights we can get. Algorithms themselves perhaps offer the best hope, because they have the inherent flexibility to improve as we do.

Fry gives this explanation: “There’s nothing inherent in [these] algorithms that means they have to repeat the biases of the past. It all comes down to the data you give them. We can choose to be ‘crass empiricists’ (as Richard Berk put it) and follow the numbers that are already there, or we can decide that the status quo is unfair and tweak the numbers accordingly.”

We can get excited about the possibilities that algorithms offer us and use them to create a world that is better for everyone.

The Ingredients For Innovation

Inventing new things is hard. Getting people to accept and use new inventions is often even harder. For most people, at most times, technological stagnation has been the norm. What does it take to escape from that and encourage creativity?

***

“Technological progress requires above all tolerance toward the unfamiliar and the eccentric.”

— Joel Mokyr, The Lever of Riches

Writing in The Lever of Riches: Technological Creativity and Economic Progress, economic historian Joel Mokyr asks why, when we look at the past, some societies have been considerably more creative than others at particular times. Some have experienced sudden bursts of progress, while others have stagnated for long periods of time. By examining the history of technology and identifying the commonalities between the most creative societies and time periods, Mokyr offers useful lessons we can apply as both individuals and organizations.

What does it take for a society to be technologically creative?

When trying to explain something as broad and complex as technological creativity, it’s important not to fall prey to the lure of a single explanation. There are many possible reasons for anything that happens, and it’s unwise to believe explanations that are too tidy. Mokyr disregards some of the common simplistic explanations for technological creativity, such as that war prompts creativity or people with shorter life spans are less likely to expend time on invention.

Mokyr explores some of the possible factors that contribute to a society’s technological creativity. In particular, he seeks to explain why Europe experienced such a burst of technological creativity from around 1500 to the Industrial Revolution, when prior to that it had lagged far behind the rest of the world. Mokyr explains that “invention occurs at the level of the individual, and we should address the factors that determine individual creativity. Individuals, however, do not live in a vacuum. What makes them implement, improve and adapt new technologies, or just devise small improvements in the way they carry out their daily work depends on the institutions and the attitudes around them.” While environment isn’t everything, certain conditions are necessary for technological creativity.

He identifies the following three key factors in an environment that affect the occurrence of invention and innovation.

The social infrastructure

First of all, the society needs a supply of “ingenious and resourceful innovators who are willing and able to challenge their physical environment for their own improvement.” Fostering these attributes requires factors like good nutrition, religious beliefs that are not overly conservative, and access to education. It is in part about the absence of negative factors—necessitous people have less capacity for creativity. Mokyr writes: “The supply of talent is surely not completely exogenous; it responds to incentives and attitudes. The question that must be confronted is why in some societies talent is unleashed upon technical problems that eventually change the entire productive economy, whereas in others this kind of talent is either repressed or directed elsewhere.”

One partial explanation for Europe’s creativity from 1500 to the Industrial Revolution is that it was often feasible for people to relocate to a different country if the conditions in their current one were suboptimal. A creative individual finding themselves under a conservative government seeking to maintain the technological status quo was able to move elsewhere.

The ability to move around was also part of the success of the Abbasid Caliphate, an empire that stretched from India to the Iberian Peninsula from about 750 to 1250. Economists Maristella Botticini and Zvi Eckstein write in The Chosen Few: How Education Shaped Jewish History, 70–1492 that “it was relatively easy to move or migrate” within the Abbasid empire, especially with its “common language (Arabic) and a uniform set of institutions and laws over an immense area, greatly [favoring] trade and commerce.”

It also matters whether creative people are channeled into technological fields or into other fields, like the military. In Britain during and prior to the Industrial Revolution, Mokyr considers invention to have been the main possible path for creative individuals, as other areas like politics leaned towards conformism.

The social incentives

Second, there need to be incentives in place to encourage innovation. This is of extra importance for macroinventions – completely new inventions, not improvements on existing technology – which can require a great leap of faith. The person who comes up with a faster horse knows it has a market; the one who comes up with a car does not. Such incentives are most often financial, but not always. Awards, positions of power, and recognition also count. Mokyr explains that diverse incentives encourage the patience needed for creativity: “Sustained innovation requires a set of individuals willing to absorb large risks, sometimes to wait many years for the payoff (if any.)”

Patent systems have long served as an incentive, allowing inventors to feel confident they will profit from their work. Patents first appeared in northern Italy in the early fifteenth century; Venice implemented a formal system in 1474. According to Mokyr, the monopoly rights mining contractors received over the discovery of hitherto unknown mineral resources provided inspiration for the patent system.

However, Mokyr points out that patents were not always as effective as inventors hoped. Indeed, they may have provided the incentive without any actual protection. Many inventors ended up spending unproductive time and money on patent litigation, which in some cases outweighed their profits, discouraged them from future endeavors, or left them too drained to invent more. Eli Whitney, inventor of the cotton gin, claimed his legal costs outweighed his profits. Mokyr proposes that though patent laws may be imperfect, they are, on balance, good for society as they incentivize invention while not altogether preventing good ideas from circulating and being improved upon by others.

The ability to make money from inventions is also related to geographic factors. In a country with good communication and transport systems, with markets in different areas linked, it is possible for something new to sell further afield. A bigger prospective market means stronger financial incentives. The extensive, accessible, and well-maintained trade routes during the Abbasid empire allowed for innovations to diffuse throughout the region. And during the Industrial Revolution in Britain, railroads helped bring developments to the entire country, ensuring inventors didn’t just need to rely on their local market.

The social attitude

Third, a technologically creative society must be diverse and tolerant. People must be open to new ideas and outré individuals. They must not only be willing to consider fresh ideas from within their own society but also happy to take inspiration from (or to outright steal) those coming from elsewhere. If a society views knowledge coming from other countries as suspect or even dangerous, unable to see its possible value, it is at a disadvantage. If it eagerly absorbs external influences and adapts them for its own purposes, it is at an advantage. Europeans were willing to pick up on ideas from each other and elsewhere in the world. As Mokyr puts it, “Inventions such as the spinning wheel, the windmill, and the weight-driven clock recognized no boundaries.”

In the Abbasid empire, there was an explosion of innovation that drew on the knowledge gained from other regions. Botticini and Eckstein write:

“The Abbasid period was marked by spectacular developments in science, technology, and the liberal arts. . . . The Muslim world adopted papermaking from China, improving Chinese technology with the invention of paper mills many centuries before paper was known in the West. Muslim engineers made innovative industrial uses of hydropower, tidal power, wind power, steam power, and fossil fuels. . . . Muslim engineers invented crankshafts and water turbines, employed gears in mills and water-raising machines, and pioneered the use of dams as a source of waterpower. Such advances made it possible to mechanize many industrial tasks that had previously been performed by manual labor.”

Within societies, certain people and groups seek to maintain the status quo because it is in their interests to do so. Mokyr writes that “Some of these forces protect vested interests that might incur losses if innovations were introduced, others are simply don’t-rock-the-boat kind of forces.” In order for creative technology to triumph, it must be able to overcome those forces. While there is always going to be conflict, the most creative societies are those where it is still possible for the new thing to take over. If those who seek to maintain the status quo have too much power, a society will end up stagnating in terms of technology. Ways of doing things can prevail not because they are the best, but because there is enough interest in keeping them that way.

In some historical cases in Europe, it was easier for new technologies to spread in the countryside, where the lack of guilds compensated for the lower density of people. City guilds had a huge incentive to maintain the status quo. The inventor of the ribbon loom in Danzig in 1579 was allegedly drowned by the city council, while “in the fifteenth century, the scribes guild of Paris succeeded in delaying the introduction of printing in Paris by 20 years.”

Indeed, tolerance could be said to matter more for technological creativity than education. As Mokyr repeatedly highlights, many inventors and innovators throughout history were not educated to a high level—or even at all. Up until relatively recently, most technology preceded the science explaining how it actually worked. People tinkered, looking to solve problems and experiment.

Unlike modern times, Mokyr explains, for most of history technology did not emerge from “specialized research laboratories paid for by research and development budgets and following strategies mapped out by corporate planners well-informed by marketing analysts. Technological change occurred mostly through new ideas and suggestions occurring if not randomly, then in a highly unpredictable fashion.”

When something worked, it worked, even if no one knew why or the popular explanation later proved incorrect. Steam engines are one such example. The notion that all technologies function under the same set of physical laws was not standard until Galileo. People need space to be a bit weird.

Those who were scientists and academics during some of Europe’s most creative periods worked in a different manner than what we expect today, often working on the practical problems they faced themselves. Mokyr gives Galileo as an example, as he “built his own telescopes and supplemented his salary as a professor at the University of Padua by making and repairing instruments.” The distinction between one who thinks and one who makes was not yet clear at the time of the Renaissance. Wherever and whenever making has been a respectable activity for thinkers, creativity flourishes.

Seeing as technological creativity requires a particular set of circumstances, it is not the norm. Throughout history, Mokyr writes, “Technological progress was neither continuous nor persistent. Genuinely creative societies were rare, and their bursts of creativity usually short-lived.”

Not only did people need to be open to new ideas, they also needed to be willing to actually start using new technologies. This often required a big leap of faith. If you’re a farmer just scraping by, trying a new way of ploughing your fields could mean starving to death if it doesn’t work out. Innovations can take a long time to diffuse, with riskier ones taking the longest.

How can we foster the right environment?

So what can we learn from The Lever of Riches that we can apply as individuals and in organizations?

The first lesson is that creativity does not occur in a vacuum. It requires certain necessary conditions to occur. If we want to come up with new ideas as individuals, we should consider ourselves as part of a system. In particular, we need to consider what might impede us and what can encourage us. We need to eradicate anything that will get in the way of our thinking, such as limiting beliefs or lack of sleep.

We need to be clear on what motivates us to be creative, ensuring what we endeavor to do will be worthwhile enough to drive us through the associated effort. When we find ourselves creatively blocked, it’s often because we’re not in touch with what inspires us to create in the first place.

Within an organization, such factors are equally important. If you want your employees to be creative, it’s important to consider the system they’re part of. Is there anything blocking their thinking? Is a good incentive structure in place (bearing in mind incentives are not solely financial)?

Another lesson is that tolerance for divergence is essential for encouraging creativity. This may seem like part of the first lesson, but it’s crucial enough to consider in isolation.

As individuals, when we seek to come up with new ideas, we need to ask ourselves the following questions: Am I exposing myself to new material and inspirations or staying within a filter bubble? Am I open to unusual ways of thinking? Am I spending too much time around people who discourage deviation from the status quo? Am I being tolerant of myself, allowing myself to make mistakes and have bad ideas in service of eventually having good ones? Am I spending time with unorthodox people who encourage me to think differently?

Within organizations, it’s worth asking the following questions: Are new ideas welcomed or shot down? Is it in the interests of many to protect the status quo? Are ideas respected regardless of their source? Are people encouraged to question norms?

A final lesson is that the forces of inertia are always acting to discourage creativity. Invention is not the natural state of things—it is an exception. Technological stagnation is the norm. In most places, at most times, people have not come up with new technology. It takes a lot for individuals to be willing to wrest something new from nothing or to question whether something in existence can be made better. But when those acts do occur, they can have an immeasurable impact on our world.

Gates’ Law: How Progress Compounds and Why It Matters

“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

It’s unclear exactly who first made that statement, when they said it, or how it was phrased. The most probable source is Roy Amara, a Stanford computer scientist. In the 1960s, Amara told colleagues that he believed that “we overestimate the impact of technology in the short-term and underestimate the effect in the long run.” For this reason, variations on that phrase are often known as Amara’s Law. However, Bill Gates made a similar statement (possibly paraphrasing Amara), so it’s also known as Gates’s Law.

You may have seen the same phrase attributed to Arthur C. Clarke, Tony Robbins, or Peter Drucker. There’s a good reason why Amara’s words have been appropriated by so many thinkers—they apply to so much more than technology. Almost universally, we tend to overestimate what can happen in the short term and underestimate what can happen in the long term.

Thinking about the future does not require endless hyperbole or even forecasting, which is usually pointless anyway. Instead, there are patterns we can identify if we take a long-term perspective.

Let’s look at what Bill Gates meant and why it matters.

Moore’s Law

Gates’s Law is often mentioned in conjunction with Moore’s Law. This is generally quoted as some variant of “the number of transistors on a square inch of silicon doubles every eighteen months.” However, the name is misleading, at least if you think of laws as invariant: it’s more an observation of a historical trend than a law.

When Gordon Moore, co-founder of Fairchild Semiconductor and Intel, noticed in 1965 that the number of transistors on a chip doubled every year, he was not predicting that would continue in perpetuity. Indeed, Moore revised the doubling time to two years a decade later. But the world latched onto his words. Moore’s Law has been variously treated as a target, a limit, a self-fulfilling prophecy, and a physical law as certain as the laws of thermodynamics.

Moore’s Law is now considered outdated after holding true for several decades, but the concept hasn’t gone anywhere. It is often treated as a general principle of technological development: certain performance metrics have a characteristic doubling time, the opposite of a half-life.

Why is Moore’s Law related to Amara’s Law?

Exponential growth is something we struggle to grasp. As University of Colorado physics professor Albert Allen Bartlett famously put it, “The greatest shortcoming of the human race is our inability to understand the exponential function.”

When we talk about Moore’s Law, we easily underestimate what happens when a value keeps doubling. It’s not hard to imagine your laptop getting twice as fast in a year. Where it gets tricky is imagining what that means on a longer timescale: ten annual doublings multiply performance by more than a thousand times. There is a reason your iPhone has more processing power than the computers that flew the first space shuttle.
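
To make that compounding concrete, here is a minimal sketch of what a fixed doubling time implies. The function name and the performance figures are made up for illustration; real hardware never doubles this cleanly.

```python
# A minimal sketch of compounding doublings. The figures are illustrative,
# not real benchmarks.
def project(value: float, doubling_time_years: float, horizon_years: float) -> float:
    """Project a metric forward, assuming a fixed doubling time."""
    return value * 2 ** (horizon_years / doubling_time_years)

# A hypothetical laptop scoring 1,000 performance "units" today:
print(round(project(1_000, doubling_time_years=1, horizon_years=1)))   # ~2,000 after one year
print(round(project(1_000, doubling_time_years=1, horizon_years=10)))  # ~1,024,000 after a decade
```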

One of the best illustrations of exponential growth is the legend of a peasant and the emperor of China. In the story, the peasant (sometimes said to be the inventor of chess) visited the emperor with a seemingly modest request: a chessboard with one grain of rice on the first square, two on the second, four on the third, and so on, doubling each time. The emperor agreed to this idiosyncratic request and ordered his men to start counting out rice grains.

“Every fact of science was once damned. Every invention was considered impossible. Every discovery was a nervous shock to some orthodoxy. Every artistic innovation was denounced as fraud and folly. We would own no more, know no more, and be no more than the first apelike hominids if it were not for the rebellious, the recalcitrant, and the intransigent.”

— Robert Anton Wilson

If you haven’t heard this story before, it might seem like the peasant would end up with, at best, enough rice to feed their family that evening. In reality, the request was impossible to fulfill. Doubling one grain 63 times (once for each square after the first) puts more than nine million trillion grains on the final square alone; across the whole board, the emperor owed the peasant over 18 million trillion grains of rice. To grow just half of that amount, he would have needed to drain the oceans and convert every bit of land on the planet into rice fields. And that’s for half.
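
The arithmetic behind those numbers is easy to check; a few lines of Python (my sketch, not part of the original legend) reproduce them exactly.

```python
# The chessboard legend: one grain on the first square, doubling on each of the
# remaining 63 squares.
last_square = 2 ** 63          # grains on the 64th square alone
total = 2 ** 64 - 1            # 1 + 2 + 4 + ... + 2**63, summed over all 64 squares

print(f"{last_square:,}")      # 9,223,372,036,854,775,808
print(f"{total:,}")            # 18,446,744,073,709,551,615 -- over 18 million trillion grains
```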

In his essay “The Law of Accelerating Returns,” author and inventor Ray Kurzweil uses this story to show how we misunderstand exponential growth in technology. For the first few squares, the growth was inconsequential, especially in the eyes of an emperor. It was only past the halfway point that the quantities began to snowball dramatically. (It’s no coincidence that Warren Buffett’s authorized biography is called The Snowball; few people understand exponential growth better than Buffett.) By Kurzweil’s estimation, we’re at that inflection point in computing: since the creation of the first computers, computing power has doubled roughly 32 times. We may underestimate the long-term impact because the idea of this continued doubling is so hard to imagine.

The Technology Hype Cycle

To understand how this plays out, let’s take a look at the cycle innovations go through after their invention. Known as the Gartner hype cycle, it primarily concerns our perception of technology—not its actual value in our lives.

Hype cycles are obvious in hindsight, but fiendishly difficult to spot while they are happening. It’s important to bear in mind that this model is one way of looking at reality and is not a prediction or a template. Sometimes a step gets missed, sometimes there is a substantial gap between steps, sometimes a step is deceptive.

The hype cycle happens like this:

  • New technology: The media picks up on a new technology, which may not yet exist in a usable form. Nonetheless, the publicity generates significant interest. At this point, the people working on research and development are probably not making any money from it, and lots of mistakes are made. In Everett Rogers’s diffusion of innovations theory, this is known as the innovation stage. If it seems like something new will have a dramatic payoff, it probably won’t last. If it seems we have found the perfect use for a brand-new technology, we may well be wrong.
  • The peak of inflated expectations: A few well-publicized success stories lead to inflated expectations. Hype builds and new companies pop up to anticipate the demand. There may be a burst of funding for research and development. Scammers looking to make a quick buck may move into the area. Rogers calls this the syndication stage. It’s here that we overestimate the future applications and impact of the technology.
  • The trough of disillusionment: Prominent failures or a lack of progress break through the hype and lead to disillusionment. People become pessimistic about the technology’s potential and mostly lose interest. Reports of scams may contribute, giving the media a reason to describe the technology as a fraud. If it seems like a new technology is dying, it may just be that its public perception has changed while the technology itself is still developing. Hype does not correlate directly with functionality.
  • The slope of enlightenment: As time passes, people continue to improve technology and find better uses for it. Eventually, it’s clear how it can improve our lives, and mainstream adoption begins. Mechanisms for preventing scams or lawbreaking emerge.
  • The plateau of productivity: The technology becomes mainstream. Development slows. It becomes part of our lives and ceases to seem novel. Those who move into the now saturated market tend to struggle, as a few dominant players take the lion’s share of the available profits. Rogers calls this the diffusion stage.

When we are cresting the peak of inflated expectations, we imagine that the new development will transform our lives within months. In the depths of the trough of disillusionment, we don’t expect it to get anywhere, even allowing years for it to improve. And we typically fail to anticipate the plateau of productivity, which often ends up exceeding our initial expectations.

Smart people can usually see through the initial hype. But only a handful of people can—through foresight, stubbornness or perhaps pure luck—see through the trough of disillusionment. Most of the initial skeptics feel vindicated by the dramatic drop in interest and expect the innovation to disappear. It takes far greater expertise to support an unpopular technology than to deride a popular one.

Correctly spotting the cycle as it unfolds can be immensely profitable. Misreading it can be devastating. First movers in a new area often struggle to survive the trough, even if they are the ones who do the essential research and development. We tend to assume current trends will continue, so we expect sustained growth during the peak and expect linear decline during the trough.

If we are trying to assess the future impact of a new technology, we need to separate its true value from its public perception. When something is new, the mainstream hype is likely to be more noise than signal. After all, the peak of inflated expectations often happens before the technology is available in a usable form. It’s almost always before the public has access to it. Hype serves a real purpose in the early days: it draws interest, secures funding, attracts people with the right talents to move things forward and generates new ideas. Not all hype is equally important, because not all opinions are equally important. If there’s intense interest within a niche group with relevant expertise, that’s more telling than a general enthusiasm.

The hype cycle doesn’t just happen with technology. It plays out all over the place, and we’re usually fooled by it. Discrepancies between our short- and long-term estimates of achievement are everywhere. Consider the following situations. They’re hypothetical, but similar situations are common.

  • A musician releases an acclaimed debut album which creates enormous interest in their work. When their second album proves disappointing (or never materializes), most people lose interest. Over time, the performer develops a loyal, sustained following of people who accurately assess the merits of their music, not the hype.
  • A promising new pharmaceutical receives considerable attention—until it becomes apparent that there are unexpected side effects, or it isn’t as powerful as expected. With time, clinical trials find alternate uses which may prove even more beneficial. For example, a side effect could be helpful for another use. It’s estimated that over 20% of pharmaceuticals are prescribed for a different purpose than they were initially approved for, with that figure rising as high as 60% in some areas.
  • A much-hyped start-up receives an inflated valuation after a run of positive media attention. Its founders are lauded and extensively profiled, and investors race to get involved. Then there’s an obvious failure—perhaps due to the overconfidence caused by hype—or early products fall flat or take too long to create. Interest wanes. The media gleefully dissects the company’s apparent demise. But the product continues to improve and ultimately becomes part of our everyday lives.

In the short run, the world is a voting machine affected by whims and marketing. In the long run, it’s a weighing machine where quality and product matter.

The Adjacent Possible

Now that we know how Amara’s Law plays out in real life, the next question is: why does this happen? Why does technology grow in complexity at an exponential rate? And why don’t we see it coming?

One explanation is what Stuart Kauffman describes as “the adjacent possible.” Each new innovation expands the set of future innovations that are achievable. It opens up adjacent possibilities that didn’t exist before, because better tools can be used to make even better tools.

Humanity is about expanding the realm of the possible. Discovering fire meant our ancestors could use the heat to soften or harden materials and make better tools. Inventing the wheel meant the ability to move resources around, which meant new possibilities such as the construction of more advanced buildings using materials from other areas. Domesticating animals meant a way to pull wheeled vehicles with less effort, meaning heavier loads, greater distances and more advanced construction. The invention of writing led to new ways of recording, sharing and developing knowledge which could then foster further innovation. The internet continues to give us countless new opportunities for innovation. Anyone with a new idea can access endless free information, find supporters, discuss their ideas and obtain resources. New doors to the adjacent possible open every day as we find different uses for technology.

“We like to think of our ideas as $40,000 incubators shipped directly from the factory, but in reality, they’ve been cobbled together with spare parts that happened to be sitting in the garage.”

— Steven Johnson, Where Good Ideas Come From

Take the case of GPS, an invention that was itself built out of the debris of its predecessors. In recent years, GPS has opened up new possibilities that didn’t exist before. The system was developed by the US government for military usage. In the 1980s, they decided to start allowing other organizations and individuals to use it. Civilian access to GPS gave us new options. Since then, it has led to numerous innovations that incorporate the system into old ideas: self-driving cars, mobile phone tracking (very useful for solving crime or finding people in emergency situations), tectonic plate trackers that help predict earthquakes, personal navigation systems, self-navigating robots, and many others. None of these would have been possible without some sort of global positioning system. With the invention of GPS, human innovation sped up a little more.

Steven Johnson gives one example of how this happens in Where Good Ideas Come From. In 2008, MIT professor Timothy Prestero visited a hospital in Indonesia and found that all eight of its incubators for newborn babies were broken. The incubators had been donated to the hospital by relief organizations, but the staff didn’t know how to fix them. On top of that, the incubators were poorly suited to the humid climate, and the repair instructions only came in English. Prestero realized that donating medical equipment was pointless if local people couldn’t fix it. He and his team set out to design an incubator that would keep saving babies’ lives for more than a couple of months before breaking down.

Instead of continuing to tweak existing designs, Prestero and his team devised a completely new incubator built from car parts. While the local people didn’t know how to fix an incubator, they were extremely adept at keeping their cars working no matter what. Named the NeoNurture, it used headlights for warmth, dashboard fans for ventilation, and a motorcycle battery for power. Hospital staff just needed to find someone who was good with cars to fix it—the principles were the same.

Even more telling is the origin of the incubators Prestero and his team reconceptualized. The first incubator for newborn babies was designed by Stéphane Tarnier in the late 19th century. While visiting a zoo on his day off, Tarnier noted that newborn chicks were kept in heated boxes. It’s not a big leap to imagine that the issue of infant mortality was permanently on his mind. Tarnier was an obstetrician, working at a time when the infant mortality rate for premature babies was about 66%. He must have been eager to try anything that could reduce that figure and its emotional toll. Tarnier’s rudimentary incubator immediately halved that mortality rate. The technology was right there, in the zoo. It just took someone to connect the dots and realize human babies aren’t that different from chicken babies.

Johnson explains the significance of this: “Good ideas are like the NeoNurture device. They are, inevitably, constrained by the parts and skills that surround them…ideas are works of bricolage; they’re built out of that detritus.” Tarnier could invent the incubator only because someone else had already invented a similar device. Prestero and his team could only invent the NeoNurture because Tarnier had come up with the incubator in the first place.

This happens in our lives, as well. If you learn a new skill, the number of skills you could potentially learn increases because some elements may be transferable. If you are introduced to a new person, the number of people you could meet grows, because they may introduce you to others. If you start learning a language, native speakers may be more willing to have conversations with you in it, meaning you can get a broader understanding. If you read a new book, you may find it easier to read other books by linking together the information in them. The list is endless. We can’t imagine what we’re capable of achieving in ten years because we forget about the adjacent possibilities that will emerge.

Accelerating Change

The adjacent possible has been expanding ever since the first person picked up a stone and started shaping it into a tool. Just look at what written and oral forms of communication made possible—no longer did each generation have to learn everything from scratch. Suddenly we could build upon what had come before us.

Some (annoying) people claim that there’s nothing new left. There are no new ideas to be had, no new creations to invent, no new options to explore. In fact, the opposite is true. Innovation is a non-zero-sum game. A crowded market actually means more opportunities to create something new than a barren one. Technology is a feedback loop. The creation of something new begets the creation of something even newer and so on.

Progress is exponential, not linear. So we overestimate the impact of a new technology during the early days when it is just finding its feet, then underestimate its impact in a decade or so when its full uses are emerging. As old limits and constraints melt away, our options explode. The exponential growth of technology is known as accelerating change. It’s a common belief among experts that the rate of change is speeding up and society will change dramatically alongside it.

“Ideas borrow, blend, subvert, develop and bounce off other ideas.”

— John Hegarty, Hegarty On Creativity

In 1999, author and inventor Ray Kurzweil posited the Law of Accelerating Returns: the idea that evolutionary systems develop at an exponential rate. While this is most obvious for technology, Kurzweil hypothesized that the principle applies in numerous other areas. Moore’s Law, initially referring only to semiconductors, has wider implications.

In an essay on the topic, he writes:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth.
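
Where does a figure like 20,000 years come from? Here is one back-of-the-envelope way to land in that ballpark. It assumes the rate of progress doubles every decade and counts each of the century’s ten decades at the rate it reaches, which is a simplification of mine for illustration rather than Kurzweil’s exact derivation.

```python
# Back-of-the-envelope sketch: assume the rate of progress doubles every decade,
# and count each of the 21st century's ten decades at the rate it reaches.
# A simplification for illustration, not Kurzweil's exact derivation.
progress_years = sum(10 * 2 ** decade for decade in range(1, 11))
print(progress_years)  # 20460 -- roughly "20,000 years of progress at today's rate"
```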

Progress is tricky to predict, and even to notice as it happens. It’s hard to see a system from the inside, and incremental change lacks stark contrast. The current pace of change is our norm, and we adjust to it. Only in hindsight can we see how Amara’s Law plays out.

Look at where the internet was just twenty years ago. A report from the Pew Research Center shows how change compounds. In 1998, a mere 41% of Americans used the internet at all—and the report expresses surprise that the users were beginning to include “people without college training, those with modest incomes, and women.” Less than a third of users had bought something online, email was predominantly just for work, and only a third of users looked at online news at least once per week. That’s a third of the 41% using the internet, by the way, not of the general population. Wikipedia and Gmail didn’t exist. Internet users in the late nineties reported that their main problem was finding what they needed online.

That is perhaps the biggest change and one we may not have anticipated: the move towards personalization. Finding what we need is no longer a problem. Most of us have the opposite problem and struggle with information overwhelm. Twenty years ago, filter bubbles were barely a problem (at least, not online). Now, almost everything we encounter online is personalized to ensure it’s ridiculously easy to find what we want. Newsletters, websites, and apps greet us by name. Newsfeeds are organized by our interests. Shopping sites recommend other products we might like. This has increased the amount the internet does for us to a level that would have been hard to imagine in the late 90s. Kevin Kelly, writing in The Inevitable, describes filtering as one of the key forces that will shape the future.

History reveals an extraordinary acceleration of technological progress. Establishing the precise history of technology is tricky: some inventions occurred in several places at varying times, archaeological records are inevitably incomplete, and dating methods are imperfect. Even so, accelerating change is a clear pattern. To truly understand the principle, it helps to take a quick look at the broad sweep of technological history.

Early innovations happened slowly. Counting from the emergence of anatomically modern humans roughly 200,000 years ago, it took us about 30,000 years to invent clothing and about 120,000 years to invent jewelry. It took us about 130,000 years to invent art and about 136,000 years to come up with the bow and arrow. But things began to speed up in the Upper Paleolithic period. Between 50,000 and 10,000 years ago, we developed more sophisticated tools with specialized uses—think harpoons, darts, fishing tools, and needles—as well as early musical instruments, pottery, and the first domesticated animals. Between roughly 11,000 years ago and the 18th century, the pace truly accelerated. That period saw the creation of civilization and laid the foundations of our current world.

More recently, the Industrial Revolution changed everything by moving us decisively away from relying on the strength of people and domesticated animals to power production. Steam engines and machinery replaced backbreaking labor, meaning more production at a lower cost. The number of adjacent possibilities began to snowball. Machinery enabled mass production and interchangeable parts. Steam-powered trains meant people could move around far more easily, allowing those from different areas to mix and share ideas. Improved communications did the same. It’s pointless to even try listing the ways technology has changed since then. Regardless of age, we’ve all lived through it and seen the acceleration. Few people dispute that the change is snowballing. The only question is how far it will go.

As Stephen Hawking put it in 1993:

For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.

But, as we saw with Moore’s Law, exponential growth cannot continue forever. Eventually, we run into fundamental constraints. Hours in the day, people on the planet, availability of a resource, the smallest possible size of a semiconductor, attention—there’s always a bottleneck we can’t eliminate. We reach the point of diminishing returns. Growth slows or stops altogether. We must then either look for alternative routes to improvement or leave things as they are. In Everett Rogers’s diffusion of innovations theory, this is known as the substitution stage, when usage declines and we start looking for substitutes.

This process is not linear. We can’t predict the future because there’s no way to take into account the tiny factors that will have a disproportionate impact in the long run.
