Tag: Technology

Why Life Can’t Be Simpler

We’d all like life to be simpler. But we also don’t want to sacrifice our options and capabilities. Tesler’s law of the conservation of complexity, a rule from design, explains why we can’t have both. Here’s how the law can help us create better products and services by rethinking simplicity.

“Why can’t life be simple?”

We’ve all likely asked ourselves that at least once. After all, life is complicated. Every day, we face processes that seem almost infinitely recursive. Each step requires the completion of a different task to make it possible, which in itself requires another task. We confront tools requiring us to memorize reams of knowledge and develop additional skills just to use them. Endeavors that seem like they should be simple, like getting utilities connected in a new home or figuring out the controls for a fridge, end up having numerous perplexing steps.

When we wish for things to be simpler, we usually mean we want products and services to have fewer steps, fewer controls, fewer options, less to learn. But at the same time, we still want all of the same features and capabilities. These two categories of desires are often at odds with each other and distort how we understand the complex.

***

Conceptual Models

In Living with Complexity, Donald A. Norman explains that complexity is all in the mind. Our perception of a product or service as simple or complex has its basis in the conceptual model we have of it. Norman writes that “A conceptual model is the underlying belief structure held by a person about how something works . . . Conceptual models are extremely important tools for organizing and understanding otherwise complex things.”

For example, on many computers, you can drag and drop a file into a folder. Both the file and the folder often have icons that represent their real-world namesakes. For the user, this process is simple; it provides a clear conceptual model. When people first started using graphical interfaces, real-world terms and icons made it easier to understand what they were doing. But the process only seems simple because of this effective conceptual model. It doesn’t represent what happens on the computer, where files and folders don’t exist. Computers store data wherever is convenient and may split files across multiple locations.

When we want something to be simpler, what we truly need is a better conceptual model of it. Once we know how to use them, complex tools end up making our lives simpler because they provide the precise functionality we want. A computer file is a great conceptual model because it hijacked something people already understood: physical files and folders. It would have been much harder for them to develop a whole new conceptual model reflecting how computers actually store files. What’s important to note is that giving users this simple conceptual model didn’t change how things work behind the scenes.

Removing functionality doesn’t make something simpler; it just takes away options. Simple tools have a limited ability to simplify processes. Trying to do something complex with a simple tool is more complex than doing the same thing with a more complex tool.

A useful analogy here is the hand tools used by craftspeople, such as a silversmith’s planishing hammer (a tool used to shape and smooth the surface of metal). Norman highlights that these tools seem simple to the untrained eye. But using them requires great skill and practice. A craftsperson needs to know how to select them from the whole constellation of specialized tools they possess.

In itself, a planishing hammer might seem far, far simpler than, say, a digital photo editing program. Look again, Norman says. We have to compare the photo editing tool with the silversmith’s whole workbench. Both take a lot of time and practice to master. Both consist of many tools that are individually simple. Learning how and when to use them is the complex part.

Norman writes, “Whether something is complicated is in the mind of the beholder.” Looking at a workbench of tools or a digital photo editing program, a novice sees complexity. A professional sees a range of different tools, each of which is simple to use. They know when to use each to make a process easier. Having fewer options would make their life more complex, not simpler, because they wouldn’t be able to break what they need to do down into individually simple steps. A professional’s experience-honed conceptual model helps them navigate a wide range of tools.

***

The conservation of complexity

To do difficult things in the simplest way, we need a lot of options.

Complexity is necessary because it gives us the functionality we need. A useful framework for understanding this is Tesler’s law of the conservation of complexity, which states:

The total complexity of a system is a constant. If you make a user’s interaction with a system simpler, the complexity behind the scenes increases.

The law originates with Lawrence Tesler (1945–2020), a computer scientist specializing in human-computer interaction who worked at Xerox, Apple, Amazon, and Yahoo! Tesler was influential in the development of early graphical interfaces, and he was a co-creator of copy-and-paste functionality.

Complexity is like energy. It cannot be created or destroyed, only moved somewhere else. When a product or service becomes simpler for users, engineers and designers have to work harder. Norman writes, “With technology, simplifications at the level of usage invariably result in added complexity of the underlying mechanism.” For example, the files and folders conceptual model for computer interfaces doesn’t change how files are stored, but by putting in extra work to translate the process into something recognizable, designers make navigating them easier for users.
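The same transfer is easy to see in software. In this minimal sketch (the `parse_date` function and its format list are invented for illustration, not taken from any real library), the caller's side stays simple, one string in and one date out, while the messy work of handling many input formats moves behind the scenes:

```python
from datetime import date, datetime

# The complexity hasn't vanished: it has moved into this format list and
# the retry loop below, which the caller never sees.
_FORMATS = ["%Y-%m-%d", "%d %b %Y", "%d/%m/%Y", "%B %d, %Y"]

def parse_date(text: str) -> date:
    """Parse a human-entered date, accepting several common formats."""
    cleaned = text.strip()
    for fmt in _FORMATS:
        try:
            return datetime.strptime(cleaned, fmt).date()
        except ValueError:
            continue  # this format didn't match; try the next one
    raise ValueError(f"unrecognized date: {text!r}")

print(parse_date("2020-01-03"))       # ISO style
print(parse_date("3 Jan 2020"))       # day first, abbreviated month
print(parse_date("January 3, 2020"))  # long form
```

Supporting another format costs the caller nothing; the added complexity lands entirely inside the library, which is exactly the movement Tesler's law describes.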

Whether something looks simple or is simple to use says little about its overall complexity. “What is simple on the surface can be incredibly complex inside; what is simple inside can result in an incredibly complex surface. So from whose point of view do we measure complexity?”

***

Out of control

Every piece of functionality requires a control—something that makes something happen. The more complex something is, the more controls it needs—whether they are visible to the user or not. Controls may be directly accessible to a user, as with the home button on an iPhone, or they may be behind the scenes, as with an automated thermostat.

From a user’s standpoint, the simplest products and services are those that are fully automated and do not require any intervention (unless something goes wrong).

As long as you pay your bills, the water supply to your house is probably fully automated. When you turn on a tap, you don’t have to request that water be in the pipes first. The companies that manage the water supply handle the complexity.

Or, if you stay in an expensive hotel, you might find your room is always as you want it, with your minifridge fully stocked with your favorites and any toiletries you forgot provided. The staff work behind the scenes to make this happen, without you needing to make requests.

On the other end of the spectrum, we have products and services that require users to control every last step.

A professional photographer is likely to use a camera that requires them to set everything manually, from white balance to shutter speed. The camera itself needs little automation, but the user must operate controls for each setting, which gives them full control over the results. An amateur photographer might use a camera that chooses these settings automatically, so all they need to do is point and shoot. In this case, the complexity transfers to the camera’s inner workings.

In the restaurants inside IKEA stores, customers typically perform tasks such as filling up drinks and clearing away dishes themselves. This means less complexity for staff and much lower prices compared to restaurants where staff do these things.

***

Lessons from the conservation of complexity

The first lesson from Tesler’s law of the conservation of complexity is that how simple something looks is not a reflection of how simple it is to use. Removing controls can mean users need to learn complex sequences to use the same features—similar to how languages with fewer sounds have longer words. One way to conceptualize the movement of complexity is through the notion of trade-offs. If complexity is constant, then there are trade-offs depending on where that complexity is moved.

A very basic example of complexity trade-offs can be found in the history of arithmetic. For centuries, counting systems all over the world employed tools of stones or beads, such as the Roman tabula or the Japanese soroban, to facilitate adding and subtracting numbers. They were easy to use but not easily portable. Then the Hindu-Arabic system we use today came along; by employing columns, and thus requiring no moving parts, it offered a much more portable counting system. However, the portability came at a cost.

Paul Lockhart explains in Arithmetic, “With the Hindu-Arabic system the writing and calculating are inextricably linked. Instead of moving stones or sliding beads, our manipulations become transmutations of the symbols themselves. That means we need to know things. We need to know that one more than 2 is 3, for instance. In other words, the price we pay [for portability] is massive amounts of memorization.” Thus, there is a trade-off. The simpler arithmetic system requires more complexity in terms of the memorization required of the users. We all went through the difficult process of learning mathematical symbols early in life. Although they might seem simple to us now, that’s just because we’re so accustomed to them.

Although perceived simplicity may have greater appeal at first, users are soon frustrated if it means greater operational complexity. Norman writes:

Perceived simplicity is not at all the same as simplicity of usage: operational simplicity. Perceived simplicity decreases with the number of visible controls and displays. Increase the number of visible alternatives and the perceived simplicity drops. The problem is that operational simplicity can be drastically improved by adding more controls and displays. The very things that make something easier to learn and to use can also make it be perceived as more difficult.

Even if it receives a negative reaction before usage, operational simplicity is the more important goal. For example, in a company, having a clearly stated directly responsible person for each project might seem more complex than letting a project be a team effort that falls to whoever is best suited to each part. But in practice, the team-effort approach adds complexity whenever someone tries to move the project forward or needs to know who should hear feedback about problems.

A second lesson is that things don’t always need to be incredibly simple for users. People have an intuitive sense that complexity has to go somewhere. When using a product or service is too simple, users can feel suspicious or robbed of control. They know that a lot more is going on behind the scenes; they just don’t know what it is. Sometimes we need to preserve a minimum level of complexity so that users feel like actual participants. According to legend, cake mixes require the addition of a fresh egg because early versions using dried egg made baking feel a bit too lazy and low effort for users.

An example of desirable minimum complexity is help with homework. For many parents, helping their children with their homework often feels like unnecessary complexity. It is usually subjects and facts they haven’t thought about in years, and they find themselves having to relearn them in order to help their kids. It would be far simpler if the teachers could cover everything in class to a degree that each child needed no additional practice. However, the complexity created by involving parents in the homework process helps make parents more aware of what their children are learning. In addition, they often get insight into areas of both struggle and interest, can identify ways to better connect with their children, and learn where they may want to teach them some broader life skills.

When we seek to make things simpler for other people, we should recognize that there may come a point where further simplification leads to a worse experience. Simplicity is not an end in itself—other things, like speed, usability, and time saved, are. We shouldn’t simplify things from the user’s standpoint just for the sake of it.

If changes don’t make something better for users, we’re just creating unnecessary behind-the-scenes complexity. People want to feel in control, especially when it comes to something important. We want to learn a bit about what’s happening, and an overly simple process teaches us nothing.

A third lesson is that products and services are only as good as what happens when they break. Handling a problem may be easier for the user when something has lots of user-side controls; they’re used to being involved. If something has been fully automated right up until the point where it breaks, users don’t know how to react. The change is jarring, and they may freeze or overreact. Because fully automated things fade into the background, a breakdown may be the user’s most salient and memorable interaction with a product or service. If handling the problem is difficult—for example, if rapid support or instructions are lacking, or it’s hard to ascertain what went wrong in the first place—they may come away with a negative overall impression, even if everything worked fine for years beforehand.

A big challenge in the development of self-driving cars is that a driver needs to be able to take over if the car encounters a problem. But if someone hasn’t had to operate the car manually for a while, they may panic or forget what to do. So it’s a good idea to limit how long the car drives itself for. The same is purportedly true for airplane pilots. If the plane does too much of the work, the pilot won’t cope well in an emergency.

A fourth lesson is the importance of thinking about how the level of control you give your customers or users influences your workload. For a graphic designer, asking a client to detail exactly how they want their logo to look makes their work simpler. But it might be hard work for the client, who might not know what they want or may make poor choices. A more experienced designer might ask a client for much less information and instead put the effort into understanding their overall brand and deducing their needs from subtle clues, then figuring out the details themselves. The more autonomy a manager gives their team, the lower their workload, and vice versa.

If we accept that complexity is a constant, we need to always be mindful of who is bearing the burden of that complexity.

 

The Spiral of Silence

Our desire to fit in with others means we don’t always say what we think. We only express opinions that seem safe. Here’s how the spiral of silence works and how we can discover what people really think.

***

Be honest: How often do you feel as if you’re really able to express your true opinions without fearing judgment? How often do you bite your tongue because you know you hold an unpopular view? How often do you avoid voicing any opinion at all for fear of having misjudged the situation?

Even in societies with robust free speech protections, most people don’t often say what they think. Instead they take pains to weigh up the situation and adjust their views accordingly. This comes down to the “spiral of silence,” a human communication theory developed by German researcher Elisabeth Noelle-Neumann in the 1960s and ’70s. The theory explains how societies form collective opinions and how we make decisions surrounding loaded topics.

Let’s take a look at how the spiral of silence works and how understanding it can give us a more realistic picture of the world.

***

How the spiral of silence works

According to Noelle-Neumann’s theory, our willingness to express an opinion is a direct result of how popular or unpopular we perceive it to be. If we think an opinion is unpopular, we will avoid expressing it. If we think it is popular, we will make a point of showing we think the same as others.

Controversy is also a factor—we may be willing to express an unpopular uncontroversial opinion but not an unpopular controversial one. We perform a complex dance whenever we share views on anything morally loaded.

Our perception of how “safe” it is to voice a particular view comes from the clues we pick up, consciously or not, about what everyone else believes. We make an internal calculation based on signs like what the mainstream media reports, what we overhear coworkers discussing on coffee breaks, what our high school friends post on Facebook, or prior responses to things we’ve said.

We also weigh up the particular context, based on factors like how anonymous we feel or whether our statements might be recorded.

As social animals, we have good reason to be aware of whether voicing an opinion might be a bad idea. Cohesive groups tend to have similar views. Anyone who expresses an unpopular opinion risks social exclusion or even ostracism within a particular context or in general. This may be because there are concrete consequences, such as losing a job or even legal penalties. Or there may be less official social consequences, like people being less friendly or willing to associate with you. Those with unpopular views may suppress them to avoid social isolation.

Avoiding social isolation is an important instinct. From an evolutionary biology perspective, remaining part of a group is important for survival, hence the need to at least appear to share the same views as everyone else. The only time someone will feel safe to voice a divergent opinion is if they think the group will share it or be accepting of divergence, or if they view the consequences of rejection as low. But biology doesn’t just dictate how individuals behave—it ends up shaping communities. It’s almost impossible for us to step outside of that need for acceptance.

A feedback loop pushes minority opinions towards less and less visibility—hence why Noelle-Neumann used the word “spiral.” Each time someone voices a majority opinion, they reinforce the sense that it is safe to do so. Each time someone receives a negative response for voicing a minority opinion, it signals to anyone sharing their view to avoid expressing it.
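The spiral dynamic can be made concrete with a toy simulation. This sketch is not part of Noelle-Neumann's work; it simply assumes each person voices their opinion only when the share of already-voiced opinions matching their own clears a personal comfort threshold:

```python
# Toy model (an illustrative assumption, not Noelle-Neumann's formulation):
# 100 people, 40 of whom hold the minority view. Each round, a person
# speaks only if the share of *voiced* opinions matching their own view
# meets their personal comfort threshold.
N = 100
MINORITY = 40
# Comfort thresholds spread evenly between 0.10 and 0.49.
threshold = [0.10 + 0.01 * (i % 40) for i in range(N)]

voiced_share = 0.40  # initially, voiced opinions mirror true opinions
for _ in range(20):
    # Minority holders compare the voiced minority share to their bar;
    # majority holders compare the voiced majority share to theirs.
    speak_min = sum(1 for i in range(MINORITY) if voiced_share >= threshold[i])
    speak_maj = sum(1 for i in range(MINORITY, N) if 1 - voiced_share >= threshold[i])
    total = speak_min + speak_maj
    voiced_share = speak_min / total if total else 0.0

print(f"true minority share: 0.40, voiced share after 20 rounds: {voiced_share:.2f}")
```

Even though 40 percent of the population genuinely holds the minority view, the voiced share shrinks round after round: each quiet round makes the view look rarer, which silences more of its holders, until no one voices it at all.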

***

An example of the spiral of silence

A 2014 Pew Research survey of 1,801 American adults examined the prevalence of the spiral of silence on social media. Researchers asked people about their opinions on one public issue: Edward Snowden’s 2013 revelations of US government surveillance of citizens’ phones and emails. They selected this issue because, while controversial, prior surveys suggested a roughly even split in public opinion surrounding whether the leaks were justified and whether such surveillance was reasonable.

Asking respondents about their willingness to share their opinions in different contexts highlighted how the spiral of silence plays out. 86% of respondents were willing to discuss the issue in person, but only about half as many were willing to post about it on social media. Of the 14% who would not consider discussing the Snowden leaks in person, almost none (0.3%) were willing to turn to social media instead.

Both in person and online, respondents reported far greater willingness to share their views with people they knew agreed with them—three times as likely in the workplace and twice as likely in a Facebook discussion.

***

The implications of the spiral of silence

The end result of the spiral of silence is a point where no one publicly voices a minority opinion, regardless of how many people believe it. The first implication of this is that the picture we have of what most people believe is not always accurate. Many people nurse opinions they would never articulate to their friends, coworkers, families, or social media followings.

A second implication is that the possibility of discord makes us less likely to voice an opinion at all, assuming we are not trying to drum up conflict. In the aforementioned Pew survey, people were more comfortable discussing a controversial story in person than online. An opinion voiced online has a much larger potential audience than one voiced face to face, and it’s harder to know exactly who will see it. Both of these factors increase the risk of someone disagreeing.

If we want to gauge what people think about something, we need to remove the possibility of negative consequences. For example, imagine a manager who often sets overly tight deadlines, causing immense stress to their team. Everyone knows this is a problem and discusses it among themselves, recognizing that realistic deadlines are motivating and unrealistic ones are just demoralizing. However, no one wants to say anything because they’ve heard the manager say that people who can’t handle pressure don’t belong in that job. If the manager asks for feedback about their leadership style, they’re not going to hear what they need to hear unless the feedback is anonymous.

A third implication is that what seems like a sudden change in mainstream opinions can in fact be the result of a shift in what is acceptable to voice, not in what people actually think. A prominent public figure getting away with saying something controversial may make others feel safe to do the same. A change in legislation may make people comfortable saying what they already thought.

For instance, if recreational marijuana use is legalized where someone lives, they might freely remark to a coworker that they consume it and consider it harmless. Even if that was true before the legislation change, saying so would have been too fraught, so they might have lied or avoided the topic. The result is that mainstream opinions can appear to change a great deal in a short time.

A fourth implication is that highly vocal holders of a minority opinion can end up having a disproportionate influence on public discourse. This is especially true if that minority is within a group that already has a lot of power.

While this was less the case during Noelle-Neumann’s time, the internet makes it possible for a vocal minority to make their opinions seem far more prevalent than they actually are—and therefore more acceptable. Indeed, the most extreme views on any spectrum can end up seeming most normal online because people with a moderate take have less of an incentive to make themselves heard.

In anonymous environments, the spiral of silence can end up reversing itself, making the most fringe views the loudest.

The Ingredients for Innovation

Inventing new things is hard. Getting people to accept and use new inventions is often even harder. For most people, at most times, technological stagnation has been the norm. What does it take to escape from that and encourage creativity?

***

“Technological progress requires above all tolerance toward the unfamiliar and the eccentric.”

— Joel Mokyr, The Lever of Riches

Writing in The Lever of Riches: Technological Creativity and Economic Progress, economic historian Joel Mokyr asks why, when we look at the past, some societies have been considerably more creative than others at particular times. Some have experienced sudden bursts of progress, while others have stagnated for long periods of time. By examining the history of technology and identifying the commonalities between the most creative societies and time periods, Mokyr offers useful lessons we can apply as both individuals and organizations.

What does it take for a society to be technologically creative?

When trying to explain something as broad and complex as technological creativity, it’s important not to fall prey to the lure of a single explanation. There are many possible reasons for anything that happens, and it’s unwise to believe explanations that are too tidy. Mokyr disregards some of the common simplistic explanations for technological creativity, such as that war prompts creativity or people with shorter life spans are less likely to expend time on invention.

Mokyr explores some of the possible factors that contribute to a society’s technological creativity. In particular, he seeks to explain why Europe experienced such a burst of technological creativity from around 1500 to the Industrial Revolution, when prior to that it had lagged far behind the rest of the world. Mokyr explains that “invention occurs at the level of the individual, and we should address the factors that determine individual creativity. Individuals, however, do not live in a vacuum. What makes them implement, improve and adapt new technologies, or just devise small improvements in the way they carry out their daily work depends on the institutions and the attitudes around them.” While environment isn’t everything, certain conditions are necessary for technological creativity.

He identifies the following three key factors in an environment that affect the occurrence of invention and innovation.

The social infrastructure

First of all, the society needs a supply of “ingenious and resourceful innovators who are willing and able to challenge their physical environment for their own improvement.” Fostering these attributes requires factors like good nutrition, religious beliefs that are not overly conservative, and access to education. It is in part about the absence of negative factors—necessitous people have less capacity for creativity. Mokyr writes: “The supply of talent is surely not completely exogenous; it responds to incentives and attitudes. The question that must be confronted is why in some societies talent is unleashed upon technical problems that eventually change the entire productive economy, whereas in others this kind of talent is either repressed or directed elsewhere.”

One partial explanation for Europe’s creativity from 1500 to the Industrial Revolution is that it was often feasible for people to relocate to a different country if the conditions in their current one were suboptimal. A creative individual finding themselves under a conservative government seeking to maintain the technological status quo was able to move elsewhere.

The ability to move around was also part of the success of the Abbasid Caliphate, an empire that stretched from India to the Iberian Peninsula from about 750 to 1250. Economists Maristella Botticini and Zvi Eckstein write in The Chosen Few: How Education Shaped Jewish History, 70–1492 that “it was relatively easy to move or migrate” within the Abbasid empire, especially with its “common language (Arabic) and a uniform set of institutions and laws over an immense area, greatly [favoring] trade and commerce.”

It also matters whether creative people are channeled into technological fields or into other fields, like the military. In Britain during and prior to the Industrial Revolution, Mokyr considers invention to have been the main possible path for creative individuals, as other areas like politics leaned towards conformism.

The social incentives

Second, there need to be incentives in place to encourage innovation. This is especially important for macroinventions (completely new inventions, not improvements on existing technology), which can require a great leap of faith. The person who comes up with a faster horse knows it has a market; the one who comes up with a car does not. Such incentives are most often financial, but not always. Awards, positions of power, and recognition also count. Mokyr explains that diverse incentives encourage the patience needed for creativity: “Sustained innovation requires a set of individuals willing to absorb large risks, sometimes to wait many years for the payoff (if any).”

Patent systems have long served as an incentive, allowing inventors to feel confident they will profit from their work. Patents first appeared in northern Italy in the early fifteenth century; Venice implemented a formal system in 1474. According to Mokyr, the monopoly rights mining contractors received over the discovery of hitherto unknown mineral resources provided inspiration for the patent system.

However, Mokyr points out that patents were not always as effective as inventors hoped. Indeed, they may have provided the incentive without any actual protection. Many inventors ended up spending unproductive time and money on patent litigation, which in some cases outweighed their profits, discouraged them from future endeavors, or left them too drained to invent more. Eli Whitney, inventor of the cotton gin, claimed his legal costs outweighed his profits. Mokyr proposes that though patent laws may be imperfect, they are, on balance, good for society as they incentivize invention while not altogether preventing good ideas from circulating and being improved upon by others.

The ability to make money from inventions is also related to geographic factors. In a country with good communication and transport systems, with markets in different areas linked, it is possible for something new to sell further afield. A bigger prospective market means stronger financial incentives. The extensive, accessible, and well-maintained trade routes during the Abbasid empire allowed for innovations to diffuse throughout the region. And during the Industrial Revolution in Britain, railroads helped bring developments to the entire country, ensuring inventors didn’t just need to rely on their local market.

The social attitude

Third, a technologically creative society must be diverse and tolerant. People must be open to new ideas and outré individuals. They must not only be willing to consider fresh ideas from within their own society but also happy to take inspiration from (or to outright steal) those coming from elsewhere. If a society views knowledge coming from other countries as suspect or even dangerous, unable to see its possible value, it is at a disadvantage. If it eagerly absorbs external influences and adapts them for its own purposes, it is at an advantage. Europeans were willing to pick up on ideas from each other and from elsewhere in the world. As Mokyr puts it, “Inventions such as the spinning wheel, the windmill, and the weight-driven clock recognized no boundaries.”

In the Abbasid empire, there was an explosion of innovation that drew on the knowledge gained from other regions. Botticini and Eckstein write:

“The Abbasid period was marked by spectacular developments in science, technology, and the liberal arts. . . . The Muslim world adopted papermaking from China, improving Chinese technology with the invention of paper mills many centuries before paper was known in the West. Muslim engineers made innovative industrial uses of hydropower, tidal power, wind power, steam power, and fossil fuels. . . . Muslim engineers invented crankshafts and water turbines, employed gears in mills and water-raising machines, and pioneered the use of dams as a source of waterpower. Such advances made it possible to mechanize many industrial tasks that had previously been performed by manual labor.”

Within societies, certain people and groups seek to maintain the status quo because it is in their interests to do so. Mokyr writes that “Some of these forces protect vested interests that might incur losses if innovations were introduced, others are simply don’t-rock-the-boat kind of forces.” In order for creative technology to triumph, it must be able to overcome those forces. While there is always going to be conflict, the most creative societies are those where it is still possible for the new thing to take over. If those who seek to maintain the status quo have too much power, a society will end up stagnating in terms of technology. Ways of doing things can prevail not because they are the best, but because there is enough interest in keeping them that way.

In some historical cases in Europe, it was easier for new technologies to spread in the countryside, where the lack of guilds compensated for the lower density of people. City guilds had a huge incentive to maintain the status quo. The inventor of the ribbon loom in Danzig in 1579 was allegedly drowned by the city council, while “in the fifteenth century, the scribes guild of Paris succeeded in delaying the introduction of printing in Paris by 20 years.”

Indeed, tolerance could be said to matter more for technological creativity than education. As Mokyr repeatedly highlights, many inventors and innovators throughout history were not educated to a high level—or even at all. Up until relatively recently, most technology preceded the science explaining how it actually worked. People tinkered, looking to solve problems and experiment.

Unlike modern times, Mokyr explains, for most of history technology did not emerge from “specialized research laboratories paid for by research and development budgets and following strategies mapped out by corporate planners well-informed by marketing analysts. Technological change occurred mostly through new ideas and suggestions occurring if not randomly, then in a highly unpredictable fashion.”

When something worked, it worked, even if no one knew why or the popular explanation later proved incorrect. Steam engines are one such example. The notion that all technologies function under the same set of physical laws was not standard until Galileo. People need space to be a bit weird.

Those who were scientists and academics during some of Europe’s most creative periods worked in a different manner than what we expect today, often working on the practical problems they faced themselves. Mokyr gives Galileo as an example, as he “built his own telescopes and supplemented his salary as a professor at the University of Padua by making and repairing instruments.” The distinction between one who thinks and one who makes was not yet clear at the time of the Renaissance. Wherever and whenever making has been a respectable activity for thinkers, creativity flourishes.

Seeing as technological creativity requires a particular set of circumstances, it is not the norm. Throughout history, Mokyr writes, “Technological progress was neither continuous nor persistent. Genuinely creative societies were rare, and their bursts of creativity usually short-lived.”

Not only did people need to be open to new ideas, they also needed to be willing to actually start using new technologies. This often required a big leap of faith. If you’re a farmer just scraping by, trying a new way of ploughing your fields could mean starving to death if it doesn’t work out. Innovations can take a long time to diffuse, with riskier ones taking the longest.

How can we foster the right environment?

So what can we learn from The Lever of Riches that we can apply as individuals and in organizations?

The first lesson is that creativity does not occur in a vacuum; it requires certain conditions. If we want to come up with new ideas as individuals, we should consider ourselves as part of a system. In particular, we need to consider what might impede us and what can encourage us. We need to eradicate anything that will get in the way of our thinking, such as limiting beliefs or lack of sleep.

We need to be clear on what motivates us to be creative, ensuring what we endeavor to do will be worthwhile enough to drive us through the associated effort. When we find ourselves creatively blocked, it’s often because we’re not in touch with what inspires us to create in the first place.

Within an organization, such factors are equally important. If you want your employees to be creative, it’s important to consider the system they’re part of. Is there anything blocking their thinking? Is a good incentive structure in place (bearing in mind incentives are not solely financial)?

Another lesson is that tolerance for divergence is essential for encouraging creativity. This may seem like part of the first lesson, but it’s crucial enough to consider in isolation.

As individuals, when we seek to come up with new ideas, we need to ask ourselves the following questions: Am I exposing myself to new material and inspirations or staying within a filter bubble? Am I open to unusual ways of thinking? Am I spending too much time around people who discourage deviation from the status quo? Am I being tolerant of myself, allowing myself to make mistakes and have bad ideas in service of eventually having good ones? Am I spending time with unorthodox people who encourage me to think differently?

Within organizations, it’s worth asking the following questions: Are new ideas welcomed or shot down? Is it in the interests of many to protect the status quo? Are ideas respected regardless of their source? Are people encouraged to question norms?

A final lesson is that the forces of inertia are always acting to discourage creativity. Invention is not the natural state of things—it is an exception. Technological stagnation is the norm. In most places, at most times, people have not come up with new technology. It takes a lot for individuals to be willing to wrestle something new from nothing or to question if something in existence can be made better. But when those acts do occur, they can have an immeasurable impact on our world.

Critical Mass and Tipping Points: How To Identify Inflection Points Before They Happen

Critical mass, sometimes referred to as the tipping point, is one of the most effective mental models you can use to understand the world. The concept can explain everything from viral cat videos to why changing habits is so hard.

The Basics

Sometimes it can seem as if drastic changes happen at random.

One moment a country is stable; the next, a revolution begins and the government is overthrown. One day a new piece of technology is a novelty; the next, everyone has it and we cannot imagine life without it. Or an idea lingers at the fringes of society before it suddenly becomes mainstream.

As erratic and unpredictable as these occurrences are, there is a logic to them, which can be explained by the concept of critical mass. The concept was identified in the early 1970s through the work of Thomas Schelling (a game theorist) and Mark Granovetter (a sociologist).

Also known as the boiling point, the percolation threshold, the tipping point, and a host of other names, critical mass is the point at which something (an idea, belief, trend, virus, behavior, etc.) is prevalent enough to grow, or sustain, a process, reaction, or technology.

As a mental model, critical mass can help us to understand the world around us by letting us spot changes before they occur, make sense of tumultuous times, and even gain insight into our own behaviors. A firm understanding can also give us an edge in launching products, changing habits, and choosing investments.

In The Decision Book, Mikael Krogerus wrote of technological critical masses:

Why is it that some ideas – including stupid ones – take hold and become trends, while others bloom briefly before withering and disappearing from the public eye?

… Translated into a graph, this development takes the form of a curve typical of the progress of an epidemic. It rises, gradually at first, then reaches the critical point of any newly launched product, when many products fail. The critical point for any innovation is the transition from the early adopters to the sceptics, for at this point there is a ‘chasm’. …

With technological innovations like the iPod or the iPhone, the cycle described above is very short. Interestingly, the early adopters turn away from the product as soon as the critical masses have accepted it, in search of the next new thing.
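The S-shaped adoption curve the passage describes is commonly modeled with a logistic function: adoption starts slowly, accelerates past the critical point, then saturates. A minimal sketch, with all parameters chosen purely for illustration:

```python
import math

def logistic_adoption(t, capacity=1.0, growth_rate=1.0, midpoint=0.0):
    """Fraction of the population that has adopted at time t (S-curve)."""
    return capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

# Slow start, rapid middle, saturation at the end:
for t in (-4, -2, 0, 2, 4):
    print(t, round(logistic_adoption(t), 3))
```

The curve crosses half the population at the midpoint, which is roughly where the “chasm” between early adopters and sceptics sits in the account above.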

In Developmental Evaluation, Michael Quinn Patton wrote:

Complexity theory shows that great changes can emerge from small actions. Change involves a belief in the possible, even the “impossible.” Moreover, social innovators don’t follow a linear pathway of change; there are ups and downs, roller-coaster rides along cascades of dynamic interactions, unexpected and unanticipated divergences, tipping points and critical mass momentum shifts. Indeed, things often get worse before they get better as systems change creates resistance to and pushback against the new.

In If Nobody Speaks of Remarkable Things, Jon McGregor writes a beautiful explanation of how the concept of critical mass applies to weather:

He wonders how so much water can resist the pull of so much gravity for the time it takes such pregnant clouds to form, he wonders about the moment the rain begins, the turn from forming to falling, that slight silent pause in the physics of the sky as the critical mass is reached, the hesitation before the first swollen drop hurtles fatly and effortlessly to the ground.

Critical Mass in Physics

In nuclear physics, critical mass is defined as the minimum amount of a fissile material required to create a self-sustaining fission reaction. In simpler terms, it’s the amount of reactant necessary for something to happen and to keep happening.

This concept is similar to the mental model of activation energy. The exact critical mass depends on the nuclear properties of a material, its density, its shape, and other factors.

In some nuclear reactions, a reflector made of beryllium is used to speed up the process of reaching critical mass. An amount of fissile material too small to sustain the reaction is referred to as a subcritical mass; an amount large enough to make the reaction rate grow is referred to as a supercritical mass. This concept has been taken from physics and applied in many other disciplines.
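The threshold behavior can be captured in a toy model: each generation, every neutron yields on average k new neutrons (the multiplication factor). Below k = 1 the reaction fizzles out; above it, the reaction grows without bound. A minimal sketch, where the specific numbers are illustrative rather than physical:

```python
def neutron_population(k, generations=20, start=1000.0):
    """Toy chain reaction: neutron population after `generations` steps,
    where each neutron yields k new neutrons on average."""
    population = start
    for _ in range(generations):
        population *= k
    return population

# k < 1: subcritical, the population shrinks toward zero.
print(neutron_population(0.9))
# k > 1: supercritical, the population grows exponentially.
print(neutron_population(1.1))
```

The same all-or-nothing threshold, with a parameter standing in for the multiplication factor, is what the sociological and business uses of “critical mass” borrow from physics.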

Critical Mass in Sociology

In sociology, a critical mass is a group of people large enough to bring about a drastic change, altering prevailing behavior, opinions, or actions.

“When enough people (a critical mass) think about and truly consider the plausibility of a concept, it becomes reality.”

—Joseph Duda

In some societies (e.g., a small Amazonian tribe), just a handful of people can change prevailing views. In larger societies (in particular, those which have a great deal of control over people, such as North Korea), the figure must usually be higher for a change to occur.

The concept of a sociological critical mass was first used in the 1960s by Morton Grodzins, a political science professor at the University of Chicago. Grodzins studied racial segregation — in particular, examining why people seemed to separate themselves by race even when that separation was not enforced by law. His hypothesis was that white families had different levels of tolerance for the number of people of racial minorities in their neighborhoods. Some white families were completely racist; others were not concerned with the race of their neighbors. As increasing numbers of racial minorities moved into neighborhoods, the most racist people would soon leave. Then a tipping point would occur — a critical mass of white people would leave until the area was populated by racial minorities. This phenomenon became known as “white flight.”

Critical Mass in Business

In business, at a macro level, critical mass can be defined as the time when a company becomes self-sustaining and is economically viable. (Please note that there is a difference between being economically viable and being profitable.) Just as a nuclear reaction reaches critical mass when it can sustain itself, so must a business. It is important, too, that a business chooses its methods for growth with care: sometimes adding more staff, locations, equipment, stock, or other assets can be the right choice; at other times, these additions can lead to negative cash flow.

The exact threshold and time to reach critical mass varies widely, depending on the industry, competition, startup costs, products, and other economic factors.

Bob Brinker, host of Money Talk, defines critical mass in business as:

A state of freedom from worry and anxiety about money due to the accumulation of assets which make it possible to live your life as you choose without working if you prefer not to work or just working because you enjoy your work but don’t need the income. Plainly stated, the Land of Critical Mass is a place in which individuals enjoy their own personal financial nirvana. Differentiation between earned income and assets is a fundamental lesson to learn when thinking in terms of critical mass. Earned income does not produce critical mass … critical mass is strictly a function of assets.

Independence or “F*** You” Money

Most people work jobs and get paychecks. If you depend on a paycheck, like most of us, this means you are not independent — you are not self-sustaining. Once you have enough money, you can be self-sustaining.

If you were wealthy enough to be free, would you really keep the job you have now? How many of us check our opinions or thoughts before voicing them because we know they won’t be acceptable? How many times have you agreed to work on a project that you know is doomed, because you need the paycheck?

“Whose bread I eat: his song I sing.”

—Proverb

In his book The Black Swan, Nassim Taleb describes “f*** you” money, which, “in spite of its coarseness, means that it allows you to act like a Victorian gentleman, free from slavery”:

It is a psychological buffer: the capital is not so large as to make you spoiled-rich, but large enough to give you the freedom to choose a new occupation without excessive consideration of the financial rewards. It shields you from prostituting your mind and frees you from outside authority — any outside authority. … Note that the designation f*** you corresponds to the exhilarating ability to pronounce that compact phrase before hanging up the phone.

Critical Mass in Psychology

Psychologists have known for a long time that groups of people behave differently than individuals.

Sometimes when we are in a group, we tend to be less inhibited, more rebellious, and more confident. This effect is known as mob behavior. (An interesting detail is that mob psychology is one of the few branches of psychology which does not concern individuals.) As a general rule, the larger the crowd, the less responsibility people have for their behavior. (This is also why individuals and not groups should make decisions.)

“[Groups of people] can be considered to possess agential capabilities: to think, judge, decide, act, reform; to conceptualize self and others as well as self’s actions and interactions; and to reflect.”

—Burns and Engdahl

Gustave Le Bon is one psychologist who looked at the formation of critical masses of people necessary to spark change. According to Le Bon, this formation creates a collective unconsciousness, making people “a grain of sand amid other grains of sand which the wind stirs up at will.”

He identified three key processes which create a critical mass of people: anonymity, contagion, and suggestibility. When all three are present, a group loses its sense of self-restraint and behaves in a manner he considered more primitive than usual. The strongest members (often those who first convinced others to adopt their ideas) have power over others.

Examples of Critical Mass

Virality

Viral media include forms of content (such as text, images, and videos) which are passed amongst people and often modified along the way. We are all familiar with how memes, videos and jokes spread on social media. The term “virality” comes from the similarity to how viruses propagate.

“We are all susceptible to the pull of viral ideas. Like mass hysteria. Or a tune that gets into your head that you keep on humming all day until you spread it to someone else. Jokes. Urban legends. Crackpot religions. No matter how smart we get, there is always this deep irrational part that makes us potential hosts for self-replicating information.”

—Neal Stephenson, Snow Crash

In The Selfish Gene, Richard Dawkins compared memes to human genes. While the term “meme” is now, for the most part, used to describe content that is shared on social media, Dawkins described religion and other cultural objects as memes.

The difference between viral and mainstream media is that the former is more interactive and is shaped by the people who consume it. Gatekeeping and censorship are also less prevalent. Viral content often reflects dominant values and interests, such as kindness (for example, the dancing-man video) and humor. The importance of this form of media is apparent when it is used to negatively impact corporations or powerful individuals (such as the recent United Airlines and Pepsi fiascoes).

Once a critical mass of people share and comment on a piece of content online, it reaches viral status. Its popularity then grows exponentially before it fades away a short time later.

Technology

The concept of critical mass is crucial when it comes to the adoption of new technology. Every piece of technology which is now (or once was) a ubiquitous part of our lives was once new and novel.

Most forms of technology become more useful as more people adopt them. There is no point in having a telephone if it cannot be used to call other people. There is no point in having an email account if it cannot be used to email other people.

The value of networked technology increases as the size of the network itself does. Eventually, the number of users reaches critical mass, and not owning that particular technology becomes a hindrance. Useful technology tends to lead the first adopters to persuade those around them to try it, too. As a general rule, the more a new technology depends upon a network of users, the faster it will reach critical mass. This situation creates a positive feedback loop.
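This feedback loop is often summarized as Metcalfe’s law: a network’s usefulness grows roughly with the number of possible pairwise connections, n(n − 1)/2, so each new user adds more value than the last. A quick sketch:

```python
def possible_connections(n):
    """Number of distinct user pairs in a network of n users."""
    return n * (n - 1) // 2

# The number of connections grows much faster than the user count:
for users in (10, 100, 1000):
    print(users, possible_connections(users))
```

Ten users can form only 45 connections; a thousand users can form nearly half a million, which is why not owning the technology eventually becomes a hindrance.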

In Zero to One, Peter Thiel describes how PayPal achieved the critical mass of users needed for it to be useful:

For PayPal to work, we needed to attract a critical mass of at least a million users. Advertising was too ineffective to justify the cost. Prospective deals with big banks kept falling through. So we decided to pay people to sign up.

We gave new customers $10 for joining, and we gave them $10 more every time they referred a friend. This got us hundreds of thousands of new customers and an exponential growth rate.

Another illustration of the importance of critical mass for technology (and the unique benefits of crowdfunding) comes from Chris LoPresti:

A friend of mine raised a lot of money to launch a mobile app; however, his app was trounced by one from another company that had raised a tenth of what he had, but had done so through 1,000 angels on Kickstarter. Those thousand angels became the customers and evangelists that provided the all-important critical mass early on. Any future project I do, I’ll do through Kickstarter, even if I don’t need the money.

Urban Legends

Urban legends are an omnipresent part of society, a modern evolution of traditional folklore. They tend to involve references to deep human fears and popular culture. Whereas traditional folklore was often full of fantastical elements, modern urban legends are usually a twist on reality. They are intended to be believed and passed on. Sociologists refer to them as “contemporary legends.” Some can survive for decades, being modified as time goes by and spreading to different areas and groups. Researchers who study urban legends have noted that many do have vague roots in actual events, and are just more sensationalized than the reality.

One classic urban legend is “The Hook.” This story has two key elements: a young couple parked in a secluded area and a killer with a hook for a hand. The radio in their car announces a serial killer on the loose, often escaped from a nearby institution, with a hook for a hand. In most versions, the couple panics and drives off, only to later find a hook hanging from the car door handle. In others, the man leaves the car while the woman listens to the radio bulletin. She keeps hearing a thumping sound on the roof of the car. When she exits to investigate, the killer is sitting on the roof, holding the man’s severed head. The origins of this story are unknown, although it first emerged in the 1950s in America. By 1960, it began to appear in publications.

Urban legends are an example of how a critical mass of people must be reached before an idea can spread. While the exact origins are rarely clear, a legend is assumed to begin with a single person who misunderstands a news story or invents one and passes it on to others, perhaps at a party.

Many urban legends have a cautionary element, so they may first be told in an attempt to protect someone. “The Hook” has been interpreted as a warning to teenagers engaging in promiscuous behavior. When this story is looked at by Freudian folklorists, the implications seem obvious. It could even have been told by parents to their children.

This cautionary element is clear in one of the first printed versions of “The Hook” in 1960:

If you are interested in teenagers, you will print this story. I do not know whether it’s true or not, but it does not matter because it served its purpose for me… I do not think I will ever park to make out as long as I live. I hope this does the same for other kids.

Once a critical mass of people knows an urban legend, the rate at which it spreads grows exponentially. The internet now enables urban legends (and everything else) to pass between people faster. Although a legend might also be disproved faster, that’s a complicated mess. For now, as Lefty says in Donnie Brasco, “Forget about it.”

The more people who believe a story, the more believable it seems. This effect is exacerbated when media outlets or local police fall for the legends and issue warnings. Urban legends often then appear in popular culture (for example, “The Hook” inspired a Supernatural episode) and become part of our modern culture. The majority of people stop believing them, yet the stories linger in different forms.

Changes in Governments and Revolutions

From a distance, it can seem shocking when the people of a country revolt and overthrow dominant powers in a short time.

What is it that makes this sudden change happen? The answer is the formation of a critical mass of people necessary to move marginal ideas to a majority consensus. Pyotr Kropotkin wrote:

Finally, our studies of the preparatory stages of all revolutions bring us to the conclusion that not a single revolution has originated in parliaments or in any other representative assembly. All began with the people. And no revolution has appeared in full armor — born, like Minerva out of the head of Jupiter, in a day. They all had their periods of incubation during which the masses were very slowly becoming imbued with the revolutionary spirit, grew bolder, commenced to hope, and step by step emerged from their former indifference and resignation. And the awakening of the revolutionary spirit always took place in such a manner that at first, single individuals, deeply moved by the existing state of things, protested against it, one by one. Many perished, “uselessly,” the armchair critic would say. But the indifference of society was shaken by these progenitors. The dullest and most narrow-minded people were compelled to reflect, “Why should men, young, sincere, and full of strength, sacrifice their lives in this way?” It was impossible to remain indifferent; it was necessary to take a stand, for, or against: thought was awakening. Then, little by little, small groups came to be imbued with the same spirit of revolt; they also rebelled — sometimes in the hope of local success — in strikes or in small revolts against some official whom they disliked, or in order to get food for their hungry children, but frequently also without any hope of success: simply because the conditions grew unbearable. Not one, or two, or tens, but hundreds of similar revolts have preceded and must precede every revolution.

When an oppressive regime is in power, a change is inevitable. However, it is almost impossible to predict when that change will occur. Often, a large number of people want change and yet fear the consequences or lack the information necessary to join forces. When single individuals act upon their feelings, they are likely to be punished without having any real impact. Only when a critical mass of people’s desire for change overwhelms their fear can a revolution occur. Other people are encouraged by the first group, and the idea spreads rapidly.

One example occurred in China in 1989. While the desire for change was almost universal, the consequences felt too dire. When a handful of students protested for reform in Beijing, authorities did not punish them. We have all seen the classic image of a lone student, shopping bags in hand, standing in front of a procession of tanks and halting them. Those few students who protested were the critical mass. Demonstrations erupted in more than 300 towns all over the country as people found the confidence to act.

Malcolm Gladwell on Tipping Points

An influential text on the topic of critical mass is Malcolm Gladwell’s The Tipping Point. Published in 2000, the book describes a tipping point as “the moment of critical mass, the threshold, the boiling point.” He notes that “Ideas and products and messages and behaviors spread just like viruses do” and cites such examples as the sudden popularity of Hush Puppies and the steep drop in crime in New York after 1990. Gladwell writes that although the world “may seem like an immovable, implacable place,” it isn’t. “With the slightest push — in just the right place — it can be tipped.”

Referring to the 80/20 rule (also known as Pareto’s principle), Gladwell explains how it takes a tiny number of people to kickstart the tipping point in any sort of epidemic:

Economists often talk about the 80/20 Principle, which is the idea that in any situation roughly 80 percent of the “work” will be done by 20 percent of the participants. In most societies, 20 percent of criminals commit 80 percent of crimes. Twenty percent of motorists cause 80 percent of all accidents. Twenty percent of beer drinkers drink 80 percent of all beer. When it comes to epidemics, though, this disproportionality becomes even more extreme: a tiny percentage of people do the majority of the work.

Rising crime rates are also the result of a critical mass of people who see unlawful behavior as justified, acceptable, or necessary. It takes only a small number of people who commit crimes for a place to seem dangerous and chaotic. Gladwell explains how minor transgressions lead to more serious problems:

[T]he Broken Windows theory … was the brainchild of the criminologist James Q. Wilson and George Kelling. Wilson and Kelling argued that crime is the inevitable result of disorder. If a window is broken and left unrepaired, people walking by will conclude that no one cares and no one is in charge. Soon, more windows will be broken, and the sense of anarchy will spread from the building to the street it faces, sending a signal that anything goes. In a city, relatively minor problems like graffiti, public disorder, and aggressive panhandling, they write, are all the equivalent of broken windows, invitations to more serious crimes…

According to Gladwell’s research, there are three main factors in the creation of a critical mass of people necessary to induce a sudden change.

The first of these is the Law of the Few. Gladwell states that certain categories of people are instrumental in the creation of tipping points. These categories are:

  • Connectors: We all know connectors. These are highly gregarious, sociable people with large groups of friends. Connectors are those who introduce us to other people, instigate gatherings, and are the fulcrums of social groups. Gladwell defines connectors as those with networks of over one hundred people. An example of a cinematic connector is Kevin Bacon. There is a trivia game known as “Six Degrees of Kevin Bacon,” in which players aim to connect any actor/actress to him within a chain of six films. Gladwell writes that connectors have “some combination of curiosity, self-confidence, sociability, and energy.”
  • Mavens: Again, we all know a maven. This is the person we call to ask what brand of speakers we should buy, or which Chinese restaurant in New York is the best, or how to cope after a rough breakup. Gladwell defines mavens as “people we rely upon to connect us with new information.” These people help create a critical mass due to their habit of sharing information, passing knowledge on through word of mouth.
  • Salesmen: Whom would you call for advice about negotiating a raise, a house price, or an insurance payout? That person who just came to mind is probably what Gladwell calls a salesman. These are charismatic, slightly manipulative people who can persuade others to accept what they say.

The second factor cited by Gladwell is the “stickiness factor.” This is what makes a change significant and memorable. Heroin is sticky because it is physiologically addictive. Twitter is sticky because we want to keep returning to see what is being said about and to us. Game of Thrones is sticky because viewers are drawn in by the narrative and want to know what happens next. Once something reaches a critical mass, its stickiness determines its rate of decline: the stickier something is, the slower it fades. Cat videos aren’t very sticky, so even the viral ones thankfully fade into the night quickly.

Finally, the third factor is the specific context; the circumstances, time, and place must be right for an epidemic to occur. Understanding how a tipping point works can help to clarify the concept of critical mass.

The 10% Rule

One big question is: what percentage of a population is necessary to create a critical mass?

According to researchers at Rensselaer Polytechnic Institute, the answer is a mere 10%. Computational analysis was used to establish where the shift from minority to majority lies. According to director of research Boleslaw Szymanski:

When the number of committed opinion holders is below 10 percent, there is no visible progress in the spread of ideas. It would literally take the amount of time comparable to the age of the universe for this size group to reach the majority. Once that number grows above 10 percent, the idea spreads like flame.

The research has shown that the 10% can comprise literally anyone in a given society. What matters is that those people are set in their beliefs and do not respond to pressure to change them. Instead, they pass their ideas on to others. (I’d argue that the percentage is lower. Much lower. See Dictatorship of the Minority.)

As an example, Szymanski cites the sudden revolutions in countries such as Egypt and Tunisia: “In those countries, dictators who were in power for decades were suddenly overthrown in just a few weeks.”

According to another researcher:

In general, people do not like to have an unpopular opinion and are always seeking to try locally to come to a consensus … As agents of change start to convince more and more people, the situation begins to change. People begin to question their own views at first and then completely adopt the new view to spread it even further. If the true believers just influenced their neighbors, that wouldn’t change anything within the larger system, as we saw with percentages less than 10.
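The dynamic described here can be sketched with a simplified version of the binary agreement (“naming game”) model used in this line of research: agents hold opinion A, B, or both, and a committed minority always voices A and never changes. Everything below (population size, step count, function name) is my own illustrative choice, not the researchers’ actual code:

```python
import random

def committed_minority(p, n=200, steps=100000, seed=1):
    """Fraction of agents holding only opinion 'A' after `steps` random
    speaker/listener interactions, given a committed fraction p that
    always voices 'A' and never changes its mind."""
    random.seed(seed)
    committed = int(n * p)
    # Each agent's current set of acceptable opinions.
    opinions = [{"A"} if i < committed else {"B"} for i in range(n)]
    for _ in range(steps):
        speaker, listener = random.randrange(n), random.randrange(n)
        if speaker == listener:
            continue
        voiced = random.choice(sorted(opinions[speaker]))
        if listener < committed:
            continue  # committed agents never budge
        if voiced in opinions[listener]:
            # Agreement: both collapse to the voiced opinion.
            opinions[listener] = {voiced}
            if speaker >= committed:
                opinions[speaker] = {voiced}
        else:
            # Disagreement: the listener now entertains both views.
            opinions[listener].add(voiced)
    return sum(1 for o in opinions if o == {"A"}) / n
```

Running this with committed fractions well below 10% tends to leave the majority unconverted, while larger fractions can tip the whole population; the exact behavior is stochastic and depends on the parameters.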

The potential use of this knowledge is tremendous. Now that we know how many people are necessary to form a critical mass, this information can be manipulated — for good or evil. The choice is yours.

Why the Printing Press and the Telegraph Were as Impactful as the Internet

What makes a communications technology revolutionary? One answer to this is to ask whether it fundamentally changes the way society is organized. This can be a very hard question to answer, because true fundamental changes alter society in such a way that it becomes difficult to speak of past society without imposing our present understanding.

In her seminal work, The Printing Press as an Agent of Change, Elizabeth Eisenstein argues just that:

When ideas are detached from the media used to transmit them, they are also cut off from the historical circumstances that shape them, and it becomes difficult to perceive the changing context within which they must be viewed.

Today we rightly think of the internet and the mobile phone as revolutionary, but in their day the printing press and the telegraph had just as heavy an impact on the development of society.

Printing Press

Thinking of the time before the telegraph, when communications had to be hand-delivered, feels quaint. Trying to conceive of the world before the uniformity of communication brought about by the printing press is almost unimaginable.

Eisenstein argues that the printing press “is of special historical significance because it produced fundamental alterations in prevailing patterns of continuity and change.”

Before the printing press there were no books, not in the sense that we understand them. There were manuscripts copied by scribes, full of inconsistencies, embellishments, and modifications that suited whoever the scribe was working for. The printing press halted the evolution of symbols: for the first time, maps and numbers were fixed.

Furthermore, because pre-press scholars had to go to manuscripts, Eisenstein says we should “recognize the novelty of being able to assemble diverse records and reference guides, and of being able to study them without having to transcribe them at the same time” that was afforded by the printing press.

This led to new ways of being able to compare and thus develop knowledge, by reducing the friction of getting to the old knowledge:

More abundantly stocked bookshelves obviously increased opportunities to consult and compare different texts. Merely by making more scrambled data available, by increasing the output of Aristotelian, Alexandrian and Arabic texts, printers encouraged efforts to unscramble these data.

Eisenstein argues that many of the great thinkers of the 16th century, such as Descartes and Montaigne, would have been unlikely to produce what they did without the changes wrought by the printing press. She notes that Montaigne “could see more books by spending a few months in his Bordeaux tower-study than earlier scholars had seen after a lifetime of travel.”

The printing press increased the speed of communication and the spread of knowledge: far fewer man-hours were needed to turn out fifty printed books than fifty scribed manuscripts.

Telegraph

Henry Ford famously said of life before the car, “If I had asked people what they wanted, they would have said faster horses.” This sentiment could equally be applied to the telegraph, a communications technology that came about 400 years after the printing press.

Before the telegraph, the speed of communication was dependent on the speed of the physical object doing the transporting – the horse, or the ship. Societies were thus organized around the speed of communication available to them, from the way business was conducted and wars were fought to the way interpersonal communication was conducted.

Let’s consider, for example, the way the telegraph changed the conduct of war.

Prior to the telegraph, countries shared detailed knowledge of their plans with their citizens in order to boost morale, knowing that word of those plans could reach the enemy no faster than their ships did. Post-telegraph, communications could arrive far faster than soldiers: this was something to consider!

In addition, as Tom Standage considers in his book The Victorian Internet, the telegraph altered the command structure in battle. “For who was better placed to make strategic decisions: the commander at the scene or his distant superiors?”

The telegraph brought changes similar in many ways to the printing press: It allowed for an accumulation of knowledge and increased the availability of this knowledge; more people had access to more information.

And society was forever altered, as the new speed of communication made it fundamentally impossible not to use the telegraph, just as it is nearly impossible not to use a mobile phone or the internet today.

Once the telegraph was widespread, there was no longer a way to do business without using it. Up-to-the-minute stock quotes changed the way businesses evaluated their holdings. Being able to communicate with offices across the country created centralization and middle management. These elements became so much a part of doing business that it became nonsensical to talk about developing any aspect of business independent of the effect of electronic communication.

A Final Thought on Technology Uptake

One can argue that the more revolutionary an invention is, the slower the initial uptake into society, as society must do a fair amount of reorganizing to integrate the invention.

Such was the case for both the telegraph and the printing press, as they allowed for things that were never before possible. Because those things had not been possible, they were rarely considered; because they were rarely considered, no large populace was pining for them to happen. So when the new options presented themselves, no one rushed to embrace them, because there was no general appreciation of their potential. This is, of course, a fundamental aspect of revolutionary technology: everyone has to figure out how (and why) to use it.

In The Victorian Internet, Standage says of William Cooke and Samuel Morse, the British and American inventors, respectively, of the telegraph:

[They] had done the impossible and constructed working telegraphs. Surely the world would fall at their feet. Building the prototypes, however, turned out to be the easy part. Convincing people of their significance was far more of a challenge.

It took years for people to see the advantages of the telegraph. Even after the first lines were built, and the accuracy and speed of the communications they could carry were verified, Morse realized that “everybody still thought of the telegraph as a novelty, as nothing more than an amusing subject for a newspaper article, rather than the revolutionary new form of communication that he envisaged.”

The new technology might confer great benefits, but it took a lot of work building the infrastructure, both physical and mental, to take advantage of them.

The printing press faced similar challenges. In fact, books printed from Gutenberg’s time until 1501 have their own term, incunabula, which reflects the transition from manuscript to book. Eisenstein writes: “Printers and scribes copied each other’s products for several decades and duplicated the same texts for the same markets during the age of incunabula.”

The momentum took a while to build. When it did, the changes were remarkable.

But looking at these two technologies serves as a reminder of what revolutionary means in this context: the use by, and value to, society cannot be anticipated. That is why great and unpredictable shifts occur when such technologies are adopted and integrated into everyday life.

Memory and the Printing Press

You probably know that Gutenberg invented the printing press. You probably know it was pretty important. You may have heard some stuff about everyone being able to finally read the Bible without a priest handy. But here’s a point you might not be familiar with: The printing press changed why, and consequently what, we remember.

Before the printing press, memory was the main store of human knowledge. Scholars had to go find books, often traveling from one scriptorium to another. They couldn’t buy books, and individuals did not have libraries. The ability to remember was integral to the social accumulation of knowledge.

Thus, for centuries, humans had built ways to remember out of pure necessity. Because knowledge wasn’t fixed, remembering content was the only way to access it. Things had to be known in a deep, accessible way, as Elizabeth Eisenstein argues in The Printing Press as an Agent of Change:

As learning by reading took on new importance, the role played by mnemonic aids was diminished. Rhyme and cadence were no longer required to preserve certain formulas and recipes. The nature of the collective memory was transformed.

In the Church, for example, Eisenstein describes a multimedia approach to remembering the Bible. As a manuscript, it was not widely available, not even to many church representatives; the stories of the Bible were often pictorially represented in the churches themselves. The use of images, both physical and mental, was critical to storing knowledge in memory: they were used as a tool to create extensive “memory palaces” enabling the retention of knowledge.

Not only did printing eliminate many functions previously performed by stone figures over portals and stained glass in windows, but it also affected less tangible images by eliminating the need for placing figures and objects in imaginary niches located in memory theatres.

Thus, in an age before the printing press, bits of knowledge were associated with other bits of knowledge not because they complemented each other, or allowed for insights, but merely so they could be retained.

…the heavy reliance on memory training and speech arts, combined with the absence of uniform conventions for dating and placing [meant that] classical images were more likely to be placed in niches in ‘memory theatres’ than to be assigned a permanent location in a fixed past.

In our post on memory palaces, we used the analogy of a cow and a steak. To continue with that analogy, imagine that your partner asks you to pick up steak for dinner. To increase your chances of remembering the request, you envision a cow sitting on the front porch. When you mind-walk through your palace, you see this giant cow sitting there, perhaps waving at you (so unlike a cow!), causing you to think, “Why is that cow there? Oh yeah, pick up steak for dinner.”

Before the printing press, it wasn’t just about picking up dinner. It was all of our knowledge: Euclid’s Elements and Aristotle’s Politics, the works of St. Augustine and Seneca. These works were most often shared orally, passing from memory to memory. Thus, in the age of scribes, memory was not so much about remembering as about preserving.

Consequently, knowledge was far less widely shared, and then only with those who could understand and recall it.

To be preserved intact, techniques had to be entrusted to a select group of initiates who were instructed not only in special skills but also in the ‘mysteries’ associated with them. Special symbols, rituals, and incantations performed the necessary function of organizing data, laying out schedules, and preserving techniques in easily memorized forms.

Anyone who’s played the game “Telephone” knows the problem: As knowledge is passed on, over and over, it gets transformed, sometimes distorted. This needed to be guarded against, and sometimes couldn’t be. As there was no accessible reference library for knowledge, older texts were prized because they were closer to the originals.

Not only could more be learned from retrieving an early manuscript than from procuring a recent copy but the finding of lost texts was the chief means of achieving a breakthrough in almost any field.

It seems almost incomprehensible today, but “Energies were expended on the retrieval of ancient texts because they held the promise of finding so much that still seemed new and untried.” Only by finding older texts could scholars hope to discover the original, unaltered sources of knowledge.

With the advent of the printing press, images and words became something else. Because they were now repeatable, they became fixed. No longer individual interpretations designed for memory access, they became part of the collective.

The effects of this were significant.

Difficulties engendered by diverse Greek and Arabic expressions, by medieval Latin abbreviations, by confusion between Roman letters and numbers, by neologisms, copyists’ errors and the like were so successfully overcome that modern scholars are frequently absent-minded about the limitations on progress in the mathematical sciences which scribal procedures imposed. … By the seventeenth century, Nature’s language was being emancipated from the old confusion of tongues. Diverse names for flora and fauna became less confusing when placed beneath identical pictures. Constellations and landmasses could be located without recourse to uncertain etymologies, once placed on uniform maps and globes. … The development of neutral pictorial and mathematical vocabularies made possible a large-scale pooling of talents for analyzing data, and led to the eventual achievement of a consensus that cut across all the old frontiers.

A key component of this was that apprentices and new scholars could consult books and didn’t have to exclusively rely on the memories of their superiors.

An updated technical literature enabled young men in certain fields of study to circumvent master-disciple relationships and to surpass their elders at the same time. Isaac Newton was still in his twenties when he mastered available mathematical treatises, beginning with Euclid and ending with an updated edition of Descartes. In climbing ‘on the shoulders of giants’ he was not re-enacting the experience of twelfth-century scholars for whom the retrieval of Euclid’s theorems had been a major feat.

Before the printing press, a scholar could spend his lifetime looking for a copy of Euclid’s Elements and never find one, having to rely instead on how the text was encoded in the memories of the scholars he encountered.

After the printing press, memory became less critical to knowledge. Knowledge became more widely dispersed as reliance on memory for interpretation and understanding diminished. And with that, the collective power of the human mind was multiplied.

If you liked this post, check out our series on memory, starting with the advantages of our faulty memory, and continuing to the first part on our memory’s frequent errors.