Why Read? Advice From Harold Bloom

The late Harold Bloom, literary critic and professor, may well have been one of the most prolific readers of all time. Given that, Bloom was uniquely well positioned to answer the question of why we should read and how we should go about it.

According to legend, Bloom could read a 400-page book in an hour without sacrificing comprehension and could recite the whole of Shakespeare’s poetry by heart. He was also a prodigious writer, producing over fifty books during his lifetime, as well as editing hundreds of anthologies.

In How to Read and Why, Bloom dispenses wisdom for the avid reader. In this article, we’ll share some of the most striking advice from the book on… well, how to read and why.

***

Introduction

The most healing of pleasures

“Reading well is one of the great pleasures that solitude can afford you, because it is, at least in my experience, the most healing of pleasures. It returns you to otherness, whether in yourself or in friends, or in those who may become friends. Imaginative literature is otherness, and as such alleviates loneliness. We read not only because we cannot know enough people, but because friendship is so vulnerable, so likely to diminish or disappear, overcome by space, time, imperfect sympathies, and all the sorrows of familial and passional life.”

The value of irony

“Irony demands a certain attention span and the ability to sustain antithetical ideas, even when they collide with one another. Strip irony away from reading, and it loses at once all discipline and all surprise. Find now what comes near to you, that can be used for weighing and considering, and it will very likely be irony, even if many of your teachers will not know what it is, or where it is to be found.”

Why read?

“We read deeply for varied reasons, most of them familiar: that we cannot know enough people profoundly enough; that we need to know ourselves better; that we require knowledge, not just of self and others, but of the way things are. Yet the strongest, most authentic motive for deep reading of the now much-abused traditional canon is the search for a difficult pleasure.

. . . I urge you to find what truly comes near to you, that can be used for weighing and considering. Read deeply, not to believe, not to accept, not to contradict, but to learn to share in that one nature that writes and reads.”

***

Chapter 1: Short Stories

How to read short stories

“Short stories favor the tacit; they compel the reader to be active, and to discern explanations that the writer avoids. The reader, as I have said before, must slow down, quite deliberately, and start listening with the inner ear. Such listening overhears the characters, as well as hearing them; think of them as your characters, and wonder at what is implied, rather than told about them. Unlike most figures in novels, their foregrounding and backgrounding are largely up to you, utilizing the hints subtly provided by the writer.”

***

Chapter 2: Poems

How to read poems

“. . . Wherever possible, memorize them. . . . Silent intensive rereadings of a shorter poem that truly finds you should be followed by recitations to yourself until you discover that you are in possession of the poem. . . . Committed to memory, the poem will possess you, and you will be able to read it more closely, which great poetry demands and rewards.”

Why read poetry?

“Only rarely can poetry aid us in communicating with others; that is beautiful idealism, except at certain strange moments, like the instant of falling in love. Solitude is the more frequent mark of our condition; how shall we people that solitude? Poems can help us to speak to ourselves more clearly and more fully, and to overhear that speaking. . . . We speak to an otherness in ourselves, or to what may be best and oldest in ourselves. We read to find ourselves, more fully and more strange than otherwise we could hope.”

***

Chapter 3: Novels, Part 1

The difference between novels and poetry

“In some respects, reading a novel ought not to differ much from reading Shakespeare or reading a lyric poem. What matters most is who you are, since you cannot evade bringing yourself to the act of reading. Because most of us also bring definite expectations, a difference enters with the novel, where we think to encounter, if not our friends and ourselves, then a recognizable social reality, whether contemporary or historical.

. . . Novels require more readers than poems do, a statement so odd that it puzzles me, even as I agree with it. Tennyson, Browning, and Robert Frost had large audiences, but perhaps did not need them. Dickens and Tolstoy had vast readerships, and needed them; multitudes of overhearers are built into their art. How do you read a novel differently if you suspect you are one of a dwindling elite rather than the representative of a great multitude?”

Why read Don Quixote?

“Reading Don Quixote is an endless pleasure, and I hope I have indicated some aspects of how to read it. We are, many of us, Cervantine figures, mixed blends of the Quixotic and the Panzaesque. . . . It remains the best as well as the first of all novels, just as Shakespeare remains the best of all dramatists. There are parts of yourself you will not know fully until you know, as well as you can, Don Quixote and Sancho Panza.”

How to read Great Expectations

“With the deepest elements in one’s own fears, hopes, and affections: to read as if one could be a child again. Dickens invites you to do so, and makes it possible for you; that may be his greatest gift. Great Expectations does not take us into the Sublime, as Shakespeare and Cervantes do. It wants to return us to origins, painful and guilty as perhaps they must be. The novel’s appeal to our childlike need for love, and recovery of the self, is nearly irresistible. The “why” of reading it is then self-evident: to go home again, to heal our pain.”

A question to ask of great novels

“Do the principal characters change and, if they do, what causes them to change?”

Again, why read?

“The ultimate answer to the question “Why read?” is that only deep, constant reading fully establishes and augments an autonomous self. Until you become yourself, what benefit can you be to others?”

***

Chapter 4: Plays

Why read Hamlet?

“Because, by now, this play makes us an offer we cannot refuse. It has become our tradition, and the word our there is enormously inclusive. Prince Hamlet is the intellectual’s intellectual: the nobility, and the disaster, of Western consciousness. Now Hamlet has also become the representation of intelligence itself, and that is neither Western nor Eastern, male nor female, black nor white, but merely the human at its best, because Shakespeare is the first truly multicultural writer.”

How to read Shakespeare

“Reading Shakespeare’s plays, you learn to meditate upon what is left out. That is one of the many advantages that a reader has over a theatergoer in regard to Shakespeare. Ideally, one should read a Shakespeare play, watch a good performance of it, and then read it again. Shakespeare himself, directing his play at the Globe, must have experienced discomfort at how much a performance had to neglect, though we have no evidence of this. However instructed by Shakespeare, it is difficult to imagine the actor Richard Burbage catching and conveying all of Hamlet’s ironies, or the clown Will Kemp encompassing the full range of Falstaff’s wit in the Henry IV plays.”

***

Conclusion

At FS, we often talk about the benefits of reading as a way of learning from the experiences of others and avoiding mistakes. But, as Bloom shows us, the benefits are not just about becoming smarter and more productive.

Reading can help us alleviate loneliness and get to know more people on an intimate level than we could otherwise. It can provide greater self-knowledge, as the words of others give us a lens for understanding ourselves. As a “difficult pleasure,” the ways in which books challenge us help us to grow. Wrestling with a text teaches us a great deal about our capabilities and our values. There is also immense satisfaction and increased confidence when we conquer it. Reading helps you to become your full, autonomous self.

We can also learn from Bloom that there is much value in paying attention to how you approach different types of writing. No one approach works all of the time. Short stories require the ability to pick up on clues about what isn’t included. Poetry is more illuminating if memorized. The way we experience a novel has a lot to do with who we are and our perception of its popularity. And plays teach us how much more is going on beneath the surface of what we see.

One last time: why read?

“Because you will be haunted by great visions: of Ishmael, escaped alone to tell us; of Oedipa Maas, cradling the old derelict in her arms; of Invisible Man, preparing to come up again, like Jonah, out of the whale’s belly. All of them, on some of the higher frequencies, speak to and for you.”

Why Life Can’t Be Simpler

We’d all like life to be simpler. But we also don’t want to sacrifice our options and capabilities. Tesler’s law of the conservation of complexity, a rule from design, explains why we can’t have both. Here’s how the law can help us create better products and services by rethinking simplicity.

“Why can’t life be simple?”

We’ve all likely asked ourselves that at least once. After all, life is complicated. Every day, we face processes that seem almost infinitely recursive. Each step requires the completion of a different task to make it possible, which in itself requires another task. We confront tools requiring us to memorize reams of knowledge and develop additional skills just to use them. Endeavors that seem like they should be simple, like getting utilities connected in a new home or figuring out the controls for a fridge, end up having numerous perplexing steps.

When we wish for things to be simpler, we usually mean we want products and services to have fewer steps, fewer controls, fewer options, less to learn. But at the same time, we still want all of the same features and capabilities. These two categories of desires are often at odds with each other and distort how we understand the complex.

***

Conceptual Models

In Living with Complexity, Donald A. Norman explains that complexity is all in the mind. Our perception of a product or service as simple or complex has its basis in the conceptual model we have of it. Norman writes that “A conceptual model is the underlying belief structure held by a person about how something works . . . Conceptual models are extremely important tools for organizing and understanding otherwise complex things.”

For example, on many computers, you can drag and drop a file into a folder. Both the file and the folder often have icons that represent their real-world namesakes. For the user, this process is simple; it provides a clear conceptual model. When people first started using graphical interfaces, real-world terms and icons made it easier to translate what they were doing. But the process only seems simple because of this effective conceptual model. It doesn’t represent what happens on the computer, where files and folders don’t exist. Computers store data wherever is convenient and may split files across multiple locations.

When we want something to be simpler, what we truly need is a better conceptual model of it. Once we know how to use them, complex tools end up making our lives simpler because they provide the precise functionality we want. A computer file is a great conceptual model because it hijacked something people already understood: physical files and folders. It would have been much harder for users to develop a whole new conceptual model reflecting how computers actually store files. What’s important to note is that giving users this simple conceptual model didn’t change how things work behind the scenes.
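To make that concrete, here is a minimal sketch in Python (a toy of our own, not how any real filesystem works): the interface the user sees matches the simple conceptual model of named files, while behind the scenes the data is scattered across blocks.

```python
# Toy illustration: a simple user-facing conceptual model (named files)
# hiding messier storage details (scattered fixed-size blocks).
# All names here are invented for illustration.

class ToyFileSystem:
    BLOCK_SIZE = 4

    def __init__(self):
        self._blocks = {}   # block id -> fragment of content
        self._index = {}    # filename -> ordered list of block ids
        self._next_id = 0

    def write(self, name, data):
        # Behind the scenes: split the data across blocks stored
        # "wherever is convenient" (here, keyed by an arbitrary counter).
        ids = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            self._blocks[self._next_id] = data[i:i + self.BLOCK_SIZE]
            ids.append(self._next_id)
            self._next_id += 1
        self._index[name] = ids

    def read(self, name):
        # The user's view: one named file, read back whole.
        return "".join(self._blocks[i] for i in self._index[name])


fs = ToyFileSystem()
fs.write("notes.txt", "complexity is conserved")
print(fs.read("notes.txt"))  # -> complexity is conserved
```

The two methods the user touches stay simple; all the block bookkeeping, the conserved complexity, lives inside the class.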

Removing functionality doesn’t make something simpler, because it removes options. Simple tools can only simplify so much: trying to do something complex with a simple tool is more complex than doing the same thing with a more capable, more complex tool.

A useful analogy here is the hand tools used by craftspeople, such as a silversmith’s planishing hammer (a tool used to shape and smooth the surface of metal). Norman highlights that these tools seem simple to the untrained eye. But using them requires great skill and practice. A craftsperson needs to know how to select them from the whole constellation of specialized tools they possess.

In itself, a planishing hammer might seem far, far simpler than, say, a digital photo editing program. Look again, Norman says. We have to compare the photo editing tool with the silversmith’s whole workbench. Both take a lot of time and practice to master. Both consist of many tools that are individually simple. Learning how and when to use them is the complex part.

Norman writes, “Whether something is complicated is in the mind of the beholder.” Looking at a workbench of tools or a digital photo editing program, a novice sees complexity. A professional sees a range of different tools, each of which is simple to use. They know when to use each to make a process easier. Having fewer options would make their life more complex, not simpler, because they wouldn’t be able to break what they need to do down into individually simple steps. A professional’s experience-honed conceptual model helps them navigate a wide range of tools.

***

The conservation of complexity

To do difficult things in the simplest way, we need a lot of options.

Complexity is necessary because it gives us the functionality we need. A useful framework for understanding this is Tesler’s law of the conservation of complexity, which states:

The total complexity of a system is a constant. If you make a user’s interaction with a system simpler, the complexity behind the scenes increases.

The law originates from Lawrence Tesler (1945–2020), a computer scientist specializing in human-computer interactions who worked at Xerox, Apple, Amazon, and Yahoo! Tesler was influential in the development of early graphical interfaces, and he was a co-creator of copy-and-paste functionality.

Complexity is like energy. It cannot be created or destroyed, only moved somewhere else. When a product or service becomes simpler for users, engineers and designers have to work harder. Norman writes, “With technology, simplifications at the level of usage invariably result in added complexity of the underlying mechanism.” For example, the files and folders conceptual model for computer interfaces doesn’t change how files are stored, but by putting in extra work to translate the process into something recognizable, designers make navigating them easier for users.

Whether something looks simple or is simple to use says little about its overall complexity. “What is simple on the surface can be incredibly complex inside: what is simple inside can result in an incredibly complex surface. So from whose point of view do we measure complexity?”

***

Out of control

Every piece of functionality requires a control—something that makes something happen. The more complex something is, the more controls it needs—whether they are visible to the user or not. Controls may be directly accessible to a user, as with the home button on an iPhone, or they may be behind the scenes, as with an automated thermostat.

From a user’s standpoint, the simplest products and services are those that are fully automated and do not require any intervention (unless something goes wrong).

As long as you pay your bills, the water supply to your house is probably fully automated. When you turn on a tap, you don’t have to request that water be in the pipes first. The companies that manage the water supply handle the complexity.

Or, if you stay in an expensive hotel, you might find your room is always as you want it, with your minifridge fully stocked with your favorites and any toiletries you forgot provided. The staff work behind the scenes to make this happen, without you needing to make requests.

On the other end of the spectrum, we have products and services that require users to control every last step.

A professional photographer is likely to use a camera that requires them to set everything manually, from white balance to shutter speed. The camera itself needs little automation, but the user must operate controls for everything, which gives them full control over the results. An amateur photographer might use a camera that chooses these settings automatically, so all they need to do is point and shoot. In this case, the complexity transfers to the camera’s inner workings.
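The same transfer is easy to see in code. In this toy Python sketch (the names and the exposure logic are our own inventions, not any real camera’s API), the manual function pushes every decision onto the caller, while the automatic one absorbs those decisions internally:

```python
# Toy sketch of complexity transfer: manual vs. automatic interfaces.

def shoot_manual(scene_brightness, iso, shutter_speed, white_balance):
    """The user supplies every setting; the function itself stays trivial."""
    exposure = scene_brightness * iso * shutter_speed
    return f"photo(exposure={exposure:.2f}, wb={white_balance})"

def shoot_auto(scene_brightness):
    """The user supplies nothing; the complexity moves inside, where the
    function must guess sensible settings from the scene."""
    if scene_brightness > 0.5:
        iso, shutter_speed, white_balance = 100, 1 / 250, "daylight"
    else:
        iso, shutter_speed, white_balance = 800, 1 / 60, "tungsten"
    return shoot_manual(scene_brightness, iso, shutter_speed, white_balance)

print(shoot_manual(0.7, iso=200, shutter_speed=1 / 500, white_balance="cloudy"))
print(shoot_auto(0.7))  # same kind of result; the choices happened inside
```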

In the restaurants inside IKEA stores, customers typically perform tasks such as filling up drinks and clearing away dishes themselves. This means less complexity for staff and much lower prices compared to restaurants where staff do these things.

***

Lessons from the conservation of complexity

The first lesson from Tesler’s law of the conservation of complexity is that how simple something looks is not a reflection of how simple it is to use. Removing controls can mean users need to learn complex sequences to use the same features—similar to how languages with fewer sounds have longer words. One way to conceptualize the movement of complexity is through the notion of trade-offs. If complexity is constant, then there are trade-offs depending on where that complexity is moved.

A very basic example of complexity trade-offs can be found in the history of arithmetic. For centuries, counting systems all over the world relied on moving stones or beads on tools like the tabula (the Romans) or the soroban (the Japanese) to facilitate adding and subtracting numbers. They were easy to use, but not easily portable. Then the Hindu-Arabic system came along (the one we use today) and, by virtue of employing columns and thus not requiring any moving parts, offered a much more portable counting system. However, the portability came with a cost.

Paul Lockhart explains in Arithmetic, “With the Hindu-Arabic system the writing and calculating are inextricably linked. Instead of moving stones or sliding beads, our manipulations become transmutations of the symbols themselves. That means we need to know things. We need to know that one more than 2 is 3, for instance. In other words, the price we pay [for portability] is massive amounts of memorization.” Thus, there is a trade-off. The simpler arithmetic system requires more complexity in terms of the memorization required of the users. We all went through the difficult process of learning mathematical symbols early in life. Although they might seem simple to us now, that’s just because we’re so accustomed to them.

Although perceived simplicity may have greater appeal at first, users are soon frustrated if it means greater operational complexity. Norman writes:

Perceived simplicity is not at all the same as simplicity of usage: operational simplicity. Perceived simplicity decreases with the number of visible controls and displays. Increase the number of visible alternatives and the perceived simplicity drops. The problem is that operational simplicity can be drastically improved by adding more controls and displays. The very things that make something easier to learn and to use can also make it be perceived as more difficult.

Even if it receives a negative reaction before usage, operational simplicity is the more important goal. For example, in a company, naming a directly responsible person for each project might seem more complex than letting a project be a team effort that falls to whoever is best suited to each part. But in practice, the team-effort approach adds complexity whenever someone tries to move the project forward or needs to know who should hear feedback about problems.

A second lesson is that things don’t always need to be incredibly simple for users. People have an intuitive sense that complexity has to go somewhere. When using a product or service is too simple, users can feel suspicious or robbed of control. They know that a lot more is going on behind the scenes; they just don’t know what it is. Sometimes we need to preserve a minimum level of complexity so that users feel like actual participants. According to legend, cake mixes call for a fresh egg because early versions using dried eggs made baking feel a bit too lazy and low effort.

An example of desirable minimum complexity is help with homework. For many parents, helping their children with their homework often feels like unnecessary complexity. It is usually subjects and facts they haven’t thought about in years, and they find themselves having to relearn them in order to help their kids. It would be far simpler if the teachers could cover everything in class to a degree that each child needed no additional practice. However, the complexity created by involving parents in the homework process helps make parents more aware of what their children are learning. In addition, they often get insight into areas of both struggle and interest, can identify ways to better connect with their children, and learn where they may want to teach them some broader life skills.

When we seek to make things simpler for other people, we should recognize that there may be a point where further simplification leads to a worse experience. Simplicity is not an end in itself—other things, like speed, usability, and saved time, are. We shouldn’t simplify things from the user’s standpoint just for the sake of it.

If changes don’t make something better for users, we’re just creating unnecessary behind-the-scenes complexity. People want to feel in control, especially when it comes to something important. We want to learn a bit about what’s happening, and an overly simple process teaches us nothing.

A third lesson is that products and services are only as good as what happens when they break. Handling a problem with something that has lots of user-side controls may be easier for the user, who is used to being involved in its operation. If something has been fully automated up until the point where it breaks, users don’t know how to react. The change is jarring, and they may freeze or overreact. Because fully automated things fade into the background, a failure may be the user’s most salient and memorable interaction with a product or service. If handling a problem is difficult for the user—for example, if there’s a lack of rapid support or instructions available or it’s hard to ascertain what went wrong in the first place—they may come away with a negative overall impression, even if everything worked fine for years beforehand.

A big challenge in the development of self-driving cars is that a driver needs to be able to take over if the car encounters a problem. But if someone hasn’t had to operate the car manually for a while, they may panic or forget what to do. So it’s a good idea to limit how long the car drives itself. The same is purportedly true for airplane pilots. If the plane does too much of the work, the pilot won’t cope well in an emergency.

A fourth lesson is the importance of thinking about how the level of control you give your customers or users influences your workload. For a graphic designer, asking a client to detail exactly how they want their logo to look makes their work simpler. But it might be hard work for the client, who might not know what they want or may make poor choices. A more experienced designer might ask a client for much less information and instead put the effort into understanding their overall brand and deducing their needs from subtle clues, then figuring out the details themselves. The more autonomy a manager gives their team, the lower their workload, and vice versa.

If we accept that complexity is a constant, we need to always be mindful of who is bearing the burden of that complexity.

 

Being Smart is Not Enough

When hiring a team, we tend to favor the geniuses who hatch innovative ideas, but overlook the butterflies, the crucial ones who share and implement them. Here’s why it’s important to be both smart AND social.

***

In business, it’s never enough to have a great idea. For any innovation to be successful, it has to be shared, promoted, and bought into by everyone in the organization. Yet often we focus on the importance of those great ideas and seem to forget about the work that is required to spread them around.

Whenever we are building a team, we tend to look for smarts. We are attracted to those with lots of letters after their names or fancy awards on their resumes. We assume that if we hire the smartest people we can find, they will come up with new, better ways of doing things that save us time and money.

Conversely, we often look down on predominantly social people. They seem to spend too much time gossiping and not enough time working. We assume they’ll be too busy engaging on social media or away from their desks too often to focus on their duties, and thus we avoid hiring them.

Although we aren’t going to tell you to swear off smarts altogether, we are here to suggest that maybe it’s time to reconsider the role that social people play in cultural growth and the diffusion of innovation.

In his book, The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, Joseph Henrich explores the role of culture in human evolution. One point he makes is that it’s not enough for a species to be smart. What counts far more is having the cultural infrastructure to share, teach, and learn.

Consider two very large prehuman populations, the Geniuses and the Butterflies. Suppose the Geniuses will devise an invention once in 10 lifetimes. The Butterflies are much dumber, only devising the same invention once in 1000 lifetimes. So, this means that the Geniuses are 100 times smarter than the Butterflies. However, the Geniuses are not very social and have only 1 friend they can learn from. The Butterflies have 10 friends, making them 10 times more social.

Now, everyone in both populations tries to obtain an invention, both by figuring it out for themselves and by learning from friends. Suppose learning from friends is difficult: if a friend has it, a learner only learns it half the time. After everyone has done their own individual learning and tried to learn from their friends, do you think the innovation will be more common among the Geniuses or the Butterflies?

Well, among the Geniuses a bit fewer than 1 out of 5 individuals (18%) will end up with the invention. Half of those Geniuses will have figured it out all by themselves. Meanwhile, 99.9% of Butterflies will have the innovation, but only 0.1% will have figured it out by themselves.

Wow.
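If you want to check those numbers, the model as described can be run in a few lines of Python. This sketch is ours, but it follows the setup exactly: everyone tries to invent solo at the stated odds, then has a 50% chance of learning from each friend who has the invention, and we iterate to the steady state:

```python
# Verify Henrich's Geniuses-vs-Butterflies numbers as described above.

def prevalence(invent_rate, n_friends, learn_rate=0.5, iterations=200):
    """Steady-state share of the population holding the invention.
    You have it if you invented it yourself OR learned it from at least
    one friend: p = 1 - (1 - invent_rate) * (1 - learn_rate * p)**n_friends.
    Solved here by simple fixed-point iteration."""
    p = 0.0
    for _ in range(iterations):
        p = 1 - (1 - invent_rate) * (1 - learn_rate * p) ** n_friends
    return p

geniuses = prevalence(invent_rate=1 / 10, n_friends=1)
butterflies = prevalence(invent_rate=1 / 1000, n_friends=10)
print(f"Geniuses with the invention:    {geniuses:.1%}")     # ~18.2%
print(f"Butterflies with the invention: {butterflies:.1%}")  # ~99.9%
# Share who invented it solo rather than learning it:
print(f"Solo share among Geniuses:    {(1 / 10) / geniuses:.0%}")       # ~55%
print(f"Solo share among Butterflies: {(1 / 1000) / butterflies:.1%}")  # ~0.1%
```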

What if we take this thinking and apply it to the workplace? Of course you want to have smart people. But you don’t want an organization full of Geniuses. They might come up with a lot, but without being able to learn from each other easily, many of their ideas won’t gain any uptake in the organization. Instead, you’d want to pair Geniuses with Butterflies—socially attuned people who are primed to adopt the successful behaviors of those around them.

If you think you don’t need Butterflies because you can just put Genius innovations into policy and procedure, you’re missing the point. Sure, some brilliant ideas are concrete, finite, and visible. Those are the ones you can identify and implement across the organization from the top down. But some of the best ideas happen on the fly in isolated, one-off situations as responses to small changes in the environment. Perhaps there’s a minor meeting with a client, and the Genius figures out a new way of describing your product that really resonates. The Genius, though, is not a teacher. It worked for them and they keep repeating the behavior, but it doesn’t occur to them to teach someone else. And they don’t pick up on other tactics to further refine their innovation.

But the Butterfly who went to the meeting with the Genius? They pick up on the successful new product description right away. They emulate it in all meetings from then on. They talk about it with their friends, most of whom are also Butterflies. Within two weeks, the new description has taken off because of the propensity for cultural learning embedded in the social Butterflies.

The lesson here is to hire both types of people. Know that it’s the Geniuses who innovate, but it’s the Butterflies who spread that innovation around. Both components are required for successfully implementing new, brilliant ideas.

The Spiral of Silence

Our desire to fit in with others means we don’t always say what we think. We only express opinions that seem safe. Here’s how the spiral of silence works and how we can discover what people really think.

***

Be honest: How often do you feel as if you’re really able to express your true opinions without fearing judgment? How often do you bite your tongue because you know you hold an unpopular view? How often do you avoid voicing any opinion at all for fear of having misjudged the situation?

Even in societies with robust free speech protections, most people don’t often say what they think. Instead they take pains to weigh up the situation and adjust their views accordingly. This comes down to the “spiral of silence,” a human communication theory developed by German researcher Elisabeth Noelle-Neumann in the 1960s and ’70s. The theory explains how societies form collective opinions and how we make decisions surrounding loaded topics.

Let’s take a look at how the spiral of silence works and how understanding it can give us a more realistic picture of the world.

***

How the spiral of silence works

According to Noelle-Neumann’s theory, our willingness to express an opinion is a direct result of how popular or unpopular we perceive it to be. If we think an opinion is unpopular, we will avoid expressing it. If we think it is popular, we will make a point of showing we think the same as others.

Controversy is also a factor—we may be willing to express an unpopular uncontroversial opinion but not an unpopular controversial one. We perform a complex dance whenever we share views on anything morally loaded.

Our perception of how “safe” it is to voice a particular view comes from the clues we pick up, consciously or not, about what everyone else believes. We make an internal calculation based on signs like what the mainstream media reports, what we overhear coworkers discussing on coffee breaks, what our high school friends post on Facebook, or prior responses to things we’ve said.

We also weigh up the particular context, based on factors like how anonymous we feel or whether our statements might be recorded.

As social animals, we have good reason to be aware of whether voicing an opinion might be a bad idea. Cohesive groups tend to have similar views. Anyone who expresses an unpopular opinion risks social exclusion or even ostracism within a particular context or in general. This may be because there are concrete consequences, such as losing a job or even legal penalties. Or there may be less official social consequences, like people being less friendly or willing to associate with you. Those with unpopular views may suppress them to avoid social isolation.

Avoiding social isolation is an important instinct. From an evolutionary biology perspective, remaining part of a group is important for survival, hence the need to at least appear to share the same views as everyone else. The only time someone will feel safe to voice a divergent opinion is if they think the group will share it or be accepting of divergence, or if they view the consequences of rejection as low. But biology doesn’t just dictate how individuals behave—it ends up shaping communities. It’s almost impossible for us to step outside of that need for acceptance.

A feedback loop pushes minority opinions towards less and less visibility—hence why Noelle-Neumann used the word “spiral.” Each time someone voices a majority opinion, they reinforce the sense that it is safe to do so. Each time someone receives a negative response for voicing a minority opinion, it signals to anyone sharing their view to avoid expressing it.
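To watch that spiral in action, here is a toy simulation of our own in Python. It is only an illustration, not Noelle-Neumann’s formal model: each agent holds a fixed private opinion but speaks only when their side’s perceived support exceeds a personal comfort threshold, and each round that perception is updated from the voices actually heard:

```python
import random

random.seed(1)

N = 1000
opinions = [1] * 600 + [0] * 400                  # true split: 60% vs. 40%
thresholds = [random.random() for _ in range(N)]  # each agent's comfort level

perceived = 0.5  # initial sense of how popular opinion 1 is
for rnd in range(8):
    # An agent speaks only if their side's perceived support beats
    # their personal threshold for feeling safe.
    voiced = [op for op, t in zip(opinions, thresholds)
              if (perceived if op == 1 else 1 - perceived) > t]
    perceived = sum(voiced) / len(voiced)
    print(f"round {rnd}: opinion 1's share of voiced opinions = {perceived:.0%}")

# The voiced share of the majority view climbs far past its true 60%,
# while the 40% minority falls ever more silent.
```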

***

An example of the spiral of silence

A 2014 Pew Research survey of 1,801 American adults examined the prevalence of the spiral of silence on social media. Researchers asked people about their opinions on one public issue: Edward Snowden’s 2013 revelations of US government surveillance of citizens’ phones and emails. They selected this issue because, while controversial, prior surveys suggested a roughly even split in public opinion surrounding whether the leaks were justified and whether such surveillance was reasonable.

Asking respondents about their willingness to share their opinions in different contexts highlighted how the spiral of silence plays out. 86% of respondents were willing to discuss the issue in person, but only about half as many were willing to post about it on social media. Of the 14% who would not consider discussing the Snowden leaks in person, almost none (0.3%) were willing to turn to social media instead.

Both in person and online, respondents reported far greater willingness to share their views with people they knew agreed with them—three times as likely in the workplace and twice as likely in a Facebook discussion.

***

The implications of the spiral of silence

The end result of the spiral of silence is a point where no one publicly voices a minority opinion, regardless of how many people believe it. The first implication of this is that the picture we have of what most people believe is not always accurate. Many people nurse opinions they would never articulate to their friends, coworkers, families, or social media followings.

A second implication is that the possibility of discord makes us less likely to voice an opinion at all, assuming we are not trying to drum up conflict. In the aforementioned Pew survey, people were more comfortable discussing a controversial story in person than online. An opinion voiced online has a much larger potential audience than one voiced face to face, and it’s harder to know exactly who will see it. Both of these factors increase the risk of someone disagreeing.

If we want to gauge what people think about something, we need to remove the possibility of negative consequences. For example, imagine a manager who often sets overly tight deadlines, causing immense stress to their team. Everyone knows this is a problem and discusses it among themselves, recognizing that more realistic deadlines would be motivating, and unrealistic ones are just demoralizing. However, no one wants to say anything because they’ve heard the manager say that people who can’t handle pressure don’t belong in that job. If the manager asks for feedback on their leadership style, they’re not going to hear what they need to hear unless the people giving it can stay anonymous.

A third implication is that what seems like a sudden change in mainstream opinions can in fact be the result of a shift in what is acceptable to voice, not in what people actually think. A prominent public figure getting away with saying something controversial may make others feel safe to do the same. A change in legislation may make people comfortable saying what they already thought.

For instance, if recreational marijuana use is legalized where someone lives, they might freely remark to a coworker that they consume it and consider it harmless. Even if that was true before the legislation change, saying so would have been too fraught, so they might have lied or avoided the topic. The result is that mainstream opinions can appear to change a great deal in a short time.

A fourth implication is that highly vocal holders of a minority opinion can end up having a disproportionate influence on public discourse. This is especially true if that minority is within a group that already has a lot of power.

While this was less the case during Noelle-Neumann’s time, the internet makes it possible for a vocal minority to make their opinions seem far more prevalent than they actually are—and therefore more acceptable. Indeed, the most extreme views on any spectrum can end up seeming most normal online because people with a moderate take have less of an incentive to make themselves heard.

In anonymous environments, the spiral of silence can end up reversing itself, making the most fringe views the loudest.

When Technology Takes Revenge

While runaway cars and vengeful stitched-together humans may be the stuff of science fiction, technology really can take revenge on us. Seeing technology as part of a complex system can help us avoid costly unintended consequences. Here’s what you need to know about revenge effects.

***

By many metrics, technology keeps making our lives better. We live longer, healthier, richer lives with more options than ever before for things like education, travel, and entertainment. Yet there is often a sense that we have lost control of our technology in many ways, and thus we end up victims of its unanticipated impacts.

Edward Tenner argues in Why Things Bite Back: Technology and the Revenge of Unintended Consequences that we often have to deal with “revenge effects.” Tenner coined this term to describe the ways in which technologies can solve one problem while creating additional, worse problems, creating new types of problems, or shifting the harm elsewhere. In short, they bite back.

Although Why Things Bite Back was written in the late 1990s and many of its specific examples and details are now dated, it remains an interesting lens for considering issues we face today. The revenge effects Tenner describes haunt us still. As the world becomes more complex and interconnected, it’s easy to see that the potential for unintended consequences will increase.

Thus, when we introduce a new piece of technology, it would be wise to consider whether we are interfering with a wider system. If that’s the case, we should consider what might happen further down the line. However, as Tenner makes clear, once the factors involved get complex enough, we cannot anticipate them with any accuracy.

Neither Luddite nor alarmist in nature, the notion of revenge effects can help us better understand the impact of intervening in complex systems. But we need to be careful. Although second-order thinking is invaluable, it cannot predict the future with total accuracy. Understanding revenge effects is primarily a reminder of the value of caution, not of specific risks.

***

Types of revenge effects

Tenner describes four types of revenge effects:

  1. Repeating effects: occur when more efficient processes end up forcing us to do the same things more often, meaning they don’t free up more of our time. Better household appliances have led to higher standards of cleanliness, meaning people end up spending the same amount of time—or more—on housework.
  2. Recomplicating effects: occur when processes become more and more complex as the technology behind them improves. Tenner gives the now-dated example of phone numbers becoming longer with the move away from rotary phones. A modern example might be lighting systems that need to be operated through an app, meaning a visitor cannot simply flip a switch.
  3. Regenerating effects: occur when attempts to solve a problem end up creating additional risks. Targeting pests with pesticides can make them increasingly resistant to harm or can kill off their natural predators. Widespread use of antibiotics to control certain conditions has led to resistant strains of bacteria that are harder to treat.
  4. Rearranging effects: occur when costs are transferred elsewhere so risks shift and worsen. Air conditioning units on subways cool down the trains—while releasing extra heat and making the platforms warmer. Vacuum cleaners can throw dust mite pellets into the air, where they remain suspended and are more easily breathed in. Shielding beaches from waves transfers the water’s force elsewhere.

***

Recognizing unintended consequences

The more we try to control our tools, the more they can retaliate.

Revenge effects occur when the technology for solving a problem ends up making it worse due to unintended consequences that are almost impossible to predict in advance. A smartphone might make it easier to work from home, but always being accessible means many people end up working more.

Things go wrong because technology does not exist in isolation. It interacts with complex systems, meaning any problems spread far from where they begin. We can never merely do one thing.

Tenner writes: “Revenge effects happen because new structures, devices, and organisms react with real people in real situations in ways we could not foresee.” He goes on to add that “complexity makes it impossible for anyone to understand how the system might act: tight coupling spreads problems once they begin.”

Prior to the Industrial Revolution, technology typically consisted of tools that served as an extension of the user. They were not, Tenner argues, prone to revenge effects because they did not function as parts in an overall system like modern technology. He writes that “a machine can’t appear to have a will of its own unless it is a system, not just a device. It needs parts that interact in unexpected and sometimes unstable and unwanted ways.”

Revenge effects often involve the transformation of defined, localized risks into nebulous, gradual ones involving the slow accumulation of harm. Compared to visible disasters, these are much harder to diagnose and deal with.

Large localized accidents, like a plane crash, tend to prompt the creation of greater safety standards, making us safer in the long run. Small cumulative ones don’t.

Cumulative problems, compared to localized ones, aren’t easy to measure, or even to feel appropriately concerned about. Tenner points to the difference between reactions in the 1990s to the risk of nuclear disasters compared to global warming. While both are revenge effects, “the risk from thermonuclear weapons had an almost built-in maintenance compulsion. The deferred consequences of climate change did not.”

Many revenge effects are the result of efforts to improve safety. “Our control of the acute has indirectly promoted chronic problems,” Tenner writes. Both X-rays and smoke alarms cause a small number of cancers each year. Although they save many more lives and avoiding them would be far riskier, we don’t get the benefits without a cost. The widespread removal of asbestos has reduced fire safety, and disrupting the material is often more harmful than leaving it in place.

***

Not all effects exact revenge

A revenge effect is not a side effect—a cost that goes along with a benefit. Being able to sanitize a public water supply has significant positive health outcomes. It also has a side effect of necessitating an organizational structure that can manage and monitor that supply.

Rather, a revenge effect must actually reverse the benefit for at least a small subset of users. For example, the greater ease of typing on a laptop compared to a typewriter has led to an increase in carpal tunnel syndrome and similar health consequences. It turns out that the physical effort required to press typewriter keys and move the carriage protected workers from some of the harmful effects of long periods of time spent typing.

Likewise, a revenge effect is not just a tradeoff—a benefit we forgo in exchange for some other benefit. As Tenner writes:

If legally required safety features raise airline fares, that is a tradeoff. But suppose, say, requiring separate seats (with child restraints) for infants, and charging a child’s fare for them, would lead many families to drive rather than fly. More children could in principle die from transportation accidents than if the airlines had continued to permit parents to hold babies on their laps. This outcome would be a revenge effect.

***

In support of caution

In the conclusion of Why Things Bite Back, Tenner writes:

We seem to worry more than our ancestors, surrounded though they were by exploding steamboat boilers, raging epidemics, crashing trains, panicked crowds, and flaming theaters. Perhaps this is because the safer life imposes an ever increasing burden of attention. Not just in the dilemmas of medicine but in the management of natural hazards, in the control of organisms, in the running of offices, and even in the playing of games there are, not necessarily more severe, but more subtle and intractable problems to deal with.

While Tenner does not proffer explicit guidance for dealing with the phenomenon he describes, one main lesson we can draw from his analysis is that revenge effects are to be expected, even if they cannot be predicted. This is because “the real benefits usually are not the ones that we expected, and the real perils are not those we feared.”

Chains of cause and effect within complex systems are stranger than we can often imagine. We should expect the unexpected, rather than expecting particular effects.

While we cannot anticipate all consequences, we can prepare for their existence and factor it into our estimation of the benefits of new technology. Indeed, we should avoid becoming overconfident about our ability to see the future, even when we use second-order thinking. As much as we might prepare for a variety of impacts, revenge effects may be dependent on knowledge we don’t yet possess. We should expect larger revenge effects the more we intensify something (e.g., making cars faster means worse crashes).

Before we intervene in a system, assuming it can only improve things, we should be aware that our actions can do the opposite or do nothing at all. Our estimations of benefits are likely to be more realistic if we are skeptical at first.

If we bring more caution to our attempts to change the world, we are better able to avoid being bitten.

 

A Primer on Algorithms and Bias

The growing influence of algorithms on our lives means we owe it to ourselves to better understand what they are and how they work. Understanding how the data we use to inform algorithms influences the results they give can help us avoid biases and make better decisions.

***

Algorithms are everywhere: driving our cars, designing our social media feeds, dictating which mixer we end up buying on Amazon, diagnosing diseases, and much more.

Two recent books explore algorithms and the data behind them. In Hello World: Being Human in the Age of Algorithms, mathematician Hannah Fry shows us the potential and the limitations of algorithms. And Invisible Women: Data Bias in a World Designed for Men by writer, broadcaster, and feminist activist Caroline Criado Perez demonstrates how we need to be much more conscientious of the quality of the data we feed into them.

Humans or algorithms?

First, what is an algorithm? Explanations of algorithms can be complex. Fry explains that at their core, they are defined as step-by-step procedures for solving a problem or achieving a particular end. We tend to use the term to refer to mathematical operations that crunch data to make decisions.
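In that spirit, an algorithm can be only a few unambiguous steps. Here is a deliberately tiny Python example of our own (purely illustrative): a fixed procedure that crunches data into a decision:

```python
# A toy algorithm: decide whether to carry an umbrella by crunching a
# week of rainfall readings, step by step.

def should_bring_umbrella(daily_rain_mm, threshold=2.0):
    # Step 1: sum the readings.
    total = 0.0
    for mm in daily_rain_mm:
        total += mm
    # Step 2: compute the daily average.
    average = total / len(daily_rain_mm)
    # Step 3: compare against a threshold to reach a decision.
    return average > threshold

print(should_bring_umbrella([0.0, 5.5, 3.2, 0.0, 4.1, 2.5, 1.0]))  # True
```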

When it comes to decision-making, we don’t necessarily have to choose between doing it ourselves and relying wholly on algorithms. The best outcome may be a thoughtful combination of the two.

We all know that in certain contexts, humans are not the best decision-makers. For example, when we are tired, or when we already have a desired outcome in mind, we may ignore relevant information. In Thinking, Fast and Slow, Daniel Kahneman gave multiple examples from his research with Amos Tversky that demonstrated we are heavily influenced by cognitive biases such as availability and anchoring when making certain types of decisions. It’s natural, then, that we would want to employ algorithms that aren’t vulnerable to the same tendencies. In fact, their main appeal for use in decision-making is that they can override our irrationalities.

Algorithms, however, aren’t without their flaws. One of the obvious ones is that because algorithms are written by humans, we often code our biases right into them. Criado Perez offers many examples of algorithmic bias.

For example, an online platform designed to help companies find computer programmers looked through activity such as sharing and developing code in online communities, as well as visits to Japanese manga (comics) sites. People who visited certain sites frequently received higher scores, making them more visible to recruiters.

However, Criado Perez presents the analysis of this recruiting algorithm by Cathy O’Neil, data scientist and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, who points out that “women, who do 75% of the world’s unpaid care work, may not have the spare leisure time to spend hours chatting about manga online . . . and if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of women in the industry will probably avoid it.”

Criado Perez postulates that the authors of the recruiting algorithm didn’t intend to encode a bias that discriminates against women. But, she says, “if you aren’t aware of how those biases operate, if you aren’t collecting data and taking a little time to produce evidence-based processes, you will continue to blindly perpetuate old injustices.”

Fry also covers algorithmic bias and asserts that “wherever you look, in whatever sphere you examine, if you delve deep enough into any system at all, you’ll find some kind of bias.” We aren’t perfect—and we shouldn’t expect our algorithms to be perfect, either.

In order to have a conversation about the value of an algorithm versus a human in any decision-making context, we need to understand, as Fry explains, that “algorithms require a clear, unambiguous idea of exactly what we want them to achieve and a solid understanding of the human failings they are replacing.”

Garbage in, garbage out

No algorithm is going to be successful if the data it uses is junk. And there’s a lot of junk data in the world. Far from being a new problem, Criado Perez argues that “most of recorded human history is one big data gap.” And that has a serious negative impact on the value we are getting from our algorithms.

Criado Perez explains the situation this way: We live in “a world [that is] increasingly reliant on and in thrall to data. Big data. Which in turn is panned for Big Truths by Big Algorithms, using Big Computers. But when your data is corrupted by big silences, the truths you get are half-truths, at best.”

A common human bias is one regarding the universality of our own experience. We tend to assume that what is true for us is generally true across the population. We have a hard enough time considering how things may be different for our neighbors, let alone for other genders or races. It becomes a serious problem when we gather data about one subset of the population and mistakenly assume that it represents all of the population.

For example, Criado Perez examines the data gap in relation to incorrect information being used to inform decisions about safety and women’s bodies. From personal protective equipment like bulletproof vests that don’t fit properly and thus increase the chances of the women wearing them getting killed, to levels of exposure to toxins that are unsafe for women’s bodies, she makes the case that without representative data, we can’t get good outputs from our algorithms. She writes that “we continue to rely on data from studies done on men as if they apply to women. Specifically, Caucasian men aged twenty-five to thirty, who weigh 70 kg. This is ‘Reference Man’ and his superpower is being able to represent humanity as a whole. Of course, he does not.” Her book covers a wide variety of disciplines and situations in which the gender gap in data leads to worse outcomes for women.

The limits of what we can do

Although there is a lot we can do better when it comes to designing algorithms and collecting the data sets that feed them, it’s also important to consider their limits.

We need to accept that algorithms can’t solve all problems, and there are limits to their functionality. In Hello World, Fry devotes a chapter to the use of algorithms in justice—specifically, algorithms designed to tell judges how likely a defendant is to commit further crimes. Our first impulse is to say, “Let’s not rely on bias here. Let’s not have someone’s skin color or gender be a key factor for the algorithm.” After all, we can employ that kind of bias just fine ourselves. But simply writing bias out of an algorithm is not as easy as wishing it so. Fry explains that “unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at predicting across the board and makes false positive and false negative mistakes at the same rate for every group of defendants.”
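A little arithmetic shows why. In the sketch below (all numbers invented for illustration), we hold a risk test’s sensitivity and its positive predictive value, the share of “high risk” flags that are correct, equal across two groups. Basic probability then forces the false positive rate to differ whenever the groups’ underlying offending rates differ:

```python
# Illustrating Fry's impossibility point with invented numbers.
# From PPV = b*s / (b*s + (1-b)*f), fixing sensitivity s and PPV
# forces the false positive rate f for a group with base rate b:
#   f = b*s*(1 - PPV) / (PPV*(1 - b))

def forced_fpr(base_rate, sensitivity, ppv):
    b, s = base_rate, sensitivity
    return b * s * (1 - ppv) / (ppv * (1 - b))

# Two groups whose underlying rates of reoffending differ:
for b in (0.5, 0.2):
    print(f"base rate {b:.0%} -> false positive rate "
          f"{forced_fpr(b, sensitivity=0.8, ppv=0.7):.1%}")
# base rate 50% -> false positive rate 34.3%
# base rate 20% -> false positive rate 8.6%
# Equally trustworthy flags across groups force unequal false alarm rates.
```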

Fry comes back to such limits frequently throughout her book, exploring them in various disciplines. She demonstrates to the reader that “there are boundaries to the reach of algorithms. Limits to what can be quantified.” Perhaps a better understanding of those limits is needed to inform our discussions of where we want to use algorithms.

There are, however, other limits that we can do something about. Both authors make the case for more education about algorithms and their input data. Lack of understanding shouldn’t hold us back. Algorithms that have a significant impact on our lives specifically need to be open to scrutiny and analysis. If an algorithm is going to put you in jail or impact your ability to get a mortgage, then you ought to be able to have access to it.

Most algorithm writers and the companies they work for wave the “proprietary” flag and refuse to open themselves up to public scrutiny. Many algorithms are a black box—we don’t actually know how they reach the conclusions they do. But Fry says that shouldn’t deter us. Pursuing laws (such as the data access and protection rights being instituted in the European Union) and structures (such as an algorithm-evaluating body playing a role similar to the one the U.S. Food and Drug Administration plays in evaluating whether pharmaceuticals can be made available to the U.S. market) will help us decide as a society what we want and need our algorithms to do.

Where do we go from here?

Algorithms aren’t going away, so it’s best to acquire the knowledge needed to figure out how they can help us create the world we want.

Fry suggests that one way to approach algorithms is to “imagine that we designed them to support humans in their decisions, rather than instruct them.” She envisions a world where “the algorithm and the human work together in partnership, exploiting each other’s strengths and embracing each other’s flaws.”

Part of getting to a world where algorithms provide great benefit is to remember how diverse our world really is and make sure we get data that reflects the realities of that diversity. We can either actively change the algorithm, or we change the data set. And if we do the latter, we need to make sure we aren’t feeding our algorithms data that, for example, excludes half the population. As Criado Perez writes, “when we exclude half of humanity from the production of knowledge, we lose out on potentially transformative insights.”

Given how complex the world of algorithms is, we need all the amazing insights we can get. Algorithms themselves perhaps offer the best hope, because they have the inherent flexibility to improve as we do.

Fry gives this explanation: “There’s nothing inherent in [these] algorithms that means they have to repeat the biases of the past. It all comes down to the data you give them. We can choose to be ‘crass empiricists’ (as Richard Berk put it) and follow the numbers that are already there, or we can decide that the status quo is unfair and tweak the numbers accordingly.”

We can get excited about the possibilities that algorithms offer us and use them to create a world that is better for everyone.