
A Primer on Algorithms and Bias

The growing influence of algorithms on our lives means we owe it to ourselves to better understand what they are and how they work. Understanding how the data we use to inform algorithms influences the results they give can help us avoid biases and make better decisions.

***

Algorithms are everywhere: driving our cars, designing our social media feeds, dictating which mixer we end up buying on Amazon, diagnosing diseases, and much more.

Two recent books explore algorithms and the data behind them. In Hello World: Being Human in the Age of Algorithms, mathematician Hannah Fry shows us the potential and the limitations of algorithms. And Invisible Women: Data Bias in a World Designed for Men by writer, broadcaster, and feminist activist Caroline Criado Perez demonstrates how we need to be much more conscientious of the quality of the data we feed into them.

Humans or algorithms?

First, what is an algorithm? Explanations of algorithms can be complex. Fry explains that at their core, they are defined as step-by-step procedures for solving a problem or achieving a particular end. We tend to use the term to refer to mathematical operations that crunch data to make decisions.
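To make that definition concrete, here is a toy sketch (the loan rule and its 0.35 threshold are invented for illustration, not drawn from either book): an algorithm in miniature, a fixed sequence of steps that crunches data and returns a decision.

```python
# A toy algorithm in Fry's sense: a step-by-step procedure that turns
# data into a decision. The rule and threshold are invented for illustration.

def approve_loan(income: float, debt: float, threshold: float = 0.35) -> bool:
    """Step 1: compute the debt-to-income ratio. Step 2: compare it to a cutoff."""
    ratio = debt / income
    return ratio <= threshold

print(approve_loan(income=50_000, debt=15_000))  # True  (ratio = 0.30)
print(approve_loan(income=50_000, debt=20_000))  # False (ratio = 0.40)
```

Simple as it is, the sketch already shows where bias can creep in: someone chose which inputs count and where to set the threshold, and those choices encode assumptions about what matters.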

When it comes to decision-making, we don’t necessarily have to choose between doing it ourselves and relying wholly on algorithms. The best outcome may be a thoughtful combination of the two.

We all know that in certain contexts, humans are not the best decision-makers. For example, when we are tired, or when we already have a desired outcome in mind, we may ignore relevant information. In Thinking, Fast and Slow, Daniel Kahneman gave multiple examples from his research with Amos Tversky that demonstrated we are heavily influenced by cognitive biases such as availability and anchoring when making certain types of decisions. It’s natural, then, that we would want to employ algorithms that aren’t vulnerable to the same tendencies. In fact, their main appeal for use in decision-making is that they can override our irrationalities.

Algorithms, however, aren’t without their flaws. One of the obvious ones is that because algorithms are written by humans, we often code our biases right into them. Criado Perez offers many examples of algorithmic bias.

For example, an online platform designed to help companies find computer programmers looked through activity such as sharing and developing code in online communities, as well as visiting Japanese manga (comics) sites. People who visited certain sites frequently received higher scores, making them more visible to recruiters.

However, Criado Perez presents the analysis of this recruiting algorithm by Cathy O’Neil, scientist and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, who points out that “women, who do 75% of the world’s unpaid care work, may not have the spare leisure time to spend hours chatting about manga online . . . and if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of women in the industry will probably avoid it.”

Criado Perez postulates that the authors of the recruiting algorithm didn’t intend to encode a bias that discriminates against women. But, she says, “if you aren’t aware of how those biases operate, if you aren’t collecting data and taking a little time to produce evidence-based processes, you will continue to blindly perpetuate old injustices.”

Fry also covers algorithmic bias and asserts that “wherever you look, in whatever sphere you examine, if you delve deep enough into any system at all, you’ll find some kind of bias.” We aren’t perfect—and we shouldn’t expect our algorithms to be perfect, either.

In order to have a conversation about the value of an algorithm versus a human in any decision-making context, we need to understand, as Fry explains, that “algorithms require a clear, unambiguous idea of exactly what we want them to achieve and a solid understanding of the human failings they are replacing.”

Garbage in, garbage out

No algorithm is going to be successful if the data it uses is junk. And there’s a lot of junk data in the world. This is far from a new problem: Criado Perez argues that “most of recorded human history is one big data gap.” And that has a serious negative impact on the value we get from our algorithms.

Criado Perez explains the situation this way: We live in “a world [that is] increasingly reliant on and in thrall to data. Big data. Which in turn is panned for Big Truths by Big Algorithms, using Big Computers. But when your data is corrupted by big silences, the truths you get are half-truths, at best.”

A common human bias concerns the universality of our own experience: we tend to assume that what is true for us is generally true across the population. We have a hard enough time considering how things may be different for our neighbors, let alone for other genders or races. It becomes a serious problem when we gather data about one subset of the population and mistakenly assume that it represents all of it.

For example, Criado Perez examines the data gap in relation to incorrect information being used to inform decisions about safety and women’s bodies. From personal protective equipment like bulletproof vests that don’t fit properly, and thus increase the chances of the women wearing them getting killed, to levels of exposure to toxins that are unsafe for women’s bodies, she makes the case that without representative data, we can’t get good outputs from our algorithms. She writes that “we continue to rely on data from studies done on men as if they apply to women. Specifically, Caucasian men aged twenty-five to thirty, who weigh 70 kg. This is ‘Reference Man’ and his superpower is being able to represent humanity as a whole. Of course, he does not.” Her book covers a wide variety of disciplines and situations in which the gender data gap leads to worse outcomes for women.

The limits of what we can do

Although there is a lot we can do better when it comes to designing algorithms and collecting the data sets that feed them, it’s also important to consider their limits.

We need to accept that algorithms can’t solve all problems, and there are limits to their functionality. In Hello World, Fry devotes a chapter to the use of algorithms in justice, specifically algorithms designed to give judges information about the likelihood of a defendant committing further crimes. Our first impulse is to say, “Let’s not rely on bias here. Let’s not have someone’s skin color or gender be a key factor for the algorithm.” After all, we can employ that kind of bias just fine ourselves. But simply writing bias out of an algorithm is not as easy as wishing it so. Fry explains that “unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at predicting across the board and makes false positive and false negative mistakes at the same rate for every group of defendants.”
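To see why, consider a minimal numeric sketch (the numbers and group labels are invented for illustration; they are not from the book). If a risk tool makes false positive and false negative mistakes at the same rate for two groups whose underlying reoffending rates differ, a “high risk” label ends up being correct far less often for one group than the other:

```python
# A minimal numeric sketch of the trade-off Fry describes. The numbers are
# invented: two groups of 1,000 defendants with different reoffending rates,
# scored by a tool with identical error rates for both groups.

def share_of_flagged_who_reoffend(n, base_rate, false_neg_rate, false_pos_rate):
    """Of the people the tool labels 'high risk', what fraction actually reoffend?"""
    reoffenders = n * base_rate
    non_reoffenders = n - reoffenders
    true_positives = reoffenders * (1 - false_neg_rate)    # correctly flagged
    false_positives = non_reoffenders * false_pos_rate      # wrongly flagged
    return true_positives / (true_positives + false_positives)

# Same tool, same 20% false negative rate and 20% false positive rate.
group_a = share_of_flagged_who_reoffend(1000, base_rate=0.50, false_neg_rate=0.2, false_pos_rate=0.2)
group_b = share_of_flagged_who_reoffend(1000, base_rate=0.20, false_neg_rate=0.2, false_pos_rate=0.2)

print(f"Group A (50% reoffend): {group_a:.0%} of 'high risk' labels are correct")  # 80%
print(f"Group B (20% reoffend): {group_b:.0%} of 'high risk' labels are correct")  # 50%
```

With equal error rates, the label is wrong far more often for the lower-base-rate group; equalize how often the label is correct instead, and the false positive and false negative rates must diverge. That is the mathematical bind Fry points to.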

Fry comes back to such limits frequently throughout her book, exploring them in various disciplines. She demonstrates to the reader that “there are boundaries to the reach of algorithms. Limits to what can be quantified.” Perhaps a better understanding of those limits is needed to inform our discussions of where we want to use algorithms.

There are, however, other limits that we can do something about. Both authors make the case for more education about algorithms and their input data. Lack of understanding shouldn’t hold us back. In particular, algorithms that have a significant impact on our lives need to be open to scrutiny and analysis. If an algorithm is going to put you in jail or impact your ability to get a mortgage, then you ought to be able to have access to it.

Most algorithm writers and the companies they work for wave the “proprietary” flag and refuse to open themselves up to public scrutiny. Many algorithms are a black box—we don’t actually know how they reach the conclusions they do. But Fry says that shouldn’t deter us. Pursuing laws (such as the data access and protection rights being instituted in the European Union) and structures (such as an algorithm-evaluating body playing a role similar to the one the U.S. Food and Drug Administration plays in evaluating whether pharmaceuticals can be made available to the U.S. market) will help us decide as a society what we want and need our algorithms to do.

Where do we go from here?

Algorithms aren’t going away, so it’s best to acquire the knowledge needed to figure out how they can help us create the world we want.

Fry suggests that one way to approach algorithms is to “imagine that we designed them to support humans in their decisions, rather than instruct them.” She envisions a world where “the algorithm and the human work together in partnership, exploiting each other’s strengths and embracing each other’s flaws.”

Part of getting to a world where algorithms provide great benefit is to remember how diverse our world really is and make sure we get data that reflects the realities of that diversity. We can either actively change the algorithm or change the data set. And if we do the latter, we need to make sure we aren’t feeding our algorithms data that, for example, excludes half the population. As Criado Perez writes, “when we exclude half of humanity from the production of knowledge, we lose out on potentially transformative insights.”

Given how complex the world of algorithms is, we need all the amazing insights we can get. Algorithms themselves perhaps offer the best hope, because they have the inherent flexibility to improve as we do.

Fry gives this explanation: “There’s nothing inherent in [these] algorithms that means they have to repeat the biases of the past. It all comes down to the data you give them. We can choose to be ‘crass empiricists’ (as Richard Berk put it) and follow the numbers that are already there, or we can decide that the status quo is unfair and tweak the numbers accordingly.”

We can get excited about the possibilities that algorithms offer us and use them to create a world that is better for everyone.

Aim For What’s Reasonable: Leadership Lessons From Director Jean Renoir

Directing a film involves getting an enormous group of people to work together on turning the image inside your head into a reality. In this 1970 interview, director Jean Renoir dispenses time-tested wisdom for leaders everywhere on humility, accountability, goal-setting, and more.

***

Many of us end up in leadership roles at some point in our career. Most of us, however, never get any training or instruction on how to actually be a good leader. But whether we end up offering formal or informal leadership, at some point we need to inspire or motivate people towards accomplishing a shared vision.

Directors are the leaders of movie productions. They assemble their team, they communicate their vision, and they manage the ups and downs of the filming process. Thus the experience of a successful director offers great insight into the qualities of a good leader. In 1970, film director Jean Renoir gave an interview with George Stevens Jr. of the American Film Institute where he discussed the leadership aspects of directing. His insights illustrate some important lessons. Renoir started out making silent films, and he continued filmmaking through to the 1960s. His two greatest cinematic achievements were the films The Grand Illusion (1937) and The Rules of the Game (1939). He received a Lifetime Achievement Academy Award in 1975 for his contribution to the motion picture industry.

In the interview, Renoir speaks to humility in leadership when he says, “I’m a director who has spent his life suggesting stories that nobody wanted. It’s still going on. But I’m used to it and I’m not complaining, because the ideas which were forced on me were often better than my own ideas.”

Leadership is not necessarily coming up with all the answers; it’s also important to put aside your own ego to cultivate and support the contributions from your team. Sometimes leaders have the best ideas. But often people on their team have excellent ones as well.

Renoir suggests that the role of a director is to have a clear enough vision that you can work through the imperfections involved in executing it. “A picture, often when it is good, is the result of some inner belief which is so strong that you have to show what you want, in spite of a stupid story or difficulties about the commercial side of the film.”

Good leaders don’t require perfection to achieve results. They work with what they have, often using creativity and ingenuity to fill in when reality doesn’t conform to the ideal image in their head. Having a vision is not about achieving exactly that vision. It’s about doing the best you can once you come into contact with reality.

When Renoir says, “We directors are simply midwives,” he implies that effective leadership is about giving shape to the talents and capabilities that already exist. Excellent leaders find a way to challenge and develop those on their team. In explaining how he works with actors, he says, “You must not ask an actor to do what he cannot do.” Rather, you need to work with what you have, using clear feedback and communication to find a way to bring out the best in people. Sometimes getting out of people’s way and letting their natural abilities come out is the most important thing to do.

Although Renoir says, “When I can, I shoot my scenes only once. I like to be committed, to be a slave to my decision,” he further explains, “I don’t like to make the important decisions alone.” Good leaders know when to consult others. They know to take in information from those who know more than they do and to respect different forms of expertise. But they still take accountability for their decisions because they made the final choice.

Good leaders are also mindful of the world outside the group or organization they are leading. They don’t lead in a vacuum but are sensitive to all those involved in achieving the results they are trying to deliver. For a director, it makes no sense to conceive of a film without considering the audience. Renoir explains, “I believe that the work of art where the spectator does not collaborate is not a work of art.” Similarly, we all have groups that we interact with outside of our organization, like clients or customers. We too need to run our teams with an understanding of that outside world.

No one can be good at everything, and thus effective leadership involves knowing when to ask for help. Renoir admits, “That’s where I like to have my friends help me, because I am very bad at casting.” Knowing your weaknesses is vital, because then you can find people who have strengths in those areas to assist you.

Additionally, most organizations are too complex for any one person to be an expert at all of the roles. Leaders show hubris when they assume they can do the jobs of everyone else well. Renoir explains this notion of knowing your role as a leader: “Too many directors work like this. They tell the actor, ‘Sit down, my dear friends, and look at me. I am going to act a scene, and you are going to repeat what I just did.’ He acts a scene and he acts it badly, because if he is a director instead of an actor, it’s probably because he’s a bad actor.”

***

Although leadership can be all-encompassing, we shouldn’t be intimidated by the ideal list of qualities and behaviors a good leader displays. Focus on how you can improve. Set goals. Reflect on your failures, and recognize your successes.

“You know, there is an old slogan, very popular in our occidental civilization: you must look to an end higher than normal, and that way you will achieve something. Your aim must be very, very high. Myself, I am absolutely convinced that it is mere stupidity. The aim must be easy to reach, and by reaching it, you achieve more.”

The Observer Effect: Seeing Is Changing

The act of looking at something changes it – an effect that holds true for people, animals, even atoms. Here’s how the observer effect distorts our world and how we can get a more accurate picture.

***

We often forget to factor in the distortion of observation when we evaluate someone’s behavior. We see what they are doing as representative of their whole life. But the truth is, we all change how we act when we expect to be seen. Are you ever on your best behavior when you’re alone in your house? To get better at understanding other people, we need to consider the observer effect: observing things changes them, and some phenomena only exist when observed.

The observer effect is not universal. The moon continues to orbit whether we have a telescope pointed at it or not. But both things and people can change under observation. So, before you judge someone’s behavior, it’s worth asking if they are changing because you are looking at them, or if their behavior is natural. People are invariably affected by observation. Being watched makes us act differently.

“I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers.”

— Isaac Asimov

The observer effect in science

The observer effect pops up in many scientific fields.

In physics, Erwin Schrödinger’s famous cat highlights the power of observation. In his best-known thought experiment, Schrödinger asked us to imagine a cat placed in a box with a radioactive atom that might or might not kill it within an hour. Until the box is opened, the cat exists in a state of superposition: it is both alive and dead at the same time. Only when it is observed does the cat shift permanently to one of the two states. The observation removes the cat from superposition and commits it to just one state.

(Although Schrödinger meant the thought experiment as a critique of the prevailing interpretation of quantum superposition – he wanted to demonstrate the absurdity of applying it to everyday objects – it has caught on in popular culture as an illustration of the observer effect.)

In biology, when researchers want to observe animals in their natural habitat, it is paramount that they find a way to do so without disturbing those animals. Otherwise, the behavior they see is unlikely to be natural, because most animals (including humans) change their behavior when they are being observed. For instance, Dr. Cristian Damsa and his colleagues concluded in their paper “Heisenberg in the ER” that being observed makes psychiatric patients a third less likely to require sedation. Doctors and nurses wash their hands more when they know their hygiene is being tracked. And other studies have shown that zoo animals only exhibit certain behaviors in the presence of visitors, such as hypervigilance and repeatedly looking at them.

In general, we change our behavior when we expect to be seen. Philosopher Jeremy Bentham knew this when he designed the panopticon prison in the eighteenth century, building upon an idea by his brother Samuel. The prison was constructed so that its cells circled a central watchtower so inmates could never tell if they were being watched or not. Bentham expected this would lead to better behavior, without the need for many staff. It never caught on as an actual design for prisons, but the modern prevalence of CCTV is often compared to the Panopticon. We never know when we’re being watched, so we act as if it’s all the time.

The observer effect, however, is twofold. Observing changes what occurs, but observing also changes our perceptions of what occurs. Let’s take a look at that next.

“How much does one imagine, how much observe? One can no more separate those functions than divide light from air, or wetness from water.”

— Elspeth Huxley

Observer bias

The effects of observation get more complex when we consider how each of us filters what we see through our own biases, assumptions, preconceptions, and other distortions. There’s a reason, after all, why double-blinding (ensuring that neither tester nor subject receives information that might influence their behavior) is the gold standard in research involving living things. Observer bias occurs when we alter what we see, either by only noticing what we expect or by behaving in ways that influence what occurs. Without intending to do so, researchers may encourage certain results, leading to changes in the ultimate outcomes.

A researcher falling prey to observer bias is more likely to make erroneous interpretations, leading to inaccurate results. For instance, in a trial for an anti-anxiety drug where researchers know which subjects receive a placebo and which receive the actual drug, they may report that the latter group seems calmer because that’s what they expect.

The truth is, we often see what we expect to see. Our biases lead us to factor in irrelevant information when evaluating the actions of others. We also bring our past into the present and let that color our perceptions as well—so, for example, if someone has really hurt you before, you are less likely to see anything good in what they do.

The actor-observer bias

Another factor in the observer effect, and one we all fall victim to, is our tendency to attribute the behavior of others to innate personality traits. Yet we tend to attribute our own behavior to external circumstances. This is known as the actor-observer bias.

For example, a student who gets a poor grade on a test claims they were tired that day or the wording on the test was unclear. Conversely, when that same student observes a peer who performed badly on a test on which they performed well, the student judges their peer as incompetent or ill-prepared. If someone is late to a meeting with a friend, they rush in apologizing for the bad traffic. But if the friend is late, they label them as inconsiderate. When we see a friend having an awesome time in a social media post, we assume their life is fun all of the time. When we post about ourselves having an awesome time, we see it as an anomaly in an otherwise non-awesome life.

We have different levels of knowledge about ourselves and others. Because observation focuses on what is displayed, not what preceded or motivated it, we see the full context for our own behavior but only the final outcome for other people. We need to take the time to learn the context of others’ lives before we pass judgment on their actions.

Conclusion

We can use the observer effect to our benefit. If we want to change a behavior, finding some way to ensure someone else observes it can be effective. For instance, going to the gym with a friend means they know if we don’t go, making it more likely that we stick with it. Tweeting about our progress on a project can help keep us accountable. Even installing software on our laptop that tracks how often we check social media can reduce our usage.

But if we want to get an accurate view of reality, it is important we consider how observing it may distort the results. The value of knowing about the observer effect in everyday life is that it can help us factor in the difference that observation makes. If we want to gain an accurate picture of the world, it pays to consider how we take that picture. For instance, you cannot assume that an employee’s behavior in a meeting translates to their work, or that the way your kids act at home is the same as in the playground. We all act differently when we know we are being watched.

The Ingredients For Innovation

Inventing new things is hard. Getting people to accept and use new inventions is often even harder. For most people, at most times, technological stagnation has been the norm. What does it take to escape from that and encourage creativity?

***

“Technological progress requires above all tolerance toward the unfamiliar and the eccentric.”

— Joel Mokyr, The Lever of Riches

Writing in The Lever of Riches: Technological Creativity and Economic Progress, economic historian Joel Mokyr asks why, when we look at the past, some societies have been considerably more creative than others at particular times. Some have experienced sudden bursts of progress, while others have stagnated for long periods of time. By examining the history of technology and identifying the commonalities between the most creative societies and time periods, Mokyr offers useful lessons we can apply as both individuals and organizations.

What does it take for a society to be technologically creative?

When trying to explain something as broad and complex as technological creativity, it’s important not to fall prey to the lure of a single explanation. There are many possible reasons for anything that happens, and it’s unwise to believe explanations that are too tidy. Mokyr disregards some of the common simplistic explanations for technological creativity, such as that war prompts creativity or people with shorter life spans are less likely to expend time on invention.

Mokyr explores some of the possible factors that contribute to a society’s technological creativity. In particular, he seeks to explain why Europe experienced such a burst of technological creativity from around 1500 to the Industrial Revolution, when prior to that it had lagged far behind the rest of the world. Mokyr explains that “invention occurs at the level of the individual, and we should address the factors that determine individual creativity. Individuals, however, do not live in a vacuum. What makes them implement, improve and adapt new technologies, or just devise small improvements in the way they carry out their daily work depends on the institutions and the attitudes around them.” While environment isn’t everything, certain conditions are necessary for technological creativity.

He identifies the following three key environmental factors that influence the occurrence of invention and innovation.

The social infrastructure

First of all, the society needs a supply of “ingenious and resourceful innovators who are willing and able to challenge their physical environment for their own improvement.” Fostering these attributes requires factors like good nutrition, religious beliefs that are not overly conservative, and access to education. It is in part about the absence of negative factors—necessitous people have less capacity for creativity. Mokyr writes: “The supply of talent is surely not completely exogenous; it responds to incentives and attitudes. The question that must be confronted is why in some societies talent is unleashed upon technical problems that eventually change the entire productive economy, whereas in others this kind of talent is either repressed or directed elsewhere.”

One partial explanation for Europe’s creativity from 1500 to the Industrial Revolution is that it was often feasible for people to relocate to a different country if the conditions in their current one were suboptimal. A creative individual finding themselves under a conservative government seeking to maintain the technological status quo was able to move elsewhere.

The ability to move around was also part of the success of the Abbasid Caliphate, an empire that stretched from India to the Iberian Peninsula from about 750 to 1250. Economists Maristella Botticini and Zvi Eckstein write in The Chosen Few: How Education Shaped Jewish History, 70–1492 that “it was relatively easy to move or migrate” within the Abbasid empire, especially with its “common language (Arabic) and a uniform set of institutions and laws over an immense area, greatly [favoring] trade and commerce.”

It also matters whether creative people are channeled into technological fields or into other fields, like the military. In Britain during and prior to the Industrial Revolution, Mokyr considers invention to have been the main possible path for creative individuals, as other areas like politics leaned towards conformism.

The social incentives

Second, there need to be incentives in place to encourage innovation. This is of extra importance for macroinventions – completely new inventions, not improvements on existing technology – which can require a great leap of faith. The person who comes up with a faster horse knows it has a market; the one who comes up with a car does not. Such incentives are most often financial, but not always. Awards, positions of power, and recognition also count. Mokyr explains that diverse incentives encourage the patience needed for creativity: “Sustained innovation requires a set of individuals willing to absorb large risks, sometimes to wait many years for the payoff (if any).”

Patent systems have long served as an incentive, allowing inventors to feel confident they will profit from their work. Patents first appeared in northern Italy in the early fifteenth century; Venice implemented a formal system in 1474. According to Mokyr, the monopoly rights mining contractors received over the discovery of hitherto unknown mineral resources provided inspiration for the patent system.

However, Mokyr points out that patents were not always as effective as inventors hoped. Indeed, they may have provided the incentive without any actual protection. Many inventors ended up spending unproductive time and money on patent litigation, which in some cases outweighed their profits, discouraged them from future endeavors, or left them too drained to invent more. Eli Whitney, inventor of the cotton gin, claimed his legal costs outweighed his profits. Mokyr proposes that though patent laws may be imperfect, they are, on balance, good for society as they incentivize invention while not altogether preventing good ideas from circulating and being improved upon by others.

The ability to make money from inventions is also related to geographic factors. In a country with good communication and transport systems, with markets in different areas linked, it is possible for something new to sell further afield. A bigger prospective market means stronger financial incentives. The extensive, accessible, and well-maintained trade routes during the Abbasid empire allowed for innovations to diffuse throughout the region. And during the Industrial Revolution in Britain, railroads helped bring developments to the entire country, ensuring inventors didn’t just need to rely on their local market.

The social attitude

Third, a technologically creative society must be diverse and tolerant. People must be open to new ideas and outré individuals. They must not only be willing to consider fresh ideas from within their own society but also happy to take inspiration from (or to outright steal) those coming from elsewhere. If a society views knowledge coming from other countries as suspect or even dangerous, unable to see its possible value, it is at a disadvantage. If it eagerly absorbs external influences and adapts them for its own purposes, it is at an advantage. Europeans were willing to pick up on ideas from each other and from elsewhere in the world. As Mokyr puts it, “Inventions such as the spinning wheel, the windmill, and the weight-driven clock recognized no boundaries.”

In the Abbasid empire, there was an explosion of innovation that drew on the knowledge gained from other regions. Botticini and Eckstein write:

“The Abbasid period was marked by spectacular developments in science, technology, and the liberal arts. . . . The Muslim world adopted papermaking from China, improving Chinese technology with the invention of paper mills many centuries before paper was known in the West. Muslim engineers made innovative industrial uses of hydropower, tidal power, wind power, steam power, and fossil fuels. . . . Muslim engineers invented crankshafts and water turbines, employed gears in mills and water-raising machines, and pioneered the use of dams as a source of waterpower. Such advances made it possible to mechanize many industrial tasks that had previously been performed by manual labor.”

Within societies, certain people and groups seek to maintain the status quo because it is in their interests to do so. Mokyr writes that “Some of these forces protect vested interests that might incur losses if innovations were introduced, others are simply don’t-rock-the-boat kind of forces.” In order for creative technology to triumph, it must be able to overcome those forces. While there is always going to be conflict, the most creative societies are those where it is still possible for the new thing to take over. If those who seek to maintain the status quo have too much power, a society will end up stagnating in terms of technology. Ways of doing things can prevail not because they are the best, but because there is enough interest in keeping them that way.

In some historical cases in Europe, it was easier for new technologies to spread in the countryside, where the lack of guilds compensated for the lower density of people. City guilds had a huge incentive to maintain the status quo. The inventor of the ribbon loom in Danzig in 1579 was allegedly drowned by the city council, while “in the fifteenth century, the scribes guild of Paris succeeded in delaying the introduction of printing in Paris by 20 years.”

Indeed, tolerance could be said to matter more for technological creativity than education. As Mokyr repeatedly highlights, many inventors and innovators throughout history were not educated to a high level—or even at all. Up until relatively recently, most technology preceded the science explaining how it actually worked. People tinkered, looking to solve problems and experiment.

Unlike modern times, Mokyr explains, for most of history technology did not emerge from “specialized research laboratories paid for by research and development budgets and following strategies mapped out by corporate planners well-informed by marketing analysts. Technological change occurred mostly through new ideas and suggestions occurring if not randomly, then in a highly unpredictable fashion.”

When something worked, it worked, even if no one knew why or the popular explanation later proved incorrect. Steam engines are one such example. The notion that all technologies function under the same set of physical laws was not standard until Galileo. People need space to be a bit weird.

The scientists and academics of Europe’s most creative periods worked in a different manner than we expect today, often tackling the practical problems they faced themselves. Mokyr gives Galileo as an example, as he “built his own telescopes and supplemented his salary as a professor at the University of Padua by making and repairing instruments.” The distinction between one who thinks and one who makes was not yet clear at the time of the Renaissance. Wherever and whenever making has been a respectable activity for thinkers, creativity flourishes.

Seeing as technological creativity requires a particular set of circumstances, it is not the norm. Throughout history, Mokyr writes, “Technological progress was neither continuous nor persistent. Genuinely creative societies were rare, and their bursts of creativity usually short-lived.”

Not only did people need to be open to new ideas, they also needed to be willing to actually start using new technologies. This often required a big leap of faith. If you’re a farmer just scraping by, trying a new way of ploughing your fields could mean starving to death if it doesn’t work out. Innovations can take a long time to diffuse, with riskier ones taking the longest.

How can we foster the right environment?

So what can we learn from The Lever of Riches that we can apply as individuals and in organizations?

The first lesson is that creativity does not occur in a vacuum. It requires certain necessary conditions to occur. If we want to come up with new ideas as individuals, we should consider ourselves as part of a system. In particular, we need to consider what might impede us and what can encourage us. We need to eradicate anything that will get in the way of our thinking, such as limiting beliefs or lack of sleep.

We need to be clear on what motivates us to be creative, ensuring what we endeavor to do will be worthwhile enough to drive us through the associated effort. When we find ourselves creatively blocked, it’s often because we’re not in touch with what inspires us to create in the first place.

Within an organization, such factors are equally important. If you want your employees to be creative, it’s important to consider the system they’re part of. Is there anything blocking their thinking? Is a good incentive structure in place (bearing in mind incentives are not solely financial)?

Another lesson is that tolerance for divergence is essential for encouraging creativity. This may seem like part of the first lesson, but it’s crucial enough to consider in isolation.

As individuals, when we seek to come up with new ideas, we need to ask ourselves the following questions: Am I exposing myself to new material and inspirations or staying within a filter bubble? Am I open to unusual ways of thinking? Am I spending too much time around people who discourage deviation from the status quo? Am I being tolerant of myself, allowing myself to make mistakes and have bad ideas in service of eventually having good ones? Am I spending time with unorthodox people who encourage me to think differently?

Within organizations, it’s worth asking the following questions: Are new ideas welcomed or shot down? Is it in the interests of many to protect the status quo? Are ideas respected regardless of their source? Are people encouraged to question norms?

A final lesson is that the forces of inertia are always acting to discourage creativity. Invention is not the natural state of things—it is an exception. Technological stagnation is the norm. In most places, at most times, people have not come up with new technology. It takes a lot for individuals to be willing to wrestle something new from nothing or to question if something in existence can be made better. But when those acts do occur, they can have an immeasurable impact on our world.

Thinking For Oneself

When I was young, I thought other people could give me wisdom. Now that I’m older, I know this isn’t true.

Wisdom is earned, not given. When other people give us the answer, it belongs to them and not to us. We might achieve the outcome we desire, but it comes from depending on the insight of others rather than from thinking for ourselves.

There is nothing wrong with buying insight; it is one way we leverage ourselves. The problem is when we assume the insight of others is our own.

Earning insight requires going below the surface. Most of us want to shy away from the details and complexity. It takes a while. It’s boring. It’s mental work.

Yet it is only by jumping into the complexity that we can really discover simplicity for ourselves.

While the abundant directives, rules, and simplicities offered by others make us feel like we’re getting smarter, it’s nothing more than the illusion of knowledge.

If wisdom were as simple to acquire as reading, we’d all be wealthy and happy. Others can help you, but they can’t do the work for you. Owning wisdom for oneself requires a discipline the promiscuous consumer of it does not share.

Perhaps an example will help. The other day a plumber came to repair a pipe. He fixed the problem in under five minutes. The mechanical motions are easy to replicate; the procedure was so simple that if you watched him, you could do it yourself, though it would take you longer. However, if even one thing were to deviate or change, we’d have a crisis on our hands, whereas the plumber would not. It took years of work to earn the wisdom he brought to solve the problem. Just because we could only see the simplicity he brought to the problem didn’t mean there wasn’t a deep understanding of the complexity behind it. There is no way we could acquire that insight in a few minutes of watching. We’d need to do the work ourselves, over and over for years, experiencing all of the things that could go wrong.

Thinking is something you have to do by yourself.

Appearances vs Experiences: What Really Makes Us Happy

In the search for happiness, we often confuse how something looks with how it’s likely to make us feel. This is especially true when it comes to our homes. If we want to maximize happiness, we need to prioritize experiences over appearances.

***

Most of us try to make decisions intended to bring us greater happiness. The problem is that we misunderstand how our choices really impact our well-being and end up making ones that have the opposite effect. We buy stuff that purports to inspire happiness and end up feeling depressed instead. Knowing some of the typical pitfalls in the search for happiness—especially the ones that seem to go against common sense—can help us improve quality of life.

It’s an old adage that experiences make us happier than physical things. But knowing is not the same as doing. One area where this is all too apparent is choosing where to live. You might think that how a home looks is vital to how happy you are living in it. Wrong! The experience of a living space is far more important than its appearance.

The influence of appearance

In Happy City: Transforming Our Lives Through Urban Design, Charles Montgomery explores some of the ways in which we misunderstand how our built environment and the ways we move through cities influence our happiness.

Towards the end of their first year at Harvard, freshmen find out which dormitory they will be living in for the rest of their time at university. Places are awarded via a lottery system, so individual students have no control over where they end up. Harvard’s dormitories are many and varied in their design, size, amenities, age, location, and overall prestige. Students take allocation seriously, as the building they’re in inevitably has a big influence on their experience at university. Or does it?

Montgomery points to two Harvard dormitories. Lowell House, a stunning red brick building with a rich history, is considered the most prestigious of them all. Students clamor to live in it. Who could ever be gloomy in such a gorgeous building?

Meanwhile, Mather House is a much-loathed concrete tower. It’s no one’s first choice. Most students pray for a room in Lowell and hope to be spared Mather, because they assume their university experience will be as awful as the building looks. (It’s worth noting that although the buildings vary in appearance, neither is lacking any of the amenities a student needs to live. Nor is Mather House in any way decrepit.)

The psychologist Elizabeth Dunn asked a group of freshmen to predict how each of the available dormitories might affect their experience of Harvard. In follow-up interviews, she compared their lived experience with those initial predictions. Montgomery writes:

The results would surprise many Harvard freshmen. Students sent to what they were sure would be miserable houses ended up much happier than they had anticipated. And students who landed in the most desirable houses were less happy than they expected to be. Life in Lowell House was fine. But so was life in the reviled Mather House. Overall, Harvard’s choice dormitories just didn’t make anyone much happier than its spurned dormitories.

Why did students make this mistake and waste so much energy worrying about dormitory allocation? Dunn found that they “put far too much weight on obvious differences between residences, such as location and architectural features, and far too little on things that were not so glaringly different, such as the sense of community and the quality of relationships they would develop in their dormitory.”

Asked to guess if relationships or architecture are more important, most of us would, of course, say relationships. Our behavior, however, doesn’t always reflect that. Dunn further states:

This is the standard mis-weighing of extrinsic and intrinsic values: we may tell each other that experiences are more important than things, but we constantly make choices as though we didn’t believe it.

When we think that the way a building looks will dictate our experience living in it, we are mistaking the map for the territory. Architectural flourishes soon fade into the background. What matters is the day-to-day experience of living there, when relationships matter much more than how things look. Proximity to friends is a higher predictor of happiness than charming old brick.

The impact of experience

Some things we can get used to. Some we can’t. We make a major mistake when we think it’s worthwhile to put up with negative experiences that are difficult to grow accustomed to in order to have nice things. Once again, this happens when we forget that our day-to-day experience is paramount in our perception of our happiness.

Take the case of suburbs. Montgomery describes how many people in recent decades moved to suburbs outside of American cities. There, they could enjoy luxuries like big gardens, sprawling front lawns, wide streets with plenty of room between houses, spare bedrooms, and so on. City dwellers imagined themselves and their families spreading out in spacious, safe homes. But American cities ended up being shaped by flawed logic, as Montgomery elaborates:

Neoclassical economics, which dominated the second half of the twentieth century, is based on the premise that we are all perfectly well equipped to make choices that maximize utility. . . . But the more psychologists and economists examine the relationship between decision-making and happiness, the more they realize that this is simply not true. We make bad choices all the time. . . . Our flawed choices have helped shape the modern city—and consequently, the shape of our lives.

Living in the suburbs comes at a price: long commutes. Many people spend hours a day behind the wheel, getting to and from work. On top of that, the dispersed nature of suburbs means that everything from the grocery store to the gym requires more extended periods of time driving. It’s easy for an individual to spend almost all of their non-work, non-sleep time in their car.

Commuting is, in just about every sense, terrible for us. The more time people spend driving each day, the less happy they are with their life in general. This unhappiness even extends to the partners of people with long commutes, who also experience a decline in well-being. Commuters see their health suffer due to long periods of inactivity and the stress of being stuck in traffic. It’s hard to find the time and energy for things like exercise or seeing friends if you’re always on the road. Gas and car-related expenses can eat up the savings from living outside of the city. That’s not to mention the environmental toll. Commuting is generally awful for mental health, which Montgomery illustrates:

A person with a one-hour commute has to earn 40 percent more money to be as satisfied with life as someone who walks to the office. On the other hand, for a single person, exchanging a long commute for a short walk to work has the same effect on happiness as finding a new love.

So why do we make this mistake? Drawing on the work of psychologist Daniel Gilbert, Montgomery explains that it’s a matter of us thinking we’ll get used to commuting (an experience) and won’t get used to the nicer living environment (a thing).

The opposite is true. While a bigger garden and spare bedroom soon cease to be novel, every day’s commute is a little bit different, meaning we can never get quite used to it. There is a direct linear downwards relationship between commute time and life satisfaction, but there’s no linear upwards correlation between house size and life satisfaction. As Montgomery says, “The problem is, we consistently make decisions that suggest we are not so good at distinguishing between ephemeral and lasting pleasures. We keep getting it wrong.”

Happy City teems with insights about the link between the design of where we live and our quality of life. In particular, it explores how cities are often shaped by mistaken ideas about what brings us happiness. We maximize our chances at happiness when we prioritize our experience of life instead of acquiring things to fill it with.