Thinking For Oneself

When I was young, I thought other people could give me wisdom. Now that I’m older, I know this isn’t true.

Wisdom is earned, not given. When other people give us the answer, it belongs to them and not us. While we might achieve the outcome we desire, it comes from dependence, not insight. Instead of thinking for ourselves, we’re dependent on the insight of others.

There is nothing wrong with buying insight; it's one way we leverage ourselves. The problem comes when we assume the insight of others is our own.

Earning insight requires going below the surface. Most of us want to shy away from the details and complexity. It takes a while. It’s boring. It’s mental work.

Yet it is only by jumping into the complexity that we can really discover simplicity for ourselves.

While the abundant directives, rules, and simplicities offered by others make us feel like we’re getting smarter, it’s nothing more than the illusion of knowledge.

If wisdom were as simple to acquire as reading, we'd all be wealthy and happy. Others can help you, but they can't do the work for you. Owning wisdom for oneself requires a discipline the promiscuous consumer of it does not share.

Perhaps an example will help. The other day a plumber came to repair a pipe. He fixed the problem in under five minutes. The mechanical motions are easy to replicate. In fact, while it would take me longer, the procedure was so simple that if you watched him, you'd be able to do it. However, if even one thing were to deviate or change, we'd have a crisis on our hands, whereas the plumber would not. It took years of work to earn the wisdom he brought to solve the problem. Just because we could only see the simplicity he brought to the problem didn't mean there wasn't a deep understanding of the complexity behind it. There is no way we could acquire that insight in a few minutes of watching. We'd need to do it over and over for years, experiencing all of the things that could go wrong.

Thinking is something you have to do by yourself.

Appearances vs Experiences: What Really Makes Us Happy

In the search for happiness, we often confuse how something looks with how it’s likely to make us feel. This is especially true when it comes to our homes. If we want to maximize happiness, we need to prioritize experiences over appearances.

***

Most of us try to make decisions intended to bring us greater happiness. The problem is that we misunderstand how our choices really impact our well-being and end up making ones that have the opposite effect. We buy stuff that purports to inspire happiness and end up feeling depressed instead. Knowing some of the typical pitfalls in the search for happiness—especially the ones that seem to go against common sense—can help us improve quality of life.

It's an old adage that experiences make us happier than physical things. But knowing is not the same as doing. One area where this is all too apparent is choosing where to live. You might think that how a home looks is vital to how happy you are living in it. Wrong! The experience of a living space is far more important than its appearance.

The influence of appearance

In Happy City: Transforming Our Lives Through Urban Design, Charles Montgomery explores some of the ways in which we misunderstand how our built environment and the ways we move through cities influence our happiness.

Towards the end of their first year at Harvard, freshmen find out which dormitory they will be living in for the rest of their time at university. Places are awarded via a lottery system, so individual students have no control over where they end up. Harvard’s dormitories are many and varied in their design, size, amenities, age, location, and overall prestige. Students take allocation seriously, as the building they’re in inevitably has a big influence on their experience at university. Or does it?

Montgomery points to two Harvard dormitories. Lowell House, a stunning red brick building with a rich history, is considered the most prestigious of them all. Students clamor to live in it. Who could ever be gloomy in such a gorgeous building?

Meanwhile, Mather House is a much-loathed concrete tower. It's no one's first choice. Most students pray for a room in the former and hope to be spared the latter, because they think their university experience will be as awful as the building looks. (It's worth noting that although the buildings vary in appearance, neither is lacking any of the amenities a student needs to live. Nor is Mather House in any way decrepit.)

The psychologist Elizabeth Dunn asked a group of freshmen to predict how each of the available dormitories might affect their experience of Harvard. In follow-up interviews, she compared their lived experience with those initial predictions. Montgomery writes:

The results would surprise many Harvard freshmen. Students sent to what they were sure would be miserable houses ended up much happier than they had anticipated. And students who landed in the most desirable houses were less happy than they expected to be. Life in Lowell House was fine. But so was life in the reviled Mather House. Overall, Harvard’s choice dormitories just didn’t make anyone much happier than its spurned dormitories.

Why did students make this mistake and waste so much energy worrying about dormitory allocation? Dunn found that they “put far too much weight on obvious differences between residences, such as location and architectural features, and far too little on things that were not so glaringly different, such as the sense of community and the quality of relationships they would develop in their dormitory.”

Asked to guess if relationships or architecture are more important, most of us would, of course, say relationships. Our behavior, however, doesn’t always reflect that. Dunn further states:

This is the standard mis-weighing of extrinsic and intrinsic values: we may tell each other that experiences are more important than things, but we constantly make choices as though we didn’t believe it.

When we think that the way a building looks will dictate our experience living in it, we are mistaking the map for the territory. Architectural flourishes soon fade into the background. What matters is the day-to-day experience of living there, where relationships matter much more than how things look. Proximity to friends is a stronger predictor of happiness than charming old brick.

The impact of experience

Some things we can get used to. Some we can’t. We make a major mistake when we think it’s worthwhile to put up with negative experiences that are difficult to grow accustomed to in order to have nice things. Once again, this happens when we forget that our day-to-day experience is paramount in our perception of our happiness.

Take the case of suburbs. Montgomery describes how many people in recent decades moved to suburbs outside of American cities. There, they could enjoy luxuries like big gardens, sprawling front lawns, wide streets with plenty of room between houses, spare bedrooms, and so on. City dwellers imagined themselves and their families spreading out in spacious, safe homes. But American cities ended up being shaped by flawed logic, as Montgomery elaborates:

Neoclassical economics, which dominated the second half of the twentieth century, is based on the premise that we are all perfectly well equipped to make choices that maximize utility. . . . But the more psychologists and economists examine the relationship between decision-making and happiness, the more they realize that this is simply not true. We make bad choices all the time. . . . Our flawed choices have helped shape the modern city—and consequently, the shape of our lives.

Living in the suburbs comes at a price: long commutes. Many people spend hours a day behind the wheel, getting to and from work. On top of that, the dispersed nature of suburbs means that everything from the grocery store to the gym requires extended periods of driving. It's easy for an individual to spend almost all of their non-work, non-sleep time in their car.

Commuting is, in just about every sense, terrible for us. The more time people spend driving each day, the less happy they are with their life in general. This unhappiness even extends to the partners of people with long commutes, who also experience a decline in well-being. Commuters see their health suffer due to long periods of inactivity and the stress of being stuck in traffic. It’s hard to find the time and energy for things like exercise or seeing friends if you’re always on the road. Gas and car-related expenses can eat up the savings from living outside of the city. That’s not to mention the environmental toll. Commuting is generally awful for mental health, which Montgomery illustrates:

A person with a one-hour commute has to earn 40 percent more money to be as satisfied with life as someone who walks to the office. On the other hand, for a single person, exchanging a long commute for a short walk to work has the same effect on happiness as finding a new love.

So why do we make this mistake? Drawing on the work of psychologist Daniel Gilbert, Montgomery explains that it's a matter of us thinking we'll get used to commuting (an experience) and won't get used to the nicer living environment (a thing).

The opposite is true. While a bigger garden and spare bedroom soon cease to be novel, every day's commute is a little bit different, meaning we can never quite get used to it. There is a direct downward linear relationship between commute time and life satisfaction, but no corresponding upward relationship between house size and life satisfaction. As Montgomery says, "The problem is, we consistently make decisions that suggest we are not so good at distinguishing between ephemeral and lasting pleasures. We keep getting it wrong."

Happy City teems with insights about the link between the design of where we live and our quality of life. In particular, it explores how cities are often shaped by mistaken ideas about what brings us happiness. We maximize our chances at happiness when we prioritize our experience of life instead of acquiring things to fill it with.

Job Interviews Don’t Work

Better hiring leads to better work environments, less turnover, and more innovation and productivity. When you understand the limitations and pitfalls of the job interview, you improve your chances of hiring the best possible person for your needs.

***

The job interview is a ritual just about every adult goes through at least once, a ubiquitous part of most hiring processes. The funny thing about interviews, however, is that they take up time and resources without actually helping to select the best people to hire. Instead, they promote a homogeneous workforce where everyone thinks the same.

If you have any doubt about how much you can get from an interview, think of what’s involved for the person being interviewed. We’ve all been there. The night before, you dig out your smartest outfit, iron it, and hope your hair lies flat for once. You frantically research the company, reading every last news article based on a formulaic press release, every blog post by the CEO, and every review by a disgruntled former employee.

After a sleepless night, you trek to their office, make awkward small talk, then answer a set of predictable questions. What’s your biggest weakness? Where do you see yourself in five years? Why do you want this job? Why are you leaving your current job? You reel off the answers you prepared the night before, highlighting the best of the best. All the while, you’re reminding yourself to sit up straight, don’t bite your nails, and keep smiling.

It’s not much better on the employer’s side of the table. When you have a role to fill, you select a list of promising candidates and invite them for an interview. Then you pull together a set of standard questions to riff off, doing a little improvising as you hear their responses. At the end of it all, you make some kind of gut judgment about the person who felt right—likely the one you connected with the most in the short time you were together.

Is it any surprise that job interviews don’t work when the whole process is based on subjective feelings? They are in no way the most effective means of deciding who to hire because they maximize the role of bias and minimize the role of evaluating competency.

What is a job interview?

“In most cases, the best strategy for a job interview is to be fairly honest, because the worst thing that can happen is that you won’t get the job and will spend the rest of your life foraging for food in the wilderness and seeking shelter underneath a tree or the awning of a bowling alley that has gone out of business.”

— Lemony Snicket, Horseradish

When we say “job interviews” throughout this post, we’re talking about the type of interview that has become standard in many industries and even in universities: free-form interviews in which candidates sit in a room with one or more people from a prospective employer (often people they might end up working with) and answer unstructured questions. Such interviews tend to focus on how a candidate behaves generally, emphasizing factors like whether they arrive on time or if they researched the company in advance. While questions may ostensibly be about predicting job performance, they tend to better select for traits like charisma rather than actual competence.

Unstructured interviews can make sense for certain roles. The ability to give a good first impression and be charming matters for a salesperson. But not all roles need charm, and just because you don’t want to hang out with someone after an interview doesn’t mean they won’t be an amazing software engineer. In a small startup with a handful of employees, someone being “one of the gang” might matter because close-knit friendships are a strong motivator when work is hard and pay is bad. But that group mentality may be less important in a larger company in need of diversity.

Considering the importance of hiring and how much harm getting it wrong can cause, it makes sense for companies to study and understand the most effective interview methods. Let’s take a look at why job interviews don’t work and what we can do instead.

Why job interviews are ineffective

Discrimination and bias

Information like someone’s age, gender, race, appearance, or social class shouldn’t dictate if they get a job or not—their competence should. But that’s unfortunately not always the case. Interviewers can end up picking the people they like the most, which often means those who are most similar to them. This ultimately means a narrower range of competencies is available to the organization.

Psychologist Ron Friedman explains in The Best Place to Work: The Art and Science of Creating an Extraordinary Workplace some of the unconscious biases that can impact hiring. We tend to rate attractive people as more competent, intelligent, and qualified. We consider tall people to be better leaders, particularly when evaluating men. We view people with deep voices as more trustworthy than those with higher voices.

Implicit bias is pernicious because it’s challenging to spot the ways it influences interviews. Once an interviewer judges someone, they may ask questions that nudge the interviewee towards fitting that perception. For instance, if they perceive someone to be less intelligent, they may ask basic questions that don’t allow the candidate to display their expertise. Having confirmed their bias, the interviewer has no reason to question it or even notice it in the future.

Hiring often comes down to how much an interviewer likes a candidate as a person. This means that we can be manipulated by manufactured charm. If someone’s charisma is faked for an interview, an organization can be left dealing with the fallout for ages.

The map is not the territory

The representation of something is not the thing itself. A job interview is meant to be a quick snapshot to tell a company how a candidate would be at a job. However, it’s not a representative situation in terms of replicating how the person will perform in the actual work environment.

For instance, people can lie during job interviews. Indeed, the situation practically encourages it. While most people feel uncomfortable telling outright lies (and know they would face serious consequences later on for a serious fabrication), bending the truth is common. Ron Friedman writes, "Research suggests that outright lying generates too much psychological discomfort for people to do it very often. More common during interviews are more nuanced forms of deception which include embellishment (in which we take credit for things we haven't done), tailoring (in which we adapt our answers to fit the job requirements), and constructing (in which we piece together elements from different experiences to provide better answers)." An interviewer can't know if someone is deceiving them in any of these ways. So they can't know if they're hearing the truth.

One reason why we think job interviews are representative is the fundamental attribution error. This is a cognitive bias that leads us to believe that the way people behave in one area carries over to how they will behave in other situations. We view people's behaviors as the visible outcome of innate characteristics, and we undervalue the impact of circumstances.

Some employers report using one single detail they consider representative to make hiring decisions, such as whether a candidate sends a thank-you note after the interview or if their LinkedIn picture is a selfie. Sending a thank-you note shows manners and conscientiousness. Having a selfie on LinkedIn shows unprofessionalism. But is that really true? Can one thing carry across to every area of job performance? It’s worth debating.

Gut feelings aren’t accurate

We all like to think we can trust our intuition. The problem is that intuitive judgments tend to only work in areas where feedback is fast and cause and effect clear. Job interviews don’t fall into that category. Feedback is slow. The link between a hiring decision and a company’s success is unclear.

Overwhelmed by candidates and the pressure of choosing, interviewers may resort to making snap judgments based on limited information. And interviews introduce a lot of noise, which can dilute relevant information while leading to overconfidence. In a study titled "Belief in the Unstructured Interview: The Persistence of an Illusion," participants predicted the future GPA of a set of students. They either received biographical information about the students or both biographical information and an interview. In some of the cases, the interview responses were entirely random, meaning they shouldn't have conveyed any genuine useful information.

Before the participants made their predictions, the researchers informed them that the strongest predictor of a student’s future GPA is their past GPA. Seeing as all participants had access to past GPA information, they should have factored it heavily into their predictions.

In the end, participants who were able to interview the students made worse predictions than those who only had access to biographical information. Why? Because the interviews introduced too much noise. They distracted participants with irrelevant information, making them forget the most significant predictive factor: past GPA. Of course, we do not have clear metrics like GPA for jobs. But this study indicates that interviews do not automatically lead to better judgments about a person.

We tend to think human gut judgments are superior, even when evidence doesn’t support this. We are quick to discard information that should shape our judgments in favor of less robust intuitions that we latch onto because they feel good. The less challenging information is to process, the better it feels. And we tend to associate good feelings with ‘rightness’.

Experience ≠ expertise in interviewing

In 1979, the University of Texas Medical School at Houston suddenly had to increase its incoming class size by 50 students due to a legal change requiring larger classes. Without time to conduct new interviews, the school admitted candidates it had already interviewed but then rejected as unsuitable for admission. Having made it to the interview stage, these candidates were among the best in the pool; they just hadn't previously been considered good enough to admit.

When researchers later studied the result of this unusual situation, they found that the students whom the school first rejected performed no better or worse academically than the ones they first accepted. In short, interviewing students did nothing to help select for the highest performers.

Studying the efficacy of interviews is complicated and hard to manage from an ethical standpoint. We can't exactly give different people the same real-world job in the same conditions. We can take clues from fortuitous occurrences, like the University of Texas Medical School's change in class size and the subsequent lessons learned. Without the legal change, the interviewers would never have known that the students they rejected were of equal competence to the ones they accepted. This is why building up experience in this arena is difficult. Even if someone has a lot of experience conducting interviews, it's not straightforward to translate that into expertise. Expertise is about having a predictive model of something, not just knowing a lot about it.

Furthermore, the feedback from hiring decisions tends to be slow. An interviewer cannot know what would happen if they hired an alternate candidate. If a new hire doesn’t work out, that tends to fall on them, not the person who chose them. There are so many factors involved that it’s not terribly conducive to learning from experience.

Making interviews more effective

It’s easy to see why job interviews are so common. People want to work with people they like, so interviews allow them to scope out possible future coworkers. Candidates expect interviews, as well—wouldn’t you feel a bit peeved if a company offered you a job without the requisite “casual chat” beforehand? Going through a grueling interview can make candidates more invested in the position and likely to accept an offer. And it can be hard to imagine viable alternatives to interviews.

But it is possible to make job interviews more effective or make them the final step in the hiring process after using other techniques to gauge a potential hire’s abilities. Doing what works should take priority over what looks right or what has always been done.

Structured interviews

While unstructured interviews don't work, structured ones can be excellent. In Thinking, Fast and Slow, Daniel Kahneman describes how he redefined the Israel Defense Forces' interviewing process as a young psychology graduate. At the time, recruiting a new soldier involved a series of psychometric tests followed by an interview to assess their personality. Interviewers then based their decision on their intuitive sense of a candidate's fitness for a particular role. It was very similar to the method of hiring most companies use today, and it proved to be useless.

Kahneman introduced a new interviewing style in which candidates answered a predefined series of questions that were intended to measure relevant personality traits for the role (for example, responsibility and sociability). He then asked interviewers to give candidates a score for how well they seemed to exhibit each trait based on their responses. Kahneman explained that “by focusing on standardized, factual questions I hoped to combat the halo effect, where favorable first impressions influence later judgments.” He tasked interviewers only with providing these numbers, not with making a final decision.

Although interviewers at first disliked Kahneman’s system, structured interviews proved far more effective and soon became the standard for the IDF. In general, they are often the most useful way to hire. The key is to decide in advance on a list of questions, specifically designed to test job-specific skills, then ask them to all the candidates. In a structured interview, everyone gets the same questions with the same wording, and the interviewer doesn’t improvise.

Tomas Chamorro-Premuzic writes in The Talent Delusion:

There are at least 15 different meta-analytic syntheses on the validity of job interviews published in academic research journals. These studies show that structured interviews are very useful to predict future job performance. . . . In comparison, unstructured interviews, which do not have a set of predefined rules for scoring or classifying answers and observations in a reliable and standardized manner, are considerably less accurate.

Why does it help if everyone hears the same questions? Because, as we learned previously, interviewers can make unconscious judgments about candidates, then ask questions intended to confirm their assumptions. Structured interviews help measure competency, not irrelevant factors. Ron Friedman explains this further:

It’s also worth having interviewers develop questions ahead of time so that: 1) each candidate receives the same questions, and 2) they are worded the same way. The more you do to standardize your interviews, providing the same experience to every candidate, the less influence you wield on their performance.

What, then, is an employer to do with the answers? Friedman says you must then create clear criteria for evaluating them.

Another step to help minimize your interviewing blind spots: include multiple interviewers and give them each specific criteria upon which to evaluate the candidate. Without a predefined framework for evaluating applicants—which may include relevant experience, communication skills, attention to detail—it’s hard for interviewers to know where to focus. And when this happens, fuzzy interpersonal factors hold greater weight, biasing assessments. Far better to channel interviewers’ attention in specific ways, so that the feedback they provide is precise.

Blind auditions

One way to make job interviews more effective is to find ways to “blind” the process—to disguise key information that may lead to biased judgments. Blinded interviews focus on skills alone, not who a candidate is as a person. Orchestras offer a remarkable case study in the benefits of blinding.

In the 1970s, orchestras had a gender bias problem. A mere 5% of their members were women, on average. Orchestras knew they were missing out on potential talent, but the audition process seemed to favor men over women. Those carrying out auditions couldn't sidestep their unconscious tendency to favor men.

Instead of throwing up their hands in despair and letting this inequality stand, orchestras began carrying out blind auditions. During these, candidates would play their instruments behind a screen while a panel listened and assessed their performance. The panel received no identifiable information about candidates. The idea was that orchestras would be able to hire without room for bias. It took a bit of tweaking to make it work: at first, panelists could still discern gender from the sound of a candidate's shoes, so candidates were asked to remove their shoes before auditioning.

The results? By 1997, up to 25% of orchestra members were women. Today, the figure is closer to 30%.

Although this is sometimes difficult to replicate for other types of work, blind auditions can provide an inspiration to other industries that could benefit from finding ways to make interviews more about a person’s abilities than their identity.

Competency-related evaluations

What’s the best way to test if someone can do a particular job well? Get them to carry out tasks that are part of the job. See if they can do what they say they can do. It’s much harder for someone to lie and mislead an interviewer during actual work than during an interview. Using competency tests for a blinded interview process is also possible—interviewers could look at depersonalized test results to make unbiased judgments.

Tomas Chamorro-Premuzic writes in The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential, “The science of personnel selection is over a hundred years old yet decision-makers still tend to play it by ear or believe in tools that have little academic rigor. . . . An important reason why talent isn’t measured more scientifically is the belief that rigorous tests are difficult and time-consuming to administer, and that subjective evaluations seem to do the job ‘just fine.’”

Competency tests are already quite common in many fields. But interviewers tend not to accord them sufficient importance. They come after an interview, or they’re considered secondary to it. A bad interview can override a good competency test. At best, interviewers accord them equal importance to interviews. Yet they should consider them far more important.

Ron Friedman writes, “Extraneous data such as a candidate’s appearance or charisma lose their influence when you can see the way an applicant actually performs. It’s also a better predictor of their future contributions because unlike traditional in-person interviews, it evaluates job-relevant criteria. Including an assignment can help you better identify the true winners in your applicant pool while simultaneously making them more invested in the position.”

Conclusion

If a company relies on traditional job interviews as its sole or main means of choosing employees, it simply won’t get the best people. And getting hiring right is paramount to the success of any organization. A driven team of people passionate about what they do can trump one with better funding and resources. The key to finding those people is using hiring techniques that truly work.

Why You Feel At Home In A Crisis

When disaster strikes, people come together. During the worst times of our lives, we can end up experiencing the best mental health and relationships with others. Here’s why that happens and how we can bring the lessons we learn with us once things get better.

***

“Humans don’t mind hardship, in fact they thrive on it; what they mind is not feeling necessary. Modern society has perfected the art of making people not feel necessary.”

— Sebastian Junger

The Social Benefits of Adversity

When World War II began to unfold in 1939, the British government feared the worst. With major cities like London and Manchester facing aerial bombardment from the German air force, leaders were sure societal breakdown was imminent. Civilians were, after all, in no way prepared for war. How would they cope with a complete change to life as they knew it? How would they respond to the nightly threat of injury or death? Would they riot, loot, experience mass-scale psychotic breaks, go on murderous rampages, or lapse into total inertia as a result of exposure to German bombing campaigns?

Richard M. Titmuss writes in Problems of Social Policy that "social distress, disorganization, and loss of morale" were expected. Experts predicted 600,000 deaths and 1.2 million injuries from the bombings. Some in the government feared three times as many psychiatric casualties as physical ones. Official reports pondered how the population would respond to "financial distress, difficulties of food distribution, breakdowns in transport, communications, gas, lighting, and water supplies."

After all, no one had lived through anything like this. Civilians couldn't receive training as soldiers could, so it stood to reason they would be at high risk of psychological collapse. Titmuss writes, "It seems sometimes to have been expected almost as a matter of course that widespread neurosis and panic would ensue." The government contemplated sending a portion of soldiers into cities, rather than to the front lines, to maintain order.

Known as the Blitz, the effects of the bombing campaign were brutal. Over 60,000 civilians died, about half of them in London. The total cost of property damage was about £56 billion in today’s money, with almost a third of the houses in London becoming uninhabitable.

Yet despite all this, the anticipated social and psychological breakdown never happened. The death toll was also much lower than predicted, in part due to stringent adherence to safety instructions. In fact, the Blitz achieved the opposite of what the attackers intended: the British people proved more resilient than anyone predicted. Morale remained high, and there didn’t appear to be an increase in mental health problems. The suicide rate may have decreased. Some people with longstanding mental health issues found themselves feeling better.

People in British cities came together like never before to organize themselves at the community level. The sense of collective purpose this created led many to experience better mental health than they’d ever had. One indicator of this is that children who remained with their parents fared better than those evacuated to the safety of the countryside. The stress of the aerial bombardment didn’t override the benefits of staying in their city communities.

The social unity the British people reported during World War II lasted in the decades after. We can see it in the political choices the wartime generation made—the politicians they voted into power and the policies they voted for. By some accounts, the social unity fostered by the Blitz was the direct cause of the strong welfare state that emerged after the war and the creation of Britain’s free national healthcare system. Only when the wartime generation started to pass away did that sentiment fade.

We Know How to Adapt to Adversity

We may be ashamed to admit it, but human nature is more at home in a crisis.

Disasters force us to band together and often strip away our differences. The effects of World War II on the British people were far from unique. The Allied bombing of Germany also strengthened community spirit. In fact, cities that suffered the least damage saw the worst psychological consequences. Similar improvements in morale occurred during other wars and riots, and after September 11, 2001.

When normality breaks down, we experience the sort of conditions we evolved to handle. Our early ancestors lived with a great deal of pain and suffering. The harsh environments they faced necessitated collaboration and sharing. Groups of people who could work together were most likely to survive. Because of this, evolution selected for altruism.

Among modern foraging tribal groups, the punishments for freeloading are severe. Execution is not uncommon. As severe as this may seem, allowing selfishness to flourish endangers the whole group. It stands to reason that the same was true for our ancestors living in much the same conditions. Being challenged as a group by difficult changes in our environment leads to incredible community cohesion.

Many of the conditions we need to flourish both as individuals and as a species emerge during disasters. Modern life otherwise fails to provide them. Times of crisis are closer to the environments our ancestors evolved in. Of course, this does not mean that disasters are good. By their nature, they produce immense suffering. But understanding their positive flip side can help us to both weather them better and bring important lessons into the aftermath.

Embracing Struggle

Good times don’t actually produce good societies.

In Tribe: On Homecoming and Belonging, Sebastian Junger argues that modern society robs us of the solidarity we need to thrive. Unfortunately, he writes, “The beauty and the tragedy of the modern world is that it eliminates many situations that require people to demonstrate commitment to the collective good.” As life becomes safer, it is easier for us to live detached lives. We can meet all of our needs in relative isolation, which prevents us from building a strong connection to a common purpose. In our normal day to day, we rarely need to show courage, turn to our communities for help, or make sacrifices for the sake of others.

Furthermore, our affluence doesn’t seem to make us happier. Junger writes that “as affluence and urbanization rise in a society, rates of depression and suicide tend to go up, not down. Rather than buffering people from clinical depression, increased wealth in society seems to foster it.” We often think of wealth as a buffer from pain, but beyond a certain point, wealth can actually make us more fragile.

The unexpected worsening of mental health in modern society has much to do with our lack of community—which might explain why times of disaster, when everyone faces the breakdown of normal life, can counterintuitively improve mental health, despite the other negative consequences. When situations requiring sacrifice do reappear and we must work together to survive, it alleviates our disconnection from each other. Disaster increases our reliance on our communities.

In a state of chaos, our way of relating to each other changes. Junger explains that “self-interest gets subsumed into group interest because there is no survival outside of group survival, and that creates a social bond that many people sorely miss.” Helping each other survive builds ties stronger than anything we form during normal conditions. After a natural disaster, residents of a city may feel like one big community for the first time. United by the need to get their lives back together, individual differences melt away for a while.

Junger writes particularly of one such instance:

The one thing that might be said for societal collapse is that—for a while at least—everyone is equal. In 1915 an earthquake killed 30,000 people in Avezzano, Italy, in less than a minute. The worst-hit areas had a mortality rate of 96 percent. The rich were killed along with the poor, and virtually everyone who survived was immediately thrust into the most basic struggle for survival: they needed food, they needed water, they needed shelter, and they needed to rescue the living and bury the dead. In that sense, plate tectonics under the town of Avezzano managed to recreate the communal conditions of our evolutionary past quite well.

Disasters bring out the best in us. Junger goes on to say that “communities that have been devastated by natural or manmade disasters almost never lapse into chaos and disorder; if anything they become more just, more egalitarian, and more deliberately fair to individuals.” When catastrophes end, despite their immense negatives, people report missing how it felt to unite for a common cause. Junger explains that “what people miss presumably isn’t danger or loss but the unity that these things often engender.” The loss of that unification can be, in its own way, traumatic.

Don’t Be Afraid of Disaster

So what can we learn from Tribe?

The first lesson is that, in the face of disaster, we should not expect the worst from other people. Yes, instances of selfishness will happen no matter what. Many people will look out for themselves at the expense of others, not least the ultra-wealthy who are unlikely to be affected in a meaningful way and so will not share in the same experience. But on the whole, history has shown that the breakdown of order people expect is rare. Instead, we find new ways to continue and to cope.

During World War II, there were fears that British people would resent the appearance of over two million American servicemen in their country. After all, it meant more competition for scarce resources. Instead, the “friendly invasion” met with a near-unanimous warm welcome. British people shared what they had without bitterness. They understood that the Americans were far from home and missing their loved ones, so they did all they could to help. In a crisis, we can default to expecting the best from each other.

Second, we can achieve a great deal by organizing on the community level when disaster strikes. Junger writes, “There are many costs to modern society, starting with its toll on the global ecosystem and working one’s way down to its toll on the human psyche, but the most dangerous may be to community. If the human race is under threat in some way that we don’t yet understand, it will probably be at a community level that we either solve the problem or fail to.” When normal life is impossible, being able to volunteer help is an important means of retaining a sense of control, even if it imposes additional demands. One explanation for the high morale during the Blitz is that everyone could be involved in the war effort, whether they were fostering a child, growing cabbages in their garden, or collecting scrap metal to make planes.

For our third and final lesson, we should not forget what we learn about the importance of banding together. What’s more, we must do all we can to let that knowledge inform future decisions. It is possible for disasters to spark meaningful changes in the way we live. We should continue to emphasize community and prioritize stronger relationships. We can do this by building strong reminders of what happened and how it impacted people. We can strive to educate future generations, teaching them why unity matters.

(In addition to Tribe, many of the details of this post come from Disasters and Mental Health: Therapeutic Principles Drawn from Disaster Studies by Charles E. Fritz.)

Stop Preparing For The Last Disaster

When something goes wrong, we often strive to be better prepared if the same thing happens again. But the same disasters tend not to happen twice in a row. A more effective approach is simply to prepare to be surprised by life, instead of expecting the past to repeat itself.

***

If we want to become less fragile, we need to stop preparing for the last disaster.

When disaster strikes, we learn a lot about ourselves. We learn whether we are resilient, whether we can adapt to challenges and come out stronger. We learn what has meaning for us, we discover core values, and we identify what we’re willing to fight for. Disaster, if it doesn’t kill us, can make us stronger. Maybe we discover abilities we didn’t know we had. Maybe we adapt to a new normal with more confidence. And often we make changes so we will be better prepared in the future.

But better prepared for what?

After a particularly trying event, most people prepare for a repeat of whatever challenge they just faced. From the micro level to the macro level, we succumb to the availability bias and get ready to fight a war we’ve already fought. We learn that one lesson, but we don’t generalize that knowledge or expand it to other areas. Nor do we necessarily let the fact that a disaster happened teach us that disasters do, as a rule, tend to happen. Because we focus on the particulars, we fail to extrapolate what we learn into better preparation for adversity in general.

We tend to have the same reaction to challenge, regardless of the scale of impact on our lives.

Sometimes the impact is strictly personal. For example, our partner cheats on us, so we vow never to have that happen again and make changes designed to catch the next cheater before they get a chance; in future relationships, we let jealousy cloud everything.

But other times, the consequences are far reaching and impact the social, cultural, and national narratives we are a part of. Like when a terrorist uses an airplane to attack our city, so we immediately increase security at airports so that planes can never be used again to do so much damage and kill so many people.

The changes we make may keep us safe from a repeat of those scenarios that hurt us. The problem is, we’re still fragile. We haven’t done anything to increase our resilience—which means the next disaster is likely to knock us on our ass.

Why do we keep preparing for the last disaster?

Disasters cause pain. Whether it’s emotional or physical, the hurt causes vivid and strong reactions. We remember pain, and we want to avoid it in the future through whatever means possible. The availability of memories of our recent pain informs what we think we should do to stop it from happening again.

This process, called the availability bias, has significant implications for how we react in the aftermath of disaster. Writing in The Legal Analyst: A Toolkit for Thinking about the Law about the information cascades this cognitive bias sets off, Ward Farnsworth says they “also help explain why it’s politically so hard to take strong measures against disasters before they have happened at least once. Until they occur they aren’t available enough to the public imagination to seem important; after they occur their availability cascades and there is an exaggerated rush to prevent the identical thing from happening again. Thus after the terrorist attacks on the World Trade Center, cutlery was banned from airplanes and invasive security measures were imposed at airports. There wasn’t the political will to take drastic measures against the possibility of nuclear or other terrorist attacks of a type that hadn’t yet happened and so weren’t very available.”

In the aftermath of a disaster, we want to be reassured of future safety. We lived through it, and we don’t want to do so again. By focusing on the particulars of a single event, however, we miss identifying the changes that will improve our chances of better outcomes next time. Yes, we don’t want any more planes to fly into buildings. But preparing for the last disaster leaves us just as underprepared for the next one.

What might we do instead?

We rarely take a step back and go beyond the pain to look at what made us so vulnerable to it in the first place. However, that’s exactly where we need to start if we really want to better prepare ourselves for future disaster. Because really, what most of us want is to not be taken by surprise again, caught unprepared and vulnerable.

The reality is that the same disaster is unlikely to happen twice. Your next lover is unlikely to hurt you in the same way your former one did, just as the next terrorist is unlikely to attack in the same way as their predecessor. If you want to make yourself less fragile in the face of great challenge, the first step is to accept that you are never going to know what the next disaster will be. Then ask yourself: How can I prepare anyway? What changes can I make to better face the unknown?

As Andrew Zolli and Ann Marie Healy explain in Resilience: Why Things Bounce Back, “surprises are by definition inevitable and unforeseeable, but seeking out their potential sources is the first step toward adopting the open, ready stance on which resilient responses depend.”

Giving serious thought to the range of possible disasters immediately makes you aware that you can’t prepare for all of them. But what are the common threads? What safeguards can you put in place that will be useful in a variety of situations? A good place to start is increasing your adaptability. The easier you can adapt to change, the more flexibility you have. More flexibility means having more options to deal with, mitigate, and even capitalize on disaster.

Another important mental tool is to accept that disasters will happen. Expect them. It’s not about walking around every day with your adrenaline pumped in anticipation; it’s about making plans assuming that they will get derailed at some point. So you insert backup systems. You create a cushion, moving away from razor-thin margins. You give yourself the optionality to respond differently when the next disaster hits.

Finally, we can find ways to benefit from disaster. Author and economist Keisha Blair, in Holistic Wealth, suggests that “building our resilience muscles starts with the way we process the negative events in our lives. Mental toughness is a prerequisite for personal growth and success.” She further writes, “adversity allows us to become better rounded, richer in experience, and to strengthen our inner resources.” We can learn from the last disaster how to grow and leverage our experiences to better prepare for the next one.

Coordination Problems: What It Takes to Change the World

The key to major changes on a societal level is getting enough people to alter their behavior at the same time. It’s not enough for isolated individuals to act. Here’s what we can learn from coordination games in game theory about what it takes to solve some of the biggest problems we face.

***

What is a Coordination Failure?

Sometimes we see systems where everyone involved seems to be doing things in a completely ineffective and inefficient way. A single small tweak could make everything substantially better—save lives, be more productive, save resources. To an outsider, it might seem obvious what needs to be done, and it might be hard to think of an explanation for the ineffectiveness that is more nuanced than assuming everyone in that system is stupid.

Why is publicly funded research published in journals that charge heavily for access, limiting the flow of important scientific knowledge while contributing little themselves? Why are countries spending billions of dollars and risking disaster developing nuclear weapons intended only as deterrents? Why is doping widespread in some sports, even though it carries heavy health consequences and is banned? You can probably think of many similar problems.

Coordination games in game theory give us a lens for understanding both the seemingly inscrutable origins of such problems and why they persist.

The Theoretical Background to Coordination Failure

In game theory, a game is a set of circumstances in which two or more players pick among competing strategies in order to get a payoff. A coordination game is one where players get the best possible payoff by all doing the same thing. If the players fail to coordinate on the same strategy, they typically end up with diminished payoffs.

When every player is carrying out a strategy from which they have no incentive to deviate, this is called a Nash equilibrium: given the strategies chosen by the other player(s), no player could improve their payoff by changing their own. However, a game can have multiple Nash equilibria with different payoffs. In real-world terms, this means there are multiple different choices everyone could make, some better than others, but each works only if it is unanimous.
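As a toy illustration, a few lines of code can enumerate the pure-strategy Nash equilibria of a two-player coordination game and show that two equilibria of different quality can coexist. The strategies and payoff numbers here are invented for the example, not taken from the text:

```python
# A two-player coordination game:
# (row strategy, column strategy) -> (row payoff, column payoff).
# Coordinating on "B" pays better than on "A", but both are equilibria.
PAYOFFS = {
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (3, 3),
}
STRATEGIES = ["A", "B"]

def is_nash(row, col):
    """True if neither player can improve by unilaterally deviating."""
    row_payoff, col_payoff = PAYOFFS[(row, col)]
    best_row = max(PAYOFFS[(r, col)][0] for r in STRATEGIES)
    best_col = max(PAYOFFS[(row, c)][1] for c in STRATEGIES)
    return row_payoff == best_row and col_payoff == best_col

equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
print(equilibria)  # [('A', 'A'), ('B', 'B')]: two equilibria, (B, B) pays more
```

Both (A, A) and (B, B) are self-reinforcing: once everyone plays A, no individual gains by switching to B alone, even though coordinating on B would be better for all.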

The Prisoner’s Dilemma is a coordination game. In a one-round Prisoner’s Dilemma, the optimal strategy for each player is to defect. Even though defecting makes the most sense, it isn’t the strategy with the highest possible payoff; that would require both players to cooperate. But since neither player can know what the other will do, cooperating is unwise. A player who cooperates while the other defects gets the worst possible payoff, whereas if both defect, each still does better than they would have by cooperating against a defector.
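The dominance argument can be made concrete with the conventional illustrative payoffs (temptation 5, reward 3, punishment 1, sucker’s payoff 0; the numbers are a standard textbook convention, not from the text):

```python
# One-shot Prisoner's Dilemma payoff table:
# (player 1 move, player 2 move) -> (player 1 payoff, player 2 payoff)
PD = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: best joint outcome
    ("cooperate", "defect"):    (0, 5),  # the lone cooperator gets the worst payoff
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: the Nash equilibrium
}

# Whatever the opponent does, defecting pays player 1 strictly more:
for opponent in ("cooperate", "defect"):
    if_cooperate = PD[("cooperate", opponent)][0]
    if_defect = PD[("defect", opponent)][0]
    assert if_defect > if_cooperate  # defection strictly dominates cooperation
```

Since the table is symmetric, the same holds for player 2, which is why both end up at the (1, 1) outcome even though (3, 3) was available.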

So the Prisoner’s Dilemma is a coordination failure. The players would get a better payoff if they both cooperated, but they cannot trust each other. In the Iterated Prisoner’s Dilemma, players face each other for an unknown number of rounds. In this case, cooperation becomes possible if both players use the strategy of “tit for tat”: cooperate in the first round, then do whatever the other player did in the previous round. Even so, an incentive to defect remains, because any given round could be the last; cooperation persists only as long as both players expect the game to continue.
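A quick simulation makes the dynamic visible. This is a sketch using the same conventional payoffs as above; the strategy functions are illustrative, not from the text:

```python
# Iterated Prisoner's Dilemma: "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): cooperation sustained
print(play(tit_for_tat, always_defect, 10))  # (9, 14): one exploited round, then mutual defection
```

Two tit-for-tat players lock into mutual cooperation and each earn far more than two defectors would, while tit for tat against a defector loses only the first round before retaliating.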

Many of the major problems we see around us are coordination failures. They are only solvable if everyone can agree to do the same thing at the same time. Faced with multiple Nash equilibria, we do not necessarily choose the best one overall. We choose what makes sense given the existing incentives, which often discourage us from challenging the status quo. It often makes most sense to do what everyone else is doing, whether that’s driving on the left side of the road, wearing a suit to a job interview, or keeping your country’s nuclear arsenal stocked up.

Take the case of academic publishing, given as a classic coordination failure by Eliezer Yudkowsky in Inadequate Equilibria: Where and How Civilizations Get Stuck. Academic journals publish research within a given field and charge for access to it, often at exorbitant rates. In order to get the best jobs and earn prestige within a field, researchers need to publish in the most respected journals. If they don’t, no one will take their work seriously.

Academic publishing is broken in many ways. By charging high prices, journals limit the flow of knowledge and slow scientific progress. They do little to help researchers, instead profiting from the work of volunteers and taxpayer funding. Yet researchers continue to submit their work to them. Why? Because this is the Nash equilibrium. Although it would be better for science as a whole if everyone stopped publishing in journals that charge for access, it isn’t in the interests of any individual scientist to do so. If they did, their career would suffer and most likely end. The only solution would be a coordinated effort for everyone to move away from journals. But seeing as this is so difficult to organize, the farce of academic publishing continues, harming everyone except the journals.

How We Can Solve and Avoid Coordination Failures

It’s possible to change things on a large scale if we can communicate widely enough. When everyone knows that everyone knows, changing what we do is much easier.

We all act out of self-interest, so expecting individuals to risk the costs of going against convention is usually unreasonable. Yet it only takes a small proportion of people to change their opinions to reach a tipping point where there is a strong incentive for everyone to change their behavior, and this is magnified even more if those people have a high degree of influence. The more power those who enact change have, the faster everyone else can do the same.

To overcome coordination failures, we need to be able to communicate despite our differences. And we need to be able to trust that when we act, others will act too. The initial kick can be enough people making their actions visible. Groups can have exponentially greater impacts than individuals. We thus need to think beyond the impact of our own actions and consider what will happen when we act as part of a group.

In an example given by the effective altruism-centered website 80,000 Hours, there are countless charitable causes one could donate money to at any given time. Most people who donate do so out of emotional responses or habit. However, some charitable causes are orders of magnitude more effective than others at saving lives and having a positive global impact. If many people can coordinate and donate to the most effective charities until they reach their funding goal, the impact of the group giving is far greater than if isolated individuals calculate the best use of their money. Making research and evidence of donations public helps solve the communication issue around determining the impact of charitable giving.

As Michael Suk-Young Chwe writes in Rational Ritual: Culture, Coordination, and Common Knowledge, “Successful communication sometimes is not simply a matter of whether a given message is received. It also depends on whether people are aware that other people also receive it.” According to Suk-Young Chwe, for people to coordinate on the basis of certain information it must be “common knowledge,” a phrase used here to mean “everyone knows it, everyone knows that everyone knows it, everyone knows that everyone knows that everyone knows it, and so on.” The more public and visible the change is, the better.

We can prevent coordination failures in the first place through visible guarantees that those who take a different course of action will not suffer negative consequences. Bank runs are a coordination failure, one that was particularly problematic during the Great Depression. It’s better for everyone if all depositors leave their money in the bank so it doesn’t run out of reserves and fail. But once other people start panicking and withdrawing their deposits, it makes sense for any given individual to do likewise in case the bank fails and they lose their money. The solution is deposit protection insurance, which ensures no one comes away empty-handed even if a bank does fail.
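One way to see why deposit insurance works is a toy threshold model in the spirit of Granovetter-style cascade models. The thresholds and numbers below are illustrative assumptions, not from the text: each depositor withdraws once the fraction already withdrawing reaches their personal panic threshold, and insurance is modelled simply as raising everyone’s threshold, so a lone panicker no longer tips the crowd:

```python
def cascade(thresholds):
    """Iterate to a fixed point: a depositor withdraws once the fraction
    already withdrawing reaches their threshold (0.0 = panics on their own)."""
    n = len(thresholds)
    withdrawing = 0
    while True:
        new = sum(1 for t in thresholds if t <= withdrawing / n)
        if new == withdrawing:
            return withdrawing
        withdrawing = new

# Ten depositors with evenly spread thresholds: one panicker tips the next
# depositor, who tips the next, until everyone has withdrawn.
print(cascade([i / 10 for i in range(10)]))  # 10: a full bank run

# With insurance raising the other nine thresholds, the lone panicker
# still withdraws, but nobody follows and the run never cascades.
print(cascade([0.0] + [0.95] * 9))           # 1: the cascade stops
```

The structure of the failure is the same as in the other examples: each withdrawal is individually rational given what others are doing, so the fix is not persuading individuals but changing the payoff everyone can see.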

Game theory can help us to understand not only why it can be difficult for people to work together in the best possible way but also how we can reach more optimal outcomes through better communication. With a sufficient push towards a new equilibrium, we can drastically improve our collective circumstances in a short time.