Category: Thinking

How Julia Child Used First Principles Thinking

There’s a big difference between knowing how to follow a recipe and knowing how to cook. If you can master the first principles within a domain, you can see much further than those who are just following recipes. That’s what Julia Child, “The French Chef,” did throughout her career.

Following a recipe might get you the results you want, but it doesn’t teach you anything about how cooking works at the foundational level. Or what to do when something goes wrong. Or how to come up with your own recipes when you open the fridge on a Wednesday night and realize you forgot to go grocery shopping. Or how to adapt recipes for your own dietary needs.

Adhering to recipes will only get you so far, and it certainly won’t result in you coming up with anything new or creative.

People who know how to cook understand the basic principles that make food taste, look, and smell good. They have confidence in troubleshooting and solving problems as they go—or adjusting to unexpected outcomes. They can glance at an almost barren kitchen and devise something delicious. They know how to adapt to a guest with a gluten allergy or a child who doesn’t like green food. Sure, they might consult a recipe when it makes sense to do so. But they’re not dependent on it, and they can change it up based on their particular circumstances.

There’s a reason many cooking competition shows feature a segment where contestants need to design their own recipe from a limited assortment of ingredients. Effective improvisation shows the judges that someone can actually cook, not just follow recipes.

We can draw a strong parallel from cooking to thinking. If you want to learn how to think for yourself, you can’t just follow what someone else came up with. You need to understand first principles if you want to be able to solve complex problems or think in a unique, creative fashion. First principles are the building blocks of knowledge, the foundational understanding acquired from breaking something down into its most essential concepts.

One person who exemplifies first principles thinking is Julia Child, an American educator who charmed audiences with her classes, books, and TV shows. First principles thinking enabled Julia to both master her own struggles with cooking and then teach the world to do the same. In Something from the Oven, Laura Shapiro tells the charming story of how she did it. Here’s what we can learn about better thinking from the “French Chef.”

***

Gustave Flaubert wrote that “talent is a long patience,” something which was all too true for Julia. She wasn’t born with an innate skill for or even love of cooking. Her starting point was falling in love with her future husband, Paul Child, in Ceylon in 1944 when both were working for the Office of Strategic Services. Paul adored food, and his delight in it inspired Julia. When they each returned to their separate homes after the war, she decided she would learn to cook. Things got off to a bad start, as Shapiro explains:

“At first she tried to teach herself at home, but it was frustrating to bushwhack her way through one dish after another. She never knew whether she would find success or failure when she opened the oven door, and worst of all, she didn’t know why this recipe worked and that one didn’t.”

Seeking expert guidance, Julia started taking classes three times a week at a Beverly Hills cooking school. Even that didn’t help much, however, and after she married Paul a year later, her experiments in their Washington, DC kitchen continued to go awry. Only when the couple moved to Paris did an epiphany strike. Julia’s encounters with French cooking instilled in her an understanding of the need for first principles thinking. Trying to follow recipes without comprehending their logic wasn’t going to produce delicious results. She needed to learn how food actually worked.

In 1949, at the age of 37, she enrolled in classes at the famous Cordon Bleu school of cooking. It changed her forever:

“Learning to cook at the Cordon Bleu meant breaking down every dish into its smallest individual steps and doing each laborious and exhausting procedure by hand. In time Child could bone a duck while leaving the skin intact, extract the guts of a chicken through a hole she made in the neck, make a ham mousse by pounding the ham to a pulp with a mortar and pestle, and turn out a swath of elaborate dishes from choucroute garnie to vol-au-vent financière. None of this came effortlessly but she could do it. She had the brains, the considerable physical strength it demanded, and her vast determination. Most important, she could understand for the first time the principles governing how and why a recipe worked as it did.”

Julia had found her calling. After six months of Cordon Bleu classes, she continued studying independently for a year. She immersed herself in French cooking, filled her home with equipment, and befriended two women who shared her passion, Simone Beck and Louisette Bertholle. In the early 1950s, they opened a tiny school together, with a couple of students working out of Julia’s kitchen. She was “adamant that the recipes used in class be absolutely reliable, and she tested every one of them for what she called ‘scientific workability.’” By this, Julia meant that the recipes needed to make sense per her understanding of the science of cooking. If they didn’t agree with the first principles she knew, they were out.

***

When Paul transferred to Marseille, Julia was sad to leave her school. But she and her friends continued their collaboration, working at a distance on a French cookery book aimed at Americans. For what would become Mastering the Art of French Cooking, Julia focused on teaching first principles in a logical order, not copying down mere recipes.

She’d grown frustrated at opening recipe books to see instructions she knew couldn’t work because they contradicted the science of cooking—for example, recipes calling for temperatures she knew would burn a particular ingredient, or omitting key ingredients like baking soda, without which a particular effect would be impossible. It was clear no one had bothered to test anything before they wrote it down, and she was determined not to make the same mistake.

Mastering the Art of French Cooking came out in 1961. Shapiro writes, “The reviews were excellent, there was a gratifying burst of publicity all across the country, and the professional food world acknowledged a new star in Julia Child. What nobody knew for sure was whether everyday homemakers in the nation that invented the TV dinner would buy the book.” Though the book was far from a flop, it was the TV show it inspired that catapulted Julia and her approach to cooking to stardom.

The French Chef first aired in 1963 and was an enormous success from the start. Viewers adored how Julia explained why she did what she did and how it worked. They also loved her spontaneous capacity to adapt to unanticipated outcomes. It was usually only possible to shoot one take, so Julia needed to keep going no matter what happened.

Her show appealed to every kind of person because it could make anyone a better cook—or at least help them understand the process better. Not only was Julia “a striking image of unaffected good nature,” the way she taught really worked. Viewers and readers who followed her guidance discovered a way of cooking that made them feel in control.

Julia “believed anybody could cook with distinction from scratch and that’s what she was out to prove.” Many of the people who watched The French Chef were women who needed a new way to think about cooking. As gender roles were being redefined and more women entered the workforce, it no longer seemed like something they were obligated by birth to do. At the same time, treating it as an undesirable chore was no more pleasant than treating it as a duty. Julia taught them another way. Cooking could be an intellectual, creative, enjoyable activity. Once you understood how it actually worked, you could learn from mistakes instead of repeating them again and again.

Shapiro explains that “Child was certainly not the first TV chef. The genre was almost as old as TV itself. But she was the first to make it her own and have an enduring societal impact.”

***

If you can master the first principles within a domain, you can see much further than those who are just following recipes. That’s what Julia managed to do, and it’s part of why she stood out from the other TV chefs of her time—and still stands out today. By mastering first principles, you can find better ways of doing things, instead of having to stick to conventions. If Julia thought a modern piece of equipment worked better than a traditional one or that part of a technique was a pointless custom, she didn’t hesitate to make changes as she saw fit. Once you know the why of something, it is easy to modify the how to achieve your desired result.

The lessons of first principles in cooking are the same for the first principles in any domain. Looking for first principles is just a way of thinking. It’s a commitment to understanding the foundation that something is built on and giving yourself the freedom to adapt, develop, and create. Once you know the first principles, you can keep learning more advanced concepts as well as innovating for yourself.

Learning Through Play

Play is an essential way of learning about the world. Doing things we enjoy without a goal in mind leads us to find new information, better understand our own capabilities, and find unexpected beauty around us. Arithmetic is one example of an area we can explore through play.

Every parent knows that children need space for unstructured play that helps them develop their creativity and problem-solving skills. Free-form experimentation leads to the rapid acquisition of information about the world. When children play together, they expand their social skills and strengthen the ability to regulate their emotions. Young animals, such as elephants, dogs, ravens, and crocodiles, also develop survival skills through play.

The benefits of play don’t disappear as soon as you become an adult. Even if we engage our curiosity in different ways as we grow up, a lot of learning and exploration still comes from analogous activities: things we do for the sheer fun of it.

When the pressure mounts to be productive every minute of the day, we have much to gain from doing all we can to carve out time to play. Take away prescriptions and obligations, and we gravitate towards whatever interests us the most. Just like children and baby elephants, we can learn important lessons through play. It can also give us a new perspective on topics we take for granted—such as the way we represent numbers.

***

Playing with symbols

The book Arithmetic, in addition to being a clear and engaging history of the subject, is a demonstration of how insights and understanding can be combined with enjoyment and fun. The best place to start the book is at the afterword, where author and mathematics professor Paul Lockhart writes, “I especially hope that I have managed to get across the idea of viewing your mind as a playground—a place to create beautiful things for your own pleasure and amusement and to marvel at what you’ve made and at what you have yet to understand.”

Arithmetic, the branch of math dealing with the manipulation and properties of numbers, can be very playful. After all, there are many ways to add and multiply numbers, and the numbers themselves can be represented in various ways. When we see six cows in a field, we represent that amount with the symbol 6. The Romans used VI. And there are many other ways that unfortunately can’t be typed on a standard English keyboard. If two more cows wander into the field, the usual method of counting them is to add 2 to 6 and conclude there are now 8 cows. But we could just as easily add 2 + 3 + 3. Or turn everything into fractions with a base of 2 and go from there.
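To make the point concrete, here is a small sketch of our own (not an example from Lockhart’s book) that writes the same quantity of cows in a few different symbol systems. The to_roman helper is purely illustrative.

```python
# The same amount written with different symbols: the symbols change, the quantity does not.

def to_roman(n: int) -> str:
    """Write a positive integer using Roman numeral symbols."""
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, symbol in values:
        while n >= value:
            result += symbol
            n -= value
    return result

cows = 6 + 2            # the usual way of counting: add 2 to 6
also_cows = 2 + 3 + 3   # the same amount, grouped differently

print(cows, also_cows)  # 8 8
print(to_roman(cows))   # VIII
print(bin(cows))        # 0b1000 (the same amount written in groups of two)
```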

One of the most intriguing parts of the book is when Lockhart encourages us to step away from how we commonly label numbers so we can have fun experimenting with them. He says, “The problem with familiarity is not so much that it breeds contempt, but that it breeds loss of perspective.” So that we don’t get too hung up on symbols such as 4 and 5, Lockhart shows us how any symbols can be used to complete some of the main arithmetic tasks, such as comparing and grouping. He shows how completely arbitrary symbols can represent amounts and gives insight into how they can be manipulated.

When we start to play with the representations, we connect to the underlying reasoning behind what we are doing. We could be counting for the purposes of comparison, and we could also be interested in learning the patterns produced by our actions. Lockhart explains that “every number can be represented in a variety of ways, and we want to choose a form that is as useful and convenient as possible.” We can thus choose how to represent numbers out of curiosity rather than convention. It’s easy to extrapolate this thinking to broader life situations. How often do we assume certain parameters are fixed just because that is what has always been done? What else could we accomplish if we let go of convention and focused instead on function?

***

Stepping away from requirements

We all use the Hindu-Arabic number system, which utilizes groups of tens. Ten singles are ten, ten tens are a hundred, and so on. It has a consistent logic to it, and it is a pervasive way of grouping numbers as they increase. But Lockhart explains that grouping numbers by ten is as arbitrary as the symbols we use to represent numbers. He explains how a society might group by fours or sevens. One of the most interesting ideas though, comes when he’s explaining the groupings:

“You might think there is no question about it; we chose four as our grouping size, so that’s that. Of course we will group our groups into fours—as opposed to what? Grouping things into fours and then grouping our groups into sixes? That would be insane! But it happens all the time. Inches are grouped into twelves to make feet, and then three feet make a yard. And the old British monetary system had twelve pence to the shilling and twenty shillings to the pound.”

By reminding us of the options available in such a simple, everyday activity as counting, Lockhart opens a mental door. What other ways might we go about our tasks and solve our problems? It’s a reminder that most of our so-called requirements are ones that we impose on ourselves.
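To see how arbitrary the grouping size really is, here is a minimal sketch of our own (not from Lockhart’s book) that regroups one count of items by tens, by fours, by sevens, and by the mixed inch-foot-yard scheme from the quote above. The regroup helper is hypothetical, written only for illustration.

```python
# The same count, 100 items, expressed under different grouping schemes.

def regroup(n: int, base: int) -> list[int]:
    """Express n as digits under the given grouping size (most significant first)."""
    digits = []
    while n > 0:
        digits.append(n % base)
        n //= base
    return list(reversed(digits)) or [0]

print(regroup(100, 10))  # [1, 0, 0]     -> ten tens make a hundred
print(regroup(100, 4))   # [1, 2, 1, 0]  -> grouped into fours, then fours of fours
print(regroup(100, 7))   # [2, 0, 2]     -> a society that groups by sevens

# Mixed grouping, like the quote's inches and feet: 12 inches per foot, 3 feet per yard.
inches = 100
feet, remaining_inches = divmod(inches, 12)
yards, feet = divmod(feet, 3)
print(yards, feet, remaining_inches)  # 2 2 4 -> 100 inches is 2 yards, 2 feet, 4 inches
```

The quantity never changes; only the grouping convention does, which is exactly the arbitrariness Lockhart is pointing at.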

If we think back to being children, we often played with things in ways that were different from what they were intended for. Pots became drums and tape strung around the house became lasers. A byproduct of this type of play is usually learning—we learn what things are normally used for by playing with them. But that’s not the intention behind a child’s play. The fun comes first, and thus they don’t restrain themselves to convention.

***

Have fun with the unfamiliar

There are advantages and disadvantages to all counting systems. For Lockhart, the only way to discover what those are is to play around with them. And it is in the playing that we may learn more than arithmetic. For example, he says: “In fact, getting stuck (say on 7 + 8 for instance) is one of the best things that can happen to you because it gives you an opportunity to reinvent and to appreciate exactly what it is that you are doing.” In the case of adding two numbers, we “are rearranging numerical information for comparison purposes.”

The larger point is that getting stuck on anything can be incredibly useful. It forces you to stop and consider what it is you are really trying to achieve. Getting stuck can help you identify the first principles in your situation. In getting unstuck, we learn lessons that resonate and help us to grow.

Lockhart says of arithmetic that we need to “not let our familiarity with a particular system blind us to its arbitrariness.” We don’t have to use the symbol 2 to represent how many cows there are in a field, just as we don’t have to group sixty minutes into one hour. We may find those representations useful, but we also may not. There are some people in the world with so much money that the numbers that represent their wealth are almost nonsensical, and most people find the clock manipulation that is the annual flip to daylight saving time to be annoying and stressful.

Playing around with arithmetic can teach the broader lesson that we don’t have to keep using systems that no longer serve us well. Yet how many of us have a hard time letting go of the ineffective simply because it’s familiar?

Which brings us back to play. Play is often the exploration of the unfamiliar. After all, if you knew what the result would be, it likely wouldn’t be considered play. When we play we take chances, we experiment, and we try new combinations just to see what happens. We do all of this in the pursuit of fun because it is the novelty that brings us pleasure and makes play rewarding.

Lockhart makes a similar point about arithmetic:

“The point of studying arithmetic and its philosophy is not merely to get good at it but also to gain a larger perspective and to expand our worldview . . . Plus, it’s fun. Anyway, as connoisseurs of arithmetic, we should always be questioning and critiquing, examining and playing.”

***

We suggest that playing need not be confined to arithmetic. If you happen to enjoy playing with numbers, then go for it. Lockhart’s book gives great inspiration on how to have fun with numbers. Playing is inherently valuable and doesn’t need to be productive. Children and animals have no purpose for play; they merely do what’s fun. It just so happens that unstructured, undirected play often has incredibly powerful byproducts.

Play can lead to new ideas and innovations. It can also lead to personal growth and development, not to mention a better understanding of the world. And, by its definition, play leads to fun. Which is the best part. Arithmetic is just one example of an unexpected area we can approach with the spirit of play.

Being Smart is Not Enough

When hiring a team, we tend to favor the geniuses who hatch innovative ideas, but overlook the butterflies, the crucial ones who share and implement them. Here’s why it’s important to be both smart AND social.

***

In business, it’s never enough to have a great idea. For any innovation to be successful, it has to be shared, promoted, and bought into by everyone in the organization. Yet often we focus on the importance of those great ideas and seem to forget about the work that is required to spread them around.

Whenever we are building a team, we tend to look for smarts. We are attracted to those with lots of letters after their names or fancy awards on their resumes. We assume that if we hire the smartest people we can find, they will come up with new, better ways of doing things that save us time and money.

Conversely, we often look down on predominantly social people. They seem to spend too much time gossiping and not enough time working. We assume they’ll be too busy engaging on social media or away from their desks too often to focus on their duties, and thus we avoid hiring them.

Although we aren’t going to tell you to swear off smarts altogether, we are here to suggest that maybe it’s time to reconsider the role that social people play in cultural growth and the diffusion of innovation.

In his book, The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, Joseph Henrich explores the role of culture in human evolution. One point he makes is that it’s not enough for a species to be smart. What counts far more is having the cultural infrastructure to share, teach, and learn.

Consider two very large prehuman populations, the Geniuses and the Butterflies. Suppose the Geniuses will devise an invention once in 10 lifetimes. The Butterflies are much dumber, only devising the same invention once in 1000 lifetimes. So, this means that the Geniuses are 100 times smarter than the Butterflies. However, the Geniuses are not very social and have only 1 friend they can learn from. The Butterflies have 10 friends, making them 10 times more social.

Now, everyone in both populations tries to obtain an invention, both by figuring it out for themselves and by learning from friends. Suppose learning from friends is difficult: if a friend has it, a learner only learns it half the time. After everyone has done their own individual learning and tried to learn from their friends, do you think the innovation will be more common among the Geniuses or the Butterflies?

Well, among the Geniuses a bit fewer than 1 out of 5 individuals (18%) will end up with the invention. Half of those Geniuses will have figured it out all by themselves. Meanwhile, 99.9% of Butterflies will have the innovation, but only 0.1% will have figured it out by themselves.

Wow.
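Those percentages are not pulled from thin air; they fall out of the setup directly. Here is a minimal sketch of how we read Henrich’s thought experiment (a simple fixed-point calculation, not necessarily his exact model): each individual invents on their own with the stated probability and, independently, picks the invention up from each friend who already has it half the time.

```python
# A simple fixed-point reading of the Geniuses-vs-Butterflies setup (our sketch,
# not necessarily Henrich's exact calculation). Each individual invents on their
# own with probability p_invent, and independently learns the invention from each
# friend who has it with probability p_learn.

def steady_state_share(p_invent: float, n_friends: int, p_learn: float = 0.5) -> float:
    """Share of the population holding the invention once the process settles."""
    q = 0.0
    for _ in range(1000):  # iterate the update many times; q converges to its fixed point
        # You end up with the invention unless you fail to invent it yourself
        # AND fail to learn it from every one of your friends.
        q = 1 - (1 - p_invent) * (1 - p_learn * q) ** n_friends
    return q

geniuses = steady_state_share(p_invent=1 / 10, n_friends=1)
butterflies = steady_state_share(p_invent=1 / 1000, n_friends=10)

print(f"Geniuses:    {geniuses:.1%}")     # ~18.2%
print(f"Butterflies: {butterflies:.1%}")  # ~99.9%
```

Of the roughly 18% of Geniuses who end up with the invention, about 0.1 / 0.18, or a bit over half, invented it themselves; for the Butterflies the self-invention share is about 0.001 / 0.999, or 0.1%, which matches the figures above.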

What if we take this thinking and apply it to the workplace? Of course you want to have smart people. But you don’t want an organization full of Geniuses. They might come up with a lot, but without being able to learn from each other easily, many of their ideas won’t have any uptake in the organization. Instead, you’d want to pair Geniuses with Butterflies—socially attuned people who are primed to adopt the successful behaviors of those around them.

If you think you don’t need Butterflies because you can just put Genius innovations into policy and procedure, you’re missing the point. Sure, some brilliant ideas are concrete, finite, and visible. Those are the ones you can identify and implement across the organization from the top down. But some of the best ideas happen on the fly in isolated, one-off situations as responses to small changes in the environment. Perhaps there’s a minor meeting with a client, and the Genius figures out a new way of describing your product that really resonates. The Genius, though, is not a teacher. It worked for them and they keep repeating the behavior, but it doesn’t occur to them to teach someone else. And they don’t pick up on other tactics to further refine their innovation.

But the Butterfly who went to the meeting with the Genius? They pick up on the successful new product description right away. They emulate it in all meetings from then on. They talk about it with their friends, most of whom are also Butterflies. Within two weeks, the new description has taken off because of the propensity for cultural learning embedded in the social Butterflies.

The lesson here is to hire both types of people. Know that it’s the Geniuses who innovate, but it’s the Butterflies who spread that innovation around. Both components are required for successfully implementing new, brilliant ideas.

Thinking For Oneself

When I was young, I thought other people could give me wisdom. Now that I’m older, I know this isn’t true.

Wisdom is earned, not given. When other people give us the answer, it belongs to them and not us. While we might achieve the outcome we desire, it comes from dependence, not insight. Instead of thinking for ourselves, we’re dependent on the insight of others.

There is nothing wrong with buying insight; it’s one way we leverage ourselves. The problem is when we assume the insight of others is our own.

Earning insight requires going below the surface. Most of us want to shy away from the details and complexity. It takes a while. It’s boring. It’s mental work.

Yet it is only by jumping into the complexity that we can really discover simplicity for ourselves.

While the abundant directives, rules, and simplicities offered by others make us feel like we’re getting smarter, it’s nothing more than the illusion of knowledge.

If wisdom were as simple to acquire as reading, we’d all be wealthy and happy. Others can help you, but they can’t do the work for you. Owning wisdom for oneself requires a discipline the promiscuous consumer of it does not share.

Perhaps an example will help. The other day a plumber came to repair a pipe. He fixed the problem in under 5 minutes. The mechanical motions are easy to replicate. In fact, while it would take me longer, the procedure was so simple that if you watched him you’d be able to do it. However, if even one thing were to deviate or change, we’d have a crisis on our hands, whereas the plumber would not. It took years of work to earn the wisdom he brought to solve the problem. Just because we could only see the simplicity he brought to the problem didn’t mean there wasn’t a deep understanding of the complexity behind it. There is no way we could acquire that insight in a few minutes by watching. We’d need to do it over and over for years, experiencing all of the things that could go wrong.

Thinking is something you have to do by yourself.

Job Interviews Don’t Work

Better hiring leads to better work environments, less turnover, and more innovation and productivity. When you understand the limitations and pitfalls of the job interview, you improve your chances of hiring the best possible person for your needs.

***

The job interview is a ritual just about every adult goes through at least once, and it is a ubiquitous part of most hiring processes. The funny thing about interviews, however, is that they take up time and resources without actually helping to select the best people to hire. Instead, they promote a homogeneous workforce where everyone thinks the same.

If you have any doubt about how much you can get from an interview, think of what’s involved for the person being interviewed. We’ve all been there. The night before, you dig out your smartest outfit, iron it, and hope your hair lies flat for once. You frantically research the company, reading every last news article based on a formulaic press release, every blog post by the CEO, and every review by a disgruntled former employee.

After a sleepless night, you trek to their office, make awkward small talk, then answer a set of predictable questions. What’s your biggest weakness? Where do you see yourself in five years? Why do you want this job? Why are you leaving your current job? You reel off the answers you prepared the night before, highlighting the best of the best. All the while, you’re reminding yourself to sit up straight, don’t bite your nails, and keep smiling.

It’s not much better on the employer’s side of the table. When you have a role to fill, you select a list of promising candidates and invite them for an interview. Then you pull together a set of standard questions to riff off, doing a little improvising as you hear their responses. At the end of it all, you make some kind of gut judgment about the person who felt right—likely the one you connected with the most in the short time you were together.

Is it any surprise that job interviews don’t work when the whole process is based on subjective feelings? They are in no way the most effective means of deciding who to hire because they maximize the role of bias and minimize the role of evaluating competency.

What is a job interview?

“In most cases, the best strategy for a job interview is to be fairly honest, because the worst thing that can happen is that you won’t get the job and will spend the rest of your life foraging for food in the wilderness and seeking shelter underneath a tree or the awning of a bowling alley that has gone out of business.”

— Lemony Snicket, Horseradish

When we say “job interviews” throughout this post, we’re talking about the type of interview that has become standard in many industries and even in universities: free-form interviews in which candidates sit in a room with one or more people from a prospective employer (often people they might end up working with) and answer unstructured questions. Such interviews tend to focus on how a candidate behaves generally, emphasizing factors like whether they arrive on time or if they researched the company in advance. While questions may ostensibly be about predicting job performance, they tend to better select for traits like charisma rather than actual competence.

Unstructured interviews can make sense for certain roles. The ability to give a good first impression and be charming matters for a salesperson. But not all roles need charm, and just because you don’t want to hang out with someone after an interview doesn’t mean they won’t be an amazing software engineer. In a small startup with a handful of employees, someone being “one of the gang” might matter because close-knit friendships are a strong motivator when work is hard and pay is bad. But that group mentality may be less important in a larger company in need of diversity.

Considering the importance of hiring and how much harm getting it wrong can cause, it makes sense for companies to study and understand the most effective interview methods. Let’s take a look at why job interviews don’t work and what we can do instead.

Why job interviews are ineffective

Discrimination and bias

Information like someone’s age, gender, race, appearance, or social class shouldn’t dictate if they get a job or not—their competence should. But that’s unfortunately not always the case. Interviewers can end up picking the people they like the most, which often means those who are most similar to them. This ultimately means a narrower range of competencies is available to the organization.

Psychologist Ron Friedman explains in The Best Place to Work: The Art and Science of Creating an Extraordinary Workplace some of the unconscious biases that can impact hiring. We tend to rate attractive people as more competent, intelligent, and qualified. We consider tall people to be better leaders, particularly when evaluating men. We view people with deep voices as more trustworthy than those with higher voices.

Implicit bias is pernicious because it’s challenging to spot the ways it influences interviews. Once an interviewer judges someone, they may ask questions that nudge the interviewee towards fitting that perception. For instance, if they perceive someone to be less intelligent, they may ask basic questions that don’t allow the candidate to display their expertise. Having confirmed their bias, the interviewer has no reason to question it or even notice it in the future.

Hiring often comes down to how much an interviewer likes a candidate as a person. This means that we can be manipulated by manufactured charm. If someone’s charisma is faked for an interview, an organization can be left dealing with the fallout for ages.

The map is not the territory

The representation of something is not the thing itself. A job interview is meant to be a quick snapshot to tell a company how a candidate would be at a job. However, it’s not a representative situation in terms of replicating how the person will perform in the actual work environment.

For instance, people can lie during job interviews. Indeed, the situation practically encourages it. While most people feel uncomfortable telling outright lies (and know they would face serious consequences later on for a serious fabrication), bending the truth is common. Ron Friedman writes, “Research suggests that outright lying generates too much psychological discomfort for people to do it very often. More common during interviews are more nuanced forms of deception which include embellishment (in which we take credit for things we haven’t done), tailoring (in which we adapt our answers to fit the job requirements), and constructing (in which we piece together elements from different experiences to provide better answers.)” An interviewer can’t know if someone is deceiving them in any of these ways. So they can’t know if they’re hearing the truth.

One reason why we think job interviews are representative is the fundamental attribution error: a cognitive bias that leads us to believe that the way people behave in one area carries over to how they will behave in other situations. We view people’s behaviors as the visible outcome of innate characteristics, and we undervalue the impact of circumstances.

Some employers report using one single detail they consider representative to make hiring decisions, such as whether a candidate sends a thank-you note after the interview or if their LinkedIn picture is a selfie. Sending a thank-you note shows manners and conscientiousness. Having a selfie on LinkedIn shows unprofessionalism. But is that really true? Can one thing carry across to every area of job performance? It’s worth debating.

Gut feelings aren’t accurate

We all like to think we can trust our intuition. The problem is that intuitive judgments tend to only work in areas where feedback is fast and cause and effect clear. Job interviews don’t fall into that category. Feedback is slow. The link between a hiring decision and a company’s success is unclear.

Overwhelmed by candidates and the pressure of choosing, interviewers may resort to making snap judgments based on limited information. And interviews introduce a lot of noise, which can dilute relevant information while leading to overconfidence. In a study entitled Belief in the Unstructured Interview: The Persistence of an Illusion, participants predicted the future GPA of a set of students. They either received biographical information about the students or both biographical information and an interview. In some of the cases, the interview responses were entirely random, meaning they shouldn’t have conveyed any genuine useful information.

Before the participants made their predictions, the researchers informed them that the strongest predictor of a student’s future GPA is their past GPA. Seeing as all participants had access to past GPA information, they should have factored it heavily into their predictions.

In the end, participants who were able to interview the students made worse predictions than those who only had access to biographical information. Why? Because the interviews introduced too much noise. They distracted participants with irrelevant information, making them forget the most significant predictive factor: past GPA. Of course, we do not have clear metrics like GPA for jobs. But this study indicates that interviews do not automatically lead to better judgments about a person.

We tend to think human gut judgments are superior, even when evidence doesn’t support this. We are quick to discard information that should shape our judgments in favor of less robust intuitions that we latch onto because they feel good. The less challenging information is to process, the better it feels. And we tend to associate good feelings with ‘rightness’.

Experience ≠ expertise in interviewing

In 1979, the University of Texas Medical School at Houston suddenly had to increase its incoming class size by 50 students due to a legal change requiring larger classes. Without time to conduct new interviews, the school admitted candidates it had already interviewed but initially rejected as unsuitable for admission. Seeing as they had made it to the interview stage, these candidates were among the best applicants; they just hadn’t previously been considered good enough to admit.

When researchers later studied the result of this unusual situation, they found that the students whom the school first rejected performed no better or worse academically than the ones they first accepted. In short, interviewing students did nothing to help select for the highest performers.

Studying the efficacy of interviews is complicated and hard to manage from an ethical standpoint. We can’t exactly give different people the same real-world job in the same conditions. We can take clues from fortuitous occurrences, like the University of Texas Medical School change in class size and the subsequent lessons learned. Without the legal change, the interviewers would never have known that the students they rejected were of equal competence to the ones they accepted. This is why building up experience in this arena is difficult. Even if someone has a lot of experience conducting interviews, it’s not straightforward to translate that into expertise. Expertise is about having a predictive model of something, not just knowing a lot about it.

Furthermore, the feedback from hiring decisions tends to be slow. An interviewer cannot know what would happen if they hired an alternate candidate. If a new hire doesn’t work out, that tends to fall on them, not the person who chose them. There are so many factors involved that it’s not terribly conducive to learning from experience.

Making interviews more effective

It’s easy to see why job interviews are so common. People want to work with people they like, so interviews allow them to scope out possible future coworkers. Candidates expect interviews, as well—wouldn’t you feel a bit peeved if a company offered you a job without the requisite “casual chat” beforehand? Going through a grueling interview can make candidates more invested in the position and likely to accept an offer. And it can be hard to imagine viable alternatives to interviews.

But it is possible to make job interviews more effective or make them the final step in the hiring process after using other techniques to gauge a potential hire’s abilities. Doing what works should take priority over what looks right or what has always been done.

Structured interviews

While unstructured interviews don’t work, structured ones can be excellent. In Thinking, Fast and Slow, Daniel Kahneman describes how he redefined the Israel Defense Forces’ interviewing process as a young psychology graduate. At the time, recruiting a new soldier involved a series of psychometric tests followed by an interview to assess their personality. Interviewers then based their decision on their intuitive sense of a candidate’s fitness for a particular role. It was very similar to the method of hiring most companies use today—and it proved to be useless.

Kahneman introduced a new interviewing style in which candidates answered a predefined series of questions that were intended to measure relevant personality traits for the role (for example, responsibility and sociability). He then asked interviewers to give candidates a score for how well they seemed to exhibit each trait based on their responses. Kahneman explained that “by focusing on standardized, factual questions I hoped to combat the halo effect, where favorable first impressions influence later judgments.” He tasked interviewers only with providing these numbers, not with making a final decision.

Although interviewers at first disliked Kahneman’s system, structured interviews proved far more effective and soon became the standard for the IDF. In general, they are often the most useful way to hire. The key is to decide in advance on a list of questions, specifically designed to test job-specific skills, then ask them to all the candidates. In a structured interview, everyone gets the same questions with the same wording, and the interviewer doesn’t improvise.

Tomas Chamorro-Premuzic writes in The Talent Delusion:

There are at least 15 different meta-analytic syntheses on the validity of job interviews published in academic research journals. These studies show that structured interviews are very useful to predict future job performance. . . . In comparison, unstructured interviews, which do not have a set of predefined rules for scoring or classifying answers and observations in a reliable and standardized manner, are considerably less accurate.

Why does it help if everyone hears the same questions? Because, as we learned previously, interviewers can make unconscious judgments about candidates, then ask questions intended to confirm their assumptions. Structured interviews help measure competency, not irrelevant factors. Ron Friedman explains this further:

It’s also worth having interviewers develop questions ahead of time so that: 1) each candidate receives the same questions, and 2) they are worded the same way. The more you do to standardize your interviews, providing the same experience to every candidate, the less influence you wield on their performance.

What, then, is an employer to do with the answers? Friedman says you must then create clear criteria for evaluating them.

Another step to help minimize your interviewing blind spots: include multiple interviewers and give them each specific criteria upon which to evaluate the candidate. Without a predefined framework for evaluating applicants—which may include relevant experience, communication skills, attention to detail—it’s hard for interviewers to know where to focus. And when this happens, fuzzy interpersonal factors hold greater weight, biasing assessments. Far better to channel interviewers’ attention in specific ways, so that the feedback they provide is precise.
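As a purely illustrative sketch (our own construction, not something from Kahneman or Friedman), here is what that advice can look like when treated as data: the same predefined questions for every candidate, each question mapped to a trait, and each interviewer submitting only numeric scores that are aggregated later. The question text, trait names, and the 1-to-5 scale are all assumptions made for the example.

```python
# A hypothetical structured-interview rubric: every candidate hears the same
# questions, each tied to one trait, and interviewers return only numbers.

QUESTIONS = {
    "responsibility": "Tell me about a deadline you were personally accountable for.",
    "communication": "Describe a time you explained a complex problem to a non-expert.",
    "attention_to_detail": "Walk me through how you check your own work before delivering it.",
}

def average_scores(per_interviewer: list[dict[str, int]]) -> dict[str, float]:
    """Average each trait's 1-5 scores across all interviewers."""
    traits = per_interviewer[0].keys()
    return {t: sum(scores[t] for scores in per_interviewer) / len(per_interviewer)
            for t in traits}

# Two interviewers independently score the same candidate on the same traits.
candidate_scores = [
    {"responsibility": 4, "communication": 3, "attention_to_detail": 5},
    {"responsibility": 4, "communication": 2, "attention_to_detail": 4},
]

print(average_scores(candidate_scores))
# {'responsibility': 4.0, 'communication': 2.5, 'attention_to_detail': 4.5}
```

The point of the structure is that the final hiring decision is made from aggregated numbers against criteria agreed in advance, not from any single interviewer’s overall impression.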

Blind auditions

One way to make job interviews more effective is to find ways to “blind” the process—to disguise key information that may lead to biased judgments. Blinded interviews focus on skills alone, not who a candidate is as a person. Orchestras offer a remarkable case study in the benefits of blinding.

In the 1970s, orchestras had a gender bias problem. A mere 5% of their members were women, on average. Orchestras knew they were missing out on potential talent, but the audition process seemed to favor men over women. Those who were carrying out auditions couldn’t sidestep their unconscious tendency to favor men.

Instead of throwing up their hands in despair and letting this inequality stand, orchestras began carrying out blind auditions. During these, candidates would play their instruments behind a screen while a panel listened and assessed their performance. The panel received no identifying information about candidates. The idea was that orchestras would be able to hire without room for bias. It took a bit of tweaking to make it work: at first, assessors could discern gender from the sound of a candidate’s shoes, so candidates were asked to remove their shoes before auditioning.

The results? By 1997, up to 25% of orchestra members were women. Today, the figure is closer to 30%.

Although this is sometimes difficult to replicate for other types of work, blind auditions can provide inspiration to other industries that could benefit from finding ways to make interviews more about a person’s abilities than their identity.

Competency-related evaluations

What’s the best way to test if someone can do a particular job well? Get them to carry out tasks that are part of the job. See if they can do what they say they can do. It’s much harder for someone to lie and mislead an interviewer during actual work than during an interview. Using competency tests for a blinded interview process is also possible—interviewers could look at depersonalized test results to make unbiased judgments.

Tomas Chamorro-Premuzic writes in The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential, “The science of personnel selection is over a hundred years old yet decision-makers still tend to play it by ear or believe in tools that have little academic rigor. . . . An important reason why talent isn’t measured more scientifically is the belief that rigorous tests are difficult and time-consuming to administer, and that subjective evaluations seem to do the job ‘just fine.’”

Competency tests are already quite common in many fields. But interviewers tend not to accord them sufficient importance. They come after an interview, or they’re considered secondary to it. A bad interview can override a good competency test. At best, interviewers accord them equal importance to interviews. Yet they should consider them far more important.

Ron Friedman writes, “Extraneous data such as a candidate’s appearance or charisma lose their influence when you can see the way an applicant actually performs. It’s also a better predictor of their future contributions because unlike traditional in-person interviews, it evaluates job-relevant criteria. Including an assignment can help you better identify the true winners in your applicant pool while simultaneously making them more invested in the position.”

Conclusion

If a company relies on traditional job interviews as its sole or main means of choosing employees, it simply won’t get the best people. And getting hiring right is paramount to the success of any organization. A driven team of people passionate about what they do can trump one with better funding and resources. The key to finding those people is using hiring techniques that truly work.

Why You Feel At Home In A Crisis

When disaster strikes, people come together. During the worst times of our lives, we can end up experiencing the best mental health and relationships with others. Here’s why that happens and how we can bring the lessons we learn with us once things get better.

***

“Humans don’t mind hardship, in fact they thrive on it; what they mind is not feeling necessary. Modern society has perfected the art of making people not feel necessary.”

— Sebastian Junger

The Social Benefits of Adversity

When World War II began to unfold in 1939, the British government feared the worst. With major cities like London and Manchester facing aerial bombardment from the German air force, leaders were sure societal breakdown was imminent. Civilians were, after all, in no way prepared for war. How would they cope with a complete change to life as they knew it? How would they respond to the nightly threat of injury or death? Would they riot, loot, experience mass-scale psychotic breaks, go on murderous rampages, or lapse into total inertia as a result of exposure to German bombing campaigns?

Richard M. Titmuss writes in Problems of Social Policy that “social distress, disorganization, and loss of morale” were expected. Experts predicted 600,000 deaths and 1.2 million injuries from the bombings. Some in the government feared three times as many psychiatric casualties as physical ones. Official reports pondered how the population would respond to “financial distress, difficulties of food distribution, breakdowns in transport, communications, gas, lighting, and water supplies.”

After all, no one had lived through anything like this. Civilians couldn’t receive training as soldiers could, so it stood to reason they would be at high risk of psychological collapse. Titmuss writes, “It seems sometimes to have been expected almost as a matter of course that widespread neurosis and panic would ensue.” The government contemplated sending a portion of soldiers into cities, rather than to the front lines, to maintain order.

The bombing campaign, known as the Blitz, was brutal. Over 60,000 civilians died, about half of them in London. The total cost of property damage was about £56 billion in today’s money, with almost a third of the houses in London becoming uninhabitable.

Yet despite all this, the anticipated social and psychological breakdown never happened. The death toll was also much lower than predicted, in part due to stringent adherence to safety instructions. In fact, the Blitz achieved the opposite of what the attackers intended: the British people proved more resilient than anyone predicted. Morale remained high, and there didn’t appear to be an increase in mental health problems. The suicide rate may have decreased. Some people with longstanding mental health issues found themselves feeling better.

People in British cities came together like never before to organize themselves at the community level. The sense of collective purpose this created led many to experience better mental health than they’d ever had. One indicator of this is that children who remained with their parents fared better than those evacuated to the safety of the countryside. The stress of the aerial bombardment didn’t override the benefits of staying in their city communities.

The social unity the British people reported during World War II lasted in the decades after. We can see it in the political choices the wartime generation made—the politicians they voted into power and the policies they voted for. By some accounts, the social unity fostered by the Blitz was the direct cause of the strong welfare state that emerged after the war and the creation of Britain’s free national healthcare system. Only when the wartime generation started to pass away did that sentiment fade.

We Know How to Adapt to Adversity

We may be ashamed to admit it, but human nature is more at home in a crisis.

Disasters force us to band together and often strip away our differences. The effects of World War II on the British people were far from unique. The Allied bombing of Germany also strengthened community spirit. In fact, cities that suffered the least damage saw the worst psychological consequences. Similar improvements in morale occurred during other wars, riots, and after September 11, 2001.

When normality breaks down, we experience the sort of conditions we evolved to handle. Our early ancestors lived with a great deal of pain and suffering. The harsh environments they faced necessitated collaboration and sharing. Groups of people who could work together were most likely to survive. Because of this, evolution selected for altruism.

Among modern foraging tribal groups, the punishments for freeloading are severe. Execution is not uncommon. As severe as this may seem, allowing selfishness to flourish endangers the whole group. It stands to reason that the same was true for our ancestors living in much the same conditions. Being challenged as a group by difficult changes in our environment leads to incredible community cohesion.

Many of the conditions we need to flourish both as individuals and as a species emerge during disasters. Modern life otherwise fails to provide them. Times of crisis are closer to the environments our ancestors evolved in. Of course, this does not mean that disasters are good. By their nature, they produce immense suffering. But understanding their positive flip side can help us to both weather them better and bring important lessons into the aftermath.

Embracing Struggle

Good times don’t actually produce good societies.

In Tribe: On Homecoming and Belonging, Sebastian Junger argues that modern society robs us of the solidarity we need to thrive. Unfortunately, he writes, “The beauty and the tragedy of the modern world is that it eliminates many situations that require people to demonstrate commitment to the collective good.” As life becomes safer, it is easier for us to live detached lives. We can meet all of our needs in relative isolation, which prevents us from building a strong connection to a common purpose. In our normal day to day, we rarely need to show courage, turn to our communities for help, or make sacrifices for the sake of others.

Furthermore, our affluence doesn’t seem to make us happier. Junger writes that “as affluence and urbanization rise in a society, rates of depression and suicide tend to go up, not down. Rather than buffering people from clinical depression, increased wealth in society seems to foster it.” We often think of wealth as a buffer from pain, but beyond a certain point, wealth can actually make us more fragile.

The unexpected worsening of mental health in modern society has much to do with our lack of community—which might explain why times of disaster, when everyone faces the breakdown of normal life, can counterintuitively improve mental health, despite the other negative consequences. When situations requiring sacrifice do reappear and we must work together to survive, it alleviates our disconnection from each other. Disaster increases our reliance on our communities.

In a state of chaos, our way of relating to each other changes. Junger explains that “self-interest gets subsumed into group interest because there is no survival outside of group survival, and that creates a social bond that many people sorely miss.” Helping each other survive builds ties stronger than anything we form during normal conditions. After a natural disaster, residents of a city may feel like one big community for the first time. United by the need to get their lives back together, individual differences melt away for a while.

Junger writes particularly of one such instance:

The one thing that might be said for societal collapse is that—for a while at least—everyone is equal. In 1915 an earthquake killed 30,000 people in Avezzano, Italy, in less than a minute. The worst-hit areas had a mortality rate of 96 percent. The rich were killed along with the poor, and virtually everyone who survived was immediately thrust into the most basic struggle for survival: they needed food, they needed water, they needed shelter, and they needed to rescue the living and bury the dead. In that sense, plate tectonics under the town of Avezzano managed to recreate the communal conditions of our evolutionary past quite well.

Disasters bring out the best in us. Junger goes on to say that “communities that have been devastated by natural or manmade disasters almost never lapse into chaos and disorder; if anything they become more just, more egalitarian, and more deliberately fair to individuals.” When catastrophes end, despite their immense negatives, people report missing how it felt to unite for a common cause. Junger explains that “what people miss presumably isn’t danger or loss but the unity that these things often engender.” The loss of that unification can be, in its own way, traumatic.

Don’t Be Afraid of Disaster

So what can we learn from Tribe?

The first lesson is that, in the face of disaster, we should not expect the worst from other people. Yes, instances of selfishness will happen no matter what. Many people will look out for themselves at the expense of others, not least the ultra-wealthy who are unlikely to be affected in a meaningful way and so will not share in the same experience. But on the whole, history has shown that the breakdown of order people expect is rare. Instead, we find new ways to continue and to cope.

During World War II, there were fears that British people would resent the appearance of over two million American servicemen in their country. After all, it meant more competition for scarce resources. Instead, the “friendly invasion” met with a near-unanimous warm welcome. British people shared what they had without bitterness. They understood that the Americans were far from home and missing their loved ones, so they did all they could to help. In a crisis, we can default to expecting the best from each other.

Second, we can achieve a great deal by organizing on the community level when disaster strikes. Junger writes, “There are many costs to modern society, starting with its toll on the global ecosystem and working one’s way down to its toll on the human psyche, but the most dangerous may be to community. If the human race is under threat in some way that we don’t yet understand, it will probably be at a community level that we either solve the problem or fail to.” When normal life is impossible, being able to volunteer help is an important means of retaining a sense of control, even if it imposes additional demands. One explanation for the high morale during the Blitz is that everyone could be involved in the war effort, whether they were fostering a child, growing cabbages in their garden, or collecting scrap metal to make planes.

For our third and final lesson, we should not forget what we learn about the importance of banding together. What’s more, we must do all we can to let that knowledge inform future decisions. It is possible for disasters to spark meaningful changes in the way we live. We should continue to emphasize community and prioritize stronger relationships. We can do this by building strong reminders of what happened and how it impacted people. We can strive to educate future generations, teaching them why unity matters.

(In addition to Tribe, many of the details of this post come from Disasters and Mental Health: Therapeutic Principles Drawn from Disaster Studies by Charles E. Fritz.)