Category: Psychology

The Pygmalion Effect: Proving Them Right

The Pygmalion Effect is a powerful secret weapon. Without even realizing it, we can nudge others towards success. In this article, discover how expectations can influence performance for better or worse.

How Expectations Influence Performance

Many people believe that their pets or children are of unusual intelligence or can understand everything they say. Some people have stories of abnormal feats. In the late 19th century, one man made such claims about his horse, and appeared to have the evidence to back them up. William Von Osten was a teacher and horse trainer who believed that animals could learn to read or count. His initial attempts with dogs and a bear were unsuccessful, but when he began working with an unusual horse, he changed our understanding of psychology. Known as Clever Hans, the animal could answer questions, with 90% accuracy, by tapping his hoof. He could add, subtract, multiply, divide, and tell the time and the date.

Clever Hans could also read and understand questions written or asked in German. Crowds flocked to see the horse, and the scientific community soon grew interested. Researchers studied the horse, looking for signs of trickery. Yet they found none. The horse could answer questions asked by anyone, even if Von Osten was absent. This indicated that no signaling was at play. For a while, the world believed the horse was truly clever.

Then psychologist Oskar Pfungst turned his attention to Clever Hans. Assisted by a team of researchers, he uncovered two anomalies. When blinkered or behind a screen, the horse could not answer questions. Likewise, he could respond only if the questioner knew the answer. From these observations, Pfungst deduced that Clever Hans was not making any mental calculations. Nor did he understand numbers or language in the human sense. Although Von Osten had intended no trickery, the act was false.

Instead, Clever Hans had learned to detect subtle, yet consistent nonverbal cues. When someone asked a question, Clever Hans responded to their body language with a degree of accuracy many poker players would envy. For example, when someone asked Clever Hans to make a calculation, he would begin tapping his hoof. Once he reached the correct answer, the questioner would show involuntary signs. Pfungst found that many people tilted their head at this point. Clever Hans would recognize this behavior and stop. When blinkered or when the questioner did not know the answer, the horse didn’t have a clue. When he couldn’t see the cues, he had no answer.

The Pygmalion Effect

Von Osten died in 1909, and Clever Hans disappeared from the record. But his legacy lives on in a particular branch of psychology.

The case of Clever Hans is of less interest than the research it went on to provoke. Psychologists working in the following decades began to study how the expectations of others affect us. If Clever Hans could answer a question only when the questioner expected him to and knew the answer themselves, could the same dynamic occur elsewhere?

Could we be, at times, responding to subtle cues? Decades of research have provided consistent, robust evidence that the answer is yes. It comes down to the concepts of the self-fulfilling prophecy and the Pygmalion effect.

The Pygmalion effect is a psychological phenomenon wherein high expectations lead to improved performance in a given area. Its name comes from the story of Pygmalion, a mythical Greek sculptor. Pygmalion carved a statue of a woman and then became enamored with it. Unable to love a human, Pygmalion appealed to Aphrodite, the goddess of love. She took pity and brought the statue to life. The couple married and went on to have a daughter, Paphos.

False Beliefs Come True Over Time

In the same way Pygmalion’s fixation on the statue brought it to life, our focus on a belief or assumption can do the same. The flipside is the Golem effect, wherein low expectations lead to decreased performance. Both effects come under the category of self-fulfilling prophecies. Whether the expectation comes from us or others, the effect manifests in the same way.

The Pygmalion effect has profound ramifications in schools and organizations and with regard to social class and stereotypes. By some estimations, it is the result of our brains’ poorly distinguishing between perception and expectation. Although many people purport to want to prove their critics wrong, we often merely end up proving our supporters right.

Understanding the Pygmalion effect is a powerful way to positively affect those around us, from our children and friends to employees and leaders. If we don’t take into account the ramifications of our expectations, we may miss out on the dramatic benefits of holding high standards.

The concept of a self-fulfilling prophecy is attributed to sociologist Robert K. Merton. In 1948, Merton published the first paper on the topic. In it, he described the phenomenon as a false belief that becomes true over time. Once this occurs, it creates a feedback loop. We assume we were always correct because it seems so in hindsight. Merton described a self-fulfilling prophecy as self-hypnosis through our own propaganda.

As with many psychological concepts, people had a vague awareness of its existence long before research confirmed anything. Renowned orator and theologian Jacques Benigne Bossuet declared in the 17th century that “The greatest weakness of all weaknesses is to fear too much to appear weak.”

Even Sigmund Freud was aware of self-fulfilling prophecies. In A Childhood Memory of Goethe, Freud wrote: “If a man has been his mother’s undisputed darling he retains throughout life the triumphant feeling, the confidence in success, which not seldom brings actual success with it.”

The IQ of Students

Research by Robert Rosenthal and Lenore Jacobson examined the influence of teachers’ expectations on students’ performance. Their subsequent paper is one of the most cited and discussed psychological studies ever conducted.

Rosenthal and Jacobson began by testing the IQ of elementary school students. Teachers were told that the IQ test showed around one-fifth of their students to be unusually intelligent. For ethical reasons, they did not label an alternate group as unintelligent and instead used unlabeled classmates as the control group. It will doubtless come as no surprise that the “gifted” students were chosen at random. They should not have had a significant statistical advantage over their peers. As the study period ended, all students had their IQs retested. Both groups showed an improvement. Yet those who were described as intelligent experienced much greater gains in their IQ points. Rosenthal and Jacobson attributed this result to the Pygmalion effect. Teachers paid more attention to “gifted” students, offering more support and encouragement than they would otherwise. Picked at random, those children ended up excelling. Sadly, no follow-up studies were ever conducted, so we do not know the long-term impact on the children involved.

Prior to studying the effect on children, Rosenthal performed preliminary research on animals. Students were given rats from two groups, one described as “maze dull” and the other as “maze bright.” Researchers claimed that the former group could not learn to properly negotiate a maze, but the latter could with ease. As you might expect, the groups of rats were the same. Like the gifted and nongifted children, they were chosen at random. Yet by the time the study finished, the “maze-bright” rats appeared to have learned faster. The students considered them tamer and more pleasant to work with than the “maze-dull” rats.

In general, authority figures have the power to influence how the people subordinate to them behave by holding high expectations. Whether consciously or not, leaders facilitate changes in behavior, such as by giving people more responsibility or setting stretch goals. Like the subtle cues that allowed Clever Hans to make calculations, these small changes in treatment can promote learning and growth. If a leader thinks an employee is competent, they will treat them as such. The employee then gets more opportunities to develop their competence, and their performance improves in a positive feedback loop. This works both ways. When we expect an authority figure to be competent or successful, we tend to be attentive and supportive. In the process, we bolster their performance, too. Students who act interested in lectures create interesting lecturers.

In Pygmalion in Management, J. Sterling Livingston writes,

Some managers always treat their subordinates in a way that leads to superior performance. But most … unintentionally treat their subordinates in a way that leads to lower performance than they are capable of achieving. The way managers treat their subordinates is subtly influenced by what they expect of them. If managers’ expectations are high, productivity is likely to be excellent. If their expectations are low, productivity is likely to be poor. It is as though there were a law that caused subordinates’ performance to rise or fall to meet managers’ expectations.

The Pygmalion effect shows us that our reality is negotiable and can be manipulated by others — on purpose or by accident. What we achieve, how we think, how we act, and how we perceive our capabilities can be influenced by the expectations of those around us. Those expectations may be the result of biased or irrational thinking, but they have the power to affect us and change what happens. While cognitive biases distort only what we perceive, self-fulfilling prophecies alter what happens.

Of course, the Pygmalion effect works only when we are physically capable of achieving what is expected of us. After Rosenthal and Jacobson published their initial research, many people were entranced by the implication that we are all capable of more than we think. Although that can be true, we have no indication that any of us can do anything if someone believes we can. Instead, the Pygmalion effect seems to involve us leveraging our full capabilities and avoiding the obstacles created by low expectations.

Clever Hans truly was an intelligent horse, but he was smart because he could read almost imperceptible nonverbal cues, not because he could do math. So, he did have unusual capabilities, as shown by the fact that few other animals have done what he did.

We can’t do anything just because someone expects us to. Overly high expectations can also be stressful. When someone sets the bar too high, we can get discouraged and not even bother trying. Stretch goals and high expectations are beneficial, up to the point of diminishing returns. Research by McClelland and Atkinson indicates that the Pygmalion effect drops off if we see our chance of success as being less than 50%. If an endeavor seems either certain or completely uncertain, the Pygmalion effect does not hold. When we are stretched but confident, high expectations can help us achieve more.
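Atkinson’s classic risk-taking model captures this sweet spot: the tendency to take on a task is modeled as motive strength times the perceived probability of success times the incentive value of succeeding, with incentive value treated as one minus that probability. Here is a minimal sketch (the motive constant and the sampled probabilities are arbitrary illustrations, not values from the research):

```python
# Atkinson-style achievement motivation: T = M * P * (1 - P),
# where P is the perceived probability of success and M is a
# motive-strength constant (set to 1.0 here purely for illustration).

def approach_tendency(p_success, motive=1.0):
    """Inverted-U motivation: peaks when success feels like a coin flip."""
    return motive * p_success * (1 - p_success)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"P(success)={p:.1f} -> tendency={approach_tendency(p):.2f}")
```

The product peaks at a 50% chance of success and falls off symmetrically on either side, which matches the observation that both sure things and hopeless causes fail to motivate.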

Check Your Assumptions

In Self-Fulfilling Prophecy: A Practical Guide to Its Use in Education, Robert T. Tauber describes an exercise in which people are asked to list their assumptions about people with certain descriptions. These included a cheerleader, “a minority woman with four kids at the market using food stamps,” and a “person standing outside smoking on a cold February day.” An anonymous survey of undergraduate students revealed mostly negative assumptions. Tauber asks the reader to consider how being exposed to these types of assumptions might affect someone’s day-to-day life.

The expectations people have of us affect us in countless subtle ways each day. Although we rarely notice it (unless we are on the receiving end of overt racism, sexism, and other forms of bias), those expectations dictate the opportunities we are offered, how we are spoken to, and the praise and criticism we receive. Individually, these knocks and nudges have minimal impact. In the long run, they might dictate whether we succeed or fail or fall somewhere on the spectrum in between.

The important point to note about the Pygmalion effect is that it creates a literal change in what occurs. There is nothing mystical about the effect. When we expect someone to perform well in any capacity, we treat them in a different way. Teachers tend to show more positive body language towards students they expect to be gifted. They may teach them more challenging material, offer more chances to ask questions, and provide personalized feedback. As Carl Sagan declared, “The visions we offer our children shape the future. It matters what those visions are. Often they become self-fulfilling prophecies. Dreams are maps.”

A perfect illustration is the case of James Sweeney and George Johnson, as described in Pygmalion in Management. Sweeney was a teacher at Tulane University, where Johnson worked as a porter. Aware of the Pygmalion effect, Sweeney had a hunch that he could teach anyone to be a competent computer operator. He began his experiment, offering Johnson lessons each afternoon. Other university staff were dubious, especially as Johnson appeared to have a low IQ. But the Pygmalion effect won out, and the former porter eventually became responsible for training new computer operators.

The Pygmalion effect is a powerful secret weapon. Who wouldn’t want to help their children get smarter, help employees and leaders be more competent, and generally push others to do well? That’s possible if we raise our standards and see others in the best possible light. It is not necessary to actively attempt to intervene. Without even realizing it, we can nudge others towards success. If that sounds too good to be true, remember that the effect holds up for everything from rats to CEOs.


Kristin Dombek: The Selfishness of Others

I’ll bet you think this article is about you.

“We all know selfishness when we see it,” writes essayist Kristin Dombek, opening The Selfishness of Others: An Essay on the Fear of Narcissism. She’s right. We see it everywhere, from TV to family and lovers. Playing in the tension between pathology and common selfishness, her book offers a thought-provoking look at how narcissism became a cultural phenomenon and a repository for our fears.

What is wrong with the narcissist, she asks?

This is harder to know. If you see the smile on the face of a murderer, you must run. But if you are unlucky enough to love someone who seems suddenly so into himself that he doesn’t care who he hurts, someone who turns from warm to gone when he doesn’t need you, so self-adoring or wounded he meets criticism with violence or icy rage, who turns into another person in front of your eyes, or simply turns away when he said he’d be there—if you love someone who seems to have the particular twenty-first-century selfishness in some more subtle, or worse, invisible way, you will likely go to the internet for help.

The internet, of course, offers answers even to the wrong questions.

You’ll read, in that sizable portion of the self-help internet we might call, awkwardly, the narcisphere, a story that can change the way you see everything if you start believing in it, giving you the uncanny but slightly exciting sensation that you’re living in a movie. It’s familiar, this movie, as if you’ve seen it before, and it’s a creepy one, but you have the most important role in the script. You’re the hero.

The basic script plays out like this.

At first, the narcissist is extraordinarily charming, even kind and sweet. Then, after a while, he seems full of himself. It could be a “he” or a “she,” but let’s stick with “he.” That’s what you start to think, when you know someone like this: he’s full of himself. But the narcissist is empty.

Normal, healthy people are full of self, a kind of substance like a soul or personhood that, if you have it, emanates warmly from inside of you toward the outside of you. No one knows what it is, but everyone agrees that narcissists do not have it. Disturbingly, however, they are often better than anyone else at seeming to have it. Because what they have inside is empty space, they have had to make a study of the selves of others in order to invent something that looks and sounds like one. Narcissists are imitators par excellence. The murderer plagiarized most of his manifesto, obviously and badly, but often narcissists are so good at imitating that you won’t even notice. And they do not copy the small, boring parts of selves. They take what they think are the biggest, most impressive parts of other selves, and devise a hologram of self that seems superpowered. Let’s call it “selfiness,” this simulacrum of a superpowered self. Sometimes they seem crazy or are really dull, but often, perhaps because they have had to try harder than most to make it, the selfiness they’ve come up with is qualitatively better, when you first encounter it, than the ordinary, naturally occurring selves of normal, healthy people.

[…]

Because for the narcissist, this appreciation of you is entirely contingent on the idea that you will help him to maintain his selfiness. If you do not, or if you are near him when someone or something does not, then God help you. When that picture shatters, his hurt and his rage will be unmatched in its heat or, more often, its coldness. He will unfriend you, stop following you, stop returning your emails, stop talking to you completely. He will cheat on you without seeming to think it’s a big deal, or break up with you, when he has said he’d be with you forever. He will fire you casually and without notice. Whatever hurts most, he will do it. Whatever you need the most, he will withhold it. He cannot feel other people’s feelings, but he is uncannily good at figuring out how to demolish yours.

[…]

It isn’t that the narcissist is just not a good person; she’s like a caricature of what we mean by “not a good person.” She’s not just bad; she’s a living, breathing lesson in what badness is.

Immanuel Kant offered a formulation for how to do the right thing: ask yourself, if everyone acted this way, would the world be a better place? Good people, we tend to believe, treat others as ends in themselves, not as means. Narcissists, along with psychopaths, do the opposite. For them, people are the means toward other ends. “If everyone were to follow suit,” Dombek writes, “the world would go straight to hell.”

The realization that the narcissist is not so much selfish as lacking a self changes everything. Suddenly you can see them for what they are: puppets or clowns. While they may look human, they are not.

So what should you do when you are confronted with a narcissist?

It seems no matter what you answer, you’ll be haunted forever. With equal certainty the internet offers two pieces of common advice: love them and expect nothing and hope that they change, or run as fast and as far as you can.

If the conventional wisdom that narcissism is becoming more and more common is indeed true, today’s prevailing advice doesn’t scale.

Kant’s advice no longer holds. But that is not the worst of it. Running is an act of the very same coldness described by the diagnosis. What if the only way to escape a narcissist is to act like one yourself?

The question of the selfishness of others, though, leads quickly to the very difficult question of how we know things about others at all, and the mind-knotting question of how we know things at all.

Dombek goes on to explore provocative questions about ourselves: most of us can be put in environments where we display situational narcissism; why is having a boyfriend or a boss sometimes like having a villain; and why do the narcissistic descriptions of others (“in moments you quietly bury deep inside you”) remind us of ourselves?

 

Moving the Finish Line: The Goal Gradient Hypothesis

Imagine a runner in an elite middle-distance race. He’s competing in the 1600-meter run.

The first two laps he runs at a steady but hard pace, trying to keep himself consistently near the head, or at least the middle, of the pack, hoping not to fall too far behind while also conserving energy for the whole race.

About 800 meters in, he feels himself start to fatigue and slow. At 1000 meters, he feels himself consciously expending less energy. At 1200, he’s convinced that he didn’t train enough.

Now watch him approach the last 100 meters, the “mad dash” for the finish. He’s been running what would be an all-out sprint to us mortals for 1500 meters, and yet what happens now, as he feels himself neck and neck with his competitors, the finish line in sight?

He speeds up. That energy drag is done. The goal is right there, and all he needs is one last push. So he pushes.

This is called the Goal Gradient Effect, or more precisely, the Goal Gradient Hypothesis. Its effect on biological creatures is not just a feeling, but a real and measurable thing.

***

The first person to try explaining the goal gradient hypothesis was an early behavioural psychologist named Clark L. Hull.

Hull was a pretty hardcore “behaviourist”: he thought human behaviour, like that of other animals, could eventually be reduced to mathematical prediction based on rewards and conditioning. As strange as this sounds now, he even devised a mathematical formula for behaviour, expressing “reaction potential” as a product of factors such as habit strength and drive.

Some of his ideas eventually came to be seen as extremely limiting, Procrustean-bed-type models of human behavior, but the Goal Gradient Hypothesis was replicated many times over the years.

Hull himself wrote papers with titles like The Goal-Gradient Hypothesis and Maze Learning to explore the effect of the idea in rats. As Hull put it, “...animals in traversing a maze will move at a progressively more rapid pace as the goal is approached.” Just like the runner above.

Most of Hull’s work focused on animals rather than humans, showing quite clearly that, when approaching a reward, the animals did seem to speed up as the goal neared, enticed by the end of the maze. The idea was, however, resurrected in the human realm in 2006 with a paper entitled The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention.

The paper examined consumer behaviour in the “goal gradient” sense and found that it wasn’t just rats that felt the tug of the “end of the race” — we do too. Examining a few different measurable areas of human behaviour, the researchers found that consumers would work harder to earn incentives as the goal came into sight, and that after the reward was earned, they’d slow down their efforts:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of post-reward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. To the best of our knowledge, this article is the first to demonstrate unequivocal, systematic behavioural goal gradients in the context of the human psychology of rewards.

Fascinating.
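As a toy illustration of the pattern the researchers describe (every number here is invented; the goal size and purchase gaps are assumptions for the sketch, not data from the paper), imagine a café loyalty card where the gap between purchases shrinks as the free coffee nears:

```python
# Illustrative goal-gradient model: effort rises as the goal nears.
# Days between purchases shrink linearly with progress toward a
# 10-stamp reward; all parameters are invented for illustration.

GOAL = 10           # stamps needed for a free coffee
BASE_GAP = 5.0      # days between purchases when the card is empty
MIN_GAP = 1.0       # days between purchases just before the reward

def days_until_next_purchase(stamps):
    """Gap shrinks as stamps approach the goal (the goal gradient)."""
    progress = stamps / GOAL
    return BASE_GAP - (BASE_GAP - MIN_GAP) * progress

gaps = [days_until_next_purchase(s) for s in range(GOAL)]
print(gaps)  # steadily decreasing: acceleration toward the goal

# After the reward, the stamp count resets to zero and the gap jumps
# back to BASE_GAP ("post-reward resetting" in the paper's terms).
print(days_until_next_purchase(0))
```

The shrinking gap is the goal gradient; resetting the stamp count after the reward reproduces the “post-reward resetting” the paper reports.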

***

If we’re to take the idea seriously, the Goal Gradient Hypothesis has some interesting implications for leaders and decision-makers.

The first and most important is probably that incentive structures should take the idea into account. This is a fairly intuitive (but often unrecognized) idea: far-away rewards are much less motivating than near-term ones. Given the chance to earn $1,000 at the end of this month, and each month thereafter, or $12,000 at the end of the year, which would you be more likely to work hard for?

What if I pushed it back even more but gave you some “interest” to compensate: would you work harder for the potential to earn $90,000 five years from now, or to earn $1,000 this month, followed by $1,000 the following month, and so on, every single month during that five-year period?
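One hedged way to make this intuition concrete is simple delay discounting: future rewards feel smaller the further away they are. The sketch below uses an arbitrary 2% monthly discount rate, purely for illustration; the point is the comparison, not the specific numbers.

```python
# Compare a $90,000 reward after 5 years with $1,000 paid monthly
# for 5 years ($60,000 nominal). Subjective value is modeled with
# simple exponential delay discounting; the 2% monthly discount
# rate is an arbitrary illustrative assumption.

MONTHLY_RATE = 0.02  # assumed subjective discount rate per month

def discounted(amount, months_away, rate=MONTHLY_RATE):
    """Present subjective value of a reward `months_away` months out."""
    return amount / (1 + rate) ** months_away

lump_sum = discounted(90_000, 60)
monthly_stream = sum(discounted(1_000, m) for m in range(1, 61))

print("Nominal totals: lump $90,000 vs stream $60,000")
print(f"Subjective value of lump sum:       ${lump_sum:,.0f}")
print(f"Subjective value of monthly stream: ${monthly_stream:,.0f}")
```

Under these assumed numbers, the smaller-but-sooner stream carries more subjective weight than the larger lump sum, which is the goal-gradient logic applied to pay packets.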

Companies like Nucor take the idea seriously: They pay bonuses to lower-level employees based on monthly production, not letting it wait until the end of the year. Essentially, the end of the maze happens every 30 days rather than once per year. The time between doing the work and the reward is shortened.

The other takeaway concerns consumer behaviour, as referenced in the marketing paper. If you’re offering rewards for a specific action from your customer, do you reward them sooner, or later?

The answer is almost always going to be “sooner”. In fact, the effect may be strong enough that you can get away with fewer total rewards by increasing their velocity.

Lastly, we might be able to harness the Hypothesis in our personal lives.

Let’s say we want to start reading more. Do we set a goal to read 52 books this year and hold ourselves accountable, or to read 1 book a week? What about 25 pages per day?

Not only does moving the finish line closer tend to increase our motivation, but we also repeatedly prove to ourselves that we’re capable of accomplishing our goals. This is classic behavioural psychology: instant rewards rather than delayed ones, even if the rewards are only psychological. It also forces us to avoid procrastination: think of leaving 35 books to be read in the last two months of the year.
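The arithmetic behind breaking the goal down is straightforward (the average book length is an assumption for illustration):

```python
# Breaking an annual reading goal into daily chunks.
# The 175-page average book length is an assumption for illustration.

PAGES_PER_DAY = 25
AVG_BOOK_LENGTH = 175   # assumed average pages per book

pages_per_year = PAGES_PER_DAY * 365
books_per_year = pages_per_year / AVG_BOOK_LENGTH

print(pages_per_year)            # 9125 pages over the year
print(round(books_per_year, 1))  # roughly the 52-book annual goal
```

A goal you can hit every single day generates 365 small finish lines instead of one distant one.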

Those three seem like useful lessons, but here’s a challenge: Try synthesizing a new rule or idea of your own, combining the Goal Gradient Effect with at least one other psychological principle, and start testing it out in your personal life or in your organization. Don’t let useful nuggets sit around; instead, start eating the broccoli.

The Fundamental Attribution Error: Why Predicting Behavior is so Hard


“Psychologists refer to the inappropriate use of dispositional
explanation as the fundamental attribution error, that is,
explaining situation-induced behavior as caused by
enduring character traits of the agent.”
— Jon Elster

***

The problem with any concept of “character” driving behavior is that “character” is pretty hard to pin down. We call someone “moral” or “honest”; we call them “courageous” or “naive” or any number of other names. The implicit connotation is that someone “honest” in one area will be “honest” in most others, or that someone “moral” in one situation is going to be “moral” elsewhere.

Old-time folk psychology supports the notion, of course. As Jon Elster points out in his wonderful book Explaining Social Behavior, folk wisdom would have us believe that much of this “predicting and understanding behavior” thing is pretty darn easy! Simply ascertain character, and use that as a basis to predict or explain action.

People are often assumed to have personality traits (introvert, timid, etc.) as well as virtues (honesty, courage, etc.) or vices (the seven deadly sins, etc.). In folk psychology, these features are assumed to be stable over time and across situations. Proverbs in all languages testify to this assumption. “Who tells one lie will tell a hundred.” “Who lies also steals.” “Who steals an egg will steal an ox.” “Who keeps faith in small matters, does so in large ones.” “Who is caught red-handed once will always be distrusted.” If folk psychology is right, predicting and explaining behavior should be easy.

A single action will reveal the underlying trait or disposition and allow us to predict behavior on an indefinite number of other occasions when the disposition could manifest itself. The procedure is not tautological, as it would be if we took cheating on an exam as evidence of dishonesty and then used the trait of dishonesty to explain the cheating. Instead, it amounts to using cheating on an exam as evidence for a trait (dishonesty) that will also cause the person to be unfaithful to a spouse. If one accepts the more extreme folk theory that all virtues go together, the cheating might also be used to predict cowardice in battle or excessive drinking. 

This is a very natural and tempting way to approach the understanding of people. We like to think of actions that “speak volumes” about others’ character, thus using that as a basis to predict or understand their behavior in other realms.

For example, let’s say you were interviewing a financial advisor. He shows up on time, in a nice suit, and buys lunch. He says all the right words. Will he handle your money correctly?

Almost all of us would be led to believe he would, reasoning that his sharp appearance, timeliness, and generosity point towards his “good character”.

But what the study of history shows us is that appearances are flawed, and behavior in one context often does not correlate with behavior in other contexts. Judging character becomes complex when we appreciate the situational nature of our actions. The U.S. President Lyndon Johnson was an arrogant bully and a liar who stole an election when he was young. He also fought like hell to pass the Civil Rights Act, something almost no other politician could have done.

Henry Ford standardized and streamlined the modern automobile and made it affordable to the masses, while paying “better than fair” wages to his employees and generally treating them well and with respect, something many “Titans” of business had trouble with in his day. He was also a notorious anti-Semite! If it’s true that “He who is moral in one respect is also moral in all respects,” then what are we to make of this?

Jon Elster has some other wonderful examples coming from the world of music, regarding impulsivity versus discipline:

The jazz musician Charlie Parker was characterized by a doctor who knew him as “a man living from moment to moment. A man living for the pleasure principle, music, food, sex, drugs, kicks, his personality arrested at an infantile level.” Another great jazz musician, Django Reinhardt, had an even more extreme present-oriented attitude in his daily life, never saving any of his substantial earnings, but spending them on whims or on expensive cars, which he quickly proceeded to crash. In many ways he was the incarnation of the stereotype of “the Gypsy.” Yet you do not become a musician of the caliber of Parker and Reinhardt if you live in the moment in all respects. Proficiency takes years of utter dedication and concentration. In Reinhardt’s case, this was dramatically brought out when he damaged his left hand severely in a fire and retrained himself so that he could achieve more with two fingers than anyone else with four. If these two musicians had been impulsive and carefree across the board — if their “personality” had been consistently “infantile” — they could never have become such consummate artists.

Once we realize this truth, it seems obvious. We begin seeing it everywhere. Dan Ariely wrote a book about situational dishonesty and cheating which we have written about before. Judith Rich Harris based her theory of child development on the idea that children do not behave the same elsewhere as they do at home, misleading parents into thinking they were molding their children. Good interviewing and hiring is a notoriously difficult problem because we are consistently misled into thinking that what we learn in the interview process is representative of the interviewee’s general competence. Books have been written about the Halo Effect, a similar idea that good behavior in one area creates a “halo” around all behavior.

The reason we see this everywhere is because it’s how the world works!

This mistake is called the Fundamental Attribution Error: explaining situation-induced behavior by enduring character traits, and assuming that behavior in one context carries over with any consistency into other areas.

Studying the error leads us to conclude that we have a natural tendency to:

A. Over-rate the influence of a general “character,” and
B. Under-rate the power of the situation, and its direct incentives, to compel a variety of behavior.

Elster describes a social psychology experiment that effectively demonstrates how quickly any thought of “morality” can be lost in the right situation:

In another experiment, theology students were told to prepare themselves to give a brief talk in a nearby building. One-half were told to build the talk around the Good Samaritan parable(!), whereas the others were given a more neutral topic. One group was told to hurry since the people in the other building were waiting for them, whereas another was told that they had plenty of time. On their way to the other building, subjects came upon a man slumping in the doorway, apparently in distress. Among the students who were told they were late, only 10 percent offered assistance; in the other group, 63 percent did so. The group that had been told to prepare a talk on the Good Samaritan was not more likely to behave as one. Nor was the behavior of the students correlated with answers to a questionnaire intended to measure whether their interest in religion was due to the desire for personal salvation or to a desire to help others. The situational factor — being hurried or not — had much greater explanatory power than any dispositional factor.

So with a direct incentive in front of them — not wanting to be late when people were waiting for them, which could cause shame — the idea of being a Good Samaritan was thrown right out the window! So much for good character.

What we need to appreciate is that, in the words of Elster, “Behavior is often no more stable than the situations that shape it.” A shy young boy on the playground might be the most outgoing and aggressive boy in his group of friends. A moral authority in the realm of a religious institution might well cheat on their taxes. A woman who treats her friends poorly might treat her family with reverence and care.

We can’t throw the baby out with the bathwater, of course. Elster refers to contingent response tendencies that would carry from situation to situation, but they tend to be specific rather than general. If we break down character into specific interactions between person and types of situations, we can understand things a little more accurately.

Instead of calling someone a “liar,” we might understand that they lie on their taxes but are honest with their spouse. Instead of calling someone a “hard worker,” we might come to understand that they drive hard in work situations, but simply cannot be bothered to work around the house. And so on. We should pay attention to the interplay between the situation, the incentives, and the nature of the person, rather than just assuming that a broad character trait applies in all situations.

This carries two corollaries:

A. As we learn to think more accurately, we get one step closer to understanding human nature as it really is. We can better understand the people with whom we coexist.

B. We might better understand ourselves! Imagine if you could be the rare individual whose positive traits truly did carry over into all, or at least all important, situations. You would be traveling an uncrowded road.

***

Want More? Check out our ever-growing database of mental models.

Why You’re Not Motivated and 3 Handy Tools to Fix It

It is rare that a book devotes almost half of itself to explaining the concrete implementation of its core ideas. In Drive: The Surprising Truth About What Motivates Us, Daniel Pink includes a toolkit which he says, “is your guide to taking the ideas in this book and putting them into action.”

The toolkit covers ways to increase your motivation in the context of individuals, organizations, parents, educators, and even exercise.

Pink dedicates a section in the toolkit to Strategies for Awakening Your Motivation. There are a few great tips here that are worth highlighting.

The Motivation Toolkit

Give Yourself a ‘Flow Test’

Mihaly Csikszentmihalyi did more than discover the concept of flow. He also introduced an ingenious new technique to measure it. Csikszentmihalyi and his University of Chicago team equipped participants in their research studies with electronic pagers. Then they paged people at random intervals (approximately eight times a day) for a week, asking them to describe their mental state at that moment. Compared with previous methods, these real-time reports proved far more honest and revealing.

You can use Csikszentmihalyi’s methodological innovation in your own quest for mastery by giving yourself a ‘flow test.’ Set a reminder on your computer or mobile phone to go off at forty random times in a week. Each time your device beeps, write down what you’re doing, how you’re feeling, and whether you’re in ‘flow.’ Record your observations, look at the patterns, and consider the following questions:

  • Which moments produced feelings of ‘flow’? Where were you? What were you working on? Who were you with?
  • Are certain times of day more flow-friendly than others? How could you restructure your day based on your findings?
  • How might you increase the number of optimal experiences and reduce the moments when you felt disengaged or distracted?
  • If you’re having doubts about your job or career, what does this exercise tell you about your true source of intrinsic motivation?
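Pink’s forty-random-beeps protocol is easy to approximate with a short script. Here is a minimal sketch that spreads prompts across a week of waking hours; the function and parameter names are illustrative assumptions, not from the book:

```python
import random
from datetime import datetime, timedelta

def flow_test_schedule(start, n_prompts=40, days=7,
                       day_start=9, day_end=21, seed=None):
    """Generate n_prompts random reminder times over `days` days,
    restricted to waking hours (day_start to day_end, 24h clock).
    Hypothetical helper: names and defaults are this sketch's own."""
    rng = random.Random(seed)
    prompts = []
    for _ in range(n_prompts):
        day = rng.randrange(days)                       # pick a day
        minute = rng.randrange((day_end - day_start) * 60)  # pick a waking minute
        prompts.append(start + timedelta(days=day,
                                         hours=day_start,
                                         minutes=minute))
    return sorted(prompts)

# Example: a week of prompts starting Jan 1
schedule = flow_test_schedule(datetime(2024, 1, 1), seed=42)
for t in schedule[:3]:
    print(t.strftime("%a %H:%M"))
```

Any reminder app that accepts a list of times can consume such a schedule; the point is simply that the prompts are random, so your self-reports sample ordinary moments rather than only memorable ones.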

Just Say No – With a List

Most of us have a to-do list. But legendary management guru Tom Peters also has what he calls a “to don’t” list – an inventory of behaviours and practices that sap his energy, divert his focus, and ought to be avoided. Follow his lead and each week craft your own agenda of avoidance. Staying motivated – directing your own life, making progress, and pursuing purpose – isn’t easy. So get rid of the unnecessary obligations, time-wasting distractions, and useless burdens that stand in your way. And the first step in bulldozing these obstacles is to enumerate them. As Peters puts it, “What you decide not to do is probably more important than what you decide to do.”

Stop-doing lists are not new. If you’re wondering what you should stop doing, here are 9 unproductive habits you can stop right now.

Move Five Steps Closer to Mastery

One key to mastery is what Florida State University psychology professor Anders Ericsson calls deliberate practice – a ‘lifelong period of… effort to improve performance in a specific domain.’ Deliberate practice isn’t running a few miles each day or banging on the piano for twenty minutes each morning. It’s much more purposeful, focused, and, yes, painful. Follow these steps – over and over again for a decade – and you just might become a master:

  • Remember that deliberate practise has one objective: to improve performance. ‘People who play tennis once a week for years don’t get any better if they do the same thing each time,’ Ericsson has said. ‘Deliberate practise is about changing your performance, setting new goals and straining yourself to reach a bit higher each time.’
  • Repeat, repeat, repeat. Repetition matters. Basketball greats don’t shoot ten free throws at the end of team practise; they shoot five hundred.
  • Seek constant, critical feedback. If you don’t know how you’re doing, you won’t know what to improve.
  • Focus ruthlessly on where you need help. While many of us work on what we’re already good at, says Ericsson, ‘those who get better work on their weaknesses.’
  • Prepare for the process to be mentally and physically exhausting. That’s why so few people commit to it, but that’s why it works.

The toolkit also includes a summary of each chapter and even a ‘Drive Discussion Guide,’ a list of twenty questions designed to start a conversation and help you think more deeply about the concepts he has covered. These are some of the fundamentals of active reading and also remind us a bit of the Feynman technique for learning concepts more deeply.

***

Still Curious? Follow up with eight ways to say no, Steve Jobs on focus, and the difference between successful people and very successful people.

Our Genes and Our Behavior

“But now we are starting to show genetic influence on individual differences using DNA. DNA is a game changer; it’s a lot harder to argue with DNA than it is with a twin study or an adoption study.”
— Robert Plomin

***

It’s not controversial to say that our genetics help explain our physical traits. Tall parents will, on average, have tall children. Overweight parents will, on average, have overweight children. Irish parents have Irish-looking kids. This is true to the point of banality, and only the committedly ignorant would dispute it.

It’s slightly more controversial to talk about genes influencing behavior. For a long time, it was denied entirely. For most of the 20th century, the “experts” in human behavior had decided that “nurture” beat “nature” with a score of 100-0. Particularly influential was the child’s early life — the way their parents treated them in the womb and throughout early childhood. (Thanks Freud!)

So, where are we at now?

Genes and Behavior

Developmental scientists and behavioral scientists eventually got to work with twin studies and adoption studies, which tended to show that certain traits were almost certainly heritable and not reliant on environment, thanks to the natural controlled experiments of twins separated at birth. (This eventually provided fodder for Judith Rich Harris’s wonderful work on development and personality.)

All throughout, the geneticists, starting with Gregor Mendel and his peas, kept on working. As behavioral geneticist Robert Plomin explains, the genetic camp split early on. Some people wanted to understand the gene itself in detail, using very simple traits to figure it out (eye color, long or short wings, etc.) and others wanted to study the effect of genes on complex behavior, generally:

People realized these two views of genetics could come together. Nonetheless, the two worlds split apart because Mendelians became geneticists who were interested in understanding genes. They would take a convenient phenotype, a dependent measure, like eye color in flies, just something that was easy to measure. They weren’t interested in the measure, they were interested in how genes work. They wanted a simple way of seeing how genes work.

By contrast, the geneticists studying complex traits—the Galtonians—became quantitative geneticists. They were interested in agricultural traits or human traits, like cardiovascular disease or reading ability, and would use genetics only insofar as it helped them understand that trait. They were behavior centered, while the molecular geneticists were gene centered. The molecular geneticists wanted to know everything about how a gene worked. For almost a century these two worlds of genetics diverged.

Eventually, the two began to converge. One camp (the gene people) figured out that once we could sequence the genome, they might be able to understand more complicated behavior by looking directly at genes in specific people with unique DNA, and contrasting them against one another.

The reason why this whole gene-behavior game is hard is because, as Plomin makes clear, complex traits like intelligence are not like eye color. There’s no “smart gene” — it comes from the interaction of thousands of different genes and can occur in a variety of combinations. Basic Mendel-style counting (the sort of dominant/recessive eye color gene thing you learned in high school biology) doesn’t work in analyzing the influence of genes on complex traits:

The word gene wasn’t invented until 1903. Mendel did his work in the mid-19th century. In the early 1900s, when Mendel was rediscovered, people finally realized the impact of what he did, which was to show the laws of inheritance of a single gene. At that time, these Mendelians went around looking for Mendelian 3:1 segregation ratios, which was the essence of what Mendel showed, that inheritance was discrete. Most of the socially, behaviorally, or agriculturally important traits aren’t either/or traits, like a single-gene disorder. Huntington’s disease, for example, is a single-gene dominant disorder, which means that if you have that mutant form of the Huntington’s gene, you will have Huntington’s disease. It’s necessary and sufficient. But that’s not the way complex traits work.

The importance of genetics is hard to overstate, but until the right technology came along, we could only observe it indirectly. A study might have shown that 50% of the variance in cognitive ability was due to genetics, but we had no idea which specific genes, in which combinations, actually produced smarter people.

But the Moore’s law style improvement in genetic testing means that we can now map out entire genomes quickly and at very low cost. And with that, the geneticists have a lot of data to work with, a lot of correlations to begin sussing out. The good thing about finding strong correlations between genes and human traits is that we know which one is causative: the gene! Obviously, your reading ability doesn’t cause you to have certain DNA; it must be the other way around. So “Big Data” style screening is extremely useful, once we get a little better at it.

***

The problem is that, so far, the successes have been modest. There are millions of “ATCG” base pairs to check. As Plomin points out, we can only pinpoint about 20% of the specific genetic influence for something simple like height, which we know is about 90% heritable. Complex traits like schizophrenia are going to take a lot of work:

We’ve got to be able to figure out where the so-called missing heritability is, that is, the gap between the DNA variants that we are able to identify and the estimates we have from twin and adoption studies. For example, height is about 90 percent heritable, meaning, of the differences between people in height, about 90 percent of those differences can be explained by genetic differences. With genome-wide association studies, we can account for 20 percent of the variance of height, or a quarter of the heritability of height. That’s still a lot of missing heritability, but 20 percent of the variance is impressive.

With schizophrenia, for example, people say they can explain 15 percent of the genetic liability. The jury is still out on how that translates into the real world. What you want to be able to do is get this polygenic score for schizophrenia that would allow you to look at the entire population and predict who’s going to become schizophrenic. That’s tricky because the studies are case-control studies based on extreme, well-diagnosed schizophrenics, versus clean controls who have no known psychopathology. We’ll know soon how this polygenic score translates to predicting who will become schizophrenic or not.

It brings up an interesting question that gets us back to the beginning of the piece: If we know that genetics influence some complex behavioral traits (and we do), and we can, with the continuing progress of science and technology, sequence a baby’s genome and predict to a certain extent their reading level, facility with math, facility with social interaction, and so on, do we do it?

Well, we can’t until we get a general recognition that genes do indeed influence behavior and do have predictive power as far as how children perform. So far, the track record on getting educators to see that it’s all quite real is pretty bad. Like the Freudians before, there’s a resistance to the “nature” aspect of the debate, probably influenced by some strong ideologies:

If you look at the books and the training that teachers get, genetics doesn’t get a look-in. Yet if you ask teachers, as I’ve done, about why they think children are so different in their ability to learn to read, they know that genetics is important. When it comes to governments and educational policymakers, the knee-jerk reaction is that if kids aren’t doing well, you blame the teachers and the schools; if that doesn’t work, you blame the parents; if that doesn’t work, you blame the kids because they’re just not trying hard enough. An important message for genetics is that you’ve got to recognize that children are different in their ability to learn. We need to respect those differences because they’re genetic. Not that we can’t do anything about it.

It’s like obesity. The NHS is thinking about charging people to be fat because, like smoking, they say it’s your fault. Weight is not as heritable as height, but it’s highly heritable. Maybe 60 percent of the differences in weight are heritable. That doesn’t mean you can’t do anything about it. If you stop eating, you won’t gain weight, but given the normal life in a fast-food culture, with our Stone Age brains that want to eat fat and sugar, it’s much harder for some people.

We need to respect the fact that genetic differences are important, not just for body mass index and weight, but also for things like reading disability. I know personally how difficult it is for some children to learn to read. Genetics suggests that we need to have more recognition that children differ genetically, and to respect those differences. My grandson, for example, had a great deal of difficulty learning to read. His parents put a lot of energy into helping him learn to read. We also have a granddaughter who taught herself to read. Both of them now are not just learning to read but reading to learn.

Genetic influence is just influence; it’s not deterministic like a single gene. At government levels—I’ve consulted with the Department for Education—I don’t think they’re as hostile to genetics as I had feared, they’re just ignorant of it. Education just doesn’t consider genetics, whereas teachers on the ground can’t ignore it. I never get static from them because they know that these children are different when they start. Some just go off on very steep trajectories, while others struggle all the way along the line. When the government sees that, they tend to blame the teachers, the schools, or the parents, or the kids. The teachers know. They’re not ignoring this one child. If anything, they’re putting more energy into that child.

It’s frustrating for Plomin because he knows that eventually DNA mapping will get good enough that real, and helpful, predictions will be possible. We’ll be able to target kids early enough to make real differences — earlier than problems actually manifest — and hopefully change the course of their lives for the better. But so far, no dice.

Education is the last backwater of anti-genetic thinking. It’s not even anti-genetic. It’s as if genetics doesn’t even exist. I want to get people in education talking about genetics because the evidence for genetic influence is overwhelming. The things that interest them—learning abilities, cognitive abilities, behavior problems in childhood—are the most heritable things in the behavioral domain. Yet it’s like Alice in Wonderland. You go to educational conferences and it’s as if genetics does not exist.

I’m wondering about where the DNA revolution will take us. If we are explaining 10 percent of the variance of GCSE scores with a DNA chip, it becomes real. People will begin to use it. It’s important that we begin to have this conversation. I’m frustrated at having so little success in convincing people in education of the possibility of genetic influence. It is ignorance as much as it is antagonism.

Here’s one call for more reality recognition.

***

Still Interested? Check out a book by John Brockman of Edge.org with a curated collection of articles published on genetics.