
Moving the Finish Line: The Goal Gradient Hypothesis

Imagine a middle-distance runner in a championship race. He’s competing in the 1600 meter run.

For the first two laps, he runs at a steady but hard pace, staying near the head of the pack, or at least the middle, hoping not to fall too far behind while conserving energy for the whole race.

About 800 meters in, he feels himself start to fatigue and slow. At 1000 meters, he feels himself consciously expending less energy. At 1200, he’s convinced that he didn’t train enough.

Now watch him approach the last 100 meters, the “mad dash” for the finish. He’s been running what would be an all-out sprint to us mortals for 1500 meters, and yet what happens now, as he feels himself neck and neck with his competitors, the finish line in sight?

He speeds up. The fatigue falls away. The goal is right there, and all he needs is one last push. So he pushes.

This is called the Goal Gradient Effect or, more precisely, the Goal Gradient Hypothesis. Its effect on biological creatures is not just a feeling but a real, measurable phenomenon.

The Math of Human Behavior

The first person to try explaining the goal gradient hypothesis was an early behavioral psychologist named Clark L. Hull.

Hull was a pretty hardcore behaviorist: he thought that human behavior, like that of other animals, could eventually be reduced to mathematical prediction based on rewards and conditioning. As insane as this sounds now, he had a neat mathematical formula for it:

[Image: Hull’s mathematical formula for behavior]
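
Hull’s best-known behavior formula is commonly given in textbooks in the form below. Since the original image hasn’t survived here, this is a hedged reconstruction of the standard form rather than a transcription of the exact screenshot:

```latex
% Hull's reaction-potential formula, in its common textbook form
% (a reconstruction; the original post showed it only as an image):
{}_{S}E_{R} = {}_{S}H_{R} \times D \times V \times K
% {}_S E_R : excitatory (reaction) potential, the strength of a response
% {}_S H_R : habit strength, built up through prior reinforcement
% D : drive, V : stimulus-intensity dynamism, K : incentive motivation
```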

Some of his ideas eventually came to be seen as extremely limiting, Procrustean-bed models of human behavior, but the Goal Gradient Hypothesis itself was replicated many times over the years.

Hull himself wrote papers with titles like “The Goal-Gradient Hypothesis and Maze Learning” to explore the effect in rats. As Hull put it, “...animals in traversing a maze will move at a progressively more rapid pace as the goal is approached.” Just like the runner above.

Most of Hull’s work focused on animals rather than humans, and it showed fairly conclusively that, when approaching a reward, animals do seem to speed up as the goal nears, enticed by the end of the maze. The idea was, however, resurrected in the human realm in 2006 with a paper entitled “The Goal-Gradient Hypothesis Resurrected: Purchase Acceleration, Illusionary Goal Progress, and Customer Retention.”

The paper examined consumer behavior through the “goal gradient” lens and found that it isn’t just rats that feel the tug of the “end of the race” — we do too. Examining a few measurable areas of human behavior, the researchers found that consumers would work harder to earn incentives as the goal came within sight, and that after the reward was earned, they’d slow their efforts:

We found that members of a café RP accelerated their coffee purchases as they progressed toward earning a free coffee. The goal-gradient effect also generalized to a very different incentive system, in which shorter goal distance led members to visit a song-rating Web site more frequently, rate more songs during each visit, and persist longer in the rating effort. Importantly, in both incentive systems, we observed the phenomenon of post-reward resetting, whereby customers who accelerated toward their first reward exhibited a slowdown in their efforts when they began work (and subsequently accelerated) toward their second reward. To the best of our knowledge, this article is the first to demonstrate unequivocal, systematic behavioural goal gradients in the context of the human psychology of rewards.

Fascinating.

Putting The Goal Gradient Hypothesis to Work

If we’re to take the idea seriously, the Goal Gradient Hypothesis has some interesting implications for leaders and decision-makers.

The first and most important is probably that incentive structures should take the idea into account. This is fairly intuitive (but often unrecognized): far-away rewards are much less motivating than near-term ones. Given the chance to earn $1,000 at the end of this month, and each month after that, or $12,000 at the end of the year, which would you work harder for?

What if I pushed it back even further but gave you some “interest” to compensate: Would you work harder for the potential to earn $90,000 five years from now, or to earn $1,000 this month, followed by $1,000 the following month, and so on, every single month for five years?
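
To make the comparison concrete, here is a minimal sketch in Python. The hyperbolic discount function V = A / (1 + kD) and the value k = 0.1 per month are illustrative assumptions (a standard form from the behavioral literature), not something the post specifies:

```python
# Compare the two payment schedules under hyperbolic discounting.
# The discount function and k = 0.1/month are assumptions for illustration.

def felt_value(amount, delay_months, k=0.1):
    """Subjective value of a reward `delay_months` away (Mazur's hyperbolic form)."""
    return amount / (1 + k * delay_months)

# Option A: $90,000 paid once, 60 months from now.
lump_sum = felt_value(90_000, 60)

# Option B: $1,000 at the end of every month for five years ($60,000 nominal).
monthly_stream = sum(felt_value(1_000, m) for m in range(1, 61))

print(f"Option A feels like ${lump_sum:,.0f}")        # ~$12,857
print(f"Option B feels like ${monthly_stream:,.0f}")  # ~$19,000
```

Under these assumptions the monthly stream, though nominally $30,000 smaller, feels considerably larger, which is the point of the example.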

Companies like Nucor take the idea seriously: they pay bonuses to lower-level employees based on monthly production rather than waiting until the end of the year. Essentially, the end of the maze arrives every 30 days instead of once per year, shortening the time between doing the work and receiving the reward.

The other takeaway concerns consumer behavior, as examined in the marketing paper above. If you’re offering rewards for a specific action from your customers, do you reward them sooner, or later?

The answer is almost always going to be “sooner.” In fact, the effect may be strong enough that you can get away with a smaller total reward by increasing its velocity.

Lastly, we might be able to harness the Hypothesis in our personal lives.

Let’s say we want to start reading more. Do we set a goal to read 52 books this year and hold ourselves accountable, or to read 1 book a week? What about 25 pages per day?

Not only does moving the goalposts closer tend to increase our motivation, but we also repeatedly prove to ourselves that we’re capable of hitting our targets. This is classic behavioral psychology: instant rewards rather than delayed ones (even if the rewards are only psychological). It also keeps us from procrastinating — leaving 35 books to be read in the last two months of the year, for example.
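
For what it’s worth, the arithmetic behind the three framings, assuming a typical book runs about 300 pages (my assumption, not the post’s):

```python
# Three framings of the same reading goal; book length is an assumption.
pages_per_book = 300
books_per_year_goal = 52

pages_per_day = books_per_year_goal * pages_per_book / 365
print(f"52 books/year -> about {pages_per_day:.0f} pages/day")   # ~43

# The gentler daily framing still compounds into a respectable total:
books_per_year = 25 * 365 / pages_per_book
print(f"25 pages/day  -> about {books_per_year:.0f} books/year") # ~30
```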

Those three seem like useful lessons, but here’s a challenge: Try synthesizing a new rule or idea of your own, combining the Goal Gradient Effect with at least one other psychological principle, and start testing it out in your personal life or in your organization. Don’t let useful nuggets sit around; instead, start eating the broccoli.

Peter Bevelin on Seeking Wisdom, Mental Models, Learning, and a Lot More

One of the most impactful books we’ve ever come across is the wonderful Seeking Wisdom: From Darwin to Munger, written by the Swedish investor Peter Bevelin. In the spirit of multidisciplinary learning, Seeking Wisdom is a compendium of ideas from biology, psychology, statistics, physics, economics, and human behavior.

Mr. Bevelin is out with a new book full of wisdom from Warren Buffett & Charlie Munger: All I Want to Know is Where I’m Going to Die So I’ll Never Go There. We were fortunate enough to have a chance to interview Peter recently, and the result is the wonderful discussion below.

***

What was the original impetus for writing these books?

The short answer: to improve my thinking. And when I started writing what later became Seeking Wisdom, I could express it even more simply: “I was dumb and wanted to be less dumb.” As Munger says: “It’s ignorance removal…It’s dishonorable to stay stupider than you have to be.” And I had done some stupid things, and I had seen a lot of stupidity done by people in life and in business.

A seed was first planted when I read Charlie Munger’s worldly wisdom speech, and another when he referred to Darwin as a great thinker. So I said to myself: I am 42 now. Why not take some time off business and spend a year learning, reflecting and writing about the subject Munger introduced to me – human behavior and judgments.

None of my writings started out as a book project. I wrote my first book – Seeking Wisdom – as a memorandum for myself, with the expectation that I could transfer some of its essentials to my children. I learn and write because I want to be a little wiser day by day. I don’t want to be a great problem-solver. I want to avoid problems – prevent them from happening and do right from the beginning. And I focus on consequential decisions. To paraphrase Buffett and Munger – decision-making is not about making brilliant decisions, but avoiding terrible ones. Mistakes and dumb decisions are a fact of life and I’m going to make more, but as long as I can avoid the big or “fatal” ones I’m fine.

So I started to read and write to learn what works and what doesn’t, and why. And I liked Munger’s “All I want to know is where I’m going to die so I’ll never go there” approach. And as he said, “You understand it better if you go at it the way we do, which is to identify the main stupidities that do bright people in and then organize your patterns for thinking and developments, so you don’t stumble into those stupidities.” Then I “only” had to a) understand the central “concept” and its derivatives and describe it in as simple a way as possible for me, and b) organize what I learnt in a way that was logical and useful for me.

And what better way to learn this than from those who already knew it?

After I learnt some things about our brain, I understood that thinking doesn’t come naturally to us humans – most of it is just unconscious, automatic reaction. Therefore I needed to set up the environment and design a system that made it easier for me to know what to do and to prevent and avoid harm. Things like simple rules of thumb, tricks and filters. Of course, I could only do that once I had the foundation. And as the years have passed, I’ve found that filters are a great way to save time and misery. As Buffett says, “I process information very quickly since I have filters in my mind.” And they have to be simple – as the proverb says, “Beware of the door that has too many keys.” The more complicated a process is, the less effective it is.

Why do I write? Because it helps me understand and learn better. And if I can’t write something down clearly, then I have not really understood it. As Buffett says, “I learn while I think when I write it out. Some of the things, I think I think, I find don’t make any sense when I start trying to write them down and explain them to people … And if it can’t stand applying pencil to paper, you’d better think it through some more.”

My own test is one that a physicist friend of mine told me many years ago: “You haven’t really understood an idea if you can’t in a simple way describe it to almost anyone.” Luckily, I don’t have to understand a zillion things to function well.

And even if some of my own and others’ thoughts ended up as books, they are all living documents and new starting points for further learning, un-learning and simplifying/clarifying. To quote Feynman, “A great deal of formulation work is done in writing the paper, organizational work, organization. I think of a better way, a better way, a better way of getting there, of proving it. I never do much — I mean, it’s just cleaner, cleaner and cleaner. It’s like polishing a rough-cut vase. The shape, you know what you want and you know what it is. It’s just polishing it. Get it shined, get it clean, and everything else.”

Which book did you learn the most from the experience of writing/collecting?

Seeking Wisdom, because I had to do a lot of research – reading, talking to people, etc. – especially in the fields of biology and brain science, since I wanted to first understand what influences our behavior. I also spent some time at a Neurosciences Institute to get a better understanding of how our anatomy, physiology and biochemistry constrain our behavior.

And I had to work it out my own way and write it down in my own words so I really could understand it. It took a lot of time, but it was a lot of fun to figure it out, and I learnt much more – and it stuck better – than if I had just tried to memorize what somebody else had already written. I may not have gotten everything letter-perfect, but good enough to be useful for me.

As I said, the expectation wasn’t to create a book. In fact, that would have removed a lot of my motivation. I did it because I had an interest in becoming better. It goes back to the importance of intrinsic motivation. As I wrote in Seeking Wisdom: “If we reward people for doing what they like to do anyway, we sometimes turn what they enjoy doing into work. The reward changes their perception. Instead of doing something because they enjoy doing it, they now do it because they are being paid. The key is what a reward implies. A reward for our achievements makes us feel that we are good at something thereby increasing our motivation. But a reward that feels controlling and makes us feel that we are only doing it because we’re paid to do it, decreases the appeal.”

It may sound like a cliché, but the joy was in the journey – reading, learning and writing – not the destination – the finished book. Has the book made a difference for some people? Yes, I hope so, but often people revert to their old behavior. Some of them are the same people who – to paraphrase something attributed to Churchill – occasionally should check their intentions and strategies against their results. But the reality is what Munger once said: “Everyone’s experience is that you teach only what a reader almost knows, and that seldom.” Still, I am happy that my books had an impact and made a difference to a few people. That’s enough.

Why did the new book (All I Want To Know Is Where I’m Going To Die So I’ll Never Go There) have a vastly different format?

It was more fun to write about what works and what doesn’t in a dialogue format. But also because vivid and hopefully entertaining “lessons” are easier to remember and recall. And you will find a lot of quotes in there that most people haven’t read before.

I wanted to write a book like this to reinforce a couple of concepts in my head. So even if some of the text sometimes comes out like advice to the reader, I always think about what the mathematician Gian-Carlo Rota once said, “The advice we give others is the advice that we ourselves need.”

How do you define Mental Models?

Some kind of representation that describes how reality is (as it is known today) – a principle, an idea, a basic concept, something that works or not – that I have in my head and that helps me know what to do or not do. Something that has stood the test of time.

For example some timeless truths are:

  • Reality is that complete competitors – same product/niche/territory – cannot coexist (the competitive exclusion principle). What works is going where there is no or very weak competition, plus differentiation/advantages that others can’t copy (assuming, of course, we have something that is needed or wanted now and in the future).
  • Reality is that we get what we reward for. What works is making sure we reward for what we want to achieve.

I favor underlying principles and notions that I can apply broadly to different and relevant situations. Since some models don’t resemble reality, the word “model” for me is more of an illustration/story of an underlying concept, trick, method, what works etc. that agrees with reality (as Munger once said, “Models which underlie reality”) and help me remember and more easily make associations.

But I don’t judge or care how others label it or do it – models, concepts, default positions … The important thing is that whatever we use, it reflects and agrees with reality and works to help us understand or explain a situation or know what to do or not do. “Useful” and “good enough” guide me. I am pretty pragmatic – whatever works is fine. I follow Deng Xiaoping: “I don’t care whether the cat is black or white as long as it catches mice.” As Feynman said, “What is the best method to obtain the solution to a problem? The answer is, any way that works.”

I’ll tell you about a thing Feynman said on education which I remind myself of from time to time in order not to complicate things (from Richard P. Feynman, Michael A. Gottlieb, Ralph Leighton, Feynman’s Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics):

“There’s a round table on three legs. Where should you lean on it, so the table will be the most unstable?”
The student’s solution was, “Probably on top of one of the legs, but let me see: I’ll calculate how much force will produce what lift, and so on, at different places.”
Then I said, “Never mind calculating. Can you imagine a real table?”
“But that’s not the way you’re supposed to do it!”
“Never mind how you’re supposed to do it; you’ve got a real table here with the various legs, you see? Now, where do you think you’d lean? What would happen if you pushed down directly over a leg?”
“Nothin’!”
I say, “That’s right; and what happens if you push down near the edge, halfway between two of the legs?”
“It flips over!”
I say, “OK! That’s better!”
The point is that the student had not realized that these were not just mathematical problems; they described a real table with legs. Actually, it wasn’t a real table, because it was perfectly circular, the legs were straight up and down, and so on. But it nearly described, roughly speaking, a real table, and from knowing what a real table does, you can get a very good idea of what this table does without having to calculate anything – you know darn well where you have to lean to make the table flip over. So, how to explain that, I don’t know! But once you get the idea that the problems are not mathematical problems but physical problems, it helps a lot.
Anyway, that’s just two ways of solving this problem. There’s no unique way of doing any specific problem. By greater and greater ingenuity, you can find ways that require less and less work, but that takes experience.

Which mental models “carry the most freight?” (Related follow up: Which concepts from Buffett/Munger/Mental Models do you find yourself referring to or appreciating most frequently?)

Ideas from biology and psychology since many stupidities are caused by not understanding human nature (and you get illustrations of this nearly every day). And most of our tendencies were already known by the classic writers (Publilius Syrus, Seneca, Aesop, Cicero etc.)

Others that I find very useful, both in business and in private life, are the ideas of Quantification (without the fancy math), Margin of safety, Backups, Trust, Constraints/Weakest link, Good or Bad Economics/Competitive advantage, Opportunity cost, and Scale effects. I also think Keynes’s idea of changing your mind when you get new facts or information is very useful.

But since reality isn’t divided into different categories but involves a lot of factors interacting, I need to synthesize many ideas and concepts.

Are there any areas of the mental models approach you feel are misunderstood or misapplied?

I don’t know about that but what I often see among many smart people agrees with Munger’s comment: “All this stuff is really quite obvious and yet most people don’t really know it in a way where they can use it.”

Anyway, I believe if you really understand an idea and what it means – not only memorizing it – you should be able to work out its different applications and functional equivalents. Take a simple big idea – think on it – and after a while you see its wider applications. To use Feynman’s advice, “It is therefore of first-rate importance that you know how to “triangulate” – that is, to know how to figure something out from what you already know.” As a good friend says, “Learn the basic ideas, and the rest will fill itself in. Either you get it or you don’t.”

Most of us learn and memorize a specific concept or method etc. and learn about its application in one situation. But when the circumstances change we don’t know what to do and we don’t see that the concept may have a wider application and can be used in many situations.

Take for example one big and useful idea – scale effects: that the scale of size, time and outcomes changes things – characteristics, proportions, effects, behavior – and that what is good or not must be tied to scale. This is a very fundamental idea from math. Munger described some of this idea’s usefulness in his worldly wisdom speech. One effect of this idea that I often see people miss, and that I believe is important, is group size and behavior: trust, feelings of affection and altruistic actions break down as group size increases, which of course is important to know in business settings. I wrote about this in Seeking Wisdom (you can read more by searching for the Dunbar Number). I know of some businesses that understand the importance of this and split companies into smaller ones when they get too big (one example is Semco).

Another general idea is “Gresham’s Law,” which can be generalized to any process or system where the bad drives out the good. Like natural selection, or “We get what we select for” (and as Garrett Hardin writes, “The more general principle is: We get whatever we reward for.”).

While we are on the subject of mental models, let me bring up another thing that distinguishes the great thinkers from us ordinary mortals: their ability to quickly assess and see the essence of a situation – the critical things that really matter and what can be ignored. They have a clear notion of what they want to achieve or avoid, and then they have this ability to zoom in on the key factor(s) involved.

One reason they can do that is that they have a large repertoire of stored personal and vicarious experiences and concepts in their heads. They are masters at pattern recognition and connection. Some call it intuition, but as Herbert Simon once said, “The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition.”

It is about making associations. For example, roughly like this:
Situation X → Association (what does this remind me of?) → experience, concept, metaphor, analogy, trick, filter… (assuming, of course, we are able to see the essence of the situation) → What counts and what doesn’t? What works or not? What to do or what to explain?

Let’s take employing someone as an example (or looking at a business proposal). This reminds me of one key factor – trustworthiness and Buffett’s story, “If you’re looking for a manager, find someone who is intelligent, energetic and has integrity. If he doesn’t have the last, make sure he lacks the first two.”

I believe Buffett and Munger excel at this – they have seen and experienced so much about what works and not in business and behavior.

Buffett referred to the issue of trust, chain letters and pattern recognition at the latest annual meeting:

You can get into a lot of trouble with management that lacks integrity… If you’ve got an intelligent, energetic guy or woman who is pursuing a course of action which gets put on the front page, it could make you very unhappy. You can get into a lot of trouble… We’ve seen patterns… Pattern recognition is very important in evaluating humans and businesses. Pattern recognition isn’t one hundred percent and none of the patterns exactly repeat themselves, but there are certain things in business and securities markets that we’ve seen over and over and frequently come to a bad end but frequently look extremely good in the short run. One which I talked about last year was the chain letter scheme. You’re going to see chain letters for the rest of your life. Nobody calls them chain letters because that’s a connotation that will scare you off but they’re disguised as chain letters and many of the schemes on Wall Street, which are designed to fool people, have that particular aspect to it… There were patterns at Valeant certainly… if you go and watch the Senate hearings, you will see there are patterns that should have been picked up on.

This is what he wrote on chain letters in the 2014 annual report:

In the late 1960s, I attended a meeting at which an acquisitive CEO bragged of his “bold, imaginative accounting.” Most of the analysts listening responded with approving nods, seeing themselves as having found a manager whose forecasts were certain to be met, whatever the business results might be. Eventually, however, the clock struck twelve, and everything turned to pumpkins and mice. Once again, it became evident that business models based on the serial issuances of overpriced shares – just like chain-letter models – most assuredly redistribute wealth, but in no way create it. Both phenomena, nevertheless, periodically blossom in our country – they are every promoter’s dream – though often they appear in a carefully-crafted disguise. The ending is always the same: Money flows from the gullible to the fraudster. And with stocks, unlike chain letters, the sums hijacked can be staggering.

And of course, the more prepared we are or the more relevant concepts and “experiences” we have in our heads, the better we all will be at this. How do we get there? Reading, learning and practice so we know it “fluently.” There are no shortcuts. We have to work at it and apply it to the real world.

As a reminder to myself so I understand my limitation and “circle”, I keep a paragraph from Munger’s USC Gould School of Law Commencement Address handy so when I deal with certain issues, I don’t fool myself into believing I am Max Planck when I’m really the Chauffeur:

In this world I think we have two kinds of knowledge: One is Planck knowledge, that of the people who really know. They’ve paid the dues, they have the aptitude. Then we’ve got chauffeur knowledge. They have learned to prattle the talk. They may have a big head of hair. They often have fine timbre in their voices. They make a big impression. But in the end what they’ve got is chauffeur knowledge masquerading as real knowledge.

Which concepts from Buffett/Munger/Mental Models do you find most counterintuitive?

One trick or notion I see many of us struggling with, because it goes against our intuition, is the concept of inversion – learning to think “in negatives,” which runs counter to our normal tendency to concentrate on, for example, what we want to achieve, or on confirmations, instead of on what we want to avoid and on disconfirmations. Another example of this is the importance of missing confirming evidence (I call it the “Sherlock trick”) – that negative evidence, and events that don’t happen, matter when something implies they should be present or happen.

Another counterintuitive example is Newton’s third law: forces work in pairs. One object exerts a force on a second object, and the second object exerts a force on the first that is equal in magnitude and opposite in direction. As Newton wrote, “If you press a stone with your finger, the finger is also pressed by the stone.” The same goes for revenge (reciprocation).

Who are some of the non-obvious, or under-the-radar thinkers that you greatly admire?

One who immediately comes to mind – someone I have mentioned in the introductions of two of my books and am fortunate to have as a friend – is Peter Kaufman. An outstanding thinker and a great businessman and human being. On a scale of 1 to 10, he is a 15.

What have you come to appreciate more with Buffett/Munger’s lessons as you’ve studied them over the years?

Their ethics and their ethos of clarity, simplicity and common sense. These two gentlemen are outstanding in their instant ability to exclude bad ideas, what doesn’t work, bad people, scenarios that don’t matter, etc. so they can focus on what matters. Also my amazement that their ethics and ideas haven’t been more replicated. But I assume the answer lies in what Munger once said, “The reason our ideas haven’t spread faster is they’re too simple.”

This reminds me of something my father-in-law (a man I learnt a lot from) once told me about the curse of knowledge and the curse of the academic title. My now-deceased father-in-law was an inventor and manager. He did not have any formal education but was largely self-taught. Once, a big corporation asked for his services to solve a problem its 60 highly educated engineers could not solve. He solved the problem. The engineers said, “It can’t be that simple.” It was as if they were saying, “Here we have 6 years of school, an academic title, and lots of follow-up education. Therefore an engineering problem must be complicated.” Like Buffett once said of Ben Graham’s ideas, “I think that it comes down to those ideas – although they sound so simple and commonplace that it kind of seems like a waste to go to school and get a PhD in Economics and have it all come back to that. It’s a little like spending eight years in divinity school and having somebody tell you that the 10 commandments were all that counted. There is a certain natural tendency to overlook anything that simple and important.”

(I must admit that in the past I had a tendency to be drawn to elegant concepts, which distracted me from the simple truths.)

What things have you come to understand more deeply in the past few years?

  • That I don’t need hundreds of concepts, methods or tricks in my head – there are a few basic, time-filtered fundamental ones that are good enough. As Munger says, “The more basic knowledge you have the less new knowledge you have to get.” And when I look at something “new”, I try to connect it to something I already understand and if possible get a wider application of an already existing basic concept that I already have in my head.
  • Neither do I have to learn everything to cover every single possibility – not only is that impossible, but the big reason is well explained by the British statistician George Box. He said that we shouldn’t be preoccupied with optimal or best procedures but with ones that are good enough over the range of possibilities likely to happen in practice – the circumstances the world really presents to us.
  • The importance of “picking my battles” and of focusing on the long-term consequences of my actions. As Munger said, “A majority of life’s errors are caused by forgetting what one is really trying to do.”
  • How quick most of us are in drawing conclusions. For example, I am often too quick in being judgmental and forget how I myself behaved or would have behaved if put in another person’s shoes (and the importance of seeing things from many views).
  • That I have to “pick my poison,” since there is always a set of problems attached to any system or approach – it can’t be perfect. The key is to try to move to a better set of problems, one I can accept after comparing what appear to be the consequences of each.
  • How efficient and simplified life is when you deal with people you can trust. This includes the importance of the right culture.
  • The extreme importance of the right CEO – a good operator, business person and investor.
  • That luck plays a big role in life.
  • That most predictions are wrong and that prevention, robustness and adaptability are far more important. I can’t help myself – I have to add one thing about the people who give out predictions on all kinds of things. Often these are people who live in a world where their actions have no consequences and where their ideas and theories don’t have to agree with reality.
  • That people or businesses that are foolish in one setting often are foolish in another one (“The way you do anything, is the way you do everything”).
  • Buffett’s advice that “A checklist is no substitute for thinking.” And that sometimes it is easy to overestimate one’s competency in a) identifying or picking what the dominant or key factors are and b) evaluating them including their predictability. That I believe I need to know factor A when I really need to know B – the critical knowledge that counts in the situation with regards to what I want to achieve.
  • Close to this is that I sometimes get too involved in details and can’t see the forest for the trees, and I get sent up too many blind alleys. Just as in medicine, where a whole-body scan sees too much and sends the doctor up blind alleys.
  • The wisdom in Buffett’s advice that “You only have to be right on a very, very few things in your lifetime as long as you never make any big mistakes…An investor needs to do very few things right as long as he or she avoids big mistakes.”

What’s the best investment of time/effort/money that you’ve ever made?

The best thing I have done is marrying my wife. As Buffett says – and it is so true – “Choosing a spouse is the most important decision in your life…You need everything to be stable, and if that decision isn’t good, it may affect every other decision in life, including your business decisions…If you are lucky on health and…on your spouse, you are a long way home.”

A good “investment” is taking the time to continuously improve. It just takes curiosity and a desire to know and understand – real interest. And for me this is fun.

What does your typical day look like? (How much time do you spend reading… and when?)

Every day is a little different but I read every day.

What book has most impacted your life?

There is not one single book or one single idea that has done it. I have picked up things from different books (and still do), and different books and articles made a difference during different periods of my life. Meeting and learning from certain people, along with my own practical experience, has been more important in my development. As an example: when I was in my 30s, a good friend told me something that has been very useful in looking at products and businesses. He said I should always ask who the real customer is: “Who ultimately decides what to buy, and what are their decision criteria? How are they measured and rewarded, and who pays?”

But looking back, if I had had a book like Poor Charlie’s Almanack when I was younger, I would have saved myself some misery. And of course, when it comes to business, managing and investing, nothing beats learning from Warren Buffett’s Letters to Berkshire Hathaway Shareholders.

Another thing I have found is that it is far better to read and reread fewer books – good and timeless ones – and then think, than to absorb a stream of new books and information without thinking, as many people unfortunately do.

Let me finish this with some quotes from my new book that I believe we all can learn from:

  • “There’s no magic to it…We haven’t succeeded because we have some great, complicated systems or magic formulas we apply or anything of the sort. What we have is just simplicity itself.” – Buffett
  • “Our ideas are so simple that people keep asking us for mysteries when all we have are the most elementary ideas…There’s nothing remarkable about it. I don’t have any wonderful insights that other people don’t have. Just slightly more consistently than others, I’ve avoided idiocy…It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.” – Munger
  • “It really is simple – just avoid doing the dumb things. Avoiding the dumb things is the most important.” – Buffett

Finally, I wish you and your readers an excellent day – every day!

 

Why Fiddling With Prices Doesn’t Work


“The fact is, if you don’t find it reasonable that prices should reflect relative scarcity, then fundamentally you don’t accept the market economy, because this is about as close to the essence of the market as you can find.”

— Joseph Heath

***

Inevitably, when the price of a good or service rises rapidly, an accusation of price-gouging follows. The term carries a strong moral admonition of the price-gouger, in favor of the price-gougee. Gas shortages are a classic example: with a local shortage of gasoline, gas stations will tend to mark up the price of gasoline to reflect the supply issue. This is usually met with cries of unfairness. But does that really make sense?

In his excellent book Economics Without Illusions, Joseph Heath argues that it doesn’t.

In fact, this very scenario is market pricing reacting just as it should. With gasoline in short supply, the market price rises so that those who need gasoline have it available, and those who simply want it do not. The price system ensures that everyone makes their choice accordingly. If you’re willing to pay up, you pay up. If you’re not, you make alternative arrangements – drive less, use less heat, etc. This is exactly what market pricing is for – to give us a reference as we make our choices. But it’s still hard for many well-intentioned people to accept. Let’s think it through a little, with Heath’s help.

***

As Heath points out in the book, the objection to so-called “price gouging” goes back at least to the Roman Emperor Diocletian, who in AD 301 imposed an Edict of Maximum Prices:

If the excesses perpetrated by persons of unlimited and frenzied avarice could be checked by some self-restraint—this avarice which rushes for gain and profit with no thought for mankind; or if the general welfare could endure without harm this riotous license by which, in its unfortunate state, it is being very seriously injured every day, the situation could perhaps be faced with dissembling and silence, with the hope that human forbearance might alleviate the cruel and pitiable situation.

And with that, Diocletian set a hard cap on the price of over a thousand different items. Some were tangible, like wheat and barley, and some were intangible, like farm labor and barber services.

This was, of course, very dumb, and it did not last long: people realized that one barber and another were not equal, that wheat and barley might have local supply constraints, and that an arbitrary government price was not the fair one for most of the 1,000+ items.

Inflation vs. Supply

As Heath points out in his book, there are two separate issues to untangle when we talk about “price-gouging” — general inflation and constraints on supply. The two are very different, and mistaking a supply issue for general inflation leads to a lot of wrong thinking:

If you wander into a Polish supermarket and discover that a kilo of carrots is selling for four zlotys, you probably haven’t learned very much. It’s only once you find out what a pound of potatoes costs, and a chicken, and a pint of beer, that you begin to discover whether carrots are expensive or cheap.

As a result, the price of everything going up is analytically equivalent to the price of nothing going up. It follows that if the price of everything seems to be going up, it must be because the price of at least one thing is (inconspicuously) going down. Usually that inconspicuous item with the falling price is hidden in plain sight — money. We tend to overlook money because it’s not directly consumed; it simply circulates, thus we forget that it has a price. We think of “four zlotys per kilo” as the price of carrots, expressed in zlotys, while forgetting that it is also the price of zlotys, expressed in carrots.
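
A toy sketch of Heath’s point, with invented goods and prices: uniform inflation changes the price of money while leaving every relative price untouched.

```python
# Invented money prices, in zlotys. Uniform inflation doubles them all.
prices = {"carrots_kilo": 4.0, "chicken": 20.0, "beer_pint": 5.0}
inflated = {good: 2 * p for good, p in prices.items()}

def relative_price(a, b, p):
    """How many units of good b one unit of good a trades for."""
    return p[a] / p[b]

def zloty_in(good, p):
    """The price of one zloty, expressed in units of a good."""
    return 1.0 / p[good]

print(relative_price("chicken", "carrots_kilo", prices))    # 5.0 kilos per chicken
print(relative_price("chicken", "carrots_kilo", inflated))  # still 5.0 (unchanged)
print(zloty_in("carrots_kilo", prices))     # 0.25 kilo of carrots per zloty
print(zloty_in("carrots_kilo", inflated))   # 0.125: only money got cheaper
```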

As Garrett Hardin would well recognize, part of the problem is the way language misleads us. When the price of stuff is going up, we don’t always make the equivalent connection that the value of our money is going down. And thus we can mistake a rising price environment for the work of greedy so-and-sos who are simply reacting to the declining value of money.

Often, governments hurt the value of money purposely. In Diocletian’s time, the denarius went from being made entirely of silver to being about 2% silver and 98% base metals – the origin of the term currency debasement. In a world of inflation, what seems like greed is often an illusion caused by money losing its value generally (a complex phenomenon in its own right).

To see the flow-through effects of this, imagine that all wage-earners were given a significant raise next month. Sounds good, right? Problem is, the increased cost of labor would be passed through in the form of higher prices for everything, or alternatively, businesses would figure out how to operate with fewer workers altogether. The owners of society’s capital don’t just sit back and lose money — they figure out a new plan or reallocate their resources elsewhere.

Thus, a wage increase would put us right back where we started. This is why the minimum wage debate isn’t simply a humanitarian “business versus workers” issue — there are no easy answers. (In other words, the consequences have consequences.)

Prices are simply signals that allow us to decide how much we really need something. If each of us were handed $5,000 to spend each month, we could choose to spend X amount on food, Y amount on housing, and Z amount on organic 97% cacao chocolate. The alternative would be a state planner sitting in a high tower, trying to fix prices based on how he or she thought everyone should make their food/housing/chocolate allocation for the month. The history of planned economies shows this to be a very bad idea.

This leads us to our next point: our incomes, of course, are not all the same. Might price-fixing help level the playing field?

Fixing What, Exactly?

Heath quotes the economist Abba Lerner, who once said that the problem for the poor is not that prices are too high, but that they don’t have enough money. (“The solution of poverty lay not with the manipulation of prices but with the distribution of money income.”)

On this, Heath turns to the example of electricity prices, an occasional hot-button issue that leads to subsidies because high electricity prices are seen as regressive — poor people spend a larger percentage of their income on electric power than the more well-off do. Why not subsidize electricity prices to help?

The problem is that it’s a massively inefficient way to help, putting a lot of dollars into the pockets of those who don’t need them. Citing Canadian statistics on the use of subsidies to keep electricity prices down, Heath writes:

The middle-income quintile spends an average of $1,117 per year (2.4% of income), while the upper quintile spends $1,522 per year (1.1% of income). This means that the $250 million annual gift being bestowed upon the poor is coupled with a $408 million gift to the middle class and a $556 million gift to the richest 20% of the population. Needless to say, a welfare program that required giving $2 to a rich person for every $1 directed to a poor person would hardly be regarded as progressive (despite the fact that, when expressed as a percentage of income, the poor person is receiving “more”).
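
Making the quote’s arithmetic explicit (the dollar figures come straight from the quote; the three named quintile gifts account for essentially the whole program):

```python
# Annual subsidy "gift" per income group, in millions (figures from the quote).
gift_m = {"poorest 20%": 250, "middle 20%": 408, "richest 20%": 556}

print(f"Total: ${sum(gift_m.values()):,}M")  # $1,214M: Heath's ~$1.2 billion
ratio = gift_m["richest 20%"] / gift_m["poorest 20%"]
print(f"Dollars to the richest per dollar to the poorest: {ratio:.2f}")  # ~2.22
```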

Of course, finding a way to get the entire $1.2 billion to the people who truly need it, through a well-targeted program, would be a far better solution, and one that would also avoid encouraging people to use more electricity than they need (which artificially low prices can do).

This kind of thing happens, but worse, with rent control, the system of fixing rental prices for apartments in cities. In addition to subsidizing the wrong people (anyone with access to a rent-controlled unit, needy or not), the artificially low prices tend to distort the market for apartment and housing construction.

With apartments so affordable, people who might otherwise have purchased a house now choose to rent, crowding out some people who could never afford a home at all. And with prices artificially low, fewer apartment houses are built! Not a great outcome for the people rent control hopes to help.

To understand why, think about the massive spike in energy prices leading up to the 2008 financial crisis. At one point, oil neared $140 per barrel and natural gas reached $13 per MMBtu. The result was somewhat predictable: massive investment flowed into the energy complex, leading to new resources and new technologies, while demand quickly abated. Almost no one correctly predicted that 8 years later, oil would sit below $50 per barrel and natural gas around $2 per MMBtu. This is, of course, how pricing markets are supposed to work. The signals did their job. Artificial prices for metropolitan apartments don’t allow the market to do this job effectively.

Relative Scarcity: The Key to Understanding Market Prices

The main problem with manipulating and fixing prices is a misunderstanding of what determines prices in the first place. In a true market, prices are usually determined by relative scarcity: the intersection between how much you want a particular good relative to other goods and how much of that good is available. As our wants and needs change, and as available supplies change, prices go up and down (ignoring, for now, speculative factors, which play a huge role in some price markets).

What exactly are we paying for when we buy an item?

Clearly, it’s not just the cost of the physical thing being produced. A cup of coffee costs a lot more than a few beans and some water. The total cost is something Heath calls the “social cost” of the good, which includes the entire chain of costs and opportunity costs in producing it:

Whenever someone consumes a good (say, a cup of coffee), this can be thought of as creating a benefit for that individual, combined with a loss for the rest of society (all the time and trouble it took to produce that cup of coffee, now gone). Paying for things is our way of compensating all the people who have been inconvenienced by our consumption. (Next time you buy a cup of coffee at Starbucks, imagine yourself saying to the barista, “I’m sorry that you had to serve me coffee when you could have been doing other things. And please communicate my apologies to the others as well: the owner, the landlord, the shipping company, the Colombian peasants. Here’s $1.75 for all the trouble. Please divide it amongst yourselves.”)

“Social cost” represents the level of renunciation, or foregone consumption, imposed upon the rest of society by each individual’s own consumption. This is a fairly abstract notion, since it’s not just that the good could have been consumed by someone else, but that the labor and resources that went into making that good could have been used to produce something else, which then could have been consumed by someone else. (So when I drink a cup of coffee, I am not only taking away that cup of coffee from all those who might like to have drunk it, but taking away vegetables from those who might like to have used the land to grow food, clothing from those who might like to have employed the agricultural workers in a garment factory, and so on.)

[…]

If the price of coffee tracks changes in supply and demand, it will tend to reflect this level of hardship. If the rest of us really want coffee, then we will be prepared to pay more for it, and so the price will rise. Coffee will become more “dear” (as the British would say), reflecting the fact that the person who drinks it is denying the rest of us something we really want. Thus the coffee-drinker had better really want it in order to justify depriving us of it. His willingness to pay the higher price is precisely what ensures that he does, in fact, really want it.

Where the hardship of producing a certain amount of some good meets the desire for that good, a price emerges. It’s this “market clearing” price that efficiently allocates most of society’s resources the way we need them allocated.
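
As a minimal sketch of what “market clearing” means (the linear curves and every number below are invented for illustration, not from Heath), the clearing price is simply the price at which quantity demanded equals quantity supplied:

```python
# Invented linear curves: demand falls with price, supply rises with it.
def demanded(price):
    return max(0.0, 100 - 20 * price)   # cups of coffee wanted per day

def supplied(price):
    return max(0.0, 40 * price - 20)    # cups producers will make per day

# Scan candidate prices for the one where the two quantities meet.
clearing = min((p / 100 for p in range(1, 1001)),
               key=lambda p: abs(demanded(p) - supplied(p)))
print(f"Clearing price ≈ ${clearing:.2f}, quantity ≈ {demanded(clearing):.0f} cups")
# -> Clearing price ≈ $2.00, quantity ≈ 60 cups
```

Hold the price above $2.00 in this toy market and supply outruns demand (a glut); hold it below and demand outruns supply (a shortage), which is exactly what price caps like Diocletian’s produce.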

If prices are systematically lower than they should be, consumers benefit from society’s hard work in a way that might be better allocated elsewhere, where some other group would happily pay more for the same level of “social costs” imposed, and the producers would receive more for all their work.

Conversely, if prices are too high, then consumers don’t really get to be as happy as they should be relative to the modest “social cost” they’ve imposed. Each outcome is inefficient and produces less happiness and material wealth. A well-established pricing mechanism does the job of sending the right signals about wants, needs, and supplies.

Income Over Pricing

Heath makes a final important point about the inequality of income in society: in many cases, people who have been dealt a rough hand do deserve help. It’s just that playing with the pricing mechanism is usually the worst way to provide it — as we saw above, you hand money to people who don’t need it while distorting an efficient allocation of resources throughout society. Heath calls this the just price fallacy — the idea that some alternative level of prices is more “fair” and that we should intervene to ensure it. The “just price fallacy” fails because it doesn’t ask the crucial question: And then what?

Returning to the dictum that poor people simply don’t have enough money (ridiculous as it sounds), the better method is to attack the other side — income — through the system of taxation and other mechanisms, things we already do in great heaps in modern society and will always argue over. If market prices tend to efficiently signal suppliers about the wants and needs of society, we can usually help the less fortunate best by giving them more “claim checks” rather than distorting the very thing that works.

***

Still Interested? Try reading more from the wonderful book Economics Without Illusions, where Heath takes on some fallacies from the left and some fallacies from the right in the economic debate.

For more from Farnam Street, check out Charlie Munger’s speech on what could make the economics profession work a little better or check out economist John Kay’s recommendations on books about economics in the real world.

How (Supposedly) Rational People Make Decisions

There are four principles of how people make decisions that Gregory Mankiw outlines in his multidisciplinary economics textbook Principles of Economics.

I got the idea of reading an economics textbook from Charlie Munger, the billionaire business partner of Warren Buffett. He said:

Economics was always more multidisciplinary than the rest of soft science. It just reached out and grabbed things as it needed to. And that tendency to just grab whatever you need from the rest of knowledge if you’re an economist has reached a fairly high point in Mankiw’s new textbook Principles of Economics. I checked out that textbook. I must have been one of the few businessmen in America that bought it immediately when it came out because it had gotten such a big advance. I wanted to figure out what the guy was doing where he could get an advance that great. So this is how I happened to riffle through Mankiw’s freshman textbook. And there I found laid out as principles of economics: opportunity cost is a superpower, to be used by all people who have any hope of getting the right answer. Also, incentives are superpowers.

So we know that we can add Opportunity cost and incentives to our list of Mental Models.

Let’s dig in.

Principle 1: People Face Trade-offs

You have likely heard the old saying, “There is no such thing as a free lunch.” There is much to this adage, and it’s one we often forget when making decisions. To get more of something we like, we almost always have to give up something else we like. A good heuristic in life: if someone offers you something for nothing, turn it down.

Making decisions requires trading off one goal against another.

Consider a student who must decide how to allocate her most valuable resource—her time. She can spend all of her time studying economics, spend all of it studying psychology, or divide it between the two fields. For every hour she studies one subject, she gives up an hour she could have used studying the other. And for every hour she spends studying, she gives up an hour that she could have spent napping, bike riding, watching TV, or working at her part-time job for some extra spending money.

Or consider parents deciding how to spend their family income. They can buy food, clothing, or a family vacation. Or they can save some of the family income for retirement or for children’s college education. When they choose to spend an extra dollar on one of these goods, they have one less dollar to spend on some other good.

These are rather simple examples, but Mankiw offers some more complicated ones. Consider the trade-off society faces between efficiency and equality.

Efficiency means that society is getting the maximum benefits from its scarce resources. Equality means that those benefits are distributed uniformly among society’s members. In other words, efficiency refers to the size of the economic pie, and equality refers to how the pie is divided into individual slices.

When government policies are designed, these two goals often conflict. Consider, for instance, policies aimed at equalizing the distribution of economic well-being. Some of these policies, such as the welfare system or unemployment insurance, try to help the members of society who are most in need. Others, such as the individual income tax, ask the financially successful to contribute more than others to support the government. Though they achieve greater equality, these policies reduce efficiency. When the government redistributes income from the rich to the poor, it reduces the reward for working hard; as a result, people work less and produce fewer goods and services. In other words, when the government tries to cut the economic pie into more equal slices, the pie gets smaller.

Principle 2: The Cost of Something Is What You Give Up to Get It

Because of trade-offs, people face decisions between the costs and benefits of one course of action and those of another. But costs are not as obvious as they might first appear — we need to apply some second-order thinking:

Consider the decision to go to college. The main benefits are intellectual enrichment and a lifetime of better job opportunities. But what are the costs? To answer this question, you might be tempted to add up the money you spend on tuition, books, room, and board. Yet this total does not truly represent what you give up to spend a year in college.

There are two problems with this calculation. First, it includes some things that are not really costs of going to college. Even if you quit school, you need a place to sleep and food to eat. Room and board are costs of going to college only to the extent that they are more expensive at college than elsewhere. Second, this calculation ignores the largest cost of going to college—your time. When you spend a year listening to lectures, reading textbooks, and writing papers, you cannot spend that time working at a job. For most students, the earnings they give up to attend school are the single largest cost of their education.

The opportunity cost of an item is what you give up to get that item. When making any decision, decision makers should be aware of the opportunity costs that accompany each possible action. In fact, they usually are. College athletes who can earn millions if they drop out of school and play professional sports are well aware that the opportunity cost of their attending college is very high. It is not surprising that they often decide that the benefit of a college education is not worth the cost.
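
Mankiw’s reasoning as a back-of-the-envelope calculation; every dollar figure below is invented for illustration:

```python
# Opportunity cost of a year of college, following the excerpt's logic.
tuition_and_books  = 15_000
room_and_board     = 12_000
living_cost_anyway = 9_000    # what eating and sleeping would cost outside college
foregone_earnings  = 25_000   # wages given up by studying instead of working

naive_cost = tuition_and_books + room_and_board
true_cost = (tuition_and_books
             + (room_and_board - living_cost_anyway)  # only the premium counts
             + foregone_earnings)                     # the largest, least visible cost

print(f"Naive 'what I paid' figure: ${naive_cost:,}")   # $27,000
print(f"True opportunity cost:      ${true_cost:,}")    # $43,000
```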

Principle 3: Rational People Think at the Margin

For the sake of simplicity, economists normally assume that people are rational. While this assumption causes many problems, there is an undercurrent of truth to it: people systematically and purposefully “do the best they can to achieve their objectives, given opportunities.” There are two parts to rationality. The first is that your understanding of the world is correct. The second is that you maximize the use of your resources toward your goals.

Rational people know that decisions in life are rarely black and white but usually involve shades of gray. At dinnertime, the question you face is not “Should I fast or eat like a pig?” More likely, you will be asking yourself “Should I take that extra spoonful of mashed potatoes?” When exams roll around, your decision is not between blowing them off and studying twenty-four hours a day but whether to spend an extra hour reviewing your notes instead of watching TV. Economists use the term marginal change to describe a small incremental adjustment to an existing plan of action. Keep in mind that margin means “edge,” so marginal changes are adjustments around the edges of what you are doing. Rational people often make decisions by comparing marginal benefits and marginal costs.

Thinking at the margin works for business decisions.

Consider an airline deciding how much to charge passengers who fly standby. Suppose that flying a 200-seat plane across the United States costs the airline $100,000. In this case, the average cost of each seat is $100,000/200, which is $500. One might be tempted to conclude that the airline should never sell a ticket for less than $500. But a rational airline can increase its profits by thinking at the margin. Imagine that a plane is about to take off with 10 empty seats and a standby passenger waiting at the gate is willing to pay $300 for a seat. Should the airline sell the ticket? Of course, it should. If the plane has empty seats, the cost of adding one more passenger is tiny. The average cost of flying a passenger is $500, but the marginal cost is merely the cost of the bag of peanuts and can of soda that the extra passenger will consume. As long as the standby passenger pays more than the marginal cost, selling the ticket is profitable.
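
The same decision in code. The $100,000 flight cost and 200 seats come from the passage; the $25 marginal cost is an assumed stand-in for the peanuts and soda:

```python
# Average vs. marginal cost for the standby-ticket decision.
flight_cost = 100_000    # total cost of flying the plane (from the text)
seats = 200
average_cost = flight_cost / seats   # $500 per seat
marginal_cost = 25                   # assumed: snacks, soda, a little extra fuel

standby_offer = 300
# Selling is profitable whenever the offer beats the *marginal* cost,
# even though it sits far below the *average* cost.
print(f"Average cost per seat: ${average_cost:.0f}")                      # $500
print(f"Sell the $300 standby ticket? {standby_offer > marginal_cost}")   # True
```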

This also helps answer the question of why diamonds are so expensive and water is so cheap.

Humans need water to survive, while diamonds are unnecessary; but for some reason, people are willing to pay much more for a diamond than for a cup of water. The reason is that a person’s willingness to pay for a good is based on the marginal benefit that an extra unit of the good would yield. The marginal benefit, in turn, depends on how many units a person already has. Water is essential, but the marginal benefit of an extra cup is small because water is plentiful. By contrast, no one needs diamonds to survive, but because diamonds are so rare, people consider the marginal benefit of an extra diamond to be large.
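One way to see the logic is with a toy diminishing-marginal-benefit function. The functional form and every number below are illustrative assumptions, not anything from the text:

```python
# Toy model: the marginal benefit of one more unit falls
# as the number of units you already hold rises.

def marginal_benefit(base_value, units_already_held):
    """Illustrative diminishing returns: each extra unit is worth less."""
    return base_value / (1 + units_already_held)

# Water is life-critical (huge base value), but you already have plenty;
# diamonds are unnecessary (modest base value), but you hold almost none.
print(marginal_benefit(base_value=1_000_000, units_already_held=100_000))  # ~10
print(marginal_benefit(base_value=5_000, units_already_held=0))            # 5000.0
```

Total value and marginal value come apart: water’s total value is enormous, yet the next cup is nearly worthless; a diamond’s total value is modest, yet the next one commands a high price.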

A rational decision maker takes an action if and only if the marginal benefit of the action exceeds the marginal cost.

Principle 4: People Respond to Incentives

Incentives induce people to act. If you use a rational approach to decision making that involves trade-offs and comparing costs and benefits, you respond to incentives. Charlie Munger once said: “Never, ever, think about something else when you should be thinking about the power of incentives.”

Incentives are crucial to analyzing how markets work. For example, when the price of an apple rises, people decide to eat fewer apples. At the same time, apple orchards decide to hire more workers and harvest more apples. In other words, a higher price in a market provides an incentive for buyers to consume less and an incentive for sellers to produce more. As we will see, the influence of prices on the behavior of consumers and producers is crucial for how a market economy allocates scarce resources.

Public policymakers should never forget about incentives: Many policies change the costs or benefits that people face and, as a result, alter their behavior. A tax on gasoline, for instance, encourages people to drive smaller, more fuel-efficient cars. That is one reason people in Europe, where gasoline taxes are high, drive smaller cars than people in the United States, where gasoline taxes are low. A higher gasoline tax also encourages people to carpool, take public transportation, and live closer to where they work. If the tax were larger, more people would be driving hybrid cars, and if it were large enough, they would switch to electric cars.

Failing to consider how policies and decisions affect incentives often leads to unforeseen consequences.

Biases and Blunders

Nudge: Improving Decisions About Health, Wealth, and Happiness

You would be hard pressed to come across a reading list on behavioral economics that doesn’t mention Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass Sunstein.

It is a fascinating look at how we can create environments or ‘choice architecture’ to help people make better decisions. But one of the reasons it’s been so influential is because it helps us understand why people sometimes make bad decisions in the first place. If we really want to understand how we can nudge people into making better choices, it’s important to understand why they often make such poor ones.

Let’s take a look at how Thaler and Sunstein explain some of our common mistakes in a chapter aptly called ‘Biases and Blunders.’

Anchoring and Adjustment

Humans have a tendency to put too much emphasis on one piece of information when making decisions. When we overweight one piece of information and make assumptions based on it, we call that an anchor. Say I borrow a 400-page book from a friend and think to myself: the last book I read was about 300 pages and took me 5 days, so I’ll let my friend know I’ll have her book back in 7 days. The problem is that I’ve compared only one factor related to my reading speed, and now I’ve made a decision without taking into account many other factors that could affect the outcome. Is the new book on a topic I’ll digest at the same rate? Will I have the same time for reading over those 7 days? I’ve looked at the number of pages, but is the number of words per page similar?

As Thaler and Sunstein explain:

This process is called ‘anchoring and adjustment.’ You start with some anchor, the number you know, and adjust in the direction you think is appropriate. So far, so good. The bias occurs because the adjustments are typically insufficient.
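Here is the book-loan example as arithmetic, a minimal sketch: the page counts and reading rate come from the example above, while the adjustment factors are illustrative assumptions.

```python
# Anchoring and adjustment in the book-loan estimate.

anchor_pages, anchor_days = 300, 5           # the last book read: the anchor
pages_per_day = anchor_pages / anchor_days   # 60 pages per day

new_book_pages = 400
naive_estimate = new_book_pages / pages_per_day   # ~6.7 days -> "7 days"

# Factors the anchor ignores (assumed values, for illustration only):
denser_pages = 1.3   # 30% more words per page
busier_week = 1.5    # only two-thirds of the usual reading time
realistic_estimate = naive_estimate * denser_pages * busier_week

print(round(naive_estimate, 1), round(realistic_estimate, 1))  # 6.7 13.0
```

The anchor gets you to 7 days; the factors you failed to model nearly double it. That is what “the adjustments are typically insufficient” looks like in practice.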

Availability Heuristic

This is the tendency of our minds to overweight information that is recent and readily available. What did you think about the last time you read about a plane crash? Did you start imagining yourself in a plane crash? Imagine how much it would weigh on your mind if you were set to fly the next day.

We assess the likelihood of risks by asking how readily examples come to mind. If people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot.

Accessibility and salience are closely related to availability, and they are important as well. If you have personally experienced a serious earthquake, you’re more likely to believe that an earthquake is likely than if you read about it in a weekly magazine. Thus, vivid and easily imagined causes of death (for example, tornadoes) often receive inflated estimates of probability, and less-vivid causes (for example, asthma attacks) receive low estimates, even if they occur with a far greater frequency (here, by a factor of twenty). Timing counts too: more recent events have a greater impact on our behavior, and on our fears, than earlier ones.

Representativeness Heuristic

Use of the representativeness heuristic can cause serious misperceptions of patterns in everyday life. When events are determined by chance, such as a sequence of coin tosses, people expect the resulting string of heads and tails to be representative of what they think of as random. Unfortunately, people do not have accurate perceptions of what random sequences look like. When they see the outcomes of random processes, they often detect patterns that they think have great meaning but in fact are just due to chance.

It would seem we have issues with randomness: our brains automatically want to see patterns where none may exist. Try a coin-toss experiment on yourself. Simply flip a coin and keep track of whether it comes up heads or tails. At some point you will hit a streak of either heads or tails, and you will notice a sort of cognitive dissonance; you know that a streak is statistically probable at some point, but you can’t help but think the next toss has to break it, because for some reason, in your head, it’s not right. That unwillingness to accept randomness, our need for a pattern, often clouds our judgement when making decisions.
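If you’d rather not flip a real coin, a minimal simulation makes the point. Nothing here is from the book; it simply shows that streaks are what genuine randomness looks like:

```python
import random

def longest_streak(n):
    """Longest run of identical outcomes in n fair coin flips."""
    flips = [random.choice("HT") for _ in range(n)]
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(1)  # fixed seed so the experiment is repeatable
# In 100 flips, a longest streak of roughly 5-9 heads or tails is typical.
print([longest_streak(100) for _ in range(5)])
```

If anything, a sequence with no long streaks would be the suspicious one.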

Unrealistic Optimism

We have touched upon optimism bias in the past. Optimism truly is a double-edged sword. On one hand it is extremely important to be able to look past a bad moment and tell yourself that it will get better. Optimism is one of the great drivers of human progress.

On the other hand, if you never take those rose-coloured glasses off, you will make mistakes and take risks that could have been avoided. When assessing the possible negative outcomes associated with risky behaviour, we often think ‘it won’t happen to me.’ This is a brain trick: we are often insensitive to the base rate.

Unrealistic optimism is a pervasive feature of human life; it characterizes most people in most social categories. When they overestimate their personal immunity from harm, people may fail to take sensible preventive steps. If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge.

Loss Aversion

When they have to give something up, they are hurt more than they are pleased if they acquire the very same thing.

We are familiar with loss aversion in the context described above, but Thaler and Sunstein take the concept a step further and explain how it plays a role in ‘default choices.’ Loss aversion can make us so fearful of making the wrong decision that we don’t make any decision. This explains why so many people settle for default options.
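To put rough numbers on the asymmetry, here is a minimal sketch using Kahneman and Tversky’s prospect-theory value function. This is a standard formalization from the research Nudge draws on, not something from the book itself, and the parameter values are the commonly cited published estimates:

```python
# Prospect-theory value function (Kahneman & Tversky).
# Typical published estimates: alpha ~= 0.88, lam ~= 2.25.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain (x > 0) or a loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# Gaining $100 feels good; losing $100 feels more than twice as bad.
print(round(value(100), 1))   # ~57.5
print(round(value(-100), 1))  # ~-129.5
```

With lam greater than 1, losses loom larger than equivalent gains, which is exactly the pull toward the default: doing nothing cannot feel like a loss.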

The combination of loss aversion with mindless choosing implies that if an option is designated as the ‘default,’ it will attract a large market share. Default options thus act as powerful nudges. In many contexts defaults have some extra nudging power because consumers may feel, rightly or wrongly, that default options come with an implicit endorsement from the default setter, be it the employer, government, or TV scheduler.

Of course, this is not the only reason default options are so popular. “Anchoring,” which we mentioned above, plays a role here too: our minds anchor immediately to the default option, especially when we are in unfamiliar territory.

We also have a tendency toward inertia, given that mental effort is akin to physical effort – thinking hard consumes real resources. If we don’t know the difference between two 401(k) plans and they both seem similar, why expend the mental effort to switch away from the default investment option? You may not have that thought consciously; it often happens as a “click, whirr” response.

State of Arousal

Our preferred definition requires recognizing that people’s state of arousal varies over time. To simplify things we will consider just the two endpoints: hot and cold. When Sally is very hungry and appetizing aromas are emanating from the kitchen, we can say she is in a hot state. When Sally is thinking abstractly on Tuesday about the right number of cashews she should consume before dinner on Saturday, she is in a cold state. We will call something ‘tempting’ if we consume more of it when hot than when cold. None of this means that decisions made in a cold state are always better. For example, sometimes we have to be in a hot state to overcome our fears about trying new things. Sometimes dessert really is delicious, and we do best to go for it. Sometimes it is best to fall in love. But it is clear that when we are in a hot state, we can often get into a lot of trouble.

For most of us, however, self-control issues arise because we underestimate the effect of arousal. This is something the behavioral economist George Loewenstein (1996) calls the ‘hot-cold empathy gap.’ When in a cold state, we do not appreciate how much our desires and our behavior reflect a certain naivete about the effects that context can have on choice.

The concept of arousal is analogous to mood. At the risk of stating the obvious, our mood can play a decisive role in our decision making. We all know it, but how many among us truly use that insight to make better decisions?

This is one reason we advocate decision journals for meaningful decisions (probably no need to log your cashew calculations); a big part of tracking your decisions is recording your mood when you make them. A zillion contextual cues go into your state of arousal, but taking a quick pause to note which state you’re in as you make a decision can make a difference over time.

Mood is also affected by chemicals. This one may be familiar to you coffee (or tea) addicts out there. Do you recall the last time you felt terrible or uncertain about a decision when you were tired, only to feel confident and spunky about the same topic after a cup of java?

Or, how about alcohol? There’s a reason it’s called a “social lubricant” – our decision making changes when we’ve consumed enough of it.

Lastly, the connection between sleep and mood goes deep. Need we say more?

Peer Pressure

Peer pressure is another tricky nudge, one that can be either positive or negative. We can be nudged to make better decisions when we think our peer group is doing the same. If we think our neighbors conserve more energy or recycle more, we start making a better effort to reduce our consumption and recycle. If we think the people around us are eating better and exercising more, we tend to do the same. Information we get from peer groups can also help us make better decisions through ‘collaborative filtering’: the choices of our peers help us filter and narrow down our own choices, as the sketch below illustrates. If friends who share your views and tastes recommend book X, you may like it as well. (Google, Amazon, and Netflix are built on this principle.)
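As a minimal sketch of that filtering idea: the names, ratings, and similarity rule below are all illustrative assumptions, not how Google, Amazon, or Netflix actually implement it.

```python
# Toy collaborative filtering: recommend what your most similar peer liked.

ratings = {                       # user -> {book: rating}, all made up
    "you":   {"A": 5, "B": 4, "C": 1},
    "maria": {"A": 5, "B": 5, "C": 1, "X": 5},
    "ivan":  {"A": 1, "B": 2, "C": 5, "X": 1},
}

def similarity(u, v):
    """Agreement over co-rated items; 0 is perfect, more negative is worse."""
    shared = set(u) & set(v)
    return -sum(abs(u[b] - v[b]) for b in shared)

me = ratings["you"]
peers = {name: r for name, r in ratings.items() if name != "you"}
closest = max(peers, key=lambda name: similarity(me, peers[name]))

# Recommend the closest peer's top-rated book among those you haven't read.
unseen = {b: r for b, r in peers[closest].items() if b not in me}
print(closest, max(unseen, key=unseen.get))  # maria X
```

Your tastes match Maria’s, so her favorite book that you haven’t read becomes the recommendation; Ivan’s opposite tastes are ignored.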

However, if we are all reading the same book because we constantly see people with it, but none of us actually like it, then we all lose. We run off the mountain with the other lemmings.

Social influences come in two basic categories. The first involves information. If many people do something or think something, their actions and their thoughts convey information about what might be best for you to do or think. The second involves peer pressure. If you care about what other people think about you (perhaps in the mistaken belief that they are paying some attention to what you are doing), then you might go along with the crowd to avoid their wrath or curry their favor.

An important problem here is ‘pluralistic ignorance’ – that is, ignorance, on the part of all or most, about what other people think. We may follow a practice or a tradition not because we like it, or even think it defensible, but merely because we think that most other people like it. Many social practices persist for this reason, and a small shock, or nudge, can dislodge them.

How do we beat social influence? It’s very difficult, and not always desirable: if you are about to enter a building that a lot of people are running away from, there’s a good chance you should run too. But this useful instinct can also lead us astray.

A simple algorithm, when you feel yourself acting on social proof, is to ask yourself: Would I still do this if everyone else were not?

***

For more, check out Nudge.

How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas

John Pollack is a former presidential speechwriter. If anyone knows the power of words to move people to action, shape arguments, and persuade, it is he.

In Shortcut: How Analogies Reveal Connections, Spark Innovation, and Sell Our Greatest Ideas, he explores the powerful role of analogy in persuasion and creativity.

Analogy, he argues, is one of the most powerful tools of persuasion we have.

While they often operate unnoticed, analogies aren’t accidents, they’re arguments—arguments that, like icebergs, conceal most of their mass and power beneath the surface. And in arguments, whoever has the best analogy often wins.

But analogies do more than just persuade others — they also play a role in innovation and decision making.

From the bloody Chicago slaughterhouse that inspired Henry Ford’s first moving assembly line, to the “domino theory” that led America into the Vietnam War, to the “bicycle for the mind” that Steve Jobs envisioned as a Macintosh computer, analogies have played a dynamic role in shaping the world around us.

Despite their importance, many people have only a vague sense of what an analogy actually is.

What is an Analogy?

In broad terms, an analogy is simply a comparison that asserts a parallel—explicit or implicit—between two distinct things, based on the perception of a shared property or relation. In everyday use, analogies appear in many forms, including metaphors, similes, political slogans, legal arguments, marketing taglines, mathematical formulas, biblical parables, logos, TV ads, euphemisms, proverbs, fables, and sports clichés.

Because they are so often disguised, analogies play a bigger role than we consciously realize. Not only do they effectively make arguments, they also trigger emotions, and emotions make it hard to make rational decisions.

While we take analogies for granted, the ideas they convey are notably complex.

All day every day, in fact, we make or evaluate one analogy after another, because such comparisons are the only practical way to sort a flood of incoming data, place it within the context of our experience, and make decisions accordingly.

Remember the powerful metaphor that argument is war. It shapes a wide variety of expressions: “your claims are indefensible,” “he attacked every weak point,” and “You disagree? Okay, shoot!”

Or consider the Map and the Territory: analogies give people the map but explain nothing of the territory.

Warren Buffett is one of the best at using analogies to communicate effectively. One of my favorites is his observation that “you never know who’s swimming naked until the tide goes out.” In other words, when times are good everyone looks amazing; when times suck, hidden weaknesses are exposed. The same could be said of analogies:

We never know what assumptions, deceptions, or brilliant insights they might be hiding until we look beneath the surface.

Most people underestimate the importance of a good analogy. As with many things in life, this lack of awareness comes at a cost. Ignorance is expensive.

Evidence suggests that people who tend to overlook or underestimate analogy’s influence often find themselves struggling to make their arguments or achieve their goals. The converse is also true. Those who construct the clearest, most resonant and apt analogies are usually the most successful in reaching the outcomes they seek.

The key to all of this is figuring out why analogies function so effectively and how they work. Once we know that, we should be able to craft better ones.

Don’t Think of an Elephant

Effective, persuasive analogies frame situations and arguments, often so subtly that we don’t even realize there is a frame, let alone one that might not work in our favor. Such conceptual frames, like picture frames, include some ideas, images, and emotions and exclude others. By setting a frame, a person or organization can, for better or worse, exert remarkable influence on the direction of their own thinking and that of others.

He who holds the pen frames the story. The first person to frame the story controls the narrative, and it takes a massive amount of energy to change the story’s direction. Sometimes even the way people come across information shapes it: stories that would have been non-events if disclosed proactively become front-page news because someone found out.

In Don’t Think of an Elephant, George Lakoff explores the issue of framing. The book famously begins with the instruction “Don’t think of an elephant.”

What’s the first thing we all do? Think of an elephant, of course. It’s almost impossible not to. When we stop consciously thinking about it, it floats away and we move on to other topics — like the new email that just arrived. But every so often it pops back into consciousness, bringing friends with it: associated ideas, other exotic animals, or even thoughts of the GOP.

“Every word, like elephant, evokes a frame, which can be an image or other kinds of knowledge,” Lakoff writes. This is why we want to control the frame rather than be controlled by it.

In Shortcut, Pollack recounts Lakoff’s analysis of an analogy President George W. Bush used in the 2004 State of the Union address, in which he argued that the Iraq war was necessary despite international criticism. Before we go on, take Bush’s side here and think about how you would argue the point – how would you defend it?

In the speech, Bush proclaimed that “America will never seek a permission slip to defend the security of our people.”

As Lakoff notes, Bush could have said, “We won’t ask permission.” But he didn’t. Instead he intentionally used the analogy of permission slip and in so doing framed the issue in terms that would “trigger strong, more negative emotional associations that endured in people’s memories of childhood rules and restrictions.”

Commenting on this, Pollack writes:

Through structure mapping, we correlate the role of the United States to that of a young student who must appeal to their teacher for permission to do anything outside the classroom, even going down the hall to use the toilet.

But is seeking diplomatic consensus to avoid or end a war actually analogous to a child asking their teacher for permission to use the toilet? Not at all. Yet once this analogy has been stated (Farnam Street editorial: and tweeted), the debate has been framed. Those who would reject a unilateral, my-way-or-the-highway approach to foreign policy suddenly find themselves battling not just political opposition but people’s deeply ingrained resentment of childhood’s seemingly petty regulations and restrictions. On an even subtler level, the idea of not asking for a permission slip also frames the issue in terms of sidestepping bureaucratic paperwork, and who likes bureaucracy or paperwork.

Deconstructing Analogies

Deconstructing analogies, we find out how they function so effectively. Pollack argues they meet five essential criteria.

  1. Use the highly familiar to explain something less familiar.
  2. Highlight similarities and obscure differences.
  3. Identify useful abstractions.
  4. Tell a coherent story.
  5. Resonate emotionally.

Let’s explore how these work in greater detail, using the example of master thief Bruce Reynolds, who described the Great Train Robbery as his Sistine Chapel.

The Great Train Robbery

In the dark early hours of August 8, 1963, an intrepid gang of robbers hot-wired a six-volt battery to a railroad signal not far from the town of Leighton Buzzard, some forty miles north of London. Shortly, the engineer of an approaching mail train, spotting the red light ahead, slowed his train to a halt and sent one of his crew down the track, on foot, to investigate. Within minutes, the gang overpowered the train’s crew and, in less than twenty minutes, made off with the equivalent of more than $60 million in cash.

Years later, Bruce Reynolds, the mastermind of what quickly became known as the Great Train Robbery, described the spectacular heist as “my Sistine Chapel.”

Use the familiar to explain something less familiar

Reynolds exploits the public’s basic familiarity with the famous chapel in Vatican City, which, after Leonardo da Vinci’s Mona Lisa, is perhaps the best-known work of Renaissance art in the world. Millions of people, even those who aren’t art connoisseurs, would likely share the cultural opinion that the paintings in the chapel represent “great art” (as compared to a smaller subset of people who might feel the same way about Jackson Pollock’s drip paintings, or Marcel Duchamp’s upturned urinal).

Highlight similarities and obscure differences

Reynolds’s analogy highlights, through implication, similarities between the heist and the chapel—both took meticulous planning and masterful execution. After all, stopping a train and stealing the equivalent of $60 million—and doing it without guns—does require a certain artistry. At the same time, the analogy obscures important differences. By invoking the image of a holy sanctuary, Reynolds triggers a host of associations in the audience’s mind—God, faith, morality, and forgiveness, among others—that camouflage the fact that he’s describing an action few would consider morally commendable, even if the artistry involved in robbing that train was admirable.

Identify useful abstractions

The analogy offers a subtle but useful abstraction: Genius is genius and art is art, no matter what the medium. The logic? If we believe that genius and artistry can transcend genre, we must concede that Reynolds, whose artful, ingenious theft netted millions, is an artist.

Tell a coherent story

The analogy offers a coherent narrative. Calling the Great Train Robbery his Sistine Chapel offers the audience a simple story that, at least on the surface, makes sense: Just as Michelangelo was called by God, the pope, and history to create his greatest work, so too was Bruce Reynolds called by destiny to pull off the greatest robbery in history. And if the Sistine Chapel endures as an expression of genius, so too must the Great Train Robbery. Yes, robbing the train was wrong. But the public perceived it as largely a victimless crime, committed by renegades who were nothing if not audacious. And who but the most audacious in history ever create great art? Ergo, according to this narrative, Reynolds is an audacious genius, master of his chosen endeavor, and an artist to be admired in public.

There is an important point here: the narrative need not be accurate. It is the feelings and ideas the analogy evokes that make it powerful. Within the structure of the analogy, the argument rings true. The framing is enough to establish it succinctly and subtly. That’s what makes it so powerful.

Resonate emotionally

The analogy resonates emotionally. To many people, mere mention of the Sistine Chapel brings an image to mind, perhaps the finger of Adam reaching out toward the finger of God, or perhaps just that of a lesser chapel with which they are personally familiar. Generally speaking, chapels are considered beautiful, and beauty is an idea that tends to evoke positive emotions. Such positive emotions, in turn, reinforce the argument that Reynolds is making—that there’s little difference between his work and that of a great artist.

Jumping to Conclusions

Daniel Kahneman explains the two thinking structures that govern the way we think: System 1 and System 2. In his book Thinking, Fast and Slow, he writes, “Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake are acceptable, and if the jump saves much time and effort.”

“A good analogy serves as an intellectual springboard that helps us jump to conclusions,” Pollack writes. He continues:

And once we’re in midair, flying through assumptions that reinforce our preconceptions and preferences, we’re well on our way to a phenomenon known as confirmation bias. When we encounter a statement and seek to understand it, we evaluate it by first assuming it is true and exploring the implications that result. We don’t even consider dismissing the statement as untrue unless enough of its implications don’t add up. And consider is the operative word. Studies suggest that most people seek out only information that confirms the beliefs they currently hold and often dismiss any contradictory evidence they encounter.

The ongoing battle between fact and fiction commonly takes place in our subconscious systems. In The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, Drew Westen, an Emory University psychologist, writes: “Our brains have a remarkable capacity to find their way toward convenient truths—even if they are not all true.”

This also helps explain why getting promoted has almost nothing to do with your performance.

Remember Apollo Robbins? He’s a professional pickpocket. While he has unique skills, he succeeds largely through the choreography of people’s attention. “Attention,” he says, “is like water. It flows. It’s liquid. You create channels to divert it, and you hope that it flows the right way.”

“Pickpocketing and analogies are in a sense the same,” Pollack concludes, “as the misleading analogy picks a listener’s mental pocket.”

And this is true whether someone else diverts our attention through a resonant but misleading analogy—“Judges are like umpires”—or we simply choose the wrong analogy all by ourselves.

Reasoning by Analogy

We rarely stop to see how much of our reasoning is done by analogy. In a 2005 Harvard Business Review article, Giovanni Gavetti and Jan Rivkin wrote: “Leaders tend to be so immersed in the specifics of strategy that they rarely stop to think how much of their reasoning is done by analogy.” As a result, they miss things. They make connections that don’t exist. They don’t check assumptions. They miss useful insights. By contrast, “Managers who pay attention to their own analogical thinking will make better strategic decisions and fewer mistakes.”

***

Shortcut goes on to explore when to use analogies and how to craft them to maximize persuasion.