Tag: Incentives

Charlie Munger: How to Teach Business School

From Charlie Munger at the 2011 Berkshire Hathaway Shareholders Meeting:

Costco of course is a business that became the best in the world in its category. And it did it with an extreme meritocracy, and an extreme ethical duty—self-imposed to take all its cost advantages as fast as it could accumulate them and pass them on to the customers. And of course they’ve created ferocious customer loyalty. It’s been a wonderful business to watch—and of course strange things happen when you do that and when you do that long enough. Costco has one store in Korea that will do over $400 million in sales this year. These are figures that can’t exist in retail, but of course they do. So that’s an example of somebody having the right managerial system, the right personnel solution, the right ethics, the right diligence, etcetera, etcetera. And that is quite rare. If once or twice in your lifetime you’re associated with such a business you’re a very lucky person.

The more normal business is a business like, say, General Motors, which became the most successful business of its kind in the world and wiped out its common shareholders… what, last year? That is a very interesting story—and if I were teaching business school I would have Value-Line-type figures that took me through the entire history of General Motors and I would try to relate the changes in the graph and data to what happened in the business. To some extent, they faced a really difficult problem—heavily unionized business, combined with great success, and very tough competitors that came up from Asia and elsewhere in Europe. That is a real problem which of course… to prevent wealth from killing people—your success turning into a disadvantage—is a big problem in business.

And so there are all these wonderful lessons in those graphs. I don’t know why people don’t do it. The graphs don’t even exist that I would use to teach. I can’t imagine anybody being dumb enough not to have the kind of graphs I yearn for. [Laughter] But so far as I know there’s no business school in the country that’s yearning for these graphs. Partly the reason they don’t want it is if you taught a history of business this way, you’d be trampling on the territories of all the professors and sub-disciplines—you’d be stealing some of their best cases. And in bureaucracies, even academic bureaucracies, people protect their own turf. And of course a lot of that happened at General Motors. [Applause]

I really think the world … that’s the way it should be taught. Harvard Business School once taught it much that way—and they stopped. And I’d like to make a case study as to why they stopped. [Laughter] I think I can successfully guess. It’s that the course of history of business trampled on the territory of barons of other disciplines like the baron of marketing, the baron of finance, the baron of whatever.

IBM is an interesting case. There’s just one after another that are just utterly fascinating. I don’t think they’re properly taught at all because nobody wants to do the full sweep.


The Great Ideas of the Social Sciences

What are the most important ideas ever put forward in social science?

I’m not asking what are the best ideas, so the truth of them is only obliquely relevant: a very important idea may be largely false. (I think it still must contain some germ of truth, or it would have no plausibility.) Think of it this way: if you were teaching a course called “The Great Ideas of the Social Sciences,” what would you want to make sure you included?

The list:

  • The state as the individual writ large (Plato)
  • Man is a political/social animal (Aristotle)
  • The city of God versus the city of man (Augustine)
  • What is moral for the individual may not be for the ruler (Machiavelli)
  • Invisible hand mechanisms (Hume, Smith, Ferguson)
  • Class struggle (Marx, various liberal thinkers)
  • The subconscious has a logic of its own (Freud)
  • Malthusian population theory
  • The labor theory of value (Ricardo, Marx)
  • Marginalism (Menger, Jevons, Walras)
  • Utilitarianism (Bentham, James Mill, J. S. Mill)
  • Contract theory of the state (Hobbes, Locke, Rousseau)
  • Sapir-Whorf hypothesis
  • Socialist calculation problem (Mises, Hayek)
  • The theory of comparative advantage (Mill, Ricardo)
  • Game theory (von Neumann, Morgenstern, Schelling)
  • Languages come in families (Jones, Young, Bopp)
  • Theories of aggregate demand shortfall (Malthus, Sismondi, Keynes)
  • History as an independent mode of thought (Dilthey, Croce, Collingwood, Oakeshott)
  • Public choice theory (Buchanan, Tullock)
  • Rational choice theory (who?)
  • Equilibrium theorizing (who?)

Nassim Taleb on How to Prevent Other Financial Crises

Let us start with our conclusion, which is also a simple policy recommendation, and one that is not just easy to implement but has been part of history until recent days. We believe that “less is more” in complex systems—that simple heuristics and protocols are necessary for complex problems as elaborate rules often lead to “multiplicative branching” of side effects that cumulatively may have first order effects. So instead of relying on thousands of meandering pages of regulation, we should enforce a basic principle of “skin in the game” when it comes to financial oversight: “The captain goes down with the ship; every captain and every ship.”

That’s Nassim Taleb in a newly published paper. “In other words,” Taleb and his co-author argue, “nobody should be in a position to have the upside without sharing the downside, particularly when others may be harmed.”

That sounds an awful lot like Warren Buffett:

If I were running things if a bank had to go to the government for help, the CEO and his wife would forfeit all their net worth. … And that would apply to any C.E.O. that had been there in the previous two years. …I think you have to change the incentives. The incentives a few years ago were try and report higher quarterly earnings. It’s nice to have carrots, but you need sticks. The idea that some guy who’s worth $500 million leaves and only has $50 million left is not much of a stick as far as I’m concerned.

Incentives matter. Hammurabi’s code, formulated nearly 4,000 years ago, serves as a great example:

If a builder builds a house for a man and does not make its construction firm, and the house which he has built collapses and causes the death of the owner of the house, that builder shall be put to death.

“This principle,” Taleb writes, “has been applied by all civilizations, from the Roman heuristic that engineers spend time sleeping under the bridges they have built, to the maritime rule that the captain should be last to leave the ship when there is a risk of sinking.”

Making Good Citizenship Fun — Richard Thaler

Interesting article by Richard Thaler on encouraging good citizenship by making the desired behavior more fun:

Lotteries are just one way to provide positive reinforcement. Their power comes from the fact that the chance of winning the prize is overvalued. Of course you can simply pay people for doing the right thing, but if the payment is small, it could well backfire. …

An alternative to lotteries is a frequent-flyer-type reward program, where the points can be redeemed for something fun. A free goodie can be a better inducement than cash since it offers that rarest of commodities, a guilt-free pleasure. This sort of reward system has been successfully used in England to encourage recycling. In the Royal Borough of Windsor and Maidenhead outside of London, citizens could sign up for a rewards program in which they earned points depending on the weight of the material they recycled. The points were good for discounts at merchants in the area. Recycling increased by 35 percent.
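Thaler's claim that "the chance of winning the prize is overvalued" is the probability-weighting result from prospect theory. A minimal sketch, using the Tversky-Kahneman (1992) weighting function with their estimated parameter for gains (gamma = 0.61); the function name is my own:

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight w(p) = p^g / (p^g + (1-p)^g)^(1/g).

    For small p, w(p) > p: people treat long odds as better
    than they objectively are, which is why lotteries motivate.
    """
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# A 1-in-1,000 chance is felt as if it were roughly 1.4-in-100:
print(round(weight(0.001), 3))  # ~0.014, about 14x the true odds
```

The same curve also underweights near-certain outcomes (w(0.9) comes out below 0.9), which is the flip side of the lottery effect.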

Owning up to Your Ignorance by Saying “I Don’t Know”

Justin Landis wrote an interesting blog post in response to a Freakonomics podcast episode, “Why Is ‘I Don’t Know’ So Hard to Say?”

The highlight:

[T]hose of us who live in the business world are certainly incentivized to focus on what we know over what we don’t know. And whether we’re talking about closing a deal with an important client or simply competing with peers for a limited number of positions in a given field, this holds true. By highlighting what we know and tactfully hiding what we don’t, we present the illusion of mastery, and this is thought to (and generally does) inspire confidence in our ability to execute. The incentive here is clear, and the results are pretty clear as well, at least in my experience. The unintended consequence is that while we’re all focusing on what we know and making sure others are aware of those things, there is still a lot that we don’t know. In some cases, we may not even really know what we don’t know. And this is the most dangerous place to be in.

Not knowing what you don’t know is dangerous indeed. In this position, you are clueless as to how to minimize gaps in your knowledge, which could hinder your ability to excel in business and in life. Furthermore, if you are in a position to teach, mentor, and/or coach others, then people are looking to you for knowledge and ideas. But instead of spreading knowledge and ideas, you’d be disseminating false information and ignorance. And who needs more ignorance in a world that’s already full of it?

If I asked you if you knew how your cellphone works, you’d probably say yes, right? But if I asked you why your screen reacts to the touch of your fingertips or how the power button turns on your phone, what would you say? The illusion of explanatory depth (IOED) is a term that was first coined by Leonid Rozenblit and Frank Keil in their 2002 research paper, The misunderstood limits of folk science: an illusion of explanatory depth. From the abstract:

People feel they understand complex phenomena with far greater precision, coherence, and depth than they really do; they are subject to an illusion—an illusion of explanatory depth. The illusion is far stronger for explanatory knowledge than many other kinds of knowledge, such as that for facts, procedures or narratives. The illusion for explanatory knowledge is most robust where the environment supports real-time explanations with visible mechanisms.

But consider this: “Fake it ‘til you make it” can be seen as the internal battle cry of people who feel out of their depth. However, despite their insecurities, they still pursue what they desire. This imaginary mask of confidence that hides their shortcomings becomes the buoy that keeps them afloat. Oftentimes, they end up doing what they set out to do and exceed expectations. Sometimes that individual can morph into the person they were pretending to be in the first place. It’s an idealistic outcome, but it does happen.

Needless to say, we must not forget the potentially disastrous outcomes that occur when this transformation doesn’t happen and they go on fooling others and themselves instead.

Once we acknowledge our ignorance, a gap in our knowledge that needs to be filled, we need to consider carefully how we say “I don’t know”: “Right now I don’t have the answer, but I will find out for you.” “I don’t know, but I know someone who does.” Or simply, “I’m sorry, but as much as I wish I could help you, I’m just not the right person to solve this problem. I don’t want to put your project in jeopardy.”

Depending on who you are, facing this reality can be uncomfortable, terrifying, or liberating.

Daniel Kahneman Answers

In one of the more in-depth and wide-ranging Q&A sessions the Freakonomics blog has run, Daniel Kahneman, whose new book is called Thinking, Fast and Slow, answered 22 questions posed by readers.

Three of the questions that caught my attention:

Q. As you found, humans will take huge, irrational risks to avoid taking a loss. Couldn’t that explain why so many Penn State administrators took the huge risk of not disclosing a sexual assault?

A. In such a case, the loss associated with bringing the scandal into the open now is large, immediate and easy to imagine, whereas the disastrous consequences of procrastination are both vague and delayed. This is probably how many cover-up attempts begin. If people were certain that cover-ups would have very bad personal consequences (as happened in this case), we may see fewer cover-ups in future. From that point of view, the decisive reaction of the board of the University is likely to have beneficial consequences down the road.

Q. Problems in healthcare quality may be getting worse before they get better, and there are countless difficult decisions that will have to be made to ensure long-term system improvement. But on a daily basis, doctors and nurses and patients are each making a variety of decisions that shape healthcare on a smaller but more tangible level. How can the essence of Thinking, Fast and Slow be extracted and applied to the individual decisions that patients and providers make so that the quality of healthcare is optimized?

A. I don’t believe that you can expect the choices of patients and providers to change without changing the situation in which they operate. The incentives of fee-for-service are powerful, and so is the social norm that health is priceless (especially when paid for by a third party). Where the psychology of behavior change and the nudges of behavioral economics come into play is in planning for a transition to a better system. The question that must be asked is, “How can we make it easy for physicians and patients to change in the desired direction?”, which is closely related to, “Why don’t they already want the change?” Quite often, when you raise this question, you may discover that some inexpensive tweaks in the context will substantially change behavior. (For example, we know that people are more likely to pay their taxes if they believe that other people pay their taxes.)

Q. How can I identify my incentives and values so I can create a personal program of behavioral conditioning that associates incentives with the behavior likely to achieve long-term goals?

A. The best sources I know for answers to the question are included in the book Nudge by Richard Thaler and Cass Sunstein, in the book Mindless Eating by Brian Wansink, and in the books of the psychologist Robert Cialdini.


From Daniel Kahneman Answers Your Questions