Tag: Incentives

Charlie Munger: How to Teach Business School

From Charlie Munger at the 2011 Berkshire Hathaway Shareholders Meeting:

Costco of course is a business that became the best in the world in its category. And it did it with an extreme meritocracy, and an extreme ethical duty—self-imposed to take all its cost advantages as fast as it could accumulate them and pass them on to the customers. And of course they’ve created ferocious customer loyalty. It’s been a wonderful business to watch—and of course strange things happen when you do that and when you do that long enough. Costco has one store in Korea that will do over $400 million in sales this year. These are figures that can’t exist in retail, but of course they do. So that’s an example of somebody having the right managerial system, the right personnel solution, the right ethics, the right diligence, etcetera, etcetera. And that is quite rare. If once or twice in your lifetime you’re associated with such a business you’re a very lucky person.

The more normal business is a business like, say, General Motors, which became the most successful business of its kind in the world and wiped out its common shareholders… what, last year? That is a very interesting story—and if I were teaching business school I would have Value-Line-type figures that took me through the entire history of General Motors and I would try to relate the changes in the graph and data to what happened in the business. To some extent, they faced a really difficult problem—heavily unionized business, combined with great success, and very tough competitors that came up from Asia and elsewhere in Europe. That is a real problem which of course… to prevent wealth from killing people—your success turning into a disadvantage—is a big problem in business.

And so there are all these wonderful lessons in those graphs. I don’t know why people don’t do it. The graphs don’t even exist that I would use to teach. I can’t imagine anybody being dumb enough not to have the kind of graphs I yearn for. [Laughter] But so far as I know there’s no business school in the country that’s yearning for these graphs. Partly the reason they don’t want it is if you taught a history of business this way, you’d be trampling on the territories of all the professors and sub-disciplines—you’d be stealing some of their best cases. And in bureaucracies, even academic bureaucracies, people protect their own turf. And of course a lot of that happened at General Motors. [Applause]

I really think the world … that’s the way it should be taught. Harvard Business School once taught it much that way—and they stopped. And I’d like to make a case study as to why they stopped. [Laughter] I think I can successfully guess. It’s that the course of history of business trampled on the territory of barons of other disciplines like the baron of marketing, the baron of finance, the baron of whatever.

IBM is an interesting case. There’s just one after another that are just utterly fascinating. I don’t think they’re properly taught at all because nobody wants to do the full sweep.


The Great Ideas of the Social Sciences

What are the most important ideas ever put forward in social science?

I’m not asking what are the best ideas, so the truth of them is only obliquely relevant: a very important idea may be largely false. (I think it still must contain some germ of truth, or it would have no plausibility.) Think of it this way: if you were teaching a course called “The Great Ideas of the Social Sciences,” what would you want to make sure you included?

The list:

  • The state as the individual writ large (Plato)
  • Man is a political/social animal (Aristotle)
  • The city of God versus the city of man (Augustine)
  • What is moral for the individual may not be for the ruler (Machiavelli)
  • Invisible hand mechanisms (Hume, Smith, Ferguson)
  • Class struggle (Marx, various liberal thinkers)
  • The subconscious has a logic of its own (Freud)
  • Malthusian population theory
  • The labor theory of value (Ricardo, Marx)
  • Marginalism (Menger, Jevons, Walras)
  • Utilitarianism (Bentham, James Mill, John Stuart Mill)
  • Contract theory of the state (Hobbes, Locke, Rousseau)
  • Sapir-Whorf hypothesis
  • Socialist calculation problem (Mises, Hayek)
  • The theory of comparative advantage (Mill, Ricardo)
  • Game theory (von Neumann, Morgenstern, Schelling)
  • Languages come in families (Jones, Young, Bopp)
  • Theories of aggregate demand shortfall (Malthus, Sismondi, Keynes)
  • History as an independent mode of thought (Dilthey, Croce, Collingwood, Oakeshott)
  • Public choice theory (Buchanan, Tullock)
  • Rational choice theory (who?)
  • Equilibrium theorizing (who?)

Making Good Citizenship Fun — Richard Thaler

Interesting article by Richard Thaler on encouraging good citizenship by making the desired behavior more fun:

Lotteries are just one way to provide positive reinforcement. Their power comes from the fact that the chance of winning the prize is overvalued. Of course you can simply pay people for doing the right thing, but if the payment is small, it could well backfire. …

An alternative to lotteries is a frequent-flyer-type reward program, where the points can be redeemed for something fun. A free goodie can be a better inducement than cash since it offers that rarest of commodities, a guilt-free pleasure. This sort of reward system has been successfully used in England to encourage recycling. In the Royal Borough of Windsor and Maidenhead outside of London, citizens could sign up for a rewards program in which they earned points depending on the weight of the material they recycled. The points were good for discounts at merchants in the area. Recycling increased by 35 percent.
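To make the “overvalued chance” point concrete, here is a minimal sketch (not from Thaler’s article) of why a long-shot prize can feel worth more than its expected value. It assumes the probability-weighting function from Tversky and Kahneman’s cumulative prospect theory, w(p) = p^g / (p^g + (1 − p)^g)^(1/g), with the commonly cited parameter g ≈ 0.61; the prize size and odds are made-up numbers for illustration.

# Illustrative only: how a small probability gets overweighted, per the
# Tversky-Kahneman weighting function (gamma ~ 0.61 is a commonly cited fit).
def decision_weight(p, gamma=0.61):
    """Subjective decision weight for an objective probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

prize = 1000.0   # hypothetical reward, e.g. a recycling-lottery prize
p_win = 0.001    # a one-in-a-thousand chance of winning

expected_value = p_win * prize               # 1.00
felt_value = decision_weight(p_win) * prize  # roughly 14: the 0.1% chance is
                                             # treated more like a 1.4% chance

print(f"expected value: {expected_value:.2f}")
print(f"weighted value: {felt_value:.2f}")

On this reading, a lottery can out-motivate a guaranteed payment of the same expected cost, which is exactly the leverage Thaler describes.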

Owning up to Your Ignorance by Saying “I Don’t Know”

Justin Landis wrote an interesting blog post in response to a Freakonomics podcast “Why Is ‘I Don’t Know’ So Hard to Say?”

The highlight:

[T]hose of us who live in the business world are certainly incentivized to focus on what we know over what we don’t know. And whether we’re talking about closing a deal with an important client or simply competing with peers for a limited number of positions in a given field, this holds true. By highlighting what we know and tactfully hiding what we don’t, we present the illusion of mastery, and this is thought to (and generally does) inspire confidence in our ability to execute. The incentive here is clear, and the results are pretty clear as well, at least in my experience. The unintended consequence is that while we’re all focusing on what we know and making sure others are aware of those things, there is still a lot that we don’t know. In some cases, we may not even really know what we don’t know. And this is the most dangerous place to be in.

Not knowing what you don’t know is dangerous indeed. In this position, you are clueless as to how to minimize gaps in your knowledge, which could hinder your ability to excel in business and in life. Furthermore, if you are in a position to teach, mentor, and/or coach others, then people are looking to you for knowledge and ideas. But instead of spreading knowledge and ideas, you’d be disseminating false information and ignorance. And who needs more ignorance in a world that’s already full of it?

If I asked you if you knew how your cellphone works, you’d probably say yes, right? But if I asked you why your screen reacts to the touch of your fingertips or how the power button turns on your phone, what would you say? The illusion of explanatory depth (IOED) is a term that was first coined by Leonid Rozenblit and Frank Keil in their 2002 research paper, The misunderstood limits of folk science: an illusion of explanatory depth. From the abstract:

People feel they understand complex phenomena with far greater precision, coherence, and depth than they really do; they are subject to an illusion—an illusion of explanatory depth. The illusion is far stronger for explanatory knowledge than many other kinds of knowledge, such as that for facts, procedures or narratives. The illusion for explanatory knowledge is most robust where the environment supports real-time explanations with visible mechanisms.

But consider this: “Fake it ‘til you make it” can be seen as the internal battle cry of people who feel out of their depth. Despite their insecurities, they still pursue what they desire. The imaginary mask of confidence that hides their shortcomings becomes the buoy that keeps them afloat. Oftentimes they end up doing what they set out to do and exceeding expectations. Sometimes they even morph into the person they were pretending to be in the first place. It’s an idealistic outcome, but it does happen.

Needless to say, we must not forget the potentially disastrous outcomes that occur when this transformation doesn’t happen and they go on fooling others and themselves instead.

If we acknowledge our ignorance, the empty space in our knowledge that needs to be filled, we need to consider carefully how we say “I don’t know.” For example: “Right now I don’t have the answer, but I will find out for you.” “I don’t know, but I know someone who does.” Or simply, “I’m sorry, but as much as I wish I could help you, I’m just not the right person to solve this problem; I don’t want to put your project in jeopardy.”

Depending on who you are, facing this reality can be uncomfortable, terrifying, or liberating.

Daniel Kahneman Answers

In one of the more in-depth and wide-ranging Q&A sessions the Freakonomics blog has run, Daniel Kahneman, whose new book is called Thinking, Fast and Slow, answered 22 questions posted by readers.

Three of the questions that caught my attention:

Q. As you found, humans will take huge, irrational risks to avoid taking a loss. Couldn’t that explain why so many Penn State administrators took the huge risk of not disclosing a sexual assault?

A. In such a case, the loss associated with bringing the scandal into the open now is large, immediate and easy to imagine, whereas the disastrous consequences of procrastination are both vague and delayed. This is probably how many cover-up attempts begin. If people were certain that cover-ups would have very bad personal consequences (as happened in this case), we may see fewer cover-ups in future. From that point of view, the decisive reaction of the board of the University is likely to have beneficial consequences down the road.

Q. Problems in healthcare quality may be getting worse before they get better, and there are countless difficult decisions that will have to be made to ensure long-term system improvement. But on a daily basis, doctors and nurses and patients are each making a variety of decisions that shape healthcare on a smaller but more tangible level. How can the essence of Thinking, Fast and Slow be extracted and applied to the individual decisions that patients and providers make so that the quality of healthcare is optimized?

A. I don’t believe that you can expect the choices of patients and providers to change without changing the situation in which they operate. The incentives of fee-for-service are powerful, and so is the social norm that health is priceless (especially when paid for by a third party). Where the psychology of behavior change and the nudges of behavioral economics come into play is in planning for a transition to a better system. The question that must be asked is, “How can we make it easy for physicians and patients to change in the desired direction?”, which is closely related to, “Why don’t they already want the change?” Quite often, when you raise this question, you may discover that some inexpensive tweaks in the context will substantially change behavior. (For example, we know that people are more likely to pay their taxes if they believe that other people pay their taxes.)

Q. How can I identify my incentives and values so I can create a personal program of behavioral conditioning that associates incentives with the behavior likely to achieve long-term goals?

A. The best sources I know for answers to the question are included in the book Nudge by Richard Thaler and Cass Sunstein, in the book Mindless Eating by Brian Wansink, and in the books of the psychologist Robert Cialdini.


From Daniel Kahneman Answers Your Questions

Rating Teachers is Educational Seduction

One of the most interesting studies I’ve come across is the case of Dr. Myron L. Fox.

Dr. Fox, an authority on the application of mathematics to human behavior, presented a lecture on “Mathematical Game Theory as Applied to Physician Education” to a group of highly trained educators. These educators were then asked to rate Dr. Fox’s lecture for educational content. Would a group of highly trained educators give the appropriate rating to Dr. Fox?

Only problem? The lecture was rigged from the start. The real goal of the study was to see if “an experienced group of educators participating in a new learning situation can feel satisfied that they have learned despite irrelevant, conflicting, and meaningless content conveyed by the lecturer.”

That’s right, Dr. Fox was a fraud—an actor designed to look distinguished and sound authoritative. His source material was nothing more than a sufficiently understandable scientific article intended for lay readers. Dr. Fox was coached to present his topic with “an excessive use of double talk, neologisms, non sequiturs, and contradictory statements.” All of this was to be “interspersed with parenthetical humor and meaningless references to unrelated topics.”

It turns out that student ratings of educators depend largely on personality variables and not educational content.

But we can take this a little further.

Consider the case of Panagiotis Ipeirotis, a computer science professor at New York University’s Stern School of Business, who recently caught at least 20% of his students cheating.

Ipeirotis confronted his students, and by the end of the semester 22 of the 108 students in his class had admitted to cheating on assignments. Doing what any educator should do, Ipeirotis levied poor grades on the students who cheated.

But in a classroom students have the power. “When it came time to fill out teacher evaluations, the students hit their professor hard, and his average rating went down about a point. As a result, the newly tenured professor received the lowest annual salary increase he has ever gotten, and the school specifically cited the lower evaluation score, he says.”

So, what an interesting incentive system this is: (1) teacher pay is driven, at least in part, by student ratings; (2) student ratings can be manipulated by reducing educational content and focusing on being “liked”; and (3) teachers who catch students cheating can be punished by those very students.