Tag: Mental Models

Towards a Greater Synthesis: Steven Pinker on How to Apply Science to the Humanities

The fundamental idea behind Farnam Street is to learn to think across disciplines and synthesize, using ideas in combination to solve problems in novel ways.

An easy example would be to take a fundamental idea of psychology like the concept of a near-miss (deprival super-reaction) and use it to help explain the success of a gambling enterprise. Or, similarly, using the idea of the endowment effect to help explain why lotteries are a lot more successful if you allow people to choose their own numbers. Sometimes we take ideas from hard science, like the idea of runaway feedback (think of a nuclear chain reaction feeding on itself), to explain why small problems can become large problems or small advantages can become large ones.
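
To make the runaway-feedback model concrete, here is a minimal sketch in Python (the market shares and sensitivity figure are purely illustrative assumptions) of how a small initial advantage can feed on itself:

    # A minimal sketch of runaway (positive) feedback: each period, customers
    # flow toward whichever competitor is already bigger, so a small head
    # start compounds into dominance. All numbers are illustrative.

    def simulate(share_a=0.55, share_b=0.45, periods=20, sensitivity=0.5):
        for _ in range(periods):
            # Growth is proportional to current share: the leader grows faster.
            grown_a = share_a * (1 + sensitivity * (share_a - share_b))
            grown_b = share_b * (1 + sensitivity * (share_b - share_a))
            total = grown_a + grown_b
            share_a, share_b = grown_a / total, grown_b / total
        return share_a, share_b

    a, b = simulate()
    print(f"A 55/45 split becomes roughly {a:.0%}/{b:.0%} after 20 periods")

Each period the larger competitor grows a little faster than the smaller one, so the gap itself widens the gap; that loop is the entire mechanism.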

This kind of reductionism and synthesis helps one understand the world at a fundamental level and solve new problems.

We’re sometimes asked about untapped ways that this thinking can be applied. In hearing this, it occasionally seems that people fall into the trap of believing all of the great cross-disciplinary thinking has been done. Or maybe even that all of the great thinking has been done, period.

Harvard psychologist Steven Pinker is here to say we have a long way to go.

We’ve written before about Pinker’s ideas on a broad education and on writing, but he’s also got a great essay on Edge.org called Writing in the 21st Century wherein he addresses some of the central concepts of his book on writing — The Sense of Style. While the book’s ideas are wonderful, later in the article he moves to a more general point useful for our purposes: Systematic application of the “harder” sciences to the humanities is a huge untapped source of knowledge.

He provides some examples that are fascinating in their potential:

This combination of science and letters is emblematic of what I hope to be a larger trend we spoke of earlier, namely the application of science, particularly psychology and cognitive science, to the traditional domains of humanities. There’s no aspect of human communication and cultural creation that can’t benefit from a greater application of psychology and the other sciences of mind. We would have an exciting addition to literary studies, for example, if literary critics knew more about linguistics. Poetry analysts could apply phonology (the study of sound structure) and the cognitive psychology of metaphor. An analysis of plot in fiction could benefit from a greater understanding of the conflicts and confluences of ultimate interests in human social relationships. The genre of biography would be deepened by an understanding of the nature of human memory, particularly autobiographical memory. How much of the memory of our childhood is confabulated? Memory scientists have a lot to say about that. How much do we polish our image of ourselves in describing ourselves to others, and more importantly, recollecting our own histories? Do we edit our memories in an Orwellian manner to make ourselves more coherent in retrospect? Syntax and semantics are relevant as well. How does a writer use the tense system of English to convey a sense of immediacy or historical distance?

In music the sciences of auditory and speech perception have much to contribute to understanding how musicians accomplish their effects. The visual arts could revive an old method of analysis going back to Ernst Gombrich and Rudolf Arnheim in collaboration with the psychologist Richard Gregory. Indeed, even the art itself in the 1920s was influenced by psychology, thanks in part to Gertrude Stein, who as an undergraduate student of William James did a wonderful thesis on divided attention, and then went to Paris and brought the psychology of perception to the attention of artists like Picasso and Braque. Gestalt psychology may have influenced Paul Klee and the expressionists. Since then we have lost that wonderful synergy between the science of visual perception and the creation of visual art.

Going beyond the arts, the social sciences, such as political science could benefit from a greater understanding of human moral and social instincts, such as the psychology of dominance, the psychology of revenge and forgiveness, and the psychology of gratitude and social competition. All of them are relevant, for example, to international negotiations. We talk about one country being friendly to another or allying or competing, but countries themselves don’t have feelings. It’s the elites and leaders who do, and a lot of international politics is driven by the psychology of its leaders.

In this short section alone, Pinker realistically suggests that we can apply:

  • Linguistics to literature
  • Phonology and psychology to poetry
  • The biology of groups to understand fiction
  • The biology of memory to understand biography
  • Semantics to understand historical writing
  • Psychology and biology to understand art and music
  • Psychology and biology to understand politics

Turns out, there’s a huge amount of thinking left to be done. Effectively, Pinker is asking us to imitate the scientist Linus Pauling, who sought to systematically understand chemistry by using the next most fundamental discipline, physics, an approach that led to great breakthroughs and a consilience of knowledge between the two fields that is now taken for granted in modern science.

Towards a Greater Synthesis

Even if we’re not trying to make great scientific advances, think about how we could apply this idea to all of our lives. Fields like basic mathematics, statistics, biology, physics, and psychology provide deep insight into the “higher level” functions of humanity like law, medicine, politics, business, and social groups. Or, as Munger has put it, “When you get down to it, you’ll find worldly wisdom looks pretty darn academic.” And it isn’t as hard as it sounds: We don’t need to understand the deep math of relativity to grasp the idea that two observers can see the same event differently depending on their perspective. The rest of the world’s models are similar, although some mathematical fluency is necessary.

Pinker, like Munger, doesn’t stop there. He also believes in what Munger calls the ethos of hard science, which is a way of rigorously considering the problems of the practical world.

Even beyond applying the findings of psychology and cognitive science and social and affective neuroscience, it’s the mindset of science that ought to be exported to cultural and intellectual life as a whole. That consists in increased skepticism and scrutiny about factual conventional wisdom: How much of what you think is true really is true if you go to the numbers? For me this has been a salient issue in analyzing violence, because the conventional wisdom is that we’re living in extraordinarily violent times.

But if you take into account the psychology of risk perception, as pioneered by Daniel Kahneman, Amos Tversky, Paul Slovic, Gerd Gigerenzer, and others, you realize that the conventional wisdom is systematically distorted by the source of our information about the world, namely the news. News is about the stuff that happens; it’s not about the stuff that doesn’t happen. Human risk perception is affected by memorable examples, according to Tversky and Kahneman’s availability heuristic. No matter what the rate of violence is objectively, there are always enough examples to fill the news. And since our perception of risk is influenced by memorable examples, we’ll always think we’re living in violent times. It’s only when you apply the scientific mindset to world events, to political science and history, and try to count how many people are killed now as opposed to ten years ago, a hundred years ago, or a thousand years ago that you get an accurate picture about the state of the world and the direction that it’s going, which is largely downward. That conclusion only came from applying an empirical mindset to the traditional subject matter of history and political science.

Nassim Taleb has been on a similar hunt for a long time (although, amusingly, he doesn’t like Pinker’s book on violence at all). The question is relatively straightforward: How do we know what we know? Traditionally, what we know has simply been based on what we can see, something now called the availability bias. In other words, because we see our grandmother live to 95 years old while eating carrots every day, we think carrots prevent cancer. (A conflation of correlation and causation.)

But Pinker and Taleb call for a higher standard called empiricism, which requires pushing beyond anecdote into an accumulation of sound data to support a theory, with disconfirming examples weighted as heavily as confirming ones. This shift from anecdote to empiricism led humanity to make some of its greatest leaps of understanding, yet we’re still falling into the trap regularly, an outcome which itself can be explained by evolutionary biology and modern psychology. (Hint: It’s in the deep structure of our minds to extrapolate.)

Learning to Ask Why

Pinker continues with a claim that Munger would dearly appreciate: The search for explanations is how we push into new ideas. The deeper we push, the better we understand.

The other aspect of the scientific mindset that ought to be exported to the rest of intellectual life is the search for explanations. That is, not to just say that history is one damn thing after another, that stuff happens, and there’s nothing we can do to explain why, but to relate phenomena to more basic or general phenomena … and to try to explain those phenomena with still more basic phenomena. We’ve repeatedly seen that happen in the sciences, where, for example, biological phenomena were explained in part at the level of molecules, which were explained by chemistry, which was explained by physics.

There’s no reason that this process of explanation can’t continue. Biology gives us a grasp of the brain, and human nature is a product of the organization of the brain, and societies unfold as they do because they consist of brains interacting with other brains and negotiating arrangements to coordinate their behavior, and so on.

This idea certainly takes heat. The biologist E.O. Wilson calls it Consilience, and has gone as far as saying that all human knowledge can eventually be reduced to extreme fundamentals like mathematics and particle physics. (Leading to something like The Atomic Explanation of the Civil War.)

Whether or not you take it to such an extreme depends on your boldness and your confidence in the mental acuity of human beings. But even if you think Wilson is crazy, you can still learn deeply from the more fundamental knowledge in the world. This push to reduce things to their simplest explanations (but not simpler) is how we array all new knowledge and experience on a latticework of mental models.

For example, instead of taking Warren Buffett’s dictum that markets are irrational on its face, try to understand why. What about human nature and the dynamics of human groups leads to that outcome? What about biology itself leads to human nature? And so on. You’ll eventually hit a wall, that’s a certainty, but the further you push, the more fundamentally you understand the world. Elon Musk calls this first principles thinking and credits it with helping him do things in engineering and business that almost everyone considered impossible.

***

From there, Pinker concludes with a thought that hits near and dear to our hearts:

There is no “conflict between the sciences and humanities,” or at least there shouldn’t be. There should be no turf battle as to who gets to speak about what matters. What matters are ideas. We should seek the ideas that give us the deepest, richest, best-informed understanding of the human condition, regardless of which people or what discipline originates them. That has to include the sciences, but it can’t come only from the sciences. The focus should be on ideas, not on people, disciplines, or academic traditions.


Still Interested?
Start building your mental models and read some more Pinker for more goodness.

Shane Parrish on Mental Models, Decision Making, Charlie Munger, Farnam Street, And More

An interview I gave that I think you’ll enjoy as I talk about reading, mental models, investing, learning and more.

***

Shane Parrish is the curator for the popular Farnam Street Blog, an intellectual hub of curated interestingness that covers topics like human misjudgment, decision making, strategy, and philosophy. Shane is a strategist for both individuals and organizations and is dedicated to mastering the best of what other people have already figured out.

***

Can you discuss your background and the origins of Farnam Street?
Farnam Street started as a byproduct of my MBA. As I was going through that program it became evident that we were being taught to regurgitate material in a way that made marking easier. We weren’t honing our critical thinking skills or integrating multiple disciplines. We couldn’t challenge anything.

Eventually, I got frustrated. I didn’t give up on the MBA, but I did start using the time I had previously invested in homework to focus on my own learning and development. At first it was mostly academic. I started going back to the original Kahneman and Tversky papers, and other material that was journal based, because I figured I’d probably never have access to such a wealth of journals again outside of school.

So I started the website and it was really just for me, not for anybody else. The original URL of the website was the zipcode for Berkshire Hathaway. I didn’t think anyone would find it. It eventually grew into a community of people interested in continuous learning, applying different models to certain problems, and developing ways to improve our minds in a practical way. The strong reception surprised me at first, but now the community has become very large, stimulating, and encouraging. I should point out that I don’t come up with anything original myself—I’m just trying to master the best of what other people like Buffett, Kaufman, Bevelin, and Munger have already figured out. In fact, that’s our tagline. It reminds me of something Munger said once when asked what he learned from Einstein, and he replied, only half-jokingly, “Well he taught me relativity. I wasn’t smart enough to figure that out on my own.” That seems like a bit of a wiseass remark, but there’s some untapped wisdom there.

What are your motivations for Farnam Street?
I want to embrace the opportunity I have, which has been created largely through luck, and I want to give readers and subscribers enormous value in three ways.

First, I want to help them make better decisions. To do our best to figure out how the world really works. Second, I want to help people discover new interests and connections across disciplines. Finally, I want to help people explore what it means to live a good life and how we should live. I hope by sharing my intellectual and personal journey I can help people better navigate theirs.

It seems pretty clear that you have a profound admiration for investors. Farnam Street is the street Berkshire Hathaway is located on, and you discuss Charlie Munger’s views quite a bit. What appeals to you about investing?

For Munger and Buffett specifically, it’s not necessarily that they’re just investors, it is that they’ve modeled a path of life that resonates with me. I also appreciate the values that are associated with their investment success. I think what they’ve done is they’ve taken other people’s ideas, stood on the shoulders of giants, so to speak, and applied those ideas in better ways than the people who came up with the ideas. For example, with regard to psychological biases and Kahneman’s work, Munger and Buffett have found a way to institutionalize this to a point where they can actually avoid most of these biases.

Whereas Kahneman himself just says something along the lines of, “I’ve studied biases all my life, but I’m not better.” Yet, these two guys from Omaha actually figured out how to be better.

It’s not just Kahneman and human biases. They’ve done it in a variety of disciplines like Michael Porter’s work on Competitive Strategy. They separately derived the same basic ideas, except in a way that gives them an enormous investing advantage. To my knowledge, Michael Porter has not done that. Of course, he may not have been trying to do so. Another great example is Ben Graham. He provided the bedrock that Warren Buffett built his brain on, but if you really think about it, Buffett was and is a much better investor. And lastly, regarding Munger, in my opinion, his method of organizing practical psychology is a lot better than the actual residents of that discipline, even the people who “taught” him the ideas through books.

Returning to investing, the field resonates with me because investors have skin in the game. Investors have clear accountability and measurable performance. That contrasts with many other types of organizations. For the most part, investors are searching for the truth and constantly looking for ways they could be wrong and that they could be fooling themselves. There’s a pretty clear scoreboard.

Are you an investor yourself?
Yes. I used to be involved with a small registered investment advisor based in Massachusetts. I still invest personally and hope to return more of my focus to investing in the future. (Which I’ve now done at Syrus Partners.) Right now I’m focused on Farnam Street, which I see as the biggest opportunity ahead of me and the opportunity that I’m most excited about. There’s a lot to do.

Can you talk about what you have planned for Farnam Street?
I just hired somebody to help out at Farnam Street for the first time. His name is Jeff Annello. He’s amazing. It’s become more of a sustainable business. We are developing products. We have two courses coming out next year that we’re incredibly excited about (The Art of Reading). I think we have put over a year’s effort into one of the products, and we’re just starting the other one right now which will be released next fall.

Adapting your reading style to consider the type of material you are reading and why you are reading it makes you much more effective at skimming, understanding, synthesizing, and connecting ideas. If you take the same approach to reading everything, you will end up overwhelmed and frustrated.

We are launching “The Art of Reading” early in the year. That course is aimed at adapting Mortimer Adler’s theory of reading to the modern age, and giving people a structured way of going about learning from books, as opposed to simply reading them. Seems simple, but most of us never really pick it up.

Today we are bombarded constantly with information, and we often read all types of material in the same way. But that’s pretty ineffective. We don’t have to read everything the same way.

Reading is something you seem to know quite a lot about, but in a recent post, you discussed that you are purposefully reading fewer books. What is your thinking around that decision?
I fell into a trap with reading. It almost became a personal challenge that you can easily get wrapped up in. In 2014, I was basically reading a book every few days. I think I ended the year with over 140 books read, but I must have started at least 300. I realized I was reading just to finish the book. That meant I wasn’t getting as much out of it as I should. I ended up wasting a lot of time using that approach and it also impacted what I read. You have these subtle pressures to read smaller books and to digest things in a really quick way. I wasn’t spending enough time synthesizing the material with what I already knew and honing my understanding of an idea.

It’s not about how many books you read but what you get out of the books you read.

It’s not about how many books you read but what you get out of the books you read. One great book, read thoroughly and understood deeply, can have a more profound impact on your life than reading 300 books without really understanding the ideas in depth and having them available for practical problem-solving.

Can you discuss some of your techniques for absorbing and synthesizing as much information as possible?
There is a lot that can be done after simply finishing a chapter. I like to summarize the chapter in my own words. I also like to apply any learnings from the chapter to my life, either by looking backward to see where concepts may have applied or by looking forward to see if it might make sense to incorporate something into my daily routine. I think the reason to do that is twofold. One is to give me a better understanding of that learning, and two is really a check and balance, and a feedback loop. Have you ever watched TV and somebody comes in on a commercial and says, “What are you watching,” and you’re like, “I have no idea,” but you’ve been sitting there 20 minutes? Well, we can do that with books too. You’ll start reading, and paragraphs will fly by, and then you’ll have no idea what you were reading. It’s fine if you’re reading for entertainment; you might be able to catch up later. But if you’re reading for understanding, that’s something you want to avoid.

Part of what I want to do is develop a feedback process to make sure that I’m not doing that.

I try to make extensive use of book covers for notes about areas to revisit, potential connections to other concepts, and outlining the structure of the author’s argument. After I’ve finished a book, I usually put it on my desk for a week or two, let it sit, and then I come back to it. I reread all of my margin notes, my underlines, and highlights. Then I apply a different level of filtering to it and make a decision about what I want to do with the information now.

You also talk about the Feynman technique in some of your posts.

The Feynman technique is essentially explaining a concept or idea to yourself, on a piece of paper, as if you were teaching it to someone else with little background knowledge. When you’re learning something new, it’s all about going back and making sure you understand it.

Can you explain it in simple, jargon-free, language? Can you explain it in a way that is complete and demonstrates understanding? Can you take an idea and apply it to a problem outside of the original domain? Take out a piece of paper and find out.

I think that being able to do this at the end of a book is really important, especially if it’s a new subject for you. The process of doing that shows you where your gaps are; this is important feedback. If you have a gap in your understanding, you can circle back to the book to better understand that point. If you can’t explain it to somebody else, then you probably don’t understand it as well as you think you do. It doesn’t mean you don’t understand it, but the inability to articulate it is definitely a flag that it’s something you need to circle back to, or pay more attention to.

“Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.”

— Andy Benoit

It seems like feedback mechanisms are a key part of your approach.

I think at the heart of it, you want to be an active reader, not a passive one. These types of activities make sure that you’re reading actively. Writing notes in a book, for example, is really just a way to pound what you’re reading into your brain. You need engagement.

In a recent post, you brought up Peter Thiel’s concept of a “secret”. Essentially, what important truth do very few people agree with you on? I’d be really curious if you have something in mind that would fit this concept.

Ever since I came across this question I’ve been toying with it over and over in my head. I’m not sure I have a decent answer, but I’ll offer one of the things that I run into a lot but couldn’t really describe until Peter Kaufman pointed me to a quote by Andy Benoit, who wrote a piece in Sports Illustrated a while back. Benoit said “Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.” I think he nailed it. This explains Berkshire Hathaway, the New England Patriots, Costco, Glenair, and a host of amazing organizations.

I’ve long had a feeling about this but couldn’t really pull it out of my subconscious into my conscious mind before. Benoit gave me the words. I think we generally believe that things need to be complicated but in essence, there is great value in getting the simple things right and then sticking with them, and that takes discipline. As military folks know, great discipline can beat great brainpower.

I know of many companies that invest millions of dollars into complicated leadership development programs, but they fail to treat their people right, so the return on this investment isn’t even positive; it’s negative, because it fosters cynicism. Or consider companies that focus on complicated incentive plans—they never work. It’s very simple. If you relentlessly focus on the basics and develop a good corporate culture—like the one Ken Iverson mentions in his book Plain Talk—you surpass people who focus on the complex. Where I might disagree with Benoit a little is that I don’t think these are unrecognized as much as under-appreciated. People think the catechism has to be more complicated.

You discuss the power of multidisciplinary learning. Do you have any example where the multidisciplinary learning has been especially powerful for you? Munger has a number of examples of him arriving at a solution faster than an expert in a field as a virtue of Munger using concepts from other fields.

If you were a carpenter you wouldn’t want to show up for a job with an empty toolbox or only a hammer. No, you’d want to have as many different tools at your disposal as possible.

Nothing sucks up your time like poor decisions.

Furthermore, you’d want to know how to use them. You can’t build a house with only a hammer. And there is no point in having a saw in your toolbox if you don’t know how to use it. In this sense, we’re all carpenters. Only, our tools are the big ideas from multiple academic disciplines. If we have a lot of mental tools and the knowledge of how to wield them properly, we can start to think rationally about the world.

These tools allow us to make better initial decisions, help us better scramble out of bad situations, and think critically about what other people are telling us. You can’t overestimate the value of making good initial decisions. Nothing sucks up your time like poor decisions and yet, perversely, we often reward people for solving the very problems they should have avoided in the first place.

It’s a little weird, but in some organizations, you’re better off screwing up and fixing it than making a simple, correct, decision the first time. Think about portfolio managers trumpeting how they’ve “smartly sold” a stock at a loss of 20%, saving them a loss of 50%, but which a wiser person never would have purchased in the first place. The sale looks smart, but the easier decision would have been avoiding misery from the get-go. That kind of thing happens all over the place.

Multidisciplinary thinking also helps with cognitive diversity. In our annual workshop on decision making, Re:Think Decision Making, we talk about the importance of looking at a problem in multiple dimensions to better understand reality and identify the variables that will govern the situation—whether it’s incentives, adaptation, or proximity effects. But the only way you’re going to get to this level of understanding is to hold up the problem and look at it through the lens of multiple disciplines. These models represent how the world really works. Why wouldn’t you use them?

One important thing we can learn from ecology, for example, is second-order thinking—“and then what?” I think that a lot of people forget that there’s a next phase to your thinking, and there’s a second and third order effect. I’ve been in a lot of meetings where decisions are made and very few people think to the second level. They get an idea that sounds good and they simply stop thinking. The brain shuts down. For example, we change classification systems or incentive systems in a way that addresses the visible problems, but we rarely anticipate the new problems that will arise. It’s not easy. This is hard work.

Another example is when a salesman comes into a company and offers you some software program he claims is going to lower your operating costs and increase your profits. He’s got all these charts on how much more competitive you’ll be and how it will improve everything. You think this is great. You’re sold. Well, the second order thinking is to ask, how much of those cost savings are going to go to you and how much will be passed on to the customer? Well to a large extent that depends on the business you’re in. However, you can be damn sure the salesman is now knocking on your competitors’ door and telling them you just bought their product. We know thanks to people like Garrett Hardin, Howard Marks, and disciplines like ecology that there are second and third order effects. This is how the world really works.

Munger’s got a brain that I don’t have. I have to deal with what I’ve got. I’m not trying to come up with the fastest solution to a problem. It’s great to have a 30-second mind, but it’s not a race. Part of the issue I see over and over again is not that people don’t have the cognitive tools, but rather they don’t have time to actually think about a problem in a three-dimensional way.

If you think you’re going to come up with good solutions to complicated problems in 30 seconds and your name is not Charlie Munger, I wish you luck.

The rest of us should learn to say “I don’t know” or “Let me think about it” about ten times more frequently than we do.

It makes sense that second-order and third-order effects are underappreciated.
I think a lot of people get incentives wrong and it has disastrous implications on corporate culture. Let’s look at it from another angle – how would you intentionally design an incentive system that functioned horribly? You’d make it so complicated that few people understood it. You’d make everyone measured on individual and not team success. You’d have different variables and clauses and sub-clauses. No one would understand how their work impacts someone else. To make it even worse, you’d offer infrequent and small rewards. You’d offer a yearly bonus of maybe 5% of salary or something. And of course, you’d allow the people in it to game the system and the people running it to turn it into politics. I think we can all agree those are not desired outcomes and yet that is how many incentive systems work.

I think it’s important to focus on getting better at making decisions over time. It is about making the process slightly better than it was last time.

Do you have any thoughts on particularly powerful concepts or process implementations that can help investment organizations pursue investment excellence?
I think it’s important to focus on getting better at making decisions over time. It is about making the process slightly better than it was last time. These improvements compound like money. You really have to flip it on its head. What’s likely to not work well? Generally speaking, analysts tend to have a focused view of the world and they stay in their lane. Specialization certainly helps develop specific knowledge, but it also makes it hard to learn from the guy or girl next to you who has knowledge in a different industry, so you’re not improving your intuition as much as you’d probably want. It’s like chess. People once thought great chess players were great thinkers, but they’re not any better at general problem-solving than the rest of us. They’re just great chess players. Investment analysis is often the same way, especially if you’re siloed in some industry analyst position. It’s probably not making you a great thinker, but you are learning more about your industry.

In order to have the organization learn and get better, we need to expose our decision-making process to others.

In order to have the organization learn and get better, we need to expose our decision making process to others. One way to do this is to highlight the variables we think are relevant. Start making clear why we made our decisions and the range of outcomes we thought were possible. It needs to be done in advance. A lot of people do this through a decision journal. Some accomplish this through a discussion that flushes out which variables you think will dominate the outcome and most importantly, why. Not only does that facilitate an environment where others can challenge your thought process, but over time it enables them to get a good feel for what you think are the key variables in that particular industry. That helps me expand my circle of competence. You don’t want an organization where the automobile analyst knows nothing about banking and the chemicals guy knows little about consumer products, and then a portfolio manager with a little surface knowledge of everything is pulling the trigger. I have never seen that work, but I’ve seen a lot of people try. The “everyone’s a generalist” approach has its own limitations, like a crippling lack of specialized knowledge.

So, obviously, any investment organization has to find a middle ground. How could it be otherwise? You must start with this basic and obvious truth to solve the problem.

Another challenge in the investment world is dealing with the sheer volume of information. I get questions from portfolio managers all the time about how best to keep up with the information flow. They say “I get 500 emails a day. I have researchers’ work come to me at all hours. I have thousands of pages of material to read.”

Clearly, Berkshire Hathaway has done a really good job with this, with basically two guys doing all of the information processing—two really smart guys, but only two.

How do they do that?

Well, part of the answer is that Buffett and Munger are continuously learning about companies that do not change rapidly. They’re learning about companies that change slowly. That in and of itself is a major advantage. They also operate in industries in which they know the key variables determining an organization’s success or failure, and, more importantly, they ignore the industries where they don’t. It’s a huge step to be able to say to yourself “Look, I’m going to miss some enormous winners that were incredibly hard to see ahead of time. I’m OK with that.” Buffett and Munger can do it, but most investors struggle. So they stretch and invest in things where they really cannot accurately predict the odds of success or failure, all forces considered.

Probabilities being what they are, if you consistently invest in things with middling odds, you’ll have middling results. Again, how could it be otherwise? The key is knowing the difference between an obviously attractive situation and a difficult-to-predict one and being able to act on the former and sit on the latter. Of course, I’m over-simplifying a bit, but you can’t get around the fact that reality is reality. You have to find a way. And this will help you solve your information flow problem, because you’ll be tossing a lot of ideas out very quickly.

It seems like you would prefer the Buffett and Munger model over the approach of the average hedge fund with specialists?
If my job is being a neurosurgeon, I need to keep up-to-date with all the latest neurosurgery papers, academic articles, books, and talks because I’m very specialized in that one particular area and it’s relevant to my job and relevant to my livelihood.

If you look at investing holistically you can’t do that for every company in every industry. In my understanding, part of the reason Buffett and Munger have accumulated so much knowledge is that they focus on learning things that change slowly. That makes it easier to identify potential outcomes and determine the relevant variables. David Foster Wallace had this great quote, “Bees have to move very fast to stay still.” And that’s what most of us do. We move a lot to stay in the same place. Buffett and Munger are getting further ahead each day.

Unless physics changes, for example, it’s unlikely that we’ll see the development of more efficient ways to move bulk freight. It doesn’t seem subject to technological disruption, but instead will likely be aided by technology. Technology helps improve the management of your rail network, but it’s not going to replace the entire network anytime soon. I think that Berkshire is actually moving away from uncertainty by pursuing companies like this. If you don’t know the range of outcomes, you will have a hard time assessing probabilities. One of the things that decision journals help identify is outcomes outside of what we expected. That’s a very humbling experience. After identifying possible outcomes and applying confidence levels, it’s humbling to get it so wrong.

You have also studied an investment firm that’s probably as different from Berkshire Hathaway as possible with your most recent podcast with Chris Dixon of Andreessen Horowitz. What are your thoughts on good decision making as applied in the venture capital world and how is it different than Berkshire Hathaway?

Chris was an excellent guest to have on The Knowledge Project. He operates in Venture Capital—a world I don’t get much exposure to. He has insight on things I know very little about: venture funding, how to structure a venture capital firm so that you are adding value, etc. And they’ve been very successful.

I think we’re largely operating in unprecedented territory given the magnitude of private valuations. In past decades, companies IPO’d at much lower valuations so public market investors could more easily participate in their success. I don’t know how this plays out, but talking to Chris was fascinating.

Andreessen Horowitz has a very different operational approach as compared to Berkshire Hathaway. As I understood it, they are trying to add value to the entrepreneurs. Also, they’ve moved away from a business- or idea-based sourcing process to one that is almost exclusively focused on the entrepreneur. That directly contradicts some of Buffett’s thoughts on the relative importance of a management team versus the underlying business.

It makes sense that they would have different approaches. I think it’s important to understand that there are things that we want to have in our mental tool box. But part of being an effective craftsman is knowing when they work and when they don’t. You can’t just pull out random tools and expect them to work.

In 2013, I did some consulting work on improving innovation in organizations and the most common thing that people were doing at the time to solve the innovation problem was copying Google’s 20% of time spent on independent innovative ideas.

You need to understand how that fits with the company culture.

I found this interesting for a number of reasons. It surprised me that every executive had it on the tip of their tongue, but there’s no large sample size for a successful innovation practice like this 20% idea. Google and, I think, 3M are the two most prominent examples, and Google, at the time, had only been around for roughly 15 years. That’s a pretty small sample size for continuous innovation. Also, you need to understand how that fits with the company culture, and why it works, even when you can see that it works. Why does it work at Google? Is it because of how it fits in the overall culture? The problem I see is that people are taking one piece of a large puzzle and thinking that it’s going to solve their problem. It might help. It might not. It’s just a tool. It reminds me of the group of blind people touching the different parts of the elephant.

Also, some of these innovation projects get done for the wrong reasons, and with the wrong incentives. If my boss asks me for ideas to help the company innovate and I give him an idea that sounds good, one that subconsciously reminds him of an article he read in Fortune about innovation, isn’t that basically good enough for me as an employee? Does it even matter if it works? In most organizations, am I really going to be held responsible for the success or failure of my innovation prescription? The organization might suffer, but will I suffer personally? Probably not. My lack of ability to think the problem through will probably be forgotten in time, so long as the idea sounded good and relevant at the time and was defensible via PowerPoint. This is one reason hiring consultants rarely works as well as hoped.

So, we copy Google’s twenty percent innovation time. They’re an innovative company; they’re hip; they’re cool; we’re going to copy them. Okay, well, we can do that. It’s a good story. What gets lost is a potentially useful discussion like, “Maybe we should remove the things in our environment that take away from natural innovation, like all these meetings.” That’s a much tougher conversation, but just like taking away sugar works better than adding broccoli to your diet, taking things out of the corporate culture is often a better solution than adding new stuff. Munger has us paying attention to incentives because they really are driving the train. You have to get it right.

One big theme for you is the concept of life-long learning. What is your motivation to pursue it? Munger has called it a moral duty. Do you have similar feelings?
I wish I were as eloquent as him. I’ve always had to work harder. You just have to keep getting better every day. You have to keep learning. If you’re going to accomplish what you want to accomplish, it’s probably not through going home and watching Netflix every night, right? You have to learn how the world works. We have a huge statistical sample size of things that aren’t changing. There is an excellent letter by Chris Begg at East Coast Asset Management that discusses Peter Kaufman’s thoughts on this. Physics, math, and biology are things that change very, very slowly, if at all. Learning things in those disciplines is good. It’s practical, because that’s how the world works. Those are things that don’t change over time.

I think that, for me, it’s just become “How can I pass people that are smarter than me?” I think if I can get incrementally better every day, compounding will kick in and over a long enough time, I’m going to achieve the things that I want in life.
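
A rough back-of-the-envelope sketch of that compounding, assuming a purely illustrative 1% daily improvement rate:

    # Back-of-the-envelope compounding with an assumed, illustrative rate:
    # a 1% daily improvement multiplies rather than adds.
    daily_rate = 0.01
    days = 365
    compounded = (1 + daily_rate) ** days  # ~37.8x after a year
    additive = 1 + daily_rate * days       # ~4.65x if the gains merely added up
    print(f"compounded: {compounded:.1f}x vs additive: {additive:.2f}x")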

What could be better than constantly learning new things and discovering that you’re still curious? Most of us forget what it’s like to be six years old and asking “why?” all the time and trying to understand why things operate the way they do. It’s hard to still do that, but you can still carry that wonder with you into life and try to understand why things are happening and why success or failure happens.

Avoiding stupidity is easier than seeking brilliance. But that by itself is suboptimal. You also want to copy models of success.

We don’t necessarily have to come up with all of this stuff ourselves. We can see a better model and adopt it, or adopt the parts of it that will help us along. Letting go of our attachment to our own ideas is really important.

I don’t come up with almost anything that’s original. I aggregate and synthesize other people’s thoughts and put them into context for people. Those are the things I like to focus on, and I have a passion for doing it. I’m doing it anyway because I get a lot of value out of reading, learning, and exploring the world, and I share that with people.

With regard to Mental Models, you spend a lot of time discussing their importance, but you also highlight their shortcomings. Can you discuss your view of the value of mental models?

It’s important to understand how we are likely to fool ourselves. Aside from the psychological factors, which Munger and Bevelin talk about extensively, there are other ways.

For example, we run organizations based on dashboards and metrics and we make decisions based on these numbers. Investors look at financial reports to make investment decisions.

We think that those numbers tell a story and, to some extent, they do. However, they don’t tell the full story. They are limited. For example, a strike-out can be a good thing in baseball. Players who suck statistically in one system can thrive as a part of another – the whole “Moneyball” idea lives here, and the Patriots have been extremely successful with a wide variety of talent. In business, reported depreciation can be widely off. The accounting could be gamed. A tailwind could be benefitting a business temporarily, soon to dissipate. Many companies look their absolute best, on historical figures, just before the big denouement.

“All models are false but some are useful.”

— George Box

There is a great quote by George Box who said “All models are false but some are useful.” Practically speaking, we have to work with reductions—like maps. A map with a scale of one foot to one foot wouldn’t be useful, would it? Knowing that we’re working with reductions of reality, not reality itself, should give us pause. We recently wrote a piece on Farnam Street called “The Map is Not the Territory,” which is a more in-depth exploration of the nuances behind this.

Knowing how to dig in and understand these maps and their limitations is important. A lot of models are core – they don’t change very much. Social proof is real. Incentives do drive human behavior, financial and otherwise. The margin of safety approach from engineering works across many, many practical areas of life. Those are the types of huge, important models you want to focus on as a part of becoming a generally wise person. You need to learn them and learn how to synthesize with them. From there, you layer in the models that are specific to your job or your area of desired expertise. If you’re a bank investor, you’re going to look to attain a deep fluency in bank accounting that a neurosurgeon wouldn’t need. But both the analyst and the surgeon can understand and use the margin of safety idea practically and profitably.

Essentially, they can be powerful if used correctly, but we can also over-apply them in some ways?

They work sometimes and not other times. You need to be aware of limitations. The point here is just to be cautious—the map is not the territory. It doesn’t tell the full story.

Do you have any other investors or companies outside of Berkshire Hathaway that really have some profound thinking or you really love reading their shareholder letters or you’ve learned a lot from? Anything like that that we can talk about?

Berkshire has an incredibly unique model of writing to shareholders, and frankly no one else is as good. One that’s slightly off the beaten path, although it’s become a lot better known over the past few years, is a Canadian company called Constellation Software (CSU). The CEO there is truly doing God’s work as far as how he reports to shareholders. Very clear presentation of the financial performance of the business, and a lucid and honest discussion of what’s going on.

There are two key components to reporting to shareholders well, as I see it. One is presenting, in as clear a way as possible, the results in the prior periods. Presented consistently and honestly over time. The second is being extremely forthcoming about why these figures came out the way they did; good or bad, warts and all. When Blue Chip Stamps was still a reporting company, Munger would write about See’s Candy. What did his summary table show every year? Pounds of candy sold, stores open, total revenue, total profits. The key variables. Then he explained in clear language why See’s was a good business and what had occurred in the most recent period, and if possible, what he foresaw in general for the following year. That’s what we need more of: give investors an updated report of the major drivers and then tell us what happened. Leave out the fluff. You don’t need to write essays like Buffett. Just help us understand the business and what’s going on.

This has been great, Shane. Thanks so much for your time.

The Three Buckets of Knowledge

“Every statistician knows that a large, relevant sample size is their best friend. What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history, you can pick your own number, I picked 20,000 years of recorded human behavior. Those are the three largest sample sizes we can access and the most relevant.”

— Peter Kaufman

When we seek to understand the world, we’re faced with a basic question: Where do I start? Which sources of knowledge are the most useful and the most fundamental?

FS takes its lead here from Charlie Munger, who argued that the “base” of your intellectual pyramid should be the great ideas from the big academic disciplines: mental models. (For a great book on the subject, check out The Great Mental Models series.)

Similarly, Peter Kaufman’s idea, presented above, is that we can learn the most fundamental knowledge from the three oldest and most invariant forms of knowledge: Physics and math, from which we derive the rules the universe plays by; biology, from which we derive the rules life on Earth plays by; and human history, from which we derive the rules humans have played by.

With that starting point, we’ve explored a lot of ideas and read a lot of books, looking for connections among the big, broad areas of useful knowledge. Our search led us to a wonderful book called The Lessons of History, which we’ve previously discussed.

The book is a hundred-page distillation of the lessons learned in 50 years of work by two brilliant historians, Will and Ariel Durant. The Durants spent those years writing a sweeping 11-book, 10,000-page synthesis of the major figures and periods in human history, with an admitted focus on Western civilization. (Although they admirably tackle Eastern civilization up to 1930 or so in the epic Our Oriental Heritage.) With The Lessons of History, the pair sought to derive a few major lessons learned from the long pull.

Let’s explore a few ways in which the Durants’ work connects with the three buckets of human knowledge that help us understand the world at a deep level.

Lessons of Geologic Time

Durant has a classic introduction for this kind of “big synthesis” historical work:

Since man is a moment in astronomic time, a transient guest of the earth, a spore of his species, a scion of his race, a composite of body, character, and mind, a member of a family and a community, a believer or doubter of a faith, a unit in an economy, perhaps a citizen in a state or a soldier in an army, we may ask under the corresponding heads—astronomy, geology, geography, biology, ethnology, psychology, morality, religion, economics, politics, and war—what history has to say about the nature, conduct, and prospects of man. It is a precarious enterprise, and only a fool would try to compress a hundred centuries into a hundred pages of hazardous conclusions. We proceed.

The first topic Durant approaches is our relationship to the physical Earth, a body of knowledge we can place in the first bucket, in Kaufman’s terms. We must recognize that the varieties of geology and physical climate we live in have to a large extent determined the course of human history. (Jared Diamond would agree, that being a major component of his theory of human history.)

History is subject to geology. Every day the sea encroaches somewhere upon the land, or the land upon the sea; cities disappear under the water, and sunken cathedrals ring their melancholy bells. Mountains rise and fall in the rhythm of emergence and erosion; rivers swell and flood, or dry up, or change their course; valleys become deserts, and isthmuses become straits. To the geologic eye all the surface of the earth is a fluid form, and man moves upon it as insecurely as Peter walking on the waves to Christ.

There are some big, useful lessons we can draw from studying geologic time. The most obvious might be the concept of gradualism, or slow incremental change over time. This was best understood by Darwin, who applied that form of reasoning to understand the evolution of species. His hero was Charles Lyell, whose Principles of Geology established our understanding of slow, grind-ahead change on the long timescale of geology.

And of course, that model is quite practically useful to us today — it is through slow, incremental, grinding change, punctuated at times by large-scale change when necessary and appropriate, that things move ahead most reliably. We might be reminded in the modern corporate world of General Electric, which ground ahead from an electric lamp company to an industrial giant, step by step over a long period, outlasting many thousands of lesser companies with less adaptive cultures.

We can also use this model to derive the idea of human nature as nearly fixed; it changes in geologic time, not human time. This explains why the fundamental problems of history tend to recur. We’re basically the same as we’ve always been:

History repeats itself in the large because human nature changes with geological leisureliness, and man is equipped to respond in stereotyped ways to frequently occurring situations and stimuli like hunger, danger, and sex. But in a developed and complex civilization individuals are more differentiated and unique than in a primitive society, and many situations contain novel circumstances requiring modifications of instinctive response; custom recedes, reasoning spreads; the results are less predictable. There is no certainty that the future will repeat the past. Every year is an adventure.

Lastly, Mother Nature’s long history also teaches us something of resilience, which is connected to the idea of grind-ahead change. Studying evolution helps us understand that what is fragile will eventually break under the stresses of competition: most importantly, fragile relationships break, while strong win-win relationships have a super glue that keeps parties together. We also learn that weak competitive positions are eventually rooted out by competition and new environments, and that a lack of adaptiveness is a losing strategy when the surrounding environment shifts enough. These and other lessons are fundamental knowledge, and they work the same way in human organizations as in Nature.

The Biology of History

Durant moves from geology into the realm of human biology: Our nature determines the “arena” in which the human condition can play out. Human biology gives us the rules of the chessboard, and the Earth and its inhabitants provide the environment in which we play the game. The variety of outcomes approaches infinity from this starting point. That’s why this “bucket” of human knowledge is such a crucial one to study. We need to know the rules.

Thinking with the first “bucket” of knowledge — the mathematics and physics that drive all things in the universe — it’s easy to derive that compounding multiplication can take a small population and make it a very large one over a comparatively short time. 2 becomes 4 becomes 8 becomes 16, and so on. But because we also know that the spoils of the physical world are finite, the “Big Model” of Darwinian natural selection flows naturally from the compounding math: As populations grow while their surroundings impose limits, there must be some way of deciding who gets the spoils.
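
A minimal sketch, with purely illustrative numbers, shows how quickly the compounding math collides with a finite resource ceiling:

    # Doubling growth meets a finite resource ceiling. The numbers are
    # illustrative; the point is that compounding guarantees the collision.
    population = 2
    carrying_capacity = 1000  # the finite "spoils" of the environment

    for generation in range(1, 15):
        population *= 2  # 2 becomes 4 becomes 8 becomes 16 ...
        if population > carrying_capacity:
            print(f"Generation {generation}: {population} would-be individuals,"
                  f" but only {carrying_capacity} can be supported -> selection")
            break
        print(f"Generation {generation}: {population}")

The doubling sequence overruns any fixed ceiling within a handful of generations; selection is what happens at the collision.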

Not only does this provide the basis for biological competition over resources, a major lesson in the second bucket, it also provides the basis for the political and economic systems in bucket three of human history: Our various systems of political and economic organization are fundamentally driven by decisions on how to give order and fairness to the brutal reality created by human competition.

In this vein, we have previously discussed Durant’s three lessons of biological history: Life is Competition. Life is Selection. Life must Replicate. These simple precepts lead to the interesting results in biology, and most relevant to us, to similar interesting results in human culture itself:

Like other departments of biology, history remains at bottom a natural selection of the fittest individuals and groups in a struggle wherein goodness receives no favors, misfortunes abound, and the final test is the ability to survive.

***

We do, however, need to be careful to think with the right “bucket” at the right time. Durant offers us a cautionary tale here: The example of the growth and decay of societies shows an area where the third bucket, human culture, offers a different reality than what a simple analogy from physics or biology might show. Cultural decay is not inevitable, as it might be with an element or a physical organism:

If these are the sources of growth, what are the causes of decay? Shall we suppose, with Spengler and many others, that each civilization is an organism, naturally and yet mysteriously endowed with the power of development and the fatality of death? It is tempting to explain the behavior of groups through analogy with physiology or physics, and to ascribe the deterioration of a society to some inherent limit in its loan and tenure of life, or some irreparable running down of internal force. Such analogies may offer provisional illumination, as when we compare the association of individuals with an aggregation of cells, or the circulation of money from banker back to banker with the systole and diastole of the heart.

But a group is no organism physically added to its constituent individuals; it has no brain or stomach of its own; it must think or feel with the brains or nerves of its members. When the group or a civilization declines, it is through no mystic limitation of a corporate life, but through the failure of its political or intellectual leaders to meet the challenges of change.

[…]

But do civilizations die? Again, not quite. Greek civilization is not really dead; only its frame is gone and its habitat has changed and spread; it survives in the memory of the race, and in such abundance that no one life, however full and long, could absorb it all. Homer has more readers now than in his own day and land. The Greek poets and philosophers are in every library and college; at this moment Plato is being studied by a hundred thousand discoverers of the “dear delight” of philosophy overspreading life with understanding thought. This selective survival of creative minds is the most real and beneficent of immortalities.

In this sense, the ideas that thrive in human history are not bound by the precepts of physics. Knowledge — the kind which can be passed from generation to generation in an accumulative way — is a unique outcome in the human culture bucket. Other biological creatures only pass down DNA, not accumulated learning. (Yuval Harari similarly declared that “The Cognitive Revolution is accordingly the point when history declared its independence from biology.”)

***

With that caveat in mind, the concept of passed-down ideas does have some predictable overlap with major mental models of the first two buckets of physics/math and biology.

The first is compounding: Ideas and knowledge compound in the same mathematical way that money or population does. If I have an idea and tell my idea to you, we both have the idea. If we each take that idea and recombine it with another idea we already had, we now have three ideas from a starting point of only one. If we can each connect that one idea to two ideas we had, we now have five ideas between us. And so on — you can see how compounding would take place as we told our friends about the five ideas and they told theirs. So the Big Model of compound interest works on ideas too.
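A toy version of that arithmetic, with sharing and recombination rules we invented purely for illustration:

```python
# Hypothetical toy model: each period, every person tells one new friend,
# and each person mints one new idea by recombining two existing ones.
# The rules are ours, not Durant's; only the compounding shape matters.

people, ideas = 2, 1   # you and I, sharing a single idea
for period in range(1, 6):
    ideas += people    # each person contributes one new recombination
    people *= 2        # everyone tells one friend
    print(f"period {period}: {people} people share {ideas} ideas")
```

However fanciful the rules, the shape of the curve is the point: connected people accumulate ideas multiplicatively, the same way compound interest accumulates money.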

The second interplay is to see that human ideas go through natural selection in the same way biological life does.

Intellect is therefore a vital force in history, but it can also be a dissolvent and destructive power. Out of every hundred new ideas ninety-nine or more will probably be inferior to the traditional responses which they propose to replace. No one man, however brilliant or well-informed, can come in one lifetime to such fullness of understanding as to safely judge and dismiss the customs or institutions of his society, for these are the wisdom of generations after centuries of experiment in the laboratory of history.

This doesn’t tell us that the best ideas survive any more than natural selection tells us that the best creatures survive. It just means, at the risk of being circular, that the ideas most fit for propagation are the ones that survive for a long time. Most truly bad ideas tend to get tossed out in the vicissitudes of time either through the early death of their proponents or basic social pressure. But any idea that strikes a fundamental chord in humanity can last a very long time, even if it’s wrong or harmful. It simply has to be memorable and have at least a kernel of intuitive truth.

For more, start thinking about the three buckets of knowledge, read Durant, and start getting to work on synthesizing as much as possible.

Joseph Tussman: Getting the World to Do the Work for You

“What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and aligning with those realities. If we do not let the world teach us, it teaches us a lesson.”

— Joseph Tussman

Nothing better sums up the ethos of Farnam Street than the quote above by Joseph Tussman.

How’s that for a guiding principle?

Tussman was a philosophy professor at UC Berkeley and an educational reformer. We got this beautiful quote from a friend of ours in California. Isn’t it brilliant?

The world will do a lot of the work for us if we only align with it, and stop fighting it because we want the world to work another way. What Tussman really does is identify a leverage point.

Leverage amplifies an input to provide a greater output. There are leverage points in all systems. To know the leverage point is to know where to apply your effort. Focusing on the leverage point will yield non-linear results. Doesn’t that sound like something we want to look for?

Working hard and being busy is not enough. Most people are taking two steps forward and one step back. They’re busy, but they haven’t moved anywhere.

We need to work smarter, not harder.

What Tussman has done is identify a leverage point in life: one that will increase what you can accomplish (through tailwinds) and reduce friction. When we work smart rather than hard, we apply our energy in the same direction the world is already moving.

The person who needs a new mental tool and doesn’t have it is already paying for it. This is how we should be thinking about the acquisition of worldly wisdom. We’re like plumbers who show up with a lot of wrenches but no blowtorches, and our results largely reflect that. We get the job half done in twice the time.

A better approach is the one Tussman suggests. Learn from the world. The best way to identify how the world works is to find the general principles that line up with historically significant sample sizes — those that apply, in the words of Peter Kaufman, “across the geological time scale of human, organic, and inorganic history.”

***


Still Curious? Pair this with Andy Benoit’s wisdom below, and make some time to think about both.

Three Filters Needed to Think Through Problems

One of the best parts of Garrett Hardin’s wonderful Filters Against Folly is when he explores the three filters that help us interpret reality. No matter how much we’d like it to, the world does not operate only inside our circle of competence. Thus we must learn ways to discern reality in areas where we lack even so much as a map.

“Most geniuses—especially those who lead others—prosper not by deconstructing intricate complexities but by exploiting unrecognized simplicities.”

— Andy Benoit

Mental Tools

We need not be a genius in every area, but we should understand the big ideas of most disciplines and try to avoid fooling ourselves. That’s the core of the mental models approach. When you’re not an expert in a field, often the best approach is one that avoids stupidity. There are few better ways of avoiding stupidity than understanding how the world works.

Hardin begins by outlining his goal: to understand reality and understand human nature as it really is, removing premature judgment from the analysis.

He appropriately quotes Spinoza, who laid out his principles for political science thusly:

That I might investigate the subject matter of this science with the same freedom of spirit we generally use in mathematics, I have labored carefully not to mock, lament, or execrate human actions, but to understand them; and to this end I have looked upon passions such as love, hatred, anger, envy, ambition, pity, and other perturbations of the mind, not in the light of vices of human nature, but as properties just as pertinent to it as are heat, cold, storm, thunder, and the like to the nature of the atmosphere.

The goal of these mental filters, then, is to understand reality by improving our ability to judge the statements of experts, promoters, and persuaders of all kinds. As the saying goes, we are all laymen in some field.

Hardin writes:

What follows is one man’s attempt to show that there is more wisdom among the laity than is generally concluded, and that there are some rather simple methods of checking on the validity of the statements of experts.

1. The Literate Filter

The first filter through which we must interpret reality, says Hardin, is the literate filter: What do the words really mean? The key thing to remember is that language is action. Language is not just a way to communicate or interpret; it acts as a call to action or, just as importantly, as an inhibitor of action.

The first step is to try to understand what is really being said. What do the words and the labels actually mean? If a politician proposes a “Poverty Assistance Plan,” that sounds almost inarguably good, no? Many a pork-barrel program has passed based on such labels alone.

But when you examine the rhetoric, you must ask what those words are trying to do: promote understanding, or inhibit it? If the program had a rational method of assisting the deserving poor, the label might be appropriate. If it were simply a way to reward undeserving people in the politician’s district for their votes, the label would be nothing more than a way to fool us. The literate filter asks whether we understand the true intent behind the words.

In a chapter called “The Sins of the Literate,” Hardin discusses the misuse of language by examining literate, but innumerate, concepts like “indefinite” or “infinite”:

He who introduces the words “infinity” or any of its derivatives (“forever” or “never” for instance) is also trying to escape discussion. Unfortunately he does not honestly admit the operational meaning of the high-flown language used to close off discussion. “Non-negotiable” is a dated term, no longer in common use, but “infinity” endures forever.

Like old man Proteus of Greek mythology, the wish to escape debate disguises itself under a multitude of verbal forms: infinity, non-negotiable, never, forever, irresistible, immovable, indubitable, and the recent variant “not meaningfully finite.” All these words have the effect of moving discussion out of the numerate realm, where it belongs, and into a wasteland of pure literacy, where counting and measuring are repudiated.

Later, in the final chapter, Hardin repeats:

The talent for handling words is called “eloquence.” Talent is always desirable, but the talent may have an unfair, even dangerous, advantage over those with less talent. More than a century ago Ralph Waldo Emerson said, “The curse of this country is eloquent men.” The curse can be minimized by using words themselves to point out the danger of words. One of their functions is to act as inhibitors of thought. People need to be made allergic to such thought-stoppers as infinity, sacred, and absolute. The real world is a world of quantified entities: “infinity” and its like are no words for quantities but utterances used to divert attention from quantities and limits.

It is not just innumerate exaggeration we are guarding against, but the literate tendency to replace actors with abstractions, as Hardin calls it. He uses the example of donating money to a poor country (Country X), which on its face sounds noble:

Country X, which is an abstraction, cannot act. Those who act in its name are rich and powerful people. Human nature being what it is, we can be sure that these people will not voluntarily do anything to diminish either their power or their riches…

Not uncommonly, the major part of large quantities of food sent in haste to a poor country in the tropics rot on the docks or is eaten up by rats before it can be moved to the people who need it. The wastage is seldom adequately reported back to the sending country…(remember), those who gain personally from the shipping of food to poor nations gain whether fungi, rats, or people eat the food.

2. The Numerate Filter

Hardin is clear on his approach to numerical fluency: The ability to count, weigh, and compare values in a general or specific way is essential to understanding the claims of experts or assessing any problem rationally:

The numerate temperament is one that habitually looks for approximate dimensions, ratios, proportions, and rates of change in trying to grasp what is going on in the world. Given effective education–a rare commodity, of course–a numerate orientation is probably within the reach of most people.

[…]

Just as “literacy” is used here to mean more than merely reading and writing, so also will “numeracy” be used to mean more than measuring and counting. Examination of the origins of the sciences shows that many major discoveries were made with very little measuring and counting. The attitude science requires of its practitioners is respect, bordering on reverence, for ratios, proportions, and rates of change.

Rough and ready back-of-the-envelope calculations are often sufficient to reveal the outline of a new and important scientific discovery…. In truth, the essence of many of the major insights of science can be grasped with no more than a child’s ability to measure, count, and calculate.

***

To explain the use of the literate and numerate filters together, Hardin uses the example of the Delaney Amendment, passed in 1958 to restrict food additives. This example should be familiar to us today:

Concerned with the growing evidence that many otherwise useful substances can cause cancer, Congress decreed that henceforth, whenever a chemical at any concentration was found to cause cancer–in any fraction of any species of animal–that substance must be totally banned as an additive to human food.

From a literate standpoint, this sounds logical. The Amendment sought to eradicate harmful food additives that the free market had allowed to surface. However, says Hardin:

The Delaney Amendment is a monument to innumerate thought. “Safe” and “unsafe” are literate distinctions; nature is numerate. Everything is dangerous at some level. Even molecular oxygen, essential to human life, becomes lethal as the concentration approaches 100 percent.

[…]

Sensitivity is ordinarily expressed as “1 part per X,” where X is a large number. If a substance probably increases the incidence of cancer at a concentration of 1 part per 10,000, one should probably ban it at that concentration in food, and perhaps at 1 in 100,000. But what about 1 part per million?…In theory there is no final limit to sensitivity. What about 1 milligram per tank car? Or 1 milligram per terrestrial globe?

Obviously, some numerical limits must be applied. This is the usefulness of the numerate filter. As Charlie Munger says, “Quantify, always quantify.”
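To see how fast detection sensitivity outruns common sense, we can run Hardin’s own progression through a quick calculation. The tank-car mass below is our assumption (roughly 100 tonnes of cargo), not Hardin’s figure.

```python
# Hardin's ladder of detection thresholds, made numerate. We assume a
# tank car holds about 100 tonnes of cargo (our figure, chosen for scale).

tank_car_mg = 100 * 1_000 * 1_000 * 1_000   # 100 tonnes ≈ 1e11 milligrams

thresholds = {
    "1 part per 10,000": 10_000,
    "1 part per million": 1_000_000,
    "1 milligram per tank car": tank_car_mg,  # ≈ 1 part per 100 billion
}
for label, parts in thresholds.items():
    print(f"{label:26s} = 1 part per {parts:,}")
```

Somewhere down that ladder a ban stops protecting anyone; only a number, never a label like “unsafe,” can tell you where.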

3. The Ecolacy Filter

Hardin introduces his final filter by requiring that we ask the question “And then what?” There is perhaps no better question to prompt second-order thinking.

Even if we understand what is truly being said and have quantified the effects of a proposed policy or solution, it is imperative that we consider the second layer of effects or beyond. Hardin recognizes that this opens the door for potentially unlimited paralysis (the poorly understood and innumerate Butterfly Effect), which he boxes in by introducing his own version of the First Law of Ecology:

We can never merely do one thing.

This is to say, all proposed solutions and interventions will have a multitude of effects, and we must try our best to consider them in their totality. Most unintended consequences are just unanticipated consequences.

In proposing this filter, Hardin is very careful to guard against the Slippery Slope argument or the idea that one step in the wrong direction will lead us directly to Hell. This, he says, is a purely literate but wholly innumerate approach to thinking.

Those who take the wedge (Slippery Slope) argument with the utmost seriousness act as though they think human beings are completely devoid of practical judgment. Countless examples from everyday life show the pessimists are wrong…If we took the wedge argument seriously, we would pass a law forbidding all vehicles to travel at any speed greater than zero. That would be an easy way out of the moral problem. But we pass no such law.

In reality, the ecolate filter helps us understand the layers of unintended consequences. Take inflation:

The consequences of hyperinflation beautifully illustrate the meaning of the First Law of Ecology. A government that is unwilling or unable to stop the escalation of inflation does more than merely change the price of things; it turns loose a cascade of consequences the effects of which reach far into the future.

Prudent citizens who have saved their money in bank accounts and government bonds are ruined. In times of inflation people spend wildly with little care for value, because the choice and price of an object are less important than that one put his money into material things. Fatalism takes over as society sinks down into a culture of poverty….

***

In the end, the filters must be used wisely together. They are complementary ways to understand reality and cannot be divorced from one another. Hardin’s general approach to thinking ends up sounding much like that of his multi-disciplinary friend Munger:

No single filter is sufficient for reaching a reliable decision, so invidious comparisons between the three is not called for. The well-educated person uses all of them.


The Map is Not the Territory

The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions.

“The map appears to us more real than the land.”

— D.H. Lawrence

The Relationship Between Map and Territory

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.

In Korzybski’s words:

A.) A map may have a structure similar or dissimilar to the structure of the territory.

B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.

C.) A map is not the actual territory.

D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly…We may call this characteristic self-reflexiveness.

Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. A map with the scale of one mile to one mile would not have the problems that maps have, nor would it be helpful in any way.

The mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will frequently use an incorrect model simply because we feel any model is preferable to no model. (Reminding one of the drunk looking for his keys under the streetlight because “That’s where the light is!”)

The Map Is Not the Territory

Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without our realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)

With the aid of modern psychology, we also see another issue: the human brain takes great leaps and shortcuts in order to make sense of its surroundings. As Charlie Munger has pointed out, the mind works something like the egg and the sperm — after the first idea gets in, the door closes. This makes the map-territory problem a close cousin of man-with-a-hammer tendency.

This tendency is, obviously, problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.

Let’s check out an example.

***

By most accounts, Ron Johnson was one of the most successful and desirable retail executives by the summer of 2011. Not only was he handpicked by Steve Jobs to build the Apple Stores, a venture which had itself come under major scrutiny – one retort printed in Bloomberg magazine: “I give them two years before they’re turning out the lights on a very painful and expensive mistake” – but he had been credited with playing a major role in turning Target from a K-Mart look-alike into the trendy-but-cheap Tar-zhey by the late 1990s and early 2000s.

Johnson’s success at Apple was not immediate, but it was undeniable. By 2011, Apple stores were by far the most productive in the world on a per-square-foot basis, and had become the envy of the retail world. Their sales figures left Tiffany’s in the dust. The gleaming glass cube on Fifth Avenue became a more popular tourist attraction than the Statue of Liberty. It was a lollapalooza, something beyond ordinary success. And Johnson had led the charge.

“(History) offers a ridiculous spectacle of a fragment expounding the whole.”

— Will Durant

With that success behind him, in 2011 Johnson was hired by Bill Ackman, Steven Roth, and other luminaries of the financial world to turn around the dowdy old department store chain JC Penney. The situation of the department store was dire: between 1992 and 2011, the retail market share held by department stores had declined from 57% to 31%.

Their core position seemed like a no-brainer, though. JC Penney had immensely valuable real estate, anchoring malls across the country. Johnson argued that their physical mall position was valuable if for no other reason than that people often parked next to the stores and walked through them to get to the center of the mall. Foot traffic was a given. Because of contracts signed in the ’50s, ’60s, and ’70s, the heyday of the mall-building era, rent was also cheap, another major competitive advantage. And unlike some struggling retailers, JC Penney was making (some) money. There was cash in the register to help fund a transformation.

The idea was to take the best ideas from his experience at Apple (great customer service, consistent pricing with no markdowns and markups, immaculate displays, world-class products) and apply them to the department store. Johnson planned to turn the stores into little malls-within-malls. He went as far as comparing the ever-rotating stores-within-a-store to Apple’s “apps.” Such a model would keep the store constantly fresh and avoid the creeping staleness of retail.

Johnson pitched his idea to shareholders in a series of trendy New York City meetings reminiscent of Steve Jobs’ annual “One more thing…” product launches at Apple. He was persuasive: JC Penney’s stock price went from $26 in the summer of 2011 to $42 in early 2012 on the strength of the pitch.

The idea failed almost immediately. His new pricing model (eliminating discounting) was a flop. The coupon-hunters rebelled. Much of his new product was deemed too trendy. His new store model was wildly expensive for a middling department store chain – including operating losses purposefully endured, he’d spent several billion dollars trying to effect the physical transformation of the stores. JC Penney customers had no idea what was going on, and by 2013, Johnson was sacked. The stock price sank into the single digits, where it remains two years later.

What went wrong in the quest to build America’s Favorite Store? It turned out that Johnson was using a map of Tulsa to navigate Tuscaloosa. Apple’s products, customers, and history had far too little in common with JC Penney’s. Apple had a rabid, young, affluent fan base before it built stores; JC Penney was not associated with youth or affluence. Apple had shiny products and needed a shiny store; JC Penney was known for its affordable sweaters. Apple had never relied on discounting in the first place; JC Penney was taking away discounts it had given before, triggering a massive deprival super-reaction.

“All models are wrong but some are useful.”

— George Box

In other words, the old map was not very useful. Even his success at Target, which seems like a closer analogue, was misleading in the context of JC Penney. Target had made small, incremental changes over many years, to which Johnson had made a meaningful contribution. JC Penney was attempting to reinvent the concept of the department store in a year or two, leaving behind the core customer in an attempt to gain new ones. This was a much different proposition. (Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)

The main issue was not that Johnson was incompetent. He wasn’t; he wouldn’t have gotten the job if he were. He was extremely competent. But it was exactly his competence and past success that got him into trouble. He was like a great swimmer who tried to run a grand rapid: the model he had used successfully in the past, the map that had navigated a lot of difficult terrain, was not the map he needed anymore. He had an excellent theory about retailing that applied in some circumstances but not in others. The terrain had changed, but the old idea stuck.

***

One person who well understands this problem of the map and the territory is Nassim Taleb, author of the Incerto series – Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes.

Taleb has been vocal about the misuse of models for many years, but the earliest and most vivid instance I can recall is his firm criticism of a financial model called Value at Risk, or VAR. The model, used widely in the banking community, is supposed to help manage risk by providing a maximum potential loss within a given confidence interval. In other words, it purports to allow risk managers to say that, with 95%, 99%, or 99.9% confidence, the firm will not lose more than $X million in a given day. The higher the interval, the less accurate the analysis becomes. It might be possible to say that the firm has $100 million at risk at any time at a 99% confidence interval, but given the statistical properties of markets, a move to 99.9% confidence might mean the risk manager has to state the firm has $1 billion at risk. 99.99% might mean $10 billion. As rarer and rarer events are included in the distribution, the analysis gets less useful. So, by necessity, the “tails” are cut off somewhere and the analysis is deemed acceptable.
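For readers who have never seen one, here is a minimal sketch of a parametric VAR calculation in Python. The return history is fabricated, and the normality assumption baked in is the very one Taleb attacks; this shows the mechanics, not sound practice.

```python
# A minimal parametric ("variance-covariance") VAR sketch. It assumes
# daily returns are normally distributed -- precisely the assumption
# that fails when tails are fat. The return history below is fake.
import statistics

def parametric_var(daily_returns: list[float], portfolio_value: float,
                   z: float = 2.33) -> float:
    """One-day VAR at ~99% confidence: portfolio * z-score * daily volatility."""
    daily_vol = statistics.stdev(daily_returns)
    return portfolio_value * z * daily_vol

fake_history = [0.004, -0.006, 0.002, -0.001, 0.005, -0.003] * 42  # ~1 year
var_99 = parametric_var(fake_history, portfolio_value=100_000_000)
print(f"99% one-day VAR: ${var_99:,.0f}")  # "lose more only one day in a hundred"
```

Note what the function cannot see: any loss that never appeared in its short, fabricated history.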

Elaborate statistical models are built to justify and use the VAR theory. On its face, it seems like a useful and powerful idea; if you know how much you can lose at any time, you can manage risk to the decimal. You can tell your board of directors and shareholders, with a straight face, that you’ve got your eye on the till.

The problem, in Nassim’s words, is that:

A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.

In order to come up with the VAR figure, the risk manager must take historical data and assume a statistical distribution in order to predict the future. For example, if we could take 100 million human beings and analyze their height and weight, we could then predict the distribution of heights and weights of a different 100 million, and there would be a microscopically small probability that we’d be wrong. That’s because we have a huge sample size and we are analyzing something with very small and predictable deviations from the average.

But finance does not follow this kind of distribution. There’s no such predictability. As Nassim has argued, the “tails” are fat in this domain, and the rarest, most unpredictable events have the largest consequences. Let’s say you deem a highly threatening event (for example, a 90% crash in the S&P 500) to have a 1 in 10,000 chance of occurring in a given year, and your historical data set only has 300 years of data. How can you accurately state the probability of that event? You would need far more data.

Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability. Financial markets have no biological reality to tie them down: We can say with a useful amount of confidence that an elephant will not wake up as a monkey, but we can’t say anything with absolute confidence in an Extremistan arena.
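You can check the naive statistics yourself. Under a normal, thin-tailed model, large one-day moves should be vanishingly rare; this standard-library sketch shows just how rare. Real markets violate these figures routinely, which is Taleb’s point.

```python
# How often "should" large one-day moves happen if returns were normal?
# Assumes ~252 trading days per year; purely illustrative.
from statistics import NormalDist

for sigmas in (4, 5, 7):
    p = 2 * (1 - NormalDist().cdf(sigmas))  # two-tailed daily probability
    years = 1 / p / 252
    print(f"a {sigmas}-sigma day: about once every {years:,.0f} years")
```

The 1987 example below makes a mockery of numbers like these.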

We see several issues with VAR as a “map,” then. The first is that the model is itself a severe abstraction of reality, relying on historical data to predict the future. (As all financial models must, to a certain extent.) VAR does not say “The risk of losing X dollars is Y, within a confidence of Z” (although risk managers treat it that way). What VAR actually says is “the risk of losing X dollars is Y, based on the given parameters.” The problem is obvious even to the non-technician: The future is a strange and foreign place that we do not understand. Deviations of the past may not be the deviations of the future. Just because municipal bonds have never traded at such-and-such a spread to U.S. Treasury bonds does not mean that they won’t in the future. They just haven’t yet. Frequently, the models are blind to this fact.

In fact, one of Nassim’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would have not been using the coming “worst case” as your worst case, because it wouldn’t have happened yet.

Here’s an easy illustration. On October 19, 1987, the stock market dropped by 22.61%, or 508 points on the Dow Jones Industrial Average. In percentage terms, it was then and remains the worst one-day market drop in U.S. history. It was dubbed “Black Monday.” (Financial writers sometimes lack creativity — there are several other “Black Mondays” in history.) But here we see Nassim’s point: On October 18, 1987, what would the models have used as the worst possible case? We don’t know exactly, but we do know the previous worst case was a 12.82% drop, which happened on October 28, 1929. A 22.61% drop would have been considered so many standard deviations from the average as to be near impossible.
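Just how impossible? Here is the back-of-the-envelope version, assuming a typical daily standard deviation of about 1% (our rough figure, chosen only for illustration):

```python
# How many standard deviations was Black Monday, and what probability
# would a normal model have assigned it? The ~1% daily volatility is
# our assumption, not a historical estimate.
from statistics import NormalDist

daily_vol = 0.01
z = 0.2261 / daily_vol              # ≈ 22.6 standard deviations
p = NormalDist().cdf(-z)            # normal-model probability of the drop
print(f"a {z:.0f}-sigma event, probability ≈ {p:.1e}")
```

A probability that small says the event should never have happened in the life of the universe. It happened anyway; the map was wrong, not the territory.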

But the tails are very fat in finance — improbable and consequential events seem to happen far more often than they should based on naive statistics. There is also a severe but often unrecognized recursiveness problem, which is that the models themselves influence the outcome they are trying to predict. (To understand this more fully, check out our post on Complex Adaptive Systems.)

A second problem with VAR is that even if we had a vastly more robust dataset, a statistical “confidence interval” does not do the job of financial risk management. Says Taleb:

There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself.

I find that those professional risk managers whom I heard recommend a “guarded” use of the VAR on grounds that it “generally works” or “it works on average” do not share my definition of risk management. The risk management objective function is survival, not profits and losses. A trader according to the Chicago legend, “made 8 million in eight years and lost 80 million in eight minutes”. According to the same standards, he would be, “in general”, and “on average” a good risk manager.

This is like a GPS system that shows you where you are at all times but doesn’t include cliffs. You’d be perfectly happy with your GPS until you drove off a mountain.

It was this type of naive trust of models that got a lot of people in trouble in the recent mortgage crisis. Backward-looking, trend-fitting models, the most common maps of the financial territory, failed by describing a territory that was only a mirage: A world where home prices only went up. (Lewis Carroll would have approved.)

This was navigating Tulsa with a map of Tatooine.

***

The logical response to all this is, “So what?” If our maps fail us, how do we operate in an uncertain world? That is its own discussion for another time, and Taleb has gone to great pains to try to address the concern. Smart minds disagree on the solution. But one obvious key must be building systems that are robust to model error.

The practical problem with a model like VAR is that the banks use it to optimize. In other words, they take on as much exposure as the model deems OK. And when banks veer into managing to a highly detailed, highly confident model rather than to informed common sense, which happens frequently, they tend to build up hidden risks that will un-hide themselves in time.

If one were to instead assume that there are no precisely accurate maps of the financial territory, one would have to fall back on much simpler heuristics. (If you assume detailed statistical models of the future will fail you, you don’t use them.)

In short, you would do what Warren Buffett has done with Berkshire Hathaway. Mr. Buffett, to our knowledge, has never used a computer model in his life, yet manages an institution half a trillion dollars in size by assets, a large portion of which are financial assets. How?

The approach requires assuming a future worst case far more severe than anything in the past, and it dictates building an institution with a robust set of backup systems and margins of safety operating at multiple levels. Extra cash, rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimizing to a model, accepting the limits of your clairvoyance.

When map and terrain differ, follow the terrain.

The trade-off, of course, is that the short-run rewards are much smaller than those available under more optimized models. Speaking of this, Charlie Munger has noted:

Berkshire’s past record has been almost ridiculous. If Berkshire had used even half the leverage of, say, Rupert Murdoch, it would be five times its current size.

For Berkshire at least, the trade-off seems to have been worth it.

***

The salient point, then, is that in our march to simplify reality with useful models, of which Farnam Street is an advocate, we too often confuse the models with reality. For many people, the model creates its own reality; it is as if the spreadsheet comes to life. We forget that reality is a lot messier. The map isn’t the territory. The theory isn’t what it describes; it’s simply a way we choose to interpret a certain set of information. Maps can also be wrong, but even if they are essentially correct, they are an abstraction, and abstraction means that information is lost to save space. (Recall the mile-to-mile scale map.)

How do we do better? That is fodder for another post, but the first step is to realize that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must always be vigilant, stepping back to understand the context in which a map is useful and where the cliffs might lie. Until we do that, we are the turkey: confident in a map drawn from a thousand safe days, right up until Thanksgiving.