Tag: Mental Model

Margin of Safety: An Introduction to the Mental Model

Previously on Farnam Street, we covered the idea of Redundancy — a central concept in both the world of engineering and in practical life. Today we’re going to explore a related concept: Margin of Safety.

The margin of safety is another concept rooted in engineering and quality control. Let’s start there, then see where else our model might apply in practical life, and lastly, where it might have limitations.

* * *

Consider a highly-engineered jet engine part. If the part were to fail, the engine would also fail, perhaps at the worst possible moment—while in flight with passengers on board. Like most jet engine parts, let us assume the part is replaceable over time—though we don’t want to replace it too often (creating prohibitively high costs), we don’t expect it to last the lifetime of the engine. We design the part for 10,000 hours of average flying time.

That brings us to a central question: After how many hours of service do we replace this critical part? The easily available answer might be 9,999 hours. Why replace it any sooner than we have to? Wouldn’t that be a waste of money?

The first problem is, we know nothing of the composition of the 10,000 hours any individual part has gone through. Were they 10,000 particularly tough hours, filled with turbulent skies? Was it all relatively smooth sailing? Somewhere in the middle?

Just as importantly, how confident are we that the part will really last the full 10,000 hours? What if it had a slight flaw during manufacturing? What if we made an assumption about its reliability that was not conservative enough? What if the material degraded in bad weather to a degree we didn’t foresee?

The challenge is clear, and the implication obvious: we do not wait until the part has been in service for 9,999 hours. Perhaps at 7,000 hours, we seriously consider replacing the part, and we put a hard stop at 7,500 hours.

The difference between waiting until the last minute and replacing it comfortably early gives us a margin of safety. The sooner we replace the part, the more safety we have—by not pushing the boundaries, we leave ourselves a cushion. (Ever notice how your gas tank indicator goes on long before you’re really on empty? It’s the same idea.)

The principle is essential in bridge building. Let’s say we calculate that, on an average day, a proposed bridge will be required to support 5,000 tons at any one time. Do we build the structure to withstand 5,001 tons? I’m not interested in driving on that bridge. What if we get a day with much heavier traffic than usual? What if our calculations and estimates are a little off? What if the material weakens over time at a rate faster than we imagined? To account for these, we build the bridge to support 20,000 tons. Only now do we have a margin of safety.
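To make the arithmetic explicit, here is a minimal sketch in Python, using only the figures from the two examples above: the safety factor is the designed capacity divided by the expected load, and the service margin is the share of rated life we deliberately leave unused.

def safety_factor(design_capacity, expected_load):
    """Ratio of what the system is built to handle to what we expect it to face."""
    return design_capacity / expected_load

def service_margin(rated_life_hours, replacement_hours):
    """Fraction of the rated life deliberately left unused."""
    return 1 - replacement_hours / rated_life_hours

# Bridge from the example: built for 20,000 tons, expected to carry 5,000.
print(safety_factor(20_000, 5_000))     # 4.0 -- a fourfold margin of safety

# Jet engine part: rated for 10,000 hours, hard stop at 7,500.
print(service_margin(10_000, 7_500))    # 0.25 -- a quarter of rated life held in reserve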

This fundamental engineering principle is useful in many practical areas of life, even for non-engineers. Let’s look at one we all face.

* * *

Take a couple earning $100,000 per year after taxes, or about $8,300 per month. In designing their life, they must necessarily decide what standard of living to enjoy. (The part which can be quantified, anyway.) What sort of monthly expenses should they allow themselves to accumulate?

One all-too-familiar approach is to build in monthly expenses approaching $8,000. A $4,000 mortgage, $1,000 worth of car payments, $1,000/month for private schools…and so on. The couple rationalizes that they have “earned” the right to live large.

However, what if life throws some massive unexpected expenditures their way, as it often does? What if one of them lost their job and their combined monthly income dropped to $4,000?

The couple must ask themselves whether the ensuing misery is worth the lavish spending. If they kept up their $8,000/month spending habit after a loss of income, they would have to choose between two difficult paths: Rapidly eating into their savings or considerably downsizing their life. Either is likely to cause extreme misery from the loss of long-held luxuries.

Thinking in reverse, how can we avoid the potential misery?

A common refrain is to tell the couple to make sure they’ve stashed away some money in case of emergency, to provide a buffer. Often there is a specific multiple of current spending we’re told to have in reserve—perhaps 6-12 months. In this case, savings of $48,000-$96,000 should suffice.

However, is there a way we can build them a much larger margin for error?

Let’s say the couple decides instead to permanently limit their monthly spending to $4,000 by owning a smaller house, driving less expensive cars, and trusting their public schools. What happens?

Our margin of safety now compounds. Obviously, a savings rate exceeding 50% will rapidly accumulate in their favor — $4,300 put away by the first month, $8,600 by the second month, and so on. The mere act of systematically underspending their income rapidly gives them a cushion without much effort. If an unexpected expenditure comes up, they’ll almost certainly be ready.
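As a back-of-the-envelope sketch, using only the numbers from this example, here is how quickly that cushion builds, measured in months of spending it could cover.

monthly_income = 8_300    # after-tax income from the example
monthly_spending = 4_000  # the deliberately modest budget

savings = 0
for month in range(1, 13):
    savings += monthly_income - monthly_spending
    months_of_cover = savings / monthly_spending
    print(f"Month {month:2d}: ${savings:,} saved (~{months_of_cover:.1f} months of expenses)")

# After one year the couple has $51,600 put away -- already beyond a
# 12-month emergency buffer at their spending level, before any investment returns.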

The unseen benefit, and the extra margin of safety in this choice, comes if either spouse loses their income – either by choice (perhaps to care for a child) or by bad luck (health issues). In this case, not only has a high savings rate accumulated in their favor, but because their spending is systematically low, they can avoid tapping those savings altogether! Their savings simply stop growing temporarily while they live on one income. This sort of “belt and suspenders” solution is the essence of margin-of-safety thinking.

(On a side note: Let’s take it even one step further. Say their former $8,000 monthly spending rate meant they probably could not retire until age 70, given their current savings rate, investment choices, and desired lifestyle post-retirement. Reducing their needs to $4,000 not only provides them much needed savings, quickly accelerating their retirement date, but they now need even less to retire on in the first place. Retiring at 70 can start to look like retiring at 45 in a hurry.)

* * *

Clearly, the margin of safety model is very powerful and we’re wise to use it whenever possible to avoid failure. But it has limitations.

One obvious issue, most salient in the engineering world, comes in the tradeoff with time and money. Given an unlimited runway of time and the most expensive materials known to mankind, it’s likely that we could “fail-proof” many products to such a ridiculous degree as to be impractical in the modern world.

For example, it’s possible to imagine Boeing designing a plane that would have a fail rate indistinguishable from zero, with parts being replaced 10% into their useful lives, built with rare but super-strong materials, etc.—so long as the world was willing to pay $25,000 for a coach seat from Boston to Chicago. Given the impracticability of that scenario, our tradeoff has been to accept planes that are not “fail-proof,” but merely extremely unlikely to fail, in order to give the world safe enough air travel at an affordable cost. This tradeoff has been enormously wise and helpful to the world. Simply put, the margin-of-safety idea can be pushed into farce without careful judgment.

* * *

This brings us to another limitation of the model, which is the failure to engage in “total systems” thinking. I’m reminded of a quote I’ve used before at Farnam Street:

“The reliability that matters is not the simple reliability of one component of a system, but the final reliability of the total control system.”

— Garrett Hardin in Filters Against Folly

Let’s return to the Boeing analogy. Say we did design the safest and most reliable jet airplane imaginable, with parts that would not fail in one billion hours of flight time under the most difficult weather conditions imaginable on Earth—and then let it be piloted by a drug addict high on painkillers.

The problem is that the whole flight system includes much more than just the reliability of the plane itself. Just because we built in safety margins in one area does not mean the system will not fail. This illustrates not so much a failure of the model itself, but a common mistake in the way the model is applied.

* * *

Which brings us to a final issue with the margin of safety model—naïve extrapolation of past data. Let’s look at a common insurance scenario to illustrate this one.

Suppose we have a 100-year-old reinsurance company – PropCo – which reinsures major primary insurers in the event of property damage in California caused by a catastrophe, the most worrying being an earthquake and its aftershocks. Throughout its entire (long) history, PropCo had never experienced a yearly loss on this sort of coverage worse than $1 billion. Most years saw no loss worse than $250 million, and in fact, many years had no losses at all – giving it comfortable profit margins.

Thinking like engineers, the directors of PropCo insisted that the company hold a financial position strong enough to safely cover a loss twice as bad as anything it had ever encountered. Given their historical losses, the directors believed this extra capital would give PropCo a comfortable “margin of safety” against the worst case. Right?

However, our directors missed a few crucial details. The $1 billion loss, the insurer’s worst, had been incurred in the year 1994 during the Northridge earthquake. Since then, the building density of Californian cities had increased significantly, and due to ongoing budget issues and spreading fraud, strict building codes had not been enforced. Considerable inflation in the period since 1994 also ensured that losses per damaged square foot would be far higher than ever faced previously.

With these conditions present, let’s propose that California is hit with an earthquake reading 7.0 on the Richter scale, with an epicenter 10 miles outside of downtown LA. PropCo faces a bill of $5 billion – not twice as bad, but five times as bad as it had ever faced. In this case, PropCo fails.
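A rough sketch of the directors' mistake follows. The 7% growth rate below is an assumption, chosen only so the arithmetic lands near the $5 billion figure in the story, not taken from real earthquake data: repricing the old worst case for decades of growth in building density, construction costs, and insured values shows why "twice the historical worst" was never a real margin.

worst_historical_loss = 1.0                  # $1B: the 1994 Northridge loss from the example
capital_cushion = 2 * worst_historical_loss  # the directors' "margin of safety"

years_since = 25
exposure_growth = 1.07                       # assumed combined annual growth in density,
                                             # construction costs, and insured values

equivalent_loss_today = worst_historical_loss * exposure_growth ** years_since
print(f"Cushion held:         ${capital_cushion:.1f}B")
print(f"Same event repriced:  ${equivalent_loss_today:.1f}B")   # ~$5.4B -- the cushion never stood a chance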

This illustration (which recurs every so often in the insurance field) shows the limitation of naïvely assuming a margin of safety is present based on misleading or incomplete past data.

* * *

Margin of safety is an important component of many decisions in life. You can think of it as a reservoir to absorb errors or poor luck. Size matters. At least, in this case, bigger is better. And if you need a calculator to figure out how much room you have, you’re doing something wrong.

Margin of safety is part of the Farnam Street Latticework of Mental Models.

Understanding your Circle of Competence: How Warren Buffett Avoids Problems


Understanding your circle of competence helps you avoid problems, identify opportunities for improvement, and learn from others.

The concept of the Circle of Competence has been used over the years by Warren Buffett as a way to focus investors on only operating in areas they knew best. The bones of the concept appear in his 1996 Shareholder Letter:

What an investor needs is the ability to correctly evaluate selected businesses. Note that word “selected”: You don’t have to be an expert on every company, or even many. You only have to be able to evaluate companies within your circle of competence. The size of that circle is not very important; knowing its boundaries, however, is vital.

Circle Of Competence

Circle of Competence is simple: Each of us, through experience or study, has built up useful knowledge of certain areas of the world. Some areas are understood by most of us, while others require much more specialized knowledge to evaluate.

For example, most of us have a basic understanding of the economics of a restaurant: You rent or buy space, spend money to outfit the place and then hire employees to seat, serve, cook, and clean. (And, if you don’t want to do it yourself, manage.)

From there it’s a matter of generating enough traffic and setting the appropriate prices to generate a profit on the food and drinks you serve—after all of your operating expenses have been paid. Though the cuisine, atmosphere, and price points will vary by restaurant, they all have to follow the same economic formula.

That basic knowledge, along with some understanding of accounting and a little bit of study, would enable one to evaluate and invest in any number of restaurants and restaurant chains, public or private. It’s not all that complicated.

However, can most of us say we understand the workings of a microchip company or a biotech drug company at the same level? Perhaps not.

“I’m no genius. I’m smart in spots—but I stay around those spots.”

— Tom Watson Sr., Founder of IBM

But as Buffett so eloquently put it, we do not necessarily need to understand these more esoteric areas to invest capital. Far more important is to honestly define what we do know and stick to those areas. Our circle of competence can be widened, but only slowly and over time. Mistakes are most often made when straying from this discipline.

Circle of Competence applies outside of investing.

Buffett describes the circle of competence of one of his business managers, a Russian immigrant with poor English who built the largest furniture store in Nebraska:

I couldn’t have given her $200 million worth of Berkshire Hathaway stock when I bought the business because she doesn’t understand stock. She understands cash. She understands furniture. She understands real estate. She doesn’t understand stocks, so she doesn’t have anything to do with them. If you deal with Mrs. B in what I would call her circle of competence… She is going to buy 5,000 end tables this afternoon (if the price is right). She is going to buy 20 different carpets in odd lots, and everything else like that [snaps fingers] because she understands carpet. She wouldn’t buy 100 shares of General Motors if it was at 50 cents a share.

It did not hurt Mrs. B to have such a narrow area of competence. In fact, one could argue the opposite. Her rigid devotion to that area allowed her to focus. Only with that focus could she have overcome her handicaps to achieve such extreme success.

In fact, Charlie Munger takes this concept outside of business altogether and into the realm of life in general. The essential question he sought to answer: Where should we devote our limited time in life, in order to achieve the most success? Charlie’s simple prescription:

You have to figure out what your own aptitudes are. If you play games where other people have the aptitudes and you don’t, you’re going to lose. And that’s as close to certain as any prediction that you can make. You have to figure out where you’ve got an edge. And you’ve got to play within your own circle of competence.

If you want to be the best tennis player in the world, you may start out trying and soon find out that it’s hopeless—that other people blow right by you. However, if you want to become the best plumbing contractor in Bemidji, that is probably doable by two-thirds of you. It takes a will. It takes the intelligence. But after a while, you’d gradually know all about the plumbing business in Bemidji and master the art. That is an attainable objective, given enough discipline. And people who could never win a chess tournament or stand in center court in a respectable tennis tournament can rise quite high in life by slowly developing a circle of competence—which results partly from what they were born with and partly from what they slowly develop through work.

So, the simple takeaway here is clear. If you want to improve your odds of success in life and business then define the perimeter of your circle of competence, and operate inside. Over time, work to expand that circle but never fool yourself about where it stands today, and never be afraid to say “I don’t know.”

Circle of Competence is part of the Farnam Street latticework of mental models.

The Timeless Parable of Mr. Market

There is no one better to explain the concept of Mr. Market than Warren Buffett, who has used it to make billions of dollars and remain calm while all around him were losing their heads.

In the 1987 letter to Berkshire Hathaway shareholders, Buffett unfolds the concept for us.

Ben Graham, my friend and teacher, long ago described the mental attitude toward market fluctuations that I believe to be most conducive to investment success. He said that you should imagine market quotations as coming from a remarkably accommodating fellow named Mr. Market who is your partner in a private business. Without fail, Mr. Market appears daily and names a price at which he will either buy your interest or sell you his.

Even though the business that the two of you own may have economic characteristics that are stable, Mr. Market’s quotations will be anything but. For, sad to say, the poor fellow has incurable emotional problems. At times he feels euphoric and can see only the favorable factors affecting the business. When in that mood, he names a very high buy-sell price because he fears that you will snap up his interest and rob him of imminent gains. At other times he is depressed and can see nothing but trouble ahead for both the business and the world. On these occasions, he will name a very low price, since he is terrified that you will unload your interest on him.

Mr. Market has another endearing characteristic: He doesn’t mind being ignored. If his quotation is uninteresting to you today, he will be back with a new one tomorrow. Transactions are strictly at your option. Under these conditions, the more manic-depressive his behavior, the better for you.

But, like Cinderella at the ball, you must heed one warning or everything will turn into pumpkins and mice: Mr. Market is there to serve you, not to guide you. It is his pocketbook, not his wisdom, that you will find useful. If he shows up some day in a particularly foolish mood, you are free to either ignore him or to take advantage of him, but it will be disastrous if you fall under his influence. Indeed, if you aren’t certain that you understand and can value your business far better than Mr. Market, you don’t belong in the game. As they say in poker, “If you’ve been in the game 30 minutes and you don’t know who the patsy is, you’re the patsy.”

Ben’s Mr. Market allegory may seem out-of-date in today’s investment world, in which most professionals and academicians talk of efficient markets, dynamic hedging and betas. Their interest in such matters is understandable, since techniques shrouded in mystery clearly have value to the purveyor of investment advice. After all, what witch doctor has ever achieved fame and fortune by simply advising “Take two aspirins”?

The value of market esoterica to the consumer of investment advice is a different story. In my opinion, investment success will not be produced by arcane formulae, computer programs or signals flashed by the price behavior of stocks and markets. Rather an investor will succeed by coupling good business judgment with an ability to insulate his thoughts and behavior from the super-contagious emotions that swirl about the marketplace. In my own efforts to stay insulated, I have found it highly useful to keep Ben’s Mr. Market concept firmly in mind.

Following Ben’s teachings, Charlie and I let our marketable equities tell us by their operating results – not by their daily, or even yearly, price quotations – whether our investments are successful. The market may ignore business success for a while, but eventually will confirm it. As Ben said: “In the short run, the market is a voting machine but in the long run it is a weighing machine.” The speed at which a business’s success is recognized, furthermore, is not that important as long as the company’s intrinsic value is increasing at a satisfactory rate. In fact, delayed recognition can be an advantage: It may give us the chance to buy more of a good thing at a bargain price.

Of course, this concept can be applied outside of stock markets as well.

“In the short run, the market is a voting machine but in the long run it is a weighing machine.”

— Ben Graham

Mr. Market is a Farnam Street Mental Model.

Mental Model: Bias from Insensitivity to Sample Size

The widespread misunderstanding of randomness causes a lot of problems.

Today we’re going to explore a concept that causes a lot of human misjudgment. It’s called the bias from insensitivity to sample size, or, if you prefer, the law of small numbers.

Insensitivity to small sample sizes causes a lot of problems.

* * *

If I measured one person, who happened to be 6 feet tall, and then told you that everyone in the whole world was 6 feet tall, you’d intuitively realize this is a mistake. You’d say, you can’t measure only one person and then draw such a conclusion. To do that you’d need a much larger sample.

And, of course, you’d be right.

While simple, this example is a key building block to our understanding of how insensitivity to sample size can lead us astray.

As Stuart Sutherland writes in Irrationality:

Before drawing conclusions from information about a limited number of events (a sample) selected from a much larger number of events (the population) it is important to understand something about the statistics of samples.

In Thinking, Fast and Slow, Daniel Kahneman writes “A random event, by definition, does not lend itself to explanation, but collections of random events do behave in a highly regular fashion.” Kahneman continues, “extreme outcomes (both high and low) are more likely to be found in small than in large samples. This explanation is not causal.”

We all intuitively know that “the results of larger samples deserve more trust than smaller samples, and even people who are innocent of statistical knowledge have heard about this law of large numbers.”

The law of large numbers says that as the sample size grows larger, results converge toward a stable frequency. So, if we’re flipping coins and measuring the proportion of times that we get heads, we’d expect it to approach 50% over a large sample of, say, 100 flips, but not necessarily over 2 or 4.

In our minds, we often fail to account for the accuracy and uncertainty that come with a given sample size.

While we all understand it intuitively, it’s hard for us to realize in the moment of processing and decision making that larger samples are better representations than smaller samples.

We understand the difference between a sample size of 6 and 6,000,000 fairly well but we don’t, intuitively, understand the difference between 200 and 3,000.
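A quick simulation, a minimal sketch and nothing more, makes the point concrete: small samples routinely produce extreme proportions that large samples almost never do.

import random

random.seed(42)

def proportion_heads(n_flips):
    """Fraction of heads in n_flips tosses of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

for n in (4, 20, 200, 3_000):
    extreme = sum(abs(proportion_heads(n) - 0.5) > 0.10 for _ in range(2_000))
    print(f"n = {n:5d}: {100 * extreme / 2_000:5.1f}% of samples were off by more than 10 points")

# Tiny samples come out "extreme" most of the time; by n = 3,000 it essentially never happens.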

* * *

This bias comes in many forms.

In a telephone poll of 300 seniors, 60% support the president.

If you had to summarize the message of this sentence in exactly three words, what would they be? Almost certainly you would choose “elderly support president.” These words provide the gist of the story. The omitted details of the poll, that it was done on the phone with a sample of 300, are of no interest in themselves; they provide background information that attracts little attention. Of course, if the sample was extreme, say 6 people, you’d question it. Unless you’re fully mathematically equipped, however, you’ll intuitively judge the sample size and you may not react differently to a sample of, say, 150 and 3,000. That, in a nutshell, is exactly the meaning of the statement that “people are not adequately sensitive to sample size.”

Part of the problem is that we focus on the story over the reliability, or robustness, of the results.

System 1 thinking, that is, our intuition, is “not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. Unless the message is immediately negated, the associations that it evokes will spread as if the message were true.”

Considering sample size, unless it’s extreme, is not a part of our intuition.

Kahneman writes:

The exaggerated faith in small samples is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.

* * *

In engineering, for example, we can encounter this in the evaluation of precedent.

Steven Vick, in Degrees of Belief: Subjective Probability and Engineering Judgment, writes:

If something has worked before, the presumption is that it will work again without fail. That is, the probability of future success conditional on past success is taken as 1.0. Accordingly, a structure that has survived an earthquake would be assumed capable of surviving another earthquake of the same magnitude and distance, with the underlying presumption being that the operative causal factors must be the same. But the seismic ground motions are quite variable in their frequency content, attenuation characteristics, and many other factors, so that a precedent for a single earthquake represents a very small sample size.

Bayesian thinking tells us that a single success, absent other information, raises the likelihood of survival in the future.

In a way, this is related to robustness. The more you’ve had to handle while still surviving, the more robust you are.
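Here is a small sketch of that Bayesian point, using a standard Beta-Bernoulli update with an assumed uniform prior (the numbers are purely illustrative): one survival shifts the estimate only modestly, while many survivals shift it a lot.

def posterior_mean_survival(successes, failures, prior_a=1.0, prior_b=1.0):
    """Beta-Bernoulli update: expected survival probability given the evidence so far."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

print(posterior_mean_survival(0, 0))    # 0.50 -- no evidence, uniform prior
print(posterior_mean_survival(1, 0))    # ~0.67 -- one earthquake survived
print(posterior_mean_survival(20, 0))   # ~0.95 -- twenty survived: a much stronger case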

Let’s look at some other examples.

* * *

Hospital

Daniel Kahneman and Amos Tversky demonstrated our insensitivity to sample size with the following question:

A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50%, sometimes lower. For a period of 1 year, each hospital recorded the days on which more than 60% of the babies born were boys. Which hospital do you think recorded more such days?

  1. The larger hospital
  2. The smaller hospital
  3. About the same (that is, within 5% of each other)

Most people incorrectly choose 3. The correct answer is, however, 2.

In Judgment in Managerial Decision Making, Max Bazerman explains:

Most individuals choose 3, expecting the two hospitals to record a similar number of days on which 60 percent or more of the babies born are boys. People seem to have some basic idea of how unusual it is to have 60 percent of a random event occurring in a specific direction. However, statistics tells us that we are much more likely to observe 60 percent of male babies in a smaller sample than in a larger sample. This effect is easy to understand. Think about which is more likely: getting more than 60 percent heads in three flips of a coin or getting more than 60 percent heads in 3,000 flips.
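A short simulation, a sketch built from the numbers in the question, confirms the explanation above: the smaller hospital records far more days on which more than 60% of births are boys.

import random

random.seed(0)

def days_over_60_percent(births_per_day, days=365):
    """Count the days on which more than 60% of births were boys."""
    count = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.60:
            count += 1
    return count

print("Small hospital (15 births/day):", days_over_60_percent(15))   # roughly 50-60 days
print("Large hospital (45 births/day):", days_over_60_percent(45))   # roughly 20-30 days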

* * *

Another interesting example comes from Poker.

Over short periods of time luck is more important than skill. The more luck contributes to the outcome, the larger the sample you’ll need to distinguish between someone’s skill and pure chance.

David Einhorn explains.

People ask me “Is poker luck?” and “Is investing luck?”

The answer is, not at all. But sample sizes matter. On any given day a good investor or a good poker player can lose money. Any stock investment can turn out to be a loser no matter how large the edge appears. Same for a poker hand. One poker tournament isn’t very different from a coin-flipping contest and neither is six months of investment results.

On that basis luck plays a role. But over time – over thousands of hands against a variety of players and over hundreds of investments in a variety of market environments – skill wins out.

As the number of hands played increases, skill plays a larger and larger role and luck plays less of a role.
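A minimal simulation makes the same point (every number below is an assumption chosen only for illustration): a player with a small genuine edge per hand still posts plenty of losing stretches over a few hundred hands, but the edge is unmistakable once the sample is large.

import random

random.seed(7)

def total_profit(hands, edge_per_hand=0.05, swing=1.0):
    """Profit after a number of hands, each with a small positive edge and a lot of variance."""
    return sum(edge_per_hand + random.gauss(0, swing) for _ in range(hands))

for hands in (200, 2_000, 20_000):
    losing_runs = sum(total_profit(hands) < 0 for _ in range(200))
    print(f"{hands:6d} hands: lost money in {100 * losing_runs / 200:4.1f}% of 200 simulated runs")

# Expect roughly a quarter of runs to lose at 200 hands, ~1% at 2,000, and essentially none at 20,000.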

* * *

But this goes way beyond hospitals and poker. Baseball is another good example. Over a long season, odds are the best teams will rise to the top. In the short term, anything can happen. If you look at the standings 10 games into the season, odds are they will not be representative of where things will land after the full 162-game season. In the short term, luck plays too much of a role.

In Moneyball, Michael Lewis writes “In a five-game series, the worst team in baseball will beat the best about 15% of the time.”
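As a worked check on that kind of claim (the per-game probability below is my assumption for illustration, not Lewis's figure), the chance of the weaker team taking a short series follows directly from the binomial distribution.

from math import comb

def series_win_probability(p_game, best_of=5):
    """Probability of winning the majority of a best-of-N series, given a per-game win probability."""
    wins_needed = best_of // 2 + 1
    return sum(comb(best_of, k) * p_game**k * (1 - p_game)**(best_of - k)
               for k in range(wins_needed, best_of + 1))

# Suppose the worst team would beat the best team in about 30% of individual games.
print(f"{series_win_probability(0.30):.1%}")    # ~16% -- in the neighborhood of Lewis's 15%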

* * *

If you promote people or work with colleagues you’ll also want to keep this bias in mind.

If you assume that performance at work is some combination of skill and luck you can easily see that sample size is relevant to the reliability of performance.

Performance sampling works like anything else: the bigger the sample size, the bigger the reduction in uncertainty and the more likely you are to make good decisions.

This has been studied by one of my favorite thinkers, James March. He calls it the false record effect.

He writes:

False Record Effect. A group of managers of identical (moderate) ability will show considerable variation in their performance records in the short run. Some will be found at one end of the distribution and will be viewed as outstanding; others will be at the other end and will be viewed as ineffective. The longer a manager stays in a job, the less the probable difference between the observed record of performance and actual ability. Time on the job increased the expected sample of observations, reduced expected sampling error, and thus reduced the chance that the manager (of moderate ability) will either be promoted or exit.

Hero Effect. Within a group of managers of varying abilities, the faster the rate of promotion, the less likely it is to be justified. Performance records are produced by a combination of underlying ability and sampling variation. Managers who have good records are more likely to have high ability than managers who have poor records, but the reliability of the differentiation is small when records are short.

(I realize promotions are a lot more complicated than I’m letting on. Some jobs, for example, are more difficult than others. It gets messy quickly and that’s part of the problem. Often when things get messy we turn off our brains and concoct the simplest explanation we can. Simple but wrong. I’m only pointing out that sample size is one input into the decision. I’m by no means advocating an “experience is best” approach, as that comes with a host of other problems.)
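Here is a minimal sketch of the false record effect (all parameters invented for illustration): fifty managers of identical ability are ranked on their observed records, and the gap between the apparent star and the apparent dud is mostly noise that shrinks as the record lengthens.

import random

random.seed(3)

def observed_record(true_ability, periods):
    """Average observed performance over a number of periods, with noise around true ability."""
    return sum(true_ability + random.gauss(0, 1) for _ in range(periods)) / periods

for periods in (4, 16, 64):
    records = [observed_record(true_ability=0.0, periods=periods) for _ in range(50)]
    spread = max(records) - min(records)
    print(f"{periods:3d} periods on the job: best-to-worst gap among identical managers = {spread:.2f}")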

* * *

This bias is also used against you in advertising.

The next time you see a commercial that says “4 out of 5 doctors recommend…,” remember that the result is meaningless without knowing the sample size. Odds are pretty good that the sample size is 5.

* * *

Large sample sizes are not a panacea. Things change. Systems evolve and faith in those results can be unfounded as well.

The key, at all times, is to think.

This bias leads to a whole slew of things, such as:
– under-estimating risk
– over-estimating risk
– undue confidence in trends/patterns
– undue confidence in the lack of side-effects/problems

The Bias from insensitivity to sample size is part of the Farnam Street latticework of mental models.

The Nature of Explanation

We unconsciously construct mental models of the world, and these models aid our thinking.

This idea is not new. In fact, in 1943 Kenneth Craik proposed that thinking is the manipulation of internal representations of the world in his book The Nature of Explanation.

“This deceptively simple notion,” argues Philip Johnson-Laird in Mental Models, “has rarely been taken sufficiently seriously by psychologists, particularly by those studying language and thought.”

They certainly argue that there are mental representations — images, or strings of symbols — and that information in them is processed by the mind; but they ignore a crucial issue: what it is that makes a mental entity a representation of something. In consequence, psychological theories of meaning almost invariably fail to deal satisfactorily with referential phenomena. A similar neglect of the subtleties of mental representation has led to psychological theories of reasoning that almost invariably assume, either explicitly or implicitly, the existence of a mental logic.

Explanation depends on understanding. If you don’t understand something, you cannot explain it. But what is an explanation? “It is easier to give criteria for what counts as understanding than to capture its essence — perhaps because it has no essence,” writes Johnson-Laird.

This will no doubt strike many of you as fuzzy. Justice Stewart found it impossible to formulate a test for obscenity but nevertheless asserted, “I know it when I see it.” We can do the same, in an inexact yet useful way, when it comes to explanations.

Explanations certainly require knowledge and understanding. Johnson-Laird writes:

If you know what causes a phenomenon, what results from it, how to influence, control, initiate, or prevent it, how it relates to other states of affairs or how it resembles them, how to predict its onset and course, what its internal or underlying “structure” is, then to some extent you understand it.

The psychological core of understanding, I shall assume, consists in your having a “working model” of the phenomenon in your mind. If you understand inflation, a mathematical proof, the way a computer works, DNA or a divorce, then you have a mental representation that serves as a model of an entity in much the same way as, say, a clock functions as a model of the earth’s rotation.

This is where Kenneth Craik comes into the picture. His 1943 book The Nature of Explanation was one of the first, if not the first, to propose that human beings think by manipulating internal representations of the world. This manipulation — or reasoning — involves three distinct processes:

1. A translation of some external process into an internal representation in terms of words, numbers, or other symbols.
2. The derivation of other symbols from them by some sort of inferential process.
3. A re-translation of these symbols into actions, or at least a recognition of the correspondence between these symbols and external events, as in realizing that a prediction is fulfilled.
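As a toy illustration of those three stages (the bridge numbers are invented and deliberately echo the margin-of-safety example earlier; this is only a sketch of the idea, not Craik's own example), a few lines of code can translate a physical question into symbols, derive new symbols from them, and retranslate the result into action.

# Stage 1: translate the external situation into symbols (numbers).
expected_load_tons = 5_000
design_capacity_tons = 20_000

# Stage 2: derive other symbols from them by an inferential process.
safety_factor = design_capacity_tons / expected_load_tons

# Stage 3: retranslate the symbols into action in the external world.
if safety_factor >= 2.0:
    print(f"Safety factor {safety_factor:.1f}: go ahead and build the bridge.")
else:
    print(f"Safety factor {safety_factor:.1f}: redesign before building.")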

In The Nature of Explanation Craik writes this beautiful passage:

One other point is clear; this process of reasoning has produced a final result similar to that which might have been reached by causing the actual physical processes to occur (e.g. building the bridge haphazard and measuring its strength or compounding certain chemicals and seeing what happened); but it is also clear that this is not what has happened; the man’s mind does not contain a material bridge or the required chemicals. Surely, however, this process of prediction is not unique to minds, though no doubt it is hard to imitate the flexibility and versatility of mental prediction. A calculating machine, an anti-aircraft ‘predictor’, and Kelvin’s tidal predictor all show the same ability. In all these latter cases, the physical process which it is desired to predict is imitated by some mechanical device or model which is cheaper, or quicker, or more convenient in operation. Here we have a very close parallel to our three stages of reasoning: the ‘translation’ of the external processes into their representatives (positions of gears, etc.) in the model; the arrival at other positions of gears, etc., by mechanical processes in the instrument; and finally, the retranslation of these into physical processes of the original type.

By a model we thus mean any physical or chemical system, which has a similar relation-structure to that of the process it imitates. By relation-structure I do not mean some obscure non-physical entity which attends the model, but the fact that it is a physical working model which works in the same way as the process it parallels, in the aspects under consideration at any moment. Thus, the model need not resemble the real object pictorially; Kelvin’s tide predictor, which consists of a number of pulleys on levers, does not resemble a tide in appearance, but it works in the same way in certain essential respects: it combines oscillations of various frequencies so as to produce an oscillation which closely resembles in amplitude at each moment the variation in tide level at any place. …

My hypothesis then is that thought models, or parallels, reality — that its essential feature is not ‘the mind’, ‘the self’, ‘sense-data’ nor propositions but symbolism, and that this symbolism is largely of the same kind as that which is familiar to us in mechanical devices which aid thought and calculation. …

If the organism carries a ‘small-scale model’ of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilize the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it. Most of the greatest advances of modern technology have been instruments, which extended the scope of our sense-organs, our brains or our limbs. Such are telescopes and microscopes, wireless, calculating machines, typewriters, motor cars, ships and aeroplanes. Is it not possible, therefore, that our brains themselves utilize comparable mechanisms to achieve the same ends and that these mechanisms can parallel phenomena in the external world as a calculating machine can parallel the development of strains in a bridge?

Small models of reality need neither be wholly accurate nor correspond completely with what they model to be useful. Your model of an iPhone may contain only the idea of a rectangle that serves multiple functions such as sending and receiving data, apps, displaying moving pictures with accompanying sound. Alternatively, it may consist of an understanding of the programming necessary to make the device work, the protocols, the physical limitations, and how the display actually functions, in which case you’ve eclipsed me. Your model may be deeper still, into the hardware and how it works, etc. A person who repairs iPhones is likely to have a more comprehensive model of them than someone who only operates one. The engineers at Apple are likely to have a richer model than most of us.

What must be questioned now is whether adding information increases the usefulness of the model. If I explain how operating systems and APIs work, you will have a much richer model of an iPhone. For some of you that will mean a more useful model and for some it will not.

“Many of the models in people’s minds are,” Johnson-Laird writes, “little more than high-grade simulations, but they are none the less useful provided that the picture is accurate; all representations of physical phenomena necessarily contain an element of simulation.”

So the nature of an explanation is to understand something – to have a working model of it. All explanations are incomplete because at some point they all must take something for granted. When you explain something to another person, “what is conveyed is a blueprint for the construction of a working model.”

Obviously, a satisfactory blueprint for one individual may be grossly inadequate for another, since any set of instructions demands the knowledge and ability to understand them. … In most domains of expertise, there is a consensus about what counts as a satisfactory explanation — a consensus based on common knowledge and formulable in public criteria.

Still Curious? Try reading these three books in the following order: 1) The Nature of Explanation; 2) Mental Models; and 3) How We Reason.

Mental Model: Equilibrium

There are many ways in which you can visualize the concept of equilibrium, but one of the simplest comes from Boombustology where a ball sits on a simple curved shape.


A situation in which equilibrium is possible is one in which over time, if left to its own devices, the ball will find one unique location. Overshooting and undershooting this unique location is self-correcting. A situation of disequilibrium, however, is one in which the ball is unable to find a unique location. A ball in such a state does not generate self-correcting moves that dampen its moves toward a theoretical “equilibrium” or resting spot; rather, disequilibrium generates motion that is self-reinforcing and accelerates the ball’s move away from any stable state.
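A toy recurrence (purely illustrative) captures the distinction: in the equilibrium case each step shrinks the ball's displacement, while in the disequilibrium case each step amplifies it.

def simulate(initial_displacement, feedback, steps=6):
    """Track the ball's displacement over time; negative feedback shrinks it, positive feedback grows it."""
    x = initial_displacement
    path = [round(x, 3)]
    for _ in range(steps):
        x = x + feedback * x          # each step reacts to the current displacement
        path.append(round(x, 3))
    return path

print("Self-correcting  (feedback = -0.5):", simulate(1.0, -0.5))   # 1.0, 0.5, 0.25, ... -> settles at 0
print("Self-reinforcing (feedback = +0.5):", simulate(1.0, +0.5))   # 1.0, 1.5, 2.25, ... -> runs away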

Let’s take a step back and thank Newton.

In the Principia, he describes his three laws of motion. Applied to the planets, these laws allowed Newton to demonstrate how gravitational forces act between two bodies. He showed that the pull of the sun’s gravity on a planet (drawing it inward) is balanced against the planet’s forward motion. These two opposing influences, held in balance, create a state of equilibrium.

Equilibrium is a balance between one or more opposing forces. As you can imagine, different types of equilibrium exist. Static equilibrium is when a system is at rest. Dynamic equilibrium is when two or more forces are equally matched. Robert Hagstrom, in The Last Liberal Art, helps illustrate the difference between the two:

A scale that is equally weighted on both sides is an example of static equilibrium. Fill a bathtub full of water and then turn off the faucet and you will observe static equilibrium. But if you unplug the drain and then turn on the faucet so the level of the bathtub does not change, you are witnessing dynamic equilibrium. Another example is the human body. It remains in dynamic equilibrium so long as the heat loss from cooling remains in balance with the consumption of sugars.

Supply and Demand + Equilibrium

The law of supply and demand, from economics, is another example of equilibrium at work.

In 1997 Warren Buffett, through his company Berkshire Hathaway, purchased 111.2 million ounces of silver based on his understanding of equilibrium. In his annual letter for that year, he succinctly sums up the investment:

In recent years, bullion inventories have fallen materially, and last summer Charlie (Munger) and I concluded that a higher price would be needed to establish equilibrium between supply and demand.

Too little supply and equilibrium is out of balance. Buffett (correctly) bet that the only way to bring the market back into a state of equilibrium was rising prices. Demand, the balancing force to supply, can also result in successful investments.

In his 2011 shareholder letter, Buffett again illustrates the concept of equilibrium through supply and demand.

Today the world’s gold stock is about 170,000 metric tons. If all of this gold were melded together, it would form a cube of about 68 feet per side. (Picture it fitting comfortably within a baseball infield.) At $1,750 per ounce – gold’s price as I write this – its value would be $9.6 trillion. Call this cube pile A. Let’s now create a pile B costing an equal amount. For that, we could buy all U.S. cropland (400 million acres with output of about $200 billion annually), plus 16 Exxon Mobils (the world’s most profitable company, one earning more than $40 billion annually). After these purchases, we would have about $1 trillion left over for walking-around money (no sense feeling strapped after this buying binge). Can you imagine an investor with $9.6 trillion selecting pile A over pile B?

Beyond the staggering valuation given the existing stock of gold, current prices make today’s annual production of gold command about $160 billion. Buyers – whether jewelry and industrial users, frightened individuals, or speculators – must continually absorb this additional supply to merely maintain an equilibrium at present prices.

A century from now the 400 million acres of farmland will have produced staggering amounts of corn, wheat, cotton, and other crops – and will continue to produce that valuable bounty, whatever the currency may be. Exxon Mobil will probably have delivered trillions of dollars in dividends to its owners and will also hold assets worth many more trillions (and, remember, you get 16 Exxons). The 170,000 tons of gold will be unchanged in size and still incapable of producing anything. You can fondle the cube, but it will not respond.

Admittedly, when people a century from now are fearful, it’s likely many will still rush to gold. I’m confident, however, that the $9.6 trillion current valuation of pile A will compound over the century at a rate far inferior to that achieved by pile B.

In Boombustology, Mansharamani writes:

Inherent in most equilibrium-oriented approaches is a belief that higher prices generate new supply that tends to push prices down. Likewise, it is believed that lower prices generate new demand that tends to push prices up. In this way, deviations from an appropriate price level are self-correcting.
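A tiny sketch of that self-correcting mechanism (the linear supply and demand curves below are invented purely for illustration): the price gets nudged by excess demand until the two sides balance.

def quantity_demanded(price):
    return 100 - 2 * price        # assumed: buyers want less as the price rises

def quantity_supplied(price):
    return 10 + price             # assumed: producers offer more as the price rises

price = 5.0
for step in range(12):
    excess_demand = quantity_demanded(price) - quantity_supplied(price)
    price += 0.1 * excess_demand  # shortages push the price up, gluts push it down
    print(f"step {step:2d}: price = {price:5.2f}")

# The price converges toward 30, where quantity demanded equals quantity supplied.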

A grasp of supply and demand can help us make better investment decisions. The producers of undifferentiated goods (e.g., aluminium cans) are usually poor investments because the only way they will earn adequate returns is under conditions of tight supply. If any excess capacity exists in the industry, prices will trend down toward the cost of production. In this case, owners are left with unsatisfactory returns on their investment.

The only real winners are the low-cost producers. As prices trend down, only they can maintain full production, whereas high-cost competitors must cut production, which starts reducing supply and moves the industry towards equilibrium. When business picks up again, as it inevitably does, the production that was once shuttered comes back online. Only low-cost producers can operate through the cycle. Opportunities to profit from equilibrium exist when demand outstrips capacity, which usually results from: (1) a positive change in demand or (2) a negative change in supply.

While seductively simple, this model of equilibrium in financial markets is somewhat incomplete. We must consider reflexivity.

George Soros writes, “Reflexivity is, in effect, a two-way feedback mechanism in which reality helps shape the participants’ thinking and the participants’ thinking helps shape reality in an unending process in which thinking and reality may come to approach each other but can never become identical.”

The implications of reflexivity on financial markets are quite profound, particularly with regard to the existence of an equilibrium price. Soros describes these implications in his own words succinctly:

Instead of a tendency towards some kind of theoretical equilibrium, the participants’ views and actual state of affairs enter into a process of dynamic disequilibrium, which may be self-reinforcing at first, moving both thinking and reality in a certain direction, but is bound to become unsustainable in the long run and engender a move in the opposite direction.

Soros’ testimony in 1994 to the House Banking Committee summarizes his theory of reflexivity and how it manifests itself in financial markets:

I must state at the outset that I am in fundamental disagreement with the prevailing wisdom. The generally accepted theory is that markets tend towards equilibrium and on the whole discount the future correctly. I operate using a different theory, according to which financial markets cannot possibly discount the future correctly because they do not merely discount the future, they help to shape it. In certain circumstances, financial markets can affect the so-called fundamentals which they are supposed to reflect. When that happens, markets enter into a state of dynamic disequilibrium and behave quite differently than what would be considered normal by the theory of efficient markets. Such boom/bust sequences do not arise very often, but when they do, they can be very disruptive, precisely because they affect the fundamentals of the economy.

In Boombustology, Mansharamani writes:

… financial extremes are characterized by two primary components: a prevailing trend that exists in reality and a misconception relating to it. He often uses real estate as an example to illustrate this point. The prevailing trend in reality is that there is an increased willingness to lend and a corresponding rise in prices. The misconception relating to this trend is that the prices of real estate are independent of the willingness to lend. Further, as more banks become willing to lend, and the number of buyers therefore rises, the prices of real estate rise—thereby making the banks feel more secure (given higher collateral values) and driving more lending.

Feedback Loops and Equilibrium

In Universal Principles of Design, William Lidwell and his co-authors write:

Every action creates an equal and opposite reaction. When reactions loop back to affect themselves, a feedback loop is created. All real-world systems are composed of many such interacting feedback loops — animals, machines, businesses, and ecosystems, to name a few. There are two types of feedback loops: positive and negative. Positive feedback amplifies system output, resulting in growth or decline. Negative feedback dampens output, stabilizing the system around an equilibrium point.

Positive feedback loops are effective for creating change, but generally result in negative consequences if not moderated by negative feedback loops. For example, in response to head and neck injuries in football in the late 1950s, designers created plastic football helmets with internal padding to replace leather helmets. The helmets provided more protection, but induced players to take increasingly greater risks when tackling. More head and neck injuries occurred (after the introduction of plastic helmets) than before. By concentrating on the problem in isolation (e.g., not considering changes in player behavior), designers inadvertently created a positive feedback loop in which players used their heads and necks in increasingly risky ways. This resulted in more injuries, which resulted in additional redesigns that made the helmet shells harder and more padded, and so on.

Negative feedback loops are effective for resisting change. For example, the Segway Human Transporter uses negative feedback loops to maintain equilibrium. As a rider leans forward or backward, the Segway accelerates or decelerates to keep the system in equilibrium. To achieve this smoothly, the Segway makes hundreds of adjustments every second. Given the high adjustment rate, the oscillations around the point of equilibrium are so small as to not be detectable. However, if fewer adjustments were made per second, the oscillations would increase in size and the ride would become increasingly jerky.
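A minimal sketch of that idea (a crude proportional controller, not the Segway's actual control algorithm) shows why more frequent corrections keep the oscillations around equilibrium smaller.

def max_tilt(adjustments_per_second, seconds=5, lean_rate=1.0):
    """Largest deviation from upright when corrections are applied at a given rate."""
    dt = 1.0 / adjustments_per_second
    tilt = 0.0
    worst = 0.0
    for _ in range(int(seconds * adjustments_per_second)):
        tilt += lean_rate * dt           # the rider's lean accumulates between corrections
        worst = max(worst, abs(tilt))    # how far it tips before the next correction
        tilt -= 0.9 * tilt               # negative feedback: remove most of the error
    return worst

for rate in (2, 10, 100):
    print(f"{rate:3d} adjustments/sec -> worst tilt {max_tilt(rate):.3f}")

# Fewer adjustments per second mean larger swings around the equilibrium point -- a jerkier ride.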

Diseases and Equilibrium

Malcolm Gladwell illustrates this in The Tipping Point with a hypothetical outbreak of the flu.

Suppose, for example, that one summer 1,000 tourists come to Manhattan from Canada carrying an untreatable strain of twenty-four-hour virus. This strain of flu has a 2 percent infection rate, which is to say that one out of every 50 people who come into close contact with someone carrying it catches the bug himself. Let’s say that 50 is also exactly the number of people the average Manhattanite — in the course of riding the subways and mingling with colleagues at work — comes into contact with every day. What we have, then, is a disease in equilibrium. Those 1,000 Canadian tourists pass on the virus to 1,000 new people on the day they arrive. And the next day those 1,000 newly infected people pass on the virus to another 1,000 people, just as the original 1,000 tourists who started the epidemic are returning to health. With those getting sick and those getting well so perfectly in balance, the flu chugs along at a steady but unspectacular clip through the rest of summer and fall.

But then comes the Christmas season. The subways and buses get more crowded with tourists and shoppers, and instead of running into an even 50 people a day, the average Manhattanite now has close contact with, say, 55 people a day. All of a sudden, the equilibrium is disrupted. The 1,000 flu carriers now run into 55,000 people a day and at a 2 percent infection rate, that translates into 1,100 cases the following day. Those 1,100, in turn, are now passing on their virus to 55,000 people as well, so that by day three there are 1,210 Manhattanites with the flu and by day four 1,331 and by the end of the week there are nearly 2,000, and so on up, in an exponential spiral until Manhattan has a full-blown flu epidemic on its hands by Christmas Day. That moment when the average flu carrier went from running into 50 people a day to running into 55 was the Tipping Point. It was the point at which an ordinary and stable phenomenon — a low-level flu outbreak — turned into a public health crisis. If you were to draw a graph of the progress of the Canadian flu epidemic, the Tipping Point would be the point on the graph where it suddenly turned upward.
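A small sketch of Gladwell's arithmetic (just the numbers from the passage, iterated day by day) shows how the shift from 50 to 55 contacts turns a flat line into an exponential curve.

def carriers_over_time(contacts_per_day, days=5, infection_rate=0.02, initial=1_000):
    """Daily carrier counts for a 24-hour virus: yesterday's carriers recover, their infections replace them."""
    carriers = initial
    history = [carriers]
    for _ in range(days):
        carriers = carriers * contacts_per_day * infection_rate
        history.append(round(carriers))
    return history

print("50 contacts/day:", carriers_over_time(50))   # [1000, 1000, 1000, ...] -- equilibrium
print("55 contacts/day:", carriers_over_time(55))   # [1000, 1100, 1210, 1331, ...] -- the tipping point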

Equilibrium is part of the Farnam Street latticework of mental models.