
An Introduction to the Mental Model of Redundancy (with examples)

“The reliability that matters is not the simple reliability of one component of a system,
but the final reliability of the total control system.”

Garrett Hardin

***

We learn from engineering that critical systems often require backup systems to guarantee a certain level of performance and minimize downtime. Such systems are resilient to adverse conditions: if one component fails, there is spare capacity or a backup to take over.

A simple example where you want to factor in a large margin of safety is a bridge. David Dodd, a longtime colleague of Benjamin Graham, observed “You build a bridge that 30,000-pound trucks can go across and then you drive 10,000-pound trucks across it. That is the way I like to go across bridges.”

Looking at failures, we can gain many insights into redundancy.

There are many cases where the presence of redundant systems would have averted catastrophe. On the other hand, there are also cases where redundancy itself caused the failure.

How can redundancy cause failure?

First, in certain cases, the added benefits of redundancy are outweighed by the risks of added complexity. Since adding redundancy increases the complexity of a system, efforts to increase reliability and safety through redundant systems may backfire and inadvertently make systems more susceptible to failure. An example of how adding complexity to a system can increase the odds of failure can be found in the near-meltdown of the Fermi reactor in 1966. That incident was caused by an emergency safety device that broke off and blocked a pipe, stopping the flow of coolant into the reactor core. Luckily, this happened before the plant had reached full-power operation.

Second, redundancy among people can lead to a diffusion of responsibility, where everyone assumes someone else is taking care of the task.

Third, redundancy can lead to increasingly risky behavior.

* * *

In Reliability Engineering for Electronic Design, Norman Fuqua gives a great introduction to the concept of redundancy.

Webster's defines redundancy as needless repetition. In reliability engineering, however, redundancy is defined as the existence of more than one means for accomplishing a given task. Thus all of these means must fail before there is a system failure.

Under certain circumstances during system design, it may become necessary to consider the use of redundancy to reduce the probability of system failure–to enhance system reliability–by providing more than one functional path or operating element in areas that are critically important to system success. The use of redundancy is not a panacea to solve all reliability problems, nor is it a substitute for good initial design. By its very nature, redundancy implies increased complexity, increased weight and space, increased power consumption, and usually a more complicated system …

In Seeking Wisdom, Peter Bevelin mentioned some interesting quotes from Buffett and Munger that speak to the concept of redundancy/resilience from the perspective of business:

Charlie Munger
Of course you prefer a business that will prosper even if it is not managed well. We are not looking for mismanagement; we like the capacity to withstand it if we stumble into it….We try and operate so that it wouldn’t be too awful for us if something really extreme happened – like interest rates at 1% or interest rates at 20%… We try to arrange [our affairs] so that no matter what happens, we’ll never have to “go back to go.”

Warren Buffett uses the concept of margin of safety for investing and insurance:
We insist on a margin of safety in our purchase price. If we calculate the value of a common stock to be only slightly higher than its price, we’re not interested in buying. We believe this margin-of-safety principle, so strongly emphasized by Ben Graham, to be the cornerstone of investment success.

David Dodd, on the same topic, writes:

You don’t try to buy something for $80 million that you think is worth $83,400,000.

Buffett on Insurance:

If we can’t tolerate a possible consequence, remote though it may be, we steer clear of planting its seeds.

The pitfalls of this business mandate an operating principle that too often is ignored: Though certain long-tail lines may prove profitable at combined ratios of 110 or 115, insurers will invariably find it unprofitable to price using those ratios as targets. Instead, prices must provide a healthy margin of safety against the societal trends that are forever springing expensive surprises on the insurance industry.

Confucius comments:

The superior man, when resting in safety, does not forget that danger may come. When in a state of security he does not forget the possibility of ruin. When all is orderly, he does not forget that disorder may come. Thus his person is not endangered, and his States and all their clans are preserved.

Warren Buffett talked about redundancy from a business perspective at the 2009 shareholder meeting:

Question: You’ve talked a lot about opportunity-costs. Can you discuss more important decisions over the past year?

Buffett: When both prices are moving and in certain cases intrinsic business value moving at a pace that’s far greater than we’ve seen – it’s tougher, more interesting and more challenging and can be more profitable. But, it’s a different task than when things were moving at a more leisurely pace. We faced that problem in September and October. We want to always keep a lot of money around. We have so many extra levels of safety we follow at Berkshire.

We got a call on Goldman on a Wednesday – that couldn’t have been done the previous Wednesday or the next Wednesday. We were faced with opportunity-cost – and we sold something that under normal circumstances we wouldn’t.

Jonathan Bendor, writing in Parallel Systems: Redundancy in Government, provides an example of how redundancy can reduce the risk of failure in cars.

Suppose an automobile had dual breaking (sic) circuits: each circuit can stop the car, and the circuits operate independently so that if one malfunctions it does not impair the other. If the probability of either one failing is 1/10, the probability of both failing simultaneously is (1/10)^2, or 1/100. Add a third independent circuit and the probability of the catastrophic failure of no brakes at all drops to (1/10)^3, or 1/1,000.
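To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not code from Bendor's work) that computes the probability of total system failure for n independent redundant components, using the 1-in-10 figure from the brake example above:

```python
# Sketch of independent redundancy: with n independent backups that each
# fail with probability p, the whole system fails only if all of them fail.

def failure_probability(p: float, n: int) -> float:
    """Probability that all n independent components fail at the same time."""
    return p ** n

if __name__ == "__main__":
    p = 0.1  # each braking circuit fails, say, 1 time in 10
    for n in (1, 2, 3):
        print(f"{n} circuit(s): P(total failure) = {failure_probability(p, n):.3f}")
    # 1 circuit(s): P(total failure) = 0.100
    # 2 circuit(s): P(total failure) = 0.010
    # 3 circuit(s): P(total failure) = 0.001
```

The multiplication only works, of course, if the circuits really do fail independently, an assumption the Sagan excerpt below calls into question.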

Airplane design provides an insightful example. From the Code of Federal Regulations:

The airplane systems and associated components, considered separately and in relation to other systems, must be designed so that the occurrence of any failure condition which would prevent the continued safe flight and landing of the airplane is extremely improbable, and the occurrence of any other failure conditions which would reduce the capacity of the airplane or the ability of the crew to cope with adverse operating conditions is improbable.

* * *

Ways redundancy can fail

In The Problem of Redundancy Problem: Why More Nuclear Security Forces May Produce Less Nuclear Security, Scott Sagan writes:

The first problem with redundancy is that adding extra components can inadvertently create a catastrophic common-mode error (a fault that causes all the components to fail). In complex systems, independence in theory (or in design) is not necessarily independence in fact. As long as there is some possibility of unplanned interactions between the components leading to common-mode errors, however, there will be inherent limits to the effectiveness of redundancy as a solution to reliability problems. The counterproductive effects of redundancy when extra components present even a small chance of producing a catastrophic common-mode error can be dramatic.

This danger is perhaps most easily understood through a simple example from the commercial aircraft industry. Aircraft manufacturers have to determine how many engines to use on jumbo jets. Cost is clearly an important factor entering their calculations. Yet so is safety, since each additional engine on an aircraft both increases the likelihood that the redundant engine will keep the plane in the air if all others fail in flight and increases the probability that a single engine will cause an accident, by blowing up or starting a fire that destroys all the other engines and the aircraft itself.

I assume that 40% of the time that each engine fails, it does so in a way (such as starting a catastrophic fire) that causes all the other engines to fail as well.

Aircraft manufacturers make similar calculations in order to estimate how many engines would maximize safety. Boeing, for example, used such an analysis to determine that, given the reliability of modern jet engines, putting two engines on the Boeing 777, rather than three or more engines as exist on many other long-range aircraft, would result in lower risks of serious accidents.

In more complex systems or organizations, however, it is often difficult to know when to stop adding redundant safety devices because of the inherent problem of predicting the probabilities of exceedingly rare events.
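The engine trade-off Sagan describes is easy to sketch with a few lines of Python. The model below is a rough illustration of my own, not the calculation Boeing or Sagan actually used: the per-engine failure probability is hypothetical, and the only figure taken from the passage is the assumption that 40% of engine failures are of the common-mode, aircraft-destroying kind. It shows how, once some failures can take out every engine, adding engines past a certain point increases rather than decreases the chance of catastrophe:

```python
# Hypothetical illustration of the common-mode trade-off Sagan describes.
# An engine failure is either "catastrophic" (it dooms the whole aircraft)
# or "benign" (contained to that engine). More engines reduce the chance
# that every engine fails benignly, but raise the chance that at least one
# fails catastrophically.

def crash_probability(p_fail: float, p_common: float, n_engines: int) -> float:
    """P(crash) = P(at least one catastrophic engine failure)
                + P(no catastrophic failure, but every engine fails benignly)."""
    p_catastrophic = p_fail * p_common        # failure that destroys the aircraft
    p_benign = p_fail * (1.0 - p_common)      # failure contained to one engine
    return (1.0 - (1.0 - p_catastrophic) ** n_engines) + p_benign ** n_engines

if __name__ == "__main__":
    P_FAIL = 0.01   # hypothetical per-engine failure probability per flight
    P_COMMON = 0.4  # Sagan's assumption: 40% of failures doom the aircraft
    for n in range(1, 5):
        print(f"{n} engine(s): P(crash) = {crash_probability(P_FAIL, P_COMMON, n):.6f}")
    # With these numbers the risk is lowest at two engines; each additional
    # engine adds more common-mode risk than independent redundancy removes.
```

With different inputs the optimum shifts, which is Sagan's larger point: the value of another redundant component depends entirely on how likely it is to fail in a way that takes the rest of the system down with it.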

The second way in which redundancy can backfire is when diffusion of responsibility leads to “social shirking.”

This common phenomenon—in which individuals or groups reduce their reliability in the belief that others will take up the slack—is rarely examined in the technical literature on safety and reliability because of a “translation problem” that exists when transferring redundancy theory from purely mechanical systems to complex organizations. In mechanical engineering, the redundant units are usually inanimate objects, unaware of each other’s existence. In organizations, however, we are usually analyzing redundant individuals, groups, or agencies, backup systems that are aware of one another.

The third basic way in which redundancy can be counterproductive is when the addition of extra components encourages individuals or organizations to increase production in dangerous ways. In most settings, individuals and organizations face both production pressures and pressure to be safe and secure. If improvements in safety and security, however, lead individuals to engage in inherently risky behavior—driving faster, flying higher, producing more nuclear energy, etc.—then expected increases in system reliability could be reduced or even eliminated. Research demonstrates, for example, that laws requiring “baby-proof” safety caps on aspirin bottles have led to an increase in child poisoning because parents leave the bottles outside the medicine cabinet.

* * *

Another example of people being overconfident in redundant systems can be found in the Challenger disaster:

A dramatic case in point is the January 1986 space shuttle Challenger explosion. A strong consensus about the basic technical cause of the accident emerged soon afterward with the publication of the Rogers Commission report: the unprecedented cold temperature at the Kennedy Space Center at the time of launch caused the failure of two critical O-rings on a joint in the shuttle’s solid rocket booster, producing a plume of hot propellant gases that penetrated the shuttle’s external fuel tank and ignited its mixture of liquid hydrogen and oxygen. In contrast to the technical consensus, a full understanding of why NASA officials and Morton Thiokol engineers decided to launch the shuttle that day, despite the dangerously cold weather, has been elusive.

The Challenger launch decision can be understood as a set of individuals overcompensating for improvements in space shuttle safety that had been produced through the use of redundant O-rings. This overcompensation interpretation differs significantly from both the traditional arguments that “production pressures” forced officials to break safety rules and consciously accept an increased risk of an accident to permit the launch to take place and Diane Vaughan’s more recent argument, which focuses instead on how complex rules and engineering culture in NASA created “the normalization of deviance” in which risky operations were accepted unless it could be proven that they were extremely unsafe. The production pressures explanation—that high-ranking officials deliberately stretched the shuttle flight safety rules because of political pressure to have a successful launch that month—was an underlying theme of the Rogers Commission report and is still a widely held view today.(35) The problem with the simple production pressure explanation is that Thiokol engineers and NASA officials were perfectly aware that the resilience of an O-ring could be reduced by cold temperature and that the potential effects of the cold weather on shuttle safety were raised and analyzed, following the existing NASA safety rules, on the night of the Challenger launch decision.

Vaughan’s argument focuses on a deeper organizational pathology: “the normalization of deviance.” Engineers and high-ranking officials had developed elaborate procedures for determining “acceptable risk” in all aspects of shuttle operations. These organizational procedures included detailed decision-making rules among launch officials and the development of specific criteria by which to judge what kinds of technical evidence could be used as an input to the decision. The Thiokol engineers who warned of the O-ring failure on the night before the launch lacked proper engineering data to support their views and, upon consideration of the existing evidence, key managers, therefore, unanimously voted to go ahead with the launch.

Production pressures were not the culprits, Vaughan insists. Well-meaning individuals were seeking to keep the risks of an accident to a minimum, and were just following the rules (p. 386). The problem with Vaughan’s argument, however, is that she does not adequately explain why the engineers and managers followed the rules that night. Why did they not demand more time to gather data, or protest the vote in favor of a launch, or more vigorously call for a postponement until that afternoon when the weather was expected to improve?

The answer is that the Challenger accident appears to be a tragic example of overcompensation. There were two O-rings present in the critical rocket booster joint: the primary O-ring and the secondary O-ring were listed as redundant safety components because they were designed so that the secondary O-ring would seal even if the first leaked because of “burn through” by hot gasses during a shuttle launch. One of the Marshall space center officials summarized the resulting belief: “We had faith in the tests. The data said that the primary would always push into the joint and seal . . . . And if we didn’t have a primary seal in the worst case scenario, we had faith in the secondary” (p. 105).

This assumption was critical on the night of January 27, 1986, for all four senior Thiokol managers reversed their initial support for postponing the launch when a Marshall Space Center official reminded them of the backup secondary O-ring. “We were spending all of our time figuring out the probability of the primary seating,” one of the Thiokol managers later noted: “[t]he engineers, Boisjoly and Thompson, had expressed some question about how long it would take that [primary] O-ring to move, [had] accepted that as a possibility, not a probability, but it was possible. So, if their concern was a valid concern, what would happen? And the answer was, the secondary O-ring would seat”(p. 320).

In short, the Challenger decision makers failed to consider the possibility that the cold temperature would reduce the resilience of both O-rings in the booster joint since that low probability event had not been witnessed in the numerous tests that had been conducted. That is, however, exactly what happened on the night of unprecedented cold temperatures. Like many automobile drivers, these decision makers falsely believed that redundant safety devices allowed them to operate in more dangerous conditions without increasing the risk of a catastrophe.

Redundancy is part of the Farnam Street latticework of mental models.

Ethical Breakdowns: Why Good People Often Let Bad Things Happen

When Charlie Munger recommended reading Max Bazerman’s Judgment in Managerial Decision Making, I had never heard of the HBS professor. A lot of reading later, I’m a huge fan.

In the HBR article below Bazerman covers some of the ground from his new book Blind Spots (see my notes).

These days, many of us are instructed to make decisions from a business perspective (thereby reducing or eliminating the ethical implications of our decisions). The Ford Pinto example below is very telling:

Consider an infamous case that, when it broke, had all the earmarks of conscious top-down corruption. The Ford Pinto, a compact car produced during the 1970s, became notorious for its tendency in rear-end collisions to leak fuel and explode into flames. More than two dozen people were killed or injured in Pinto fires before the company issued a recall to correct the problem. Scrutiny of the decision process behind the model’s launch revealed that under intense competition from Volkswagen and other small-car manufacturers, Ford had rushed the Pinto into production. Engineers had discovered the potential danger of ruptured fuel tanks in preproduction crash tests, but the assembly line was ready to go, and the company’s leaders decided to proceed. Many saw the decision as evidence of the callousness, greed, and mendacity of Ford’s leaders—in short, their deep unethicality.

But looking at their decision through a modern lens—one that takes into account a growing understanding of how cognitive biases distort ethical decision making—we come to a different conclusion. We suspect that few if any of the executives involved in the Pinto decision believed that they were making an unethical choice. Why? Apparently because they thought of it as purely a business decision rather than an ethical one.

Taking an approach heralded as rational in most business school curricula, they conducted a formal cost-benefit analysis—putting dollar amounts on a redesign, potential lawsuits, and even lives—and determined that it would be cheaper to pay off lawsuits than to make the repair. That methodical process colored how they viewed and made their choice. The moral dimension was not part of the equation. Such “ethical fading,” a phenomenon first described by Ann Tenbrunsel and her colleague David Messick, takes ethics out of consideration and even increases unconscious unethical behavior.

Continue Reading at HBR.

I recommend you purchase Judgment in Managerial Decision Making and Blind Spots.

Hiring and the Mismatch Problem

“We want to cling to these incredibly outdated and simplistic measures of ability.”
Malcolm Gladwell

***

Hiring is difficult and we tend to fall back on antiquated tools that give us a number (something, anything) to help us evaluate potential employees. This creates what Malcolm Gladwell calls “mismatch problems” — when the criteria for evaluating job candidates are out of step with the reality of the job demands.

Of course, we never think our criteria are out of step.

The mismatch problem shows itself all over the sports world. Although the study below was released in 2008, Gladwell has long illustrated the point that sports combines (events professional sports leagues hold for scouts to evaluate potential draftees based on a battery of ‘tests’) don’t work.

Gladwell’s results echo what Michael Lewis talks about in Moneyball: combines are a poor predictor of ultimate success. Mismatch problems transcend the sports world.

Teachers are another example. While we tend to evaluate teachers based on test scores, the number of degrees, and other credentials, those factors make little difference in how well people actually teach.

Some companies, like Google, are trying to attack this problem. Google has tried to find traits that its ‘great’ existing employees have in common. When they find a correlation (say, that most people who score 9/10 on performance reviews own a dog), they try to work that into their hiring. By constantly evaluating the actual results of their hiring, rethinking how they hire, and removing questions and evaluations that have no bearing on actual performance, they are taking steps to eliminate the mismatch problem.

Google also knows hiring lacks certainty; they are just trying to continuously improve and refine the process. Interestingly, very few workforces are so evidence-based. Rather, the argument becomes that hiring works because it has always ‘worked’…

So why do mismatch problems exist?

Because we desire certainty. We want to impose certainty on something that is not, by nature, certain. The increase in complexity doesn’t help either.

“The craving for that physics-style precision does nothing but get you in terrible trouble.”

See the video here.

Interested in learning more? Check out measurements that mislead.

Malcolm Gladwell is the New York Times bestselling author of Blink: The Power of Thinking Without Thinking, The Tipping Point: How Little Things Can Make a Big Difference, Outliers: The Story of Success, and What the Dog Saw: And Other Adventures.

Basically, It’s Over: A Parable About How One Nation Came To Financial Ruin

An excellent parable by Charlie Munger on how one nation came to financial ruin.

In the early 1700s, Europeans discovered in the Pacific Ocean a large, unpopulated island with a temperate climate, rich in all nature’s bounty except coal, oil, and natural gas. Reflecting its lack of civilization, they named this island “Basicland.”

The Europeans rapidly repopulated Basicland, creating a new nation. They installed a system of government like that of the early United States. There was much encouragement of trade, and no internal tariff or other impediment to such trade. Property rights were greatly respected and strongly enforced. The banking system was simple. It adapted to a national ethos that sought to provide a sound currency, efficient trade, and ample loans for credit-worthy businesses while strongly discouraging loans to the incompetent or for ordinary daily purchases.

Moreover, almost no debt was used to purchase or carry securities or other investments, including real estate and tangible personal property. The one exception was the widespread presence of secured, high-down-payment, fully amortizing, fixed-rate loans on sound houses, other real estate, vehicles, and appliances, to be used by industrious persons who lived within their means. Speculation in Basicland’s security and commodity markets was always rigorously discouraged and remained small. There was no trading in options on securities or in derivatives other than “plain vanilla” commodity contracts cleared through responsible exchanges under laws that greatly limited use of financial leverage.

In its first 150 years, the government of Basicland spent no more than 7 percent of its gross domestic product in providing its citizens with essential services such as fire protection, water, sewage and garbage removal, some education, defense forces, courts, and immigration control. A strong family-oriented culture emphasizing duty to relatives, plus considerable private charity, provided the only social safety net.

The tax system was also simple. In the early years, governmental revenues came almost entirely from import duties, and taxes received matched government expenditures. There was never much debt outstanding in the form of government bonds.

As Adam Smith would have expected, GDP per person grew steadily. Indeed, in the modern era it grew in real terms at 3 percent per year, decade after decade, until Basicland led the world in GDP per person. As this happened, taxes on sales, income, property, and payrolls were introduced. Eventually total taxes, matched by total government expenditures, amounted to 35 percent of GDP. The revenue from increased taxes was spent on more government-run education and a substantial government-run social safety net, including medical care and pensions.

A regular increase in such tax-financed government spending, under systems hard to “game” by the unworthy, was considered a moral imperative—a sort of egality-promoting national dividend—so long as growth of such spending was kept well below the growth rate of the country’s GDP per person.

Basicland also sought to avoid trouble through a policy that kept imports and exports in near balance, with each amounting to about 25 percent of GDP. Some citizens were initially nervous because 60 percent of imports consisted of absolutely essential coal and oil. But, as the years rolled by with no terrible consequences from this dependency, such worry melted away.

Basicland was exceptionally creditworthy, with no significant deficit ever allowed. And the present value of large “off-book” promises to provide future medical care and pensions appeared unlikely to cause problems, given Basicland’s steady 3 percent growth in GDP per person and restraint in making unfunded promises. Basicland seemed to have a system that would long assure its felicity and long induce other nations to follow its example—thus improving the welfare of all humanity.

But even a country as cautious, sound, and generous as Basicland could come to ruin if it failed to address the dangers that can be caused by the ordinary accidents of life. These dangers were significant by 2012, when the extreme prosperity of Basicland had created a peculiar outcome: As their affluence and leisure time grew, Basicland’s citizens more and more whiled away their time in the excitement of casino gambling. Most casino revenue now came from bets on security prices under a system used in the 1920s in the United States and called “the bucket shop system.”

The winnings of the casinos eventually amounted to 25 percent of Basicland’s GDP, while 22 percent of all employee earnings in Basicland were paid to persons employed by the casinos (many of whom were engineers needed elsewhere). So much time was spent at casinos that it amounted to an average of five hours per day for every citizen of Basicland, including newborn babies and the comatose elderly. Many of the gamblers were highly talented engineers attracted partly by casino poker but mostly by bets available in the bucket shop systems, with the bets now called “financial derivatives.”

Many people, particularly foreigners with savings to invest, regarded this situation as disgraceful. After all, they reasoned, it was just common sense for lenders to avoid gambling addicts. As a result, almost all foreigners avoided holding Basicland’s currency or owning its bonds. They feared big trouble if the gambling-addicted citizens of Basicland were suddenly faced with hardship.

And then came the twin shocks. Hydrocarbon prices rose to new highs. And in Basicland’s export markets there was a dramatic increase in low-cost competition from developing countries. It was soon obvious that the same exports that had formerly amounted to 25 percent of Basicland’s GDP would now only amount to 10 percent. Meanwhile, hydrocarbon imports would amount to 30 percent of GDP, instead of 15 percent. Suddenly Basicland had to come up with 30 percent of its GDP every year, in foreign currency, to pay its creditors.

How was Basicland to adjust to this brutal new reality? This problem so stumped Basicland’s politicians that they asked for advice from Benfranklin Leekwanyou Vokker, an old man who was considered so virtuous and wise that he was often called the “Good Father.” Such consultations were rare. Politicians usually ignored the Good Father because he made no campaign contributions.

Among the suggestions of the Good Father were the following. First, he suggested that Basicland change its laws. It should strongly discourage casino gambling, partly through a complete ban on the trading in financial derivatives, and it should encourage former casino employees—and former casino patrons—to produce and sell items that foreigners were willing to buy. Second, as this change was sure to be painful, he suggested that Basicland’s citizens cheerfully embrace their fate. After all, he observed, a man diagnosed with lung cancer is willing to quit smoking and undergo surgery because it is likely to prolong his life.

The views of the Good Father drew some approval, mostly from people who admired the fiscal virtue of the Romans during the Punic Wars. But others, including many of Basicland’s prominent economists, had strong objections. These economists had intense faith that any outcome at all in a free market—even wild growth in casino gambling—is constructive. Indeed, these economists were so committed to their basic faith that they looked forward to the day when Basicland would expand real securities trading, as a percentage of securities outstanding, by a factor of 100, so that it could match the speculation level present in the United States just before the onslaught of the Great Recession that began in 2008.

The strong faith of these Basicland economists in the beneficence of hypergambling in both securities and financial derivatives stemmed from their utter rejection of the ideas of the great and long-dead economist who had known the most about hyperspeculation, John Maynard Keynes. Keynes had famously said, “When the capital development of a country is the byproduct of the operations of a casino, the job is likely to be ill done.” It was easy for these economists to dismiss such a sentence because securities had been so long associated with respectable wealth, and financial derivatives seemed so similar to securities.

Basicland’s investment and commercial bankers were hostile to change. Like the objecting economists, the bankers wanted change exactly opposite to change wanted by the Good Father. Such bankers provided constructive services to Basicland. But they had only moderate earnings, which they deeply resented because Basicland’s casinos—which provided no such constructive services—reported immoderate earnings from their bucket-shop systems. Moreover, foreign investment bankers had also reported immoderate earnings after building their own bucket-shop systems—and carefully obscuring this fact with ingenious twaddle, including claims that rational risk-management systems were in place, supervised by perfect regulators. Naturally, the ambitious Basicland bankers desired to prosper like the foreign bankers. And so they came to believe that the Good Father lacked any understanding of important and eternal causes of human progress that the bankers were trying to serve by creating more bucket shops in Basicland.

Of course, the most effective political opposition to change came from the gambling casinos themselves. This was not surprising, as at least one casino was located in each legislative district. The casinos resented being compared with cancer when they saw themselves as part of a long-established industry that provided harmless pleasure while improving the thinking skills of its customers.

As it worked out, the politicians ignored the Good Father one more time, and the Basicland banks were allowed to open bucket shops and to finance the purchase and carry of real securities with extreme financial leverage. A couple of economic messes followed, during which every constituency tried to avoid hardship by deflecting it to others. Much counterproductive governmental action was taken, and the country’s credit was reduced to tatters. Basicland is now under new management, using a new governmental system. It also has a new nickname: Sorrowland.

Charlie Munger Explains Why Bureaucracy is not Shareholder Friendly

Charlie Munger, the billionaire partner of Warren Buffett at Berkshire Hathaway, explains why bureaucracy is not shareholder friendly:

The great defect of scale, of course, which makes the game interesting—so that the big people don’t always win—is that as you get big, you get the bureaucracy. And with the bureaucracy comes the territoriality—which is again grounded in human nature.

And the incentives are perverse. For example, if you worked for AT&T in my day, it was a great bureaucracy. Who in the hell was really thinking about the shareholder or anything else? And in a bureaucracy, you think the work is done when it goes out of your in-basket into somebody else’s in-basket. But, of course, it isn’t. It’s not done until AT&T delivers what it’s supposed to deliver. So you get big, fat, dumb, unmotivated bureaucracies.

They also tend to become somewhat corrupt. In other words, if I’ve got a department and you’ve got a department and we kind of share power running this thing, there’s sort of an unwritten rule: “If you won’t bother me, I won’t bother you and we’re both happy.” So you get layers of management and associated costs that nobody needs. Then, while people are justifying all these layers, it takes forever to get anything done. They’re too slow to make decisions and nimbler people run circles around them.

The constant curse of scale is that it leads to big, dumb bureaucracy—which, of course, reaches its highest and worst form in government where the incentives are really awful. That doesn’t mean we don’t need governments—because we do. But it’s a terrible problem to get big bureaucracies to behave.

So people go to stratagems. They create little decentralized units and fancy motivation and training programs. For example, for a big company, General Electric has fought bureaucracy with amazing skill. But that’s because they have a combination of a genius and a fanatic running it. And they put him in young enough so he gets a long run. Of course, that’s Jack Welch.

But bureaucracy is terrible …. And as things get very powerful and very big, you can get some really dysfunctional behavior. Look at Westinghouse. They blew billions of dollars on a bunch of dumb loans to real estate developers. They put some guy who’d come up by some career path—I don’t know exactly what it was, but it could have been refrigerators or something—and all of a sudden, he’s loaning money to real estate developers building hotels. It’s a very unequal contest. And in due time, they lost all those billions of dollars.

Munger is perhaps the only person I know who reads more than we do. In fact, we get a lot of our reading off his book recommendations.

***

You can learn a lot from Warren Buffett and Charlie Munger. Reading all of the Berkshire Hathaway Letters to Shareholders was better than my MBA. I’m serious.