Tag: Nassim Taleb

Life Changing Books (New Guy Edition)

Back in 2013, I posted the Books that Changed my Life. In doing so, I was responding to a reader request to post up the books that “literally changed my life.”

Now that we have Jeff on board, I’ve asked him to do the same. Here are his choices, presented in a somewhat chronological order. As always, these lists leave off a lot of important books in the name of brevity.

Rich Dad, Poor Dad – Robert Kiyosaki

Before I get hanged for apostasy, let me explain. The list is about books that changed my life, and this one absolutely did. I pulled this off my father’s shelf and read it in high school, and it kicked off a lifelong interest in investments, business, and the magic of compound interest. That eventually led me to find Warren Buffett and Charlie Munger, affecting the path of my life considerably. With that said, I would probably not recommend you start here. I haven’t re-read the book since high school, and what I’ve learned about Kiyosaki since doesn’t make me want to recommend anything of his to you. But for better or worse, this book had an impact. Another one that probably holds up better is The Millionaire Next Door, which my father also recommended when I was in high school and which stuck with me for a long time too.

Buffett: The Making of an American Capitalist / Buffett’s Letters to Shareholders – Roger Lowenstein, Warren Buffett

These two and the next book are duplicates off Shane’s list, but they are also probably the reason we know each other. Learning about Warren Buffett took the kid who liked “Rich Dad, Poor Dad” and watched The Apprentice, a kid who might have been on a path to highly leveraged real estate speculation and who knows what else, and put him on a sounder path. I read this biography many times in college, and decided I wanted to emulate some of Buffett’s qualities. (I actually now prefer The Snowball, by Alice Schroeder, but Lowenstein’s came first and changed my life more.) Although I have a business degree, I learned a lot more from reading and applying the collected Letters to Shareholders.

Poor Charlie’s Almanack – Peter Kaufman, Charlie Munger et al.

The Almanack is the greatest book I have ever read, and I knew it from the first time I read it. As Charlie says in the book, there is no going back from the multi-disciplinary approach. It would feel like cutting off your hands. I re-read this book every year in whole or in part, and so far, 8 years on, I haven’t failed to pick up a meaningful new insight. Like any great book, it grows as you grow. I like to think I understand about 40% of it on a deep level now, and I hope to add a few percent every year. I literally cannot conceive of a world in which I didn’t read this.

The Nurture Assumption – Judith Rich Harris

This book affected my thinking considerably. I noticed in the Almanack that Munger recommended this book and another, No Two Alike, towards the end. Once I read it, I could see why. It is a monument to clear and careful thinking. Munger calls the author Judith Rich Harris a combination of Darwin and Sherlock Holmes, and he’s right. If this book doesn’t change how you think about parenting, social development, peer pressure, education, and a number of other topics, then re-read it.

Filters Against Folly/Living within Limits – Garrett Hardin

Like The Nurture Assumption, these two books are brilliantly well thought-through. Pillars of careful thought. It wasn’t until years after I read them that I realized Garrett Hardin was friends with, and in fact funded by, Charlie Munger. The ideas about overpopulation in Living within Limits made a deep impression on me, but the quality of thought in general hit me the hardest. Like the Almanack, it made me want to become a better and more careful thinker.

The Black Swan – Nassim Taleb

Who has read this and not been affected by it? Like many, Nassim’s books changed how I think about the world. The ideas from The Black Swan and Fooled by Randomness about the narrative fallacy and the ludic fallacy cannot be forgotten, nor can the central idea of the book itself: that rare events are not predictable and yet dominate our landscape. Also, Nassim’s writing style made me realize deep, practical writing didn’t have to be dry and sanitized. Like him or not, he wears his soul on his sleeve.

Good Calories, Bad Calories / Why We Get Fat: And What to Do About It – Gary Taubes

I’ve been interested in nutrition since I was young, and these books made me realize most of what I knew was not very accurate. Gary Taubes is a science journalist of the highest order. Like Hardin, Munger, and Harris, he thinks much more carefully than most of his peers. Nutrition is a field that is still sort of growing up, and the quality of the research and thought shows it. Taubes made me recognize that nutrition can be a real science if it’s done more carefully, more Feynman-like. Hopefully his NuSI initiative will help shove the field in the right direction.

The (Honest) Truth about Dishonesty – Dan Ariely

This book by Ariely was a game-changer in that it helped me realize the extent to which we rationalize our behavior in a million little ways. I had a lot of nights thinking about my own propensity for dishonesty and cheating after I read this one, and I like to think I’m a pretty moral person to start with. I had never considered how situational dishonesty was, but now that I do, I see it constantly in myself and others. There are also good sections on incentive-caused bias and social pressure that made an impact.

Sapiens – Yuval Noah Harari

This is fairly new so I’m still digesting this book, and I have a feeling it will take many years. But Sapiens has a lot of (for me) deep insights about humanity and how we got here. I think Yuval is a very good thinker and an excellent writer. A lot of the ideas in this book will set some people off, and not in a good way. But that doesn’t mean they’re not correct. Highly recommended if you’re open-minded and want to learn.

***

At the end of the day, what gets me excited is my Antilibrary, all the books I have on my shelf or on my Amazon wish list that I haven’t read yet. The prospect of reading another great book that changes my life like these books did is an exciting quest.

The Map Is Not the Territory

The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions.

“The map appears to us more real than the land.”

— D.H. Lawrence

The Relationship Between Map and Territory

In 1931, in New Orleans, Louisiana, mathematician Alfred Korzybski presented a paper on mathematical semantics. To the non-technical reader, most of the paper reads like an abstruse argument on the relationship of mathematics to human language, and of both to physical reality. Important stuff certainly, but not necessarily immediately useful for the layperson.

However, in his string of arguments on the structure of language, Korzybski introduced and popularized the idea that the map is not the territory. In other words, the description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted. This has enormous practical consequences.

In Korzybski’s words:

A.) A map may have a structure similar or dissimilar to the structure of the territory.

B.) Two similar structures have similar ‘logical’ characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory.

C.) A map is not the actual territory.

D.) An ideal map would contain the map of the map, the map of the map of the map, etc., endlessly… We may call this characteristic self-reflexiveness.

Maps are necessary, but flawed. (By maps, we mean any abstraction of reality, including descriptions, theories, models, etc.) The problem with a map is not simply that it is an abstraction; we need abstraction. A map with the scale of one mile to one mile would not have the problems that maps have, nor would it be helpful in any way.

The mind creates maps of reality in order to understand it, because the only way we can process the complexity of reality is through abstraction. But frequently, we don’t understand our maps or their limits. In fact, we are so reliant on abstraction that we will use an incorrect model simply because we feel any model is preferable to no model. (Reminding one of the drunk looking for his keys under the streetlight because “That’s where the light is!”)

The Map Is Not the Territory

Even the best and most useful maps suffer from limitations, and Korzybski gives us a few to explore: (A.) The map could be incorrect without us realizing it; (B.) The map is, by necessity, a reduction of the actual thing, a process in which you lose certain important information; and (C.) A map needs interpretation, a process that can cause major errors. (The only way to truly solve the last would be an endless chain of maps-of-maps, which he called self-reflexiveness.)

With the aid of modern psychology, we also see another issue: the human brain takes great leaps and shortcuts in order to make sense of its surroundings. As Charlie Munger has pointed out, a good idea and the human mind act something like the sperm and the egg — after the first good idea gets in, the door closes. This makes the map-territory problem a close cousin of man-with-a-hammer tendency.

This tendency is, obviously, problematic in our effort to simplify reality. When we see a powerful model work well, we tend to over-apply it, using it in non-analogous situations. We have trouble delimiting its usefulness, which causes errors.

Let’s check out an example.

***

By most accounts, Ron Johnson was one of the most successful and desirable retail executives by the summer of 2011. Not only was he handpicked by Steve Jobs to build the Apple Stores, a venture which had itself come under major scrutiny – one retort printed in Bloomberg magazine: “I give them two years before they’re turning out the lights on a very painful and expensive mistake” – but he had been credited with playing a major role in turning Target from a K-Mart look-alike into the trendy-but-cheap Tar-zhey by the late 1990s and early 2000s.

Johnson’s success at Apple was not immediate, but it was undeniable. By 2011, Apple stores were by far the most productive in the world on a per-square-foot basis, and had become the envy of the retail world. Their sales figures left Tiffany’s in the dust. The gleaming glass cube on Fifth Avenue became a more popular tourist attraction than the Statue of Liberty. It was a lollapalooza, something beyond ordinary success. And Johnson had led the charge.

“(History) offers a ridiculous spectacle of a fragment expounding the whole.”

— Will Durant

With that success, in 2011 Johnson was hired by Bill Ackman, Steven Roth, and other luminaries of the financial world to turn around the dowdy old department store chain JC Penney. The situation of the department store was dire: Between 1992 and 2011, the retail market share held by department stores had declined from 57% to 31%.

Their core position was a no-brainer, though. JC Penney had immensely valuable real estate, anchoring malls across the country. Johnson argued that their physical mall position was valuable if for no other reason than that people often parked next to them and walked through them to get to the center of the mall. Foot traffic was a given. Because of contracts signed in the ’50s, ’60s, and ’70s, the heyday of the mall-building era, rent was also cheap, another major competitive advantage. And unlike some struggling retailers, JC Penney was making (some) money. There was cash in the register to help fund a transformation.

The idea was to take the best ideas from his experience at Apple (great customer service, consistent pricing with no markdowns or markups, immaculate displays, world-class products) and apply them to the department store. Johnson planned to turn the stores into little malls-within-malls. He went as far as comparing the ever-rotating stores-within-a-store to Apple’s “apps.” Such a model would keep the store constantly fresh, and avoid the creeping staleness of retail.

Johnson pitched his idea to shareholders in a series of trendy New York City meetings reminiscent of Steve Jobs’ annual “But wait, there’s more!” product launches at Apple. He was persuasive: JC Penney’s stock price went from $26 in the summer of 2011 to $42 in early 2012 on the strength of the pitch.

The idea failed almost immediately. His new pricing model (eliminating discounting) was a flop. The coupon-hunters rebelled. Much of his new product was deemed too trendy. His new store model was wildly expensive for a middling department store chain – including operating losses purposefully endured, he’d spent several billion dollars trying to effect the physical transformation of the stores. JC Penney customers had no idea what was going on, and by 2013, Johnson was sacked. The stock price sank into the single digits, where it remains two years later.

What went wrong in the quest to build America’s Favorite Store? It turned out that Johnson was using a map of Tulsa to navigate Tuscaloosa. Apple’s products, customers, and history had far too little in common with JC Penney’s. Apple had a rabid, young, affluent fan-base before they built stores; JC Penney’s was not associated with youth or affluence. Apple had shiny products, and needed a shiny store; JC Penney was known for its affordable sweaters. Apple had never relied on discounting in the first place; JC Penney was taking away discounts given prior, triggering massive deprival super-reaction.

“All models are wrong but some are useful.”

— George Box

In other words, the old map was not very useful. Even his success at Target, which seems like a closer analogue, was misleading in the context of JC Penney. Target had made small, incremental changes over many years, to which Johnson had made a meaningful contribution. JC Penney was attempting to reinvent the concept of the department store in a year or two, leaving behind the core customer in an attempt to gain new ones. This was a much different proposition. (Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)

The main issue was not that Johnson was incompetent. He wasn’t. He wouldn’t have gotten the job if he was. He was extremely competent. But it was exactly his competence and past success that got him into trouble. He was like a great swimmer who tried to tackle a grand rapid, and the model he used successfully in the past, the map that had navigated a lot of difficult terrain, was not the map he needed anymore. He had an excellent theory about retailing that applied in some circumstances, but not in others. The terrain had changed, but the old idea stuck.

***

One person who well understands this problem of the map and the territory is Nassim Taleb, author of the Incerto series – Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes.

Taleb has been vocal about the misuse of models for many years, but the earliest and most vivid I can recall is his firm criticism of a financial model called Value-at-Risk, or VAR. The model, used in the banking community, is supposed to help manage risk by providing a maximum potential loss within a given confidence interval. In other words, it purports to allow risk managers to say that, with 95%, 99%, or 99.9% confidence, the firm will not lose more than $X million in a given day. The higher the interval, the less accurate the analysis becomes. It might be possible to say that the firm has $100 million at risk at any time at a 99% confidence interval, but given the statistical properties of markets, a move to 99.9% confidence might mean the risk manager has to state the firm has $1 billion at risk. A 99.99% interval might mean $10 billion. As rarer and rarer events are included in the distribution, the analysis gets less useful. So, by necessity, the “tails” are cut off somewhere and the analysis is deemed acceptable.
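
To make the mechanics concrete, here is a minimal sketch of the parametric (normal-distribution) flavor of a one-day VaR calculation. The portfolio size, mean, and volatility figures are illustrative assumptions, not data from any real firm, and real VAR systems are far more elaborate.

```python
# Minimal sketch of a parametric (normal-distribution) one-day VaR.
# All figures are illustrative assumptions, not real data.
from scipy.stats import norm

portfolio_value = 100_000_000  # assumed $100M portfolio
daily_mean = 0.0               # assumed mean daily return
daily_vol = 0.01               # assumed 1% daily return volatility

for confidence in (0.95, 0.99, 0.999):
    z = norm.ppf(1 - confidence)  # left-tail z-score for this confidence level
    var_dollars = -(daily_mean + z * daily_vol) * portfolio_value
    print(f"{confidence:.1%} one-day VaR: ${var_dollars:,.0f}")
```

Notice that the dollar figure is entirely a product of the distribution you assumed when you built the model; push the confidence level further into the tail and the number grows, but its accuracy depends on how well that assumed distribution describes reality.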

Elaborate statistical models are built to justify and use the VAR theory. On its face, it seems like a useful and powerful idea; if you know how much you can lose at any time, you can manage risk to the decimal. You can tell your board of directors and shareholders, with a straight face, that you’ve got your eye on the till.

The problem, in Nassim’s words, is that:

A model might show you some risks, but not the risks of using it. Moreover, models are built on a finite set of parameters, while reality affords us infinite sources of risks.

In order to come up with the VAR figure, the risk manager must take historical data and assume a statistical distribution to predict the future. For example, if we could take 100 million human beings and analyze their height and weight, we could then predict the distribution of heights and weights for a different 100 million, and there would be a microscopically small probability that we’d be wrong. That’s because we have a huge sample size and we are analyzing something with very small and predictable deviations from the average.

But finance does not follow this kind of distribution. There’s no such predictability. As Nassim has argued, the “tails” are fat in this domain, and the rarest, most unpredictable events have the largest consequences. Let’s say you deem a highly threatening event (for example, a 90% crash in the S&P 500) to have a 1 in 10,000 chance of occurring in a given year, and your historical data set only has 300 years of data. How can you accurately state the probability of that event? You would need far more data.
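
A rough, purely illustrative calculation (the event probabilities and the 300-year record below are hypothetical) shows how little a clean historical record constrains the odds of a rare event:

```python
# Rough illustration: a 300-year record with zero occurrences of an event is
# almost equally consistent with very different "true" annual probabilities.
years = 300

for true_p in (1 / 1_000, 1 / 10_000, 1 / 100_000):
    p_no_occurrence = (1 - true_p) ** years
    print(f"true annual probability {true_p:.5f}: chance of zero occurrences "
          f"in {years} years = {p_no_occurrence:.1%}")
```

A spotless 300-year record is roughly 74% likely even if the true odds are 1 in 1,000, so the same data cannot distinguish a 1-in-1,000 event from a 1-in-100,000 one.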

Thus, financial events deemed to be 5, or 6, or 7 standard deviations from the norm tend to happen with a certain regularity that nowhere near matches their supposed statistical probability.  Financial markets have no biological reality to tie them down: We can say with a useful amount of confidence that an elephant will not wake up as a monkey, but we can’t say anything with absolute confidence in an Extremistan arena.

We see several issues with VAR as a “map,” then. The first is that the model is itself a severe abstraction of reality, relying on historical data to predict the future. (As all financial models must, to a certain extent.) VAR does not say “The risk of losing X dollars is Y, within a confidence of Z.” (Although risk managers treat it that way.) What VAR actually says is “the risk of losing X dollars is Y, based on the given parameters.” The problem is obvious even to the non-technician: The future is a strange and foreign place that we do not understand. Deviations of the past may not be the deviations of the future. Just because municipal bonds have never traded at such-and-such a spread to U.S. Treasury bonds does not mean that they won’t in the future. They just haven’t yet. Frequently, the models are blind to this fact.

In fact, one of Nassim’s most trenchant points is that on the day before whatever “worst case” event happened in the past, you would have not been using the coming “worst case” as your worst case, because it wouldn’t have happened yet.

Here’s an easy illustration. On October 19, 1987, the stock market dropped by 22.61%, or 508 points on the Dow Jones Industrial Average. In percentage terms, it was then and remains the worst one-day market drop in U.S. history. It was dubbed “Black Monday.” (Financial writers sometimes lack creativity — there are several other “Black Mondays” in history.) But here we see Nassim’s point: On October 18, 1987, what would the models have used as the worst possible case? We don’t know exactly, but we do know the previous worst case was a 12.82% drop, which happened on October 28, 1929. A 22.61% drop would have been considered so many standard deviations from the average as to be near impossible.
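
To put a number on “near impossible,” here is a quick check under a hypothetical normal model with an assumed 1% daily volatility (both the model and the volatility figure are illustrative assumptions):

```python
# How "impossible" a 22.61% one-day drop looks if daily returns are modeled
# as normally distributed. The 1% daily volatility is an assumed figure.
from scipy.stats import norm

daily_vol = 0.01   # assumed daily volatility of returns
drop = -0.2261     # Black Monday's actual one-day return

z = drop / daily_vol
print(f"A 22.61% drop is roughly {abs(z):.0f} standard deviations from the mean.")
print(f"Normal-model probability of a move that bad or worse: {norm.cdf(z):.1e}")
```

Under those assumptions the crash is a twenty-plus-sigma event with a probability so small it is effectively zero, and yet it happened. The failure is in the map, not the territory.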

But the tails are very fat in finance — improbable and consequential events seem to happen far more often than they should based on naive statistics. There is also a severe but often unrecognized recursiveness problem, which is that the models themselves influence the outcome they are trying to predict. (To understand this more fully, check out our post on Complex Adaptive Systems.)

A second problem with VAR is that even if we had a vastly more robust dataset, a statistical “confidence interval” does not do the job of financial risk management. Says Taleb:

There is an internal contradiction between measuring risk (i.e. standard deviation) and using a tool [VAR] with a higher standard error than that of the measure itself.

I find that those professional risk managers whom I heard recommend a “guarded” use of the VAR on grounds that it “generally works” or “it works on average” do not share my definition of risk management. The risk management objective function is survival, not profits and losses. A trader according to the Chicago legend, “made 8 million in eight years and lost 80 million in eight minutes”. According to the same standards, he would be, “in general”, and “on average” a good risk manager.

This is like a GPS system that shows you where you are at all times but doesn’t include cliffs. You’d be perfectly happy with your GPS until you drove off a mountain.

It was this type of naive trust of models that got a lot of people in trouble in the recent mortgage crisis. Backward-looking, trend-fitting models, the most common maps of the financial territory, failed by describing a territory that was only a mirage: A world where home prices only went up. (Lewis Carroll would have approved.)

This was navigating Tulsa with a map of Tatooine.

***

The logical response to all this is, “So what?” If our maps fail us, how do we operate in an uncertain world? This is its own discussion for another time, and Taleb has gone to great pains to try and address the concern. Smart minds disagree on the solution. But one obvious key must be building systems that are robust to model error.

The practical problem with a model like VAR is that the banks use it to optimize. In other words, they take on as much exposure as the model deems OK. And when banks veer into managing to a highly detailed, highly confident model rather than to informed common sense, which happens frequently, they tend to build up hidden risks that will un-hide themselves in time.

If one were instead to assume that there are no precisely accurate maps of the financial territory, one would have to fall back on much simpler heuristics. (If you assume detailed statistical models of the future will fail you, you don’t use them.)

In short, you would do what Warren Buffett has done with Berkshire Hathaway. Mr. Buffett, to our knowledge, has never used a computer model in his life, yet manages an institution half a trillion dollars in size by assets, a large portion of which are financial assets. How?

The approach requires not only assuming a future worst case far more severe than the past, but also dictates building an institution with a robust set of backup systems, and margins-of-safety operating at multiple levels. Extra cash, rather than extra leverage. Taking great pains to make sure the tails can’t kill you. Instead of optimizing to a model, accepting the limits of your clairvoyance.

When map and terrain differ, follow the terrain.

The trade-off, of course, is that short-run rewards are much smaller than those available under more optimized models. Speaking of this, Charlie Munger has noted:

Berkshire’s past record has been almost ridiculous. If Berkshire had used even half the leverage of, say, Rupert Murdoch, it would be five times its current size.

For Berkshire at least, the trade-off seems to have been worth it.

***

The salient point then is that in our march to simplify reality with useful models, of which Farnam Street is an advocate, we confuse the models with reality. For many people, the model creates its own reality. It is as if the spreadsheet comes to life. We forget that reality is a lot messier. The map isn’t the territory. The theory isn’t what it describes; it’s simply a way we choose to interpret a certain set of information. Maps can also be wrong, but even if they are essentially correct, they are an abstraction, and abstraction means that information is lost to save space. (Recall the mile-to-mile scale map.)

How do we do better? This is fodder for another post, but the first step is to realize that you do not understand a model, map, or reduction unless you understand and respect its limitations. We must always be vigilant by stepping back to understand the context in which a map is useful, and where the cliffs might lie. Until we do that, we are the turkey.

How Warren Buffett Keeps up with a Torrent of Information

A telling excerpt from an interview of Warren Buffett (below) on the value of reading.

Seems like he’s taking the opposite approach to Nassim Taleb in some ways.

Interviewer: How do you keep up with all the media and information that goes on in our crazy world and in your world of Berkshire Hathaway? What’s your media routine?

Warren Buffett: I read and read and read. I probably read five to six hours a day. I don’t read as fast now as when I was younger. But I read five daily newspapers. I read a fair number of magazines. I read 10-Ks. I read annual reports. I read a lot of other things, too. I’ve always enjoyed reading. I love reading biographies, for example.

Interviewer: You process information very quickly.

Warren Buffett: I have filters in my mind. If somebody calls me about an investment in a business or an investment in securities, I usually know in two or three minutes whether I have an interest. I don’t waste any time with the ones which I don’t have an interest.

I always worry a little bit about even appearing rude because I can tell very, very, very quickly whether it’s going to be something that will lead to something, or whether it’s a half an hour or an hour or two hours of chatter.

What’s interesting about these filters is that Buffett has consciously developed them as heuristics to allow for rapid processing. They allow him to move quickly with few mistakes — that’s what heuristics are designed to do. Most of us are trying to get rid of our heuristics to reduce error, but here is one of the smartest people alive doing the opposite: he has built these filters as a way of processing large amounts of information quickly. He’s moving fast and in the right direction.

Nassim Taleb: How to Not be a Sucker From the Past

“History is useful for the thrill of knowing the past, and for the narrative (indeed), provided it remains a harmless narrative.”

— Nassim Taleb

The fact that new information about the past keeps coming to light means that we have an incomplete roadmap of history. There is a necessary fallibility to it, if you will.

In The Black Swan, Nassim Taleb writes:

History is useful for the thrill of knowing the past, and for the narrative (indeed), provided it remains a harmless narrative. One should learn under severe caution. History is certainly not a place to theorize or derive general knowledge, nor is it meant to help in the future, without some caution. We can get negative confirmation from history, which is invaluable, but we get plenty of illusions of knowledge along with it.

While I don’t entirely hold Taleb’s view, I think it’s worth reflecting on. As a friend put it to me recently, “when people are looking into the rearview mirror of the past, they can take facts and like a string of pearls draw lines of causal relationships that facilitate their argument while ignoring disconfirming facts that detract from their central argument or point of view.”

Taleb advises us to adopt the empirical skeptic approach of Menodotus which was to “know history without theorizing from it,” and to not draw any large theoretical or scientific claims.

We can learn from history, but our desire for causality can easily lead us down a dangerous rabbit hole when new facts come to light contradicting what we held to be true. In trying to reduce the cognitive dissonance, our confirmation bias leads us to reinterpret past events in a way that fits our current beliefs.

History is not stagnant — we only know what we know currently and what we do know is subject to change. The accepted beliefs about how events played out may change in light of new information and then the newly accepted beliefs may change over time as well.

The Lucretius Problem: How History Blinds Us

The Lucretius Problem is a mental defect whereby we assume the worst-case event that has happened is the worst-case event that can happen. In doing so, we fail to notice that the worst event that has happened was itself worse than the worst case that preceded it. Only the fool believes all he can see is all there is to see.

It’s always good to re-read books and to dip back into them periodically. When reading a new book, I often miss out on crucial information (especially books that are hard to categorize with one descriptive sentence). When you come back to a book after reading hundreds of others you can’t help but make new connections with the old book and see it anew. The book hasn’t changed but you have.

It has been a while since I read Antifragile. In the past, I’ve talked about an Antifragile Way of Life, Learning to Love Volatility, the Definition of Antifragility, and the Noise and the Signal.

But upon re-reading Antifragile I came across the Lucretius Problem and I thought I’d share an excerpt. (Titus Lucretius Carus was a Roman poet and philosopher, best-known for his poem On the Nature of Things).

In Antifragile, Nassim Taleb writes:

Indeed, our bodies discover probabilities in a very sophisticated manner and assess risks much better than our intellects do. To take one example, risk management professionals look in the past for information on the so-called worst-case scenario and use it to estimate future risks – this method is called “stress testing.” They take the worst historical recession, the worst war, the worst historical move in interest rates, or the worst point in unemployment as an exact estimate for the worst future outcome. But they never notice the following inconsistency: this so-called worst-case event, when it happened, exceeded the worst [known] case at the time.

I have called this mental defect the Lucretius problem, after the Latin poetic philosopher who wrote that the fool believes that the tallest mountain in the world will be equal to the tallest one he has observed. We consider the biggest object of any kind that we have seen in our lives or hear about as the largest item that can possibly exist. And we have been doing this for millennia.

Taleb brings up an interesting point, which is that our documented history can blind us. All we know is what we have been able to record. There is an uncertainty that we don’t seem to grasp.

We think that because we have sophisticated data-collecting techniques, we can capture all the data necessary to make decisions. We think we can use our current statistical techniques to draw historical trends from historical data without acknowledging that past data recorders had fewer tools to capture the dark figure of unreported data. We also overestimate the validity of what has been recorded before; the trends we draw might tell a different story if that unreported data were available.

Taleb continues:

The same can be seen in the Fukushima nuclear reactor, which experienced a catastrophic failure in 2011 when a tsunami struck. It had been built to withstand the worst past historical earthquake, with the builders not imagining much worse— and not thinking that the worst past event had to be a surprise, as it had no precedent. Likewise, the former chairman of the Federal Reserve, Fragilista Doctor Alan Greenspan, in his apology to Congress offered the classic “It never happened before.” Well, nature, unlike Fragilista Greenspan, prepares for what has not happened before, assuming worse harm is possible.

Dealing with Uncertainty

Taleb’s answer is to develop layers of redundancy, that is, a margin of safety, to act as a buffer against ourselves. We overvalue what we have recorded and assume it tells us the worst and best possible outcomes. Redundant layers are a buffer against our tendency to think that what has been recorded is a map of the whole terrain. An example of a redundant feature is a rainy-day fund, which acts as an insurance policy against something catastrophic, such as a job loss, and allows you to survive and fight another day.

Antifragile is a great book to read, and you might learn something about yourself and the world you live in by reading it or, in my case, re-reading it.


Fooled By Randomness: My Notes

I loved Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets by Nassim Taleb. This is the first popular book he wrote, the book that helped propel him to intellectual celebrity. Interestingly, Fooled by Randomness contains semi-explored gems of the ideas that would later become the best-selling books The Black Swan and Antifragile.

Here are some of my notes from the book.

Hindsight Bias

Part of the argument that Fooled by Randomness presents is that when we look back at things that have happened we see them as less random than they actually were.

It is as if there were two planets: the one in which we actually live and the one, considerably more deterministic, on which people are convinced we live. It is as simple as that: Past events will always look less random than they were (it is called the hindsight bias). I would listen to someone’s discussion of his own past realizing that much of what he was saying was just backfit explanations concocted ex post by his deluded mind.

The Courage of Montaigne

Writing on Montaigne as the role model for the modern thinker, Taleb also addresses his courage:

It certainly takes bravery to remain skeptical; it takes inordinate courage to introspect, to confront oneself, to accept one’s limitations— scientists are seeing more and more evidence that we are specifically designed by mother nature to fool ourselves.

Probability

Fooled by Randomness is about probability, not in a mathematical way but as skepticism.

In this book probability is principally a branch of applied skepticism, not an engineering discipline. …

Probability is not a mere computation of odds on the dice or more complicated variants; it is the acceptance of the lack of certainty in our knowledge and the development of methods for dealing with our ignorance. Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem or a brain teaser. Mother nature does not tell you how many holes there are on the roulette table, nor does she deliver problems in a textbook way (in the real world one has to guess the problem more than the solution).

“Outside of textbooks and casinos, probability almost never presents itself as a mathematical problem,” which is fascinating given how we tend to solve problems. In decisions under uncertainty, I discussed how risk and uncertainty are different things, which creates two types of ignorance.

Most decisions are not risk-based; they are uncertainty-based, and you either know you are ignorant or you have no idea you are ignorant. There is a big distinction between the two. Trust me, you’d rather know you are ignorant.

Randomness Disguised as Non-Randomness

The core of the book is about luck that we understand as skill or “randomness disguised as non-randomness (that is determinism).”

This problem manifests itself most frequently in the lucky fool, “defined as a person who benefited from a disproportionate share of luck but attributes his success to some other, generally very precise, reason.”

Such confusion crops up in the most unexpected areas, even science, though not in such an accentuated and obvious manner as it does in the world of business. It is endemic in politics, as it can be encountered in the shape of a country’s president discoursing on the jobs that “he” created, “his” recovery, and “his predecessor’s” inflation.

These lucky fools are often fragilistas — they have no idea they are lucky fools. For example:

[W]e often have the mistaken impression that a strategy is an excellent strategy, or an entrepreneur a person endowed with “vision,” or a trader a talented trader, only to realize that 99.9% of their past performance is attributable to chance, and chance alone. Ask a profitable investor to explain the reasons for his success; he will offer some deep and convincing interpretation of the results. Frequently, these delusions are intentional and deserve to bear the name “charlatanism.”

This does not mean that all success is luck or randomness. There is a difference between “it is more random than we think” and “it is all random.”

Let me make it clear here: Of course chance favors the prepared! Hard work, showing up on time, wearing a clean (preferably white) shirt, using deodorant, and some such conventional things contribute to success— they are certainly necessary but may be insufficient as they do not cause success. The same applies to the conventional values of persistence, doggedness and perseverance: necessary, very necessary. One needs to go out and buy a lottery ticket in order to win. Does it mean that the work involved in the trip to the store caused the winning? Of course skills count, but they do count less in highly random environments than they do in dentistry.

No, I am not saying that what your grandmother told you about the value of work ethics is wrong! Furthermore, as most successes are caused by very few “windows of opportunity,” failing to grab one can be deadly for one’s career. Take your luck!

That last paragraph connects to something Charlie Munger once said: “Really good investment opportunities aren’t going to come along too often and won’t last too long, so you’ve got to be ready to act. Have a prepared mind.”

Taleb thinks of success in terms of degrees, so mild success might be explained by skill and labor, but outrageous success “is attributable to variance.”

Luck Makes You Fragile

One thing Taleb hits on that really stuck with me is that “that which came with the help of luck could be taken away by luck (and often rapidly and unexpectedly at that). The flipside, which deserves to be considered as well (in fact it is even more of our concern), is that things that come with little help from luck are more resistant to randomness.” How Antifragile.

Taleb argues this is the problem of induction: “it does not matter how frequently something succeeds if failure is too costly to bear.”

Noise and Signal

We confuse noise with signal.

…the literary mind can be intentionally prone to the confusion between noise and meaning, that is, between a randomly constructed arrangement and a precisely intended message. However, this causes little harm; few claim that art is a tool of investigation of the Truth— rather than an attempt to escape it or make it more palatable. Symbolism is the child of our inability and unwillingness to accept randomness; we give meaning to all manner of shapes; we detect human figures in inkblots.

All my life I have suffered the conflict between my love of literature and poetry and my profound allergy to most teachers of literature and “critics.” The French thinker and poet Paul Valery was surprised to listen to a commentary of his poems that found meanings that had until then escaped him (of course, it was pointed out to him that these were intended by his subconscious).

If we’re concerned about situations where randomness is confused with non-randomness, should we also be concerned with situations where non-randomness is mistaken for randomness, which would result in the signal being ignored?

First, I am not overly worried about the existence of undetected patterns. We have been reading lengthy and complex messages in just about any manifestation of nature that presents jaggedness (such as the palm of a hand, the residues at the bottom of Turkish coffee cups, etc.). Armed with home supercomputers and chained processors, and helped by complexity and “chaos” theories, the scientists, semiscientists, and pseudoscientists will be able to find portents. Second, we need to take into account the costs of mistakes; in my opinion, mistaking the right column for the left one is not as costly as an error in the opposite direction. Even popular opinion warns that bad information is worse than no information at all.

If you haven’t yet, pick up a copy of Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. Don’t make the same mistake I did and wait to read this important book.
