Tag: Nassim Taleb

12 Books Every Investor Should Read

If you’re looking for something to read that will improve your ability as an investor, I’d recommend any of the books below. All 12 of them are deeply informative and will leave an impact on you.

1. The Intelligent Investor by Benjamin Graham
Described as “by far the best book on investing ever written” by none other than Warren Buffett. “Chapters 8 and 20 have been the bedrock of my investing activities for more than 60 years,” he says. “I suggest that all investors read those chapters and reread them every time the market has been especially strong or weak.”

2. The Little Book that Beats the Market by Joel Greenblatt
As Buffett says, investing is simple but not easy. This book focuses on the simplicity of investing. Greenblatt, who has average annualized returns of about 40% for over 20 years, explains investing using 6th grade math and plain language. Putting it into practice is another story.

3. Fooled by Randomness by Nassim Taleb
The core of Taleb’s other books — The Black Swan and Antifragile — can be found in this early work. One of the best parts, for me, was the notion of alternative histories. “Mother Nature,” he writes, “does not tell you how many holes there are on the roulette table.” This book teaches you how to look at the world probabilistically. After you start doing that, nothing is ever the same again.

4. The Most Important Thing by Howard Marks
“This is a rarity,” Buffett writes of Howard Marks’ book, “a useful book.” More than teaching you the keys to successful investment, it will teach you about critical thinking.

5. Poor Charlie’s Almanack by Charlie Munger
Charlie Munger is perhaps the smartest man I don’t know. This book is a curated collection of his speeches and talks that can’t help but leave you smarter. Munger’s wit and wisdom come across on every page. This book will improve your thinking and decisions. It will also shine light upon psychological forces that make you a one-legged man in an ass-kicking contest. Read and re-read.

6. Common Stocks and Uncommon Profits by Philip Fisher
Buffett used to say that he was 85% Benjamin Graham and 15% Phil Fisher. That was a long time ago; the Buffett of today resembles Fisher more than Graham. Maybe there is something to buying and holding great companies.

7. The Dao of Capital by Mark Spitznagel
Spitznagel presents the methodology of Austrian Investing, where one looks for positional advantage. Nassim Taleb, commenting on the book, wrote: “At last, a real book by a real risk-taking practitioner. You cannot afford not to read this!”

8. Buffett: The Making of an American Capitalist by Roger Lowenstein
This book, perhaps more than any other, has changed the lives of many of my friends and investors, because it is how many of them first discovered Warren Buffett and value investing.

9. The Outsiders: Eight Unconventional CEOs and Their Radically Rational Blueprint for Success by William N. Thorndike
An outstanding book detailing eight extraordinary CEOs and the unconventional methods they used for capital allocation. One of them, Henry Singleton, had a unique view on strategic planning.

10. The Misbehavior of Markets: A Fractal View of Financial Turbulence by Benoit Mandelbrot
A critique of modern finance theories, which are usually built on the underlying assumption that distributions are normal. Nassim Taleb calls this “the most realistic finance book ever published.”

11. Why Stocks Go Up (and Down) by William Pike
This is a basics book on the fundamentals of equity and bond investing – financial statements, cash flows, etc. A good place to start on the nuts and bolts. If you’re looking to learn accounting, also check out The Accounting Game: Basic Accounting Fresh from the Lemonade Stand. I’m serious. This is the book I recommended to classmates in business school with no accounting background to get them up to speed quickly.

12. Bull: A History of the Boom and Bust, 1982-2004 by Maggie Mahar
The first and perhaps best book written on the market’s historic run, which started in 1982 and ended in the early 2000s. Mahar reminds readers that euphoria and blindness are a regular part of bull markets – lessons we should have learned from studying history.

Keep in mind that if investing were as easy as buying a book and reading it, we’d all be rich.

Nassim Taleb: A Definition of Antifragile and its Implications

"Complex systems are weakened, even killed, when deprived of stressors."
“Complex systems are weakened, even killed, when deprived of stressors.”

I was talking with someone the other day about antifragility, and I realized that, while a lot of people use the word, not many people have read Antifragile, where Nassim Taleb defines it.

Just as being clear on what constitutes a black swan allowed us to better discuss the subject, so too will defining antifragility.

The classic example of something antifragile is the Hydra, the Greek mythological creature with numerous heads. When one is cut off, two grow back in its place.

From Antifragile: Things That Gain from Disorder:

Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better. This property is behind everything that has changed with time: evolution, culture, ideas, revolutions, political systems, technological innovation, cultural and economic success, corporate survival, good recipes (say, chicken soup or steak tartare with a drop of cognac), the rise of cities, cultures, legal systems, equatorial forests, bacterial resistance … even our own existence as a species on this planet. And antifragility determines the boundary between what is living and organic (or complex), say, the human body, and what is inert, say, a physical object like the stapler on your desk.

The antifragile loves randomness and uncertainty, which also means—crucially—a love of errors, a certain class of errors. Antifragility has a singular property of allowing us to deal with the unknown, to do things without understanding them—and do them well. Let me be more aggressive: we are largely better at doing than we are at thinking, thanks to antifragility. I’d rather be dumb and antifragile than extremely smart and fragile, any time.

It is easy to see things around us that like a measure of stressors and volatility: economic systems, your body, your nutrition (diabetes and many similar modern ailments seem to be associated with a lack of randomness in feeding and the absence of the stressor of occasional starvation), your psyche. There are even financial contracts that are antifragile: they are explicitly designed to benefit from market volatility.

Antifragility makes us understand fragility better. Just as we cannot improve health without reducing disease, or increase wealth without first decreasing losses, antifragility and fragility are degrees on a spectrum.

Nonprediction

By grasping the mechanisms of antifragility we can build a systematic and broad guide to nonpredictive decision making under uncertainty in business, politics, medicine, and life in general—anywhere the unknown preponderates, any situation in which there is randomness, unpredictability, opacity, or incomplete understanding of things.

It is far easier to figure out if something is fragile than to predict the occurrence of an event that may harm it. Fragility can be measured; risk is not measurable (outside of casinos or the minds of people who call themselves “risk experts”). This provides a solution to what I’ve called the Black Swan problem—the impossibility of calculating the risks of consequential rare events and predicting their occurrence. Sensitivity to harm from volatility is tractable, more so than forecasting the event that would cause the harm. So we propose to stand our current approaches to prediction, prognostication, and risk management on their heads.

In every domain or area of application, we propose rules for moving from the fragile toward the antifragile, through reduction of fragility or harnessing antifragility. And we can almost always detect antifragility (and fragility) using a simple test of asymmetry: anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
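Taleb’s asymmetry test lends itself to a quick numerical check. Here is a minimal sketch in Python (my illustration, with toy payoff functions, not anything from the book): expose a payoff to symmetric random shocks and compare the average gain with the average loss. A convex payoff gains more from a shock than it loses; a concave one does the reverse.

```python
import math
import random

# Minimal sketch of the asymmetry test (toy payoffs, not Taleb's code):
# subject a payoff function to symmetric random shocks and compare the
# average gain on favorable moves with the average loss on unfavorable ones.

def upside_downside(payoff, trials=100_000, scale=1.0):
    gains = losses = 0.0
    for _ in range(trials):
        delta = payoff(random.gauss(0.0, scale)) - payoff(0.0)
        if delta > 0:
            gains += delta
        else:
            losses += delta
    return gains / trials, losses / trials

convex = lambda x: math.exp(x)       # more upside than downside: antifragile-ish
concave = lambda x: -math.exp(-x)    # more downside than upside: fragile-ish

print("convex payoff: ", upside_downside(convex))
print("concave payoff:", upside_downside(concave))
```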

Deprivation of Antifragility

Crucially, if antifragility is the property of all those natural (and complex) systems that have survived, depriving these systems of volatility, randomness, and stressors will harm them. They will weaken, die, or blow up. We have been fragilizing the economy, our health, political life, education, almost everything … by suppressing randomness and volatility. Just as spending a month in bed … leads to muscle atrophy, complex systems are weakened, even killed, when deprived of stressors. Much of our modern, structured world has been harming us with top-down policies and contraptions (dubbed “Soviet-Harvard delusions” in the book) which do precisely this: an insult to the antifragility of systems. This is the tragedy of modernity: as with neurotically overprotective parents, those trying to help are often hurting us the most (see iatrogenics).

Antifragile is the antidote to Black Swans. The modern world may increase technical knowledge but it will also make things more fragile.

… Black Swans hijack our brains, making us feel we “sort of” or “almost” predicted them, because they are retrospectively explainable. We don’t realize the role of these Swans in life because of this illusion of predictability. Life is more, a lot more, labyrinthine than shown in our memory—our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness.

Complex systems are full of interdependencies—hard to detect—and nonlinear responses. “Nonlinear” means that when you double the dose of, say, a medication, or when you double the number of employees in a factory, you don’t get twice the initial effect, but rather a lot more or a lot less. Two weekends in Philadelphia are not twice as pleasant as a single one—I’ve tried. When the response is plotted on a graph, it does not show as a straight line (“linear”), rather as a curve. In such environments, simple causal associations are misplaced; it is hard to see how things work by looking at single parts.

Man-made complex systems tend to develop cascades and runaway chains of reactions that decrease, even eliminate, predictability and cause outsized events. So the modern world may be increasing in technological knowledge, but, paradoxically, it is making things a lot more unpredictable.

An annoying aspect of the Black Swan problem—in fact the central, and largely missed, point—is that the odds of rare events are simply not computable.

Robustness is not enough.

Consider that Mother Nature is not just “safe.” It is aggressive in destroying and replacing, in selecting and reshuffling. When it comes to random events, “robust” is certainly not good enough. In the long run everything with the most minute vulnerability breaks, given the ruthlessness of time—yet our planet has been around for perhaps four billion years and, convincingly, robustness can’t just be it: you need perfect robustness for a crack not to end up crashing the system. Given the unattainability of perfect robustness, we need a mechanism by which the system regenerates itself continuously by using, rather than suffering from, random events, unpredictable shocks, stressors, and volatility.

Fragile and antifragile are relative — there is no absolute. You may be more antifragile than your neighbor but that doesn’t make you antifragile.

The Triad is FRAGILE — ROBUST — ANTIFRAGILE.

All of this can lead to some pretty significant conclusions. Often it’s impossible to be antifragile, but falling short of that, you should be robust, not fragile. How do you become robust? Make sure you’re not fragile. Eliminate the things that make you fragile. In an interview, Taleb offers some ideas:

You have to avoid debt because debt makes the system more fragile. You have to increase redundancies in some spaces. You have to avoid optimization. That is quite critical for someone who is doing finance to understand because it goes counter to everything you learn in portfolio theory. … I have always been very skeptical of any form of optimization. In the black swan world, optimization isn’t possible. The best you can achieve is a reduction in fragility and greater robustness.

If you haven’t already, I highly encourage you to read Antifragile.


Read Next

10 Principles to Live an Antifragile Life

A Discussion on the Work of Daniel Kahneman

Edge.org asked the likes of Christopher Chabris, Nicholas Epley, Jason Zweig, William Poundstone, Cass Sunstein, Phil Rosenzweig, Richard Thaler & Sendhil Mullainathan, Nassim Nicholas Taleb, Steven Pinker, and Rory Sutherland among others: “How has Kahneman’s work influenced your own? What step did it make possible?”

Kahneman’s work is summarized in the international best-seller Thinking, Fast and Slow.

Here are some select excerpts that I found interesting.

Christopher Chabris (author of The Invisible Gorilla)

There’s an overarching lesson I have learned from the work of Danny Kahneman, Amos Tversky, and their colleagues who collectively pioneered the modern study of judgment and decision-making: Don’t trust your intuition.

Jennifer Jacquet

After what I see as years of hard work, experiments of admirable design, lucid writing, and quiet leadership, Kahneman, a man who spent the majority of his career in departments of psychology, earned the highest prize in economics. This was a reminder that some of the best insights into economic behavior could be (and had been) gleaned outside of the discipline.

Jason Zweig (author of Your Money and Your Brain)

… nothing amazed me more about Danny than his ability to detonate what we had just done.

Anyone who has ever collaborated with him tells a version of this story: You go to sleep feeling that Danny and you had done important and incontestably good work that day. You wake up at a normal human hour, grab breakfast, and open your email. To your consternation, you see a string of emails from Danny, beginning around 2:30 a.m. The subject lines commence in worry, turn darker, and end around 5 a.m. expressing complete doubt about the previous day’s work.

You send an email asking when he can talk; you assume Danny must be asleep after staying up all night trashing the chapter. Your cellphone rings a few seconds later. “I think I figured out the problem,” says Danny, sounding remarkably chipper. “What do you think of this approach instead?”

The next thing you know, he sends a version so utterly transformed that it is unrecognizable: It begins differently, it ends differently, it incorporates anecdotes and evidence you never would have thought of, it draws on research that you’ve never heard of. If the earlier version was close to gold, this one is hewn out of something like diamond: The raw materials have all changed, but the same ideas are somehow illuminated with a sharper shift of brilliance.

The first time this happened, I was thunderstruck. How did he do that? How could anybody do that? When I asked Danny how he could start again as if we had never written an earlier draft, he said the words I’ve never forgotten: “I have no sunk costs.”

William Poundstone (author of Are You Smart Enough to Work at Google?)

As a writer of nonfiction I’m often in the position of trying to connect the dots—to draw grand conclusions from small samples. Do three events make a trend? Do three quoted sources justify a conclusion? Both are maxims of journalism. I try to keep in mind Kahneman and Tversky’s Law of Small Numbers. It warns that small samples aren’t nearly so informative, in our uncertain world, as intuition counsels.
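Kahneman and Tversky’s warning is easy to check with basic arithmetic: if each event is an independent 50/50 coin flip, three of them land on the same side with probability 2 × (1/2)³ = 1/4. A quick simulation (my sketch, not from Poundstone) confirms that a “trend” of three shows up about one time in four by pure chance:

```python
import random

# How often do three independent 50/50 events all point the same way
# purely by chance? (Sketch illustrating the Law of Small Numbers.)
trials = 100_000
same = sum(
    1 for _ in range(trials)
    if len({random.random() < 0.5 for _ in range(3)}) == 1  # all three alike
)
print(same / trials)  # ~0.25: a spurious "trend" of three, one time in four
```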

Cass R. Sunstein (Author, Why Nudge?)

These ideas are hardly Kahneman’s most well-known, but they are full of implications, and we have only started to understand them.

1. The outrage heuristic. People’s judgments about punishment are a product of outrage, which operates as a shorthand for more complex inquiries that judges and lawyers often think relevant. When people decide about appropriate punishment, they tend to ask a simple question: How outrageous was the underlying conduct? It follows that people are intuitive retributivists, and also that utilitarian thinking will often seem uncongenial and even outrageous.

2. Scaling without a modulus. Remarkably, it turns out that people often agree on how outrageous certain misconduct is (on a scale of 1 to 8), but also remarkably, their monetary judgments are all over the map. The reason is that people do not have a good sense of how to translate their judgments of outrage onto the monetary scale. As Kahneman shows, some work in psychophysics explains the problem: People are asked to “scale without a modulus,” and that is an exceedingly challenging task. The result is uncertainty and unpredictability. These claims have implications for numerous questions in law and policy, including the award of damages for pain and suffering, administrative penalties, and criminal sentences.

3. Rhetorical asymmetry. In our work on jury awards, we found that deliberating juries typically produce monetary awards against corporate defendants that are higher, and indeed much higher, than the median award of the individual jurors before deliberation began. Kahneman’s hypothesis is that in at least a certain category of cases, those who argue for higher awards have a rhetorical advantage over those who argue for lower awards, leading to a rhetorical asymmetry. The basic idea is that in light of social norms, one side, in certain debates, has an inherent advantage – and group judgments will shift accordingly. A similar rhetorical asymmetry can be found in groups of many kinds, in both private and public sectors, and it helps to explain why groups move.

4. Predictably incoherent judgments. We found that when people make moral or legal judgments in isolation, they produce a pattern of outcomes that they would themselves reject, if only they could see that pattern as a whole. A major reason is that human thinking is category-bound. When people see a case in isolation, they spontaneously compare it to other cases that are mainly drawn from the same category of harms. When people are required to compare cases that involve different kinds of harms, judgments that appear sensible when the problems are considered separately often appear incoherent and arbitrary in the broader context. In my view, Kahneman’s idea of predictable incoherence has yet to be adequately appreciated; it bears on both fiscal policy and on regulation.

Phil Rosenzweig

For years, there were (as the old saying has it) two kinds of people: those relatively few of us who were aware of the work of Danny Kahneman and Amos Tversky, and the much more numerous who were not. Happily, the balance is now shifting, and more of the general public has been able to hear directly a voice that is in equal measures wise and modest.

Sendhil Mullainathan (Author of Scarcity: Why Having Too Little Means So Much)

… Kahneman and Tversky’s early work opened this door exactly because it was not what most people think it was. Many think of this work as an attack on rationality (often defined in some narrow technical sense). That misconception still exists among many, and it misses the entire point of their exercise. Attacks on rationality had been around well before Kahneman and Tversky—many people recognized that the simplifying assumptions of economics were grossly over-simplifying. Of course humans do not have infinite cognitive abilities. We are also not as strong as gorillas, as fast as cheetahs, and cannot swim like sea lions. But we do not therefore say that there is something wrong with humans. That we have limited cognitive abilities is both true and no more helpful to doing good social science than to acknowledge our weakness as swimmers. Pointing it out did not open any new doors.

Kahneman and Tversky’s work did not just attack rationality, it offered a constructive alternative: a better description of how humans think. People, they argued, often use simple rules of thumb to make judgments, which incidentally is a pretty smart thing to do. But this is not the insight that left us one step from doing behavioral economics. The breakthrough idea was that these rules of thumb could be catalogued. And once understood they can be used to predict where people will make systematic errors. Those two words are what made behavioral economics possible.

Nassim Taleb (Author of Antifragile)

Here is an insight Danny K. triggered that changed the course of my work. I figured out a nontrivial problem in randomness and its underestimation a decade ago while reading the following sentence in a 1986 paper by Kahneman and Miller:

A spectator at a weight lifting event, for example, will find it easier to imagine the same athlete lifting a different weight than to keep the achievement constant and vary the athlete’s physique.

This idea of varying one side, not the other, also applies to mental simulations of future (random) events, when people engage in projections of different counterfactuals. Authors and managers have a tendency to take one variable as fixed, sort of a numeraire, and perturbate the other, as a default in mental simulations. One side is going to be random, not the other.

It hit me that the mathematical consequence is vastly more severe than it appears. Kahneman and colleagues focused on the bias that variable of choice is not random. But the paper set off in my mind the following realization: now what if we were to go one step beyond and perturbate both? The response would be nonlinear. I had never considered the effect of such nonlinearity earlier nor seen it explicitly made in the literature on risk and counterfactuals. And you never encounter one single random variable in real life; there are many things moving together.

Increasing the number of random variables compounds the number of counterfactuals and causes more extremes—particularly in fat-tailed environments (i.e., Extremistan): imagine perturbating by producing a lot of scenarios and, in one of the scenarios, increasing the weights of the barbell and decreasing the bodyweight of the weightlifter. This compounding would produce an extreme event of sorts. Extreme, or tail events (Black Swans) are therefore more likely to be produced when both variables are random, that is real life. Simple.

Now, in the real world we never face one variable without something else with it. In academic experiments, we do. This sets the serious difference between laboratory (or the casino’s “ludic” setup), and the difference between academia and real life. And such difference is, sort of, tractable.

… Say you are the manager of a fertilizer plant. You try to issue various projections of the sales of your product—like the weights in the weightlifter’s story. But you also need to keep in mind that there is a second variable to perturbate: what happens to the competition—you do not want them to be lucky, invent better products, or cheaper technologies. So not only you need to predict your fate (with errors) but also that of the competition (also with errors). And the variance from these errors add arithmetically when one focuses on differences.
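Taleb’s closing remark, that the variances of the two forecast errors “add arithmetically when one focuses on differences,” is the standard identity Var(X - Y) = Var(X) + Var(Y) for independent errors. A minimal sketch, with invented error sizes and assuming independence:

```python
import random
import statistics

# Forecast errors for my own sales and for the competition's fate
# (standard deviations 2.0 and 1.5 are invented for illustration).
n = 100_000
my_error = [random.gauss(0, 2.0) for _ in range(n)]
rival_error = [random.gauss(0, 1.5) for _ in range(n)]

# Focusing on the difference compounds both sources of error.
difference = [a - b for a, b in zip(my_error, rival_error)]

print(statistics.variance(difference))  # ~6.25 empirically
print(2.0**2 + 1.5**2)                  # Var(X - Y) = Var(X) + Var(Y) = 6.25
```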

Rory Sutherland

When I met Danny in London in 2009 he diffidently said that the only hope he had for his work was that “it might lead to a better kind of gossip”—where people discuss each other’s motivations and behaviour in slightly more intelligent terms. To someone from an industry where a new flavour-variant of toothpaste is presented as being an earth-changing event, this seemed an incredibly modest aspiration for such important work.

However, if this was his aim, he has surely succeeded. When I meet people, I now use what I call “the Kahneman heuristic”. You simply ask people “Have you read Danny Kahneman’s book?” If the answer is yes, you know (p>0.95) that the conversation will be more interesting, wide-ranging and open-minded than otherwise.

And it then occurred to me that his aim—for better conversations—was perhaps not modest at all. Multiplied a millionfold it may be very important indeed. In the social sciences, I think it is fair to say, the good ideas are not always influential and the influential ideas are not always good. Kahneman’s work is now both good and influential.

Nassim Taleb on the Notion of Alternative Histories

We see what’s visible and available. Often this is nothing more than randomness, and yet we wrap a narrative around it. The trader who is rich must know what he is doing. A good outcome means we made the right decisions, right? Not so fast. If we were wise, we would not judge the quality of a decision by its outcome. There are alternative histories worth considering.

***

Writing in Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, Nassim Taleb hits on the notion of alternative histories.

Taleb argues that we should judge people by the costs of the alternative, that is, by how things would have turned out had history played out another way. These “substitute courses of events are called alternative histories.”

Russian Roulette

Taleb writes:

Clearly, the quality of a decision cannot be solely judged based on its outcome, but such a point seems to be voiced only by people who fail (those who succeed attribute their success to the quality of their decision).

[…]

One can illustrate the strange concept of alternative histories as follows. Imagine an eccentric (and bored) tycoon offering you $10 million to play Russian roulette, i.e., to put a revolver containing one bullet in the six available chambers to your head and pull the trigger. Each realization would count as one history, for a total of six possible histories of equal probabilities. Five out of these six histories would lead to enrichment; one would lead to a statistic, that is, an obituary with an embarrassing (but certainly original) cause of death. The problem is that only one of the histories is observed in reality; and the winner of $10 million would elicit the admiration and praise of some fatuous journalist (the very same ones who unconditionally admire the Forbes 500 billionaires). Like almost every executive I have encountered during an eighteen-year career on Wall Street (the role of such executives in my view being no more than a judge of results delivered in a random manner), the public observes the external signs of wealth without even having a glimpse at the source (we call such source the generator). Consider the possibility that the Russian roulette winner would be used as a role model by his family, friends, and neighbors.

While the remaining five histories are not observable, the wise and thoughtful person could easily make a guess as to their attributes. It requires some thoughtfulness and personal courage. In addition, in time, if the roulette-betting fool keeps playing the game, the bad histories will tend to catch up with him. Thus, if a twenty-five-year-old played Russian roulette, say, once a year, there would be a very slim possibility of his surviving until his fiftieth birthday— but, if there are enough players, say thousands of twenty-five-year-old players, we can expect to see a handful of (extremely rich) survivors (and a very large cemetery).

[…]

The reader can see my unusual notion of alternative accounting: $10 million earned through Russian roulette does not have the same value as $10 million earned through the diligent and artful practice of dentistry. They are the same, can buy the same goods, except that one’s dependence on randomness is greater than the other.
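Taleb’s survival arithmetic in the passage above is easy to verify: twenty-five independent pulls, each survived with probability 5/6, leave roughly a one percent chance of reaching fifty. A quick check (my sketch, not Taleb’s code):

```python
# One pull per year from age twenty-five to fifty, each survived
# with probability 5/6.
p_survive_year = 5 / 6
years = 25
p_alive_at_50 = p_survive_year ** years
print(p_alive_at_50)  # ~0.0105: roughly a 1% chance of reaching fifty

# With thousands of players, a handful of (extremely rich) survivors
# is expected, along with a very large cemetery.
players = 1_000
print(players * p_alive_at_50)  # ~10 expected survivors out of 1,000
```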

Reality is different from roulette. In the example above, while the result is unknown, you at least know the odds; most of life is spent dealing with uncertainty, where the odds themselves are unknown. Bullets are infrequent, “like a revolver that would have hundreds, even thousands, of chambers instead of six.” After a while you forget about the bullet. You can’t see the chamber, and we generally think of risk in terms of what is visible.

Interestingly, this is the core of the black swan, which is really the problem of induction: no amount of evidence can allow the inference that something is true, whereas one counterexample can refute a conclusion. This idea is also related to the “denigration of history,” where we think things that happen to others would not happen to us.

What Is Complexity?

Complexity seems more and more common these days, which makes it important to determine when you’re operating in a complex domain. Complexity means that little things can have a big effect and big things can have no impact. Complexity also renders some of the ways we think about problems useless, at best.

In The Black Swan: The Impact of the Highly Improbable, Nassim Taleb writes:

I will simplify here with a functional definition of complexity—among many more complete ones. A complex domain is characterized by the following: there is a great degree of interdependence between its elements, both temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops, which cause “fat tails.” That is, they prevent the working of the Central Limit Theorem that, as we saw in Chapter 15, establishes Mediocristan thin tails under summation and aggregation of elements and causes “convergence to the Gaussian.” In lay terms, moves are exacerbated over time instead of being dampened by counterbalancing forces. Finally, we have nonlinearities that accentuate the fat tails.

So, complexity implies Extremistan. (The opposite is not necessarily true.)
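The mechanism Taleb describes, reinforcing feedback loops defeating the Central Limit Theorem, can be seen in a toy simulation (my sketch, with invented parameters): summing independent shocks yields thin tails, while letting each shock scale with the current state exacerbates moves and yields fat ones.

```python
import random
import statistics

random.seed(1)
n, steps = 10_000, 50

# Additive aggregation (Mediocristan): each step adds an independent shock,
# so the Central Limit Theorem applies and tails stay thin.
additive = [sum(random.gauss(0, 1) for _ in range(steps)) for _ in range(n)]

# Multiplicative aggregation (Extremistan-ish): each shock scales with the
# current size, a reinforcing feedback loop that fattens the tails.
multiplicative = []
for _ in range(n):
    x = 1.0
    for _ in range(steps):
        x *= 1 + random.gauss(0, 0.2)
    multiplicative.append(x)

for name, xs in (("additive", additive), ("multiplicative", multiplicative)):
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    print(f"{name}: max is {(max(xs) - mu) / sd:.1f} standard deviations out")
```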

Based on this definition, complexity highlights some of the flaws in the way we approach things, inductive reasoning among them.

How do we know what we know? How do we know that what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.

Consider the turkey that is fed every day.

Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

If the hand that feeds you can wring your neck, you’re a turkey.

If you haven’t read it already, The Black Swan: The Impact of the Highly Improbable is a must-read.

A Wonderfully Simple Heuristic to Recognize Charlatans

While we can learn a lot from what successful people do in the mornings, as Nassim Taleb points out, we can learn a lot from what failed people do before breakfast too.

Inversion is actually one of the most powerful mental models in our arsenal. Not only does inversion help us innovate but it also helps us deal with uncertainty.

“It is in the nature of things,” says Charlie Munger, “that many hard problems are best solved when they are addressed backward.”

Sometimes we can’t articulate what we want. Sometimes we don’t know. Sometimes there is so much uncertainty that the best approach is to attempt to avoid certain outcomes rather than attempt to guide towards the ones we desire. In short, we don’t always know what we want but we know what we don’t want.

Avoiding stupidity is often easier than seeking brilliance.

“For the Arab scholar and religious leader Ali Bin Abi-Taleb (no relation), keeping one’s distance from an ignorant person is equivalent to keeping company with a wise man.”

The “apophatic,” writes Nassim Taleb in Antifragile, “focuses on what cannot be said directly in words, from the Greek apophasis (saying no, or mentioning without meaning).”

The method began as an avoidance of direct description, leading to a focus on negative description, what is called in Latin via negativa, the negative way, after theological traditions, particularly in the Eastern Orthodox Church. Via negativa does not try to express what God is— leave that to the primitive brand of contemporary thinkers and philosophasters with scientistic tendencies. It just lists what God is not and proceeds by the process of elimination.

Statues are carved by subtraction.

Michelangelo was asked by the pope about the secret of his genius, particularly how he carved the statue of David, largely considered the masterpiece of all masterpieces. His answer was: “It’s simple. I just remove everything that is not David.”

Where Is the Charlatan?

Recall that the interventionista focuses on positive action—doing. Just like positive definitions, we saw that acts of commission are respected and glorified by our primitive minds and lead to, say, naive government interventions that end in disaster, followed by generalized complaints about naive government interventions, as these, it is now accepted, end in disaster, followed by more naive government interventions. Acts of omission, not doing something, are not considered acts and do not appear to be part of one’s mission.

[…]

I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.).

We learn the most from the negative.

[I]n practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.

Skill doesn’t always win.

In anything requiring a combination of skill and luck, the most skillful don’t always win. That’s one of the key messages of Michael Mauboussin’s book The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing. This is hard for us to swallow because we intuitively feel that success implies skill, for the same reason that we assume a good outcome implies a good decision. We can’t predict whether a person who has skills will succeed, but Taleb argues that we can “pretty much predict” that a person without skills will eventually have their luck run out.

Subtractive Knowledge

Taleb argues that the greatest “and most robust contribution to knowledge consists in removing what we think is wrong—subtractive epistemology.” He continues that “we know a lot more about what is wrong than what is right.” What does not work, that is, negative knowledge, is more robust than positive knowledge: a single failure is enough to show that something is wrong, while no amount of confirmation is enough to prove that something is right.

There is a whole book on the half-life of what we consider to be ‘knowledge or fact’ called The Half-Life of Facts. Basically, because our understanding of the world is partial and constantly evolving, we believe things that are not true. That’s not the only reason we hold false beliefs, but it’s a big one.

The thing is, we’re not so smart. If I’ve only seen white swans, saying “all swans are white” may be accurate given my limited view of the world, but we can never be sure that there are no black swans until we’ve seen everything.

Or as Taleb puts it: “since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.”

Most people attribute this philosophical argument to Karl Popper but Taleb dug up some evidence that it goes back to the “skeptical-empirical” medical schools of the post-classical era in the Eastern Mediterranean.

Being antifragile isn’t about what you do, but rather what you avoid. Avoid fragility. Avoid stupidity. Don’t be the sucker. Be like Darwin.