The Insidious Evils of ‘Like’ Culture

Most people thought the Internet represented a liberation from conformity, a place where ideas, freedom of information, and creativity ruled. But what role does our need to belong play? What role does the simple “like” button play in social approval? The WSJ article below argues that, as a result of the like button, “we can now search not just for information, merchandise and kitten videos on the Internet, but for approval.”

Just as stand-up comedians are trained to be funny by observing which of their lines and expressions are greeted with laughter, so too are our thoughts online molded to conform to popular opinion by these buttons. A status update that is met with no likes (or a clever tweet that isn’t retweeted) becomes the equivalent of a joke met with silence. It must be rethought and rewritten. And so we don’t show our true selves online, but a mask designed to conform to the opinions of those around us.

Conversely, when we’re looking at someone else’s content—whether a video or a news story—we are able to see first how many people liked it and, often, whether our friends liked it. And so we are encouraged not to form our own opinion but to look to others for cues on how to feel.

“Like” culture is antithetical to the concept of self-esteem, which a healthy individual should be developing from the inside out rather than from the outside in. Instead, we are shaped by our stats, which include not just “likes” but the number of comments generated in response to what we write and the number of friends or followers we have. I’ve seen rock stars agonize over the fact that another artist has far more Facebook “likes” and Twitter followers than they do.

Because it’s so easy to medicate our need for self-worth by pandering to win followers, “likes” and view counts, social media have become the métier of choice for many people who might otherwise channel that energy into books, music or art—or even into their own Web ventures.

Continue Reading

Is Everything Obvious Once You Know The Answer?

Reading Duncan Watts’s new book Everything is Obvious: Once You Know The Answer can make you uncomfortable.

Common sense is particularly well adapted to handling the complexity of everyday situations. We get into trouble when we project our common sense onto situations outside the realm of everyday life.

Applying common sense in these areas, Watts argues, “turns out to suffer from a number of errors that systematically mislead us. Yet because of the way we learn from experience—even experiences that are never repeated or that take place in other times and places—the failings of commonsense reasoning are rarely apparent to us.”

We think we have the answers but we don’t. Most real-world problems are more complex than we think. “When policy makers sit down, say, to design some scheme to alleviate poverty, they invariably rely on their own common-sense ideas about why it is that poor people are poor, and therefore how best to help them.” This is where we get into trouble. “A quick look at history,” Watts argues, “suggests that when common sense is used for purposes beyond the everyday, it can fail spectacularly.”

According to Watts, commonsense reasoning suffers from three types of errors, which reinforce one another. First, our mental model of individual behaviour is systematically flawed. Second, our mental model of complex systems (collective behaviour) is equally flawed. Lastly—and most interesting, in my view—is that “we learn less from history than we think we do, and that this misperception in turn skews our perception of the future.”

Whenever something interesting happens—a book by an unknown author rocketing to the top of the best-seller list, an unknown search engine increasing in value more than 100,000 times in less than 10 years, the housing bubble collapsing—we instinctively want to know why. We look for an explanation. “In this way,” Watts says, “we deceive ourselves into believing that we can make predictions that are impossible.”

“By providing ready explanations for whatever particular circumstances the world throws at us, commonsense explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what we think we know is really true, or is just something we happen to believe.”

Once we know the outcome, our brains weave a clever story based on the aspects of the situation that seem relevant (at least, relevant in hindsight). We convince ourselves that we fully understand things that we don’t.

Is Netflix successful, as Reed Hastings argues, because of its culture? Which aspects of that culture make it successful? Are there companies with a similar culture that fail? “The paradox of common sense, then, is that even as it helps us make sense of the world, it can actively undermine our ability to understand it.”

The key to improving your ability to make decisions, then, is to figure out what kinds of predictions we can make and how we can improve their accuracy.

One problem with making predictions is knowing which variables to look at and how to weigh them. Even if we get the variables and their relative importance right, our predictions are only as good as the degree to which the future resembles the past. As Warren Buffett says, “the rearview mirror is always clearer than the windshield.”

Relying on historical data is problematic because big strategic decisions come along so rarely. “If you could make millions, or even hundreds, of such bets,” Watts argues, “it would make sense to go with the historical probability. But when facing a decision about whether or not to lead the country into war, or to make some strategic acquisition, you cannot count on getting more than one attempt. … making one-off strategic decisions is therefore ill suited to statistical models or crowd wisdom.”

Watts finds it ironic that organizations using the best practices in strategy planning can also be the most vulnerable to planning errors. This is the strategy paradox.

Michael Raynor, author of The Strategy Paradox, argues that the main cause of strategic failure is not bad strategy but great strategy that happens to be wrong. Bad strategy is characterized by lack of vision, muddled leadership, and inept execution, which is more likely to lead to mediocrity than colossal failure. Great strategy, on the other hand, is marked by clarity of vision, bold leadership, and laser-focused execution. Great strategy can lead to great successes, as it did with the iPod, but it can also lead to enormous failures, as it did with Betamax. “Whether great strategy succeeds or fails therefore depends entirely on whether the initial vision happens to be right or not. And that is not just difficult to know in advance, but impossible.” Raynor argues that the solution to this is to develop methods for planning that account for strategic uncertainty. (I’ll eventually get around to reviewing The Strategy Paradox—it was a great read.)

Rather than trying to predict an impossible future, another idea is to react to changing circumstances as rapidly as possible, dropping alternatives that are not working no matter how promising they seem and diverting resources to those that are succeeding. This sounds an awful lot like evolution (variation and selection).

Watts and Raynor’s solution to overcome our inability to predict the future echoes Peter Palchinsky’s principles. The Palchinsky Principles, as stated by Tim Harford in Adapt (review), are: “first, seek out new ideas and try new things; second, when trying something new do it on a scale where failure is survivable; third, seek out feedback and learn from your mistakes as you go along.”
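
To make this concrete, here is a minimal sketch in Python of such a variation-and-selection loop. It is my own illustration, not code from Watts or Harford: the strategy names, their “true” success rates, the batch size, and the 80%-of-best survival cutoff are all made-up parameters chosen only to show the shape of the process.

    import random

    # Toy "variation and selection" loop (illustrative assumptions only):
    # try several strategies in small, survivable batches, then keep
    # diverting effort toward whichever ones are actually working.
    random.seed(42)
    true_rates = {"A": 0.30, "B": 0.55, "C": 0.45, "D": 0.20}  # unknown in real life
    results = {name: [0, 0] for name in true_rates}            # [successes, trials]

    active = set(true_rates)
    for round_number in range(10):
        for name in active:
            for _ in range(20):                                # a small, survivable bet
                results[name][0] += random.random() < true_rates[name]
                results[name][1] += 1
        observed = {n: results[n][0] / results[n][1] for n in active}
        best = max(observed.values())
        # Selection: drop anything well below the current best performer
        active = {n for n in observed if observed[n] >= 0.8 * best}

    print("surviving strategies:", sorted(active))

The point is not the specific numbers but the shape of the process: no attempt to predict the winner up front, frequent feedback, and failures kept small enough to survive.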

Of course this experimental approach has limits. The US can’t go to war in half of Iraq with one strategy and in the other half with a different approach to see which one works best. Watts says, “for decisions like these, it’s unlikely that an experimental approach will be of much help.”

In the end, Watts concludes that planners need to learn to behave more “like what the development economist William Easterly calls searchers.” As Easterly put it:

A Planner thinks he already knows the answer; he thinks of poverty as a technical engineering problem that his answers will solve. A Searcher admits he doesn’t know the answers in advance; he believes that poverty is a complicated tangle of political, social, historical, institutional, and technological factors…and hopes to find answers to individual problems by trial and error…A Planner believes outsiders know enough to impose solutions. A Searcher believes only insiders have enough knowledge to find solutions, and that most solutions must be homegrown.

Still curious? Read Everything is Obvious: Once You Know The Answer.

Suppressing Volatility Makes the World Less Predictable and More Dangerous

I recommend reading Nassim Taleb’s recent article (PDF) in Foreign Affairs. It’s the ultimate example of iatrogenics by the fragilista.

If you don’t have time, here are my notes:

  • Complex systems that have artificially suppressed volatility tend to become extremely fragile, while at the same time exhibiting no visible risks.
  • Seeking to restrict variability seems to be good policy (who does not prefer stability to chaos?), so it is with very good intentions that policymakers unwittingly increase the risk of major blowups.
  • Because policymakers believed it was better to do something than to do nothing, they felt obligated to heal the economy rather than wait and see if it healed on its own.
  • Those who seek to prevent volatility on the grounds that any and all bumps in the road must be avoided paradoxically increase the probability that a tail risk will cause a major explosion. Consider as a thought experiment a man placed in an artificially sterilized environment for a decade and then invited to take a ride on a crowded subway; he would be expected to die quickly.
  • But although these controls might work in some rare situations, the long-term effect of any such system is an eventual and extremely costly blowup whose cleanup costs can far exceed the benefits accrued.
  • … Government interventions are laden with unintended—and unforeseen—consequences, particularly in complex systems, so humans must work with nature by tolerating systems that absorb human imperfections rather than seek to change them.
  • Although it is morally satisfying, the film (Inside Job) naively overlooks the fact that humans have always been dishonest and regulators have always been behind the curve.
  • Humans must try to resist the illusion of control: just as foreign policy should be intelligence-proof (it should minimize its reliance on the competence of information-gathering organizations and the predictions of “experts” in what are inherently unpredictable domains), the economy should be regulator-proof, given that some regulations simply make the system itself more fragile.
  • The “turkey problem” occurs when a naive analysis of stability is derived from the absence of past variations.
  • Imagine someone who keeps adding sand to a sand pile without any visible consequence, until suddenly the entire pile crumbles. It would be foolish to blame the collapse on the last grain of sand rather than the structure of the pile, but that is what people do consistently, and that is the policy error.
  • As with a crumbling sand pile, it would be foolish to attribute the collapse of a fragile bridge to the last truck that crossed it, and even more foolish to try to predict in advance which truck might bring it down.
  • Obama’s mistake illustrates the illusion of local causal chains—that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect.
  • Governments are wasting billions of dollars on attempting to predict events that are produced by interdependent systems and are therefore not statistically understandable at the individual level.
  • Most explanations being offered for the current turmoil in the Middle East follow the “catalysts as causes” confusion. The riots in Tunisia and Egypt were initially attributed to rising commodity prices, not to stifling and unpopular dictatorships.
  • Again, the focus is wrong even if the logic is comforting. It is the system and its fragility, not events, that must be studied—what physicists call “percolation theory,” in which the properties of the terrain are studied rather than those of a single element of the terrain.
  • Humans fear randomness—a healthy ancestral trait inherited from a different environment. Whereas in the past, which was a more linear world, this trait enhanced fitness and increased chances of survival, it can have the reverse effect in today’s complex world, making volatility take the shape of nasty Black Swans hiding behind deceptive periods of “great moderation.”
  • But alongside the “catalysts as causes” confusion sit two mental biases: the illusion of control and the action bias (the illusion that doing something is always better than doing nothing). This leads to the desire to impose man-made solutions. Greenspan’s actions were harmful, but it would have been hard to justify inaction in a democracy where the incentive is to always promise a better outcome than the other guy, regardless of the actual delayed cost.
  • As Seneca wrote in De clementia, “repeated punishment, while it crushes the hatred of a few, stirs the hatred of all … just as trees that have been trimmed throw out again countless branches.”
  • The Romans were wise enough to know that only a free man under Roman law could be trusted to engage in a contract; by extension, only a free people can be trusted to abide by a treaty.
  • As Jean-Jacques Rousseau put it, “A little bit of agitation gives motivation to the soul, and what really makes the species prosper is not peace so much as freedom.” With freedom comes some unpredictable fluctuation. This is one of life’s packages: there is no freedom without noise—and no stability without volatility.

***

Still curious? Nassim Taleb’s newest book is Antifragile: Things That Gain from Disorder. He is also the author of The Black Swan, Fooled By Randomness, and The Bed of Procrustes.

Problem Solving Tools

Do you know of any good problem solving tools? Well, I didn’t. My approach seemed to consist mostly of dumb luck.

That works most of the time, but feels inadequate for someone looking to improve their ability to make good decisions.

So I did what any person preferring reading to reality TV does and purchased a lot of books on problem solving. I convinced myself that investing a little time to find some problem solving tools that marginally improved my ability to solve problems effectively would pay off handsomely over a long lifetime.

The book with the most problem solving tools is one that I didn’t think I’d enjoy at all: Problem Solving 101 by Ken Watanabe.

This book offered a simple way to deal with problems that I can still recall today: (1) understand the current situation; (2) identify the root cause of the problem; (3) develop an effective action plan; and (4) execute until the problem is solved. While simple—and remarkably effective—the process is not easy to execute.

If you’ve ever found yourself in the middle of a problem solving meeting, you know that our bias towards action causes us to want to skip steps 1 and 2. We’re prone to action. We want to shoot first and ask questions later.

This bias makes the simple four-step approach above almost painful. If we think we understand the problem, our minds naturally see steps 1 and 2 as a waste. The next time you find yourself in an unfortunate problem solving meeting, ask yourself a few questions: Are we addressing a problem or a symptom? If we’re addressing a problem, does everyone in the room agree on what it is? How will we know we’ve solved it?

Think about how doctors diagnose patients.

When you visit a doctor, they first ask you questions about your symptoms and then take your temperature. They might run a blood test or two. Maybe order an X-ray. They collect information that can be used to identify the root cause of your illness. After they’ve determined, and hopefully confirmed, a diagnosis, they decide what to prescribe. While the process isn’t the most efficient, it leads to good outcomes more often than not.

If you want to learn to solve problems better, you should buy Problem Solving 101. If you’re really motivated, cut your cable subscription and read Judgment in Managerial Decision Making too. Exercising your brain is time well spent.

Still Curious? Check out these books on decision making.

Tight Coupling and Complexity

From The London Review of Books comes an article on the rise of algorithmic trading:

Systems that are both tightly coupled and highly complex, Perrow argues in Normal Accidents, are inherently dangerous. Crudely put, high complexity in a system means that if something goes wrong it takes time to work out what has happened and to act appropriately. Tight coupling means that one doesn’t have that time. Moreover, he suggests, a tightly coupled system needs centralised management, but a highly complex system can’t be managed effectively in a centralised way because we simply don’t understand it well enough; therefore its organisation must be decentralised. Systems that combine tight coupling with high complexity are an organisational contradiction, Perrow argues: they are ‘a kind of Push-me-pull-you out of the Doctor Dolittle stories (a beast with heads at both ends that wanted to go in both directions at once)’.

Perrow’s theory is just that, a theory. It has never been tested very systematically, and certainly never proved conclusively, but it points us in a necessary direction. When thinking about automated trading, it’s easy to focus too narrowly, either pointing complacently to its undoubted benefits or invoking a sometimes exaggerated fear of out of control computers. Instead, we have to think about financial systems as a whole, desperately hard though that kind of thinking may be. The credit system that failed so spectacularly in 2007-8 is slowly recovering, but governments have not dealt with the systemic flaws that led to the crisis, such as the combination of banks that are too big to be allowed to fail and ‘shadow banks’ (institutions that perform bank-like functions but aren’t banks) that are regulated too weakly. Share trading is another such system: it is less tightly interconnected in Europe than in the United States, but it is drifting in that direction here as well. There has been no full-blown stock-market crisis since October 1987: last May’s events were not on that scale. But as yet we have done little to ensure that there won’t be another.

Continue Reading

I highly recommend reading Normal Accidents and The Logic of Failure: Recognizing And Avoiding Error In Complex Situations.

Predicting the Improbable

One natural human bias is that we tend to draw strong conclusions based on few observations. This bias, misconceptions of chance, shows itself in many ways, including the gambler’s fallacy and the hot hand fallacy. Such biases may induce public opinion and the media to call for dramatic swings in policy or regulation in response to highly improbable events. These biases are made even worse by our natural tendency to “do something.”

***

An event like an earthquake happens, making it more available in our mind.

We think the event is more probable than the evidence would support, so we run out and buy earthquake insurance. Over many years, as the earthquake fades from our mind (making it less available), we believe, paradoxically, that the risk is lower (based on recent evidence), so we cancel our policy. …

Some events are hard to predict. This becomes even more complicated when you consider not only predicting the event but also its timing. The article below points out that experts base their predictions on inference from observing the past and are just as prone to these biases as the rest of us.

Why do people over-infer from recent events?

There are two plausible but apparently contradictory intuitions about how people over-infer from observing recent events.

The gambler’s fallacy claims that people expect rapid reversion to the mean.

For example, upon observing three outcomes of red in roulette, gamblers tend to think that black is now due and tend to bet more on black (Croson and Sundali 2005).

The hot hand fallacy claims that upon observing an unusual streak of events, people tend to predict that the streak will continue. (See Misconceptions of Chance)

The hot hand fallacy term originates from basketball where players who scored several times in a row are believed to have a “hot hand”, i.e. are more likely to score at their next attempt.

Recent behavioural theory has proposed a foundation to reconcile the apparent contradiction between the two types of over-inference. The intuition behind the theory can be explained with reference to the example of roulette play.

A person believing in the law of small numbers thinks that small samples should look like the parent distribution, i.e. that the sample should be representative of the parent distribution. Thus, the person believes that out of, say, 6 spins, 3 should be red and 3 should be black (ignoring green). If observed outcomes in the small sample differ from the 50:50 ratio, immediate reversal is expected. Thus, somebody who observes red on the first 2 of 6 consecutive spins believes that black is “due” on the 3rd spin to restore the 50:50 ratio.
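
It is worth checking how often a fair wheel actually delivers that “representative” split. The short Python sketch below is my own illustration (not from the article); like the example, it ignores green and treats each spin as a fair 50:50 draw.

    from math import comb

    # Exact binomial probabilities for the number of reds in 6 fair spins:
    # P(k reds) = C(6, k) * 0.5^6 (green ignored, as in the example above).
    for k in range(7):
        p = comb(6, k) * 0.5 ** 6
        print(f"{k} reds in 6 spins: {p:.3f}")

The even 3-3 split comes up only about 31% of the time; lopsided small samples are entirely normal, which is exactly what the believer in the law of small numbers fails to appreciate.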

Now suppose such a person is uncertain about the fairness of the roulette wheel. Upon observing an improbable event (6 times red in 6 spins, say), the person starts to doubt the fairness of the roulette wheel, because a long streak does not correspond to what he believes a random sequence should look like. The person then revises his model of the data generating process and starts to believe that the outcome on a streak is more likely. The upshot of the theory is that the same person may at first (when the streak is short) believe in reversion of the trend (the gambler’s fallacy) and later – when the streak is long – in continuation of the trend (the hot hand fallacy).
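
The switch from doubting to abandoning the “fair wheel” model can be illustrated with a toy Bayesian calculation. The sketch below is again my own illustration rather than the model from the underlying paper: the 95% prior on a fair wheel and the 0.75 red probability for the “unfair” wheel are arbitrary numbers chosen to show the effect.

    # Toy illustration (assumed numbers): an observer who entertains some doubt
    # about the wheel's fairness updates, via Bayes' rule, toward an "unfair"
    # model as a streak of reds lengthens, so the predicted chance of yet
    # another red creeps upward.

    def prob_next_red(streak, prior_fair=0.95, p_red_unfair=0.75):
        like_fair = 0.5 ** streak              # chance of the streak if the wheel is fair
        like_unfair = p_red_unfair ** streak   # chance of the streak if it favours red
        post_fair = (prior_fair * like_fair) / (
            prior_fair * like_fair + (1 - prior_fair) * like_unfair
        )
        # Predicted probability of red on the next spin
        return post_fair * 0.5 + (1 - post_fair) * p_red_unfair

    for streak in (0, 2, 4, 6, 8, 10):
        print(f"after {streak} reds in a row: P(next red) = {prob_next_red(streak):.3f}")

Note that this simplified version only considers an alternative wheel biased toward red, so the prediction sits slightly above 0.5 even with no streak at all; a fuller treatment would allow bias in either direction. In the theory described above, a believer in the law of small numbers still expects reversal while the streak is short; only once the streak is long enough to undermine the “fair wheel” model does the prediction tip toward continuation, which is where the hot hand fallacy takes over.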

Continue Reading