This is a fascinating excerpt from “How Managers Express Their Creativity” by Herbert Simon that deals with information flow within groups and organizations. Simon compares successful scientific research with successful stock-market investing. Both scientists and investors are looking for mis-priced bets.
In this respect, successful scientific research has much in common with successful stock-market investment. Information is only valuable if others do not have it or do not believe it strongly enough to act on it. The investor pits his knowledge, beliefs and guesses against the knowledge, beliefs and guesses of others.
In neither domain—science or the stock market—is the professional looking for a “fair bet.” On the contrary, he or she is looking for a situation where superior knowledge—knowledge not yet available to others—can be made, with some reasonable assurance, to pay off. Sometimes that superior knowledge comes from persistence in acquiring more “chunks” than most others have. Sometimes it comes from the accidents that have already been mentioned. But whatever its source, it seldom completely eliminates the element of risk. Investors and scientists require a “contrarian” streak that gives them the confidence to pit their knowledge and judgment against the common wisdom of their colleagues.
If you’re interested in learning more about Herbert Simon, I recommend reading Models of My Life. I’ve also written about Simon before (here, and here).
Some of the general heuristics, or rules of thumb, that people use in making judgments produce biases: toward classifying situations according to their representativeness, toward judging frequencies according to the availability of examples in memory, or toward interpretations warped by the way in which a problem has been framed. These heuristics have important implications for individuals and society.
Insensitivity to Base Rates
When people are given information about the probabilities of certain events (e.g., how many lawyers and how many engineers are in a population that is being sampled), and then are given some additional information as to which of the events has occurred (which person has been sampled from the population), they tend to ignore the prior probabilities in favor of incomplete or even quite irrelevant information about the individual event. Thus, if they are told that 70 percent of the population are lawyers, and if they are then given a noncommittal description of a person (one that could equally well fit a lawyer or an engineer), half the time they will predict that the person is a lawyer and half the time that he is an engineer–even though the laws of probability dictate that the best forecast is always to predict that the person is a lawyer.
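The cost of ignoring the base rate is easy to quantify. Here is a minimal simulation sketch (the 70 percent figure comes from Simon's example; the sample size and random seed are my own) comparing the two strategies: always predicting the majority class versus guessing 50/50 as the subjects did.

```python
import random

random.seed(0)
P_LAWYER = 0.7  # base rate from Simon's example
N = 100_000     # illustrative sample size

# Draw a population where 70% are lawyers.
population = ["lawyer" if random.random() < P_LAWYER else "engineer"
              for _ in range(N)]

# Strategy 1: always predict the majority class (respect the base rate).
base_rate_accuracy = sum(p == "lawyer" for p in population) / N

# Strategy 2: ignore the base rate and guess 50/50, as subjects did
# when given a noncommittal description.
coin_flip_accuracy = sum(
    p == ("lawyer" if random.random() < 0.5 else "engineer")
    for p in population
) / N

print(f"always predict lawyer: {base_rate_accuracy:.2f}")  # ~0.70
print(f"50/50 guessing:        {coin_flip_accuracy:.2f}")  # ~0.50
```

Always betting on the base rate is right about 70 percent of the time; the coin-flip strategy is right only about 50 percent of the time, which is why the laws of probability favor the "boring" forecast.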
Insensitivity to Sample Size
People commonly misjudge probabilities in many other ways. Asked to estimate the probability that 60 percent or more of the babies born in a hospital during a given week are male, they ignore information about the total number of births, although it is evident that the probability of a departure of this magnitude from the expected value of 50 percent is smaller if the total number of births is larger (the standard error of a percentage varies inversely with the square root of the population size).
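The sample-size effect can be checked exactly with the binomial distribution. A sketch, with hospital sizes chosen for illustration (Simon's text does not specify the numbers): the chance of a week with 60 percent or more boys is far higher in a small hospital than in a large one.

```python
import math

def prob_at_least(n: int, frac: float = 0.6, p: float = 0.5) -> float:
    """P(X >= ceil(frac * n)) for X ~ Binomial(n, p):
    the chance that at least `frac` of n births are boys,
    treating each birth as an independent fair coin flip."""
    k_min = math.ceil(frac * n)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Hypothetical hospital sizes; the exact counts are illustrative.
small = prob_at_least(15)    # ~0.30: 60%+ boys is common in a small week
large = prob_at_least(150)   # ~0.01: the same deviation is rare at scale
print(f"15 births:  {small:.3f}")
print(f"150 births: {large:.3f}")
```

The intuition matches the standard-error formula Simon cites: the spread of a sample percentage shrinks like 1/√n, so large samples cluster much more tightly around the expected 50 percent.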
There are situations in which people assess the frequency of a class by the ease with which instances can be brought to mind. In one experiment, subjects heard a list of names of persons of both sexes and were later asked to judge whether there were more names of men or women on the list. In lists presented to some subjects, the men were more famous than the women; in other lists, the women were more famous than the men. For all lists, subjects judged that the sex that had the more famous personalities was the more numerous.
Framing and Loss Aversion
The way in which an uncertain possibility is presented may have a substantial effect on how people respond to it. When asked whether they would choose surgery in a hypothetical medical emergency, many more people said that they would when the chance of survival was given as 80 percent than when the chance of death was given as 20 percent.
Source: Decision Making and Problem Solving, Herbert A. Simon
Herbert Simon describes the difference between experienced decision makers and novice ones in his autobiography Models of My Life.
In doing so, he highlights the value of mental models and collecting a repository of positive responses that can be called upon when needed.
One can train a man so that he has at his disposal a list or repertoire of the possible actions that could be taken under the circumstances…A person who is new at the game does not have immediately at his disposal a set of possible actions to consider, but has to construct them on the spot – a time-consuming and difficult mental task.
The decision maker of experience has at his disposal a checklist of things to watch out for before finally accepting a decision. A large part of the difference between the experienced decision maker and the novice in these situations is not any particular intangible like “judgment” or “intuition.” If one could open the lid, so to speak, and see what was in the head of the experienced decision-maker, one would find that he had at his disposal repertoires of possible actions; that he had checklists of things to think about before he acted; and that he had mechanisms in his mind to evoke these, and bring these to his conscious attention when the situations for decisions arose.
Much of what we do to get people ready to act in situations of encounter consists of drilling these lists into them sufficiently deeply so that they will be evoked quickly at the time of the decision.
As someone interested in how the weak win wars, I found this article (pdf) by William Lynn, in a recent issue of Foreign Affairs, utterly fascinating.
…cyberwarfare is asymmetric. The low cost of computing devices means that U.S. adversaries do not have to build expensive weapons, such as stealth fighters or aircraft carriers, to pose a significant threat to U.S. military capabilities. A dozen determined computer programmers can, if they find a vulnerability to exploit, threaten the United States’ global logistics network, steal its operational plans, blind its intelligence capabilities, or hinder its ability to deliver weapons on target. Knowing this, many militaries are developing offensive capabilities in cyberspace, and more than 100 foreign intelligence organizations are trying to break into U.S. networks. Some governments already have the capacity to disrupt elements of the U.S. information infrastructure.
In cyberspace, the offense has the upper hand. The Internet was designed to be collaborative and rapidly expandable and to have low barriers to technological innovation; security and identity management were lower priorities. For these structural reasons, the U.S. government’s ability to defend its networks always lags behind its adversaries’ ability to exploit U.S. networks’ weaknesses. Adept programmers will find vulnerabilities and overcome security measures put in place to prevent intrusions. In an offense-dominant environment, a fortress mentality will not work. The United States cannot retreat behind a Maginot Line of firewalls or it will risk being overrun. Cyberwarfare is like maneuver warfare, in that speed and agility matter most. To stay ahead of its pursuers, the United States must constantly adjust and improve its defenses.
It must also recognize that traditional Cold War deterrence models of assured retaliation do not apply to cyberspace, where it is difficult and time consuming to identify an attack’s perpetrator. Whereas a missile comes with a return address, a computer virus generally does not. The forensic work necessary to identify an attacker may take months, if identification is possible at all. And even when the attacker is identified, if it is a nonstate actor, such as a terrorist group, it may have no assets against which the United States can retaliate. Furthermore, what constitutes an attack is not always clear. In fact, many of today’s intrusions are closer to espionage than to acts of war. The deterrence equation is further muddled by the fact that cyberattacks often originate from co-opted servers in neutral countries and that responses to them could have unintended consequences.