
Steven Pinker on Why Your Professional Writing Sucks (And What to Do)

When we know a lot about a topic, it can be difficult to write about it in a way that makes sense to the layperson. To write better about your professional specialty, avoid jargon and abstractions, put yourself in the reader’s place, and ask someone from your intended audience to read your draft.

***

Harvard’s cognitive psychology giant Steven Pinker has had no shortage of big, interesting topics to write about so far.

Starting in 1994 with his first book aimed at popular audiences, The Language Instinct, Pinker has discussed not only the origins of language, but the nature of human beings, the nature of our minds, the nature of human violence, and a host of related topics.

His most recent book, The Sense of Style, homes in on how to write well, and it continues to showcase his brilliant synthesizing mind. It’s a 21st-century version of Strunk & White: a book that aims to help us understand why our writing often sucks, and how we might make it suck a little less.

His deep background in linguistics and cognitive psychology lets him discuss language and writing more deeply than your average style guide; it’s also funny as hell in parts, which can be said of almost no other style guide.


Please No More “Ese”

In the third chapter, Pinker addresses the familiar problem of academese, legalese, professionalese…all the eses that make one want to throw a book, paper, or article in the trash rather than finish it. What causes them? Is it because we seek to obfuscate, as is commonly thought? Sometimes yes — especially when the author is trying to sell the reader something, be it a product or an idea.

But Pinker’s not convinced that concealment is driving most of our frustration with professional writing:

I have long been skeptical of the bamboozlement theory, because in my experience it does not ring true. I know many scholars who have nothing to hide and no need to impress. They do groundbreaking work on important subjects, reason well about clear ideas, and are honest, down-to-earth people, the kind you’d enjoy having a beer with. Still, their writing stinks.

So, if it’s not that we’re trying to mislead, what’s the problem?

***

Pinker first calls attention to the Curse of Knowledge — the inability to put ourselves in the shoes of a less informed reader.

The curse of knowledge is the single best explanation I know of why good people write bad prose. It simply doesn’t occur to the writer that her readers don’t know what she knows — that they haven’t mastered the patois of her guild, can’t divine the missing steps that seem too obvious to mention, have no way to visualize a scene that to her is as clear as day. And so she doesn’t bother to explain the jargon, or spell out the logic, or supply the necessary detail.

The first and simplest way this manifests itself is one we all encounter too frequently: Over-Abbreviation. It’s when we’re told to look up the date of the SALT conference for MLA sourcing on the HELMET system after our STEM meeting. (I only made one of those up.) Pinker’s easy fix is to recommend we always spell out acronyms the first time we use them, unless we’re absolutely sure readers will know what they mean. (And maybe even then.)

The second obvious manifestation is our overuse of technical terms the reader may or may not have encountered before. A simple fix is to add a few words of exposition the first time you use a term, as in “Arabidopsis, a flowering mustard plant.” Don’t assume the reader knows all of your jargon.

In addition, the use of examples is so powerful that we might call them a necessary component of persuasive writing. If I give you a long rhetorical argument in favor of some action or other without anchoring it in a concrete example, it’s as if I haven’t explained it at all. Something like: “Reading a source of information that contradicts your existing beliefs is a useful practice, as in the case of a Democrat spending time reading Op-Eds written by Republicans.” The example makes the point far stronger.

A deeper part of the problem is less obvious but a lot more interesting. Pinker traces a big source of messy writing to a mental process called chunking, in which we package groups of concepts into ever higher abstractions in order to save space in our brains. Here’s a great example of chunking:

As children we see one person hand a cookie to another, and we remember it as an act of giving. One person gives another one a cookie in exchange for a banana; we chunk the two acts of giving together and think of the sequence as trading. Person 1 trades a banana to Person 2 for a shiny piece of metal, because he knows he can trade it to Person 3 for a cookie; we think of it as selling. Lots of people buying and selling make up a market. Activity aggregated over many markets gets chunked into the economy. The economy can now be thought of as an entity which responds to action by central banks; we call that monetary policy. One kind of monetary policy, which involves the central bank buying private assets, is chunked as quantitative easing.

As we read and learn, we master a vast number of these abstractions, and each becomes a mental unit which we can bring to mind in an instant and share with others by uttering its name.

Chunking is an amazing and useful component of higher intelligence, but it gets us in trouble when we write because we assume our readers’ chunks are just like our own. They’re not.

A second issue is something he terms functional fixity. This compounds the problem induced by chunking:

Sometimes wording is maddeningly opaque without being composed of technical terminology from a private clique. Even among cognitive scientists, a “poststimulus event” is not a standard way to refer to a tap on the arm. A financial customer might be reasonably familiar with the world of investments and still have to puzzle over what a company brochure means by “capital charges and rights.” A computer-savvy user trying to maintain his Web site might be mystified by instructions on the maintenance page which refer to “nodes,” “content type” and “attachments.” And heaven help the sleepy traveler trying to set the alarm clock in his hotel room who must interpret “alarm function” and “second display mode.”

Why do writers invent such confusing terminology? I believe the answer lies in another way in which expertise can make our thoughts more idiosyncratic and thus harder to share: as we become familiar with something, we think about it more in terms of the use we put it to and less in terms of what it looks like and what it is made of. This transition, another staple of the cognitive psychology curriculum, is called functional fixity (sometimes functional fixedness).

The opposite of functional fixity would be familiar to anyone who has bought their dog or cat a toy only to be puzzled to see them playing with the packaging it came in. The animal hasn’t fixated on the function of the objects; to him, an object is just an object. The toy and the packaging are not categorized as “toy” and “thing the toy came in” the way they are for us. In this case, we have functional fixity and they do not.

And so Pinker continues:

Now, if you combine functional fixity with chunking, and stir in the curse that hides each one from our awareness, you get an explanation of why specialists use so much idiosyncratic terminology, together with abstractions, metaconcepts, and zombie nouns. They are not trying to bamboozle us, that’s just the way they think.

[…]

In a similar way, writers stop thinking — and thus stop writing — about tangible objects and instead refer to them by the role those objects play in their daily travails. Recall the example from chapter 2 in which a psychologist showed people sentences, followed by the label TRUE or FALSE. He explained what he did as “the subsequent presentation of an assessment word,” referring to the [true/false] label as an “assessment word” because that’s why he put it there — so that the participants in the experiment could assess whether it applied to the preceding sentence. Unfortunately, he left it up to us to figure out what an “assessment word” is — while saving no characters, and being less rather than more scientifically precise.

In the same way, a tap on the wrist became a “stimulus” and a [subsequent] tap on the elbow became a “post-stimulus event,” because the writer cared about the fact that one event came after the other and no longer cared about the fact that the events were taps on the arm.

As we get deeper into our expertise, we replace concrete, useful, everyday imagery with abstract, technical fluff that brings nothing to the mind’s eye of a lay reader. We use metaconcepts like levels, issues, contexts, frameworks, and perspectives instead of describing the actual thing in plain language. (Thus does a book become a “tangible thinking framework.”)

Solutions

How do we solve the problem, then? Pinker partially defuses the obvious solution — remembering the reader over your shoulder while you write — because he feels it doesn’t always work. Even when we’re made aware that we need to simplify and clarify for our audience, we find it hard to regress our minds to a time when our professional knowledge was more primitive.

Pinker’s prescription has a few parts:

  1. Get rid of abstractions: use concrete nouns and refer to concrete things. Who did what to whom? Read over your sentences, look for nouns that refer to meta-abstractions, and ask yourself whether there’s a way to put a tangible, everyday object or concept in their place. “The phrase ‘on the aspirational level’ adds nothing to ‘aspire,’ nor is a ‘prejudice reduction model’ any more sophisticated than ‘reducing prejudice.’”
  2. When in doubt, assume the reader knows a fair bit less than you about your topic. Clarity is not condescension. You don’t need to prove how smart you are — the reader won’t be impressed. “The key is to assume that your readers are as intelligent and sophisticated as you are, but that they happen not to know something you know.” 
  3. Get someone intelligent who is part of your intended audience to read over your work and see if they understand it. You shouldn’t take every last suggestion, but do take it seriously when they tell you certain sections are muddy or confusing. “The form in which thoughts occur to a writer is rarely the same as the form in which they can be absorbed by the reader.”
  4. Put your first draft down for enough time that, when you come back to it, you no longer feel deep familiarity with it. In this way, you become your intended audience. Your own fresh eyes will see the text in a new way. Don’t forget to read aloud, even if just under your breath.

Still interested? Check out Pinker’s The Sense of Style for a lot more on good writing, and check out his thoughts on what a broad education should entail.

Steven Pinker on What a Broad Education Should Entail

Harvard’s great psychologist and linguist Steven Pinker is one of my favorites, even though I’m just starting to get into his work.

What makes him great is not just his rational mind, but his multidisciplinary approach. He pulls from many fields to make his (generally very good) arguments. And he’s a rigorous scientist in his own field, even before we get to his ability to synthesize.

I first encountered Pinker in reading Poor Charlie’s Almanack: Charlie Munger gives him the edge over Noam Chomsky and others in the debate over whether the capacity for language has been “built into” our DNA through natural selection. Pinker wrote the bestseller The Language Instinct, in which he argued that the capacity for complex language is innate. We develop it, of course, throughout our lives, but it’s in our genes from the beginning (an idea that has since been criticized).

Pinker went on to write books with modest titles like How the Mind Works, The Blank Slate: The Modern Denial of Human Nature, and The Better Angels of Our Nature: Why Violence Has Declined. The latter is a controversial one: Bill Gates loves it, Nassim Taleb hates it. You’ll have to make up your own mind.

***

The reason for writing about Pinker is that, while re-reading William Deresiewicz’s brilliant speech, Solitude and Leadership, I noticed that he had an extremely popular piece about not sending your kids to Ivy League schools. It’s an interesting argument, though I’m not sure I agree with all of it.

A little Googling told me that Pinker, himself a professor at an Ivy League school, responded with an even better piece on why Deresiewicz was imprecise in his criticisms and anecdotes.

I was fascinated most by Pinker’s discussion of what an elite education should entail. This tells you a lot about his mind:

This leads to Deresiewicz’s second goal, “building a self,” which he explicates as follows: “it is only through the act of establishing communication between the mind and the heart, the mind and experience, that you become an individual, a unique being—a soul.” Perhaps I am emblematic of everything that is wrong with elite American education, but I have no idea how to get my students to build a self or become a soul. It isn’t taught in graduate school, and in the hundreds of faculty appointments and promotions I have participated in, we’ve never evaluated a candidate on how well he or she could accomplish it. I submit that if “building a self” is the goal of a university education, you’re going to be reading anguished articles about how the universities are failing at it for a long, long time.

I think we can be more specific. It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They should understand the principles behind democratic governance and the rule of law. They should know how to appreciate works of fiction and art as sources of aesthetic pleasure and as impetuses to reflect on the human condition.

On top of this knowledge, a liberal education should make certain habits of rationality second nature. Educated people should be able to express complex ideas in clear writing and speech. They should appreciate that objective knowledge is a precious commodity, and know how to distinguish vetted fact from superstition, rumor, and unexamined conventional wisdom. They should know how to reason logically and statistically, avoiding the fallacies and biases to which the untutored human mind is vulnerable. They should think causally rather than magically, and know what it takes to distinguish causation from correlation and coincidence. They should be acutely aware of human fallibility, most notably their own, and appreciate that people who disagree with them are not stupid or evil. Accordingly, they should appreciate the value of trying to change minds by persuasion rather than intimidation or demagoguery.

I believe (and believe I can persuade you) that the more deeply a society cultivates this knowledge and mindset, the more it will flourish. The conviction that they are teachable gets me out of bed in the morning. Laying the foundations in just four years is a formidable challenge. If on top of all this, students want to build a self, they can do it on their own time.

If this seems familiar to some of you, that’s because it very closely parallels thoughts by Charlie Munger, who has argued many times for something similar in his demand for multidisciplinary worldly wisdom. We must learn the big ideas from the big disciplines. Notice the buckets Pinker talks about: 13 billion years of organic and inorganic history, 10,000 years of human culture, hundreds of years of modern civilization. These are the most reliable forms of wisdom.

So if the education system won’t do it for you, the job must be done anyway. Pinker and Munger have laid out the kinds of things you want to go about learning. Don’t let the education system keep you from having a real education. Learn how to think. Figure out how to spend more time reading. When you do, focus on the most basic and essential wisdom — including the lessons from history.

Article Summary

  • If the goal of university education is to build the self, we will be disappointed.
  • Rather, the goal of university education should be to: (1) give people an idea of our prehistory; (2) teach the basic laws governing the physical and living world; (3) lay out the timeline of human history; (4) expose students to a diversity of cultures; (5) cover the formative events in human history; (6) explain the principles behind democracy and the rule of law; (7) teach people to appreciate the great works of fiction and art as sources of aesthetic pleasure and as reflections on the human condition; (8) make the habits of rationality second nature; (9) train people to express complex ideas in clear writing and speech; (10) instill an appreciation of objective knowledge; (11) teach logical and statistical reasoning; (12) encourage causal rather than magical thinking, including the difference between causation, correlation, and coincidence; and (13) foster an awareness of human fallibility, most notably our own.
  • You can’t blame the education system if you don’t get this education; it’s up to you.

 

A Discussion on the Work of Daniel Kahneman

Edge.org asked the likes of Christopher Chabris, Nicholas Epley, Jason Zweig, William Poundstone, Cass Sunstein, Phil Rosenzweig, Richard Thaler & Sendhil Mullainathan, Nassim Nicholas Taleb, Steven Pinker, and Rory Sutherland among others: “How has Kahneman’s work influenced your own? What step did it make possible?”

Kahneman’s work is summarized in the international best-seller Thinking, Fast and Slow.

Here are some select excerpts that I found interesting.

Christopher Chabris (author of The Invisible Gorilla)

There’s an overarching lesson I have learned from the work of Danny Kahneman, Amos Tversky, and their colleagues who collectively pioneered the modern study of judgment and decision-making: Don’t trust your intuition.

Jennifer Jacquet

After what I see as years of hard work, experiments of admirable design, lucid writing, and quiet leadership, Kahneman, a man who spent the majority of his career in departments of psychology, earned the highest prize in economics. This was a reminder that some of the best insights into economic behavior could be (and had been) gleaned outside of the discipline.

Jason Zweig (author of Your Money and Your Brain)

… nothing amazed me more about Danny than his ability to detonate what we had just done.

Anyone who has ever collaborated with him tells a version of this story: You go to sleep feeling that Danny and you had done important and incontestably good work that day. You wake up at a normal human hour, grab breakfast, and open your email. To your consternation, you see a string of emails from Danny, beginning around 2:30 a.m. The subject lines commence in worry, turn darker, and end around 5 a.m. expressing complete doubt about the previous day’s work.

You send an email asking when he can talk; you assume Danny must be asleep after staying up all night trashing the chapter. Your cellphone rings a few seconds later. “I think I figured out the problem,” says Danny, sounding remarkably chipper. “What do you think of this approach instead?”

The next thing you know, he sends a version so utterly transformed that it is unrecognizable: It begins differently, it ends differently, it incorporates anecdotes and evidence you never would have thought of, it draws on research that you’ve never heard of. If the earlier version was close to gold, this one is hewn out of something like diamond: The raw materials have all changed, but the same ideas are somehow illuminated with a sharper shift of brilliance.

The first time this happened, I was thunderstruck. How did he do that? How could anybody do that? When I asked Danny how he could start again as if we had never written an earlier draft, he said the words I’ve never forgotten: “I have no sunk costs.”

William Poundstone (author of Are You Smart Enough To Work At Google?)

As a writer of nonfiction I’m often in the position of trying to connect the dots—to draw grand conclusions from small samples. Do three events make a trend? Do three quoted sources justify a conclusion? Both are maxims of journalism. I try to keep in mind Kahneman and Tversky’s Law of Small Numbers. It warns that small samples aren’t nearly so informative, in our uncertain world, as intuition counsels.

Cass R. Sunstein (Author, Why Nudge?)

These ideas are hardly Kahneman’s most well-known, but they are full of implications, and we have only started to understand them.

1. The outrage heuristic. People’s judgments about punishment are a product of outrage, which operates as a shorthand for more complex inquiries that judges and lawyers often think relevant. When people decide about appropriate punishment, they tend to ask a simple question: How outrageous was the underlying conduct? It follows that people are intuitive retributivists, and also that utilitarian thinking will often seem uncongenial and even outrageous.

2. Scaling without a modulus. Remarkably, it turns out that people often agree on how outrageous certain misconduct is (on a scale of 1 to 8), but also remarkably, their monetary judgments are all over the map. The reason is that people do not have a good sense of how to translate their judgments of outrage onto the monetary scale. As Kahneman shows, some work in psychophysics explains the problem: People are asked to “scale without a modulus,” and that is an exceedingly challenging task. The result is uncertainty and unpredictability. These claims have implications for numerous questions in law and policy, including the award of damages for pain and suffering, administrative penalties, and criminal sentences.

3. Rhetorical asymmetry. In our work on jury awards, we found that deliberating juries typically produce monetary awards against corporate defendants that are higher, and indeed much higher, than the median award of the individual jurors before deliberation began. Kahneman’s hypothesis is that in at least a certain category of cases, those who argue for higher awards have a rhetorical advantage over those who argue for lower awards, leading to a rhetorical asymmetry. The basic idea is that in light of social norms, one side, in certain debates, has an inherent advantage – and group judgments will shift accordingly. A similar rhetorical asymmetry can be found in groups of many kinds, in both private and public sectors, and it helps to explain why groups move.

4. Predictably incoherent judgments. We found that when people make moral or legal judgments in isolation, they produce a pattern of outcomes that they would themselves reject, if only they could see that pattern as a whole. A major reason is that human thinking is category-bound. When people see a case in isolation, they spontaneously compare it to other cases that are mainly drawn from the same category of harms. When people are required to compare cases that involve different kinds of harms, judgments that appear sensible when the problems are considered separately often appear incoherent and arbitrary in the broader context. In my view, Kahneman’s idea of predictable incoherence has yet to be adequately appreciated; it bears both on fiscal policy and on regulation.
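
Sunstein’s “scaling without a modulus” point lends itself to a quick numerical illustration. The sketch below is my own toy simulation, not anything from the Edge discussion; the number of respondents, the outrage distribution, and the range of moduli are all invented for the demo. It shows how people can agree closely on a bounded outrage scale while their dollar awards, each produced by a private dollars-per-point conversion factor, land all over the map.

```python
import random
import statistics

random.seed(0)
N = 500  # hypothetical respondents judging the same misconduct

# Everyone roughly agrees on outrage, rated on a bounded 1-to-8 scale.
outrage = [min(8, max(1, round(random.gauss(6, 0.7)))) for _ in range(N)]

# But each person converts outrage to dollars with a private modulus,
# here spread from $1,000 to $1,000,000 per outrage point.
moduli = [10 ** random.uniform(3, 6) for _ in range(N)]
awards = [m * o for m, o in zip(moduli, outrage)]

# Relative spread (stdev / mean): small for outrage, enormous for dollars.
print(statistics.stdev(outrage) / statistics.mean(outrage))  # roughly 0.1
print(statistics.stdev(awards) / statistics.mean(awards))    # well above 1
```

The tight agreement on the rating scale alongside the chaos in the monetary figures reproduces the pattern Sunstein describes in jury damages.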

Phil Rosenzweig

For years, there were (as the old saying has it) two kinds of people: those relatively few of us who were aware of the work of Danny Kahneman and Amos Tversky, and the much more numerous who were not. Happily, the balance is now shifting, and more of the general public has been able to hear directly a voice that is in equal measures wise and modest.

Sendhil Mullainathan (Author of Scarcity: Why Having Too Little Means So Much)

… Kahneman and Tversky’s early work opened this door exactly because it was not what most people think it was. Many think of this work as an attack on rationality (often defined in some narrow technical sense). That misconception still exists among many, and it misses the entire point of their exercise. Attacks on rationality had been around well before Kahneman and Tversky—many people recognized that the simplifying assumptions of economics were grossly over-simplifying. Of course humans do not have infinite cognitive abilities. We are also not as strong as gorillas, as fast as cheetahs, and cannot swim like sea lions. But we do not therefore say that there is something wrong with humans. That we have limited cognitive abilities is both true and no more helpful to doing good social science than to acknowledge our weakness as swimmers. Pointing it out did not open any new doors.

Kahneman and Tversky’s work did not just attack rationality, it offered a constructive alternative: a better description of how humans think. People, they argued, often use simple rules of thumb to make judgments, which incidentally is a pretty smart thing to do. But this is not the insight that left us one step from doing behavioral economics. The breakthrough idea was that these rules of thumb could be catalogued. And once understood they can be used to predict where people will make systematic errors. Those two words are what made behavioral economics possible.

Nassim Taleb (Author of Antifragile)

Here is an insight Danny K. triggered and changed the course of my work. I figured out a nontrivial problem in randomness and its underestimation a decade ago while reading the following sentence in a paper by Kahneman and Miller of 1986:

A spectator at a weight lifting event, for example, will find it easier to imagine the same athlete lifting a different weight than to keep the achievement constant and vary the athlete’s physique.

This idea of varying one side, not the other also applies to mental simulations of future (random) events, when people engage in projections of different counterfactuals. Authors and managers have a tendency to take one variable for fixed, sort-of a numeraire, and perturbate the other, as a default in mental simulations. One side is going to be random, not the other.

It hit me that the mathematical consequence is vastly more severe than it appears. Kahneman and colleagues focused on the bias that variable of choice is not random. But the paper set off in my mind the following realization: now what if we were to go one step beyond and perturbate both? The response would be nonlinear. I had never considered the effect of such nonlinearity earlier nor seen it explicitly made in the literature on risk and counterfactuals. And you never encounter one single random variable in real life; there are many things moving together.

Increasing the number of random variables compounds the number of counterfactuals and causes more extremes—particularly in fat-tailed environments (i.e., Extremistan): imagine perturbating by producing a lot of scenarios and, in one of the scenarios, increasing the weights of the barbell and decreasing the bodyweight of the weightlifter. This compounding would produce an extreme event of sorts. Extreme, or tail events (Black Swans) are therefore more likely to be produced when both variables are random, that is real life. Simple.

Now, in the real world we never face one variable without something else with it. In academic experiments, we do. This sets the serious difference between laboratory (or the casino’s “ludic” setup), and the difference between academia and real life. And such difference is, sort of, tractable.

… Say you are the manager of a fertilizer plant. You try to issue various projections of the sales of your product—like the weights in the weightlifter’s story. But you also need to keep in mind that there is a second variable to perturbate: what happens to the competition—you do not want them to be lucky, invent better products, or cheaper technologies. So not only you need to predict your fate (with errors) but also that of the competition (also with errors). And the variance from these errors add arithmetically when one focuses on differences.
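
Taleb’s point about perturbing one variable versus both is easy to check numerically. The sketch below is a minimal illustration of my own, assuming Gaussian forecast errors and invented numbers, rather than anything from his piece. Because the variances of independent errors add, the spread of the difference widens when both sides are random, and tail events become several times more likely.

```python
import random
import statistics

random.seed(42)
N = 100_000

# Your sales forecast carries an error; so does your forecast of the rival's.
yours = [100 + random.gauss(0, 10) for _ in range(N)]
rival_fixed = 100.0                                    # scenario A: rival held fixed
rival = [100 + random.gauss(0, 10) for _ in range(N)]  # scenario B: rival also random

diff_one = [y - rival_fixed for y in yours]
diff_both = [y - r for y, r in zip(yours, rival)]

# Var(X - Y) = Var(X) + Var(Y) for independent errors: about 100 vs. 200 here.
print(statistics.variance(diff_one), statistics.variance(diff_both))

# A shortfall of 25 is a 2.5-sigma event in scenario A, only ~1.8 sigma in B.
print(sum(d < -25 for d in diff_one) / N)   # about 0.006
print(sum(d < -25 for d in diff_both) / N)  # about 0.04
```

Swap the Gaussians for fat-tailed errors and the gap between the two scenarios grows further, which is the Extremistan effect Taleb describes.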

Rory Sutherland

When I met Danny in London in 2009 he diffidently said that the only hope he had for his work was that “it might lead to a better kind of gossip”—where people discuss each other’s motivations and behaviour in slightly more intelligent terms. To someone from an industry where a new flavour-variant of toothpaste is presented as being an earth-changing event, this seemed an incredibly modest aspiration for such important work.

However, if this was his aim, he has surely succeeded. When I meet people, I now use what I call “the Kahneman heuristic”. You simply ask people “Have you read Danny Kahneman’s book?” If the answer is yes, you know (p>0.95) that the conversation will be more interesting, wide-ranging and open-minded than otherwise.

And it then occurred to me that his aim—for better conversations—was perhaps not modest at all. Multiplied a millionfold it may be very important indeed. In the social sciences, I think it is fair to say, the good ideas are not always influential and the influential ideas are not always good. Kahneman’s work is now both good and influential.

The False Allure of Group Selection

From Steven Pinker’s edge.org article The False Allure of Group Selection.

Pinker argues that the more carefully you think about group selection, the less sense it makes, and the more poorly it fits the facts of human psychology and history.

Human Psychology and Bees?

So for the time being we can ask, is human psychology really similar to the psychology of bees? When a bee suicidally stings an invader, presumably she does so as a primary motive, as natural as feeding on nectar or seeking a comfortable temperature. But do humans instinctively volunteer to blow themselves up or advance into machine-gun fire, as they would if they had been selected with group-beneficial adaptations? My reading of the study of cooperation by psychologists and anthropologists, and of the study of group competition by historians and political scientists, suggests that in fact humans are nothing like bees.

The huge literature on the evolution of cooperation in humans has done quite well by applying the two gene-level explanations for altruism from evolutionary biology, nepotism and reciprocity, each with a few twists entailed by the complexity of human cognition.

Nepotistic altruism in humans consists of feelings of warmth, solidarity, and tolerance toward those who are likely to be one’s kin. It evolved because any genes that encouraged such feelings toward genetic relatives would be benefiting copies of themselves inside those relatives. (This does not, contrary to a common understanding, mean that people love their relatives because of an unconscious desire to perpetuate their genes.) A vast amount of human altruism can be explained in this way. Compared to the way people treat nonrelatives, they are far more likely to feed their relatives, nurture them, do them favors, live near them, take risks to protect them, avoid hurting them, back away from fights with them, donate organs to them, and leave them inheritances.[5]

The cognitive twist is that the recognition of kin among humans depends on environmental cues that other humans can manipulate.[6] Thus people are also altruistic toward their adoptive relatives, and toward a variety of fictive kin such as brothers in arms, fraternities and sororities, occupational and religious brotherhoods, crime families, fatherlands, and mother countries. These faux-families may be created by metaphors, simulacra of family experiences, myths of common descent or common flesh, and other illusions of kinship. None of this wasteful ritualizing and mythologizing would be necessary if “the group” were an elementary cognitive intuition which triggered instinctive loyalty. Instead that loyalty is instinctively triggered by those with whom we are likely to share genes, and extended to others through various manipulations.

The other classic form of altruism is reciprocity: initiating and maintaining relationships in which two agents trade favors, each benefiting the other as long as each protects himself from being exploited. Once again, a vast amount of human cooperation is elegantly explained by this theory.[7] People are “nice,” both in the everyday sense and the technical sense from game theory, in that they willingly confer a large benefit to a stranger at a small cost to themselves, because that has some probability of initiating a mutually beneficial long-term relationship. (It’s a common misunderstanding that reciprocal altruists never help anyone unless they are soliciting or returning a favor; the theory in fact predicts that they will sympathize with the needy.) People recognize other individuals and remember how they have treated and been treated by them. They feel gratitude to those who have helped them, anger to those who have exploited them, and contrition to those whom they have exploited if they depend on them for future cooperation.

One cognitive twist on this formula is that humans are language-using creatures who need not discriminate reciprocators from exploiters only by direct personal experience, but can also ask around and find out their reputation for reciprocating with or exploiting others. This in turn creates incentives to establish and exaggerate one’s reputation (a feature of human psychology that has been extensively documented by social psychologists), and to attempt to see through such exaggerations in others.[8] And one way to credibly establish one’s reputation as an altruist in the probing eyes of skeptics is to be an altruist, that is, to commit oneself to altruism (and, indirectly, its potential returns in the long run, at the expense of personal sacrifices in the short run).[9] A third twist is that reciprocity, like nepotism, is driven not by infallible knowledge but by probabilistic cues. This means that people may extend favors to other people with whom they will never in fact interact again, as long as the situation is representative of ones in which they may interact with them again.[10] Because of these twists, it’s a fallacy to think that the theory of reciprocal altruism implies that generosity is a sham, and that people are nice to one another only when each one cynically calculates what’s in it for him.

Group selection, in contrast, fails to predict that human altruism should be driven by moralistic emotions and reputation management, since these may benefit individuals who inflate their reputations relative to their actual contributions and thus subtract from the welfare of the group. Nor is there any reason to believe that ants, bees, or termites have moralistic emotions such as sympathy, anger, and gratitude, or a motive to monitor the reputations of other bees or manage their own reputations. Group welfare would seem to work according to the rule “From each according to his ability, to each according to his need.” Ironically, Wilson himself, before he came out as a group selectionist, rejected the idea that human altruism could be explained by going to the ants, and delivered this verdict on the Marxist maxim: “Wonderful theory; wrong species.” Haidt, too, until recently was content to explain the moral emotions with standard theories of nepotistic and reciprocal altruism.

Punishment

People punish those that are most likely to exploit them, choose to interact with partners who are least likely to free-ride, and cooperate and punish more, and free-ride less, when their reputations are on the line.

Tribal Warfare

In tribal warfare among non-state societies, men do not regularly take on high lethal risks for the good of the group. Their pitched battles are noisy spectacles with few casualties, while the real combat is done in sneaky raids and ambushes in which the attackers assume the minimum risks to themselves.[14] When attacks do involve lethal risks, men are apt to desert, stay in the rear, and find excuses to avoid fighting, unless they are mercilessly shamed or physically punished for such cowardice.

Early Empires

What about early states? States and empires are the epitome of large-scale coordinated behavior and are often touted as examples of naturally selected groups. Yet the first complex states depended not on spontaneous cooperation but on brutal coercion. They regularly engaged in slavery, human sacrifice, sadistic punishments for victimless crimes, despotic leadership in which kings and emperors could kill with impunity, and the accumulation of large harems, with the mathematical necessity that large numbers of men were deprived of wives and families.

Nor has competition among modern states been an impetus for altruistic cooperation. Until the Military Revolution of the 16th century, European states tended to fill their armies with marauding thugs, pardoned criminals, and paid mercenaries, while Islamic states often had military slave castes.[17] The historically recent phenomenon of standing national armies was made possible by the ability of increasingly bureaucratized governments to impose conscription, indoctrination, and brutal discipline on their powerless young men. Even in historical instances in which men enthusiastically volunteered for military service (as they did in World War I), they were usually victims of positive illusions which led them to expect a quick victory and a low risk of dying in combat.[18] Once the illusion of quick victory was shattered, the soldiers were ordered into battle by callous commanders and goaded on by “file closers” (soldiers ordered to shoot any comrade who failed to advance) and by the threat of execution for desertion, carried out by the thousands. In no way did they act like soldier ants, willingly marching off to doom for the benefit of the group.

To be sure, the annals of war contain tales of true heroism—the proverbial soldier falling on the live grenade to save his brothers in arms. But note the metaphor. Studies of the mindset of soldierly duty show that the psychology is one of fictive kinship and reciprocal obligation within a small coalition of individual men, far more than loyalty to the superordinate group they are nominally fighting for. The writer William Manchester, reminiscing about his service as a Marine in World War II, wrote of his platoonmates, “Those men on the line were my family, my home. … They had never let me down, and I couldn’t do it to them. . . . Men, I now knew, do not fight for flag or country, for the Marine Corps or glory of any other abstraction. They fight for one another.”

What about the ultimate in individual sacrifice, suicide attacks? Military history would have unfolded very differently if this was a readily available tactic, and studies of contemporary suicide terrorists have shown that special circumstances have to be engineered to entice men into it. Scott Atran, Larry Sugiyama, Valerie Hudson, Jessica Stern, and Bradley Thayer have documented that suicide terrorists are generally recruited from the ranks of men with poor reproductive prospects, and they are attracted and egged on by some combination of peer pressure, kinship illusions, material and reputational incentives to blood relatives, and indoctrination into the theory of eternal rewards in an afterlife (the proverbial seventy-two virgins).[19] These manipulations are necessary to overcome a strong inclination not to commit suicide for the benefit of the group.

The historical importance of compensation, coercion, and indoctrination in group-against-group competition should not come as a surprise, because the very idea that group combat selects for individual altruism deserves a closer look. Wilson’s dictum that groups of altruistic individuals beat groups of selfish individuals is true only if one classifies slaves, serfs, conscripts, and mercenaries as “altruistic.” It’s more accurate to say that groups of individuals that are organized beat groups of selfish individuals. And effective organization for group conflict is more likely to consist of more powerful individuals incentivizing and manipulating the rest of their groups than of spontaneous individual self-sacrifice.

The Argument

Now, no one “owns” the concept of natural selection, nor can anyone police the use of the term. But its explanatory power, it seems to me, is so distinctive and important that it should not be diluted by metaphorical, poetic, fuzzy, or allusive extensions that only serve to obscure how profound the genuine version of the mechanism really is.

Still curious? Read E.O. Wilson’s NYTimes article supporting multilevel selection. Also, check out some comments by Richard Dawkins and his take.

“stories equip us with a mental file of dilemmas we might one day face”

From Jonathan Gottschall’s The Storytelling Animal:

In his groundbreaking book How the Mind Works, Pinker argues that stories equip us with a mental file of dilemmas we might one day face, along with workable solutions. In the way that serious chess players memorize optimal responses to a wide variety of attacks and defenses, we equip ourselves for real life by absorbing fictional game plans.

But…

this model has flaws. As some critics have pointed out, fiction can make a terrible guide for real life. What if you actually tried to apply fictional solutions to your problems? You might end up running around like the comically insane Don Quixote or the tragically deluded Emma Bovary—both of whom go astray because they confuse literary fantasy with reality.

Smart People Are Reading These Books

Ok, so you’ve seen the nine books Bill Gates is reading this summer. Gates has some pretty smart friends and they were kind enough to share what they were reading this summer too.

***

Vinod Khosla is one of the co-founders of Sun Microsystems, and founder of the firm Khosla Ventures, which focuses on venture investments in various technology sectors, most notably clean technology.

The Score Takes Care of Itself by Bill Walsh, Steve Jamison and Craig Walsh
The Lean Startup by Eric Ries
The Checklist Manifesto by Atul Gawande
The Creative Destruction of Medicine by Eric Topol
The Power of Habit by Charles Duhigg
The Viral Storm by Nathan Wolfe
Willpower by Roy Baumeister & John Tierney

Here’s a list of books recommended by Vaclav Smil, who does interdisciplinary research in the fields of energy, environmental and population change, food production and nutrition, technical innovation, risk assessment, and public policy.

First belles-lettres, memoirs, narratives and stories:
This spring I re-read Burton’s great two-volume classic of a pilgrimage to Mecca, one of the greatest and most informed travelogues ever written.
Another enjoyable re-read was Beerbohm’s playful Zuleika story.

New, and highly recommended, first-time readings have included von Rezzori (The Snows of Yesteryear and Memoirs of an Anti-Semite) and Crace, and Jean Renoir’s memories of his father.
Decades ago I was impressed by Caro’s first volume of his Lyndon Johnson biography; this year a no-less-readable latest instalment came out: The Passage of Power.

On the science/engineering side I have been reading (in preparation for my next book) many works on old and new materials:
Allwood and Cullen (Sustainable Materials with Both Eyes Open) and Berge (The Ecology of Building Materials) stand out and should be much more widely read.
I have also appreciated Eisler’s sobering history of fuel cells falling short of their repeatedly exaggerated promise.
And among the books on global economy I must recommend Nolan’s look at China and the world.

Nathan Myhrvold was Microsoft’s Chief Technology Officer and now follows a wide variety of interests.

A Universe from Nothing, Lawrence Krauss
Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World, Lisa Randall
The Quest: Energy, Security, and the Remaking of the Modern World, Daniel Yergin
The Honest Truth About Dishonesty: How We Lie to Everyone—Especially Ourselves, Dan Ariely

Here’s a list of books recommended by Arne Duncan, the United States Secretary of Education and CEO of Chicago Public Schools.

How Will You Measure Your Life? by Clayton Christensen
Creating Innovators by Tony Wagner

Here’s a list of books recommended by Steven Pinker, a Harvard College Professor and Johnstone Family Professor in the Department of Psychology at Harvard University. He conducts research on language and cognition, and his most recent book is “The Better Angels of Our Nature: Why Violence Has Declined.”

Peter Diamandis, Abundance – Diamandis is even more optimistic than I am, and this book will remind readers of the opportunities we have to stave off disease, hunger, and privation.

Henry Hitchings, The Language Wars – a stylish history of style and usage, for those of you who have ever wondered who decides what’s correct and incorrect in the English language.

Rebecca Goldstein, 36 Arguments for the Existence of God: A Work of Fiction – an exploration of the tension between faith and reason, played out in the romantic and academic fortunes of an atheist bestselling professor.

Here’s a list of books recommended by David Christian. David is a member of the Australian Academy of the Humanities who originated the “Big History” online course, which surveys the past on the largest possible scales.

Steven Pinker, The Better Angels of Our Nature: Why Violence has Declined. A famous Harvard psychologist and linguist does something historians should have done years ago: look for serious data about changing levels of violence in human societies. And his findings are stunning and in many ways unexpected. He finds that in the last two centuries, almost all forms of violence have declined drastically. Murder rates have plummeted in most parts of the world, domestic violence has declined sharply, but even the number of military casualties has declined, partly because the huge casualties of modern warfare were dwarfed by the even larger increases in total population. This is a very optimistic book about the gains of modernity.

Anders Aslund’s trilogy: How Capitalism was Built, Russia’s Capitalist Revolution: Why Market Reform Succeeded and Democracy Failed, and How Ukraine Became a Market Economy and Democracy, Peterson Institute. A fascinating, if somewhat partisan, trilogy of books on the transition from a command economy to a market economy in eastern Europe after the fall of communism. Aslund’s basic conclusion is that the transition to market economies has been pretty successful in 18 out of 21 post-Communist countries; in all these countries more than 50% of GDP now comes from the private sector, and, surprisingly, growth rates in the last decade have been highest in the former Soviet countries. (The three still in an economic time warp are Belarus, Turkmenistan and Uzbekistan.) But democratisation has been far less successful. It has largely succeeded in Eastern Europe and the Baltic republics, but most of the countries that belonged to the Soviet Union still have moderately to strongly authoritarian political systems, in which parliaments exist but have little impact on government. Corruption levels and lack of respect for the rule of law remain high in most of the former Soviet countries. He also argues strongly that ‘shock therapy’ and rapid reform were essential because slower reforms merely allowed former elites to regain power over significant parts of the economy and skim off huge ‘rents’.

Brian Cox and Jeff Forshaw, The Quantum Universe: (And Why Anything That Can Happen, Does) and Why Does E=mc2? (And Why Should We Care?). I loved both these books, but I must confess, as a non-scientist, that I didn’t understand the arguments in full. They are great for giving you a sense of the thinking of modern physicists, but occasionally, despite the authors’ very clear explanations, I had to let the argument flow over me and enjoy it rather than understand it! Still worth it. Each time I have a go at quantum physics or relativity I’m convinced I’ve understood them a bit better, but please don’t make me sit an exam on them!

Sam Kean, The Disappearing Spoon: And Other True Tales of Madness, Love, and the History of the World from the Periodic Table of the Elements. A wonderful and entertaining book on the periodic table of elements. It’s not just about the periodic table, that wonderful document that helps us see the similarities and differences between different ‘species’ of elements. It’s also about the discoverers of the elements (some wonderful tales here) and about the elements themselves. Elements that were used as poisons, from cadmium to mercury to thallium, the most deadly of all. Or silver, a wonderful disinfectant, from which the astronomer Tycho Brahe had a special nose made when his own was cut off in a duel. The title comes from a spoon made from gallium, which has such a low melting point that the spoon will disappear as you start stirring your tea.

Jan Zalasiewicz, The Earth after Us: What Legacy Will Humans Leave in the Rocks? asks what traces we will leave behind us in 100 million years. As one of the pioneers of the idea that we now live in a new era, the Anthropocene, in which humans have become the most powerful force for change in the biosphere, he believes that we will indeed leave traces behind. But they won’t be easy to decipher for alien palaeontologists in the distant future. One of the strangest might be the absence of a layer of limestone as our oceans get too acidic to allow its deposition.

Here’s a book recommended by Charles Kenny, a fellow at the Center for Global Development and the New America Foundation. He’s also the author of “Getting Better: Why Global Development is Succeeding, and How We Can Improve the World Even More.”

The book I’ve just finished is a couple of years old – so it’s out in paperback and perfect for the beach. It’s A Splendid Exchange by William Bernstein – a history of trade from pretty much the beginning to now. That makes it a global history, but one focused on some of the most colorful of the world’s explorers and some of its most interesting technology. It’s particularly interesting to see how much truly globe-spanning trade has turned from an activity designed to bring a few luxuries to the very rich into a vital part of preserving the quality of life of everyone planet-wide.

Here’s a list of books recommended by Atul Gawande, a surgeon and public health researcher who practices general and endocrine surgery at Brigham and Women’s Hospital in Boston. He’s also the author of “The Checklist Manifesto.”

My fun books for summer:
The Swerve – Stephen Greenblatt
The Green Eagle Score and The Black Ice Score – Richard Stark
Alien vs. Predator – Michael Robbins
The Kings of Cool – Don Winslow

Michael Kinsley is a columnist for Bloomberg View and for many years was the Editor of The New Republic and a columnist for the Washington Post.

There is only one novel that makes the cut for any of the other distinguished recommenders. That is Zuleika Dobson (1911), by Max Beerbohm, a parody of life at Oxford. I would only give it a B+ (although there is a very funny portrayal of an American Rhodes Scholar). Those who are willing to leave the heavy stuff to Bill and are looking for something shorter and more amusing might consider:

The Dog of the South, by Charles Portis. A first-person account of one man’s attempt to reclaim his wife by following her trail of credit-card charges. “It was cool up there and the landscape was not like the friendly earth I knew. This was the cool dry place that we hear so much about, the place where we are supposed to store things.”

Memento Mori, by Muriel Spark. A group of old people in London keep getting a mysterious phone call from someone saying, “Remember you must die.”

Scoop (or anything else) by Evelyn Waugh. This great novel of journalism has absolutely nothing to say about the profession’s current trials.

Finally, I second Bill’s recommendation of Behind the Beautiful Forevers by Katherine Boo, an unbelievably rich account of the life of the unbelievably poor people who live on a mountain of trash next to Mumbai’s glamorous new international airport.

Source: The Gates Notes, various pages.
