Tag: Evolution

Claude Shannon: The Man Who Turned Paper Into Pixels

"The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning."— Claude Shannon (1948)
“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning.”— Claude Shannon (1948)

Claude Shannon is the most important man you’ve probably never heard of. If Alan Turing is to be considered the father of modern computing, then the American mathematician Claude Shannon is the architect of the Information Age.

A video by the British filmmaker Adam Westbrook echoes the thought of Nassim Taleb that boosting the signal does not mean you remove the noise; in fact, just the opposite: you amplify it.

Any time you try to send a message from one place to another, something always gets in the way. The original signal is always distorted. Wherever there is signal, there is also noise.

So what do you do? Well, the best anyone could do back then was to boost the signal. But then all you do is boost the noise.
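A toy calculation makes the dead end concrete (a minimal sketch in Python; the signal levels and noise figures are invented for illustration). Amplifying an analog signal scales the noise by the same factor, so the signal-to-noise ratio never improves; but if the message is treated as discrete symbols, the receiver can simply decide which symbol was sent and throw the noise away.

```python
import numpy as np

rng = np.random.default_rng(0)

signal = rng.choice([-1.0, 1.0], size=10_000)    # message as +/-1 symbols
noise = rng.normal(0.0, 0.3, size=signal.size)   # additive channel noise

def snr_db(s, n):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.mean(s**2) / np.mean(n**2))

# Boosting the signal boosts the noise: amplification leaves the SNR unchanged.
gain = 100.0
print(snr_db(signal, noise))                 # ~10.5 dB
print(snr_db(gain * signal, gain * noise))   # exactly the same

# Treating the message as discrete bits lets the receiver reject the noise:
decoded = np.where(signal + noise >= 0, 1.0, -1.0)
print(np.mean(decoded != signal))            # bit error rate: close to zero
```

That second idea, deciding between discrete alternatives instead of preserving a waveform, is exactly where Shannon was headed.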

The thing is, we were thinking about information all wrong. We were obsessed with what a message meant.

A Renoir and a receipt? They’re different, right? Was there a way to think of them in the same way? Like so many breakthroughs, the answer came from an unexpected place: a brilliant mathematician with a flair for blackjack.

***

The transistor was invented in 1948 at Bell Telephone Laboratories. This remarkable achievement, however, “was only the second most significant development of that year,” writes James Gleick in his fascinating book The Information: A History, a Theory, a Flood. The most important development of 1948, and the one that still underpins modern technology, is the bit.

An invention even more profound and more fundamental came in a monograph spread across seventy-nine pages of The Bell System Technical Journal in July and October. No one bothered with a press release. It carried a title both simple and grand, “A Mathematical Theory of Communication,” and the message was hard to summarize. But it was a fulcrum around which the world began to turn. Like the transistor, this development also involved a neologism: the word bit, chosen in this case not by committee but by the lone author, a thirty-two-year-old named Claude Shannon. The bit now joined the inch, the pound, the quart, and the minute as a determinate quantity—a fundamental unit of measure.

But measuring what? “A unit for measuring information,” Shannon wrote, as though there were such a thing, measurable and quantifiable, as information.

[…]

Shannon’s theory made a bridge between information and uncertainty; between information and entropy; and between information and chaos. It led to compact discs and fax machines, computers and cyberspace, Moore’s law and all the world’s Silicon Alleys. Information processing was born, along with information storage and information retrieval. People began to name a successor to the Iron Age and the Steam Age.
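Shannon’s unit can be made concrete. For a source that emits symbols with probabilities p1, p2, …, the entropy H = −Σ pᵢ log₂(pᵢ) measures the information produced, in bits per symbol. Here is a minimal sketch (the example distributions are invented for illustration):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))    # a fair coin flip carries exactly 1 bit
print(entropy_bits([0.9, 0.1]))    # a biased coin carries less: ~0.47 bits
print(entropy_bits([0.25] * 4))    # four equally likely symbols: 2 bits
```

The more predictable the source, the fewer bits each message carries, which is why entropy is also the yardstick for compression.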

Gleick also recounts the relationship between Turing and Shannon:

In 1943 the English mathematician and code breaker Alan Turing visited Bell Labs on a cryptographic mission and met Shannon sometimes over lunch, where they traded speculation on the future of artificial thinking machines. (“Shannon wants to feed not just data to a Brain, but cultural things!” Turing exclaimed. “He wants to play music to it!”)

Commenting on the vitality of information, Gleick writes:

(Information) pervades the sciences from top to bottom, transforming every branch of knowledge. Information theory began as a bridge from mathematics to electrical engineering and from there to computing. … Now even biology has become an information science, a subject of messages, instructions, and code. Genes encapsulate information and enable procedures for reading it in and writing it out. Life spreads by networking. The body itself is an information processor. Memory resides not just in brains but in every cell. No wonder genetics bloomed along with information theory. DNA is the quintessential information molecule, the most advanced message processor at the cellular level— an alphabet and a code, 6 billion bits to form a human being. “What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life,’” declares the evolutionary theorist Richard Dawkins. “It is information, words, instructions.… If you want to understand life, don’t think about vibrant, throbbing gels and oozes, think about information technology.” The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment.

The bit is the very core of the information age.

The bit is a fundamental particle of a different sort: not just tiny but abstract— a binary digit, a flip-flop, a yes-or-no. It is insubstantial, yet as scientists finally come to understand information, they wonder whether it may be primary: more fundamental than matter itself. They suggest that the bit is the irreducible kernel and that information forms the very core of existence.

In the words of John Archibald Wheeler, the last surviving collaborator of both Einstein and Bohr, information gives rise to “every it— every particle, every field of force, even the spacetime continuum itself.”

This is another way of fathoming the paradox of the observer: that the outcome of an experiment is affected, or even determined, when it is observed. Not only is the observer observing, she is asking questions and making statements that must ultimately be expressed in discrete bits. “What we call reality,” Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer—a cosmic information-processing machine.

The greatest gift of Prometheus to humanity was not fire after all: “Numbers, too, chiefest of sciences, I invented for them, and the combining of letters, creative mother of the Muses’ arts, with which to hold all things in memory.”

Information technologies are relative to the time in which they were created and absolute in their significance. Gleick writes:

The alphabet was a founding technology of information. The telephone, the fax machine, the calculator, and, ultimately, the computer are only the latest innovations devised for saving, manipulating, and communicating knowledge. Our culture has absorbed a working vocabulary for these useful inventions. We speak of compressing data, aware that this is quite different from compressing a gas. We know about streaming information, parsing it, sorting it, matching it, and filtering it. Our furniture includes iPods and plasma displays, our skills include texting and Googling, we are endowed, we are expert, so we see information in the foreground. But it has always been there. It pervaded our ancestors’ world, too, taking forms from solid to ethereal, granite gravestones and the whispers of courtiers. The punched card, the cash register, the nineteenth-century Difference Engine, the wires of telegraphy all played their parts in weaving the spiderweb of information to which we cling. Each new information technology, in its own time, set off blooms in storage and transmission. From the printing press came new species of information organizers: dictionaries, cyclopaedias, almanacs— compendiums of words, classifiers of facts, trees of knowledge. Hardly any information technology goes obsolete. Each new one throws its predecessors into relief. Thus Thomas Hobbes, in the seventeenth century, resisted his era’s new-media hype: “The invention of printing, though ingenious, compared with the invention of letters is no great matter.” Up to a point, he was right. Every new medium transforms the nature of human thought. In the long run, history is the story of information becoming aware of itself.

The Information: A History, a Theory, a Flood is a fascinating read.


Just Babies: The Origins of Good and Evil

"Children are sensitive to inequality, then, but it seems to upset them only when they themselves are the ones getting less."
“Children are sensitive to inequality, then, but it seems to upset them only when they themselves are the ones getting less.”

Morality fascinates us. The stories we enjoy the most, whether fictional (as in novels, television shows, and movies) or real (as in journalism and historical accounts), are tales of good and evil. We want the good guys to be rewarded— and we really want to see the bad guys suffer.

So writes Paul Bloom in the first pages of Just Babies: The Origins of Good and Evil. His work proposes that “certain moral foundations are not acquired through learning. They do not come from the mother’s knee … ”

***
What is morality?

Even philosophers don’t agree on morality. In fact, a lot of people don’t believe in morality at all.

To settle on some working terminology, Bloom writes:

Arguments about terminology are boring; people can use words however they please. But what I mean by morality—what I am interested in exploring, whatever one calls it— includes a lot more than restrictions on sexual behavior. Here is a simple example (of morality):

A car full of teenagers drives slowly past an elderly woman waiting at a bus stop. One of the teenagers leans out the window and slaps the woman, knocking her down. They drive away laughing.

Unless you are a psychopath, you will feel that the teenagers did something wrong. And it is a certain type of wrong. It isn’t a social gaffe like going around with your shirt inside out or a factual mistake like thinking that the sun revolves around the earth. It isn’t a violation of an arbitrary rule, such as moving a pawn three spaces forward in a chess game. And it isn’t a mistake in taste, like believing that the Matrix sequels were as good as the original.

As a moral violation, it connects to certain emotions and desires. You might feel sympathy for the woman and anger at the teenagers; you might want to see them punished. They should feel bad about what they did; at the very least, they owe the woman an apology. If you were to suddenly remember that one of the teenagers was you, many years ago, you might feel guilt or shame.

Punching someone in the face.

Hitting someone is a very basic moral violation. Indeed, the philosopher and legal scholar John Mikhail has suggested that the act of intentionally striking someone without their permission—battery is the legal term—has a special immediate badness that all humans respond to. Here is a good candidate for a moral rule that transcends space and time: If you punch someone in the face, you’d better have a damn good reason for it.

Not all morality has to do with what is wrong. “Morality,” Bloom says, “also encompasses questions of rightness.”

***
Morality from an Evolutionary Perspective

If you think of evolution solely in terms of “survival of the fittest” or “nature red in tooth and claw,” then such universals cannot be part of our natures. Since Darwin, though, we’ve come to see that evolution is far more subtle than a Malthusian struggle for existence. We now understand how the amoral force of natural selection might have instilled within us some of the foundation for moral thought and moral action.

Actually, one aspect of morality, kindness to kin, has long been a no-brainer from an evolutionary point of view. The purest case here is a parent and a child: one doesn’t have to do sophisticated evolutionary modeling to see that the genes of parents who care for their children are more likely to spread through the population than those of parents who abandon or eat their children.

We are also capable of acting kindly and generously toward those who are not blood relatives. At first, the evolutionary origin of this might seem obvious: clearly, we thrive by working together— in hunting, gathering, child care, and so on— and our social sentiments make this coordination possible.

Adam Smith pointed this out long before Darwin: “All the members of human society stand in need of each other’s assistance, and are likewise exposed to mutual injuries. Where the necessary assistance is reciprocally afforded from love, from gratitude, from friendship, and esteem, the society flourishes and is happy.”

This creates a tragedy of the commons problem.

But there is a wrinkle here; for society to flourish in this way, individuals have to refrain from taking advantage of others. A bad actor in a community of good people is the snake in the garden; it’s what the evolutionary biologist Richard Dawkins calls “subversion from within.” Such a snake would do best of all, reaping the benefits of cooperation without paying the costs. Now, it’s true that the world as a whole would be worse off if the demonic genes proliferated, but this is the problem, not the solution— natural selection is insensitive to considerations about “the world as a whole.” We need to explain what kept demonic genes from taking over the population, leaving us with a world of psychopaths.

Darwin’s theory was that cooperative traits could prevail if societies containing individuals who worked together peacefully would tend to defeat other societies with less cooperative members— in other words, natural selection operating at the group, rather than individual, level.

Writing of a hypothetical conflict between two imaginary tribes, Darwin wrote (in The Descent of Man): “If the one tribe included … courageous, sympathetic and faithful members who were always ready to warn each other of danger, to aid and defend each other, this tribe would without doubt succeed best and conquer the other.”

“An alternative theory,” Bloom writes, “more consistent with individual-level natural selection:”

is that the good guys might punish the bad guys. That is, even without such conflict between groups, altruism could evolve if individuals were drawn to reward and interact with kind individuals and to punish—or at least shun—cheaters, thieves, thugs, free riders, and the like.

***
The Difference Between Compassion and Empathy

[T]here is a big difference between caring about a person (compassion) and putting yourself in the person’s shoes (empathy).

***
How can we best understand our moral natures?

Many would agree … that this is a question of theology, while others believe that morality is best understood through the insights of novelists, poets, and playwrights. Some prefer to approach morality from a philosophical perspective, looking not at what people think and how people act but at questions of normative ethics (roughly, how one should act) and metaethics (roughly, the nature of right and wrong).

Another lens is science.

We can explore our moral natures using the same methods that we use to study other aspects of our mental life, such as language or perception or memory. We can look at moral reasoning across societies or explore how people differ within a single society— liberals versus conservatives in the United States, for instance. We can examine unusual cases, such as cold-blooded psychopaths. We might ask whether creatures such as chimpanzees have anything that we can view as morality, and we can look toward evolutionary biology to explore how a moral sense might have evolved. Social psychologists can explore how features of the environment encourage kindness or cruelty, and neuroscientists can look at the parts of the brain that are involved in moral reasoning.

***
What are we born with?

Bloom argues that Thomas Jefferson was right when he wrote in a letter to his friend Peter Carr: “The moral sense, or conscience, is as much a part of man as his leg or arm. It is given to all human beings in a stronger or weaker degree, as force of members is given them in a greater or less degree.” This view, that we have an ingrained moral sense, was shared by Enlightenment philosophers of Jefferson’s period, including Adam Smith. While Smith is best known for An Inquiry into the Nature and Causes of the Wealth of Nations, he himself favored his first book, The Theory of Moral Sentiments. Its pages contain insight into “the relationship between imagination and empathy, the limits of compassion, our urge to punish others’ wrongdoing,” and more.

Bloom quotes Smith’s work to what he calls an “embarrassing degree.”

***
What aspects of morality are natural to us?

Our natural endowments include:

  • a moral sense— some capacity to distinguish between kind and cruel actions
  • empathy and compassion— suffering at the pain of those around us and the wish to make this pain go away
  • a rudimentary sense of fairness— a tendency to favor equal divisions of resources
  • a rudimentary sense of justice— a desire to see good actions rewarded and bad actions punished

Bloom argues that our goodness, however, is limited. This is perhaps best explained by Thomas Hobbes, who, in 1651, argued that man “in the state of nature” is wicked and self-interested.

We have a moral sense that enables us to judge others and that guides our compassion and condemnation. We are naturally kind to others, at least some of the time. But we possess ugly instincts as well, and these can metastasize into evil. The Reverend Thomas Martin wasn’t entirely wrong when he wrote in the nineteenth century about the “native depravity” of children and concluded that “we bring with us into the world a nature replete with evil propensities.”

***
In The End …

We’re born with some elements of morality; others take time to emerge because they require a capacity for reasoning. “The baby lacks a grasp of impartial moral principles—prohibitions or requirements that apply equally to everyone within a community. Such principles are at the foundation of systems of law and justice.”

There is a popular view that we are slaves of the passions …

that our moral judgments and moral actions are the product of neural mechanisms that we have no awareness of and no conscious control over. If this view of our moral natures were true, we would need to buck up and learn to live with it. But it is not true; it is refuted by everyday experience, by history, and by the science of developmental psychology.

It turns out instead that the right theory of our moral lives has two parts. It starts with what we are born with, and this is surprisingly rich: babies are moral animals. But we are more than just babies. A critical part of our morality—so much of what makes us human—emerges over the course of human history and individual development. It is the product of our compassion, our imagination, and our magnificent capacity for reason.

***

Still Curious? Just Babies: The Origins of Good and Evil goes on to explore some of the ways that Hobbes was right, among them: our indifference to strangers and our instinctive emotional responses.

Breakpoint: When Bigger is Not Better

Jeff Stibel’s book Breakpoint: Why the Web Will Implode, Search Will Be Obsolete, and Everything Else You Need to Know About Technology Is in Your Brain is an interesting read. The book is about “understanding what happens after a breakpoint. Breakpoints can’t and shouldn’t be avoided, but they can be identified.”

What is missing—what everyone is missing—is that the unit of measure for progress isn’t size, it’s time.

In any system, continuous growth is impossible. Everything reaches a breakpoint. The real question is how the system responds to this breakpoint. “A successful network has only a small collapse, out of which a stronger network emerges wherein it reaches equilibrium, oscillating around an ideal size.”

The book opens with an interesting example.

In 1944, the United States Coast Guard brought 29 reindeer to St. Matthew Island, located in the Bering Sea just off the coast of Alaska. Reindeer love eating lichen, and the island was covered with it, so the reindeer gorged, grew large, and reproduced exponentially. By 1963, there were over 6,000 reindeer on the island, most of them fatter than those living in natural reindeer habitats.

There were no human inhabitants on St. Matthew Island, but in May 1965 the United States Navy sent an airplane over the island, hoping to photograph the reindeer. There were no reindeer to be found, and the flight crew attributed this to the fact that the pilot didn’t want to fly very low because of the mountainous landscape. What they didn’t realize was that all of the reindeer, save 42 of them, had died. Instead of lichen, the ground was covered with reindeer skeletons.

The network of St. Matthew Island reindeer had collapsed: the result of a population that grew too large and consumed too much. The reindeer crossed a pivotal point, a breakpoint, when they began consuming more lichen than nature could replenish. Lacking any awareness of what was happening to them, they continued to reproduce and consume. The reindeer destroyed their environment and, with it, their ability to survive. Within a few short years, the remaining 42 reindeer were dead. Their collapse was so extreme that for these reindeer there was no recovery.
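The dynamic is easy to reproduce in a toy resource-and-consumer model (a minimal sketch; the parameter values are invented for illustration, not fitted to St. Matthew Island data). The herd grows on the accumulated stock of lichen rather than on its slow replenishment rate, so it sails past the breakpoint before the shortage is felt:

```python
# Toy reindeer/lichen model: the population grows while the stock lasts,
# but the stock replenishes slowly, so growth overshoots and collapses.
# All parameter values are illustrative.

def simulate(years=25, deer=29.0, lichen=1000.0,
             growth=0.35, eat=0.05, regrow=10.0):
    history = []
    for year in range(years):
        needed = deer * eat
        if lichen >= needed:                  # enough food: the herd grows
            lichen -= needed
            deer *= 1 + growth
        else:                                 # shortfall: the herd crashes
            deer *= (lichen / needed) * 0.2
            lichen = 0.0
        lichen += regrow                      # slow natural replenishment
        history.append((year, round(deer), round(lichen)))
    return history

for year, deer, lichen in simulate():
    print(f"year {year:2d}   deer {deer:6d}   lichen {lichen:5d}")
```

With a faster-regrowing resource the same model merely oscillates around an equilibrium, the “small collapse” Stibel says a successful network survives; starve the regrowth and you get the reindeer.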


In the wild, of course, reindeer can move if they run out of lichen, which allows lichen in the area to be replenished before they return.

Nature rarely allows the environment to be pushed so far that it collapses. Ecosystems generally keep life balanced. Plants create enough oxygen for animals to survive, and the animals, in turn, produce carbon dioxide for the plants. In biological terms, ecosystems create homeostasis.

We evolved to reproduce and consume whatever food is available.

Back when our ancestors started climbing down from the trees, this was a good thing: food was scarce so if we found some, the right thing to do was gorge. As we ate more, our brains were able to grow, becoming larger than those of any other primates. This was a very good thing. But brains consume disproportionately large amounts of energy and, as a result, can only grow so big relative to body size. After that point, increased calories are actually harmful. This presents a problem for humanity, sitting at the top of the food pyramid. How do we know when to stop eating? The answer, of course, is that we don’t. People in developed nations are growing alarmingly obese, morbidly so. Yet we continue to create better food sources, better ways to consume more calories with less bite.

Mother Nature won’t help us because this is not an evolutionary issue: most of the problems that result from eating too much happen after we reproduce, at which point we are no longer evolutionarily important. We are on our own with this problem. But that is where our big brains come in. Unlike reindeer, we have enough brainpower to understand the problem, identify the breakpoint, and prevent a collapse.

We all know that physical things have limits. But so do the things we can’t see or feel. Knowledge is an example. “Our minds can only digest so much. Sure, knowledge is a good thing. But there is a point at which even knowledge is bad.” This is information overload.

We have been conditioned to believe that bigger is better and this is true across virtually every domain. When we try to build artificial intelligence, we start by shoveling as much information into a computer as possible. Then we stare dumbfounded when the machine can’t figure out how to tie its own shoes. When we don’t get the results we want, we just add more data. Who doesn’t believe that the smartest person is the one with the biggest memory and the most degrees, that the strongest person has the largest muscles, that the most creative person has the most ideas?

Growth is great until it goes too far.

[W]e often destroy our greatest innovations by the constant pursuit of growth. An idea emerges, takes hold, crosses the chasm, hits a tipping point, and then starts a meteoric rise with seemingly limitless potential. But more often than not, it implodes, destroying itself in the process.

Growth isn’t bad. It’s just not as good as we think.

Nature has a lesson for us if we care to listen: the fittest species are typically the smallest. The tiniest insects often outlive the largest lumbering animals. Ants, bees, and cockroaches all outlived the dinosaurs and will likely outlive our race. … The deadliest creature is the mosquito, not the lion. Bigger is rarely better in the long run. What is missing—what everyone is missing—is that the unit of measure for progress isn’t size, it’s time.

Of course, “The world is a competitive place, and the best way to stomp out potential rivals is to consume all the available resources necessary for survival.”

Otherwise, the risk is that someone else will come along and use those resources to grow and eventually encroach on the ones we need to survive.

Networks rarely approach limits slowly: “… they often don’t know the carrying capacity of their environments until they’ve exceeded it. This is a characteristic of limits in general: the only way to recognize a limit is to exceed it.” This is what happened with MySpace. It grew too quickly. Pages became cluttered and confusing. There was too much information. It “grew too far beyond its breakpoint.”

There is an interesting paradox here, though: unless you want to keep social networks small, the best way to keep a site clean is to use a filter that prevents you from seeing most of the information, and that creates a filter bubble.

Stibel describes three phases of any successful network.

first, the network grows and grows and grows exponentially; second, the network hits a breakpoint, where it overshoots itself and overgrows to a point where it must decline, either slightly or substantially; finally, the network hits equilibrium and grows only in the cerebral sense, in quality rather than in quantity.

He offers some advice:

Rather than endless growth, the goal should be to grow as quickly as possible—what technologists call hypergrowth—until the breakpoint is reached. Then stop and reap the benefits of scale alongside stability.

Breakpoint goes on to predict the fall of Facebook.

Evolution is Blind but We’re Not


When people in organizations evaluate poor outcomes, the first thing they do is try to figure out what went wrong and why.

Once we have a cause, whether accurate or (often) not, we distribute this information around the organization with the hopes that the knowledge of why we made a mistake will prevent us from repeating that mistake.

We attempt to prevent the mistake from happening again.

In his masterful book, Seeing What Others Don’t: The Remarkable Ways We Gain Insights, Gary Klein writes:

“Organizations have lots of reasons to dislike errors: they can pose severe safety risks, they disrupt coordination, they lead to waste, they reduce the chance for project success, they erode the culture, and they can result in lawsuits and bad publicity. … In your job as a manager, you find yourself spending most of your time flagging and correcting errors. You are continually checking to see if workers meet their performance standards. If you find deviations, you quickly respond to get everything back on track. It’s much easier and less frustrating to manage by reducing errors than to try to boost insights. You know how to spot errors.”

We hate errors and we make every effort not to repeat them.

Here’s an idea that I’ve been toying around with recently — we can’t repeat the same error twice, in part because things are always changing.

In his wonderful book of Fragments, Heraclitus writes:

No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.

The river changes and so does the person.

Evolution is blind to failure.

Evolution doesn’t have intent. When DNA copying in a species creates a variation—say, a shorter beak or a sweeter taste—it does so without realizing these traits might have been tried before. These traits are not purposeful; evolution is blind to previous failures and cares not whether a mutation that failed eight years ago occurs again. This is not a conscious process. What failed to become an advantageous trait two generations ago may become one today. It may be that the environment changed, and where there was once a preference for a shorter beak, a longer one now offers an advantage, however slight.
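A toy simulation makes the point concrete (a minimal sketch; the traits, mutation rate, and fitness values are all invented for illustration). Mutation blindly re-proposes the “failed” long beak every generation, and that blindness is precisely what lets the population track the environment when it flips:

```python
import random

random.seed(1)

# Survival odds for each beak length depend on the environment; when the
# environment flips, a previously "failed" mutation becomes the winner.
fitness = {
    "old": {"short": 1.0, "long": 0.6},   # short beaks once had the edge
    "new": {"short": 0.6, "long": 1.0},   # ...until the environment changed
}

population = ["short"] * 100
for generation in range(40):
    env = "old" if generation < 20 else "new"
    survivors = []
    for bird in population:
        # Blind variation: mutation keeps re-proposing "long" even though
        # it has failed in every earlier generation.
        if random.random() < 0.05:
            bird = random.choice(["short", "long"])
        # Selection: survival is proportional to fitness in the current environment.
        if random.random() < fitness[env][bird]:
            survivors.append(bird)
    # Refill the population from the survivors.
    population = [random.choice(survivors) for _ in range(100)]
    if generation % 5 == 0:
        print(f"gen {generation:2d} ({env}): {population.count('long'):3d}/100 long beaks")
```

Run it and long beaks stay rare for the first twenty generations, then spread rapidly once the environment rewards them.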

By repeating errors, evolution adapts. This is why natural selection works. Artificial selection, on the other hand, makes us fragile because selection isn’t blind anymore.


So why do we fail? One of the reasons is our own ignorance.

“We may err because science has given us only a partial understanding of the world and how it works,” writes Atul Gawande in The Checklist Manifesto. “There are skyscrapers we do not yet know how to build, snowstorms we cannot predict, heart attacks we still haven’t learned how to stop.”

These things are within our grasp, but we are not quite there yet. Human knowledge grows by the day. Knowledge in this case can be positive (‘what works’) and negative (‘what doesn’t work’). For example, we can now build skyscrapers hundreds of stories tall; this knowledge didn’t exist 100 years ago. Thanks to computers and technology we can model more variables, and we’re better able to predict the weather.

(In these endeavours we’re improving quickly in terms of knowledge and technology, while the environment changes more slowly.)

The same water never crosses your foot twice. The world is always changing. What used to be a tailwind is now a headwind, and vice versa.

Excusing Ignorance

We can excuse ignorance when we have only a limited understanding, but we cannot excuse ineptitude. Failures that happen when the knowledge exists and we act contrary to it are hard to forgive. This is important in the context of organizations because we tend to forgive someone who makes a ‘mistake’ for the first time but punish the person who makes the same ‘mistake’ again. This is a form of artificial selection.

So we punish a person who, whether intentionally or not, is mimicking evolution. Yet we can never really make the ‘same mistake’ twice, because the exact same conditions never recur. We’re not the same and neither is the world. (Of course, people are only punished if the outcome is negative.)

I’m not trying to say learning from mistakes is bad, only that it is limited (and a form of artificial selection). It’s a piece of the puzzle of knowledge. But if your process for learning from mistakes doesn’t account for changing knowledge, technology, and environments, you have a blind spot. Things change.

Improving our ability to learn from mistakes involves more than simply determining what went wrong and trying to avoid that again in the future. We need a deeper understanding of the key variables that govern the situation (and their relation to the environment), the decision making process, and our knowledge at the time of the decision.

Sometimes it’s smart to attempt things without knowledge of previous mistakes and sometimes it’s not.

Coevolution and Artificial Selection

“The ancient relationship between bees and flowers is a classic example of coevolution. In a coevolutionary bargain like the one struck by the bee and the apple tree, the two parties act on each other to advance their individual interests but wind up trading favors: food for the bee, transportation for the apple genes. Consciousness needn’t enter into it on either side …”

***

In The Botany of Desire: A Plant’s-Eye View of the World, Michael Pollan tells the story of four domesticated species—the apple, the tulip, cannabis, and the potato—and the human desires that link their destinies to our own.

“Its broader subject,” he writes, “is the complex reciprocal relationship between the human and natural world.”

It’s a simple question really: Did I choose to plant these tulips or did they make me do it? Pollan concludes that, in fact, both statements are true.

Did the plant make him do it? Only in the sense that the flower “makes” the bee pay it a visit.

Evolution doesn’t depend on will or intention to work; it is, almost by definition, an unconscious, unwilled process. All it requires are beings compelled, as all plants and animals are, to make more of themselves by whatever means trial and error present. Sometimes an adaptive trait is so clever it appears purposeful: the ant that “cultivates” its own gardens of edible fungus, for instance, or the pitcher plant that “convinces” a fly it’s a piece of rotting meat. But such traits are clever only in retrospect. Design in nature is but a concatenation of accidents, culled by natural selection until the result is so beautiful or effective as to seem a miracle of purpose.

The book is as much about the human desires that connect us to plants as it is about the plants themselves.

“Our grammar,” Pollan writes, “might teach us to divide the world into active subjects and passive objects, but in a coevolutionary relationship every subject is also an object, every object a subject.”

Charles Darwin didn’t start The Origin of Species with an account of his new theory; rather, he began with a foundation he felt would be easier for people to get their heads around. The first chapter covered a special case of natural selection called artificial selection.

Artificial was used not in the sense of fake but in the sense of reflecting human will. Darwin wrote about the wealth of variation within species from which humans select the traits that will be passed down to future generations. In this sense, human desire plays the role of nature, determining what constitutes “fitness.” If people could understand that, they would understand nature’s evolution.

Pollan argues that the crisp conceptual line “that divided artificial from natural selection has blurred.”

Whereas once humankind exerted its will in the relatively small arena of artificial selection (the arena I think of, metaphorically, as a garden) and nature held sway everywhere else, today the force of our presence is felt everywhere. It has become much harder, in the past century, to tell where the garden leaves off and pure nature begins.

We are shaping things in ways that Darwin could never have imagined.

For a great many species today, “fitness” means the ability to get along in a world in which humankind has become the most powerful evolutionary force.

Artificial selection, it appears, has become at least as powerful as natural selection.

Nature’s success stories from now on are probably going to look a lot more like the apple’s than the panda’s or white leopard’s. If those last two species have a future, it will be because of human desire; strangely enough, their survival now depends on what amounts to a form of artificial selection.

The main characters of the book—the apple, the tulip, cannabis, and the potato—are four of the world’s success stories. “The dogs, cats, and horses of the plant world, these domesticated species are familiar to everyone,” Pollan writes.

Apples

In the wild a plant and its pests are continually coevolving, in a dance of resistance and conquest that can have no ultimate victor. But coevolution ceases in an orchard of grafted trees, since they are genetically identical from generation to generation. The problem very simply is that the apple trees no longer reproduce sexually, as they do when they’re grown from seed, and sex is nature’s way of creating fresh genetic combinations. At the same time the viruses, bacteria, fungi, and insects keep very much at it, reproducing sexually and continuing to evolve until eventually they hit on the precise genetic combination that allows them to overcome whatever resistance the apples may have once possessed. Suddenly total victory is in the pests’ sight — unless, that is, people come to the tree’s rescue, wielding the tools of modern chemistry.

Put another way, the domestication of the apple has gone too far, to the point where the species’ fitness for life in nature (where it still has to live, after all) has been dangerously compromised. Reduced to the handful of genetically identical clones that suit our taste and agricultural practice, the apple has lost the crucial variability — the wildness — that sexual reproduction confers.

The Tulip

The tulip’s genetic variability has in fact given nature–or, more precisely, natural selection–a great deal to play with. From among the chance mutations thrown out by a flower, nature preserves the rare ones that confer some advantage–brighter color, more perfect symmetry, whatever. For millions of years such features were selected, in effect, by the tulip’s pollinators–that is, insects–until the Turks came along and began to cast their own votes. (The Turks did not learn to make deliberate crosses till the 1600s; the novel tulips they prized were said simply to have “occurred.”) Darwin called such a process artificial, as opposed to natural, selection, but from the flower’s point of view, this is a distinction without a difference: individual plants in which a trait desired by either bees or Turks occurred wound up with more offspring. Though we self-importantly regard domestication as something people have done to plants, it is at the same time a strategy by which the plants have exploited us and our desires–even our most idiosyncratic notions of beauty–to advance their own interests. Depending on the environment in which a species finds itself, different adaptations will avail. Mutations that nature would have rejected out of hand in the wild sometimes prove to be brilliant adaptations in an environment that’s been shaped by human desire.

In the environment of the Ottoman Empire the best way for a tulip to get ahead was to have absurdly long petals drawn to a point fine as a needle. In drawings, paintings, and ceramics (the only place the Turks’ ideal of tulip beauty survives; the human environment is an unstable one), these elongated blooms look as though they’d been stretched to the limit by a glassblower. The metaphor of choice for this form of tulip petal was the dagger. … Though these … traits are not uncommon in species tulips, attenuated petals are virtually unknown in the wild, which suggests that the Ottoman ideal of tulip beauty—elegant, sharp, and masculine—was freakish and hard-won and conferred no advantage in nature.

All in all The Botany of Desire is one of the best books I’ve read on how our Apollonian desire for control and order increasingly butts up against the natural Dionysian wildness.

Daniel Dennett: How to Make Mistakes


In Intuition Pumps and Other Tools for Thinking, Daniel Dennett, one of the world’s leading philosophers, offers a trove of mind-stretching thought experiments, which he calls “imagination-extenders and focus-holders” (intuition pumps). They allow us to “think reliably and even gracefully about really hard questions.”

The first intuition pump is on mistakes.

More specifically, how to make mistakes and the keys to good mistakes.

History Rhymes

The history of philosophy is in large measure the history of very smart people making very tempting mistakes, and if you don’t know the history, you are doomed to making the same darn mistakes all over again.

Learning

Mistakes are not just opportunities for learning; they are, in an important sense, the only opportunity for learning or making something truly new. Before there can be learning, there must be learners. There are only two non-miraculous ways for learners to come into existence: they must either evolve or be designed and built by learners that evolved. Biological evolution proceeds by a grand, inexorable process of trial and error — and without the errors the trials wouldn’t accomplish anything.

Evolution is the Enabling Process of Knowledge

Evolution is one of the central themes of this book, as all my books, for the simple reason that it is the central, enabling process not only of life but also of knowledge and learning and understanding. If you attempt to make sense of the world of ideas and meanings, free will and morality, art and science and even philosophy itself without a sound and quite detailed knowledge of evolution, you have one hand tied behind your back. … For evolution, which knows nothing, the steps into novelty are blindly taken by mutations, which are random copying “errors” in DNA.

The Key to Good Mistakes

The chief trick to making good mistakes is not to hide them — especially not from yourself. Instead of turning away in denial when you make a mistake, you should become a connoisseur of your own mistakes, turning them over in your mind as if they were works of art, which in a way they are. The fundamental reaction to any mistake ought to be this: “Well, I won’t do that again!” Natural selection doesn’t actually think the thought; it just wipes out the goofers before they can reproduce; natural selection won’t do that again, at least not as often. Animals that can learn—learn not to make that noise, touch that wire, eat that food—have something with a similar selective force in their brains. (B. F. Skinner and the behaviorists understood the need for this and called it “reinforcement” learning; a response that is not reinforced suffers “extinction.”) We human beings carry matters to a much more swift and efficient level. We can actually think the thought, reflecting on what we have just done: “Well, I won’t do that again!” And when we reflect, we confront directly the problem that must be solved by any mistake-maker: what, exactly, is that? What was it about what I just did that got me into all this trouble? The trick is to take advantage of the particular details of the mess you’ve made, so that your next attempt will be informed by it and not just another blind stab in the dark.

We have all heard the forlorn refrain “Well, it seemed like a good idea at the time!” This phrase has come to stand for the rueful reflection of an idiot, a sign of stupidity, but in fact we should appreciate it as a pillar of wisdom. Any being, any agent, who can truly say, “Well, it seemed like a good idea at the time!” is standing on the threshold of brilliance. We human beings pride ourselves on our intelligence, and one of its hallmarks is that we can remember our previous thinking, and reflect on it—on how it seemed, on why it was tempting in the first place, and then about what went wrong.

[…]

So when you make a mistake, you should learn to take a deep breath, grit your teeth, and then examine your own recollections of the mistake as ruthlessly and as dispassionately as you can manage. It’s not easy. The natural human reaction to making a mistake is embarrassment and anger (we are never angrier than when we are angry at ourselves), and you have to work hard to overcome these emotional reactions. Try to acquire the weird practice of savoring your mistakes, delighting in uncovering the strange quirks that led you astray. Then, once you have sucked out all the goodness to be gained from having made them, you can cheerfully set them behind you, and go on to the next big opportunity. But that is not enough: you should actively seek out opportunities to make grand mistakes, just so you can then recover from them.

Natural Selection

Every organism on the earth dies sooner or later after one complicated life story or another. How on earth could natural selection see through the fog of all these details in order to figure out what positive factors to “reward” with offspring and what negative factors to “punish” with childless death? Can it really be that some of our ancestors’ siblings died childless because their eyelids were the wrong shape? If not, how could the process of natural selection explain why our eyelids came to have the excellent shapes they have? Part of the answer is familiar: following the old adage: “If it ain’t broke, don’t fix it,” leave almost all of your old, conservative design solutions in place and take your risks with a safety net in place. Natural selection automatically conserves whatever has worked up to now, and fearlessly explores innovations large and small; the large ones almost always lead immediately to death. A terrible waste, but nobody’s counting. Our eyelids were mostly designed by natural selection long before there were human beings or even primates or even mammals.

Card Tricks

Here is a technique that card magicians—at least the best of them—exploit with amazing results. (I don’t expect to incur the wrath of the magicians for revealing this trick to you, since this is not a particular trick but a deep general principle.) A good card magician knows many tricks depend on luck—they don’t always work, or even often work. There are some effects—they can hardly be called tricks—that might work only once in a thousand times! Here is what you do: You start by telling the audience you are going to perform a trick, and without telling them what trick you are doing, you go for the one-in-a-thousand effect. It almost never works, of course, so you glide seamlessly into a second try, for an effect that works about one time in a hundred, perhaps. When it too fails (as it almost always will) you slide into effect #3, which only works about one time in ten, so you’d better be ready with effect #4 which works half the time (let’s say), and if all else fails (and by this time, usually one of the earlier safety nets will have kept you out of this worst case), you have a failsafe effect, which won’t impress the crowd very much but at least it’s a surefire trick. In the course of a whole performance, you will be very unlucky indeed if you always have to rely on your final safety net, and whenever you achieve one of the higher-flying effects, the audience will be stupefied. “Impossible! How on earth could you have known that was my card?” Aha! You didn’t know, but you had a cute way of taking a hopeful stab in the dark that paid off. By hiding the “error” cases from view, you create a “miracle”.
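The arithmetic behind the cascade is worth making explicit (a minimal sketch using the per-effect odds from the passage above): the chance of falling all the way through to the failsafe is just the product of the individual failure probabilities.

```python
# Success probability of each effect, in the order the magician tries them.
odds = [1/1000, 1/100, 1/10, 1/2, 1.0]   # the last one is the surefire failsafe

p_reach = 1.0   # probability the performance is still "open" at this effect
for i, p in enumerate(odds, start=1):
    print(f"effect #{i}: reached {p_reach:.3f} of the time, lands {p_reach * p:.3f}")
    p_reach *= 1 - p
```

Multiplying out, the failsafe is needed only 0.999 × 0.99 × 0.9 × 0.5 ≈ 45% of the time; the other 55% of performances end on one of the flashier effects, and the audience never sees the misses.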

Evolution Works The Same Way

Evolution works the same way: all the dumb mistakes tend to be invisible, so all we see is a stupendous string of triumphs. For instance, the vast majority — way over 90 percent — of all the creatures that have ever lived died childless, but not a single one of your ancestors suffered that fate. Talk about a line of charmed lives!

One big difference between the discipline of science and the discipline of stage magic is that while magicians conceal their false starts from the audience as best they can, in science you make your mistakes in public. You show them off so that everybody can learn from them. … It is not so much that our brains are bigger or more powerful, or even that we have the knack of reflecting on our own past errors, but that we share the benefits that our individual brains have won by their individual histories of trial and error.

I am amazed at how many really smart people don’t understand that you can make big mistakes in public and emerge none the worse for it.

We all know people, perhaps ourselves included, who will go to great lengths to avoid admitting they were wrong. But Dennett argues:

Actually, people love it when somebody admits to making a mistake. All kinds of people love pointing out mistakes. Generous-spirited people appreciate your giving them the opportunity to help, and acknowledging it when they succeed in helping you; mean-spirited people enjoy showing you up. Let them! Either way we all win.

Of course, in general, people do not enjoy correcting the stupid mistakes of others. You have to have something worth correcting, something original to be right or wrong about …