Tag: Creativity

How Play Enriches Our Creative Capacity

“Play doesn’t just help us to explore what is essential. It is essential in and of itself.” — Greg McKeown

The value of play cannot be overstated. Its champions range from Einstein and Seneca to Steve Jobs and Google.

“Bob Fagan, a researcher who has spent fifteen years studying the behavior of grizzly bears, discovered bears who played the most tended to survive the longest.” Jaak Panksepp concluded something similar in Affective Neuroscience: The Foundations of Human and Animal Emotions, where he wrote, “One thing is certain, during play, animals are especially prone to behave in flexible and creative ways.”

In Essentialism: The Disciplined Pursuit of Less, Greg McKeown argues that “when we play, we are engaged in the purest expression of our humanity, the truest expression of our individuality.”

Play expands our minds in ways that allow us to explore: to germinate new ideas or see old ideas in a new light. It makes us more inquisitive, more attuned to novelty, more engaged.

Play fuels exploration in at least three ways.

First, play broadens the range of options available to us. It helps us to see possibilities we otherwise wouldn’t have seen and make connections we would otherwise not have made. It opens our minds and broadens our perspective. It helps us challenge old assumptions and makes us more receptive to untested ideas. It gives us permission to expand our own stream of consciousness and come up with new stories.

Or as Albert Einstein once said, “When I examine myself and my methods of thought, I come to the conclusion that the gift of fantasy has meant more to me than my talent for absorbing positive knowledge.”

Second, play is an antidote to stress, and this is key because stress, in addition to being an enemy of productivity, can actually shut down the creative, inquisitive, exploratory parts of our brain. You know how it feels: you’re stressed about work and suddenly everything starts going wrong. You can’t find your keys, you bump into things more easily, you forget the critical report on the kitchen table. Recent findings suggest this is because stress increases the activity in the part of the brain that monitors emotions (the amygdala), while reducing the activity in the part responsible for cognitive function (the hippocampus)—the result being, simply, that we really can’t think clearly.

Play causes stress to (temporarily) melt away.

Finally, as Edward M. Hallowell, a psychiatrist who specializes in brain science, explains, play has a positive effect on the brain. “The brain’s executive functions,” he writes in Shine: Using Brain Science to Get the Best from Your People, “include planning, prioritizing, scheduling, anticipating, delegating, deciding, analyzing— in short, most of the skills any executive must master to excel in business.” Play stimulates parts of the brain involved in logical reasoning and carefree exploration.

Hallowell continues:

Columbus was at play when it dawned on him that the world was round. Newton was at play in his mind when he saw the apple tree and suddenly conceived of the force of gravity. Watson and Crick were playing with possible shapes of the DNA molecule when they stumbled upon the double helix. Shakespeare played with iambic pentameter his whole life. Mozart barely lived a waking moment when he was not at play. Einstein’s thought experiments are brilliant examples of the mind invited to play.

Perhaps Roald Dahl said it best: “A little nonsense now and then is cherished by the wisest men.”


Giving up Your Best Loved Ideas and Starting Over

“You need all kinds of influences, including negative ones, to challenge what you believe in.”

— Bill Murray

“Any year that passes in which you don’t destroy one of your best loved ideas is a wasted year,” says Charlie Munger. If only it were that easy. It’s mentally hard to form an opinion, and even harder to give up that attachment and admit we were wrong. That’s one reason Henry Singleton opted for flexibility instead of predetermined plans.

Dani Shapiro: Still Writing

F. Scott Fitzgerald famously wrote: “The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless yet be determined to make them otherwise.”

The idea is to live in the middle of ideas, believing in them enough to take action but not so much that they become an anchor when something better comes along. More than acknowledging the uncertainty of beliefs, you need to embrace it. Keats called this ability “negative capability.” Roger Martin argues that successful thinking involves integrating several different ideas while maintaining the ability to act. It is through the exploration of these opposing ideas, or uncertainty if you will, that we come to better outcomes.

I came across this passage in Dani Shapiro’s Still Writing that speaks to the necessity of failure and uncertainty in the creative process.

When writers who are just starting out ask me when it gets easier, my answer is never. It never gets easier. I don’t want to scare them, so I rarely say more than that, but the truth is that, if anything, it gets harder. The writing life isn’t just filled with predictable uncertainties but with the awareness that we are always starting over again. That everything we ever write will be flawed. We may have written one book, or many, but all we know — if we know anything at all — is how to write the book we’re writing. All novels are failures. Perfection itself would be a failure. All we can hope is that we will fail better. That we won’t succumb to fear of the unknown. That we will not fall prey to the easy enchantments of repeating what may have worked in the past. I try to remember that the job — as well as the plight, and the unexpected joy — of the artist is to embrace uncertainty, to be sharpened and honed by it. To be birthed by it. Each time we come to the end of a piece of work, we have failed as we have leapt — spectacularly, brazenly — into the unknown.

Shapiro further highlights the parallels between writing and the broader creative life:

The writing life requires courage, patience, persistence, empathy, openness, and the ability to deal with rejection. It requires the willingness to be alone with oneself. To be gentle with oneself. To look at the world without blinders on. To observe and withstand what one sees. To be disciplined, and at the same time, take risks. To be willing to fail—not just once, but again and again, over the course of a lifetime.

[…]
The page is your mirror. What happens inside you is reflected back. You come face-to-face with your own resistance, lack of balance, self-loathing, and insatiable ego—and also with your singular vision, guts, and fortitude. No matter what you’ve achieved the day before, you begin each day at the bottom of the mountain. Isn’t this true for most of us?

Still Writing will help you discover the mindset for a creative life.

Innovation: The Attacker’s Advantage

If you believe Thomas Kuhn’s theory, outlined in The Structure of Scientific Revolutions, then change happens slowly at first and then all at once.

Innovation: The Attacker’s Advantage, an out-of-print book from 1984, takes a timeless look at this theory and applies it to innovation. It makes the innovator’s-dilemma argument long before The Innovator’s Dilemma.

The perspective of Richard Foster, the book’s author, is that there is a battle going on in the marketplace between innovators (or attackers) and defenders (who want to maintain their existing advantage).

Some companies have more good years than bad years. What’s the secret behind their success? Foster argues it’s their willingness to cannibalise “their current products and processes just as they are the most lucrative and begin the search again, over and over.”

It is about the inexorable and yet stealthy challenge of new technology and the economics of substitution which force companies to behave like the mythical phoenix, a bird that periodically crashed to earth in order to rejuvenate itself.

The book isn’t about improving process but rather changing your mindset. This is the Attacker’s Advantage.

Henry Ford understood this mindset. In My Life and Work, he wrote,

If to petrify is success, all one has to do is to humor the lazy side of the mind; but if to grow is success, then one must wake up anew every morning and keep awake all day. I saw great businesses become but the ghost of a name because someone thought they could be managed just as they were always managed, and though the management may have been most excellent in its day, its excellence consisted in its alertness to its day, and not in slavish following of its yesterdays. Life, as I see it, is not a location, but a journey. Even the man who most feels himself ‘settled’ is not settled—he is probably sagging back. Everything is in flux, and was meant to be. Life flows. We may live at the same number of the street, but it is never the same man who lives there.

[…]

It could almost be written down as a formula that when a man begins to think that he at last has found his method, he had better begin a most searching examination of himself to see whether some part of his brain has not gone to sleep.

Foster recognizes that innovation is “born from individual greatness” but exists within the context of a marketplace where the S-curve dominates and questions such as “how much change is possible, when it will occur, and how much it will cost,” are critical factors.

Companies are often blindsided by change. Everything is profitable until it isn’t. But leading companies are supposed to have an advantage. Or are “the advantages outweighed by other inherent disadvantages?” Foster argues this is the case.

The roots of this failure lie in the assumptions behind the key decisions that all companies have to make. Most of the managers of companies that enjoy transitory success assume that tomorrow will be more or less like today. That significant change is unlikely, is unpredictable, and in any case will come slowly. They have thus focused their efforts on making their operations ever more cost effective. While valuing innovation and espousing the latest theories on entrepreneurship, they still believe it is a highly personalized process that cannot be managed or planned to any significant extent. They believe that innovation is risky, more risky than defending their present business.

Some companies make the opposite assumption. They assume tomorrow does not resemble today.

They have assumed that when change comes it will come swiftly. They believe that there are certain patterns of change which are predictable and subject to analysis. They have focused more on being in the right technologies at the right time, being able to protect their positions, and having the best people rather than on becoming ever more efficient in their current lines of business. They believe that innovation is inevitable and manageable. They believe that managing innovation is the key to sustaining high levels of performance for their shareholders. They assume that the innovators, the attackers, will ultimately have the advantage, and they seek to be among those attackers, while not relinquishing the benefits of the present business which they actively defend. They know they will face problems and go through hard times, but they are prepared to weather them. They assume that as risky as innovation is, not innovating is even riskier.

These beliefs are based on a different understanding of competition.

The S-Curve


The S-curve is a graph of the relationship between the effort put into improving a product or process and the results one gets back for the investment. It’s called the S-curve because when the results are plotted, what usually appears is a sinuous line shaped like an S, but pulled to the right at the top and pulled to the left at the bottom.

Initially, as funds are put into developing a new product or process, progress is very slow. Then all hell breaks loose as the key knowledge necessary to make advances is put in place. Finally, as more dollars are put into the development of a product or process, it becomes more and more difficult and expensive to make technical progress. Ships don’t sail much faster, cash registers don’t work much better, and clothes don’t get much cleaner. And that is because of limits at the top of the S-curve.
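Foster describes the curve in words; as a quick, purely illustrative sketch (the logistic function and every number below are my own assumptions, not Foster’s), the shape and the diminishing marginal returns near the limit look like this:

```python
import numpy as np

# Illustrative only: model performance as a logistic ("S") function of cumulative effort.
# "limit" is the technology's ceiling, k sets steepness, "midpoint" is the inflection point.
def s_curve(effort, limit=100.0, k=0.15, midpoint=50.0):
    return limit / (1.0 + np.exp(-k * (effort - midpoint)))

efforts = np.arange(0, 101, 10)        # cumulative blocks of effort invested
performance = s_curve(efforts)
marginal_gain = np.diff(performance)   # payoff from each additional block of effort

for e, p in zip(efforts, performance):
    print(f"effort {int(e):3d} -> performance {p:6.2f}")
print("marginal gains per block:", np.round(marginal_gain, 2))
# Early blocks of effort buy little, mid-curve blocks buy a lot,
# and near the limit each extra block buys almost nothing.
```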

Limits are the key to understanding the S-curve in the innovation context. When we approach a limit “we must change or not progress anymore.” Management’s ability to recognize limits and change course becomes key.

If you are at the limit, no matter how hard you try you cannot make progress. As you approach limits, the cost of making progress accelerates dramatically. Therefore, knowing the limit is crucial for a company if it is to anticipate change or at least stop pouring money into something that can’t be improved. The problem for most companies is that they never know their limits. They do not systematically seek the one beacon in the night storm that will tell them just how far they can improve their products and processes.

Foster argues that if you don’t understand limits and S-curves you get blindsided by change. I think that’s too neat an argument — you can understand limits and S-curves and still get blindsided, but the odds are reduced. You can think of the S-curve as the blindsided curve or the attacker’s curve, depending on your perspective.

For the S-curve to have practical significance there must be technological change in the wind. That is, one competitor must be nearing its limits, while others, perhaps less experienced, are exploring alternative technologies with higher limits. But this is almost always the case. I call the periods of change from one group of products or processes to another, technological discontinuities. There is a break between the S-curves and a new one begins to form. Not from the same knowledge that underlays the old one but from an entirely new and different knowledge base.

I think this argument sounds a lot like the Innovator’s Dilemma, only 15 years earlier.

Technological discontinuities are arriving with increasing frequency because we’re in the early stages of the technological revolution. Eventually these developments will revert to the mean: disruptive innovation will become less frequent and incremental innovation more common. Disruptive innovation favors the attacker, whereas incremental innovation favors the incumbent — going from Zero to One will be harder.

As limits are approached incremental improvement becomes increasingly expensive.

At the same time, the possibility of new approaches often emerges—new possibilities that frequently depend on skills not well developed in leader companies. As these attacks are launched, they are often unnoticed by the leader, hidden from view by conventional economic analysis. When the youthful attacker is strong he is quite prepared for battle by virtue of success and training in market niches. The defender, lulled by the security of strong economic performance for a long time and by conventional management wisdom that encourages him to stay his course, and buoyed by faith in evolutionary change, finds it’s too late to respond. The final battle is swift and the leader loses.

This means the standard “stick to your knitting” argument becomes contextual and thus psychologically difficult. Sometimes the best strategy may be to move to something unfamiliar. I’d argue that the competitive drive for efficiency makes a lot of companies increasingly fragile. Most dangerous of all, they are blind to their fragility.

The S-curve, limits and attacker’s advantages are at the heart of these problems and they also provide the key to solving them. For example, there are people, call them limitists, who have an unusual ability to recognize limits and ways around them. They ought to be hired or promoted. There are others who can spot ways to circumvent limits by switching to new approaches. They are essential too. Imaginary products need to be designed to understand when a competitive threat is likely to become a reality. Hybrid products that seem to be messy assemblages of old and new technologies (like steam ships with sails) can sometimes be essential for competitive success. Companies can set up separate divisions to produce new technologies and products to compete with old ones. S-curves can be sketched and used to anticipate trouble.

None of this is easy. And it won’t happen unless the chief executive replaces his search for efficiency with a quest for competitiveness.

[…]

Most top executives understand, I think, that technological change is relevant to them and that it is useless and misleading to label their business as high-tech or low-tech. What they don’t have is a picture of the engines of the process by which technology is transformed into competitive advantage and how they can thus get their hands on the throttle.

“If change occurs at the time learning starts to slow,” wrote Phillip Moffitt in a 1980s Esquire article entitled The Dark Side of Excellence, “… then there is a chance to avoid the dramatic deterioration. If we call this the ‘observation point,’ when you can see the past and the future, then there is time to reconsider what one is doing.”

Understanding Limits

Limits are important because of what they imply for the future of the business. For example, we know from the S-curve that as the limits are approached it becomes increasingly expensive to carry out further development. This means that a company will have to increase its technical expenditures at a more rapid pace than in the past in order to maintain the same rate of progress of technical advance in the marketplace, or it will have to accept a declining rate of progress. The slower rate of change could make the company more vulnerable to competitive attack or presage price and profit declines. Neither option is very attractive; they both signal a tougher environment ahead as the limits are approached. Being close to the limits means that all the important opportunities to improve the business by improving the technology have been used. If the business is going to continue to grow and prosper in the future, it will have to look to functional skills other than technology—say marketing, manufacturing or purchasing. Said another way, as the limits of a technology are reached, the key factors for success in the business change. The actions and strategies that have been responsible for the successes of the past will no longer suffice for the future. Things will have to change. Discontinuity is on the way. It is the maturing of a technology, that is the approach to a limit, which opens up the possibility of competitors catching up to the recognized market leader. If the competitors better anticipate the future key factors for success, they will move ahead of the market leaders.

[…]

If one knows that the technology has little potential left, that it will be expensive to tap, and that another technology has more potential (that is, is further from its limits), then one can infer that it may be only a matter of time before a technological discontinuity erupts with its almost inevitable competitive consequence.

Thus finding the limit becomes important.

Finding the Limit

All this presumes we know the answer to the question “Limits of what?” The “what,” as Owens Corning expressed it, was the “technical factors of our product that were most important to the customer.” The trick is relating these “technical factors,” which are measurable attributes of the product or process to the factors that customers perceive as important when making their purchase decision. This is often easy enough when selling products to sophisticated industrial users because suppliers and customers alike have come to focus on these variables, for example, the specific fuel consumption of a jet engine or the purity of a chemical. But it is much tougher to understand these relationships in the consumer arena. How does one measure how clean our clothes are? Do we do it the same way at home as the scientists do in the lab? Do we really measure “cleanness,” or its “brightness” or a “fresh smell” or “bounce?” All of these are attributes of “clean” clothes which may have nothing whatsoever to do with how much dirt is in the clothes. … These are complicated questions to answer because different consumers will feel differently about these factors, creating confusion in the lab. Further, once the consumer has expressed his preference it may be difficult to measure that preference in technical terms. For example, what does “fit” mean? What are the limits of “fit”? If the attribute that consumers want cannot be expressed in technical terms, clearly its limit cannot be found.

Further complicating the seemingly simple question of “limits of what?” is the realization that the consumer’s passion for more of the attribute may be a function of the levels of the attribute itself.

For example, in the detergent battles of the 1950s, P&G and its competitors were all vying to make a product that would produce the “cleanest” clothes. It was soon discovered that in fact the clothes were about as clean as they could ever get. The dirt had been removed, but the clothes often had acquired a gray, dingy look that the consumer associated with dirt. In fact, the gray look was caused by torn and frayed fibers, but the consumer did not appreciate this apparently arcane technical detail. Rather than fight with consumers P&G decided to capitalize on their misperceptions and add “optical brighteners” to the detergent. These are chemicals that reflect light. When they were added to the detergent and were retained on the clothes, they made the clothes appear brighter and therefore cleaner in the consumer’s eyes, even though in the true sense they weren’t any cleaner.

The consumers loved it, and bought all the Tide they could get in order to get their clothes “clean,” that is optically bright.

[…]

Another complication with performance parameters is that they keep changing. Frequently this change is due to the consumer’s satisfaction with the present levels of product performance; optical brightness in our prior example. This often triggers a change in what customers are looking for. No longer will they be satisfied with optical brightness alone; now they want “bounce” or “fresh smell,” and the basis of competition changes. These changes can be due to a change in the social or economic environment as well. For example, new environmental laws (which led to biodegradable detergents), a change in the price of energy, or the emergence of a heretofore unavailable competitive product like the compact audio disc or high-definition TV. These changes in performance factors should trigger the establishment of new sets of tests and standards for the researchers and engineers involved in new product development. But often they don’t. They don’t because these changes are time-consuming and expensive to make, and they are difficult to think through. Thus it often appears easier to just not make the change. But, of course, this decision carries with it potentially significant competitive risks.

The people who should see these changing preferences, the salesmen, often do not, because they have a strong incentive to sell today’s products. So the very people the organization has put in place to stay close to the customer often fail to keep it informed of a changing landscape. And even if they do, it’s still a complicated process to get companies to act on that information.

… The people we rely on to keep us close to the customer and new developments often do not. So our structure and systems work to confirm our disposition to keep doing things the same way. As Alan Kantrow, editor at the Harvard Business Review, puts it, “Our receptor sites are carrying the same chemical codes that we carry. We are thus likely to see only what we expect and want to see.” The chief executive says, “I’ve done good things. We’re scanning our environment.” But in fact he is scanning his own mind.

Even if sales and marketing do perceive the need for change, they may not take their discovery back to their technical departments for consideration. If the technical departments do hear about these developments, they may not be able to do much about them because of the press of other projects. So all in all, changes in customer preferences get transmitted slowly, usually only after special studies are done specifically to examine changing customer preferences. All this means that answering the “limits of what” question can be tricky under the best of circumstances, and much tougher in an ongoing business.

There are limits to limits, of course. First, just because you’re approaching a limit doesn’t mean there is an effective substitute that can solve the problem better. However, “if there is an alternative, and it is economic, then the way the competitors do battle in the industry will change.” Second, it’s possible to be wrong about limits and thus draw the wrong conclusions.

A great example of this is Simon Newcomb, the celebrated astronomer, who in 1900 said “The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which men shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration to be.” Two years later he clarified, “Flight by machines heavier than air is unpractical and insignificant, if not utterly impossible.” It wasn’t even a year before the Wright brothers proved him wrong at Kitty Hawk.

Diminishing Returns

One mistake we make is to confuse time and effort.

It is not the passage of time that leads to progress, but the application of effort. If we plotted our results versus time, we could not by extrapolation draw any conclusion about the future because we would have buried in our time chart implicit assumptions about the rate of effort applied. If we were to change this rate, it would increase or decrease the time it would take for performance to improve. People frequently make the error of trying to plot technological progress versus time and then find the predictions don’t come to pass. Most of the reason for this is not the difficulty of predicting how the technology will evolve, since we have found the S-curve to be rather stable, but rather predicting the rate at which competitors will spend money to develop the technology. The forecasting error is a result of bad competitive analysis, not bad technology analysis.

Thus, it might appear that a technology still has a great potential but in fact what is fuelling its advance is rapidly increasing amounts of investment.

Psychologically, we believe the more effort we put in the more results we should see. This has disastrous effects in organizations unable to recognize limits.
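A small, hypothetical sketch of that time-versus-effort confusion: the same illustrative S-curve from above produces very different performance-versus-time paths depending on the assumed spending schedule, so extrapolating from a time chart quietly bakes in a guess about future effort.

```python
import numpy as np

def s_curve(cumulative_effort, limit=100.0, k=0.15, midpoint=50.0):
    # Same illustrative logistic curve: performance as a function of cumulative effort.
    return limit / (1.0 + np.exp(-k * (cumulative_effort - midpoint)))

years = np.arange(1, 11)

# Two hypothetical spending schedules (effort applied per year), invented for illustration.
steady = np.full(len(years), 10.0)                  # 10 units of effort every year
accelerating = np.linspace(2.0, 30.0, len(years))   # ramping investment over time

for label, yearly_effort in [("steady", steady), ("accelerating", accelerating)]:
    performance_by_year = s_curve(np.cumsum(yearly_effort))
    print(label.ljust(12), np.round(performance_by_year, 1))
# Plotted against time, the two series look like different technologies;
# plotted against cumulative effort, they trace the same underlying S-curve.
```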

S-Curve Pairs

Often there is more than one S-curve; the gap between them represents a discontinuity.

Efficiency Versus Effectiveness

Effectiveness is set when a company determines which S-curve it will pursue (e.g., vacuum tubes or solid state). Efficiency is the slope of the present curve. Effectiveness deals with sustaining a strategy; efficiency, with the present utilization of resources. Moving into a new technology almost always appears to be less efficient than staying with the present technology because of the need to bring the new technology up to speed. The cost of progress of an established technology is compared with that of one in its infancy, even though it may eventually cost much less to bring the new technology up to the state of the art than it did to bring the present one there. To paraphrase a comment I’ve heard many times at budget meetings: “In any case the new technology development cost is above and beyond what we’re already paying. Since it doesn’t get us any further than we presently are, it cannot make sense.” The problem with that argument is that someday it will be ten or twenty or thirty times more efficient to invest in the new technology, and it will outperform the existing technology by a wide margin.

There are many decisions that put effectiveness and efficiency at odds with each other, particularly those involving resource allocation. This is one of the toughest areas to come to grips with because it means withdrawing resources from the maturing business.

[…]

In addition, many companies have management policies that, interpreted literally, impede moving from one S-curve to another. For example, “Our first priority will be to protect our existing businesses.” Or “We will operate each business on a self-sustaining basis; each will have to provide its own cash as well as make a contribution to corporate overhead.” These rules are established either in a period of relaxed competition or out of political necessity.

The fundamental dilemma is that it always appears to be more economic to protect the old business than to feed the new one at least until competitors pursuing the new approach get the upper hand. Conventional financial theory has no practical way to take account of the opportunity cost of not investing in the new technology. If it did, the decision to invest in the present technology would often be reversed.

Metrics become distorted and defenders believe they are more productive than they are. Attackers and defenders look at productivity differently.

Even if a defender succeeds in managing his own S-curve better, chances are he will not be able to raise his efficiency by more than, say, 50 percent. Not much use against an attacker whose productivity might be climbing ten times faster because he has chosen a different S-curve. All too frequently the defender believes his productivity is actually higher than his attacker’s and ignores what the attacker potentially may have to offer the customer. Defenders and attackers often have a different perspective when it comes to judging productivity. For the attacker, productivity is the improvement in performance of his new product over his old product divided by the effort he puts into developing the new product. If his technology is beginning to approach the steep part of its S-curve, this could be a big number. The defender, however, observes the productivity through the eyes of the market, which may still be treating the new product as not much more than a curiosity. So in his eyes the attacker’s productivity is quite low. We’ve seen this happen time and again in the electronics industry. Products such as microwaves, audio cassettes and floppy discs failed at first to meet customer standards, but then, almost overnight, they set new high-quality standards and stormed the market.

Even if the defender admits that the attacker’s product may have an edge, he is likely to say it is too small to matter. Since the first version of a wholly new product is frequently just marginally better than the existing product, the defender often thinks the attacker’s productivity is lower, not higher than his own. The danger comes in using this erroneous perception to figure out what is going to happen next. Too often defenders err by thinking that the attacker’s second generation new product will require enormous resources and result in little progress. We know differently. We know from the mathematics of adolescent S-curves that once the first crack appears in the market dam, the flood cannot be far behind. And further, it won’t cost nearly as much since the first product has absorbed much of the start-up costs. No doubt this will be a big shock to the defender who will tell the stock market analysts, “Well, the attacker was just lucky. There was nothing in his record to suggest he could have pulled this thing off.” All true. From the defender’s viewpoint there was nothing in the attacker’s record to suggest that a change was coming. But the underlying forces were at work nevertheless, and in the end they appeared.
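To make the two vantage points concrete, here is a toy calculation with invented numbers (not Foster’s): the attacker measures his new product against his own old one, while the defender measures it against what the market sees today.

```python
# Hypothetical numbers, purely for illustration of the two productivity calculations.
attacker_old = 10.0        # performance of the attacker's previous product
attacker_new = 15.0        # performance of the attacker's new product
development_effort = 1.0   # effort spent developing the new product

market_perceived_new = 11.0  # what the market (and the defender) sees in the new product today
defender_current = 10.0      # the defender's established product

# Attacker's view: improvement over his own old product, per unit of effort.
attacker_productivity = (attacker_new - attacker_old) / development_effort

# Defender's view: the new product's edge as the market currently values it.
defender_view_of_attacker = (market_perceived_new - defender_current) / development_effort

print(attacker_productivity, defender_view_of_attacker)  # 5.0 vs 1.0 for the same product
```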

Innovation: The Attacker’s Advantage explores why leaders lose and what you can do about it.

How Do People Get New Ideas?

In a previously unpublished 1959 essay, Isaac Asimov explores how people get new ideas.

Echoing Einstein and Seneca, Asimov believes that new ideas come from combining things together. Steve Jobs thought the same thing.

What if the same earth-shaking idea occurred to two men, simultaneously and independently? Perhaps, the common factors involved would be illuminating. Consider the theory of evolution by natural selection, independently created by Charles Darwin and Alfred Wallace.

There is a great deal in common there. Both traveled to far places, observing strange species of plants and animals and the manner in which they varied from place to place. Both were keenly interested in finding an explanation for this, and both failed until each happened to read Malthus’s “Essay on Population.”

Both then saw how the notion of overpopulation and weeding out (which Malthus had applied to human beings) would fit into the doctrine of evolution by natural selection (if applied to species generally).

Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.

Undoubtedly in the first half of the 19th century, a great many naturalists had studied the manner in which species were differentiated among themselves. A great many people had read Malthus. Perhaps some both studied species and read Malthus. But what you needed was someone who studied species, read Malthus, and had the ability to make a cross-connection.

That is the crucial point, and the rare characteristic, that must be found. Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, “How stupid of me not to have thought of this.”

[…]

Making the cross-connection requires a certain daring. It must, for any cross-connection that does not require daring is performed at once by many and develops not as a “new idea,” but as a mere “corollary of an old idea.”

It is only afterward that a new idea seems reasonable. To begin with, it usually seems unreasonable. It seems the height of unreason to suppose the earth was round instead of flat, or that it moved instead of the sun, or that objects required a force to stop them when in motion, instead of a force to keep them moving, and so on.

The paradox here is that crazy people are good at seeing new connections too, one notable difference being the outcome.

As a brief aside, I wonder if people are creative in part because they are autodidacts, rather than being autodidacts because they are creative. The formal education system doesn’t exactly encourage creativity. Generally, there are right and wrong answers, and we’re taught to get the right answer. Autodidacts try new things, often learning negative knowledge instead of positive knowledge.

When you’re right about connections that others cannot see, you are called a creative genius. When you’re wrong, however, you’re often labelled mentally ill.

This comes back to Keynes: “Worldly wisdom teaches that it is better for the reputation to fail conventionally than to succeed unconventionally.”

A great way to connect things is with a commonplace book.

Ideas are not singular


“If you give a good idea to a mediocre team, they will screw it up. If you give a mediocre idea to a brilliant team, they will either fix it, or throw it away and come up with something better.”

In January 2006, Disney announced it would spend $7.4 billion to buy its “cousin” Pixar Animation Studios. Many wondered about the fate of Disney Animation Studios itself – would Disney shut down the division that forged its identity, but had stagnated since its success in the early 1990s with films like The Lion King and Beauty and the Beast? Would it leave hand-drawn animation behind in favor of computer animation?

Within months, the question was settled. Disney CEO Bob Iger named Pixar’s John Lasseter and Ed Catmull to head Disney Animation, and the duo decided to leave the divisions separate and autonomous.

The decision played out brilliantly. Not only has Pixar continued to release hits like Ratatouille, Wall-E, Up, and Toy Story 3, but Disney Animation recently released the highest-grossing animated movie of all time – Frozen – on the heels of its other well-received animated films, Wreck-it Ralph and Tangled.

***

This kind of success seemed far from reality in 1986, when Steve Jobs decided to purchase a small, struggling division of Lucasfilm with one product: the Pixar Image Computer. As Catmull explains in his book Creativity Inc.:

From the outside, Pixar probably looked like your typical Silicon Valley startup. On the inside, however, we were anything but. Steve Jobs had never manufactured or marketed a high-end machine before, so he had neither the experience nor the intuition about how to do so. We had no sales people and no marketing people and no idea where to find them. Steve, Alvy Ray Smith, John Lasseter, me—none of us knew the first thing about how to run the kind of business we had just started. We were drowning.

By 1990, the team had realized Pixar’s future was not in selling machines, but selling art. Still, it was a tough time. Even as Pixar produced computer animated TV ads and shorts, the company was losing too much money. Jobs tried to sell it more than once – luckily, without success.

Pixar caught its first break in 1991, when Disney’s Jeff Katzenberg asked the company to produce three computer-animated features, which Disney would distribute and own. (These would go on to become Toy Story, A Bug’s Life, and Toy Story 2.)

By the end of 1995, Pixar was a public company and Toy Story a legitimate hit. Amid the success, Catmull had his first existential crisis as President of Pixar Animation:

For twenty years, my life had been defined by the goal of making the first computer graphics movie. Now that goal had been reached, I had what I can only describe as a hollow, lost feeling. As a manager, I felt a troubling lack of purpose. Now what? The thing that had replaced it seemed to be the act of running a company, which was more than enough to keep me busy, but it wasn’t special. Pixar was now public and successful, yet there was something unsatisfying about the prospect of merely keeping it running. It took a serious and unexpected problem to give me a new sense of mission.

Catmull realized that although it had put out a great film, Pixar had a large group of employees who were reluctant to sign on for a second project. With the creative team behind Toy Story being given tremendous resources and status, the production team – responsible for executing thousands of movie-making details – felt marginalized.

In the process of solving his organizational problem, Catmull realized a new purpose: Fostering a sustainable organizational culture.

As I saw it, our mandate was to foster a culture that would seek to keep our sightlines clear, even as we accepted that we were often trying to engage with and fix what we could not see. My hope was to make this culture so vigorous that it would survive when Pixar’s founding members were long gone, enabling the company to continue producing original films that made money, yes, but also contributed positively to the world. This sounds like a lofty goal, but it was there for all of us from the beginning. We were blessed with a remarkable group of employees who valued change, risk, and the unknown and who wanted to rethink how we create. How could we enable the talents of these people, keep them happy, and not let the inevitable complexities that come with any collaborative endeavor undo us along the way? That was the job I assigned myself—and the one that still animates me to this day.

From there, Creativity, Inc. explores the process of developing the culture envisioned in his post-Toy Story hangover. Given his success at Pixar, and then Disney, some of the key points are worth examining.

In the end, it’s about people, not ideas.

If you give a good idea to a mediocre team, they will screw it up. If you give a mediocre idea to a brilliant team, they will either fix it, or throw it away and come up with something better.

[…]

Why are we confused about this? Because too many of us think of ideas as being singular, as if they float in the ether, fully formed and independent of the people who wrestle with them. Ideas, though, are not singular. They are forged through tens of thousands of decisions, often made by dozens of people.

Solicit criticism from a trusted group:

I want to stress that you don’t have to work at Pixar to create a Braintrust. Every creative person, no matter their field, can draft into service those around them who exhibit the right mixture of intelligence, insight, and grace.

[…]

Here are the qualifications required: The people you choose must (a) make you think smarter and (b) put lots of solutions on the table in a short amount of time. I don’t care who it is, the janitor or the intern or one of your most trusted lieutenants: If they can help you do that, they should be at the table.

Failure is necessary for creative work:

Says [Director] Andrew [Stanton]: “You wouldn’t say to somebody who is first learning to play the guitar, ‘You better think really hard about where you put your fingers on the guitar neck before you strum, because you only get to strum once, and that’s it. And if you get that wrong, we’re going to move on.’ That’s no way to learn, is it?”

Even though people in our offices have heard Andrew say this repeatedly, many still miss the point. They think it means accept failure with dignity and move on. The better, more subtle interpretation is that failure is a manifestation of learning and exploration. If you’re not experiencing failure, then you are making a far worse mistake: You are being driven by a desire to avoid it.

Protect the New:

When I advocate for protecting the new, then, I am using the word somewhat differently. I am saying that when someone hatches an original idea, it may be ungainly and poorly defined, but it is also the opposite of established and entrenched—and that is precisely what is most exciting about it. If, while in this vulnerable state, it is exposed to naysayers who fail to see its potential or lack the patience to see it evolve, it could be destroyed. Part of our job is to protect the new from people who don’t understand that in order for greatness to emerge, there must be phases of not-so-greatness.

Conflict is Essential to Creative Progress

As director Brad Bird sees it, every creative organization—be it an animation studio or a record label—is an ecosystem. “You need all the seasons,” he says. “You need storms. It’s like an ecology. To view lack of conflict as optimum is like saying a sunny day is optimum. A sunny day is when the sun wins out over the rain. There’s no conflict. You have a clear winner. But if every day is sunny and it doesn’t rain, things don’t grow. And if it’s sunny all the time—if, in fact, we don’t ever have night—all kinds of things don’t happen and the planet dries up. The key is to view conflict as essential, because that’s how we know the best ideas will be tested and survive. You know, it can’t only be sunlight.”

Creativity Inc. is an engaging look inside the creativity engine at Pixar.

The Improbable Story of the Online Encyclopedia

More important than determining who deserved credit is appreciating the dynamics that occur when people share ideas.

Walter Isaacson is the rare sort of writer whose books, if you’re like me, you simply pre-order. The first thing of his I read was the Einstein biography, then the Steve Jobs biography, then I went back and ordered everything else. He’s out with a new book, The Innovators, which recounts the story of the people who created the Internet. From Ada Lovelace, Lord Byron’s daughter, who pioneered computer programming in the 1840s, long before anyone else, through to Steve Jobs, Tim Berners-Lee, and Larry Page, Isaacson shows not only the people but how their minds worked.

Below is an excerpt from The Innovators, recounting the improbable story of Wikipedia.

When he launched the Web in 1991, Tim Berners-Lee intended it to be used as a collaboration tool, which is why he was dismayed that the Mosaic browser did not give users the ability to edit the Web pages they were viewing. It turned Web surfers into passive consumers of published content. That lapse was partly mitigated by the rise of blogging, which encouraged user-generated content. In 1995 another medium was invented that went further toward facilitating collaboration on the Web. It was called a wiki, and it worked by allowing users to modify Web pages—not by having an editing tool in their browser but by clicking and typing directly onto Web pages that ran wiki software.

The application was developed by Ward Cunningham, another of those congenial Midwest natives (Indiana, in his case) who grew up making ham radios and getting turned on by the global communities they fostered. After graduating from Purdue, he got a job at an electronic equipment company, Tektronix, where he was assigned to keep track of projects, a task similar to what Berners-Lee faced when he went to CERN.

To do this he modified a superb software product developed by one of Apple’s most enchanting innovators, Bill Atkinson. It was called HyperCard, and it allowed users to make their own hyper-linked cards and documents on their computers. Apple had little idea what to do with the software, so at Atkinson’s insistence Apple gave it away free with its computers. It was easy to use, and even kids—especially kids—found ways to make HyperCard stacks of linked pictures and games.

Cunningham was blown away by HyperCard when he first saw it, but he found it cumbersome. So he created a super simple way of creating new cards and links: a blank box on each card in which you could type a title or word or phrase. If you wanted to make a link to Jane Doe or Harry’s Video Project or anything else, you simply typed those words in the box. “It was fun to do,” he said.

Then he created an Internet version of his HyperText program, writing it in just a few hundred lines of Perl code. The result was a new content management application that allowed users to edit and contribute to a Web page. Cunningham used the application to build a service, called the Portland Pattern Repository, that allowed software developers to exchange programming ideas and improve on the patterns that others had posted. “The plan is to have interested parties write web pages about the People, Projects and Patterns that have changed the way they program,” he wrote in an announcement posted in May 1995. “The writing style is casual, like email . . . Think of it as a moderated list where anyone can be moderator and everything is archived. It’s not quite a chat, still, conversation is possible.”

Now he needed a name. What he had created was a quick Web tool, but QuickWeb sounded lame, as if conjured up by a committee at Microsoft. Fortunately, there was another word for quick that popped from the recesses of his memory. When he was on his honeymoon in Hawaii thirteen years earlier, he remembered, “the airport counter agent directed me to take the wiki wiki bus between terminals.” When he asked what it meant, he was told that wiki was the Hawaiian word for quick, and wiki wiki meant superquick. So he named his Web pages and the software that ran them WikiWikiWeb, wiki for short.

In his original version, the syntax Cunningham used for creating links in a text was to smash words together so that there would be two or more capital letters—as in CapitalLetters—in a term. It became known as CamelCase, and its resonance would later be seen in scores of Internet brands such as AltaVista, MySpace, and YouTube.
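Cunningham’s actual implementation was a few hundred lines of Perl; purely as an illustration of the CamelCase linking convention (not his code), a few lines of Python can spot such terms and turn them into wiki links:

```python
import re

# Rough illustration of the CamelCase convention only, not Cunningham's Perl code:
# any run of two or more capitalized word-chunks smashed together becomes a wiki link.
CAMEL_CASE = re.compile(r"\b(?:[A-Z][a-z0-9]+){2,}\b")

def linkify(text: str) -> str:
    return CAMEL_CASE.sub(lambda m: f'<a href="/wiki/{m.group(0)}">{m.group(0)}</a>', text)

print(linkify("See PortlandPatternRepository and WikiWikiWeb for examples."))
# -> See <a href="/wiki/PortlandPatternRepository">PortlandPatternRepository</a> and
#    <a href="/wiki/WikiWikiWeb">WikiWikiWeb</a> for examples.
```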

WardsWiki (as it became known) allowed anyone to edit and contribute, without even needing a password. Previous versions of each page would be stored, in case someone botched one up, and there would be a “Recent Changes” page so that Cunningham and others could keep track of the edits. But there would be no supervisor or gatekeeper preapproving the changes. It would work, he said with cheery midwestern optimism, because “people are generally good.” It was just what Berners-Lee had envisioned, a Web that was read-write rather than read-only. “Wikis were one of the things that allowed collaboration,” Berners-Lee said. “Blogs were another.”

Like Berners-Lee, Cunningham made his basic software available for anyone to modify and use. Consequently, there were soon scores of wiki sites as well as open-source improvements to his software. But the wiki concept was not widely known beyond software engineers until January 2001, when it was adopted by a struggling Internet entrepreneur who was trying, without much success, to build a free, online encyclopedia.

***

Jimmy Wales was born in 1966 in Huntsville, Alabama, a town of rednecks and rocket scientists. Six years earlier, in the wake of Sputnik, President Eisenhower had personally gone there to open the Marshall Space Flight Center. “Growing up in Huntsville during the height of the space program kind of gave you an optimistic view of the future,” Wales observed. “An early memory was of the windows in our house rattling when they were testing the rockets. The space program was basically our hometown sports team, so it was exciting and you felt it was a town of technology and science.”

Wales, whose father was a grocery store manager, went to a one-room private school that was started by his mother and grandmother, who taught music. When he was three, his mother bought a World Book Encyclopedia from a door-to-door salesman; as he learned to read, it became an object of veneration. It put at his fingertips a cornucopia of knowledge along with maps and illustrations and even a few cellophane layers of transparencies you could lift to explore such things as the muscles, arteries, and digestive system of a dissected frog. But Wales soon discovered that the World Book had shortcomings: no matter how much was in it, there were many more things that weren’t. And this became more so with time. After a few years, there were all sorts of topics—moon landings and rock festivals and protest marches, Kennedys and kings—that were not included. World Book sent out stickers for owners to paste on the pages in order to update the encyclopedia, and Wales was fastidious about doing so. “I joke that I started as a kid revising the encyclopedia by stickering the one my mother bought.”

After graduating from Auburn and a halfhearted stab at graduate school, Wales took a job as a research director for a Chicago financial trading firm. But it did not fully engage him. His scholarly attitude was combined with a love for the Internet that had been honed by playing Multi-User Dungeons fantasies, which were essentially crowdsourced games. He founded and moderated an Internet mailing list discussion on Ayn Rand, the Russian-born American writer who espoused an objectivist and libertarian philosophy. He was very open about who could join the discussion forum, frowned on rants and the personal attack known as flaming, and managed comportment with a gentle hand. “I have chosen a ‘middle-ground’ method of moderation, a sort of behind-the-scenes prodding,” he wrote in a posting.

Before the rise of search engines, among the hottest Internet services were Web directories, which featured human-assembled lists and categories of cool sites, and Web rings, which created through a common navigation bar a circle of related sites that were linked to one another. Jumping on these bandwagons, Wales and two friends in 1996 started a venture that they dubbed BOMIS, for Bitter Old Men in Suits, and began casting around for ideas. They launched a panoply of startups that were typical of the dotcom boom of the late ’90s: a used-car ring and directory with pictures, a food-ordering service, a business directory for Chicago, and a sports ring. After Wales relocated to San Diego, he launched a directory and ring that served as “kind of a guy-oriented search engine,” featuring pictures of scantily clad women.

The rings showed Wales the value of having users help generate the content, a concept that was reinforced as he watched how the crowds of sports bettors on his site provided a more accurate morning line than any single expert could. He also was impressed by Eric Raymond’s The Cathedral and the Bazaar, which explained why an open and crowd-generated bazaar was a better model for a website than the carefully controlled top-down construction of a cathedral.

Wales next tried an idea that reflected his childhood love of the World Book: an online encyclopedia. He dubbed it Nupedia, and it had two attributes: it would be written by volunteers, and it would be free. It was an idea that had been proposed in 1999 by Richard Stallman, the pioneering advocate of free software. Wales hoped eventually to make money by selling ads. To help develop it, he hired a doctoral student in philosophy, Larry Sanger, whom he first met in online discussion groups. “He was specifically interested in finding a philosopher to lead the project,” Sanger recalled.

Sanger and Wales developed a rigorous, seven-step process for creating and approving articles, which included assigning topics to proven experts, whose credentials had been vetted, and then putting the drafts through outside expert reviews, public reviews, professional copy editing, and public copy editing. “We wish editors to be true experts in their fields and (with few exceptions) possess Ph.Ds.,” the Nupedia policy guidelines stipulated. “Larry’s view was that if we didn’t make it more academic than a traditional encyclopedia, people wouldn’t believe in it and respect it,” Wales explained. “He was wrong, but his view made sense given what we knew at the time.” The first article, published in March 2000, was on atonality by a scholar at the Johannes Gutenberg University in Mainz, Germany.

***

It was a painfully slow process and, worse yet, not a lot of fun. The whole point of writing for free online, as Justin Hall had shown, was that it produced a jolt of joy. After a year, Nupedia had only about a dozen articles published, making it useless as an encyclopedia, and 150 that were still in draft stage, which indicated how unpleasant the process had become. It had been rigorously engineered not to scale.

This hit home to Wales when he decided that he would personally write an article on Robert Merton, an economist who had won the Nobel Prize for creating a mathematical model for markets containing derivatives. Wales had published a paper on option pricing theory, so he was very familiar with Merton’s work. “I started to try to write the article and it was very intimidating, because I knew they were going to send my draft out to the most prestigious finance professors they could find,” Wales said. “Suddenly I felt like I was back in grad school, and it was very stressful. I realized that the way we had set things up was not going to work.”

That was when Wales and Sanger discovered Ward Cunningham’s wiki software. Like many digital-age innovations, the application of wiki software to Nupedia in order to create Wikipedia—combining two ideas to create an innovation—was a collaborative process involving thoughts that were already in the air. But in this case a very non-wiki-like dispute erupted over who deserved the most credit.

The way Sanger remembered the story, he was having lunch in early January 2001 at a roadside taco stand near San Diego with a friend named Ben Kovitz, a computer engineer. Kovitz had been using Cunningham’s wiki and described it at length. It then dawned on Sanger, he claimed, that a wiki could be used to help solve the problems he was having with Nupedia. “Instantly I was considering whether wiki would work as a more open and simple editorial system for a free, collaborative encyclopedia,” Sanger later recounted. “The more I thought about it, without even having seen a wiki, the more it seemed obviously right.” In his version of the story, he then convinced Wales to try the wiki approach.

Kovitz, for his part, contended that he was the one who came up with the idea of using wiki software for a crowdsourced encyclopedia and that he had trouble convincing Sanger. “I suggested that instead of just using the wiki with Nupedia’s approved staff, he open it up to the general public and let each edit appear on the site immediately, with no review process,” Kovitz recounted. “My exact words were to allow ‘any fool in the world with Internet access’ to freely modify any page on the site.” Sanger raised some objections: “Couldn’t total idiots put up blatantly false or biased descriptions of things?” Kovitz replied, “Yes, and other idiots could delete those changes or edit them into something better.”

As for Wales’s version of the story, he later claimed that he had heard about wikis a month before Sanger’s lunch with Kovitz. Wikis had, after all, been around for more than four years and were a topic of discussion among programmers, including one who worked at BOMIS, Jeremy Rosenfeld, a big kid with a bigger grin. “Jeremy showed me Ward’s wiki in December 2000 and said it might solve our problem,” Wales recalled, adding that when Sanger showed him the same thing, he responded, “Oh, yes, wiki, Jeremy showed me this last month.” Sanger challenged that recollection, and a nasty crossfire ensued on Wikipedia’s discussion boards. Wales finally tried to de-escalate the sniping with a post telling Sanger, “Gee, settle down,” but Sanger continued his battle against Wales in a variety of forums.

The dispute presented a classic case of a historian’s challenge when writing about collaborative creativity: each player has a different recollection of who made which contribution, with a natural tendency to inflate his own. We’ve all seen this propensity many times in our friends, and perhaps even once or twice in ourselves. But it is ironic that such a dispute attended the birth of one of history’s most collaborative creations, a site that was founded on the faith that people are willing to contribute without requiring credit. (Tellingly, and laudably, Wikipedia’s entries on its own history and the roles of Wales and Sanger have turned out, after much fighting on the discussion boards, to be balanced and objective.)

More important than determining who deserved credit is appreciating the dynamics that occur when people share ideas. Ben Kovitz, for one, understood this. He was the player who had the most insightful view—call it the “bumblebee at the right time” theory—on the collaborative way that Wikipedia was created. “Some folks, aiming to criticize or belittle Jimmy Wales, have taken to calling me one of the founders of Wikipedia, or even ‘the true founder,’” he said. “I suggested the idea, but I was not one of the founders. I was only the bumblebee. I had buzzed around the wiki flower for a while, and then pollinated the free-encyclopedia flower. I have talked with many others who had the same idea, just not in times or places where it could take root.”

That is the way that good ideas often blossom: a bumblebee brings half an idea from one realm, and pollinates another fertile realm filled with half-formed innovations. This is why Web tools are valuable, as are lunches at taco stands.

***

Cunningham was supportive, indeed delighted when Wales called him up in January 2001 to say he planned to use the wiki software to juice up his encyclopedia project. Cunningham had not sought to patent or copyright either the software or the wiki name, and he was one of those innovators who was happy to see his products become tools that anyone could use or adapt.

At first Wales and Sanger conceived of Wikipedia merely as an adjunct to Nupedia, sort of like a feeder product or farm team. The wiki articles, Sanger assured Nupedia’s expert editors, would be relegated to a separate section of the website and not be listed with the regular Nupedia pages. “If a wiki article got to a high level it could be put into the regular Nupedia editorial process,” he wrote in a post. Nevertheless, the Nupedia purists pushed back, insisting that Wikipedia be kept completely segregated, so as not to contaminate the wisdom of the experts. The Nupedia Advisory Board tersely declared on its website, “Please note: the editorial processes and policies of Wikipedia and Nupedia are totally separate; Nupedia editors and peer reviewers do not necessarily endorse the Wikipedia project, and Wikipedia contributors do not necessarily endorse the Nupedia project.” Though they didn’t know it, the pedants of the Nupedia priesthood were doing Wikipedia a huge favor by cutting the cord.

Unfettered, Wikipedia took off. It became to Web content what GNU/Linux was to software: a peer-to-peer commons collaboratively created and maintained by volunteers who worked for the civic satisfactions they found. It was a delightful, counterintuitive concept, perfectly suited to the philosophy, attitude, and technology of the Internet. Anyone could edit a page, and the results would show up instantly. You didn’t have to be an expert. You didn’t have to fax in a copy of your diploma. You didn’t have to be authorized by the Powers That Be. You didn’t even have to be registered or use your real name. Sure, that meant vandals could mess up pages. So could idiots or ideologues. But the software kept track of every version. If a bad edit appeared, the community could simply get rid of it by clicking on a “revert” link. “Imagine a wall where it was easier to remove graffiti than add it” is the way the media scholar Clay Shirky explained the process. “The amount of graffiti on such a wall would depend on the commitment of its defenders.” In the case of Wikipedia, its defenders were fiercely committed. Wars have been fought with less intensity than the reversion battles on Wikipedia. And somewhat amazingly, the forces of reason regularly triumphed.
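Shirky’s graffiti-wall analogy is easy to see in miniature. The toy Python sketch below is purely illustrative, not Cunningham’s wiki code or any real wiki engine’s API (the names WikiPage, edit, and revert are invented here): because every edit is appended to a page’s revision history, “reverting” a bad edit is just one more append that republishes an earlier version, which is why removing graffiti can be cheaper than adding it.

```python
# Illustrative sketch only: a wiki-style page that keeps every revision,
# so a bad edit can be undone by republishing an earlier version.

class WikiPage:
    def __init__(self, title, text=""):
        self.title = title
        self.revisions = [text]      # full history, oldest first

    @property
    def current(self):
        return self.revisions[-1]    # the version readers see

    def edit(self, new_text):
        # Anyone may edit; the change appears instantly as the newest revision.
        self.revisions.append(new_text)

    def revert(self, to_revision=-2):
        # Undoing vandalism is one step: republish an earlier revision.
        self.revisions.append(self.revisions[to_revision])


page = WikiPage("Albert Einstein", "Physicist, born 1879.")
page.edit("Physicist, born 1879. Traveled to Albania in 1935.")  # a bad edit
page.revert()                        # one click undoes it
print(page.current)                  # "Physicist, born 1879."
```

A real wiki engine layers timestamps, author attribution, and diff views on top of this, but the underlying economics are the same: the full history is never lost, so defense stays at least as easy as attack.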

One month after Wikipedia’s launch, it had a thousand articles, approximately seventy times the number that Nupedia had after a full year. By September 2001, after eight months in existence, it had ten thousand articles. That month, when the September 11 attacks occurred, Wikipedia showed its nimbleness and usefulness; contributors scrambled to create new pieces on such topics as the World Trade Center and its architect. A year after that, the article total reached forty thousand, more than were in the World Book that Wales’s mother had bought. By March 2003 the number of articles in the English-language edition had reached 100,000, with close to five hundred active editors working almost every day. At that point, Wales decided to shut Nupedia down.

By then Sanger had been gone for a year. Wales had let him go. They had increasingly clashed on fundamental issues, such as Sanger’s desire to give more deference to experts and scholars. In Wales’s view, “people who expect deference because they have a Ph.D. and don’t want to deal with ordinary people tend to be annoying.” Sanger felt, to the contrary, that it was the nonacademic masses who tended to be annoying. “As a community, Wikipedia lacks the habit or tradition of respect for expertise,” he wrote in a New Year’s Eve 2004 manifesto that was one of many attacks he leveled after he left. “A policy that I attempted to institute in Wikipedia’s first year, but for which I did not muster adequate support, was the policy of respecting and deferring politely to experts.” Sanger’s elitism was rejected not only by Wales but by the Wikipedia community. “Consequently, nearly everyone with much expertise but little patience will avoid editing Wikipedia,” Sanger lamented.

Sanger turned out to be wrong. The uncredentialed crowd did not run off the experts. Instead the crowd itself became the expert, and the experts became part of the crowd. Early in Wikipedia’s development, I was researching a book about Albert Einstein and I noticed that the Wikipedia entry on him claimed that he had traveled to Albania in 1935 so that King Zog could help him escape the Nazis by getting him a visa to the United States. This was completely untrue, even though the passage included citations to obscure Albanian websites where this was proudly proclaimed, usually based on some third-hand series of recollections about what someone’s uncle once said a friend had told him. Using both my real name and a Wikipedia handle, I deleted the assertion from the article, only to watch it reappear. On the discussion page, I provided sources for where Einstein actually was during the time in question (Princeton) and what passport he was using (Swiss). But tenacious Albanian partisans kept reinserting the claim. The Einstein-in-Albania tug-of-war lasted weeks. I became worried that the obstinacy of a few passionate advocates could undermine Wikipedia’s reliance on the wisdom of crowds. But after a while, the edit wars ended, and the article no longer had Einstein going to Albania. At first I didn’t credit that success to the wisdom of crowds, since the push for a fix had come from me and not from the crowd. Then I realized that I, like thousands of others, was in fact a part of the crowd, occasionally adding a tiny bit to its wisdom.

A key principle of Wikipedia was that articles should have a neutral point of view. This succeeded in producing articles that were generally straightforward, even on controversial topics such as global warming and abortion. It also made it easier for people of different viewpoints to collaborate. “Because of the neutrality policy, we have partisans working together on the same articles,” Sanger explained. “It’s quite remarkable.” The community was usually able to use the lodestar of the neutral point of view to create a consensus article offering competing views in a neutral way. It became a model, rarely emulated, of how digital tools can be used to find common ground in a contentious society.

Not only were Wikipedia’s articles created collaboratively by the community; so were its operating practices. Wales fostered a loose system of collective management, in which he played guide and gentle prodder but not boss. There were wiki pages where users could jointly formulate and debate the rules. Through this mechanism, guidelines were evolved to deal with such matters as reversion practices, mediation of disputes, the blocking of individual users, and the elevation of a select few to administrator status. All of these rules grew organically from the community rather than being dictated downward by a central authority. Like the Internet itself, power was distributed. “I can’t imagine who could have written such detailed guidelines other than a bunch of people working together,” Wales reflected. “It’s common in Wikipedia that we’ll come to a solution that’s really well thought out because so many minds have had a crack at improving it.”

As it grew organically, with both its content and its governance sprouting from its grassroots, Wikipedia was able to spread like kudzu. At the beginning of 2014, there were editions in 287 languages, ranging from Afrikaans to Žemaitška. The total number of articles was 30 million, with 4.4 million in the English-language edition. In contrast, the Encyclopedia Britannica, which quit publishing a print edition in 2010, had eighty thousand articles in its electronic edition, less than 2 percent of the number in Wikipedia. “The cumulative effort of Wikipedia’s millions of contributors means you are a click away from figuring out what a myocardial infarction is, or the cause of the Agacher Strip War, or who Spangles Muldoon was,” Clay Shirky has written. “This is an unplanned miracle, like ‘the market’ deciding how much bread goes in the store. Wikipedia, though, is even odder than the market: not only is all that material contributed for free, it is available to you free.” The result has been the greatest collaborative knowledge project in history.

***

So why do people contribute? Harvard professor Yochai Benkler dubbed Wikipedia, along with open-source software and other free collaborative projects, examples of “commons-based peer production.” He explained, “Its central characteristic is that groups of individuals successfully collaborate on large-scale projects following a diverse cluster of motivational drives and social signals, rather than either market prices or managerial commands.” These motivations include the psychological reward of interacting with others and the personal gratification of doing a useful task. We all have our little joys, such as collecting stamps or being a stickler for good grammar, knowing Jeff Torborg’s college batting average or the order of battle at Trafalgar. These all find a home on Wikipedia.

There is something fundamental, almost primordial at work. Some Wikipedians refer to it as “wiki-crack.” It’s the rush of dopamine that seems to hit the brain’s pleasure center when you make a smart edit and it appears instantly in a Wikipedia article. Until recently, being published was a pleasure afforded only to a select few. Most of us in that category can remember the thrill of seeing our words appear in public for the first time. Wikipedia, like blogs, made that treat available to anyone. You didn’t have to be credentialed or anointed by the media elite.

For example, many of Wikipedia’s articles on the British aristocracy were largely written by a user known as Lord Emsworth. They were so insightful about the intricacies of the peerage system that some were featured as the “Article of the Day,” and Lord Emsworth rose to become a Wikipedia administrator. It turned out that Lord Emsworth, a name taken from P. G. Wodehouse’s novels, was actually a 16-year-old schoolboy in South Brunswick, New Jersey. On Wikipedia, nobody knows you’re a commoner.

Connected to that is the even deeper satisfaction that comes from helping to create the information that we use rather than just passively receiving it. “Involvement of people in the information they read,” wrote the Harvard professor Jonathan Zittrain, “is an important end itself.” A Wikipedia that we create in common is more meaningful than would be the same Wikipedia handed to us on a platter. Peer production allows people to be engaged.

Jimmy Wales often repeated a simple, inspiring mission for Wikipedia: “Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That’s what we’re doing.” It was a huge, audacious, and worthy goal. But it badly understated what Wikipedia did. It was about more than people being “given” free access to knowledge; it was also about empowering them, in a way not seen before in history, to be part of the process of creating and distributing knowledge. Wales came to realize that. “Wikipedia allows people not merely to access other people’s knowledge but to share their own,” he said. “When you help build something, you own it, you’re vested in it. That’s far more rewarding than having it handed down to you.”

Wikipedia took the world another step closer to the vision propounded by Vannevar Bush in his 1945 essay, “As We May Think,” which predicted, “Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified.” It also harkened back to Ada Lovelace, who asserted that machines would be able to do almost anything, except think on their own. Wikipedia was not about building a machine that could think on its own. It was instead a dazzling example of human-machine symbiosis, the wisdom of humans and the processing power of computers being woven together like a tapestry. When Wales and his new wife had a daughter in 2011, they named her Ada, after Lady Lovelace.

The Innovators is a must-read for anyone looking to better understand the creative mind.

(h/t The Daily Beast)