Tag: Creativity

The Glass Cage: Automation and Us

People have worried about losing their jobs to robots for decades now. But how is growing automation really going to change us? Let’s take a look at the limitations of automation and the uniquely human skills that will remain valuable.

***

The impact of technology is all around us. Maybe we’re at another Gutenberg moment and maybe we’re not.

Marshall McLuhan said it best.

When any new form comes into the foreground of things, we naturally look at it through the old stereos. We can’t help that. This is normal, and we’re still trying to see how will our previous forms of political and educational patterns persist under television. We’re just trying to fit the old things into the new form, instead of asking what is the new form going to do to all the assumptions we had before.

He also wrote that “a new medium is never an addition to an old one, nor does it leave the old one in peace.”

In The Glass Cage: Automation and Us, Nick Carr, one of my favorite writers, enters the debate about the impact automation has on us, “examining the personal as well as the economic consequences of our growing dependence on computers.”

We know that the nature of jobs is going to change in the future thanks to technology. Tyler Cowen argues “If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch.”

Carr’s book shows another side to the argument – the broader human consequences to living in a world where computers and software do the things we used to do.

Computer automation makes our lives easier, our chores less burdensome. We’re often able to accomplish more in less time—or to do things we simply couldn’t do before. But automation also has deeper, hidden effects. As aviators have learned, not all of them are beneficial. Automation can take a toll on our work, our talents, and our lives. It can narrow our perspectives and limit our choices. It can open us to surveillance and manipulation. As computers become our constant companions, our familiar, obliging helpmates, it seems wise to take a closer look at exactly how they’re changing what we do and who we are.

On autonomous automobiles, for example, Carr argues that while they have a ways to go before they start chauffeuring us around, there are broader questions that need to be answered first.

Although Google has said it expects commercial versions of its car to be on sale by the end of the decade, that’s probably wishful thinking. The vehicle’s sensor systems remain prohibitively expensive, with the roof-mounted laser apparatus alone going for eighty thousand dollars. Many technical challenges remain to be met, such as navigating snowy or leaf-covered roads, dealing with unexpected detours, and interpreting the hand signals of traffic cops and road workers. Even the most powerful computers still have a hard time distinguishing a bit of harmless road debris (a flattened cardboard box, say) from a dangerous obstacle (a nail-studded chunk of plywood). Most daunting of all are the many legal, cultural, and ethical hurdles a driverless car faces. Where, for instance, will culpability and liability reside should a computer-driven automobile cause an accident that kills or injures someone? With the car’s owner? With the manufacturer that installed the self-driving system? With the programmers who wrote the software? Until such thorny questions get sorted out, fully automated cars are unlikely to grace dealer showrooms.

Tacit and Explicit Knowledge

Self-driving cars are just one example of a technology that forces us “to change our thinking about what computers and robots can and can’t do.”

Up until that fateful October day, it was taken for granted that many important skills lay beyond the reach of automation. Computers could do a lot of things, but they couldn’t do everything. In an influential 2004 book, The New Division of Labor: How Computers Are Creating the Next Job Market, economists Frank Levy and Richard Murnane argued, convincingly, that there were practical limits to the ability of software programmers to replicate human talents, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed specifically to the example of driving a car on the open road, a talent that requires the instantaneous interpretation of a welter of visual signals and an ability to adapt seamlessly to shifting and often unanticipated situations. We hardly know how we pull off such a feat ourselves, so the idea that programmers could reduce all of driving’s intricacies, intangibilities, and contingencies to a set of instructions, to lines of software code, seemed ludicrous. “Executing a left turn across oncoming traffic,” Levy and Murnane wrote, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” It seemed a sure bet, to them and to pretty much everyone else, that steering wheels would remain firmly in the grip of human hands.

In assessing computers’ capabilities, economists and psychologists have long drawn on a basic distinction between two kinds of knowledge: tacit and explicit. Tacit knowledge, which is also sometimes called procedural knowledge, refers to all the stuff we do without actively thinking about it: riding a bike, snagging a fly ball, reading a book, driving a car. These aren’t innate skills—we have to learn them, and some people are better at them than others—but they can’t be expressed as a simple recipe, a sequence of precisely defined steps. When you make a turn through a busy intersection in your car, neurological studies have shown, many areas of your brain are hard at work, processing sensory stimuli, making estimates of time and distance, and coordinating your arms and legs. But if someone asked you to document everything involved in making that turn, you wouldn’t be able to, at least not without resorting to generalizations and abstractions. The ability resides deep in your nervous system outside the ambit of your conscious mind. The mental processing goes on without your awareness.

Much of our ability to size up situations and make quick judgments about them stems from the fuzzy realm of tacit knowledge. Most of our creative and artistic skills reside there too. Explicit knowledge, which is also known as declarative knowledge, is the stuff you can actually write down: how to change a flat tire, how to fold an origami crane, how to solve a quadratic equation. These are processes that can be broken down into well-defined steps. One person can explain them to another person through written or oral instructions: do this, then this, then this.

Because a software program is essentially a set of precise, written instructions—do this, then this, then this—we’ve assumed that while computers can replicate skills that depend on explicit knowledge, they’re not so good when it comes to skills that flow from tacit knowledge. How do you translate the ineffable into lines of code, into the rigid, step-by-step instructions of an algorithm? The boundary between the explicit and the tacit has always been a rough one—a lot of our talents straddle the line—but it seemed to offer a good way to define the limits of automation and, in turn, to mark out the exclusive precincts of the human. The sophisticated jobs Levy and Murnane identified as lying beyond the reach of computers—in addition to driving, they pointed to teaching and medical diagnosis—were a mix of the mental and the manual, but they all drew on tacit knowledge.

Google’s car resets the boundary between human and computer, and it does so more dramatically, more decisively, than have earlier breakthroughs in programming. It tells us that our idea of the limits of automation has always been something of a fiction. We’re not as special as we think we are. While the distinction between tacit and explicit knowledge remains a useful one in the realm of human psychology, it has lost much of its relevance to discussions of automation.

Tomorrowland

That doesn’t mean that computers now have tacit knowledge, or that they’ve started to think the way we think, or that they’ll soon be able to do everything people can do. They don’t, they haven’t, and they won’t. Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means. When a driverless car makes a left turn in traffic, it’s not tapping into a well of intuition and skill; it’s following a program. But while the strategies are different, the outcomes, for practical purposes, are the same. The superhuman speed with which computers can follow instructions, calculate probabilities, and receive and send data means that they can use explicit knowledge to perform many of the complicated tasks that we do with tacit knowledge. In some cases, the unique strengths of computers allow them to perform what we consider to be tacit skills better than we can perform them ourselves. In a world of computer-controlled cars, you wouldn’t need traffic lights or stop signs. Through the continuous, high-speed exchange of data, vehicles would seamlessly coordinate their passage through even the busiest of intersections—just as computers today regulate the flow of inconceivable numbers of data packets along the highways and byways of the internet. What’s ineffable in our own minds becomes altogether effable in the circuits of a microchip.

Many of the cognitive talents we’ve considered uniquely human, it turns out, are anything but. Once computers get quick enough, they can begin to replicate our ability to spot patterns, make judgments, and learn from experience.

It’s not only vocations that are increasingly being computerized; avocations are too.

Thanks to the proliferation of smartphones, tablets, and other small, affordable, and even wearable computers, we now depend on software to carry out many of our daily chores and pastimes. We launch apps to aid us in shopping, cooking, exercising, even finding a mate and raising a child. We follow turn-by-turn GPS instructions to get from one place to the next. We use social networks to maintain friendships and express our feelings. We seek advice from recommendation engines on what to watch, read, and listen to. We look to Google, or to Apple’s Siri, to answer our questions and solve our problems. The computer is becoming our all-purpose tool for navigating, manipulating, and understanding the world, in both its physical and its social manifestations. Just think what happens these days when people misplace their smartphones or lose their connections to the net. Without their digital assistants, they feel helpless.

As Katherine Hayles, a literature professor at Duke University, observed in her 2012 book How We Think, “When my computer goes down or my Internet connection fails, I feel lost, disoriented, unable to work—in fact, I feel as if my hands have been amputated.”

While our dependence on computers is “disconcerting at times,” we welcome it.

We’re eager to celebrate and show off our whizzy new gadgets and apps—and not only because they’re so useful and so stylish. There’s something magical about computer automation. To watch an iPhone identify an obscure song playing over the sound system in a bar is to experience something that would have been inconceivable to any previous generation.

Miswanting

The trouble with automation is “that it often gives us what we don’t need at the cost of what we do.”

To understand why that’s so, and why we’re eager to accept the bargain, we need to take a look at how certain cognitive biases—flaws in the way we think—can distort our perceptions. When it comes to assessing the value of labor and leisure, the mind’s eye can’t see straight.

Mihaly Csikszentmihalyi, a psychology professor and author of the popular 1990 book Flow, has described a phenomenon that he calls “the paradox of work.” He first observed it in a study conducted in the 1980s with his University of Chicago colleague Judith LeFevre. They recruited a hundred workers, blue-collar and white-collar, skilled and unskilled, from five businesses around Chicago. They gave each an electronic pager (this was when cell phones were still luxury goods) that they had programmed to beep at seven random moments a day over the course of a week. At each beep, the subjects would fill out a short questionnaire. They’d describe the activity they were engaged in at that moment, the challenges they were facing, the skills they were deploying, and the psychological state they were in, as indicated by their sense of motivation, satisfaction, engagement, creativity, and so forth. The intent of this “experience sampling,” as Csikszentmihalyi termed the technique, was to see how people spend their time, on the job and off, and how their activities influence their “quality of experience.”

The results were surprising. People were happier, felt more fulfilled by what they were doing, while they were at work than during their leisure hours. In their free time, they tended to feel bored and anxious. And yet they didn’t like to be at work. When they were on the job, they expressed a strong desire to be off the job, and when they were off the job, the last thing they wanted was to go back to work. “We have,” reported Csikszentmihalyi and LeFevre, “the paradoxical situation of people having many more positive feelings at work than in leisure, yet saying that they wish to be doing something else when they are at work, not when they are in leisure.” We’re terrible, the experiment revealed, at anticipating which activities will satisfy us and which will leave us discontented. Even when we’re in the midst of doing something, we don’t seem able to judge its psychic consequences accurately.

Those are symptoms of a more general affliction, on which psychologists have bestowed the poetic name miswanting. We’re inclined to desire things we don’t like and to like things we don’t desire. “When the things we want to happen do not improve our happiness, and when the things we want not to happen do,” the cognitive psychologists Daniel Gilbert and Timothy Wilson have observed, “it seems fair to say we have wanted badly.” And as slews of gloomy studies show, we’re forever wanting badly. There’s also a social angle to our tendency to misjudge work and leisure. As Csikszentmihalyi and LeFevre discovered in their experiments, and as most of us know from our own experience, people allow themselves to be guided by social conventions—in this case, the deep-seated idea that being “at leisure” is more desirable, and carries more status, than being “at work”—rather than by their true feelings. “Needless to say,” the researchers concluded, “such a blindness to the real state of affairs is likely to have unfortunate consequences for both individual wellbeing and the health of society.” As people act on their skewed perceptions, they will “try to do more of those activities that provide the least positive experiences and avoid the activities that are the source of their most positive and intense feelings.” That’s hardly a recipe for the good life.

It’s not that the work we do for pay is intrinsically superior to the activities we engage in for diversion or entertainment. Far from it. Plenty of jobs are dull and even demeaning, and plenty of hobbies and pastimes are stimulating and fulfilling. But a job imposes a structure on our time that we lose when we’re left to our own devices. At work, we’re pushed to engage in the kinds of activities that human beings find most satisfying. We’re happiest when we’re absorbed in a difficult task, a task that has clear goals and that challenges us not only to exercise our talents but to stretch them. We become so immersed in the flow of our work, to use Csikszentmihalyi’s term, that we tune out distractions and transcend the anxieties and worries that plague our everyday lives. Our usually wayward attention becomes fixed on what we’re doing. “Every action, movement, and thought follows inevitably from the previous one,” explains Csikszentmihalyi. “Your whole being is involved, and you’re using your skills to the utmost.” Such states of deep absorption can be produced by all manner of effort, from laying tile to singing in a choir to racing a dirt bike. You don’t have to be earning a wage to enjoy the transports of flow.

More often than not, though, our discipline flags and our mind wanders when we’re not on the job. We may yearn for the workday to be over so we can start spending our pay and having some fun, but most of us fritter away our leisure hours. We shun hard work and only rarely engage in challenging hobbies. Instead, we watch TV or go to the mall or log on to Facebook. We get lazy. And then we get bored and fretful. Disengaged from any outward focus, our attention turns inward, and we end up locked in what Emerson called the jail of self-consciousness. Jobs, even crummy ones, are “actually easier to enjoy than free time,” says Csikszentmihalyi, because they have the “built-in” goals and challenges that “encourage one to become involved in one’s work, to concentrate and lose oneself in it.” But that’s not what our deceiving minds want us to believe. Given the opportunity, we’ll eagerly relieve ourselves of the rigors of labor. We’ll sentence ourselves to idleness.

Automation offers us innumerable promises. Our lives, we think, will be better the more things we automate. Yet as Carr explores in The Glass Cage, automation exacts a cost, removing “complexity from jobs, diminishing the challenge they present and hence the level of engagement they promote.” This doesn’t mean that Carr is anti-automation. He’s not. He just wants us to see another side.

“All too often,” Carr warns, “automation frees us from that which makes us feel free.”

What If? Serious Scientific Answers to Absurd Hypothetical Questions

xkcd-title
Source: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

Randall Munroe, the creator of xkcd, has written a book: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

Here are a few questions I loved that are sure to spark your curiosity and imagination.

What would happen if you tried to hit a baseball pitched at 90 percent the speed of light?

xkcd-baseball 1
Source: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

The answer turns out to be “a lot of things,” and they all happen very quickly, and it doesn’t end well for the batter (or the pitcher). I sat down with some physics books, a Nolan Ryan action figure, and a bunch of videotapes of nuclear tests and tried to sort it all out. What follows is my best guess at a nanosecond-by-nanosecond portrait.

The ball would be going so fast that everything else would be practically stationary. Even the molecules in the air would stand still. Air molecules would vibrate back and forth at a few hundred miles per hour, but the ball would be moving through them at 600 million miles per hour. This means that as far as the ball is concerned, they would just be hanging there, frozen.

The ideas of aerodynamics wouldn’t apply here. Normally, air would flow around anything moving through it. But the air molecules in front of this ball wouldn’t have time to be jostled out of the way. The ball would smack into them so hard that the atoms in the air molecules would actually fuse with the atoms in the ball’s surface. Each collision would release a burst of gamma rays and scattered particles.

xkcd-baseball 2
Source: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

These gamma rays and debris would expand outward in a bubble centered on the pitcher’s mound. They would start to tear apart the molecules in the air, ripping the electrons from the nuclei and turning the air in the stadium into an expanding bubble of incandescent plasma. The wall of this bubble would approach the batter at about the speed of light—only slightly ahead of the ball itself.

The constant fusion at the front of the ball would push back on it, slowing it down, as if the ball were a rocket flying tail-first while firing its engines. Unfortunately, the ball would be going so fast that even the tremendous force from this ongoing thermonuclear explosion would barely slow it down at all. It would, however, start to eat away at the surface, blasting tiny fragments of the ball in all directions. These fragments would be going so fast that when they hit air molecules, they would trigger two or three more rounds of fusion.

After about 70 nanoseconds the ball would arrive at home plate. The batter wouldn’t even have seen the pitcher let go of the ball, since the light carrying that information would arrive at about the same time the ball would. Collisions with the air would have eaten the ball away almost completely, and it would now be a bullet-shaped cloud of expanding plasma (mainly carbon, oxygen, hydrogen, and nitrogen) ramming into the air and triggering more fusion as it went. The shell of x-rays would hit the batter first, and a handful of nanoseconds later the debris cloud would hit.

When it would reach home plate, the center of the cloud would still be moving at an appreciable fraction of the speed of light. It would hit the bat first, but then the batter, plate, and catcher would all be scooped up and carried backward through the backstop as they disintegrated. The shell of x-rays and superheated plasma would expand outward and upward, swallowing the backstop, both teams, the stands, and the surrounding neighborhood— all in the first microsecond.

Suppose you’re watching from a hilltop outside the city. The first thing you would see would be a blinding light, far outshining the sun. This would gradually fade over the course of a few seconds, and a growing fireball would rise into a mushroom cloud. Then, with a great roar, the blast wave would arrive, tearing up trees and shredding houses.

Everything within roughly a mile of the park would be leveled, and a firestorm would engulf the surrounding city. The baseball diamond, now a sizable crater, would be centered a few hundred feet behind the former location of the backstop.

xkcd-baseball3
Source: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

Major League Baseball Rule 6.08(b) suggests that in this situation, the batter would be considered “hit by pitch,” and would be eligible to advance to first base.
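Munroe’s headline numbers hold up, too. Here’s a quick back-of-the-envelope check in Python (a sketch using rounded constants and the standard 60-foot-6-inch distance from the pitcher’s rubber to home plate):

```python
# Sanity-check two figures from the excerpt: the ball's speed in mph
# and the roughly 70-nanosecond flight time from the mound to the plate.
C = 299_792_458        # speed of light, m/s
MPH_PER_MS = 2.23694   # miles per hour per meter per second

v = 0.9 * C                                               # ball speed, m/s
print(f"speed: {v * MPH_PER_MS / 1e6:.0f} million mph")   # ~604, i.e. ~600 million mph

mound_to_plate = 18.44                                    # 60 ft 6 in, in meters
print(f"flight time: {mound_to_plate / v * 1e9:.0f} ns")  # ~68 ns, "about 70 nanoseconds"
```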

***

What would happen if everyone on Earth stood as close to each other as they could and jumped, everyone landing on the ground at the same instant?

This is one of the most popular questions submitted through my website. It’s been examined before, including by ScienceBlogs and The Straight Dope. They cover the kinematics pretty well. However, they don’t tell the whole story.

Let’s take a closer look.

At the start of the scenario, the entire Earth’s population has been magically transported together into one place.

xkcd-prejump
Source: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

This crowd takes up an area the size of Rhode Island. But there’s no reason to use the vague phrase “an area the size of Rhode Island.” This is our scenario; we can be specific. They’re actually in Rhode Island.

At the stroke of noon, everyone jumps.

xkcd-jumping
Source: What If?: Serious Scientific Answers to Absurd Hypothetical Questions

As discussed elsewhere, it doesn’t really affect the planet. Earth outweighs us by a factor of over ten trillion. On average, we humans can vertically jump maybe half a meter on a good day. Even if the Earth were rigid and responded instantly, it would be pushed down by less than an atom’s width.
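Both figures check out with simple center-of-mass bookkeeping. A minimal sketch in Python, assuming a population of about seven billion averaging 55 kg (round-number assumptions, not Munroe’s exact inputs):

```python
# Rough check: Earth-to-people mass ratio, and how far Earth "dips"
# while everyone is airborne.
M_EARTH = 5.97e24      # mass of Earth, kg
m_people = 7e9 * 55    # assumed: ~7 billion people at 55 kg average

print(f"mass ratio: {M_EARTH / m_people:.1e}")  # ~1.6e13, over ten trillion

# The combined center of mass stays put, so if everyone rises h meters,
# the Earth sinks by h * (m_people / M_EARTH).
h = 0.5                # a good vertical jump, meters
dip = h * m_people / M_EARTH
print(f"Earth dips: {dip:.1e} m")               # ~3e-14 m, far below an atom's ~1e-10 m
```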

Next, everyone falls back to the ground.

Technically, this delivers a lot of energy into the Earth, but it’s spread out over a large enough area that it doesn’t do much more than leave footprints in a lot of gardens. A slight pulse of pressure spreads through the North American continental crust and dissipates with little effect. The sound of all those feet hitting the ground creates a loud, drawn-out roar lasting many seconds.

Eventually, the air grows quiet.

Seconds pass. Everyone looks around. There are a lot of uncomfortable glances. Someone coughs.

A cell phone comes out of a pocket. Within seconds, the rest of the world’s five billion phones follow. All of them—even those compatible with the region’s towers—are displaying some version of “NO SIGNAL.” The cell networks have all collapsed under the unprecedented load. Outside Rhode Island, abandoned machinery begins grinding to a halt.

The T. F. Green Airport in Warwick, Rhode Island, handles a few thousand passengers a day. Assuming they got things organized (including sending out scouting missions to retrieve fuel), they could run at 500 percent capacity for years without making a dent in the crowd.
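If anything, “years” undersells it. A rough check, assuming “a few thousand passengers” means about 3,000 a day:

```python
# How long would one airport at 500 percent capacity take to move everyone?
population = 7e9
daily = 3_000          # assumed: "a few thousand passengers a day"
boosted = daily * 5    # 500 percent capacity

years = population / boosted / 365
print(f"about {years:,.0f} years to fly everyone out")  # on the order of 1,300 years
```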

The addition of all the nearby airports doesn’t change the equation much. Nor does the region’s light rail system. Crowds climb on board container ships in the deep-water port of Providence, but stocking sufficient food and water for a long sea voyage proves a challenge.

Rhode Island’s half-million cars are commandeered. Moments later, I-95, I-195, and I-295 become the sites of the largest traffic jam in the history of the planet. Most of the cars are engulfed by the crowds, but a lucky few get out and begin wandering the abandoned road network.

Some make it past New York or Boston before running out of fuel. Since the electricity is probably not on at this point, rather than find a working gas pump, it’s easier to just abandon the car and steal a new one. Who can stop you? All the cops are in Rhode Island.

The edge of the crowd spreads outward into southern Massachusetts and Connecticut. Any two people who meet are unlikely to have a language in common, and almost nobody knows the area. The state becomes a chaotic patchwork of coalescing and collapsing social hierarchies. Violence is common. Everybody is hungry and thirsty. Grocery stores are emptied. Fresh water is hard to come by and there’s no efficient system for distributing it.

Within weeks, Rhode Island is a graveyard of billions.

The survivors spread out across the face of the world and struggle to build a new civilization atop the pristine ruins of the old. Our species staggers on, but our population has been greatly reduced. Earth’s orbit is completely unaffected—it spins along exactly as it did before our species-wide jump.

But at least now we know.

What If?: Serious Scientific Answers to Absurd Hypothetical Questions is sure to spark your imagination and reignite your creativity.

Eight Things I Learned from Peter Thiel’s Zero To One

Peter Thiel is an entrepreneur and investor. He co-founded PayPal and Palantir. He also made the first outside investment in Facebook and was an early investor in companies like SpaceX and LinkedIn. And now he’s written a book, Zero to One: Notes on Startups, or How to Build the Future, with the goal of helping us “see beyond the tracks laid down” to the “broader future that there is to create.”

Zero To One is an exercise in thinking — about questioning and rethinking received wisdom to create the future. And thinking about thinking is what we’re all about.

Here are eight lessons I took away from the book.

1. Each Moment Happens Once

Like Heraclitus, who said that you can only step into the same river once, Thiel believes that each moment in business happens only once.

The next Bill Gates will not build an operating system. The next Larry Page or Sergey Brin won’t make a search engine. And the next Mark Zuckerberg won’t create a social network. If you are copying these guys, you aren’t learning from them.

Of course, it’s easier to copy a model than to make something new. Doing what we already know how to do takes the world from 1 to n, adding more of something familiar. But every time we create something new, we go from 0 to 1. The act of creation is singular, as is the moment of creation, and the result is something fresh and strange.

2. There Is No Formula

The paradox of teaching entrepreneurship is that such a formula (for innovation) cannot exist; because every innovation is new and unique, no authority can prescribe in concrete terms how to be more innovative. Indeed, the single most powerful pattern I have noticed is that successful people find value in unexpected places, and they do this by thinking about business from first principles instead of formulas.

3. The Best Interview Question

Whenever I interview someone for a job, I like to ask this question: “What important truth do very few people agree with you on?”

This is a question that sounds easy because it’s straightforward. Actually, it’s very hard to answer. It’s intellectually difficult because the knowledge that everyone is taught in school is by definition agreed upon. And it’s psychologically difficult because anyone trying to answer must say something she knows to be unpopular. Brilliant thinking is rare, but courage is in even shorter supply than genius.

Most commonly, I hear answers like the following:

“Our educational system is broken and urgently needs to be fixed.”

“America is exceptional.”

“There is no God.”

These are bad answers. The first and the second statements might be true, but many people already agree with them. The third statement simply takes one side in a familiar debate. A good answer takes the following form: “Most people believe in x, but the truth is the opposite of x.”

What does this have to do with the future?

In the most minimal sense, the future is simply the set of all moments yet to come. But what makes the future distinctive and important isn’t that it hasn’t happened yet, but rather that it will be a time when the world looks different from today. … Most answers to the contrarian questions are different ways of seeing the present; good answers are as close as we can come to looking into the future.

4. A Company’s Most Important Strength

Properly defined, a startup is the largest group of people you can convince of a plan to build a different future. A new company’s most important strength is new thinking: even more important than nimbleness, small size affords space to think.

“Madness is rare in individuals—but in groups, parties, nations, and ages it is the rule.”

— Nietzsche

5. The Contrarian Question

The question “What important truth do very few people agree with you on?” is hard to answer at first. It’s better to start with: “What does everybody agree on?”

If you can identify a delusional popular belief, you can find what lies hidden behind it: the contrarian truth.

[…]

Conventional beliefs only ever come to appear arbitrary and wrong in retrospect; whenever one collapses we call the old belief a bubble, but the distortions caused by bubbles don’t disappear when they pop. The internet bubble of the ‘90s was the biggest of the last two decades, and the lessons learned afterward define and distort almost all thinking about technology today. The first step to thinking clearly is to question what we think we know about the past.

Here is an example Thiel gives to help illuminate this idea.

The entrepreneurs who stuck with Silicon Valley learned four big lessons from the dot-com crash that still guide business thinking today:

1. Make incremental advances — “Grand visions inflated the bubble, so they should not be indulged. Anyone who claims to be able to do something great is suspect, and anyone who wants to change the world should be more humble. Small, incremental steps are the only safe path forward.”

2. Stay lean and flexible — “All companies must be lean, which is code for unplanned. You should not know what your business will do; planning is arrogant and inflexible. Instead you should try things out, iterate, and treat entrepreneurship as agnostic experimentation.”

3. Improve on the competition — “Don’t try to create a new market prematurely. The only way to know that you have a real business is to start with an already existing customer, so you should build your company by improving on recognizable products already offered by successful competitors.”

4. Focus on product, not sales — “If your product requires advertising or salespeople to sell it, it’s not good enough: technology is primarily about product development, not distribution. Bubble-era advertising was obviously wasteful, so the only sustainable growth is viral growth.”

These lessons have become dogma in the startup world; those who would ignore them are presumed to invite the justified doom visited upon technology in the great crash of 2000. And yet the opposite principles are probably more correct.

1. It is better to risk boldness than triviality.
2. A bad plan is better than no plan.
3. Competitive markets destroy profits.
4. Sales matters just as much as product.

To build the future we need to challenge the dogmas that shape our view of the past. That doesn’t mean the opposite of what is believed is necessarily true; it means that you need to rethink what is and is not true and determine how that shapes how we see the world today. As Thiel says, “The most contrarian thing of all is not to oppose the crowd but to think for yourself.”

6. Progress Comes From Monopoly, Not Competition

The problem with a competitive business goes beyond lack of profits. Imagine you’re running one of those restaurants in Mountain View. You’re not that different from dozens of your competitors, so you’ve got to fight hard to survive. If you offer affordable food with low margins, you can probably pay employees only minimum wage. And you’ll need to squeeze out every efficiency: That is why small restaurants put Grandma to work at the register and make the kids wash dishes in the back.

A monopoly like Google is different. Since it doesn’t have to worry about competing with anyone, it has wider latitude to care about its workers, its products and its impact on the wider world. Google’s motto—”Don’t be evil”—is in part a branding ploy, but it is also characteristic of a kind of business that is successful enough to take ethics seriously without jeopardizing its own existence. In business, money is either an important thing or it is everything. Monopolists can afford to think about things other than making money; non-monopolists can’t. In perfect competition, a business is so focused on today’s margins that it can’t possibly plan for a long-term future. Only one thing can allow a business to transcend the daily brute struggle for survival: monopoly profits.

So a monopoly is good for everyone on the inside, but what about everyone on the outside? Do outsize profits come at the expense of the rest of society? Actually, yes: Profits come out of customers’ wallets, and monopolies deserve their bad reputation—but only in a world where nothing changes.

In a static world, a monopolist is just a rent collector. If you corner the market for something, you can jack up the price; others will have no choice but to buy from you. Think of the famous board game: Deeds are shuffled around from player to player, but the board never changes. There is no way to win by inventing a better kind of real-estate development. The relative values of the properties are fixed for all time, so all you can do is try to buy them up.

But the world we live in is dynamic: We can invent new and better things. Creative monopolists give customers more choices by adding entirely new categories of abundance to the world. Creative monopolies aren’t just good for the rest of society; they’re powerful engines for making it better.

7. Rivalry Causes Us to Copy the Past

Marx and Shakespeare provide two models that we can use to understand almost every kind of conflict.

According to Marx, people fight because they are different. The proletariat fights the bourgeoisie because they have completely different ideas and goals (generated, for Marx, by their very different material circumstances). The greater the difference, the greater the conflict.

To Shakespeare, by contrast, all combatants look more or less alike. It’s not at all clear why they should be fighting since they have nothing to fight about. Consider the opening to Romeo and Juliet: “Two households, both alike in dignity.” The two houses are alike, yet they hate each other. They grow even more similar as the feud escalates. Eventually, they lose sight of why they started fighting in the first place.

In the world of business, at least, Shakespeare proves the superior guide. Inside a firm, people become obsessed with their competitors for career advancement. Then the firms themselves become obsessed with their competitors in the marketplace. Amid all the human drama, people lose sight of what matters and focus on their rivals instead.

[…]

Rivalry causes us to overemphasize old opportunities and slavishly copy what has worked in the past.

8. Last Can Be First

You’ve probably heard about “first mover advantage”: if you’re the first entrant into a market, you can capture significant market share while competitors scramble to get started. That can work, but moving first is a tactic, not a goal. What really matters is generating cash flows in the future, so being the first mover doesn’t do you any good if someone else comes along and unseats you. It’s much better to be the last mover — that is, to make the last great development in a specific market and enjoy years or even decades of monopoly profits.

Chess grandmaster José Raúl Capablanca put it well: to succeed, “you must study the endgame before everything else.”

Zero to One is full of counter-intuitive insights that will help your thinking and ignite possibility.

John Keats on the Quality That Formed a Man of Achievement: Negative Capability

John Keats coined the term negative capability to describe the willingness to embrace uncertainty, mysteries and doubts.

The first and only time Keats used the phrase was in a letter of 21 December 1817 to his brothers, in reference to his disagreement with the English poet and philosopher Coleridge, whom Keats believed “sought knowledge over beauty.”

I had not a dispute but a disquisition with Dilke, upon various subjects; several things dove-tailed in my mind, and at once it struck me what quality went to form a Man of Achievement, especially in Literature, and which Shakespeare possessed so enormously – I mean Negative Capability, that is, when a man is capable of being in uncertainties, mysteries, doubts, without any irritable reaching after fact and reason – Coleridge, for instance, would let go by a fine isolated verisimilitude caught from the Penetralium of mystery, from being incapable of remaining content with half-knowledge. This pursued through volumes would perhaps take us no further than this, that with a great poet the sense of Beauty overcomes every other consideration, or rather obliterates all consideration.

From Wikipedia:

Keats understood Coleridge as searching for a single, higher-order truth or solution to the mysteries of the natural world. He went on to find the same fault in Dilke and Wordsworth. All these poets, he claimed, lacked objectivity and universality in their view of the human condition and the natural world. In each case, Keats found a mind which was a narrow private path, not a “thoroughfare for all thoughts.” Lacking for Keats were the central and indispensable qualities requisite for flexibility and openness to the world, or what he referred to as negative capability.

This concept of Negative Capability is precisely a rejection of set philosophies and preconceived systems of nature. Keats demanded that the poet be receptive rather than searching for fact or reason, and not seek absolute knowledge of every truth, mystery, or doubt.

The origin of the term is unknown, but some scholars have hypothesized that Keats was influenced by his studies of medicine and chemistry, and that the term refers to the negative pole of an electric current, which is passive and receptive. In the same way that the negative pole receives the current from the positive pole, the poet receives impulses from a world that is full of mystery and doubt, which cannot be explained but which the poet can translate into art.

Although this was the only time that Keats used the term, this view of aesthetics and rejection of a rationalizing tendency has influenced much commentary on Romanticism and the tenets of human experience.

For the twentieth-century British psychoanalyst Wilfred Bion, negative capability “was the ability to tolerate the pain and confusion of not knowing, rather than imposing ready-made or omnipotent certainties upon an ambiguous situation or emotional challenge.”

If you’re still curious, I recommend reading this thesis on Negative Capability and Wise Passiveness.

Steve Jobs on Creativity

“Originality depends on new and striking combinations of ideas.”
— Rosamund Harding

In a beautiful article for The Atlantic, Nancy Andreasen, a neuroscientist who has spent decades studying creativity, writes:

[C]reative people are better at recognizing relationships, making associations and connections, and seeing things in an original way—seeing things that others cannot see. … Having too many ideas can be dangerous. Part of what comes with seeing connections no one else sees is that not all of these connections actually exist.

The same point of view is offered by James Webb Young, who, many years earlier, wrote:

An idea is nothing more nor less than a new combination of old elements [and] the capacity to bring old elements into new combinations depends largely on the ability to see relationships.

A lot of creative luminaries think about creativity in the same way. Steve Jobs had a lot to say about creativity.

In I, Steve: Steve Jobs in His Own Words, editor George Beahm draws on more than 30 years of media coverage of Steve Jobs in order to find Jobs’ most thought-provoking insights on many aspects of life and creativity.

In one particularly notable excerpt Jobs says:

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people. Unfortunately, that’s too rare a commodity. A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.

The more you learn, the more things you can connect. This becomes an argument for a broad-based education. In his 2005 Stanford commencement address, Jobs makes the case for learning things that, at the time, may not offer the most practical benefit. Over time, however, these things add up to give you a broader base of knowledge from which to connect ideas:

Throughout the campus every poster, every label on every drawer, was beautifully hand calligraphed. Because I had dropped out and didn’t have to take the normal classes, I decided to take a calligraphy class to learn how to do this. I learned about serif and sans serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can’t capture, and I found it fascinating.

None of this had even a hope of any practical application in my life. But ten years later, when we were designing the first Macintosh computer, it all came back to me.

While education is important for building up a repository of knowledge from which you can connect things, it’s not enough. You need broad life experiences as well.

I, Steve: Steve Jobs in His Own Words is full of things that will make you think.

Paula Scher on Process versus Outcome

Iconic typography designer Paula Scher discusses her creative process, including the famous Citi logo. Interestingly, the idea came to her in seconds, and that presented a problem for the client. They wanted to buy a process, not an outcome. Scher’s process is very much one of combinatory creativity, whereby she combines existing things in new ways.

A lot of clients like to buy process. It’s like they think they are not getting their money’s worth because I solved it too fast.

[…]

How can it be that you talk to someone and it’s done in a second? But it IS done in a second — it’s done in a second and 34 years. It’s done in a second and every experience, and every movie, and everything in my life that’s in my head.

This reminds me of an old story with many variations. Here is one version.

A giant ship engine failed. The ship’s owners tried one expert after another, but none of them could figure out how to fix the engine.

Then they brought in an old man who had been fixing ships since he was a young [boy]. He carried a large bag of tools with him, and when he arrived, he immediately went to work. He inspected the engine very carefully, top to bottom.

Two of the ship’s owners were there, watching this man, hoping he would know what to do. After looking things over, the old man reached into his bag and pulled out a small hammer. He gently tapped something. Instantly, the engine lurched into life. He carefully put his hammer away. The engine was fixed!

A week later, the owners received a bill from the old man for ten thousand dollars.

“What?!” the owners exclaimed. “He hardly did anything!” So they wrote the old man a note saying, “Please send us an itemised bill.”

The man sent a bill that read:
Tapping with a hammer………………….. $2.00
Knowing where to tap…………………….. $9,998.00

*Effort is important, but knowing where to make an effort makes all the difference!*