Tag: Nicholas Carr

The Glass Cage: Automation and Us


The impact of technology is all around us. Maybe we’re at another Gutenberg moment and maybe we’re not.

Marshall McLuhan said it best.

When any new form comes into the foreground of things, we naturally look at it through the old stereos. We can’t help that. This is normal, and we’re still trying to see how will our previous forms of political and educational patterns persist under television. We’re just trying to fit the old things into the new form, instead of asking what is the new form going to do to all the assumptions we had before.

He also wrote that “a new medium is never an addition to an old one, nor does it leave the old one in peace.”

In The Glass Cage: Automation and Us, Nick Carr, one of my favorite writers, enters the debate about the impact automation has on us, “examining the personal as well as the economic consequences of our growing dependence on computers.”

We know that the nature of jobs is going to change in the future thanks to technology. Tyler Cowen argues “If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch.”

Carr’s book shows another side to the argument – the broader human consequences of living in a world where computers and software do the things we used to do.

Computer automation makes our lives easier, our chores less burdensome. We’re often able to accomplish more in less time—or to do things we simply couldn’t do before. But automation also has deeper, hidden effects. As aviators have learned, not all of them are beneficial. Automation can take a toll on our work, our talents, and our lives. It can narrow our perspectives and limit our choices. It can open us to surveillance and manipulation. As computers become our constant companions, our familiar, obliging helpmates, it seems wise to take a closer look at exactly how they’re changing what we do and who we are.

On the autonomous automobile, for example, Carr argues that while such cars have a ways to go before they start chauffeuring us around, there are broader questions that need to be answered first.

Although Google has said it expects commercial versions of its car to be on sale by the end of the decade, that’s probably wishful thinking. The vehicle’s sensor systems remain prohibitively expensive, with the roof-mounted laser apparatus alone going for eighty thousand dollars. Many technical challenges remain to be met, such as navigating snowy or leaf-covered roads, dealing with unexpected detours, and interpreting the hand signals of traffic cops and road workers. Even the most powerful computers still have a hard time distinguishing a bit of harmless road debris (a flattened cardboard box, say) from a dangerous obstacle (a nail-studded chunk of plywood). Most daunting of all are the many legal, cultural, and ethical hurdles a driverless car faces. Where, for instance, will culpability and liability reside should a computer-driven automobile cause an accident that kills or injures someone? With the car’s owner? With the manufacturer that installed the self-driving system? With the programmers who wrote the software? Until such thorny questions get sorted out, fully automated cars are unlikely to grace dealer showrooms.

Tacit and Explicit Knowledge

Self-driving cars are just one example of a technology that forces us “to change our thinking about what computers and robots can and can’t do.”

Up until that fateful October day, it was taken for granted that many important skills lay beyond the reach of automation. Computers could do a lot of things, but they couldn’t do everything. In an influential 2004 book, The New Division of Labor: How Computers Are Creating the Next Job Market, economists Frank Levy and Richard Murnane argued, convincingly, that there were practical limits to the ability of software programmers to replicate human talents, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed specifically to the example of driving a car on the open road, a talent that requires the instantaneous interpretation of a welter of visual signals and an ability to adapt seamlessly to shifting and often unanticipated situations. We hardly know how we pull off such a feat ourselves, so the idea that programmers could reduce all of driving’s intricacies, intangibilities, and contingencies to a set of instructions, to lines of software code, seemed ludicrous. “Executing a left turn across oncoming traffic,” Levy and Murnane wrote, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” It seemed a sure bet, to them and to pretty much everyone else, that steering wheels would remain firmly in the grip of human hands.

In assessing computers’ capabilities, economists and psychologists have long drawn on a basic distinction between two kinds of knowledge: tacit and explicit. Tacit knowledge, which is also sometimes called procedural knowledge, refers to all the stuff we do without actively thinking about it: riding a bike, snagging a fly ball, reading a book, driving a car. These aren’t innate skills—we have to learn them, and some people are better at them than others—but they can’t be expressed as a simple recipe, a sequence of precisely defined steps. When you make a turn through a busy intersection in your car, neurological studies have shown, many areas of your brain are hard at work, processing sensory stimuli, making estimates of time and distance, and coordinating your arms and legs. But if someone asked you to document everything involved in making that turn, you wouldn’t be able to, at least not without resorting to generalizations and abstractions. The ability resides deep in your nervous system outside the ambit of your conscious mind. The mental processing goes on without your awareness.

Much of our ability to size up situations and make quick judgments about them stems from the fuzzy realm of tacit knowledge. Most of our creative and artistic skills reside there too. Explicit knowledge, which is also known as declarative knowledge, is the stuff you can actually write down: how to change a flat tire, how to fold an origami crane, how to solve a quadratic equation. These are processes that can be broken down into well-defined steps. One person can explain them to another person through written or oral instructions: do this, then this, then this.

Because a software program is essentially a set of precise, written instructions—do this, then this, then this—we’ve assumed that while computers can replicate skills that depend on explicit knowledge, they’re not so good when it comes to skills that flow from tacit knowledge. How do you translate the ineffable into lines of code, into the rigid, step-by-step instructions of an algorithm? The boundary between the explicit and the tacit has always been a rough one—a lot of our talents straddle the line—but it seemed to offer a good way to define the limits of automation and, in turn, to mark out the exclusive precincts of the human. The sophisticated jobs Levy and Murnane identified as lying beyond the reach of computers—in addition to driving, they pointed to teaching and medical diagnosis—were a mix of the mental and the manual, but they all drew on tacit knowledge.
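To make the distinction concrete, here is a minimal sketch (an illustration of my own, not an example from Carr’s book) of explicit knowledge rendered as code: the quadratic formula from the list above, reduced to a recipe of do this, then this, then this. There is no comparably tidy recipe for judging a gap in oncoming traffic.

```python
import math

def solve_quadratic(a, b, c):
    """Explicit knowledge as a recipe: a fixed sequence of well-defined steps."""
    if a == 0:
        raise ValueError("a must be non-zero for a quadratic equation")
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return ()  # no real roots
    root = math.sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)
```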

Google’s car resets the boundary between human and computer, and it does so more dramatically, more decisively, than have earlier breakthroughs in programming. It tells us that our idea of the limits of automation has always been something of a fiction. We’re not as special as we think we are. While the distinction between tacit and explicit knowledge remains a useful one in the realm of human psychology, it has lost much of its relevance to discussions of automation.

Tomorrowland

That doesn’t mean that computers now have tacit knowledge, or that they’ve started to think the way we think, or that they’ll soon be able to do everything people can do. They don’t, they haven’t, and they won’t. Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means. When a driverless car makes a left turn in traffic, it’s not tapping into a well of intuition and skill; it’s following a program. But while the strategies are different, the outcomes, for practical purposes, are the same. The superhuman speed with which computers can follow instructions, calculate probabilities, and receive and send data means that they can use explicit knowledge to perform many of the complicated tasks that we do with tacit knowledge. In some cases, the unique strengths of computers allow them to perform what we consider to be tacit skills better than we can perform them ourselves. In a world of computer-controlled cars, you wouldn’t need traffic lights or stop signs. Through the continuous, high-speed exchange of data, vehicles would seamlessly coordinate their passage through even the busiest of intersections—just as computers today regulate the flow of inconceivable numbers of data packets along the highways and byways of the internet. What’s ineffable in our own minds becomes altogether effable in the circuits of a microchip.
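To get a feel for what that kind of machine coordination could look like, here is a toy sketch of my own (not anything Google has built or Carr describes): an intersection “manager” that hands each approaching car the earliest crossing slot that doesn’t overlap with one already granted, so no lights or stop signs are needed.

```python
from dataclasses import dataclass, field

@dataclass
class IntersectionManager:
    """Toy slot-based coordination: cars request a crossing time and get
    the earliest slot that doesn't collide with a previously granted one."""
    slot_seconds: float = 2.0
    next_free: float = 0.0
    schedule: list = field(default_factory=list)

    def request_slot(self, car_id: str, arrival_time: float) -> float:
        start = max(arrival_time, self.next_free)  # wait only if the intersection is booked
        self.next_free = start + self.slot_seconds
        self.schedule.append((car_id, start))
        return start

manager = IntersectionManager()
for car, arrival in [("A", 0.0), ("B", 0.5), ("C", 4.0)]:
    granted = manager.request_slot(car, arrival)
    print(f"car {car} arrives at t={arrival:.1f}s, crosses at t={granted:.1f}s")
```

Real vehicle coordination would involve far messier sensing, trajectories, and failure handling; the point is only that the logic is explicit all the way down, with no tacit judgment anywhere in the loop.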

Many of the cognitive talents we’ve considered uniquely human, it turns out, are anything but. Once computers get quick enough, they can begin to replicate our ability to spot patterns, make judgments, and learn from experience.

It’s not only vocations that are increasingly being computerized; avocations are too.

Thanks to the proliferation of smartphones, tablets, and other small, affordable, and even wearable computers, we now depend on software to carry out many of our daily chores and pastimes. We launch apps to aid us in shopping, cooking, exercising, even finding a mate and raising a child. We follow turn-by-turn GPS instructions to get from one place to the next. We use social networks to maintain friendships and express our feelings. We seek advice from recommendation engines on what to watch, read, and listen to. We look to Google, or to Apple’s Siri, to answer our questions and solve our problems. The computer is becoming our all-purpose tool for navigating, manipulating, and understanding the world, in both its physical and its social manifestations. Just think what happens these days when people misplace their smartphones or lose their connections to the net. Without their digital assistants, they feel helpless.

As Katherine Hayles, a literature professor at Duke University, observed in her 2012 book How We Think, “When my computer goes down or my Internet connection fails, I feel lost, disoriented, unable to work—in fact, I feel as if my hands have been amputated.”

While our dependency on computers is “disconcerting at times,” we welcome it.

We’re eager to celebrate and show off our whizzy new gadgets and apps—and not only because they’re so useful and so stylish. There’s something magical about computer automation. To watch an iPhone identify an obscure song playing over the sound system in a bar is to experience something that would have been inconceivable to any previous generation.

Miswanting

The trouble with automation is “that it often gives us what we don’t need at the cost of what we do.”

To understand why that’s so, and why we’re eager to accept the bargain, we need to take a look at how certain cognitive biases—flaws in the way we think—can distort our perceptions. When it comes to assessing the value of labor and leisure, the mind’s eye can’t see straight.

Mihaly Csikszentmihalyi, a psychology professor and author of the popular 1990 book Flow, has described a phenomenon that he calls “the paradox of work.” He first observed it in a study conducted in the 1980s with his University of Chicago colleague Judith LeFevre. They recruited a hundred workers, blue-collar and white-collar, skilled and unskilled, from five businesses around Chicago. They gave each an electronic pager (this was when cell phones were still luxury goods) that they had programmed to beep at seven random moments a day over the course of a week. At each beep, the subjects would fill out a short questionnaire. They’d describe the activity they were engaged in at that moment, the challenges they were facing, the skills they were deploying, and the psychological state they were in, as indicated by their sense of motivation, satisfaction, engagement, creativity, and so forth. The intent of this “experience sampling,” as Csikszentmihalyi termed the technique, was to see how people spend their time, on the job and off, and how their activities influence their “quality of experience.”

The results were surprising. People were happier, felt more fulfilled by what they were doing, while they were at work than during their leisure hours. In their free time, they tended to feel bored and anxious. And yet they didn’t like to be at work. When they were on the job, they expressed a strong desire to be off the job, and when they were off the job, the last thing they wanted was to go back to work. “We have,” reported Csikszentmihalyi and LeFevre, “the paradoxical situation of people having many more positive feelings at work than in leisure, yet saying that they wish to be doing something else when they are at work, not when they are in leisure.” We’re terrible, the experiment revealed, at anticipating which activities will satisfy us and which will leave us discontented. Even when we’re in the midst of doing something, we don’t seem able to judge its psychic consequences accurately.

Those are symptoms of a more general affliction, on which psychologists have bestowed the poetic name miswanting. We’re inclined to desire things we don’t like and to like things we don’t desire. “When the things we want to happen do not improve our happiness, and when the things we want not to happen do,” the cognitive psychologists Daniel Gilbert and Timothy Wilson have observed, “it seems fair to say we have wanted badly.” And as slews of gloomy studies show, we’re forever wanting badly. There’s also a social angle to our tendency to misjudge work and leisure. As Csikszentmihalyi and LeFevre discovered in their experiments, and as most of us know from our own experience, people allow themselves to be guided by social conventions—in this case, the deep-seated idea that being “at leisure” is more desirable, and carries more status, than being “at work”—rather than by their true feelings. “Needless to say,” the researchers concluded, “such a blindness to the real state of affairs is likely to have unfortunate consequences for both individual wellbeing and the health of society.” As people act on their skewed perceptions, they will “try to do more of those activities that provide the least positive experiences and avoid the activities that are the source of their most positive and intense feelings.” That’s hardly a recipe for the good life.

It’s not that the work we do for pay is intrinsically superior to the activities we engage in for diversion or entertainment. Far from it. Plenty of jobs are dull and even demeaning, and plenty of hobbies and pastimes are stimulating and fulfilling. But a job imposes a structure on our time that we lose when we’re left to our own devices. At work, we’re pushed to engage in the kinds of activities that human beings find most satisfying. We’re happiest when we’re absorbed in a difficult task, a task that has clear goals and that challenges us not only to exercise our talents but to stretch them. We become so immersed in the flow of our work, to use Csikszentmihalyi’s term, that we tune out distractions and transcend the anxieties and worries that plague our everyday lives. Our usually wayward attention becomes fixed on what we’re doing. “Every action, movement, and thought follows inevitably from the previous one,” explains Csikszentmihalyi. “Your whole being is involved, and you’re using your skills to the utmost.” Such states of deep absorption can be produced by all manner of effort, from laying tile to singing in a choir to racing a dirt bike. You don’t have to be earning a wage to enjoy the transports of flow.

More often than not, though, our discipline flags and our mind wanders when we’re not on the job. We may yearn for the workday to be over so we can start spending our pay and having some fun, but most of us fritter away our leisure hours. We shun hard work and only rarely engage in challenging hobbies. Instead, we watch TV or go to the mall or log on to Facebook. We get lazy. And then we get bored and fretful. Disengaged from any outward focus, our attention turns inward, and we end up locked in what Emerson called the jail of self-consciousness. Jobs, even crummy ones, are “actually easier to enjoy than free time,” says Csikszentmihalyi, because they have the “built-in” goals and challenges that “encourage one to become involved in one’s work, to concentrate and lose oneself in it.” But that’s not what our deceiving minds want us to believe. Given the opportunity, we’ll eagerly relieve ourselves of the rigors of labor. We’ll sentence ourselves to idleness.

Automation offers us innumerable promises. Our lives, we think, will be better if more things are automated. Yet as Carr explores in The Glass Cage, automation exacts a cost, removing “complexity from jobs, diminishing the challenge they present and hence the level of engagement they promote.” This doesn’t mean that Carr is anti-automation. He’s not. He just wants us to see another side.

“All too often,” Carr warns, “automation frees us from that which makes us feel free.”

The Impoverishment of Attention

“While the link between attention and excellence remains hidden most of the time, it ripples through almost everything we seek to accomplish.”

***

Focus matters enormously for success in life and yet we seem to give it little attention.

Daniel Goleman’s book, Focus: The Hidden Driver of Excellence, explores the power of attention. “Attention works much like a muscle,” he writes, “use it poorly and it can wither; work it well and it grows.”

To get the results we want in life, Goleman argues we need three kinds of focus: inner, other, and outer.

Inner focus attunes us to our intuitions, guiding values, and better decisions. Other focus smooths our connections to the people in our lives. And outer focus lets us navigate in the larger world. A (person) tuned out of his internal world will be rudderless; one blind to the world of others will be clueless; those indifferent to the larger systems within which they operate will be blindsided.

How we deploy attention shapes what we see. Or as Yoda says, “Your focus is your reality.”

Goleman argues that, despite the advantages of everything being only a click away, our attention span is suffering.

An eighth-grade teacher tells me that for many years she has had successive classes of students read the same book, Edith Hamilton’s Mythology. Her students have loved it— until five years or so ago. “I started to see kids not so excited— even high-achieving groups could not get engaged with it,” she told me. “They say the reading is too hard; the sentences are too complicated; it takes a long time to read a page.”

She wonders if perhaps her students’ ability to read has been somehow compromised by the short, choppy messages they get in texts. One student confessed he’d spent two thousand hours in the last year playing video games. She adds, “It’s hard to teach comma rules when you are competing with World of Warcraft.”

Here is a telling story. I was in a coffee shop just the other day and noticed that two people having a conversation couldn’t go more than a few minutes without picking up their phones. Our inability to resist checking email, Facebook, and Twitter rather than focusing on the here and now leaves us in a real-life out-of-office state. The sociologist Erving Goffman called this going “away,” a gesture that tells other people “I’m not interested in you right now.”

We continually fight distractions: televisions on during supper, text messages, emails, phone calls … you get the picture. This is one reason I’ve changed my media consumption habits.

It feels like we’re going through life in a state of “continuous partial attention.” We’re there but not really there. Unaware of where we place our attention. Unconscious about how we live.

I once worked with the CEO of a private organization. We often discussed board meetings, agendas, and other areas of time allocation. I sensed a disconnect between where he wanted to spend his time and what he actually spent time on.

To verify, I went back over the last year of board meetings and categorized each scheduled agenda item. I found a substantial mismatch; he was spending a great deal of time on issues he thought were not important. In fact, the ‘scheduled time’ was almost the complete inverse of what he wanted to focus on.

Goleman also points to some of the implications of our modern world.

The onslaught of incoming data leads to sloppy shortcuts, like triaging email by heading, skipping much of voice mails, skimming messages and memos. It’s not just that we’ve developed habits of attention that make us less effective, but that the weight of messages leaves us too little time simply to reflect on what they really mean.

In 1977, foreseeing what was going to happen, the Nobel-winning economist Herbert Simon wrote:

What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

William James, a pioneer of modern psychology, defined attention as “the sudden taking possession by the mind, in clear and vivid form, of one of what seems several simultaneously possible objects or trains of thought.”

We naturally focus when we’re lost. Think back to the last time you were driving without your GPS and got lost. What was the first thing you did in response? I bet you turned off the radio so you could increase your focus.

Goleman, paraphrasing research, argues there are two main varieties of distractions: sensory and emotional.

The sensory distractors are easy: as you read these words you’re tuning out (our sponsor and all of the text on the right). Or notice for a moment the feeling of your tongue against your upper palate—just one of an endless wave of incoming stimuli your brain weeds out from the continuous wash of background sounds, shapes and colors, tastes, smells, sensations, and on and on.

More daunting is the second variety of lures: emotionally loaded signals. While you might find it easy to concentrate on answering your email in the hubbub of your local coffee shop, if you should overhear someone mention your name (potent emotional bait, that) it’s almost impossible to tune out the voice that carries it— your attention reflexively alerts to hear what’s being said about you. Forget that email. The dividing line between fruitless rumination and productive reflection lies in whether or not we come up with some tentative solution or insight and then can let those distressing thoughts go—or if, on the other hand, we just keep obsessing over the same loop of worry.

The more our focus gets disrupted, the worse we do.

To focus we must tune out emotional distractions. But not at all costs. The power to disengage focus is also important.

That means those who focus best are relatively immune to emotional turbulence, more able to stay unflappable in a crisis and to keep on an even keel despite life’s emotional waves.

Failure to drop one focus and move on to others can, for example, leave the mind lost in repeating loops of chronic anxiety. At clinical extremes it means being lost in helplessness, hopelessness, and self-pity in depression; or panic and catastrophizing in anxiety disorders; or countless repetitions of ritualistic thoughts or acts (touch the door fifty times before leaving) in obsessive-compulsive disorder. The power to disengage our attention from one thing and move it to another is essential for well-being.

We’ve all seen what a strong selective focus looks like. It’s the couple in the coffee shop mentioned above, eyes locked, who fail to realize they are not alone.

It should come as no surprise that we learn best with focused attention.

As we focus on what we are learning, the brain maps that information on what we already know, making new neural connections. If you and a small toddler share attention toward something as you name it, the toddler learns that name; if her focus wanders as you say it, she won’t.

When our mind wanders off, our brain activates a host of brain circuits that chatter about things that have nothing to do with what we’re trying to learn. Lacking focus, we store no crisp memory of what we’re learning.

Goleman goes on to discuss how we connect what we read to our mental models, which is the heart of learning.

As we read a book, a blog, or any narrative, our mind constructs a mental model that lets us make sense of what we are reading and connects it to the universe of such models we already hold that bear on the same topic.

If we can’t focus we’ll have more holes in our understanding. (To find holes in your understanding, try the Feynman Technique, which was actually an invention of George Eliot’s but I’ll save that for another day.)

When we read a book, our brain constructs a network of pathways that embodies that set of ideas and experiences. Contrast that deep comprehension with the interruptions and distractions that typify the ever-seductive Internet.

The continuous onslaught of texts, meetings, videos, music, email, Twitter, Facebook, and more is the enemy of understanding. The key, argues Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, is “deep reading.” And the internet is making this nearly impossible.

There is, however, perhaps no skill better than deep and focused thought. “The more information that’s out there,” says Tyler Cowen, author of Average Is Over: Powering America Beyond the Age of the Great Stagnation, “the greater the returns to just being willing to sit down and apply yourself. Information isn’t what’s scarce; it’s the willingness to do something with it.” Deep thought must be learned. In order to do that, however, we must tune out most of the distractions and focus.

Goleman reminds us that some of this too was foreseen.

Way back in the 1950s the philosopher Martin Heidegger warned against a looming “tide of technological revolution” that might “so captivate, bewitch, dazzle, and beguile man that calculative thinking may someday come to be … the only way of thinking.” That would come at the loss of “meditative thinking,” a mode of reflection he saw as the essence of our humanity.

I hear Heidegger’s warning in terms of the erosion of an ability at the core of reflection, the capacity to sustain attention to an ongoing narrative. Deep thinking demands sustaining a focused mind. The more distracted we are, the more shallow our reflections; likewise, the shorter our reflections, the more trivial they are likely to be. Heidegger, were he alive today, would be horrified if asked to tweet.

The rest of Focus: The Hidden Driver of Excellence goes on to zero in on “the elusive and under-appreciated mental faculty in the mind’s operations” known as attention and its role in living “a fulfilling life.”

The Tyranny of Email — 10 Tips to Save You


I’ve been giving a lot of thought to my habits recently and how they affect me. One thing I’ve placed an increasingly watchful eye on is email.

Email seems pervasive in our lives. We check email on the bus, we check it in the bath. We check it first thing in the morning. We even check it mid-conversation, with the belief that no one will notice.

John Freeman argues in The Tyranny of Email that the average office worker “sends and receives two hundred emails a day.”

Email makes us reactive, as we race to keep up with the never-ending onslaught.

In the past, only a few professions—doctors, plumbers perhaps, emergency service technicians, prime ministers—required this kind of state of being constantly on call. Now, almost all of us live this way. Everything must be attended to—and if it isn’t, chances are another email will appear in a few hours asking if indeed the first message was received at all.

Working at the Speed of Email

Working at the speed of email is like trying to gain a topographic understanding of our daily landscape from a speeding train—and the consequences for us as workers are profound. Interrupted every thirty seconds or so, our attention spans are fractured into a thousand tiny fragments. The mind is denied the experience of deep flow, when creative ideas flourish and complicated thinking occurs. We become task-oriented, tetchy, terrible at listening as we try to keep up with the computer. The email inbox turns our mental to-do list into a palimpsest—there’s always something new and even more urgent erasing what we originally thought was the day’s priority. Incoming mail arrives on several different channels–via email, Facebook, Twitter, instant message–and in this era of backup we’re sure that we should keep records of our participation in all these conversations. The result is that at the end of the day we have a few hundred or even a few thousand emails still sitting in our inbox.

Part of us likes all of the attention email gives us. It has been shown that email is addictive in many of the same ways slot machines are addictive — variable reinforcement.

Tom Stafford, a lecturer in the Department of Psychology at the University of Sheffield, explains:

“This means that rather than reward an action every time it is performed, you reward it sometimes, but not in a predictable way. So with email, usually when I check it there is nothing interesting, but every so often there’s something wonderful —an invite out, or maybe some juicy gossip—and I get a reward.”

There are chemical reasons this happens that go well beyond our love of gossip. If we’re doing something that pays out randomly, our brain releases dopamine when we get something good and our body learns that we need to keep going if we want a reward.
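A tiny simulation (with made-up numbers) shows the shape of that reward schedule: most checks pay nothing, a few pay off unpredictably, and it is exactly that unpredictability that keeps us coming back.

```python
import random

def simulate_email_checks(checks=1000, reward_probability=0.1, seed=1):
    """Toy model of variable reinforcement: each check of the inbox
    pays off only occasionally and unpredictably."""
    rng = random.Random(seed)
    return sum(1 for _ in range(checks) if rng.random() < reward_probability)

hits = simulate_email_checks()
print(f"Out of 1000 checks, only {hits} delivered anything good -- yet the habit persists.")
```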

Connections

“Ironically,” Freeman writes, “tools meant to connect us are enabling us to spend even more time apart.” The consequences are disastrous.

Spending our days communicating through this medium, which by virtue of its sheer volume forces us to talk in short bursts, we are slowly eroding our ability to explain — in a careful, complex way — why it is so wrong for us and to complain, resist, or redesign our workdays so that they are manageable.

Life On The Email Treadmill

“If the medium is the message, what does that say about new survey results that found nearly 60% of respondents check their email when they’re answering the call of nature?” — Michelle Masterson

When you arrive at work and there are twenty emails in your inbox, the weight of that queue is clear: everyone is waiting for you.

So you clear and clear and clear, only to learn that the faster you reply, the faster the replies come boomeranging back to you—thanks, follow-ups, additional requests, and that one-line sinker, “How are you doing these days?” It shouldn’t be such a burden to be asked your state of mind. In the workplace, however, where the sheer volume of correspondence can feel as if it has been designed on high to enforce a kind of task-oriented tunnel vision, such a question is either a trapdoor or an escape hatch.

At the workplace, it used to be hard to share things without a lot of friction. Now sharing is frictionless and free. CC’ing and forwarding to keep people “in the loop” has become a mixed blessing. Now everything is collaborative, and if people are left off emails they literally feel left out.

Working in a Climate of Interruption

What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it. — Herb Simon

We live in a culture in which doing everything all at once is admired and encouraged—have our spreadsheet open while we check email, chin the phone into our shoulder, and accept notes from a passing office messenger. Our desk is Grand Central and we are the conductor, and it feels good. Why? If we’re this busy, clearly we’re needed; we have a purpose. We are essential. The Internet and email have certainly created a “desire to be in the know, to not be left out, that ends up taking up a lot of our time”—at the expense of getting things done, said Mark Ellwood, the president of Pace Productivity, which studies how employees spend their time.

Of course we can’t multitask the way technology leads us to believe we can. “Multitasking,” Walter Kirn wrote in an essay called “The Autumn of the Multitaskers,” “messes with the brain in several ways”:

At the most basic level, the mental balancing acts that it requires—the constant switching and pivoting—energize regions of the brain that specialize in visual processing and physical coordination and simultaneously appear to shortchange some of the higher areas related to memory and learning. We concentrate on the act of concentration at the expense of whatever it is that we’re supposed to be concentrating on.

What does this mean in practice? Consider a recent experiment at UCLA, where researchers asked a group of 20-somethings to sort index cards in two trials, once in silence and once while simultaneously listening for specific tones in a series of randomly presented sounds. The subjects’ brains coped with the additional task by shifting responsibility from the hippocampus—which stores and recalls information—to the striatum, which takes care of rote, repetitive activities. Thanks to this switch, the subjects managed to sort the cards just as well with the musical distraction— but they had a much harder time remembering what, exactly, they’d been sorting once the experiment was over.

Even worse, certain studies find that multitasking boosts the level of stress-related hormones such as cortisol and adrenaline and wears down our systems through biochemical friction, prematurely aging us. In the short term, the confusion, fatigue, and chaos merely hamper our ability to focus and analyze, but in the long term, they may cause it to atrophy.

“In other words,” writes Freeman in The Tyranny of Email, “a work climate that revolves around multitasking and constant interruptions has narrowed our cognitive window down to a bare, basic facility: rote, mechanical tasks.”

We like to think we are in control of our environment, that we act upon it and shape it to our needs. It works both ways, though; changes we make to the world can have unseen ramifications that impact our ability to live in it.

Attention means being present. Being present fosters mindfulness.

Thanks to an environment of constant stimulation, the biggest challenge these days is maintaining focus.

“Immersing myself in a book or lengthy article used to be easy,” wrote Nicholas Carr in an essay entitled “Is Google Making Us Stupid?”

My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.

Carr wrote an excellent book on the subject, The Shallows: What the Internet Is Doing to Our Brains. If you don’t have the time, or attention span, to read the book, you can watch the video.

Flow

Reading and other meditative tasks are best performed in what psychologist Mihaly Csikszentmihalyi calls a “state-of-flow,” in which “our focus narrows, the world seems to drop away, and we become less conscious of ourselves and more deeply immersed in ideas and language and complex thoughts,” Freeman writes.

Communication tools, however, seem to be working against this state.

In Flow: The Psychology of Optimal Experience, Csikszentmihalyi writes:

In today’s world we have come to neglect the habit of writing because so many other media of communication have taken its place. Telephones and tape recorders, computers and fax machines are more efficient in conveying news. If the only point to writing were to transmit information, then it would deserve to become obsolete. But the point of writing is to create information, not simply to pass it along. In the past, educated persons used journals and personal correspondence to put their experiences into words, which allowed them to reflect on what had happened during the day. The prodigiously detailed letters so many Victorians wrote are an example of how people created patterns of order out of the mainly random events impinging on their consciousness. The kind of material we write in diaries and letters does not exist before it is written down.

It is the slow, organically growing process of thought involved in writing that lets the ideas emerge in the first place.

In The Tyranny of Email, Freeman sums up the multitasking argument:

Multitasking may not be perfect, but it can push the brain to add new capacity; the problem, however, remains that the small gains in capacity are continuously, rapidly, outstripped by the speeding up and growing volume of incoming demand on our attention.

Why Is It So Hard to Read These Days?

In his essay on Google, Carr writes:

It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.

Some of this is due to changes in the medium itself. Newspaper articles are shorter and catchier. Text has become bigger. We’re becoming a PowerPoint culture. We need bullet points, short sentences, and fancy graphics. We skim rather than read. Online readers are “selfish, lazy, and ruthless,” said Jakob Nielsen, a usability engineer. If we don’t get what we want, as soon as we want it, we move to the next site.

But all of this has a cost.

What We Are Losing

“What we are losing in this country, and presumably around the world is the sustained, focused, linear attention developed by reading,” said Dana Gioia, a former chairman of the National Endowment for the Arts. “I would believe people who tell me that the Internet develops reading if I did not see such a universal decline in reading ability and reading comprehension on virtually all tests.”

“If the research on multitasking is any guide,” Freeman writes in The Tyranny of Email, “and if several centuries of liberal arts education have proven anything, the ability to think clearly and critically and develop an argument comes from reading in a focused manner.”

These skills are important because they enable employees to step back from an atmosphere of frenzy and make sense in a busy, nearly chaotic environment. If all companies want, though, is worker bees who will simply type till they drop and badger one another into a state of overload, a new generation of inveterate multitaskaholics might be just what they get. If that’s the case, workplace productivity isn’t the only thing that will suffer.

Freeman concludes his book by offering several tips to help you take back control of your life and the mental space email is consuming.

1. Don’t Send.

The most important thing you can do to improve the state of your inbox, free up your attention span, and break free of the tyranny of email is not to send an email. As most people now know, email only creates more email, so by stepping away from the messaging treadmill, even if for a moment every day, you instantly dial down the speed of the email messagopolis.

2. Don’t Check It First Thing in the Morning or Late at Night

… Not checking your email first thing will also reinforce a boundary between your work and your private life, which is essential if you want to be fully present in either place. If you check your email before getting to work, you will probably begin to worry about work matters before you actually get there. Checking your e-mail first thing at home doesn’t give you a jump on the workday; it just extends it. Sending email before and after office hours has a compounded effect, since it creates an environment in which workers are tacitly expected to check their email at the same time and squeeze more work out of their tired bodies.

3. Check It Twice a Day

… Checking your email twice a day will … allow you to set the agenda for your day, which is essential if you want to stay on task and get things done in a climate of constant communication.

4. Keep a Written To-Do List and Incorporate Email into It.

5. Give Good Email

6. Read the Entire Incoming Email Before Replying

This seems like a pretty basic rule, but a great deal of email is generated by people replying without having properly read initial messages.

7. Do Not Debate Complex or Sensitive Matters by Email

8. If You Have to Work as a Group by Email, Meet Your Correspondents Face-to-Face

9. Set Up Your Desktop to Do Something Else Besides Email

As much as you can, take control over your office space by setting aside part of your desk for work that isn’t done on the computer. Imagine it as your thinking area, where you can read or take notes or doodle as you work out a problem.

10. Schedule Media-Free Time.

Still Curious? Read The Single Most Important Change You Can Make In Your Working Habits next.

Too Many Decisions

The first thing you do in the morning is to make a decision. And those decisions pile up fast. Should I hit snooze? What clothes should I wear? What should I have for breakfast? What combination of choices from Starbucks will make my morning go smoother?

By the time you arrive at work, you’ve already made more decisions than most of our ancestors made in a day. Unfortunately — at least as far as the quality of your decisions is concerned — your day is just getting started.

Decisions take a lot of mental effort. And that’s a problem. Making choices reduces physical stamina, reduces persistence, reduces willpower, and even encourages procrastination.

John Tierney adapted part of his book with Roy Baumeister, Willpower: Rediscovering the Greatest Human Strength, for a New York Times Magazine article: Do You Suffer From Decision Fatigue?

Our brains get tired of making decisions. This is known as “decision fatigue,” and it helps explain why, in the words of Tierney, “normally sensible people get angry at colleagues and families, splurge on clothes, buy junk food at the supermarket and can’t resist the dealer’s offer to rustproof their new car.”

No matter how rational you are (or try to be), you can’t make decision after decision without paying a mental price. “It’s different,” Tierney writes, “from ordinary physical fatigue — you’re not consciously aware of being tired — but you’re low on mental energy.”

The more choices you make, the harder they become. To save energy your brain starts to look for shortcuts. One shortcut is to be reckless and act impulsively (rather than rationally). The other shortcut is to do nothing, which saves as much energy as possible (and often creates bigger problems in the long run).

It turns out that glucose is a vital part of willpower. Tierney writes, “Your brain does not stop working when glucose is low. It stops doing some things and starts doing others. It responds more strongly to immediate rewards and pays less attention to long-term prospects.”

Glucose explains a lot. For instance, why people with phenomenally strong willpower in the rest of their lives struggle to lose weight. It also explains how someone can resist junk all day but gorge on a bag of chips right before bed. We start the day with a clean slate and the best intentions. It’s fairly easy to resist fatty muffins at breakfast and skip the Snickers bar fix after lunch. But each of these decisions—resistances—consumes glucose and lowers our willpower. Eventually, we need to replenish it. But that requires glucose, which creates a catch-22: We need willpower not to eat but in order to have willpower we need to eat.

Tierney continues, “when the brain’s regulatory powers weaken, frustrations seem more irritating than usual. Impulses to eat, drink, spend and say stupid things feel more powerful (and alcohol causes self-control to decline further)…ego-depleted humans become more likely to get into needless fights over turf.”

Although we have no way of knowing, it seems like a fairly safe bet that we make more decisions now than at any point in history. That is, we’re under more decision-making strain and we’re starting to show cracks.

The internet and our ability to “multitask” isn’t helping, argues Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains: “A growing body of scientific evidence suggests that the Net, with its constant distractions and interruptions, is also turning us into scattered and superficial thinkers.” Carr argues that our continuously connected, constantly distracted lives (read—constantly making decisions) rob us of the opportunity for deep thinking, the very kind of thinking we need to make difficult decisions. By making thousands of trivial decisions every day, we rob ourselves of the ability to make the more difficult, contemplative ones.

There are ways to improve our ability to make better decisions. Social psychologist Roy Baumeister has done research showing that people with the best self-control are the ones who structure their lives to conserve willpower. “They don’t,” Tierney suggests, “schedule endless back-to-back meetings. They avoid temptations like all-you-can-eat buffets, and they establish habits that eliminate the mental effort of making choices. Instead of deciding every morning whether or not to force themselves to exercise, they set up regular appointments to work out with a friend. Instead of counting on willpower to remain robust all day, they conserve it so that it’s available for emergencies and important decisions.” Wise advice we should all follow.

Organizations should start thinking carefully about how their employees actually end up spending their time and what they “waste” their precious mental energy on. If they’re filling out forms, trudging through a bureaucratic morass, or attending more than a few meetings a day, they’re likely using their mental energy on things that add little value to the organization.

Still Curious? John Tierney wrote a book about willpower and decision fatigue, Willpower: Rediscovering the Greatest Human Strength.

The Myth of Multitasking: Why the Key to Better Work is Developing Forgotten Skills

Of course you can do more than one thing at once. The catch is you can’t do more than one thing at once that requires concentration. You can walk and chew gum, but you can’t write an email and a report at the same time.

When you’re doing two things at once that are cognitively demanding, what you’re really doing is switching back and forth very quickly from one task to another.

The brain is a pretty smooth operator when it knows what direction to head. Transitioning from one task to another, however, dramatically reduces its horsepower.

There is a mental cost—called a switching cost—to all of this multitasking. But that’s not all: it also makes deep thinking nearly impossible.

Steven Yantis, a professor of psychological and brain sciences at Johns Hopkins, says:

In addition to the switch cost, each time you switch away from a task and back again, you have to recall where you were in that task, what you were thinking about. If the tasks are complex, you may well forget some aspect of what you were thinking about before you switched away, which may require you to revisit some aspect of the task you had already solved (for example, you may have to re-read the last paragraph you’d been reading). Deep thinking about a complex topic can become nearly impossible.

Multitasking ensures the brain is operating sub-optimally.

When you try to do two or more cognitively demanding things at once, it’s as if you have a 1000 h.p. motor (your brain) but you’re throttling it so you only get 100 h.p. of output.
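The horsepower figures are only an analogy, but even a crude back-of-the-envelope model (the task lengths and switch cost below are made-up numbers) shows how the overhead compounds: the work itself takes just as long either way, yet every extra switch adds a resumption lag while you reload context.

```python
def total_time(task_minutes, switches, switch_cost_minutes=2.0):
    """Toy model: total time is the work itself plus a fixed resumption
    lag paid on every switch between tasks."""
    return sum(task_minutes) + switches * switch_cost_minutes

tasks = [30, 30]                       # two 30-minute pieces of work
print(total_time(tasks, switches=1))   # finish one, then the other: 62.0
print(total_time(tasks, switches=11))  # ping-pong every few minutes: 82.0
```

And that is before counting the “where was I?” errors Yantis describes, which a simple time model can’t capture.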

There is a difference between the appearance of being busy—busywork—and actually moving the needle. Attempting to do more things may signal to others how busy you are, but if you care about actual results, you need to do fewer things better.

Our most valuable mental habits—things like deep and focused thought—must be learned through concentrated practice. This is a skill we’re starting to lose as more and more of our time gets fragmented into smaller and smaller increments thanks to screens. Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, comments on his blog:

Our most valuable mental habits – the habits of deep and focused thought – must be learned, and the way we learn them is by practicing them, regularly and attentively. And that’s what our continuously connected, constantly distracted lives are stealing from us: the encouragement and the opportunity to practice reflection, introspection, and other contemplative modes of thought. Even formal research is increasingly taking the form of “power browsing,” according to a 2008 University College London study, rather than attentive and thorough study.

Patricia Greenfield, a professor of developmental psychology at UCLA, warns that our growing use of screens appears to weaken our “higher-order cognitive processes,” including “abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.”

Busywork leads to overwork and sub-optimal outputs. Busywork is a psychological signal to others (and ourselves) that we’re needed. That we’re important. After all, look how busy I am. This delusion is the reason we take on so many things and fail to say no. This delusion keeps us so busy that we’re not learning new things or even getting better at what we already do. This delusion inhibits us from deriving meaning in our work and our lives. This delusion keeps us from understanding how the world works and adapting to the changing reality.

The world is a competitive place. That means we have to learn how to do difficult things — things that other people value, things other people can’t do. This isn’t as easy as it sounds. What we really need to develop is not so much the specific traits that will be valued as much as a system to constantly reinvent ourselves and adapt.

This is the hard part and, in an odd way, we need to go backwards to go forward. We need to redevelop skills we once had and lost: the ability to read better, the ability to make better decisions, the ability to focus, and strategies to make sure we’re not consumed by busywork. These are the skills that will enable us to have quality ideas that other people don’t have. These are the foundational skills that underpin adaptability.


Multitasking: The Costs of Switching From One Task to Another

You may think that as you juggle email, Facebook, Twitter, Google, work, life, the phone, and casual web surfing you’re really doing all of that stuff at once, but what you’re actually doing is rapidly switching from one task to another. And switching carries a cognitive cost.

Steven Yantis, a professor of psychological and brain sciences at Johns Hopkins, says:

In addition to the switch cost, each time you switch away from a task and back again, you have to recall where you were in that task, what you were thinking about. If the tasks are complex, you may well forget some aspect of what you were thinking about before you switched away, which may require you to revisit some aspect of the task you had already solved (for example, you may have to re-read the last paragraph you’d been reading). Deep thinking about a complex topic can become nearly impossible.

What if I told you that some people (singletaskers) claim that our most valuable mental habits—things like deep and focused thought—must be learned through concentrated practice?

Nicholas Carr, author of The Shallows: What the Internet Is Doing to Our Brains, comments on his blog:

The fact that people who fiddle with cell phones drive poorly shouldn’t make us less concerned about the cognitive effects of media distractions; it should make us more concerned.

And then there’s this: “It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people.” Exactly. And that’s another cause for concern. Our most valuable mental habits – the habits of deep and focused thought – must be learned, and the way we learn them is by practicing them, regularly and attentively. And that’s what our continuously connected, constantly distracted lives are stealing from us: the encouragement and the opportunity to practice reflection, introspection, and other contemplative modes of thought. Even formal research is increasingly taking the form of “power browsing,” according to a 2008 University College London study, rather than attentive and thorough study. Patricia Greenfield, a professor of developmental psychology at UCLA, warned in a Science article last year that our growing use of screen-based media appears to be weakening our “higher-order cognitive processes,” including “abstract vocabulary, mindfulness, reflection, inductive problem solving, critical thinking, and imagination.”
