Category: Writing

Standing on the Shoulders of Giants

Innovation doesn’t occur in a vacuum. Doers and thinkers from Shakespeare to Jobs liberally “stole” inspiration from the doers and thinkers who came before. Here’s how to do it right.

***

“If I have seen further,” Isaac Newton wrote in a 1675 letter to fellow scientist Robert Hooke, “it is by standing on the shoulders of giants.”

It can be easy to look at great geniuses like Newton and imagine that their ideas and work came solely out of their minds, that they spun them from their own thoughts—that they were true originals. But that is rarely the case.

Innovative ideas have to come from somewhere. No matter how unique or unprecedented a work seems, dig a little deeper and you will always find that the creator stood on someone else’s shoulders. They mastered the best of what other people had already figured out, then made that expertise their own. With each iteration, they could see a little further, and they were content in the knowledge that future generations would, in turn, stand on their shoulders.

Standing on the shoulders of giants is a necessary part of creativity, innovation, and development. It doesn’t make what you do less valuable. Embrace it.

Everyone gets a lift up

Ironically, Newton’s turn of phrase wasn’t even entirely his own. The phrase can be traced back to the twelfth century, when the author John of Salisbury wrote that philosopher Bernard of Chartres compared people to dwarves perched on the shoulders of giants and said that “we see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.”

Mary Shelley put it this way in the nineteenth century, in a preface for Frankenstein: “Invention, it must be humbly admitted, does not consist in creating out of void but out of chaos.”

There are giants in every field. Don’t be intimidated by them. They offer an exciting perspective. As the film director Jim Jarmusch advised, “Nothing is original. Steal from anywhere that resonates with inspiration or fuels your imagination. Devour old films, new films, music, books, paintings, photographs, poems, dreams, random conversations, architecture, bridges, street signs, trees, clouds, bodies of water, light, and shadows. Select only things to steal from that speak directly to your soul. If you do this, your work (and theft) will be authentic. Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery—celebrate it if you feel like it. In any case, always remember what Jean-Luc Godard said: ‘It’s not where you take things from—it’s where you take them to.’”

That might sound demoralizing. Some might think, “My song, my book, my blog post, my startup, my app, my creation—surely they are original? Surely no one has done this before!” But that’s likely not the case. It’s also not a bad thing. Filmmaker Kirby Ferguson states in his TED Talk: “Admitting this to ourselves is not an embrace of mediocrity and derivativeness—it’s a liberation from our misconceptions, and it’s an incentive to not expect so much from ourselves and to simply begin.”

Therein lies the important fact. Standing on the shoulders of giants enables us to see further, not merely as far as before. When we build upon prior work, we often improve upon it and take humanity in new directions. However original your work seems, the influences are there—they might just be uncredited or not obvious. As we know from social proof, copying is a natural human tendency. It’s how we learn and figure out how to behave.

In Antifragile: Things That Gain from Disorder, Nassim Taleb describes the type of antifragile inventions and ideas that have lasted throughout history. He describes himself heading to a restaurant (the likes of which have been around for at least 2,500 years), in shoes similar to those worn at least 5,300 years ago, to use silverware designed by the Mesopotamians. During the evening, he drinks wine based on a 6,000-year-old recipe, from glasses invented 2,900 years ago, followed by cheese unchanged through the centuries. The dinner is prepared with one of our oldest tools, fire, and using utensils much like those the Romans developed.

Much about our societies and cultures has undeniably changed and continues to change at an ever-faster rate. But we continue to stand on the shoulders of those who came before in our everyday life, using their inventions and ideas, and sometimes building upon them.

Not invented here syndrome

When we discredit what came before, try to reinvent the wheel, or refuse to learn from history, we hold ourselves back. After all, many of the best ideas are the oldest. “Not Invented Here Syndrome” is a term for situations in which we avoid using ideas, products, or data created by someone else, preferring instead to develop our own (even if it is more expensive, time-consuming, and of lower quality).

The syndrome can also manifest as reluctance to outsource or delegate work. People might think their output is intrinsically better if they do it themselves, becoming overconfident in their own abilities. After all, who likes getting told what to do, even by someone who knows better? Who wouldn’t want to be known as the genius who (re)invented the wheel?

Developing a new solution for a problem is more exciting than using someone else’s ideas. But new solutions, in turn, create new problems. Some people joke that, for example, the largest Silicon Valley companies are in fact just impromptu incubators for people who will eventually set up their own business, firm in the belief that what they create themselves will be better.

The syndrome is also a case of the sunk cost fallacy. If a company has spent a lot of time and money getting a square wheel to work, it may resist buying the round ones that someone else comes out with. The opportunity costs can be tremendous. Not Invented Here Syndrome pulls an organization or individual away from their core competency and wastes time and talent on what are ultimately distractions. Better to use someone else’s idea and, in turn, be a giant for someone else.

Why Steve Jobs stole his ideas

“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it. They just saw something. It seemed obvious to them after a while; that’s because they were able to connect experiences they’ve had and synthesize new things.” 

— Steve Jobs

In The Runaway Species: How Human Creativity Remakes the World, Anthony Brandt and David Eagleman trace the path that led to the creation of the iPhone and track down the giants upon whose shoulders Steve Jobs perched. We often hail Jobs as a revolutionary figure who changed how we use technology. Few who were around in 2007 could have failed to notice the buzz created by the release of the iPhone. It seemed so new, a total departure from anything that had come before. The truth is a little messier.

The first touchscreen came about almost half a century before the iPhone, developed by E.A. Johnson for air traffic control. Other engineers built upon his work and developed usable models, filing a patent in 1975. Around the same time, the University of Illinois was developing touchscreen terminals for students. Prior to touchscreens, light pens used similar technology. The first commercial touchscreen computer came out in 1983, soon followed by graphics boards, tablets, watches, and video game consoles. Casio released a touchscreen pocket computer in 1987 (remember, this is still a full twenty years before the iPhone).

However, early touchscreen devices were frustrating to use, with very limited functionality, short battery lives, and few use cases for the average person. As touchscreen devices developed in complexity and usability, they laid the groundwork for the iPhone.

Likewise, the iPod built upon the work of Kane Kramer, who took inspiration from the Sony Walkman. Kramer designed a small portable music player in the 1970s. The IXI, as he called it, looked similar to the iPod but arrived too early for a market to exist, and Kramer lacked the marketing skills to create one. When pitching to investors, Kramer described the potential for immediate delivery, digital inventory, taped live performances, back catalog availability, and the promotion of new artists and microtransactions. Sound familiar?

Steve Jobs stood on the shoulders of the many unseen engineers, students, and scientists who worked for decades to build the technology he drew upon. Although Apple has a long history of merciless lawsuits against those they consider to have stolen their ideas, many were not truly their own in the first place. Brandt and Eagleman conclude that “human creativity does not emerge from a vacuum. We draw on our experience and the raw materials around us to refashion the world. Knowing where we’ve been, and where we are, points the way to the next big industries.”

How Shakespeare got his ideas

“Nothing will come of nothing.”

— William Shakespeare, King Lear

Most, if not all, of Shakespeare’s plays draw heavily upon prior works—so much so that some question whether he would have survived today’s copyright laws.

Hamlet took inspiration from Gesta Danorum, a twelfth-century work on Danish history by Saxo Grammaticus, consisting of sixteen Latin books. Although it is doubtful whether Shakespeare had access to the original text, scholars find the parallels undeniable and believe he may have read another play based on it, from which he drew inspiration. In particular, the account of the plight of Prince Amleth (whose name contains the same letters as Hamlet) involves similar events.

Holinshed’s Chronicles, a co-authored account of British history from the late sixteenth century, tells stories that mimic the plot of Macbeth, including the three witches. Holinshed’s Chronicles itself was a mélange of earlier texts, which transferred their biases and fabrications to Shakespeare. It also likely inspired King Lear.

Parts of Antony and Cleopatra are copied verbatim from Plutarch’s Life of Mark Antony. Arthur Brooke’s 1562 poem The Tragicall Historye of Romeus and Juliet was an undisguised template for Romeo and Juliet. Once again, there are more giants behind the scenes—Brooke copied a 1559 poem by Pierre Boaistuau, who in turn drew from a 1554 story by Matteo Bandello, who in turn drew inspiration from a 1530 work by Luigi da Porto. The list continues, with Plutarch, Chaucer, and the Bible acting as inspirations for many major literary, theatrical, and cultural works.

Yet what Shakespeare did with the works he sometimes copied, sometimes learned from, is remarkable. Take a look at any of the original texts and, despite the mimicry, you will find that they cannot compare to his plays. Many of the originals were dry, unengaging, and lacking any sort of poetic language. J.J. Munro wrote in 1908 that The Tragicall Historye of Romeus and Juliet “meanders on like a listless stream in a strange and impossible land; Shakespeare’s sweeps on like a broad and rushing river, singing and foaming, flashing in sunlight and darkening in cloud, carrying all things irresistibly to where it plunges over the precipice into a waste of waters below.”

Despite bordering on plagiarism at times, he overhauled the stories with an exceptional use of the English language, bringing drama and emotion to dreary chronicles and poems. He had a keen sense for the changes required to restructure plots, creating suspense and intensity. Shakespeare saw far further than those who wrote before him, and with their help, he ushered in a new era of the English language.

Of course, it’s not just Newton, Jobs, and Shakespeare who found a (sometimes willing, sometimes not) shoulder to stand upon. Facebook is presumed to have built upon Friendster. Cormac McCarthy’s books often replicate older history texts, with one character coming straight from Samuel Chamberlain’s My Confession. John Lennon borrowed from diverse musicians, once writing in a letter to the New York Times that though the Beatles copied black musicians, “it wasn’t a rip off. It was a love in.”

In The Ecstasy of Influence, Jonathan Lethem points to many other instances of influence in classic works. In 1916, the journalist Heinz von Lichberg published a story of a man who falls in love with his landlady’s daughter and begins a love affair, culminating in her death and his lasting loneliness. The title? Lolita. It is hard to imagine that Nabokov never read it, yet aside from the plot and the name, the style of language that defines his version is absent from the original.

The list continues. The point is not to be flippant about plagiarism but to cultivate sensitivity to the elements of value in a previous work, as well as the ability to build upon those elements. If we restrict the flow of ideas, everyone loses out.

The adjacent possible

What’s this about? Why can’t people come up with their own ideas? Why do so many people come up with a brilliant idea but never profit from it? The answer lies in what scientist Stuart Kauffman calls “the adjacent possible.” Quite simply, each new innovation or idea opens up the possibility of additional innovations and ideas. At any time, there are limits to what is possible, yet those limits are constantly expanding.

In Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson compares this process to being in a house where opening a door creates new rooms. Each time we open the door to a new room, new doors appear and the house grows. Johnson compares it to the formation of life, beginning with basic fatty acids. The first fatty acids to form were not capable of turning into living creatures. But when they self-organized into spheres, they laid the groundwork for cell membranes, and a new door opened to genetic codes, chloroplasts, and mitochondria. When dinosaurs evolved a new bone that gave them more manual dexterity, they opened a new door to flight. When our distant ancestors evolved opposable thumbs, dozens of new doors opened to the use of tools, writing, and warfare. According to Johnson, the history of innovation has been about exploring new wings of the adjacent possible and expanding what we are capable of.

A new idea—like those of Newton, Jobs, and Shakespeare—is only possible because a previous giant opened a new door and made their work possible. They in turn opened new doors and expanded the realm of possibility. Technology, art, and other advances are only possible if someone else has laid the groundwork; nothing comes from nothing. Shakespeare could write his plays because other people had developed the structures and language that formed his tools. Newton could advance science because of the preliminary discoveries that others had made. Jobs built Apple out of the debris of many prior devices and technological advances.

The questions we all have to ask ourselves are these: What new doors can I open, based on the work of the giants that came before me? What opportunities can I spot that they couldn’t? Where can I take the adjacent possible? If you think all the good ideas have already been found, you are very wrong. Other people’s good ideas open new possibilities, rather than restricting them.

As time passes, the giants just keep getting taller and more willing to let us hop onto their shoulders. Their expertise is out there in books and blog posts, open-source software and TED talks, podcast interviews, and academic papers. Whatever we are trying to do, we have the option to find a suitable giant and see what can be learned from them. In the process, knowledge compounds, and everyone gets to see further as we open new doors to the adjacent possible.

Why Great Writers Write

“Books are never finished. They are merely abandoned.”
— Oscar Wilde

***

Why do great writers write?

The question will never be answered, only explored — in the context of culture, time, and ourselves. This is exactly what Meredith Maran probes in Why We Write: 20 Acclaimed Authors on How and Why They Do What They Do. The powerful range of answers reveals quite a bit about the nature of drive, passion, and the creative process.

Maran starts by introducing us to what authors have historically said on the subject. One of the most well-known is from George Orwell, who eloquently answered the question in 1946 in an essay called Why I Write:

From a very early age, perhaps the age of five or six, I knew that when I grew up I should be a writer. Between the ages of about seventeen and twenty-four I tried to abandon this idea, but I did so with the consciousness that I was outraging my true nature and that sooner or later I should have to settle down and write books.

Throughout the book, we encounter this idea that writing is so intrinsically a part of most writers that they couldn’t imagine being anything else. David Baldacci, the writer behind Absolute Power, claims, “If writing were illegal, I’d be in prison. I can’t not write. It’s a compulsion.”

Can you claim that about your own work?

Baldacci notes that even before he was a full-time author, he was a storyteller in law, his other profession.

Some of the best fiction I ever came up with was as a lawyer.

You know who wins in court? The client whose lawyer tells better stories than the other lawyer does. When you’re making a legal case, you can’t change the facts. You can only rearrange them to make a story that better enhances your client’s position, emphasizing certain things, deemphasizing others. You make sure the facts that you want people to believe are the most compelling ones. The facts that hurt your case are the ones you either explain away or hide away. That’s telling a story.

Baldacci describes the intrinsic fears he feels as an author, where every new project starts with a blank slate and infinite possibilities. (A struggle Steven Pressfield explores in The War of Art.)

Every time I start a project, I sit down scared to death that I won’t be able to bring the magic again.

You’d never want to be on the operating table with a right-handed surgeon who says, ‘Today I’m going to try operating with my left hand.’ But writing is like that. The way you get better is by pushing yourself to do things differently each time. As a writer you’re not constrained by mechanical devices or technology or anything else. You get to play. Which is terrifying.


It’s interesting to note that most of the authors in Maran’s book, despite being successful, had a huge fear of not being able to produce again. We assume that success brings a certain amount of confidence, but it also brings expectations. Nothing is scarier to a musician than the sophomore album, or to an author than starting the next book.

One way to solve the problem is to be ridiculously prolific, like Isaac Asimov. But even prolific authors seem to suffer from trepidation. Sue Grafton, the author of A is for Alibi, has written a mystery novel for every letter of the alphabet up to X, and this is what she had to say:

Most days when I sit down at my computer, I’m scared half out of my mind. I’m always convinced that my last book was my last book, that my career is at an end, that I’ll never be able to pull off another, that my success was a fleeting illusion, and my hopes for the future are already dead. Dang! All this drama and it’s not even nine a.m.

For many, this fear seems ubiquitous, a shadow that follows them through the process. This is evident in Isabel Allende’s description of her writing methodology. (Allende wrote The House of the Spirits.)

I start all my books on January eighth. Can you imagine January seventh? It’s hell.

Every year on January seventh, I prepare my physical space. I clean up everything from my other books. I just leave my dictionaries, and my first editions, and the research materials for the new one. And then on January eighth I walk seventeen steps from the kitchen to the little pool house that is my office. It’s like a journey to another world. It’s winter, it’s raining usually. I go with my umbrella and the dog following me. From those seventeen steps on, I am in another world and I am another person.

I go there scared. And excited. And disappointed – because I have a sort of idea that isn’t really an idea. The first two, three, four weeks are wasted. I just show up in front of the computer. Show up, show up, show up, and after a while the muse shows up, too. If she doesn’t show up invited, eventually she just shows up.

Experience has shown these seasoned authors that this fear is to be embraced: if the muse isn’t here right now, she will come eventually, because she always does. The time in between is itself an experience that can be used in their writing.

So why write? Why put yourself through such a difficult process? Mary Karr (author of The Liars’ Club) had an almost poetic answer.

I write to dream; to connect with other human beings; to record; to clarify; to visit the dead. I have a kind of primitive need to leave a mark on the world. Also, I have a need for money.

I’m almost always anxious when I’m writing. There are those great moments when you forget where you are, when you get your hands on the keys, and you don’t feel anything because you’re somewhere else. But that very rarely happens. Mostly I’m pounding my hands on the corpse’s chest.

With all that said, Mr. Orwell might have summed it up best. In his essay, he listed what he believed to be the four great motives for writing:

Sheer egoism. To be talked about, to be remembered after death, to get your own back on grown-ups in childhood, etc.

Aesthetic enthusiasm. To take pleasure in the impact of one sound on another, in the firmness of good prose or the rhythm of a good story.

Historical impulse. The desire to see things as they are, to find out true facts and store them up for the use of posterity.

Political purposes. The opinion that art should have nothing to do with politics is itself a political attitude.

While every author seemed to have a slightly different motive for writing, they all appear compelled to tell us stories, driven by a burning desire to get something out and share it with the world. Joan Didion once said, “I write entirely to find out what I’m thinking…”

Sometimes we can’t learn without writing. Sometimes we can’t make sense of our feelings unless we talk about them, and for writers that conversation happens in their books.

Whether you are an avid reader or a writer, Why We Write is an insightful work that gives you the chance to visit the minds of some of the most successful authors of our time. Complement it with Maran’s other book, Why We Write About Ourselves: Twenty Memoirists On Why They Expose Themselves (and Others) in the Name of Literature.

Also check out our post on the extremely prolific writer Isaac Asimov and maybe improve your professional writing by taking the advice of Steven Pinker.

Steven Pinker on Why Your Professional Writing Sucks (And What to Do)

When we know a lot about a topic, it can be difficult to write about it in a way that makes sense to the layperson. To write better about your professional specialty, avoid jargon and abstractions, put yourself in the place of the reader, and ask someone from the intended audience to read your work.

***

Harvard’s cognitive psychology giant Steven Pinker has had no shortage of big, interesting topics to write about so far.

Starting in 1994 with his first book aimed at popular audiences, The Language Instinct, Pinker has discussed not only the origins of language, but the nature of human beings, the nature of our minds, the nature of human violence, and a host of related topics.

His most recent book, The Sense of Style, zeroes in on how to write well, while continuing to showcase his brilliant, synthesizing mind. It’s a 21st-century version of Strunk & White, a book that aims to help us understand why our writing often sucks, and how we might make it suck a little less.

His deep background in linguistics and cognitive psychology allows him to discuss language and writing more deeply than your average style guide; it’s also funny as hell in parts, which can’t be said of nearly any style guide.


Please No More “Ese”

In the third chapter, Pinker addresses the familiar problem of academese, legalese, professionalese…all the eses that make one want to throw a book, paper, or article in the trash rather than finish it. What causes them? Is it because we seek to obfuscate, as is commonly thought? Sometimes yes — especially when the author is trying to sell the reader something, be it a product or an idea.

But Pinker’s not convinced that concealment is driving most of our frustration with professional writing:

I have long been skeptical of the bamboozlement theory, because in my experience it does not ring true. I know many scholars who have nothing to hide and no need to impress. They do groundbreaking work on important subjects, reason well about clear ideas, and are honest, down-to-earth people, the kind you’d enjoy having a beer with. Still, their writing stinks.

So, if it’s not that we’re trying to mislead, what’s the problem?

***

Pinker first calls attention to the Curse of Knowledge — the inability to put ourselves in the shoes of a less informed reader.

The curse of knowledge is the single best explanation I know of why good people write bad prose. It simply doesn’t occur to the writer that her readers don’t know what she knows — that they haven’t mastered the patois of her guild, can’t divine the missing steps that seem too obvious to mention, have no way to visualize a scene that to her is as clear as day. And so she doesn’t bother to explain the jargon, or spell out the logic, or supply the necessary detail.

The first, simple, way this manifests itself is one we all encounter too frequently: Over-Abbreviation. It’s when we’re told to look up the date of the SALT conference for MLA sourcing on the HELMET system after our STEM meeting. (I only made one of those up.) Pinker’s easy way out is to recommend we always spell out acronyms the first time we use them, unless we’re absolutely sure readers will know what they mean. (And still maybe even then.)

The second obvious manifestation is our overuse of technical terms which the reader may or may not have encountered before. A simple fix is to add a few words of explanation the first time you use the term, as in “Arabidopsis, a flowering mustard plant.” Don’t assume the reader knows all of your jargon.

In addition, the use of examples is so powerful that we might call them a necessary component of persuasive writing. If I give you a long rhetorical argument in favor of some action or another without anchoring it on a concrete example, it’s as if I haven’t explained it at all. Something like: “Reading a source of information that contradicts your existing beliefs is a useful practice, as in the case of a Democrat spending time reading Op-Eds written by Republicans.” The example makes the point far stronger.

Another, deeper part of the problem is a little less obvious but a lot more interesting. Pinker ascribes a big source of messy writing to a mental process called chunking, in which we package groups of concepts into ever greater abstractions in order to save space in our brains. Here’s a great example of chunking:

As children we see one person hand a cookie to another, and we remember it as an act of giving. One person gives another one a cookie in exchange for a banana; we chunk the two acts of giving together and think of the sequence as trading. Person 1 trades a banana to Person 2 for a shiny piece of metal, because he knows he can trade it to Person 3 for a cookie; we think of it as selling. Lots of people buying and selling make up a market. Activity aggregated over many markets gets chunked into the economy. The economy can now be thought of as an entity which responds to action by central banks; we call that monetary policy. One kind of monetary policy, which involves the central bank buying private assets, is chunked as quantitative easing.

As we read and learn, we master a vast number of these abstractions, and each becomes a mental unit which we can bring to mind in an instant and share with others by uttering its name.

Chunking is an amazing and useful component of higher intelligence, but it gets us in trouble when we write because we assume our readers’ chunks are just like our own. They’re not.

A second issue is something he terms functional fixity. This compounds the problem induced by chunking:

Sometimes wording is maddeningly opaque without being composed of technical terminology from a private clique. Even among cognitive scientists, a “poststimulus event” is not a standard way to refer to a tap on the arm. A financial customer might be reasonably familiar with the world of investments and still have to puzzle over what a company brochure means by “capital charges and rights.” A computer-savvy user trying to maintain his Web site might be mystified by instructions on the maintenance page which refer to “nodes,” “content type” and “attachments.” And heaven help the sleepy traveler trying to set the alarm clock in his hotel room who must interpret “alarm function” and “second display mode.”

Why do writers invent such confusing terminology? I believe the answer lies in another way in which expertise can make our thoughts more idiosyncratic and thus harder to share: as we become familiar with something, we think about it more in terms of the use we put it to and less in terms of what it looks like and what it is made of. This transition, another staple of the cognitive psychology curriculum, is called functional fixity (sometimes functional fixedness).

The opposite of functional fixity would be familiar to anyone who has bought their dog or cat a toy only to be puzzled when the animal plays with the packaging it came in. The animal hasn’t fixated on the function of the objects; to it, an object is just an object. The toy and the packaging are not categorized as “toy” and “thing the toy came in,” the way they are for us. In this case, we have functional fixity and the animal does not.

And so Pinker continues:

Now, if you combine functional fixity with chunking, and stir in the curse that hides each one from our awareness, you get an explanation of why specialists use so much idiosyncratic terminology, together with abstractions, metaconcepts, and zombie nouns. They are not trying to bamboozle us, that’s just the way they think.

[…]

In a similar way, writers stop thinking — and thus stop writing — about tangible objects and instead refer to them by the role those objects play in their daily travails. Recall the example from chapter 2 in which a psychologist showed people sentences, followed by the label TRUE or FALSE. He explained what he did as “the subsequent presentation of an assessment word,” referring to the [true/false] label as an “assessment word” because that’s why he put it there — so that the participants in the experiment could assess whether it applied to the preceding sentence. Unfortunately, he left it up to us to figure out what an “assessment word” is — while saving no characters, and being less rather than more scientifically precise.

In the same way, a tap on the wrist became a “stimulus” and a [subsequent] tap on the elbow became a “post-stimulus event,” because the writer cared about the fact that one event came after the other and no longer cared about the fact that the events were taps on the arm.

As we get deeper into our expertise, we trade concrete, useful, everyday imagery for abstract, technical fluff that brings nothing to the mind’s eye of a lay reader. We use metaconcepts like levels, issues, contexts, frameworks, and perspectives instead of describing the actual thing in plain language. (Thus does a book become a “tangible thinking framework.”)

Solutions

How do we solve the problem, then? Pinker partially defuses the obvious solution — remembering the reader over your shoulder while you write — because he feels it doesn’t always work. Even when we’re made aware that we need to simplify and clarify for our audience, we find it hard to regress our minds to a time when our professional knowledge was more primitive.

Pinker’s prescription has a few parts:

  1. Get rid of abstractions: use concrete nouns and refer to concrete things. Who did what to whom? Read over your sentences, look for nouns that refer to meta-abstractions, and ask yourself whether there’s a way to put a tangible, everyday object or concept in its place. “The phrase ‘on the aspirational level’ adds nothing to ‘aspire,’ nor is a ‘prejudice reduction model’ any more sophisticated than ‘reducing prejudice.'”
  2. When in doubt, assume the reader knows a fair bit less than you about your topic. Clarity is not condescension. You don’t need to prove how smart you are — the reader won’t be impressed. “The key is to assume that your readers are as intelligent and sophisticated as you are, but that they happen not to know something you know.” 
  3. Get someone intelligent who is part of your intended audience to read over your work and see if they understand it. You shouldn’t take every last suggestion, but do take it seriously when they tell you certain sections are muddy or confusing. “The form in which thoughts occur to a writer is rarely the same as the form in which they can be absorbed by the reader.”
  4. Put your first draft down for enough time that, when you come back to it, you no longer feel deep familiarity with it. In this way, you become your intended audience. Your own fresh eyes will see the text in a new way. Don’t forget to read aloud, even if just under your breath.

Still interested? Check out Pinker’s The Sense of Style for a lot more on good writing, and check out his thoughts on what a broad education should entail.

David Foster Wallace: The Future of Writing In the Age of Information

David Foster Wallace remains both loved and hated. His wisdom shows itself in his argumentative writing, his ambition and perfectionism, and perhaps one of the best, most profound commencement addresses ever given. He’s revered, in part, because he makes us think … about ourselves, about society, and about things we don’t generally want to think about.

In this interview from May of 1996 with Charlie Rose, Wallace addresses “the future of fiction in the information age.” His thoughts highlight the difficulties of reading in an age of distraction and are worth considering in a world where we often prefer being entertained to being educated.

On commercial entertainment for the masses and how it changes what we seek, Wallace comments:

Commercial entertainment — its efficiency, its sheer ability to deliver pleasure in large doses — changes people’s relationship to art and entertainment, it changes what an audience is looking for. I would argue that it changes us in deeper ways than that. And that some of the ways that commercial culture and commercial entertainment affects human beings is one of the things that I sort of think that serious fiction ought to be doing right now.

[…]

There’s this part that makes you feel full. There’s this part that is redemptive and instructive, [so that] when you read something, it’s not just delight — you go, “Oh my god, that’s me! I’ve lived like that, I’ve felt like that, I’m not alone in the world …

What’s tricky for me is … It would be one thing if everybody was absolutely delighted watching TV 24/7. But we have, as a culture, not only an enormous daily watching rate but we also have a tremendous cultural contempt for TV … Now TV that makes fun of TV is itself popular TV. There’s a way in which we who are watching a whole lot are also aware that we’re missing something — that there’s something else, there’s something more. While at the same time, because TV is really darn easy, you sit there and you don’t have to do very much.

Commenting on our need for easy fun, he elaborates:

Because commercial entertainment has conditioned readers to want easy fun, I think that avant-garde and art fiction has sort of relinquished the field. Basically I don’t read much avant-garde stuff because it’s hellaciously un-fun. … A lot of it is academic and foisted and basically written for critics.

What got him started writing?

Fiction for me, mostly as a reader, is a very weird double-edged sword — on the one hand, it can be difficult and it can be redemptive and morally instructive and all the good stuff we learn in school; on the other hand, it’s supposed to be fun, it’s a lot of fun. And what drew me into writing was mostly memories of really fun rainy afternoons spent with a book. It was a kind of a relationship.

I think part of the fun, for me, was being part of some kind of an exchange between consciousnesses, a way for human beings to talk to each other about stuff we can’t normally talk about.

(h/t Brainpickings)

David Foster Wallace on Argumentative Writing and Nonfiction

In December 2004, Bryan A. Garner, who had already struck up a friendship with David Foster Wallace, started interviewing state and federal judges as well as a few key writers. With over a hundred interviews under his belt by January 2006, he called David to suggest they do an interview. So on February 3, 2006, the two finally got together in Los Angeles for an extensive conversation on writing and life that offers a penetrating look into our collective psyche. Their conversation has been captured in Quack This Way: David Foster Wallace & Bryan A. Garner Talk Language and Writing.

Very few things get me more excited than reading one smart person interview another. I mean, we’re not talking TV puff pieces here; we’re talking outright depth with an incisive look at culture.

For context, Garner is the author of a book that, admittedly, I have a hard time not opening every week: Garner’s Modern American Usage, which helps explain some of the insightful banter between the two.

When asked if, before writing a long nonfiction piece, he attempts to understand the structure of the whole before starting, Wallace simply responded, “no.”

Elaborating on this, he goes on to say:

Everybody is different. I don’t discover the structure except by writing sentences because I can’t think structurally well enough. But I know plenty of good nonfiction writers. Some actually use Roman-numeral outlines, and they wouldn’t even know how to begin without it.

If you really ask writers, at least most of the ones I know— and people are always interested and want to know what you do— most of them are habits or tics or superstitions we picked up between the ages of 15 and 25, often in school. I think at a certain point, part of one’s linguistic nervous system gets hardened over that time or something, but it’s all different.

I would think for argumentative writing it would be very difficult, at a certain point, not to put it into some kind of outline form.

Were it me, I see doing it in the third or fourth draft as part of the “Oh my God, is what I’m saying making any sense at all? Can somebody who’s reading it, who can’t read my mind, fit it into some sort of schematic structure of argument?”

I think a more sane person would probably do that at the beginning. But I don’t know that anybody would be able to get away with . . . Put it this way: if you couldn’t do it, if you can’t put . . . If you’re writing an argumentative thing, which I think people in your discipline are, if you couldn’t, if forced, put it into an outline form, you’re in trouble.

Commenting on what constitutes a good opening in argumentative writing, Wallace offers:

A good opener, first and foremost, fails to repel. Right? So it’s interesting and engaging. It lays out the terms of the argument, and, in my opinion, should also in some way imply the stakes. Right? Not only am I right, but in any piece of writing there’s a tertiary argument: why should you spend your time reading this? Right? “So here’s why the following issue might be important, useful, practical.” I would think that if one did it deftly, one could in a one-paragraph opening grab the reader, state the terms of the argument, and state the motivation for the argument. I imagine most good argumentative stuff that I’ve read, you could boil that down to the opener.

Garner, the interviewer, follows this up by asking “Do you think of most pieces as having this, in Aristotle’s terms, a beginning, a middle, and an end—those three parts?”

I think, like most things about writing, the answer lies on a continuum. I think the interesting question is, how much violence do you do to the piece if you reprise it in a three-act . . . a three-part structure.

The middle should work . . . It lays out the argument in steps, not in a robotic way, but in a way that the reader can tell (a) what the distinct steps or premises of the argument are; and (b), this is the tricky one, how they’re connected to each other. So when I teach nonfiction classes, I spend a disproportionate amount of my time teaching the students how to write transitions, even as simple ones as however and moreover between sentences. Because part of their belief that the reader can somehow read their mind is their failure to see that the reader needs help understanding how two sentences are connected to each other— and also transitions between paragraphs.

I’m thinking of the argumentative things that I like the best, and because of this situation the one that pops into my mind is Orwell’s “Politics and the English Language.” If you look at how that’s put together, there’s a transition in almost every single paragraph. Right? Like, “Moreover, not only is this offense common, but it is harmful in this way.” You know where he is in the argument, but you never get the sense that he’s ticking off items on a checklist; it’s part of an organic whole. My guess would be, if I were an argumentative writer, that I would spend one draft on just the freaking argument, ticking it off like a checklist, and then the real writing part would be weaving it and making the transitions between the parts of the argument— and probably never abandoning the opening, never letting the reader forget what the stakes are here. Right? Never letting the reader think that I’ve lapsed into argument for argument’s sake, but that there’s always a larger, overriding purpose.

Why are transitions so important?

[pause] Reading is a very strange thing. We get talked to about it and talk explicitly about it in first grade and second grade and third grade, and then it all devolves into interpretation. But if you think about what’s going on when you read, you’re processing information at an incredible rate.

One measure of how good the writing is is how little effort it requires for the reader to track what’s going on. For example, I am not an absolute believer in standard punctuation at all times, but one thing that’s often a big shock to my students is that punctuation isn’t merely a matter of pacing or how you would read something out loud. These marks are, in fact, cues to the reader for how very quickly to organize the various phrases and clauses of the sentence so the sentence as a whole makes sense.

I believe psycholinguists, as part of neuro-science, spend . . . I mean, they hook little sensors up to readers’ eyes and study this stuff. I don’t know much about that, but I do know that when you’re not punctuating effectively for your genre, or when you fail to supply sufficient transitions, you are upping the amount of effort the reader has to make in order . . . forget appreciate . . . simply to understand what it is that you are communicating. My own guess is that at just about the point where that amount— the amount of time that you’re spending on a sentence, the amount of effort— becomes conscious, when you are conscious that this is hard, is the time when college students’ papers begin getting marked down by the prof. Right?

But one of the things I end up saying to the students is, “Realize your professors are human beings. They’re reading these things really fast, but you’re often being graded down for reasons that the professor isn’t consciously aware of because of an immense amount of reading and an immense amount of evaluation of the quality of a piece of writing, the qualities of the person producing it, occur below, just below, the level of consciousness, which is really the way you want it. And one of the things that really good writing does is that it’s able to get across massive amounts of information and various favorable impressions of the communicator with minimal effort on the part of the reader.”

That’s why people use terms like flow or effortless to describe writing that they regard as really superb. They’re not saying effortless in terms of it didn’t seem like the writer spent any work. It simply requires no effort to read it— the same way listening to an incredible storyteller talk out loud requires no effort to pay attention. Whereas when you’re bored, you’re conscious of how much effort is required to pay attention. Does that make sense?

One of the things that makes a really good writer, according to Wallace, is they “can just kind of feel” when to make transitions and when not to.

Which doesn’t mean such creatures are born, but it does mean that’s why practicing and paying attention never stop being important. Right? It’s because we’re training the same part of us that knows how to swing a golf club or shift a standard transmission, things we want to be able to do automatically. So we have to pay attention and learn how to do them so we can quit thinking about them and just do them automatically.

In case you’re wondering, it was Tense Present, DFW’s review of Garner’s book, that sparked their friendship. The full article, before Harper’s cuts, appears in Consider the Lobster and Other Essays.

Quack This Way is an insightful interview with two terrific minds.

“Intelligence to accept or reject what is already presented as knowledge”

“There are some things which cannot be learned quickly and time,
which is all we have, must be paid heavily for their acquiring.”
— Ernest Hemingway, Death in the Afternoon

***

Hemingway contributed much to our wisdom on writing — both general advice on writing and specific advice on writing fiction. He even tracked his daily output on a chart, “so as not to kid myself,” he said. When the writing wasn’t going well, he would often knock off the fiction and answer letters, which gave him a welcome break from “the awful responsibility of writing.”

And those letters were beautiful and often contained gems of timeless wisdom. Hemingway’s response to F. Scott Fitzgerald, who asked his friend for an honest opinion on his book, references a trap that we all fall into — “You see well enough. But you stop listening.”

Death in the Afternoon is Hemingway’s exploration of bullfighting, which was much more than a mere sport to him. The mix of athleticism, artistry, and simplicity, combined with the need to maintain grace under pressure, served as an inspiration for his creative pursuits. Describing the sometimes brutal ritual, he wrote: “The emotional and spiritual intensity and pure classic beauty that can be produced by a man, an animal, and a piece of scarlet serge draped on a stick.” Bullfighting, to Hemingway, was not a simple act but a magnificent performance.

A classic gem of Hemingway wisdom comes to us through the book. Writing on how sometimes the simplest things are the hardest to learn, he says:

A good writer should know as near everything as possible. Naturally he will not. A great enough writer seems to be born with knowledge. But he really is not; he has only been born with the ability to learn in a quicker ratio to the passage of time than other men and without conscious application, and with an intelligence to accept or reject what is already presented as knowledge. There are some things which cannot be learned quickly and time, which is all we have, must be paid heavily for their acquiring. They are the very simplest things and because it takes a man’s life to know them the little new that each man gets from life is very costly and the only heritage he has to leave. Every novel which is truly written contributes to the total knowledge which is there at the disposal of the next writer who comes, but the next writer must pay, always, a certain nominal percentage in experience to be able to understand and assimilate what is available as his birthright and what he must, in turn, take his departure from.

***
Still Curious?

Ernest Hemingway’s 1954 Nobel Acceptance Speech on Working Alone is one of the shortest Nobel acceptance speeches ever.