Tag: Memory

Real vs. Simulated Memories

Software memory is increasingly doing more and more for us. Yet it lacks one important element of human memory: emotion.

This thought-provoking excerpt comes from Mirror Worlds: or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean, a book by David Gelernter recommended by Marc Andreessen.

When an expert remembers a patient, he doesn’t remember a mere list of words. He remembers an experience, a whole galaxy of related perceptions. No doubt he remembers certain words—perhaps a name, a diagnosis, maybe some others. But he also remembers what the patient looked like, sounded like; how the encounter made him feel (confident, confused?) … Clearly these unrecorded perceptions have tremendous information content. People can revisit their experiences, examine their stored perceptions in retrospect. In reducing a “memory” to mere words, and a quick-march parade step of attribute, value, attribute, value at that, we are giving up a great deal. We are reducing a vast mountaintop panorama to a grainy little black-and-white photograph.

There is, too, a huge distance between simulated remembering—pulling cases out of the database—and the real thing. To a human being, an experience means a set of coherent sensations, which are wrapped up and sent back to the storeroom for later recollection. Remembering is the reverse: A set of coherent sensations is trundled out of storage and replayed—those archived sensations are re-experienced. The experience is less vivid on tape (so to speak) than it was in person, and portions of the original may be smudged or completely missing, but nonetheless—the Rememberer gets, in essence, another dose of the original experience. For human beings, in other words, remembering isn’t merely retrieving, it is re-experiencing.

And this fact is important because it obviously impinges (probably in a large way) on how people do their remembering. Why do you “choose” to recall something? Well for one thing, certain memories make you feel good. The original experience included a “feeling good” sensation, and so the tape has “feel good” recorded on it, and when you recall the memory—you feel good. And likewise, one reason you choose (or unconsciously decide) not to recall certain memories is that they have “feel bad” recorded on them, and so remembering them makes you feel bad. (If you don’t believe me check with Freud, who based the better part of a profoundly significant career on this observation, more or less.) It’s obvious that the emotions recorded in a memory have at least something to do with steering your solitary rambles through Memory Woods.

But obviously, the software version of remembering has no emotional compass. To some extent, that’s good: Software won’t suppress, repress or forget some illuminating case because (say) it made a complete fool of itself when the case was first presented. Objectivity is powerful.

On the other hand, we are brushing up here against a limitation that has a distinctly fundamental look. We want our Mirror Worlds to “remember” intelligently—to draw just the right precedent or two from a huge database. But human beings draw on reason and emotion when they perform all acts of remembering. An emotion can be a concise, nuanced shorthand for a whole tangle of facts and perceptions that you never bothered to sort out. How did you feel on your first day at work or school, your child’s second birthday, last year’s first snowfall? Later you might remember that scene; you might be reminded merely by the fact that you now feel the same as you did then. Why do you feel the same? If you think carefully, perhaps you can trace down the objective similarities between the two experiences. But their emotional resemblance was your original clue. And it’s quite plausible that “expertise” works this way also, at least occasionally: I’m reminded of a past case, not because of any objective similarity, but rather because I now feel the same as I did then.

Remember Not to Trust Your Memory


Memories are the stories that we tell ourselves about the past. Sometimes those stories get adjusted, and sometimes they leave things out.

In an interesting passage in Think: Why You Should Question Everything, Guy P. Harrison talks about the fallibility of memory.

Did you know that you can’t trust even your most precious memories?

They may come to you in great detail and feel 100 percent accurate, but it doesn’t matter. They easily could be partial or total lies that your brain is telling you. Really, the personal past that your brain is supposed to be keeping safe for you is not what you think it is. Your memories are pieces and batches of information that your brain cobbles together and serves up to you, not to present the past as accurately as possible, but to provide you with information that you will likely find to be useful in the present. Functional value, not accuracy, is the priority. Your brain is like some power-crazed CIA desk jockey who feeds you memories on a need-to-know basis only. Daniel Schacter, a Harvard memory researcher, says that when the brain remembers, it does so in a way that is similar to how an archaeologist reconstructs a past scene relying on an artifact here, an artifact there. The end result might be informative and useful, but don’t expect it to be perfect. This is important because those who don’t know anything about how memory works already have one foot in fantasyland. Most people believe that our memory operates in a way that is similar to a video camera. They think that the sights, sounds, and feelings of our experiences are recorded on something like a hard drive in their heads. Totally wrong. When you remember your past, you don’t get to watch an accurately recorded replay.

To describe to people how memory really works, Harrison puts it this way:

Imagine a very tiny old man sitting by a very tiny campfire somewhere inside your head. He’s wearing a worn and raggedy hat and has a long, scruffy, gray beard. He looks a lot like one of those old California gold prospectors from the 1800s. He can be grumpy and uncooperative at times, but he’s the keeper of your memories and you are stuck with him. When you want or need to remember something from your past, you have to go through the old codger. Let’s say you want to recall that time when you scored the winning goal in a middle-school soccer match. You have to tap the old coot on the shoulder and ask him to tell you about it. He usually responds with something. But he doesn’t read from a faithfully recorded transcript, doesn’t review a comprehensive photo archive to create an accurate timeline, and doesn’t double-check his facts before speaking. He definitely doesn’t play a video recording of the game for you. Typically, he just launches into a tale about your glorious goal that won the big game. He throws up some images for you, so it’s kind of like a lecture or slideshow. Nice and useful, perhaps, but definitely not reliable.

Thinking Straight in the Age of Information Overload

The Organized Mind: Thinking Straight in the Age of Information Overload, a book by Daniel Levitin, explores “how humans have coped with information and organization from the beginning of civilization. … It’s also the story of how the most successful members of society—from successful artists, athletes, and warriors, to business executives and highly credentialed professionals—have learned to maximize their creativity, and efficiency, by organizing their lives so that they spend less time on the mundane, and more time on the inspiring, comforting, and rewarding things in life.”


Memory is fallible. We don’t just remember things wrongly; “we don’t even know we’re remembering them wrongly.”

The first humans who figured out how to write things down around 5,000 years ago were in essence trying to increase the capacity of their hippocampus, part of the brain’s memory system. They effectively extended the natural limits of human memory by preserving some of their memories on clay tablets and cave walls, and later, papyrus and parchment. Later, we developed other mechanisms—such as calendars, filing cabinets, computers, and smartphones—to help us organize and store the information we’ve written down. When our computer or smartphone starts to run slowly, we might buy a larger memory card. That memory is both a metaphor and a physical reality. We are off-loading a great deal of the processing that our neurons would normally do to an external device that then becomes an extension of our own brains, a neural enhancer.

These external memory mechanisms are generally of two types, either following the brain’s own organizational system or reinventing it, sometimes overcoming its limitations. Knowing which is which can enhance the way we use these systems, and so improve our ability to cope with information overload.

And once memory became external (written down and stored) our attention systems “were freed up to focus on something else.”

But we need a place (and a system) to organize all of this information.

The indexing problem is that there are several possibilities about where you store this report, based on your needs: It could be stored with other writings about plants, or with writings about family history, or with writings about cooking, or with writings about how to poison an enemy.
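The indexing problem lends itself to a simple sketch. In the hypothetical Python example below (the names are illustrative, not from the book), a single report is filed under every category that applies, so any one of them leads back to it:

```python
from collections import defaultdict

# Map each category ("place") to the set of documents filed under it.
index = defaultdict(set)

def file_under(doc_id, *categories):
    """File the same document under every category that applies."""
    for category in categories:
        index[category].add(doc_id)

# One report, four equally valid filing places.
file_under("report-001", "plants", "family history", "cooking", "poisons")

# Any of the categories now leads back to the same report.
print(index["cooking"])  # {'report-001'}
print(index["plants"])   # {'report-001'}
```

A physical filing cabinet forces a single choice of drawer; a multi-valued index, like the associative memory described next, does not.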

This brings us to two aspects of the human brain that are not given their due: richness and associative access.

Richness refers to the theory that a large number of the things you’ve ever thought or experienced are still in there, somewhere. Associative access means that your thoughts can be accessed in a number of different ways by semantic or perceptual associations: memories can be triggered by related words, by category names, by a smell, an old song or photograph, or even seemingly random neural firings that bring them up to consciousness.

Being able to access any memory regardless of where it is stored is what computer scientists call random access. DVDs and hard drives work this way; videotapes do not. You can jump to any spot in a movie on a DVD or hard drive by “pointing” at it. But to get to a particular point in a videotape, you need to go through every previous point first (sequential access). Our ability to randomly access our memory from multiple cues is especially powerful. Computer scientists call it relational memory. You may have heard of relational databases—that’s effectively what human memory is.
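The access-pattern difference is easy to make concrete. In this toy Python sketch (purely illustrative), reaching minute 90 of a “videotape” means stepping through every minute up to and including it, while a random-access medium indexes straight to the spot:

```python
tape = list(range(120))  # 120 "minutes" of footage

def sequential_seek(tape, minute):
    """Videotape-style: pass through every earlier point to reach the target."""
    steps = 0
    for m in tape:
        steps += 1
        if m == minute:
            return steps

def random_seek(tape, minute):
    """DVD/hard-drive-style: index straight to the spot in one step."""
    return tape[minute], 1

print(sequential_seek(tape, 90))  # 91 points visited to reach minute 90
print(random_seek(tape, 90))      # (90, 1)
```

The cost of sequential access grows with the position of the target; random access costs the same wherever the target sits, which is what makes retrieval from multiple cues so cheap.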


Having relational memory means that if I want to get you to think of a fire truck, I can induce the memory in many different ways. I might make the sound of a siren, or give you a verbal description (“a large red truck with ladders on the side that typically responds to a certain kind of emergency”).

We categorize objects in a seemingly infinite number of ways. Each of those ways “has its own route to the neural node that represents fire truck in your brain.” Take a look at one way we can think of a fire truck.


Thinking about one memory or association activates more. This can be both a strength and a weakness.

If you are trying to retrieve a particular memory, the flood of activations can cause competition among different nodes, leaving you with a traffic jam of neural nodes trying to get through to consciousness, and you end up with nothing.
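Both sides can be sketched with a toy spreading-activation model (illustrative only, not a cognitive model; the associations are made up). Several cues converging on the fire-truck node make recall easy, while a lone cue shared among many nodes produces the “traffic jam” of competing candidates:

```python
# Each cue activates every node associated with it.
associations = {
    "siren":  {"fire truck", "ambulance", "police car"},
    "red":    {"fire truck", "apple", "stop sign"},
    "ladder": {"fire truck", "house painting"},
}

def recall(cues):
    """Sum activation per node; a node surfaces only if it clearly wins."""
    activation = {}
    for cue in cues:
        for node in associations.get(cue, set()):
            activation[node] = activation.get(node, 0) + 1
    if not activation:
        return None
    winner = max(activation, key=activation.get)
    # A tie among top candidates models the "traffic jam": competition
    # among equally activated nodes, and nothing reaches consciousness.
    contenders = [n for n, a in activation.items() if a == activation[winner]]
    return winner if len(contenders) == 1 else None

print(recall(["siren", "red", "ladder"]))  # 'fire truck' — converging cues win
print(recall(["siren"]))                   # None — three nodes tie, nothing surfaces
```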

Organizing Our Lives

The ancient Greeks came up with memory palaces and the method of loci to improve memory. The Egyptians became experts at externalizing information, inventing perhaps the biggest pre-Google repository of knowledge: the library.

We don’t know why these simultaneous explosions of intellectual activity occurred when they did (perhaps daily human experience had hit a certain level of complexity). But the human need to organize our lives, our environment, even our thoughts, remains strong. This need isn’t simply learned, it is a biological imperative— animals organize their environments instinctively.

But the odd thing about the mind is that it doesn’t, on its own, organize things the way you might want it to. It’s largely an unconscious process.

It comes preconfigured, and although it has enormous flexibility, it is built on a system that evolved over hundreds of thousands of years to deal with different kinds and different amounts of information than we have today. To be more specific: The brain isn’t organized the way you might set up your home office or bathroom medicine cabinet. You can’t just put things anywhere you want to. The evolved architecture of the brain is haphazard and disjointed, and incorporates multiple systems, each of which has a mind of its own (so to speak). Evolution doesn’t design things and it doesn’t build systems— it settles on systems that, historically, conveyed a survival benefit (and if a better way comes along, it will adopt that). There is no overarching, grand planner engineering the systems so that they work harmoniously together. The brain is more like a big, old house with piecemeal renovations done on every floor, and less like new construction.

Consider this, then, as an analogy: You have an old house and everything is a bit outdated, but you’re satisfied. You add a room air conditioner during one particularly hot summer. A few years later, when you have more money, you decide to add a central air-conditioning system. But you don’t remove that room unit in the bedroom—why would you? It might come in handy and it’s already there, bolted to the wall. Then a few years later, you have a catastrophic plumbing problem—pipes burst in the walls. The plumbers need to break open the walls and run new pipes, but your central air-conditioning system is now in the way, where some of their pipes would ideally go. So they run the pipes through the attic, the long way around. This works fine until one particularly cold winter when your uninsulated attic causes your pipes to freeze. These pipes wouldn’t have frozen if you had run them through the walls, which you couldn’t do because of the central air-conditioning. If you had planned all this from the start, you would have done things differently, but you didn’t—you added things one thing at a time, as and when you needed them.

Or you can use Sherlock Holmes’ analogy of a memory attic. As Holmes tells Watson, “I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose.”

Levitin argues that we should learn “how our brain organizes information so that we can use what we have, rather than fight against it.” We do this primarily through the key processes of encoding and retrieval.

(Our brains are) built as a hodgepodge of different systems, each one solving a particular adaptive problem. Occasionally they work together, occasionally they’re in conflict, and occasionally they aren’t even talking to one another. Two of the key ways that we can control and improve the process are to pay special attention to the way we enter information into our memory— encoding—and the way we pull it out— retrieval.

We’re busier than ever. That’s not to say it’s all information overload; there are arguments as to why that concept doesn’t hold. But our internal to-do list is never satisfied. We’re overwhelmed with things disguised as wisdom or even information, and we’re forced to sort through the nonsense. Levitin implies that one consequence of this is that we’re losing things. Our keys. Our driver’s licenses. Our iPhones. And it’s not just physical things: “we also forget things we were supposed to remember, important things like the password to our e-mail or a website, the PIN for our cash cards—the cognitive equivalent of losing our keys.”

These are important, hard-to-replace things.

We don’t tend to have general memory failures; we have specific, temporary memory failures for one or two things. During those frantic few minutes when you’re searching for your lost keys, you (probably) still remember your name and address, where your television set is, and what you had for breakfast—it’s just this one memory that has been aggravatingly lost. There is evidence that some things are typically lost far more often than others: We tend to lose our car keys but not our car, we lose our wallet or cell phone more often than the stapler on our desk or soup spoons in the kitchen, we lose track of coats and sweaters and shoes more often than pants. Understanding how the brain’s attentional and memory systems interact can go a long way toward minimizing memory lapses.

These simple facts about the kinds of things we tend to lose and those that we don’t can tell us a lot about how our brains work, and a lot about why things go wrong.

The way this works is fascinating. Levitin also hits on a topic that has long interested me. “Companies,” he writes, “are like expanded brains, with individual workers functioning something like neurons.”

Companies tend to be collections of individuals united to a common set of goals, with each worker performing a specialized function. Businesses typically do better than individuals at day-to-day tasks because of distributed processing. In a large business, there is a department for paying bills on time (accounts payable), and another for keeping track of keys (physical plant or security). Although the individual workers are fallible, systems and redundancies are usually in place, or should be, to ensure that no one person’s momentary distraction or lack of organization brings everything to a grinding halt. Of course, business organizations are not always perfectly organized, and occasionally, through the same cognitive blocks that cause us to lose our car keys, businesses lose things, too—profits, clients, competitive positions in the marketplace.

In today’s world it’s hard to keep up. We have PINs, phone numbers, email addresses, multiple to-do lists, small physical objects to keep track of, kids to pick up, books to read, videos to watch, nearly infinite websites to browse, and so on. Most of us, however, are still organizing and maintaining all this knowledge with systems put into place in a less information-rich time.

The Organized Mind: Thinking Straight in the Age of Information Overload shows us how to organize our time better, “not just so we can be more efficient but so we can find more time for fun, for play, for meaningful relationships, and for creativity.”

Harold Macmillan: The Fragility of Memory

Harold Macmillan beautifully describes the fragility of human memory in the foreword to Geoffrey Madan’s Notebooks, an early 1980s commonplace book.

Those of us who have reached extreme old age become gradually reconciled to increasing infirmities, mental and physical. The body develops, with each passing year, fresh weaknesses. Our legs no longer carry us; eyesight begins to fail, and hearing becomes feebler. Even with the mind, the process of thought seems largely to decrease in its power and intensity; and if we are wise we come to accept these frailties and develop, like all invalids, our own particular skills in avoiding or minimizing them. But there is one aspect of the mind which seems to operate in a peculiar fashion. While memory becomes gradually weaker in respect of recent happenings and even of the leading events of middle age, yet it appears to become increasingly strong as regards the years of childhood and youth. It is as if the new entries played into an ageing computer become gradually less effective while the original stores remain as strong as ever.

This phenomenon has the result that as the memory of so many much more important matters begins to fade, those of many years ago become sharper than before. The recent writings on the tablets of the mind grow quickly weak as if made by a light brush or soft pencil. Those of the earliest years become more and more deeply etched. The pictures which they recall are as fresh as ever. Indeed they seem to strengthen with each passing year.

He goes on to offer insight on how people with special temperaments often fall through the cracks of history.

In every age there have been men whose memory will always be recorded for outstanding achievements in war, in art, in politics, or in literature. There will be other figures — more shadowy, more difficult to reconstruct, and yet as important in their contribution to the social and intellectual life of their time. Those who have not known them, or heard them speak, find it difficult to realize why the memory of such figures is cherished by their contemporaries.

Geoffrey Madan’s Notebooks thus serves two purposes. For his friends, it makes his peculiarities easy to recall. For the rest of us, it offers a glimpse into the commonplace book of an interesting bibliophile with a peculiar sense of humour.

Daniel Kahneman Explains The Machinery of Thought

Israeli-American psychologist and Nobel Laureate Daniel Kahneman is the founding father of modern behavioral economics. His work has influenced how we see thinking, decisions, risk, and even happiness.

In Thinking, Fast and Slow, his “intellectual memoir,” he shows us in his own words some of his enormous body of work.

Part of that body includes a description of the “machinery of … thought,” which divides the brain into two agents, called System 1 and System 2, which “respectively produce fast and slow thinking.” For our purposes these can also be thought of as intuitive and deliberate thought.

The Two Systems

Psychologists have been intensely interested for several decades in the two modes of thinking evoked by the picture of the angry woman and by the multiplication problem, and have offered many labels for them. I adopt terms originally proposed by the psychologists Keith Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2.

  • System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
  • System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

If asked to pick which thinker we are, we pick System 2. However, as Kahneman points out:

The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.

System One
The operations of System 1 vary by individual and are often “innate skills that we share with other animals.”

We are born prepared to perceive the world around us, recognize objects, orient attention, avoid losses, and fear spiders. Other mental activities become fast and automatic through prolonged practice. System 1 has learned associations between ideas (the capital of France?); it has also learned skills such as reading and understanding nuances of social situations. Some skills, such as finding strong chess moves, are acquired only by specialized experts. Others are widely shared. Detecting the similarity of a personality sketch to an occupational stereotype requires broad knowledge of the language and the culture, which most of us possess. The knowledge is stored in memory and accessed without intention and without effort.

System Two
System 2 engages when we do something that does not come naturally, something that requires continuous exertion.

In all these situations you must pay attention, and you will perform less well, or not at all, if you are not ready or if your attention is directed inappropriately.

Paying more attention is not really the answer, as attention is mentally expensive and can make people “effectively blind, even to stimuli that normally attract attention.” This is the point Christopher Chabris and Daniel Simons make in their book The Invisible Gorilla: not only are we blind to what is plainly obvious until someone points it out, but we also fail to see that we are blind in the first place.

The Division of Labour

Systems 1 and 2 are both active whenever we are awake. System 1 runs automatically and System 2 is normally in a comfortable low-effort mode, in which only a fraction of its capacity is engaged. System 1 continuously generates suggestions for System 2: impressions, intuitions, intentions, and feelings. If endorsed by System 2, impressions and intuitions turn into beliefs, and impulses turn into voluntary actions. When all goes smoothly, which is most of the time, System 2 adopts the suggestions of System 1 with little or no modification. You generally believe your impressions and act on your desires, and that is fine— usually.

When System 1 runs into difficulty, it calls on System 2 to support more detailed and specific processing that may solve the problem of the moment. System 2 is mobilized when a question arises for which System 1 does not offer an answer, as probably happened to you when you encountered the multiplication problem 17 × 24. You can also feel a surge of conscious attention whenever you are surprised. System 2 is activated when an event is detected that violates the model of the world that System 1 maintains. In that world, lamps do not jump, cats do not bark, and gorillas do not cross basketball courts. The gorilla experiment demonstrates that some attention is needed for the surprising stimulus to be detected. Surprise then activates and orients your attention: you will stare, and you will search your memory for a story that makes sense of the surprising event. System 2 is also credited with the continuous monitoring of your own behavior—the control that keeps you polite when you are angry, and alert when you are driving at night. System 2 is mobilized to increased effort when it detects an error about to be made. Remember a time when you almost blurted out an offensive remark and note how hard you worked to restore control. In summary, most of what you (your System 2) think and do originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.

The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however, systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off.


Conflict between an automatic reaction and an intention to control it is common in our lives. We are all familiar with the experience of trying not to stare at the oddly dressed couple at the neighboring table in a restaurant. We also know what it is like to force our attention on a boring book, when we constantly find ourselves returning to the point at which the reading lost its meaning. Where winters are hard, many drivers have memories of their car skidding out of control on the ice and of the struggle to follow well-rehearsed instructions that negate what they would naturally do: “Steer into the skid, and whatever you do, do not touch the brakes!” And every human being has had the experience of not telling someone to go to hell. One of the tasks of System 2 is to overcome the impulses of System 1. In other words, System 2 is in charge of self-control.


The question that is most often asked about cognitive illusions is whether they can be overcome. The message of these examples is not encouraging. Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. Biases cannot always be avoided, because System 2 may have no clue to the error. Even when cues to likely errors are available, errors can be prevented only by the enhanced monitoring and effortful activity of System 2. As a way to live your life, however, continuous vigilance is not necessarily good, and it is certainly impractical. Constantly questioning our own thinking would be impossibly tedious, and System 2 is much too slow and inefficient to serve as a substitute for System 1 in making routine decisions. The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high. The premise of this book is that it is easier to recognize other people’s mistakes than our own.

Still Curious? Thinking, Fast and Slow is a tour-de-force when it comes to thinking.

Commonplace Books as a Source for Networked Knowledge and Combinatorial Creativity

There is an old saying that the truest form of poverty is “when you have occasion for anything, you can’t use it, because you know not where it is laid.”

The flood of information is nothing new.

“In fact,” the Harvard historian Ann Blair writes in her book Too Much to Know: Managing Scholarly Information Before the Modern Age, “many of our current ways of thinking about and handling information descend from patterns of thought and practices that extend back for centuries.” Her book explores “the history of one of the longest-running traditions of information management — the collection and arrangement of textual excerpts designed for consultation.” She calls them reference books.

Large collections of textual material, consisting typically of quotations, examples, or bibliographical references, were used in many times and places as a way of facilitating access to a mass of texts considered authoritative. Reference books have sometimes been mined for evidence about commonly held views on specific topics or the meanings of words, and some (encyclopedias especially) have been studied for the genre they formed.


No doubt we have access to and must cope with a much greater quantity of information than earlier generations on almost every issue, and we use technologies that are subject to frequent change and hence often new. Nonetheless, the basic methods we deploy are largely similar to those devised centuries ago in early reference books. Early compilations involved various combinations of four crucial operations: storing, sorting, selecting, and summarizing, which I think of as the four S’s of text management. We too store, sort, select, and summarize information, but now we rely not only on human memory, manuscript, and print, as in earlier centuries, but also on computer chips, search functions, data mining, and Wikipedia, along with other electronic techniques.


The Florilegium and Commonplace Books

One of the original methods to keep, share, and remix ideas was the florilegium: a compilation of excerpts from other writings, drawn mostly from religious, philosophical, and sometimes classical texts. The word florilegium literally means a gathering of flowers—flos (flower) and legere (to gather).

The leading Renaissance humanists, who experienced perhaps the first wave of information overload, were fans of commonplace books as a method of study and note-taking. Generally, these notebooks were kept private and filled with the likes of the classical Roman authors such as Cicero, Virgil, and Seneca.

“In his influential De Copia (1512),” writes professor Richard Yeo, “Erasmus advised that an abundant stock of quotations and maxims from classical texts be entered under various loci (places) to assist free-flowing oratory.”

Arranged under ‘Heads’ and recorded as ‘common-places’ (loci communes), these commonplace books could be consulted for speeches and written compositions designed for various situations — in the law court, at ceremonial occasions, or in the dedication of a book to a patron. Typical headings included the classical topics of honour, virtue, beauty, friendship, and Christian ones such as God, Creation, faith, hope, or the names of the virtues and vices.

The aim of these books wasn’t regurgitation but rather combinatorial creativity, as people were encouraged to improvise on themes and topics. Gathering raw material alone — in this case, information — is not enough. We must transform it into something new. It is in this light that Seneca advised copying the bee and Einstein advised combinatorial play.


A Move Away From Memory

Theologian Jean Le Clerc, writing about John Locke’s use of commonplace books, said:

In all sorts of learning, and especially in the study of languages, the memory is the treasury or store-house, … but lest the memory should be oppressed or over-burthen’d by too many things, order and method are to be called into its assistance. So that when we extract any thing out of an author, which is like to be of future use, we may be able to find it without any trouble. For it would be of little purpose to spend our time in [the] reading of books, if we could not apply what we read to our use.

During the Renaissance, commonplace books were used to enhance the memory. Yeo writes,

This reflected the ancient Greek and Roman heritage. In his Topica, Aristotle formulated a doctrine of ‘places’ (topoi or loci) that incorporated his ten categories. A link was soon drawn between this doctrine of ‘places’ (which were, for Aristotle, ‘seats of arguments’, not quotations from authors) and the art of memory. Cicero built on this in De Oratore, explaining that ‘it is chiefly order that gives distinctness to memory’; and Quintilian’s Institutio Oratoria became an influential formulation. This stress on order and sequence was the crux of what came to be known as ‘topical memory’, cultivated by mnemonic techniques (‘memoria technica’) involving the association of ideas with visual images. These ideas, forms of argument, or literary tropes were ‘placed’ in the memory, conceived in spatial terms as a building, a beehive, or a set of pigeon holes. This imagined space was then searched for the images and ideas it contained…. In the ancient world, the practical application of this art was training in oratory; yet Cicero stressed that the good orator needed knowledge, not just rhetorical skill, so that memory had to be trained to store and retrieve illustrations and arguments of various kinds. Although Erasmus distrusted the mnemonic arts, like all the leading Renaissance humanists, he advocated the keeping of commonplace books as an aid to memory.

While calling memory “the store-house of our ideas,” John Locke recognized its limitations. On the one hand, it was an incredible source of knowledge. On the other hand, it was weak and fragile. He knew that memory faded over time and became harder to retrieve, which made it less valuable. In a way the internet age would applaud, Locke’s focus was retrieval, not recall. His system was a form of pre-industrial Google.

Locke saw commonplace books not as a means to improve memory but as an aid in recollecting complex information gathered over years from multidisciplinary subjects. If only Farnam Street existed in his day.

Yeo writes:

Locke sometimes refers to his bad memory. This might seem to endorse the humanist conception of commonplace books as memory aids, but Locke does not believe that memory can be trained in ways that guarantee transfer across subjects and situations. This separates him from many of his near contemporaries for whom the commonplace book was still a stimulus in training memory to recall and recite selected quotations.
In his essay “Extraordinary Commonplaces,” Robert Darnton comments on the practice at the time, which was to copy pithy passages into notebooks, “adding observations made in the course of daily life.”

Unlike modern readers, who follow the flow of a narrative from beginning to end, early modern Englishmen read in fits and starts and jumped from book to book. They broke texts into fragments and assembled them into new patterns by transcribing them in different sections of their notebooks. Then they reread the copies and rearranged the patterns while adding more excerpts. Reading and writing were therefore inseparable activities. They belonged to a continuous effort to make sense of things, for the world was full of signs: you could read your way through it; and by keeping an account of your readings, you made a book of your own, one stamped with your personality. … The era of the commonplace book reached its peak in the late Renaissance, although commonplacing as a practice probably began in the twelfth century and remained widespread among the Victorians. It disappeared long before the advent of the sound bite.

Commonplace books are thus to be mined not only for information on how people thought but also as a source of creativity. Darnton continues:

By selecting and arranging snippets from a limitless stock of literature, early modern Englishmen gave free play to a semi-conscious process of ordering experience. The elective affinities that bound their selection into patterns reveal an epistemology — a process of knowing — at work below the surface.


The Art of Putting Things in Order

As for what to write in the commonplace books themselves, Le Clerc advised that we: (1) extract only those things which are “choice and excellent,” for either the substance or the expression; and (2) don’t write out too much, and mark the place where we found it so we can come back to it:

At the entrance indeed upon any study, when the judgment is not sufficiently confirm’d, nor the stock of knowledge over large, so that the students are not very well acquainted with what is worth collecting, scarce anything is extracted, but what will be useful but for a little while, because as the judgment grows ripe, the things are despis’d which before were had in esteem. Yet it is of service to have collections of this kind, both that students may learn the art of putting things in order, as also the better to retain what they read.

But here are two things carefully to be observed; the first is, that we extract only those things which are choice and excellent, either for the matter itself or else for the elegancy of the expression, and not what comes next; for that labour would abate our desire to go on with our readings; neither are we to think that all those things are to be writ out which are called … sentences. Those things alone are to be picked out, which we cannot so readily call to mind, or for which we should want proper words and expressions.

The second thing which I would have taken notice of, is, that you don’t write out too much, but only what is most worthy of observation, and to mark the place of the author from whence you extracted it, for otherwise it will cause the loss of too much time.

Neither ought anything to be collected whilst you are busied in reading; if by taking the pen in hand the thread of your reading be broken off, for that will make the reading both tedious and unpleasant.

The places we design to extract from are to be marked upon a piece of paper, that we may do it after we have read the book out; neither is it to be done just after the first reading of the book, but when we have read it a second time.

These things it’s likely may seem minute and trivial, but without ’em great things cannot subsist; and these being neglected cause very great confusion both of memory and judgment, and that which above all things is most to be valued, loss of time.

Some who otherwise were men of most extraordinary parts, by the neglect of these things have committed great errors, which if they had been so happy as to have avoided, they would have been much more serviceable to the learned world, and so consequently to mankind.

And in good truth, they who despise such things, do it not so much from any greater share of wit that they have than their neighbours, as from want of judgment; whence it is that they do not well understand how useful things order and method are.

Locke also advised “to take notice of a place in an author, from whom I quote something, I make use of this method: before I write anything, I put the name of the author in my commonplace book, and under that name the title of the treatise, the size of the volume, and the time and place of its edition, and the number of pages that the whole book contains.”

This number of pages serves me for the future to mark the particular treatise and the edition I made use of. I have no need to mark the place, otherwise than in setting down the number of the page from whence I have drawn what I have wrote, just above the number of pages contained in the whole volume.
