Category: Learning

Finding Truth in History

If we are to learn from the past, does the account of it have to be true? One would like to think so. Otherwise you might be preparing for the wrong battle. There you are, geared up for mountains, and instead you find swamps. You’ve done a bunch of reading, trying to understand the terrain you are about to enter, only to find it useless. The books must have been written by crazy people. You are upset and confused. Surely there must be some reliable, objective account of the past. How are you supposed to prepare for the possibilities of the future if you can’t trust the accuracy of the reports on anything that has come before?

Why do we study history, anyway? Why keep a record of things that have happened? We fear that if we don’t, we are doomed to repeat history; but often that doesn’t seem to stop us from repeating it. And we have an annoying tendency to remember only the things that don’t really challenge or upset us. But still we try to capture what we can, through museums and ceremonies and study, because somehow we believe that eventually we will come to learn something about why things happen the way they do. And armed with this knowledge, we might even be able to shape our future.

This “problem of historical truth” is explored by Isaiah Berlin in The Hedgehog and the Fox: An Essay on Tolstoy’s View of History. He explains that Tolstoy was driven by a “desire to penetrate to first causes, to understand how and why things happen as they do and not otherwise.” We can understand this goal – because if we know how the world really works, we know everything.

Of course, it’s not that simple, and — spoiler alert — Tolstoy never figured it out. But Berlin’s analysis can illuminate the challenges we face with history and help us find something to learn from.

Tolstoy’s main problem with historical efforts at the time was that they were “nothing but a collection of fables and useless trifles. … History does not reveal causes; it presents only a blank succession of unexplained events.” Seen like this, the study of history is a waste of time, other than for trivia games or pub quizzes. Being able to recite what happened is supremely uninteresting if you can’t begin to understand why it happened in the first place.

But Tolstoy was also an expert at tearing down the theories of anyone who attempted to make sense of history and provide the why. He thought that they “must be imposters, since no theories can possibly fit the immense variety of possible human behavior, the vast multiplicity of minute, undiscoverable causes and effects which form that interplay of men and nature which history purports to record.”

And therein lies the problem for Tolstoy. History is more than just factoids, but its complexity makes it difficult for us to learn exactly why things happened the way they did. A battle is more than dates and times, but trying to trace the real impact of the decisions of Napoleon or Churchill is a fool’s errand. There is too much going on – too many decisions and interactions happening in every moment – for us to be able to conclude cause and effect with any certainty. If you leave an ice cube to melt on a table, you can’t untangle from the puddle exactly what happened with each molecule. That doesn’t mean we can’t learn from history; it means only that we need to be careful with the lessons we draw and the confidence we have in them.

Berlin explains:

There is a particularly vivid simile [in War and Peace] in which the great man is likened to the ram whom the shepherd is fattening for slaughter. Because the ram duly grows fatter, and perhaps is used as a bellwether for the rest of the flock, he may easily imagine that he is the leader of the flock, and that the other sheep go where they go solely in obedience to his will. He thinks this and the flock may think it too. Nevertheless the purpose of his selection is not the role he believes himself to play, but slaughter – a purpose conceived by beings whose aims neither he nor the other sheep can fathom. For Tolstoy, Napoleon is just such a ram, and so to some degree is Alexander, and indeed all the great men of history.

Arguing against this view of history was N. I. Kareev, who said:

…it is men, doubtless, who make social forms, but these forms – the ways in which men live – in their turn affect those born into them; individual wills may not be all-powerful, but neither are they totally impotent, and some are more effective than others. Napoleon may not be a demigod, but neither is he a mere epiphenomenon of a process which would have occurred unaltered without him.

This means that studying the past is important for making better decisions in the future. If we can’t always follow the course of cause and effect, we can at least discover some very strong correlations and act accordingly.

We have a choice between these two perspectives: Either we can treat history as an impenetrable fog, or we can figure out how to use history while accepting that each day might reveal more and we may have to update our thinking.

Sound familiar? Sounds a lot like the scientific method to me – a preference for updating the foundation of knowledge versus being adrift in chaos or attached to a raft that cannot be added to.

Berlin argues that Tolstoy spent his life trying to find a theory strong enough to unify everything. A way to build a foundation so strong that all arguments would crumble against it. Although that endeavor was ambitious, we don’t need to fully understand the why of history in order to be able to learn from it. We don’t need the foundation of the past to be solid and fixed in order to gain some insight into our future. We can still find some truth in history.


Funnily enough, Berlin clarifies that Tolstoy “believed that only by patient empirical observation could any knowledge be obtained.” But he also believed “that simple people often know the truth better than learned men, because their observation of men and nature is less clouded by empty theories.”

Unhelpfully, Tolstoy’s position amounts to “the more you know, the less you learn.”

The answer to finding truth in history is not to be found in Tolstoy’s writing. He was looking for “something too indivisibly simple and remote from normal intellectual processes to be assailable by the instruments of reason, and therefore, perhaps, offering a path to peace and salvation.” He never was able to conclude what that might be.

But there might be an answer in how Berlin interprets Tolstoy’s major dissonance in life, the discrepancy that drove him and was never resolved. Tolstoy “tried to resolve the glaring contradiction between what he believed about men and events, and what he thought he believed, or ought to believe.”

Finding truth in history is about understanding that this truth is not absolute. In this sense, truth is based on perspective. The perspective of the person who captured it and the person interpreting it. And the perspective of the translators and editors and primary sources. We don’t get to be invisible observers of moments in the past, and we don’t get to go into other minds. The best we can do is keep our eyes open and keep our biases in check. And what history can teach us is found not just in the moments it tries to describe, but also in what we choose to look at and how we choose to represent it.

Loops of Progress, or How Modern Are You?

On your way to work, you grab breakfast from one of the dozen coffee shops you pass. Most of the goods you buy get delivered right to your door. If you live in a large city and have a car, you barely use it, preferring Uber or ride-sharing services. You feel modern. Your parents didn’t do any of this. Most of their meals were consumed at home, and they took their cars everywhere, in particular to purchase all the stuff they needed.

You think of your life as being so different from theirs. It is. You think of this as progress. It isn’t.

We tend to consider social development as occurring in a straight line: we progressed from A to B to C, with each step being more advanced and, we assume, better than the one before. This perception isn’t always accurate, though. Part of learning from the past is appreciating that we humans have tried many different ways to organize ourselves, with lots of repetitions. If we want success now, we need to understand our past efforts in order to see what changes might be needed this time around.

Would you be surprised to learn that in Victorian London (the nineteenth century), the vast majority of people ate their food on the run? That ride sharing was common? Or that you could purchase everything you needed without ever leaving your house?

To be fair, these situations didn’t exist in the exact instantiations that they do today; obviously, none of today’s enabling technology existed back then. But while the parallels are not exact, they are worth exploring, if only to remind us that no matter the array of pressures we face as a society, there are only so many ways we can organize ourselves.

To start with, street food was the norm. All classes except the very wealthy (thus, essentially, anyone who worked) ate on the run, at outdoor stalls or indoor counters. Food was purchased from street vendors or chophouses (the Victorian equivalent of fast-food outlets) and consumed outside of the home, on the commute to or from work.

Why? Why would everyone from the middle classes to the working poor eat out?

Unlike today, eating out was cheaper then. As Judith Flanders explains in The Victorian City:

Today, eating out is more expensive than cooking at home, but in the nineteenth century the situation was reversed. Most of the working class lived in rooms, not houses. They might have had access to a communal kitchen, but more often they cooked in their own fireplace: to boil a kettle before going to work, leaving the fire to burn when there was no one home, was costly, time-consuming and wasteful. … Several factors — the lack of storage space, routine infestations of vermin and being able, because of the cost, to buy food only in tiny quantities — meant that storing any foodstuff, even tea, overnight was unusual.

Even food delivery isn’t new.

Every eating place expected to deliver meals, complete with cutlery, dishes and even condiments, which were brought by waiters who then stayed on, if wanted, to serve. Endless processions of meals passed through the streets daily. … Large sums of money were not necessary for this service.

People need to eat. It’s fundamental. No matter what living conditions we find ourselves in, the drive away from starvation means that we are willing to experiment in how we organize to get our food.

Public transportation took hold in Victorian London and is another interesting point of comparison. Then, its use was not due to a sense of civic responsibility or concerns about the environment. Public transportation succeeded because it was faster. Most cities had grown organically, and streets were not designed for the volume they had to carry in the nineteenth century. There was no flow, and there were no traffic rules. The population was swelling, and the road surfaces were so poor they would be devastating even to today’s SUVs. It was simply painful to get anywhere.

Thus the options exploded. Buses and cabs to get about the city. Stagecoaches and the railroad for longer excursions (and commutes!). And the Underground. Buses “increased the average speed of travel to nearly six miles an hour; with the railway this figure rose to over twelve, sometimes double that.” Public transportation allowed people to move faster, and “therefore, areas that had traditionally been on the edges of London now housed commuters.”

As a direct consequence of the comparable efficiency of the public transportation system, “most people could not imagine ever owning a private carriage. It was not just the cost of the carriage itself, of the horse and its accoutrements — harnesses and so on — but the running costs: the feed and care of the horse, the stabling, as well as the taxes that were imposed on carriages throughout the century.” As well as the staff. A driver, footmen, their salaries and uniforms.

A form of ride-sharing was also common then. For travel outside of the city, one could hire a post-chaise. “A post-chaise was always hired privately, to the passenger’s own schedule, but the chaise, horses, driver and postboys all belonged to the coaching inn or a local proprietor.”

Aside from the cost of owning your own transportation, neither the work day nor the city infrastructure was designed for reliance on individual transport. London in the nineteenth century (and to a large extent today) functioned better with an extensive public transport system.

Finally, living in London in the nineteenth century was very much about survival. There was no social safety net. You worked or you died. And given the concentration of wealth in the top tier of society, there was a lot of competition among the working poor for a slight edge that would mean the difference between living another day and starvation.

This situation is likely part of the reason that sellers went to buyers, rather than the other way around. Unlike today, when so many bookstores are owned by the same company or when a conglomerate makes multiple brands of “unique” luxury goods, a watercress girl owned and sold only the watercress she could carry. And this watercress was no different from the bundles the girl one street over had. The competition to sell was fierce.

And so, as Flanders describes, in the first half of the nineteenth century, street vendors in all neighborhoods sold an astonishing array of goods and services. First chimney sweeps, then milkmaids; “the next sellers were the watercress girls, followed by the costermongers, then the fishmongers’, the butchers’ and the bakers’ boys to take the daily orders.” Next came the guy selling horsemeat.

Other goods regularly available from itinerant sellers in the suburbs included: footstools; embroidery frames; clothes horses, clothes-pegs and clothes line; sponges, chamois leathers, brushes and brooms; kitchen skewers, toasting-forks and other tinware; razors and penknives; trays, keyrings, and small items of jewellery; candlesticks, tools, trivets, pots and pans; bandboxes and hatboxes; blackleading for kitchen ranges and grates, matches and glue; china ornaments and crockery; sheets, shirts, laces, thread, ribbons, artificial flowers, buttons, studs, handkerchiefs; pipes, tobacco, snuff, cigars; spectacles, hats, combs and hairbrushes; firewood and sawdust.

You didn’t have to leave your house to purchase the items that met your daily needs.

This is not to say that Victorian London had everything figured out or that progress is always a loop. For example, there is no time in history in which it was better to be a woman than it is now, and modern medicine and the scientific method are significant steps up from what came before. But reading these accounts of how London functioned almost two hundred years ago hints that a lot of what we consider modern innovations have been tried before.

Maybe ways of organizing come and go depending on time and place. When things are useful, they appear; as needs change, those things disappear. There really is no new way of doing business.

But we can look at the impact of social progress, how it shapes communities, and what contributes to its ebb and flow. Flanders notes that in the second half of the nineteenth century, there was a shift to going out to shop in stores. What changes did this give rise to? And how did those changes contribute to the loop we are experiencing and to our current desire to have everything brought to us?

Zero — Invented or Discovered?

It seems almost a bizarre question. Who thinks about whether zero was invented or discovered? And why is it important?

Answering this question, however, can tell you a lot about yourself and how you see the world.

Let’s break it down.

“Invented” implies that humans created the zero and that without us, the zero and its properties would cease to exist.

“Discovered” means that although the symbol is a human creation, what it represents would exist independently of any human ability to label it.

So do you think of the zero as a purely mathematical function, and by extension think of all math as a human construct like, say, cheese or self-driving cars? Or is math, and the zero, a symbolic language that describes the world, the content of which exists completely independently of our descriptions?

The zero is now a ubiquitous component of our understanding.

The concept is so basic it is routinely mastered by the pre-kindergarten set. Consider the equation 3-3=0. Nothing complicated about that. It is second nature to us that we can represent “nothing” with a symbol. It makes perfect sense now, in 2017, and it’s so common that we forget that zero was a relatively late addition to the number scale.

Here’s a fact that’s amazing to most people: the zero is actually younger than mathematics. Pythagoras’s famous conclusion — that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides — was achieved without a zero. As was Euclid’s entire Elements.

How could this be? It seems surreal, given the importance the zero now has to mathematics, computing, language, and life. How could someone figure out the complex geometry of triangles, yet not realize that nothing was also a number?

Tobias Dantzig, in Number: The Language of Science, offers this as a possible explanation: “The concrete mind of the ancient Greeks could not conceive the void as a number, let alone endow the void with a symbol.” This gives us a good direction for finding the answer to the original question because it hints that you must first understand the concept of the void before you can name it. You need to see that nothingness still takes up space.

It was thought, and sometimes still is, that the number zero was invented in the pursuit of ancient commerce. Something was needed as a placeholder; otherwise, 65 would be indistinguishable from 605 or 6050. The zero represents “no units” of the particular place that it holds. So for that last number, we have six thousands, no hundreds, five tens, and no singles.
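
The placeholder role is easy to make concrete. Here is a minimal sketch (my own illustration, not from the text) of how place-value interpretation works, and why a symbol for “no units” is indispensable:

```python
# Positional notation: each digit's value depends on the place it occupies.
# The zero marks a place that holds "no units" of that power of ten.
def positional_value(digits: str) -> int:
    """Interpret a string of base-10 digits by place value."""
    total = 0
    for place, digit in enumerate(reversed(digits)):
        total += int(digit) * 10 ** place  # units, tens, hundreds, ...
    return total

# Without a placeholder, 65, 605, and 6050 would all be written "65".
assert positional_value("65") == 65
assert positional_value("605") == 6 * 100 + 5          # no tens
assert positional_value("6050") == 6 * 1000 + 5 * 10   # no hundreds, no units
```

The zero contributes nothing to the sum itself; its entire job in this role is to push the other digits into the correct places.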

A happy accident of no great original insight, zero then made its way around the world. In addition to being convenient for keeping track of how many bags of grain you were owed, or how many soldiers were in your army, it turned our number scale into an extremely efficient decimal system. More so than any numbering system that preceded it (and there were many), the zero transformed the power of our other numerals, propelling mathematics into fantastic equations that can explain our world and fuel incredible scientific and technological advances.

But there is, if you look closely, a missing link in this story.

What changed in humanity that made us comfortable with confronting the void and giving it a symbol? And is it reasonable to imagine creating the number without understanding what it represented? Given its properties, can we really think that it started as a placeholder? Or did it contain within it, right from the beginning, the notion of defining the void, of giving it space?

In Finding Zero, Amir Aczel offers some insight. Basically, he claims that the people who discovered the zero must have had an appreciation of the emptiness that it represented. They were labeling a concept with which they were already familiar.

He rediscovered the oldest known zero, on a stone tablet dating from 683 CE in what is now Cambodia.

On his quest to find this zero, Aczel realized that it was far more natural for the zero to first appear in the Far East, rather than in Western or Arab cultures, due to the philosophical and religious understandings prevalent in the region.

Western society was, and still is in many ways, a binary culture. Good and evil. Mind and body. You’re either with us or against us. A patriot or a terrorist. Many of us naturally try to fit our world into these binary understandings. If something is “A,” then it cannot be “not A.” The very definition of “A” is that it is not “not A.” Something cannot be both.

Aczel writes that this duality is not at all reflected in much Eastern thought. He describes the catuskoti, found in early Buddhist logic, that presents four possibilities, instead of two, for any state: that something is, is not, is both, or is neither.

At first, a typical Western mind might rebel against this kind of logic. My father is either bald or not bald. He cannot be both and he cannot be neither, so what is the use of these two other almost nonsensical options?

A closer examination of our language, though, reveals that the expression of the non-binary is understood, and therefore perhaps more relevant than we think. Take, for example, “you’re either with us or against us.” Is it possible to say “I’m both with you and against you”? Yes. It could mean that you are for the principles but against the tactics. Or that you offer support even though doing so conflicts with your values. And to say “I’m neither with you nor against you” could mean that you aren’t supportive of the tactic in question, but won’t do anything to stop it. Or that you just don’t care.
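
The contrast with binary logic can be made concrete. A sketch of the catuskoti as a type with four members rather than two (the names and labels here are my own illustration, not from Aczel’s text):

```python
from enum import Enum

class Stance(Enum):
    """The four possibilities of the catuskoti, applied to
    "with us or against us"."""
    IS = "with you"
    IS_NOT = "against you"
    BOTH = "both with and against you"        # e.g. for the principles, against the tactics
    NEITHER = "neither with nor against you"  # e.g. won't support it, won't stop it

# A binary type admits exactly two states; the catuskoti admits four.
assert len([True, False]) == 2
assert len(Stance) == 4
```

The point is not that four-valued logic replaces the binary, but that some questions simply have more than two coherent answers.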

Feelings, in particular, are a realm where the binary is often insufficient. Watching my children, I know that it’s possible to be both happy and sad, a traditional binary, at the same time. And the zero itself defies binary categorization. It is something and nothing simultaneously.

Aczel reflects on a conversation he had with a Buddhist monk. “Everything is not everything — there is always something that lies outside of what you may think covers all creation. It could be a thought, or a kind of void, or a divine aspect. Nothing contains everything inside it.”

He goes on to conclude that “Here was the intellectual source of the number zero. It came from Buddhist meditation. Only this deep introspection could equate absolute nothingness with a number that had not existed until the emergence of this idea.”

Which is to say, certain properties of the zero likely were understood conceptually before the symbol came about — nothingness was a thing that could be represented. This idea fits with how we treat the zero today; it may represent nothing, but that nothing still has properties. And investigating those properties demonstrates that there is power in the void — it has something to teach us about how our universe operates.

Further contemplation might illuminate that the zero has something to teach us about existence as well. If we accept zero, the symbol, as being discovered as part of our realization about the existence of nothingness, then trying to understand the zero can teach us a lot about moving beyond the binary of alive/not alive to explore other ways of conceptualizing what it means to be.

Let Go of the Learning Baggage

We all want to learn better. That means retaining information, processing it, being able to use it when needed. More knowledge means better instincts; better insights into opportunities for both you and your organization. You will ultimately produce better work if you give yourself the space to learn. Yet often organizations get in the way of learning.

How do we learn how to learn? Usually in school, combined with instructions from our parents, we cobble together an understanding that allows us to move forward through the school years until we graduate into a job. Then, because most initial learning on the job comes from doing rather than from books, we switch to an on-the-fly approach.

Which is usually an absolute failure. Why? In part, because we layer our social values on top and end up with a hot mess of guilt and fear that stymies the learning process.

Learning is necessary for our success and personal growth. But we can’t maximize the time we spend learning because our feelings about what we ‘should’ be doing get in the way.

We are trained by our modern world to organize our day into mutually exclusive chunks called ‘work’, ‘play’, and ‘sleep’. One is done at the office; the other two are not. We are not allowed to move fluidly between these chunks, or to combine them, in our 24-hour day. Lyndon Johnson got to nap at the office in the afternoon, likely because he was President and didn’t have to worry about what his boss was going to think. Most of us don’t have this option. And now, amid the open-office debacle, we can’t even have a quiet ten minutes of rest at our desks.

We have become trained to equate working with doing. Thus the ‘doing’ has value. We deserve to get paid for this. And, it seems, only this.

What does this have to do with learning?

It’s this same attitude that we apply to the learning process when we are older, with similarly unsatisfying results.

If we are learning for work, then in our brains, learning = work. So we have to do it during the day, at the office. And if we are not learning, then we are not working. We think that walking is not learning; it’s ‘taking a break’. We instinctively believe that reading is learning. Having discussions about what you’ve read, however, is often not considered work; again, it’s ‘taking a break’.

To many, working means sitting at your desk for eight hours a day – being physically present, with mental engagement optional. It means pushing out emails and rushing to meetings and generally getting nothing done. We’ve looked at the focus aspect of this before. But what about the learning aspect?

Can we change how we approach learning, letting go of the guilt associated with not being visibly active, and embrace what seems counter-intuitive?

Thinking and talking are useful elements of learning. And what we learn in our ‘play’ time can be valuable to our ‘work’ time, and there’s nothing wrong with moving between the two (or combining them) during our day.

When mastering a subject, our brains actually use different types of processing. Barbara Oakley explains in A Mind for Numbers: How to Excel at Math and Science (even if you flunked algebra) that our brain has two general modes of thinking – ‘focused’ and ‘diffuse’ – and both of these are valuable and required in the learning process.

The focused mode is what we traditionally associate with learning. Read, dive deep, absorb. Eliminate distractions and get into the material. Oakley says “the focused mode involves a direct approach to solving problems using rational, sequential, analytical approaches. … Turn your attention to something and bam – the focused mode is on, like the tight, penetrating beam of a flashlight.”

But the focused mode is not the only one required for learning because we need time to process what we pick up, to get this new information integrated into our existing knowledge. We need time to make new connections. This is where the diffuse mode comes in.

Diffuse-mode thinking is what happens when you relax your attention and just let your mind wander. This relaxation can allow different areas of the brain to hook up and return valuable insights. … Diffuse-mode insights often flow from preliminary thinking that’s been done in the focused mode.

Relying solely on the focused mode to learn is a path to burnout. We need the diffuse mode to cement our ideas, put knowledge into memory, and free up space for the next round of focused thinking. We need the diffuse mode to build wisdom. So why does diffuse-mode thinking at work generally involve feelings of guilt?

Oakley’s recommendations for ‘diffuse-mode activators’ are: go to the gym, walk, play a sport, go for a drive, draw, take a bath, listen to music (especially without words), meditate, sleep. Um, aren’t these all things to do in my ‘play’ time? And sleep? It’s a whole time chunk on its own.

Most organizations do not promote a culture that allows these activities to be integrated into the work day. Go to the gym on your lunch break. Sleep at home. Meditate on a break. Essentially: do these things while we are not paying you.

We ingest this way of thinking, associating the value of getting paid with the value of executing our task list. If something doesn’t directly contribute, it’s not valuable. If it’s not valuable I need to do it in my non-work time or not at all. This is learned behavior from our organizational culture, and it essentially communicates that our leaders would rather see us do less than trust in the potential payoff of pursuits that aren’t as visible or ones that don’t pay off as quickly. The ability to see something is often a large component of trust. So if we are doing any of these ‘play’ activities at work, which are invisible in terms of their contribution to the learning process, we feel guilty because we don’t believe we are doing what we get paid to do.

If you aren’t the CEO or the VP of HR, you can’t magic a policy that says ‘all employees shall do something meaningful away from their desks each day and won’t be judged for it’, so what can you do to learn better at work? Find a way to let go of the guilt baggage when you invest in proven, effective learning techniques that are out of sync with your corporate culture.

How do you let go of the guilt? How do you not feel it every time you stand up to go for a walk, close your email and put on some headphones, or have a coffee with a colleague to discuss an idea you have? Because sometimes knowing you are doing the right thing doesn’t translate into feeling it, and that’s where guilt comes in.

Guilt is insidious. Not only do we usually feel guilt, but then we feel guilty about feeling guilty. Like, I go to visit my grandmother in her old age home mostly because I feel guilty about not going, and then I feel guilty because I’m primarily motivated by guilt! Like if I were a better person I would be doing it out of love, but I’m not, so that makes me terrible.

Breaking this cycle is hard. Like anything new, it’s going to feel unnatural for a while but it can be done.

How? Be kind to yourself.

This may sound a bit touchy-feely, but it is really just a cognitive-behavioral approach with a bit of mindfulness thrown in. Dennis Tirch has done a lot of research into the positive effects of self-compassion on worry, panic, and fear. And what is guilt but worry that you aren’t doing the right thing, fear that you’re not a good person, and panic about what to do about it?

In his book, The Compassionate-Mind Guide to Overcoming Anxiety, Tirch writes:

the compassion focused model is based on research showing that some of the ways in which we instinctively regulate our response to threats have evolved from the attachment system that operates between infant and mother and from other basic relationships between mutually supportive people. We have specific systems in our brains that are sensitive to the kindness of others, and the experience of this kindness has a major impact on the way we process these threats and the way we process anxiety in particular.

The Dalai Lama defines compassion as “a sensitivity to the suffering of others, with a commitment to do something about it,” and Tirch also explains that we are greatly affected by the compassion we show ourselves.

In order to manage and overcome emotions like guilt that can prevent us from learning and achieving, we need to treat ourselves the same way we would the person we love most in the world. “We can direct our attention to inner images that evoke feelings of kindness, understanding, and support,” writes Tirch.

So the next time you look up from that proposal on the new infrastructure schematics and see that the sun is shining, go for a walk, notice where you are, and give your mind a chance to go into diffuse-mode and process what you’ve been focusing on all morning. And give yourself a hug for doing it.

Language: Why We Hear More Than Words

It’s a classic complaint in relationships, especially romantic ones: “She said she was okay with me forgetting her birthday! Then why is she throwing dishes in the kitchen? Are the two things related? I wish I had a translator for my spouse. What is going on?”

The answer: Extreme was right, communication is more than words. It’s how those words are said, the tone, the order, even the choice of a particular word. It’s multidimensional.

In their book, Meaning and Relevance, Deirdre Wilson and Dan Sperber explore the aspects of communication that are beyond the definitions of the words that we speak but are still encoded in the words themselves.

Consider the following example:

Peter got angry and Mary left.

Mary left and Peter got angry.

We can instantly see that these two sentences, despite having exactly the same words, do not mean the same thing. The first one has us thinking, wow, Peter must get angry often if Mary leaves to avoid his behavior. Maybe she’s been the recipient of one too many tantrums and knows that there’s nothing she can do to defuse his mood. The second sentence suggests that Peter wants more from Mary. He might have a crush on her! Same words – totally different context.

Human language is not a code. True codes have a one-to-one relationship with meaning. One sound, one definition. This is what we see with animals.

Wilson and Sperber explain that “coded communication works best when emitter and receiver share exactly the same code. Any difference between the emitter’s and receiver’s codes is a possible source of error in the communication process.” For animals, any evolutionary mutation that affected the innate code would be counter-adaptive. A songbird one note off-key is going to have trouble finding a mate.

Not so for humans. We communicate more than the definitions of our words would suggest. (Steven Pinker argues that language itself is an instinct, wired into us at the level of DNA.) And we decode more than the words spoken to us. This is inferential communication, and it means that we understand not only the words spoken, but the context in which they are spoken. Contrary to the languages of other animals, which are decidedly less ambiguous, human language requires a lot of subjective interpretation.

This is probably why we can land in a country where we don’t speak the language and can’t read the alphabet, yet get the gist of what the hotel receptionist is telling us. We can find our room, and know where the breakfast is served in the morning. We may not understand her words, but we can comprehend her tone and make inferences based on the context.

Wilson and Sperber argue that mutations in our inferential abilities do not negatively impact communication and may even enhance it. Essentially, because human language is not a one-to-one code, and because more can be communicated than the exact representations of certain words, we can easily adapt to changes in communication and interpretation that may evolve in our communities.

For one thing, we can laugh at more than physical humor. Words can send us into stitches. Depending on how they are conveyed, the tone, the timing, the expressions that come along with them, we can find otherwise totally innocuous words hysterical.

Remember Abbott and Costello?

“Who’s on first.”
“No, What’s on second.”

Consider Irony

Irony is a great example of how powerfully we can communicate context with a few simple words.

I choose my words as indicators of a more complex thought that may include emotions, opinions, biases, and these words will help you infer this entire package. And one of my goals as the communicator is to make it as easy as possible for you to get the meaning I’m intending to convey.

Irony is more than just stating the opposite. There must be an expectation of that opposite in at least some of the population, and choosing irony is more of a commentary on that group. Wilson and Sperber argue that “what irony essentially communicates is neither the proposition literally expressed nor the opposite of that proposition, but an attitude to this proposition and to those who might hold or have held it.”

For example:

When Mary says, after a boring party, ‘That was fun’, she is neither asserting literally that the party was fun nor asserting ‘ironically’ that the party was boring. Rather, she is expressing an attitude of scorn towards (say) the general expectation among the guests that the party would be fun.

This is a pretty complex linguistic structure. It allows us to communicate our feelings on cultural norms fairly succinctly. Mary says ‘That was fun’. Three little words. And I understand that she hated the party, couldn’t wait to get out of there, feels distant from the other party-goers and is rejecting that whole social scene. Very powerful!

Irony works because it is efficient. Communicating the same information without irony would take several sentences, and my desire as a communicator is always to express myself in the way that demands the least effort from my listener.

Wilson and Sperber conclude that human language developed and became so powerful because of two unique human cognitive abilities: language itself and the power to attribute mental states to others. We look for context for the words we hear, and we are very proficient at absorbing this context to infer meaning.

The lesson? If you want to understand reality, don’t be pedantic.

Friedrich Nietzsche on Making Something Worthwhile of Ourselves

Friedrich Nietzsche (1844–1900) explored many subjects; perhaps the most important was himself.

A member of our learning community directed me to the passage below, written by Richard Schacht in the introduction to Nietzsche: Human, All Too Human: A Book for Free Spirits.

​If we are to make something worthwhile of ourselves, we have to take a good hard look at ourselves. And this, for Nietzsche, means many things. It means looking at ourselves in the light of everything we can learn about the world and ourselves from the natural sciences — most emphatically including evolutionary biology, physiology and even medical science. It also means looking at ourselves in the light of everything we can learn about human life from history, from the social sciences, from the study of arts, religions, literatures, mores and other features of various cultures. It further means attending to human conduct on different levels of human interaction, to the relation between what people say and seem to think about themselves and what they do, to their reactions in different sorts of situations, and to everything about them that affords clues to what makes them tick. All of this, and more, is what Nietzsche is up to in Human, All Too Human. He is at once developing and employing the various perspectival techniques that seem to him to be relevant to the understanding of what we have come to be and what we have it in us to become. This involves gathering materials for a reinterpretation and reassessment of human life, making tentative efforts along those lines and then trying them out on other human phenomena both to put them to the test and to see what further light can be shed by doing so.

Nietzsche realized that mental models were the key not only to understanding the world but to understanding ourselves. Understanding how the world works is the key to making more effective decisions and gaining insights. However, it’s through the journey of discovering these ideas that we learn about ourselves. Most of us want to skip the work, so we skim the surface of not only knowledge but ourselves.