Deductive vs Inductive Reasoning: Make Smarter Arguments, Better Decisions, and Stronger Conclusions

You can’t prove truth, but using deductive and inductive reasoning, you can get close. Learn the difference between the two types of reasoning and how to use them when evaluating facts and arguments.

***

As odd as it sounds, in science, law, and many other fields, there is no such thing as proof — there are only conclusions drawn from facts and observations. Scientists cannot prove a hypothesis, but they can collect evidence that points to its being true. Lawyers cannot prove that something happened (or didn’t), but they can provide evidence that seems irrefutable.

The question of what makes something true is more relevant than ever in this era of alternative facts and fake news. This article explores truth — what it means and how we establish it. We’ll dive into inductive and deductive reasoning as well as a bit of history.

“Contrariwise,” continued Tweedledee, “if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.”

— Lewis Carroll, Through the Looking-Glass

The essence of reasoning is a search for truth. Yet truth isn’t always as simple as we’d like to believe it is.

For as far back as we can imagine, philosophers have debated whether absolute truth exists. Although we’re still waiting for an answer, this doesn’t have to stop us from improving how we think by understanding a little more.

In general, we can consider something to be true if the available evidence seems to verify it. The more evidence we have, the stronger our conclusion can be. When it comes to samples, size matters. As my friend Peter Kaufman says:

What are the three largest, most relevant sample sizes for identifying universal principles? Bucket number one is inorganic systems, which are 13.7 billion years in size. It’s all the laws of math and physics, the entire physical universe. Bucket number two is organic systems, 3.5 billion years of biology on Earth. And bucket number three is human history….

In some areas, it is necessary to accept that truth is subjective. For example, ethicists accept that it is difficult to establish absolute truths concerning whether something is right or wrong, as standards change over time and vary around the world.

When it comes to reasoning, a correctly phrased statement can be considered to have an objective truth value. Some statements have an objective truth that we cannot ascertain at present. For example, we do not have proof of the existence or non-existence of aliens, even though the question has a definite answer independent of our current knowledge.

Deductive and inductive reasoning are both based on evidence.

Several types of evidence are used in reasoning to point to a truth:

  • Direct or experimental evidence — This relies on observations and experiments, which should be repeatable with consistent results.
  • Anecdotal or circumstantial evidence — Overreliance on anecdotal evidence can be a logical fallacy because it is based on the assumption that two coexisting factors are linked even though alternative explanations have not been explored. The main use of anecdotal evidence is for forming hypotheses which can then be tested with experimental evidence.
  • Argumentative evidence — We sometimes draw conclusions based on facts. However, this evidence is unreliable when the facts are not directly testing a hypothesis. For example, seeing a light in the sky and concluding that it is an alien aircraft would be argumentative evidence.
  • Testimonial evidence — When an individual presents an opinion, it is testimonial evidence. Once again, this is unreliable, as people may be biased and there may not be any direct evidence to support their testimony.

“The weight of evidence for an extraordinary claim must be proportioned to its strangeness.”

— Laplace, Théorie analytique des probabilités (1812)

Reasoning by Induction

The fictional character Sherlock Holmes is a master of induction. He is a careful observer who processes what he sees to reach the most likely conclusion in the given set of circumstances. Although he pretends that his knowledge is of the black-or-white variety, it often isn’t. It is true induction, coming up with the strongest possible explanation for the phenomena he observes.

Consider his description of how, upon first meeting Watson, he reasoned that Watson had just come from Afghanistan:

“Observation with me is second nature. You appeared to be surprised when I told you, on our first meeting, that you had come from Afghanistan.”
“You were told, no doubt.”

“Nothing of the sort. I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind, that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, ‘Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.’ The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.”

(From Sir Arthur Conan Doyle’s A Study in Scarlet)

Inductive reasoning involves drawing conclusions from facts, using logic. We draw these kinds of conclusions all the time. If someone we know to have good literary taste recommends a book, we may assume that means we will enjoy the book.

Induction can be strong or weak. If an inductive argument is strong, true premises make the conclusion likely. If an inductive argument is weak, the premises offer only feeble support for the conclusion.

There are several key types of inductive reasoning:

  • Generalized — Draws a conclusion from a generalization. For example, “All the swans I have seen are white; therefore, all swans are probably white.”
  • Statistical — Draws a conclusion based on statistics. For example, “95 percent of swans are white” (an arbitrary figure, of course); “therefore, a randomly selected swan will probably be white.”
  • Sample — Draws a conclusion about one group based on a different, sample group. For example, “There are ten swans in this pond and all are white; therefore, the swans in my neighbor’s pond are probably also white.”
  • Analogous — Draws a conclusion based on shared properties of two groups. For example, “All Aylesbury ducks are white. Swans are similar to Aylesbury ducks. Therefore, all swans are probably white.”
  • Predictive — Draws a conclusion based on a prediction made using a past sample. For example, “I visited this pond last year and all the swans were white. Therefore, when I visit again, all the swans will probably be white.”
  • Causal inference — Draws a conclusion based on a causal connection. For example, “All the swans in this pond are white. I just saw a white bird in the pond. The bird was probably a swan.”

The entire legal system is designed to be based on sound reasoning, which in turn must be based on evidence. Lawyers often use inductive reasoning to draw a relationship between facts for which they have evidence and a conclusion.

The initial facts are often based on generalizations and statistics, with the implication that a conclusion is most likely to be true, even if that is not certain. For that reason, evidence can rarely be considered certain. For example, a fingerprint taken from a crime scene would be said to be “consistent with a suspect’s prints” rather than being an exact match. Implicit in that statement is the assertion that it is statistically unlikely that the prints are not the suspect’s.

Inductive reasoning also involves Bayesian updating. A conclusion can seem to be true at one point until further evidence emerges and a hypothesis must be adjusted. Bayesian updating is a technique used to modify the probability of a hypothesis’s being true as new evidence is supplied. When inductive reasoning is used in legal situations, Bayesian thinking is used to update the likelihood of a defendant’s being guilty beyond a reasonable doubt as evidence is collected. If we imagine a simplified, hypothetical criminal case, we can picture the utility of Bayesian inference combined with inductive reasoning.

Let’s say someone is murdered in a house where five other adults were present at the time. One of them is the primary suspect, and there is no evidence of anyone else entering the house. The initial probability of the prime suspect’s having committed the murder is 20 percent. Other evidence will then adjust that probability. If the four other people testify that they saw the suspect committing the murder, the suspect’s prints are on the murder weapon, and traces of the victim’s blood were found on the suspect’s clothes, jurors may consider the probability of that person’s guilt to be close enough to 100 percent to convict. Reality is more complex than this, of course. The conclusion is never certain, only highly probable.
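As a purely illustrative sketch of that updating step (written in Python; the 20 percent prior comes from the example above, but the likelihood ratios are invented), each piece of evidence multiplies the odds of guilt by how much more likely that evidence is if the suspect is guilty than if they are innocent:

# Hedged sketch of Bayesian updating for the hypothetical murder case above.
# The prior (0.20) comes from the example; the likelihood ratios are invented
# purely for illustration.

def update(prior, likelihood_ratio):
    """Turn a prior probability into a posterior probability via odds."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.20  # prior: one of the five adults present
for evidence, lr in [("eyewitness testimony", 30),
                     ("prints on the weapon", 20),
                     ("victim's blood on clothing", 50)]:
    p = update(p, lr)
    print(f"After {evidence}: probability of guilt = {p:.4f}")

The final figure is what a juror would weigh against the “beyond a reasonable doubt” standard; with weaker or conflicting evidence, the same arithmetic keeps the probability far lower.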

One key distinction between deductive and inductive reasoning is that the latter accepts that a conclusion is uncertain and may change in the future. A conclusion is either strong or weak, not right or wrong. We tend to use this type of reasoning in everyday life, drawing conclusions from experiences and then updating our beliefs.

A conclusion is either strong or weak, not right or wrong.

Everyday inductive reasoning is not always correct, but it is often useful. For example, superstitious beliefs often originate from inductive reasoning. If an athlete performed well on a day when they wore their socks inside out, they may conclude that the inside-out socks brought them luck. If future successes happen when they again wear their socks inside out, the belief may strengthen. Should that not be the case, they may update their belief and recognize that it is incorrect.

Another example (let’s set aside the question of whether turkeys can reason): A farmer feeds a turkey every day, so the turkey assumes that the farmer cares for its wellbeing. Only when Thanksgiving rolls around does that assumption prove incorrect.

The issue with overusing inductive reasoning is that cognitive shortcuts and biases can warp the conclusions we draw. Our world is not always as predictable as inductive reasoning suggests, and we may selectively draw upon past experiences to confirm a belief. Someone who reasons inductively that they have bad luck may recall only unlucky experiences to support that hypothesis and ignore instances of good luck.

In The 12 Secrets of Persuasive Argument, the authors write:

In inductive arguments, focus on the inference. When a conclusion relies upon an inference and contains new information not found in the premises, the reasoning is inductive. For example, if premises were established that the defendant slurred his words, stumbled as he walked, and smelled of alcohol, you might reasonably infer the conclusion that the defendant was drunk. This is inductive reasoning. In an inductive argument the conclusion is, at best, probable. The conclusion is not always true when the premises are true. The probability of the conclusion depends on the strength of the inference from the premises. Thus, when dealing with inductive reasoning, pay special attention to the inductive leap or inference, by which the conclusion follows the premises.

… There are several popular misconceptions about inductive and deductive reasoning. When Sherlock Holmes made his remarkable “deductions” based on observations of various facts, he was usually engaging in inductive, not deductive, reasoning.

In Inductive Reasoning, Aiden Feeney and Evan Heit write:

…inductive reasoning … corresponds to everyday reasoning. On a daily basis we draw inferences such as how a person will probably act, what the weather will probably be like, and how a meal will probably taste, and these are typical inductive inferences.

[…]

[I]t is a multifaceted cognitive activity. It can be studied by asking young children simple questions involving cartoon pictures, or it can be studied by giving adults a variety of complex verbal arguments and asking them to make probability judgments.

[…]

[I]nduction is related to, and it could be argued is central to, a number of other cognitive activities, including categorization, similarity judgment, probability judgment, and decision making. For example, much of the study of induction has been concerned with category-based induction, such as inferring that your next door neighbor sleeps on the basis that your neighbor is a human animal, even if you have never seen your neighbor sleeping.

“A very great deal more truth can become known than can be proven.”

— Richard Feynman

Reasoning by Deduction

Deduction begins with a broad truth (the major premise), such as the statement that all men are mortal. This is followed by the minor premise, a more specific statement, such as that Socrates is a man. A conclusion follows: Socrates is mortal. If the major premise is true and the minor premise is true, the conclusion cannot be false.

Deductive reasoning is black and white; a conclusion is either true or false and cannot be partly true or partly false. We judge a deductive argument by the strength of the link between its premises and its conclusion. If all men are mortal and Socrates is a man, there is no way he can fail to be mortal; when the premises are true, there is no situation in which the conclusion is false.

In science, deduction is used to reach conclusions believed to be true. A hypothesis is formed; then evidence is collected to support it. If observations support its truth, the hypothesis is confirmed. Statements are structured in the form of “if A equals B, and C is A, then C is B.” If A does not equal B, then C will not equal B. Science also involves inductive reasoning when broad conclusions are drawn from specific observations; data leads to conclusions. If the data shows a tangible pattern, it will support a hypothesis.

For example, having seen ten white swans, we could use inductive reasoning to conclude that all swans are white. This hypothesis is easier to disprove than to prove, and the premises are not necessarily true, but they hold given the existing evidence and the absence of any observed counterexample. By combining both types of reasoning, science moves closer to the truth. In general, the more outlandish a claim is, the stronger the evidence supporting it must be.

We should be wary of deductive reasoning that appears to make sense without pointing to a truth. Someone could say “A dog has four paws. My pet has four paws. Therefore, my pet is a dog.” The conclusion sounds logical but isn’t, because the first premise does not say that only dogs have four paws; a four-pawed cat satisfies both premises while falsifying the conclusion.
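To make the contrast explicit, here is the same pair of arguments in predicate-logic notation (the formalization is added here for illustration and is not from the article):

\forall x\,(\mathrm{Man}(x) \to \mathrm{Mortal}(x)),\quad \mathrm{Man}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})

\forall x\,(\mathrm{Dog}(x) \to \mathrm{FourPaws}(x)),\quad \mathrm{FourPaws}(\mathrm{my\,pet}) \;\nvdash\; \mathrm{Dog}(\mathrm{my\,pet})

The first argument applies the rule in the direction it states; the second runs it backwards, the classical fallacy of affirming the consequent.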

The History of Reasoning

The discussion of reasoning and what constitutes truth dates back to Plato and Aristotle.

Plato (429–347 BC) believed that all things are divided into the visible and the intelligible. Intelligible things can be known through deduction (with observation being of secondary importance to reasoning) and are true knowledge.

Aristotle took an inductive approach, emphasizing the need for observations to support knowledge. He believed that we can reason only from discernable phenomena. From there, we use logic to infer causes.

Debate about reasoning remained much the same until the time of Isaac Newton. Newton’s innovative work was based on observations, but also on concepts that could not be explained by a physical cause (such as gravity). In his Principia, Newton outlined four rules for reasoning in the scientific method:

  1. “We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.” (We refer to this rule as Occam’s Razor.)
  2. “Therefore, to the same natural effects we must, as far as possible, assign the same causes.”
  3. “The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.”
  4. “In experimental philosophy, we are to look upon propositions collected by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, ’till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.”

In 1843, philosopher John Stuart Mill published A System of Logic, which further refined our understanding of reasoning. Mill believed that science should be based on a search for regularities among events. If a regularity is consistent, it can be considered a law. Mill described five methods for identifying causes by noting regularities. These methods are still used today:

  • Direct method of agreement — If two instances of a phenomenon have a single circumstance in common, the circumstance is the cause or effect.
  • Method of difference — If a phenomenon occurs in one experiment and does not occur in another, and the experiments are the same except for one factor, that factor is the cause, part of the cause, or the effect (see the sketch after this list).
  • Joint method of agreement and difference — If two instances of a phenomenon have one circumstance in common, and two instances in which it does not occur have nothing in common except the absence of that circumstance, then that circumstance is the cause, part of the cause, or the effect.
  • Method of residue — When you subtract any part of a phenomenon known to be caused by a certain antecedent, the remaining residue of the phenomenon is the effect of the remaining antecedents.
  • Method of concomitant variations — If a phenomenon varies when another phenomenon varies in a particular way, the two are connected.
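The method of difference lends itself to a small illustration. Below is a hedged sketch in Python (the trial data and factor names are invented, not Mill’s): two trials agree in every circumstance but one, so that one differing circumstance is flagged as the candidate cause.

# Toy sketch of Mill's method of difference with hypothetical, invented trial data.
# Two trials agree in every circumstance except one; that circumstance is the
# candidate cause (or part of the cause) of the differing outcome.

trial_a = {"fertilizer": True,  "sunlight": True, "watered": True, "grew": True}
trial_b = {"fertilizer": False, "sunlight": True, "watered": True, "grew": False}

differing = [k for k in trial_a if k != "grew" and trial_a[k] != trial_b[k]]
if trial_a["grew"] != trial_b["grew"] and len(differing) == 1:
    print(f"Candidate cause (or part of the cause): {differing[0]}")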

Karl Popper was the next theorist to make a serious contribution to the study of reasoning. Popper is well known for his focus on disconfirming evidence and disproving hypotheses. Beginning with a hypothesis, we use deductive reasoning to make predictions. A hypothesis will be based on a theory — a set of independent and dependent statements. If the predictions turn out to be false, the theory is false; if they hold, the theory is corroborated but never proven. Popper’s theory of falsification (disproving something) is based on the idea that we cannot prove a hypothesis; we can only show that certain predictions are false. This process requires rigorous testing to identify any anomalies, and Popper did not accept theories that cannot be physically tested. Any phenomenon not present in tests cannot be the foundation of a theory, according to Popper. The phenomenon must also be consistent and reproducible. Popper’s theories acknowledge that theories accepted at one time are likely to be disproved later. Science is always changing as more hypotheses are modified or disproved and we inch closer to the truth.

Conclusion

In How to Deliver a TED Talk, Jeremey Donovan writes:

No discussion of logic is complete without a refresher course in the difference between inductive and deductive reasoning. By its strictest definition, inductive reasoning proves a general principle—your idea worth spreading—by highlighting a group of specific events, trends, or observations. In contrast, deductive reasoning builds up to a specific principle—again, your idea worth spreading—through a chain of increasingly narrow statements.

Logic is an incredibly important skill, and because we use it so often in everyday life, we benefit by clarifying the methods we use to draw conclusions. Knowing what makes an argument sound is valuable for making decisions and understanding how the world works. It helps us to spot people who are deliberately misleading us through unsound arguments. Understanding reasoning is also helpful for avoiding fallacies and for negotiating.


The Fairness Principle: How the Veil of Ignorance Helps Test Fairness

“But the nature of man is sufficiently revealed for him to know something of himself and sufficiently veiled to leave much impenetrable darkness, a darkness in which he ever gropes, forever in vain, trying to understand himself.”

— Alexis de Tocqueville, Democracy in America

The Basics

If you could redesign society from scratch, what would it look like?

How would you distribute wealth and power?

Would you make everyone equal or not? How would you define fairness and equality?

And — here’s the kicker — what if you had to make those decisions without knowing who you would be in this new society?

Philosopher John Rawls asked just that in a thought experiment known as “the Veil of Ignorance” in his 1971 book, A Theory of Justice.

Like many thought experiments, the Veil of Ignorance could never be carried out in the literal sense, nor should it be. Its purpose is to explore ideas about justice, morality, equality, and social status in a structured manner.

The Veil of Ignorance, a component of social contract theory, allows us to test ideas for fairness.

Behind the Veil of Ignorance, no one knows who they are. They lack clues as to their class, their privileges, their disadvantages, or even their personality. They exist as an impartial group, tasked with designing a new society with its own conception of justice.

As a thought experiment, the Veil of Ignorance is powerful because our usual opinions regarding what is just and unjust are informed by our own experiences. We are shaped by our race, gender, class, education, appearance, sexuality, career, family, and so on. On the other side of the Veil of Ignorance, none of that exists. Technically, the resulting society should be a fair one.

In Ethical School Leadership, Spencer J. Maxcy writes:

Imagine that you have set for yourself the task of developing a totally new social contract for today’s society. How could you do so fairly? Although you could never actually eliminate all of your personal biases and prejudices, you would need to take steps at least to minimize them. Rawls suggests that you imagine yourself in an original position behind a veil of ignorance. Behind this veil, you know nothing of yourself and your natural abilities, or your position in society. You know nothing of your sex, race, nationality, or individual tastes. Behind such a veil of ignorance all individuals are simply specified as rational, free, and morally equal beings. You do know that in the “real world,” however, there will be a wide variety in the natural distribution of natural assets and abilities, and that there will be differences of sex, race, and culture that will distinguish groups of people from each other.

“The Fairness Principle: When contemplating a moral action, imagine that you do not know if you will be the moral doer or receiver, and when in doubt err on the side of the other person.”

— Michael Shermer, The Moral Arc: How Science and Reason Lead Humanity Toward Truth, Justice, and Freedom

The Purpose of the Veil of Ignorance

Because people behind the Veil of Ignorance do not know who they will be in this new society, any choice they make in structuring that society could either harm them or benefit them.

If they decide men will be superior, for example, they must face the risk that they will be women. If they decide that 10% of the population will be slaves to the others, they cannot be surprised if they find themselves to be slaves. No one wants to be part of a disadvantaged group, so the logical belief is that the Veil of Ignorance would produce a fair, egalitarian society.

Behind the Veil of Ignorance, cognitive biases melt away. The hypothetical people are rational thinkers. They use probabilistic thinking to assess the likelihood of their being affected by any chosen measure. They possess no opinions for which to seek confirmation. Nor do they have any recently learned information to pay undue attention to. The sole incentive they are biased towards is their own self-preservation, which is equivalent to the preservation of the entire group. They cannot stereotype any particular group as they could be members of it. They lack commitment to their prior selves as they do not know who they are.

So, what would these people decide on? According to Rawls, in a fair society all individuals must possess the following:

  • Rights and liberties (including the right to vote, the right to hold public office, free speech, free thought, and fair legal treatment)
  • Power and opportunities
  • Income and wealth sufficient for a good quality of life (Not everyone needs to be rich, but everyone must have enough money to live a comfortable life.)
  • The conditions necessary for self-respect

For these conditions to occur, the people behind the Veil of Ignorance must figure out how to achieve what Rawls regards as the two key components of justice:

  • Everyone must have the best possible life which does not cause harm to others.
  • Everyone must be able to improve their position, and any inequalities are permissible only if they benefit everyone.

However, the people behind the Veil of Ignorance cannot be completely blank slates or it would be impossible for them to make rational decisions. They understand general principles of science, psychology, politics, and economics. Human behavior is no mystery to them. Neither are key economic concepts, such as comparative advantage and supply and demand. Likewise, they comprehend the deleterious impact of social entropy, and they have a desire to create a stable, ordered society. Knowledge of human psychology leads them to be cognizant of the universal desire for happiness and fulfillment. Rawls considered all of this to be the minimum viable knowledge for rational decision-making.

Ways of Understanding the Veil of Ignorance

One way to understand the Veil of Ignorance is to imagine that you are tasked with cutting up a pizza to share with friends. You will be the last person to take a slice. Being of sound mind, you want to get the largest possible share, and the only way to ensure this is to make all the slices the same size. You could cut one huge slice for yourself and a few tiny ones for your friends, but one of them might take the large slice and leave you with a meager share. (Not to mention, your friends won’t think very highly of you.)

Another means of appreciating the implications of the Veil of Ignorance is by considering the social structures of certain species of ants. Even though queen ants are able to form colonies alone, they will band together to form stronger, more productive colonies. Once the first group of worker ants reaches maturity, the queens fight to the death until one remains. When they first form a colony, the queen ants are behind a Veil of Ignorance. They do not know if they will be the sole survivor or not. All they know, on an instinctual level, is that cooperation is beneficial for their species. Like the people behind the Veil of Ignorance, the ants make a decision which, by necessity, is selfless.

The Veil of Ignorance, as a thought experiment, shows us that ignorance is not always detrimental to a society. In some situations, it can create robust social structures. In the animal kingdom, we see many examples of creatures that cooperate even though they do not know if they will suffer or benefit as a result. In a paper entitled “The Many Selves of Social Insects,” Queller and Strassmann write of bees:

…social insect colonies are so tightly integrated that they seem to function as single organisms, as a new level of self. The honeybees’ celebrated dance about food location is just one instance of how their colonies integrate and act on information that no single individual possesses. Their unity of purpose is underscored by the heroism of workers, whose suicidal stinging attacks protect the single reproducing queen.

We can also consider the Tragedy of the Commons. Introduced by ecologist Garrett Hardin, this mental model states that shared resources will be exploited if no system for fair distribution is implemented. Individuals have no incentive to leave a share of free resources for others. Hardin’s classic example is an area of land which everyone in a village is free to use for their cattle. Each person wants to maximize the usefulness of the land, so they put more and more cattle out to graze. Yet the land is finite and at some point will become too depleted to support livestock. If the people behind the Veil of Ignorance had to choose how the common land should be shared, the logical decision would be to give each person an equal part and forbid them from introducing too many cattle.
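A toy simulation makes the dynamic concrete. This is a hedged sketch in Python: the herd sizes, regrowth rate, and grazing rate are invented for illustration and are not Hardin’s numbers.

# Toy model of a shared pasture (all parameters invented for illustration).
# Each herder adds a cow every season because grazing is free, while the grass
# regrows more slowly than it is eaten, so the commons eventually collapses.

CAPACITY = 100.0      # maximum grass the pasture can hold
REGROWTH = 10.0       # grass regrown per season
EAT_PER_COW = 1.0     # grass each cow eats per season
HERDERS = 5

grass = CAPACITY
cows_per_herder = 2
for season in range(1, 11):
    cows = HERDERS * cows_per_herder
    grass = min(CAPACITY, grass + REGROWTH) - cows * EAT_PER_COW
    print(f"Season {season}: {cows} cows grazing, grass left = {max(grass, 0):.0f}")
    if grass <= 0:
        print("The pasture is depleted; no one's cattle can graze.")
        break
    cows_per_herder += 1  # the individually rational, collectively ruinous step

Because each herder captures the full benefit of an extra cow while the cost of overgrazing is shared, the individually rational choice eventually ruins the pasture for everyone.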

As N. Gregory Mankiw writes in Principles of Microeconomics:

The Tragedy of the Commons is a story with a general lesson: when one person uses a common resource, he diminishes other people’s enjoyment of it. Because of this negative externality, common resources tend to be used excessively. The government can solve the problem by reducing use of the common resource through regulation or taxes. Alternatively, the government can sometimes turn the common resource into a private good.

This lesson has been known for thousands of years. The ancient Greek philosopher Aristotle pointed out the problem with common resources: “What is common to many is taken least care of, for all men have greater regard for what is their own than for what they possess in common with others.”

In The Case for Meritocracy, Michael Faust uses other thought experiments to support the Veil of Ignorance:

Let’s imagine another version of the thought experiment. If inheritance is so inherently wonderful — such an intrinsic good — then let’s collect together all of the inheritable money in the world. We shall now distribute this money in exactly the same way it would be distributed in today’s world… but with one radical difference. We are going to distribute it by lottery rather than by family inheritance, i.e, anyone in the world can receive it. So, in these circumstances, how many people who support inheritance would go on supporting it? Note that the government wouldn’t be getting the money… just lucky strangers. Would the advocates of inheritance remain as fiercely committed to their cherished principle? Or would the entire concept instantly be exposed for the nonsense it is?

If inheritance were treated as the lottery it is, no one would stand by it.

[…]

In the world of the 1% versus the 99%, no one in the 1% would ever accept a lottery to decide inheritance because there would be a 99% chance they would end up as schmucks, exactly like the rest of us.

And a further surrealistic thought experiment:

Imagine that on a certain day of the year, each person in the world randomly swaps bodies with another person, living anywhere on earth. Well, for the 1%, there’s a 99% chance that they will be swapped from heaven to hell. For the 99%, 1% might be swapped from hell to heaven, while the other 98% will stay the same as before. What kind of constitution would the human race adopt if annual body swapping were a compulsory event?! They would of course choose a fair one.

“In the immutability of their surroundings the foreign shores, the foreign faces, the changing immensity of life, glide past, veiled not by a sense of mystery but by a slightly disdainful ignorance.”

— Joseph Conrad, Heart of Darkness

The History of Social Contract Theory

Although the Veil of Ignorance was first described by Rawls in 1971, many other philosophers and writers have discussed similar concepts in the past. Philosophers discussed social contract theory as far back as ancient Greece.

In Crito, Plato describes a conversation in which Socrates discusses the laws of Athens and how they are responsible for his existence. Finding himself in prison and facing the death penalty, Socrates rejects Crito’s suggestion that he should escape. He states that further injustice is not an appropriate response to prior injustice. Crito believes that by refusing to escape, Socrates is aiding his enemies, as well as failing to fulfil his role as a father. But Socrates views the laws of Athens as a single entity that has always protected him. He describes breaking any of the laws as being like injuring a parent. Having lived a long, fulfilling life as a result of the social contract he entered at birth, he has no interest in now turning away from Athenian law. Accepting death is essentially a symbolic act that Socrates intends to use to illustrate rationality and reason to his followers. If he were to escape, he would be acting out of accord with the rest of his life, during which he was always concerned with justice.

Social contract theory is concerned with the laws and norms a society decides on and the obligation individuals have to follow them. Socrates’ dialogue with Crito has similarities with the final scene of Arthur Miller’s The Crucible. At the end of the play, John Proctor is hanged for witchcraft despite having the option to confess and avoid death. In continuing to follow the social contract of Salem and not confessing to a crime he obviously did not commit, Proctor believes that his death will redeem his earlier mistakes. We see this in the final dialogue between Reverend Hale and Elizabeth (Proctor’s wife):

HALE: Woman, plead with him! […] Woman! It is pride, it is vanity. […] Be his helper! What profit him to bleed? Shall the dust praise him? Shall the worms declare his truth? Go to him, take his shame away!

 

ELIZABETH: […] He have his goodness now. God forbid I take it from him!

In these two situations, individuals allow themselves to be put to death in the interest of following the social contract they agreed upon by living in their respective societies. Earlier in their lives, neither person knew what their ultimate fate would be. They were essentially behind the Veil of Ignorance when they chose (consciously or unconsciously) to follow the laws enforced by the people around them. Just as the people behind the Veil of Ignorance must accept whatever roles they receive in the new society, Socrates and Proctor followed social contracts. To modern eyes, the decision both men make to abandon their children in the interest of proving a point is not easily defensible.

Immanuel Kant wrote about justice and freedom in the late 1700s. Kant believed that fair laws should not be based on making people happy or reflecting the desire of individual policymakers, but should be based on universal moral principles:

Is it not of the utmost necessity to construct a pure moral philosophy which is completely freed from everything that may be only empirical and thus belong to anthropology? That there must be such a philosophy is self-evident from the common idea of duty and moral laws. Everyone must admit that a law, if it is to hold morally, i.e., as a ground of obligation, must imply absolute necessity; he must admit that the command, “Thou shalt not lie,” does not apply to men only, as if other rational beings had no need to observe it. The same is true for all other moral laws properly so called. He must concede that the ground of obligation here must not be sought in the nature of man or in the circumstances in which he is placed, but sought a priori solely in the concepts of pure reason, and that every other precept which is in certain respects universal, so far as it leans in the least on empirical grounds (perhaps only in regard to the motive involved), may be called a practical rule but never a moral law.

How We Can Apply This Concept

We can use the Veil of Ignorance to test whether a certain issue is fair.

When my kids are fighting over the last cookie, which happens more often than you’d imagine, I ask them to decide who will split the cookie; the other person picks first. This is the old playground rule, “you split, I pick.” Without this rule, one of them would surely give the other a smaller portion. With it, the halves are as equal as they would be with sensible adults.

When considering whether we should endorse a proposed law or policy, we can ask: if I did not know if this would affect me or not, would I still support it? Those who make big decisions that shape the lives of large numbers of people are almost always those in positions of power. And those in positions of power are almost always members of privileged groups. As Benjamin Franklin once wrote: “Justice will not be served until those who are unaffected are as outraged as those who are.”

Laws allowing or prohibiting abortion have typically been made by men, for example. As the issue lacks real significance in their personal lives, they are free to base decisions on their own ideological views, rather than consider what is fair and sane. However, behind the Veil of Ignorance, no one knows their sex. Anyone deciding on abortion laws would have to face the possibility that they themselves will end up as a woman with an unwanted pregnancy.

In Justice as Fairness: A Restatement, Rawls writes:

So what better alternative is there than an agreement between citizens themselves reached under conditions that are fair for all?

[…]

[T]hreats of force and coercion, deception and fraud, and so on must be ruled out.

And:

Deep religious and moral conflicts characterize the subjective circumstances of justice. Those engaged in these conflicts are surely not in general self-interested, but rather, see themselves as defending their basic rights and liberties which secure their legitimate and fundamental interests. Moreover, these conflicts can be the most intractable and deeply divisive, often more so than social and economic ones.

 

In Ethics: Studying the Art of Moral Appraisal, Ronnie Littlejohn explains:

We must have a mechanism by which we can eliminate the arbitrariness and bias of our “situation in life” and insure that our moral standards are justified by the one thing all people share in common: reason. It is the function of the veil of ignorance to remove such bias.

When we have to make decisions that will affect other people, especially disadvantaged groups (such as when a politician decides to cut benefits or a CEO decides to outsource manufacturing to a low-income country), we can use the Veil of Ignorance as a tool for making fair choices.

As Robert F. Kennedy (the younger brother of John F. Kennedy) said in the 1960s:

Few will have the greatness to bend history itself, but each of us can work to change a small portion of events. It is from numberless diverse acts of courage and belief that human history is shaped. Each time a man stands up for an ideal, or acts to improve the lot of others, or strikes out against injustice, he sends forth a tiny ripple of hope, and crossing each other from a million different centers of energy and daring, those ripples build a current which can sweep down the mightiest walls of oppression and resistance.

When we choose to position ourselves behind the Veil of Ignorance, we have a better chance of creating one of those all-important ripples.


Thought Experiment: How Einstein Solved Difficult Problems

Mental Model of Thought Experiment

The Basics of Thought Experiment

Imagine a small town with a hard-working barber. The barber shaves everyone in the town who does not shave themselves. He does not shave anyone who shaves themselves. So, who shaves the barber?

The ‘impossible barber’ is one classic example of a thought experiment — a means of exploring a concept, hypothesis or idea through extensive thought. When finding empirical evidence is impossible, we turn to thought experiments to unspool complex concepts.

In the case of the impossible barber, setting up an experiment to figure out who shaves him would not be feasible or even desirable. After all, the barber cannot exist. Thought experiments are usually rhetorical. No particular answer can or should be found.

The purpose is to encourage speculation, logical thinking and to change paradigms. Thought experiments push us outside our comfort zone by forcing us to confront questions we cannot answer with ease. They reveal that we do not know everything and some things cannot be known.

“All truly wise thoughts have been thought already thousands of times; but to make them truly ours, we must think them over again honestly, until they take root in our personal experience.”

— Johann Wolfgang von Goethe

In a paper entitled Thought Experimentation in Presocratic Philosophy, Nicholas Rescher writes:

Homo sapiens is an amphibian who can live and function in two very different realms: the domain of actual facts which we can investigate in observational inquiry, and the domain of the imaginative projection which we can explore in thought through reasoning… A thought experiment is an attempt to draw instruction from a process of hypothetical reasoning that proceeds by eliciting the consequences of a hypothesis which, for anything that one actually knows to the contrary, may be false. It consists in reasoning from a supposition that is not accepted as true (perhaps even known to be false) but is assumed provisionally in the interests of making a point or resolving a conclusion.

As we know from the narrative fallacy, complex information is best digested in the form of narratives and analogies. Many thought experiments make use of this fact to make them more accessible. Even those who are not knowledgeable about a particular field can build an understanding through thought experiments. The aim is to condense first principles into a form which can be understood through analysis and reflection. Some incorporate empirical evidence, looking at it from an alternative perspective.

The benefit of thought experiments (as opposed to aimless rumination) is their structure. In an organized manner, thought experiments allow us to challenge intellectual norms, move beyond the boundaries of ingrained facts, comprehend history, make logical decisions, foster innovative ideas, and widen our sphere of reference.

Despite being improbable or impractical, thought experiments should be possible, in theory.

The History of Thought Experiments

Thought experiments have a rich and complex history, stretching back to the ancient Greeks and Romans. As a mental model, they have enriched many of our greatest intellectual advances, from philosophy to quantum mechanics.

An early example of a thought experiment is Zeno’s narrative of Achilles and the tortoise, dating to around 430 BC. Zeno’s thought experiments aimed to deduce first principles through the elimination of untrue concepts.

In one instance, the Greek philosopher used a thought experiment to ‘prove’ that motion is an illusion. Known as the paradox of Achilles and the tortoise, it involves Achilles racing a tortoise. Out of generosity, Achilles gives the tortoise a 100m head start. Once Achilles begins running, he soon covers the head start; by then, however, the tortoise has moved another 10m. By the time Achilles covers that distance, the tortoise has moved further still. Zeno claimed Achilles could never overtake the tortoise: the gap between them, though shrinking, could be divided endlessly and would never quite close.
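The catch is that infinitely many catch-up stages can sum to a finite distance. Assuming Achilles runs ten times faster than the tortoise (the ratio implied by the 100m and 10m figures above), the total distance he covers before drawing level is a convergent geometric series:

100 + 10 + 1 + 0.1 + \dots \;=\; 100 \sum_{n=0}^{\infty} \left(\tfrac{1}{10}\right)^{n} \;=\; \frac{100}{1 - \tfrac{1}{10}} \;=\; 111.\overline{1}\ \text{m}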

In the 17th century, Galileo further developed the concept by using thought experiments to affirm his theories. One example is his thought experiment involving two balls (one heavy, one light) which are dropped from the Leaning Tower of Pisa. Prior philosophers had theorized the heavy ball would land first. Galileo claimed this was untrue, as mass does not influence acceleration. We will look at Galileo’s thought experiments in more detail later on in this post.

In 1814, Pierre Laplace explored determinism through ‘Laplace’s demon.’ This is a theoretical ‘demon’ with precise knowledge of the location and momentum of every single particle in existence. Would Laplace’s demon know the future? If the answer is yes, the universe is deterministic. If not, it is not, and free will remains possible.

In 1897, the German term ‘Gedankenexperiment’ passed into English and a cohesive picture of how thought experiments are used worldwide began to form.

Albert Einstein used thought experiments for some of his most important discoveries. The most famous of these concerned a beam of light (and was later made into a brilliant children’s book). What would happen, he asked himself, if you could catch up to a beam of light as it moved? Working through the answer forced him to rethink time and eventually led him to the special theory of relativity.

In On Thought Experiments, the 19th-century philosopher and physicist Ernst Mach writes that curiosity is an inherent human quality. We see this in babies, as they test the world around them and learn the principle of cause and effect. With time, our exploration of the world becomes deeper and deeper. We reach a point where we can no longer experiment through our hands alone. At that point, we move into the realm of thought experiments.

Thought experiments are a structured manifestation of our natural curiosity about the world.

Mach writes:

Our own ideas are more easily and readily at our disposal than physical facts. We experiment with thought, so as to say, at little expense. Thus it shouldn’t surprise us that, oftentimes, the thought experiment precedes the physical experiment and prepares the way for it… A thought experiment is also a necessary precondition for a physical experiment. Every inventor and every experimenter must have in his mind the detailed order before he actualizes it. Even if Stephenson knew the train, the rails, and the steam engine from experience, he must, nonetheless, have preconceived in his thoughts the combination of a train on wheels, driven by a steam engine, before he could have proceeded to the realization. No less did Galileo have to envisage, in his imagination, the arrangements for the investigation of gravity before these were actualized. Even the beginner learns in experimenting that an insufficient preliminary estimate, or nonobservance of sources of error, has for him no less tragicomic results than the proverbial ‘look before you leap’ does in practical life.

Mach compares thought experiments to the plans and images we form in our minds before commencing an endeavor. We all do this — rehearsing a conversation before having it, planning a piece of work before starting it, figuring out every detail of a meal before cooking it. Mach views this as an integral part of our ability to engage in complex tasks and to innovate creatively.

According to Mach, the results of some thought experiments can be so certain that it is unnecessary to perform them physically. Regardless of the accuracy of the result, the desired purpose has been achieved.

We will look at some key examples of thought experiments throughout this post, which will show why Mach’s words are so important. He adds:

It can be seen that the basic method of the thought experiment is just like that of a physical experiment, namely, the method of variation. By varying the circumstances (continuously, if possible) the range of validity of an idea (expectation) related to these circumstances is increased.

Although some people view thought experiments as pseudo-science, Mach saw them as valid and important for experimentation.

“Can’t you give me brains?” asked the Scarecrow.
“You do not need them. You are learning something every day. A baby has brains, but it does not know much. Experience is the only thing that brings knowledge, and the longer you are on earth the more experience you are sure to get.”

— L. Frank Baum, The Wonderful Wizard of Oz

Types of Thought Experiment

Several key types of thought experiment have been identified:

  • Prefactual – Involving potential future outcomes. E.g. ‘What will X cause to happen?’
  • Counterfactual – Contradicting known facts. E.g. ‘If Y happened instead of X, what would be the outcome?’
  • Semi-factual – Contemplating how a different past could have led to the same present. E.g. ‘If Y had happened instead of X, would the outcome be the same?’
  • Prediction – Theorising future outcomes based on existing data. Predictions may involve mental or computational models. E.g. ‘If X continues to happen, what will the outcome be in one year?’
  • Hindcasting – Running a prediction in reverse to see if it forecasts an event which has already happened. E.g. ‘X happened; could Y have predicted it?’
  • Retrodiction – Moving backwards from an event to discover the root cause. Retrodiction is often used for problem solving and prevention purposes. E.g. ‘What caused X? How can we prevent it from happening again?’
  • Backcasting – Considering a specific future outcome, then working forwards from the present to deduce its causes. E.g. ‘If X happens in one year, what would have caused it?’

“With our limited senses and consciousness, we only glimpse a small portion of reality. Furthermore, everything in the universe is in a state of constant flux. Simple words and thoughts cannot capture this flux or complexity. The only solution for an enlightened person is to let the mind absorb itself in what it experiences, without having to form a judgment on what it all means. The mind must be able to feel doubt and uncertainty for as long as possible. As it remains in this state and probes deeply into the mysteries of the universe, ideas will come that are more dimensional and real than if we had jumped to conclusions and formed judgments early on.”

— Robert Greene, Mastery

Thought Experiments in Philosophy

Thought experiments have been an integral part of philosophy since ancient times. This is in part because philosophical hypotheses are often subjective and impossible to prove through empirical evidence.

Philosophers use thought experiments to convey theories in an accessible manner. With the aim of illustrating a particular concept (such as free will or mortality), philosophers explore imagined scenarios. The goal is not to uncover a ‘correct’ answer, but to spark new ideas.

An early example of a philosophical thought experiment is Plato’s Allegory of the Cave, which centers on a dialogue between Socrates and Glaucon (Plato’s brother).

A group of people are born and live within a dark cave. Having spent their entire lives seeing nothing but shadows on the wall, they lack a conception of the world outside. Knowing nothing different, they do not even wish to leave the cave. At some point, they are led outside and see a world consisting of much more than shadows.

“The frog in the well knows nothing of the mighty ocean.”

— Japanese Proverb

Plato used this to illustrate the incomplete view of reality most of us have. Only by learning philosophy, Plato claimed, can we see more than shadows.

Upon leaving the cave, the people realize the outside world is far more interesting and fulfilling. If a solitary person left, they would want the others to do the same. However, if they returned to the cave, their old life would seem unsatisfactory. That discomfort could be misdirected, leading them to resent the outside world. Plato used this to convey his (almost compulsively) deep appreciation for the power of educating ourselves. To take up the mantle of your own education and begin seeking to understand the world is the first step on the way out of the cave.

Moving from caves to insects, let’s take a look at a fascinating thought experiment from 20th-century philosopher Ludwig Wittgenstein.

Imagine a world where each person has a beetle in a box. In this world, the only time anyone can see a beetle is when they look in their own box. As a consequence, the conception of a beetle each individual has is based on their own. It could be that everyone has something different, or that the boxes are empty, or even that the contents are amorphous.

Wittgenstein uses the ‘Beetle in a Box’ thought experiment to convey his work on the subjective nature of pain. We can each only know what pain is to us, and we cannot feel another person’s agony. If people in the hypothetical world were to have a discussion on the topic of beetles, each would only be able to share their individual perspective. The conversation would have little purpose because each person can only convey what they see as a beetle. In the same way, it is useless for us to describe our pain using analogies (‘it feels like a red hot poker is stabbing me in the back’) or scales (‘the pain is 7/10.’)

Thought Experiments in Science

Although empirical evidence is usually necessary for science, thought experiments may be used to develop a hypothesis or to prepare for experimentation. Some hypotheses cannot be tested (e.g., string theory) – at least, not given our current capabilities.

Theoretical scientists may turn to thought experiments to develop a provisional answer, often informed by Occam’s razor.

Nicholas Rescher writes:

In natural science, thought experiments are common. Think, for example, of Einstein’s pondering the question of what the world would look like if one were to travel along a ray of light. Think too of physicists’ assumption of a frictionlessly rolling body or the economists’ assumption of a perfectly efficient market in the interests of establishing the laws of descent or the principles of exchange, respectively…Ernst Mach [mentioned in the introduction] made the sound point that any sensibly designed real experiment should be preceded by a thought experiment that anticipates at any rate the possibility of its outcome.

In a paper entitled Thought Experiments in Scientific Reasoning, Andrew D. Irvine explains that thought experiments are a key part of science. They are in the same realm as physical experiments. Thought experiments require all assumptions to be supported by empirical evidence. The context must be believable, and it must provide useful answers to complex questions. A thought experiment must have the potential to be falsified.

Irvine writes:

Just as a physical experiment often has repercussions for its background theory in terms of confirmation, falsification or the like, so too will a thought experiment. Of course, the parallel is not exact; thought experiments… do not include actual interventions within the physical environment.

In Do All Rational Folks Think As We Do?, Barbara D. Massey writes:

Often critique of thought experiments demands the fleshing out or concretizing of descriptions so that what would happen in a given situation becomes less a matter of guesswork or pontification. In thought experiments we tend to elaborate descriptions with the latest scientific models in mind…The thought experiment seems to be a close relative of the scientist’s laboratory experiment with the vital difference that observations may be made from perspectives which are in reality impossible, for example, from the perspective of moving at the speed of light…The thought experiment seems to discover facts about how things work within the laboratory of the mind.

One key example of a scientific thought experiment is Schrodinger’s cat.

Developed in 1935 by Erwin Schrodinger, Schrodinger’s cat seeks to illustrate the counterintuitive nature of quantum mechanics in a more understandable manner.

Although difficult to present in a simplified manner, the idea is that of a cat, enclosed in a box, which is neither alive nor dead. Inside the box are a Geiger counter, a small quantity of radioactive material, and a vial of poison. Over a given period of time, it is equally probable that an atom will decay or not. If one does decay, the Geiger counter triggers, the vial smashes, and the cat is poisoned. Without opening the box, it is impossible to know whether the cat is alive or dead.

Let’s ignore the ethical implications and the fact that, if this were performed, the angry meowing of the cat would be a clue. Like most thought experiments, the details are arbitrary – it is irrelevant what animal it is, what kills it, or the time frame.

Schrodinger’s point was that quantum mechanics is indeterminate. When does a quantum system switch from one state to another? Can the cat be both alive and dead, and is that conditional on its being observed? What about the cat’s own observation of itself?
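In standard textbook notation (added here for illustration; it does not appear in the source), the unobserved cat is assigned an equal superposition of the two outcomes:

|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\left(|\text{alive}\rangle + |\text{dead}\rangle\right)

Measurement is said to collapse this state to a single term, and the thought experiment dramatizes how strange that sounds when applied to an everyday object.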

In his book In Search of Schrodinger’s Cat, John Gribbin writes:

Nothing is real unless it is observed…there is no underlying reality to the world. “Reality,” in the everyday sense, is not a good way to think about the behavior of the fundamental particles that make up the universe; yet at the same time those particles seem to be inseparably connected into some invisible whole, each aware of what happens to the others.

Schrodinger himself wrote in Nature and the Greeks:

We do not belong to this material world that science constructs for us. We are not in it; we are outside. We are only spectators. The reason why we believe that we are in it, that we belong to the picture, is that our bodies are in the picture. Our bodies belong to it. Not only my own body, but those of my friends, also of my dog and cat and horse, and of all the other people and animals. And this is my only means of communicating with them.

Another important early example of a scientific thought experiment is Galileo’s Leaning Tower of Pisa Experiment.

Galileo sought to disprove the prevailing belief that the speed at which an object falls is determined by its mass. Since the time of Aristotle, people had assumed that a 10 g object would fall at one tenth the speed of a 100 g object. Oddly, no one is recorded as having tested this.

According to Galileo’s early biography (written in 1654), he dropped two objects from the Leaning Tower of Pisa to disprove this supposed relationship between mass and falling speed. Both landed at the same time, ushering in a new understanding of gravity. It is unknown whether Galileo actually performed the experiment, so it is generally regarded as a thought experiment rather than a physical one. Galileo reached his conclusion through other thought experiments, most famously by imagining a heavy body and a light body tied together and asking how fast the pair would fall.
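Here is a minimal sketch of that reductio in modern notation; it is my formalization of the argument, not Galileo’s own presentation.

```latex
% Assume, for contradiction, that steady falling speed increases with mass:
% v = f(m) with f strictly increasing. Take two bodies with masses m_1 < m_2.
\begin{align*}
  &\text{By assumption: } f(m_1) < f(m_2).\\
  &\text{Tie them together: the slower, lighter body retards the faster one, so }
    f(m_1) < v_{\text{pair}} < f(m_2).\\
  &\text{But the pair is itself a body of mass } m_1 + m_2 > m_2,
    \text{ so the same assumption gives } v_{\text{pair}} > f(m_2).\\
  &\text{The two conclusions contradict each other, so falling speed cannot increase with mass.}
\end{align*}
```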

“We live not only in a world of thoughts, but also in a world of things. Words without experience are meaningless.”

— Vladimir Nabokov

Biologists use thought experiments, often of the counterfactual variety. In particular, evolutionary biologists question why organisms exist as they do today. For example, why are sheep not green? As surreal as the question is, it is a valid one. A green sheep would be better camouflaged from predators. Another thought experiment involves asking: why don’t organisms (aside from certain bacteria) have wheels? Again, the question is surreal but is still a serious one. We know from our vehicles that wheels are more efficient for moving at speed than legs, so why do they not naturally exist beyond the microscopic level?

Psychology and Ethics — The Trolley Problem

Picture the scene. You are a lone passerby on a street where a tram is running along a track, and the driver has lost control of it. If the tram continues along its current path, its five passengers will die in the ensuing crash. You notice a switch that would divert the tram to a different track, where a man is standing. The collision would kill him but would save the five passengers. Do you press the switch?

This thought experiment has been discussed in various forms since the early 1900s. Psychologists and ethicists have discussed the trolley problem at length, often using it in research. It raises many questions, such as:

  • Is a casual observer required to intervene?
  • Is there a measurable value to human life? I.e. is one life less valuable than five?
  • How would the situation differ if the observer were required to actively push a man onto the tracks rather than pressing the switch?
  • What if the man being pushed were a ‘villain’? Or a loved one of the observer? How would this change the ethical implications?
  • Can an observer make this choice without the consent of the people involved?

Research has shown that most people are far more willing to press a switch than to push someone onto the tracks. This changes if the man is a ‘villain’: people are then far more willing to push him. Likewise, they are reluctant if the person being pushed is a loved one.

In Incognito: The Secret Lives of the Brain, David Eagleman writes that our brains respond quite differently to the idea of pushing a person than to the idea of pressing a switch. When confronted with a switch, brain scans show that our rational thinking areas are activated. Swap the switch for a person and our emotional areas activate as well. Eagleman summarizes:

People register emotionally when they have to push someone; when they only have to tip a lever, their brain behaves like Star Trek’s Mr. Spock.

The trolley problem is theoretical, but it has real-world implications. For example, the majority of people who eat meat would not be content to kill the animal themselves; they are happy to press the switch but not to push the man. Even those who do not eat meat tend to ignore the fact that they may indirectly contribute to the deaths of animals through production quotas, which mean the meat they would have eaten ends up getting wasted. They feel morally superior because they are not actively pushing anyone onto the tracks, yet they resemble an observer who does not intervene in any way. As we move towards autonomous vehicles, similar situations may arise in real life. Vehicles may be required to make utilitarian choices, such as swerving into a ditch and killing the driver to avoid hitting a group of children.

Although psychology and ethics are separate fields, they often make use of the same thought experiments.

“Ford!” he said, “there’s an infinite number of monkeys outside who want to talk to us about this script for Hamlet they’ve worked out.”

— Douglas Adams, The Hitchhiker's Guide to the Galaxy

The Infinite Monkey Theorem and Mathematics

The infinite monkey theorem is a mathematical thought experiment. The premise is that a monkey hitting typewriter keys at random for an infinite amount of time will, eventually, type the complete works of Shakespeare. Other versions involve infinitely many monkeys, or a single work rather than the whole canon. Mathematicians use the monkey(s) as a stand-in for a device that produces letters at random.

In Fooled By Randomness, Nassim Taleb writes:

If one puts an infinite number of monkeys in front of (strongly built) typewriters, and lets them clap away, there is a certainty that one of them will come out with an exact version of the ‘Iliad.’ Upon examination, this may be less interesting a concept than it appears at first: Such probability is ridiculously low. But let us carry the reasoning one step beyond. Now that we have found that hero among monkeys, would any reader invest his life’s savings on a bet that the monkey would write the ‘Odyssey’ next?

The infinite monkey theorem illustrates the idea that enough random input will eventually hit upon any target, much as a drunk person arriving home will eventually manage to fit their key into the lock, however little finesse they apply. It also illustrates the nature of probability: given enough time and attempts, even staggeringly improbable outcomes become all but certain.
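To make “ridiculously low” concrete, here is a back-of-the-envelope calculation with illustrative numbers of my own (not Taleb’s): a toy typewriter with 27 equally likely keys (26 letters plus a space) and the 18-character phrase “to be or not to be”.

```latex
% Probability that a single random 18-keystroke attempt produces the target phrase:
\[
  p = \left(\frac{1}{27}\right)^{18} \approx 1.7 \times 10^{-26}.
\]
% Over n independent attempts, the probability of never succeeding is
% (1 - p)^n, which tends to 0 as n grows without bound. With unlimited
% attempts, success becomes (almost surely) certain, even though any single
% attempt is hopeless; that is all the theorem actually claims.
```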

To learn more about thought experiments, consider reading The Pig That Wants to Be Eaten, The Infinite Tortoise or The Laboratory of the Mind.

Lee Kuan Yew’s Rule

Lee Kuan Yew, the “Father of Modern Singapore”, who took a nation from “Third World to First” in his own lifetime, has a simple idea about using theory and philosophy. Here it is: Does it work?

He isn’t throwing away big ideas or theories, or even discounting them per se. They just have to meet the simple, pragmatic standard.

Does it work?

Try it out the next time you study a philosophy, a value, an approach, a theory, an ideology…it doesn’t matter if the source is a great thinker of antiquity or your grandmother. Has it worked? We’ll call this Lee Kuan Yew’s Rule, to make it easy to remember.

Here’s his discussion of it in The Grand Master’s Insights on China, the United States, and the World:

My life is not guided by philosophy or theories. I get things done and leave others to extract the principles from my successful solutions. I do not work on a theory. Instead, I ask: what will make this work? If, after a series of solutions, I find that a certain approach worked, then I try to find out what was the principle behind the solution. So Plato, Aristotle, Socrates, I am not guided by them…I am interested in what works…Presented with the difficulty or major problem or an assortment of conflicting facts, I review what alternatives I have if my proposed solution does not work. I choose a solution which offers a higher probability of success, but if it fails, I have some other way. Never a dead end.

We were not ideologues. We did not believe in theories as such. A theory is an attractive proposition intellectually. What we faced was a real problem of human beings looking for work, to be paid, to buy their food, their clothes, their homes, and to bring their children up…I had read the theories and maybe half believed in them.

But we were sufficiently practical and pragmatic enough not to be cluttered up and inhibited by theories. If a thing works, let us work it, and that eventually evolved into the kind of economy that we have today. Our test was: does it work? Does it bring benefits to the people?…The prevailing theory then was that multinationals were exploiters of cheap labor and cheap raw materials and would suck a country dry…Nobody else wanted to exploit the labor. So why not, if they want to exploit our labor? They are welcome to it…. We were learning how to do a job from them, which we would never have learnt… We were part of the process that disproved the theory of the development economics school, that this was exploitation. We were in no position to be fussy about high-minded principles.

***

Want more? Check out our prior posts on Lee Kuan Yew, or pick up the short book of his insights from which this excerpt is taken. If you really want to dive deep, read his wonderful autobiography, the amazing story of Singapore’s climb.

Spring 2016 Reading List — More Curated Recommendations For a Curious Mind

We hear a lot from people who want to read more. That’s a great sentiment. But it won’t actually happen until you decide what you’re going to do less of. We all get 24 hours a day and 7 days a week. It’s up to you how you’ll spend that time.

For those who want to spend it reading, we’ve come across a lot of great books so far this year. Here are seven recommendations across a variety of topics. Some are newer, some are older — true knowledge has no expiration date.

1. The Evolution of Everything

Matt Ridley is a longtime favorite. Originally a PhD zoologist, Ridley went on to write great books like The Red Queen and The Rational Optimist, and wrote for The Economist for a while. This book argues that trial-and-error, evolution-style processes drive change across a wide range of phenomena. I don’t know that I agree with all of it, but he’s a great thinker and a lot of people will really enjoy the book.

2. A Powerful Mind: The Self-Education of George Washington

What a cool book idea by Adrienne Harrison. There are a zillion biographies of GW out there, with Chernow’s getting a lot of praise recently. But Harrison homes in on Washington’s autodidactic nature. Why did he read so much? How did he educate himself? Any self-motivated learner is probably going to enjoy this. We’ll certainly cover it here at some point.

3. The Tiger

A Ryan Holiday recommendation, The Tiger is the story of a man-eating tiger in Siberia. Like, not that long ago. Pretty damn scary, but John Vaillant is an amazing writer who not only tells the tale of the tiger-hunt, but weaves in Russian history, natural science, the relationship between man and predator over time, and a variety of other topics in a natural and interesting way. Can’t wait to read his other stuff. I read this in two flights.

4. The Sense of Style

This is such a great book on better writing, by the incomparable Steven Pinker. We have a post about it here, but it’s worth re-recommending. If you’re trying to understand great syntax in a non-dry and practical way — Pinker is careful to show that great writing can take many forms but generally shares a few underlying principles — this is your book. He weaves in some cognitive science, which must be a first for a style guide.

5. Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration

I really loved this book. It’s written by Ed Catmull, who along with John Lasseter built the modern Pixar, which is now part of Disney. Catmull talks about the creative process at Pixar and how their movies go from a kernel of an idea to a beautiful and moving finished product. (Hint: It takes a long time.) Pixar is one of the more brilliant modern companies, and Bob Iger’s decision to buy it when he was named CEO of Disney ten years ago was a masterful stroke. I suspect Catmull and Lasseter are hugely responsible for the resurgence of Disney animation.

6. The Song Machine

This is a tough recommendation because the book simultaneously fascinates and horrifies me. It’s about how modern, glossy pop music gets made. I suspect anyone with an interest in music will find it worth reading, whether out of morbid curiosity or a genuine desire to learn more about the music they actually listen to. Pursue at your peril. I pulled out my old ’90s rock music to soothe myself.

7. Plato at the Googleplex

Does philosophy still matter? Rebecca Goldstein, a modern analytic philosopher, goes after this question in an interesting way by exploring what it would be like if Plato were interacting with the modern world. Very quirky subject matter and approach, but I actually appreciated that. There’s a lot of cookie-cutter writing going on, and Goldstein breaks out of it as she explores a timeless topic. Probably best reserved for those actually interested in philosophy, but even if you’re not, it might stretch your brain a bit.

Bonus Bestseller

Alexander Hamilton

Farnam Street-related travel has brought me to quite a few airports recently. I make a habit of checking out the airport bookstores because bookstores are awesome. Recently, I noticed that Chernow’s biography of Hamilton was suddenly sitting amongst the bestsellers. Chernow’s books are amazing, but airport bestsellers? It wasn’t until I realized that Hamilton’s life had been turned into a smash-hit Broadway musical, based on the book, that everything clicked. In any case, if you want to learn about an amazing American life and also be “part of the conversation,” check out Hamilton.

Four Reasons Why Plato Matters

Plato devoted his life to one goal: helping people reach a state of fulfillment. To this day, his ideas remain deeply relevant, provocative, and fascinating. Philosophy, to Plato, was a tool to help us change the world.

In this short video, Alain de Botton reminds us of the four big ideas Plato had for making life more fulfilling.

Transcribed highlights below.

1. Think More

We rarely give ourselves time to think carefully and logically about our lives and how to lead them. Sometimes we just go along with what the Greeks called Doxa, or common sense. In the thirty-six books he wrote, Plato showed this common sense to be riddled with errors, prejudice, and superstition. … The problem is that popular opinions edge us toward the wrong values. … Plato’s answer is know yourself. (This) means doing a special kind of therapy: Philosophy. This means subjecting your ideas to examination rather than acting on impulse. … This kind of examination is called a Socratic discussion.

2. Let Your Lover Change You

That sounds weird if you think that love means finding someone who wants you just the way you are. In his dialogue, The Symposium, … Plato says true love is admiration. In other words, the person you need to get together with should have very good qualities which you yourself lack. … By getting close to this person you can become a little like they are. The right person for us helps us grow to our full potential. … For Plato, ‘a couple shouldn’t love each other exactly as they are right now’; rather, they should be committed to educating each other and enduring the stormy passages that this inevitably involves. Each person should want to seduce the other into becoming a better version of themselves.

3. Decode the Message of Beauty

Everyone pretty much likes beautiful things, but Plato was the first to ask: why do we like them? He found a fascinating reason: beautiful objects are whispering important truths to us about the good life. We find things beautiful when we sense qualities in them that we need but are constantly missing in our lives: gentleness; harmony; balance; peace; (and) strength. Beautiful objects therefore have a really important function: they help to educate our souls.

4. Reform Society

Plato spent a lot of time thinking about how the government and society should ideally be. He was the world’s first utopian thinker.

In this, he was inspired by Athens’s great rival: Sparta. This was a city-sized machine for turning out great soldiers. Everything the Spartans did – how they raised their children, how their economy was organised, whom they admired, how they had sex, what they ate – was tailored to that one goal. And Sparta was hugely successful, from a military point of view.

But that wasn’t Plato’s concern. He wanted to know: how could a society get better at producing not military power but eudaimonia? How could it reliably help people towards fulfillment?

In his book, The Republic, Plato identifies a number of changes that should be made:

We need new heroes

Athenian society was very focused on the rich, like the louche aristocrat Alcibiades, and on sports celebrities, like the wrestler Milo of Croton. Plato wasn’t impressed: it really matters who we admire, for celebrities influence our outlook, ideas, and conduct. And bad heroes give glamour to flaws of character.

Plato therefore wanted to give Athens new celebrities, replacing the current crop with ideally wise and good people he called Guardians: models for everyone’s good development. These people would be distinguished by their record of public service, their modesty and simple habits, their dislike of the limelight and their wide and deep experience. They would be the most honoured and admired people in society.

End Democracy

He also wanted to end democracy in Athens. He wasn’t crazy; he had simply observed how few people think properly before they vote, and therefore how often we end up with substandard rulers. He didn’t want to replace democracy with a dictatorship, but he did want to prevent people from voting until they had started to think rationally. That is, until they became philosophers. … To help the process along, Plato started a school: the Academy.

Still curious? Where do you go from here? The Great Books program at St. John’s College in Annapolis recommends this edition of Plato’s Complete Works. Another place to start is this slightly more detailed introduction to Plato.