Category: People

Henry Ford and the Actual Value of Education

“The object of education is not to fill a man’s mind with facts;
it is to teach him how to use his mind in thinking.”
— Henry Ford

***

In his memoir My Life and Work, written in 1922, the brilliant (but flawed) Henry Ford (1863-1947) offers perhaps the best definition you’ll find of the value of an education, and a useful warning against the mere accumulation of information for its own sake. A devotee of lifelong learning need not be a Jeopardy contestant, accumulating trivia to spit back as needed. In the Age of Google, that sort of knowledge is increasingly irrelevant.

A real lifelong learner seeks to learn and apply the world’s best knowledge to create a more constructive and more useful life for themselves and those around them. And to do that, you have to learn how to think on your feet. The world does not offer up no-brainers every day; more frequently, we’re presented with a lot of grey options. Unless your studies are improving your ability to handle reality as it is and get a fair result, you’re probably wasting your time.

From Ford’s memoir:

An educated man is not one whose memory is trained to carry a few dates in history—he is one who can accomplish things. A man who cannot think is not an educated man however many college degrees he may have acquired. Thinking is the hardest work anyone can do—which is probably the reason why we have so few thinkers. There are two extremes to be avoided: one is the attitude of contempt toward education, the other is the tragic snobbery of assuming that marching through an educational system is a sure cure for ignorance and mediocrity. You cannot learn in any school what the world is going to do next year, but you can learn some of the things which the world has tried to do in former years, and where it failed and where it succeeded. If education consisted in warning the young student away from some of the false theories on which men have tried to build, so that he may be saved the loss of the time in finding out by bitter experience, its good would be unquestioned.

An education which consists of signposts indicating the failure and the fallacies of the past doubtless would be very useful. It is not education just to possess the theories of a lot of professors. Speculation is very interesting, and sometimes profitable, but it is not education. To be learned in science today is merely to be aware of a hundred theories that have not been proved. And not to know what those theories are is to be “uneducated,” “ignorant,” and so forth. If knowledge of guesses is learning, then one may become learned by the simple expedient of making his own guesses. And by the same token he can dub the rest of the world “ignorant” because it does not know what his guesses are.

But the best that education can do for a man is to put him in possession of his powers, give him control of the tools with which destiny has endowed him, and teach him how to think. The college renders its best service as an intellectual gymnasium, in which mental muscle is developed and the student strengthened to do what he can. To say, however, that mental gymnastics can be had only in college is not true, as every educator knows. A man’s real education begins after he has left school. True education is gained through the discipline of life.

[…]

Men satisfy their minds more by finding out things for themselves than by heaping together the things which somebody else has found out. You can go out and gather knowledge all your life, and with all your gathering you will not catch up even with your own times. You may fill your head with all the “facts” of all the ages, and your head may be just an overloaded fact-box when you get through. The point is this: Great piles of knowledge in the head are not the same as mental activity. A man may be very learned and very useless. And then again, a man may be unlearned and very useful.

The object of education is not to fill a man’s mind with facts; it is to teach him how to use his mind in thinking. And it often happens that a man can think better if he is not hampered by the knowledge of the past.

Ford is probably wrong in his very last statement (study of the past is crucial to understanding the human condition), but the sentiment offered in the rest of the piece should be read and re-read frequently.

This brings to mind a debate you’ll hear that almost all debaters get wrong: What’s more valuable, to be educated in the school of life, or in the school of books? Which is it?

It’s both!

This is what we call a false dichotomy. There is absolutely no reason to choose between the two. We’re all familiar with the algebra. If A and B have positive value, then A+B must be greater than A or B alone! You must learn from your life as it goes along, but since we have the option to augment that by studying the lives of others, why would we not take advantage? All it takes is the will and the attitude to study the successes and failures of history, add them to your own experience, and get an algebra-style A+B result.

So, resolve to use your studies to learn to think, to learn to handle the world better, to be more useful to those around you. Don’t worry about the facts and figures for their own sake. We don’t need another human encyclopedia.

***

Still Interested? Check out the rest of Ford’s interesting memoir, or try reading up on what a broad education should contain.

Hares, Tortoises, and the Trouble with Genius

“Geniuses are dangerous.”
— James March

The Trouble with Genius

How many organizations would deny that they want more creativity, more genius, and more divergent thinking among their constituents? The great genius leaders of the world are fawned over breathlessly, and a great deal of lip service is given to innovation; given the choice between “mediocrity” and “innovation,” we all choose innovation hands-down.

So why do we act the opposite way?

Stanford’s James March might have some insight. His book On Leadership (see our earlier notes here) is a collection of insights derived mostly from the study of great literature, from Don Quixote to Saint Joan to War & Peace. In March’s estimation, we can learn more about human nature (of which leadership is merely a subset) from studying literature than we can from studying leadership literature.

March discusses the nature of divergent thinking and “genius” in a way that seems to reflect reality. We don’t seek to cultivate genius, especially in a mature organization, because we’re more afraid of the risks than appreciative of the benefits. A classic case of loss aversion. Tolerating genius means tolerating a certain amount of disruption; the upside of genius sounds pretty good until we start understanding its dark side:

Most original ideas are bad ones. Those that are good, moreover, are only seen as such after a long learning period; they rarely are impressive when first tried out. As a result, an organization is likely to discourage both experimentation with deviant ideas and the people who come up with them, thereby depriving itself, in the name of efficient operation, of its main source of innovation.

[…]

Geniuses are dangerous. Othello’s instinctive action makes him commit an appalling crime, the fine sentiments of Pierre Bezukhov bring little comfort to the Russian peasants, and Don Quixote treats innocent people badly over and over again. A genius combines the characteristics that produce resounding failures (stubbornness, lack of discipline, ignorance), a few ingredients of success (elements of intelligence, a capacity to put mistakes behind him or her, unquenchable motivation), and exceptional good luck. Genius therefore only appears as a sub-product of a great tolerance for heresy and apparent craziness, which is often the result of particular circumstances (over-abundant resources, managerial ideology, promotional systems) rather than deliberate intention. “Intelligent” organizations will therefore try to create an environment that allows genius to flourish by accepting the risks of inefficiency or crushing failures…within the limits of the risks that they can afford to take.

Note an important component: exceptional good luck. The kind of genius that rarely surfaces but we desperately pursue needs great luck to make an impact. Truthfully, genius is always recognized in hindsight, with the benefit of positive results in mind. We “cherry-pick” the good results of divergent thinkers, but forget that we use those results to decide who’s a genius and who isn’t. Thus, tolerating divergent, genius-level thinking requires an ability to tolerate failure, loss, and change if it’s to be applied prospectively.

Sounds easy enough, in theory. But as Daniel Kahneman and Charlie Munger have so brilliantly pointed out, we become very risk averse when we possess anything, including success; we feel loss more acutely than gain, and we seek to keep the status quo intact. (And it’s probably good that we do, on average.)

Compounding the problem, when we do recognize and promote genius, some of our exalting is likely to be based on false confidence, almost by definition:

Individuals who are frequently promoted because they have been successful will have confidence in their own abilities to beat the odds. Since in a selective, and therefore increasingly homogenous, management group the differences in performance that are observed are likely to be more often due to chance events than to any particular individual capacity, the confidence is likely to be misplaced. Thus, the process of selecting on performance results in exaggerated self-confidence and exaggerated risk-taking.

Let’s use a current example: Elon Musk. Elon is (justifiably) recognized as a modern genius, leaping tall buildings in a single bound. Yet as Ashlee Vance makes clear in his biography, Musk teetered on the brink several times. It’s a near miracle that his businesses have survived (and thrived) to where they are today. The press coverage would read very differently if SpaceX or Tesla had gone under — he might be considered a brilliant but fatally flawed eccentric rather than a genius. Luck played a fair part in that outcome (which is not to take away from Musk’s incredible work).

***

Getting back to organizations, the failure to appropriately tolerate genius is also a problem of homeostasis: The tendency of systems to “stay in place” and avoid disruption of strongly formed past habits. Would an Elon Musk be able to rise in a homeostatic organization? It generally does not happen.

James March has a solution, though, and it’s one we’ve heard echoed by other thinkers like Nassim Taleb; it also seems to be used fairly well in some modern technology organizations. As with most organizational solutions, it requires realigning incentives, which is the job of a strong and selfless leader.

An analogy of the hare and the tortoise illustrates the solution:

Although one particular hare (who runs fast but sleeps too long) has every chance of being beaten by one particular tortoise, an army of hares in competition with an army of tortoises will almost certainly result in one of the hares crossing the finish line first. The choices of an organization therefore depend on the respective importance that it attaches to its mean performance (in which case it should recruit tortoises) and the achievement of a few dazzling successes (an army of hares, which is inefficient as a whole, but contains some outstanding individuals.)

[…]

In a simple model, a tortoise advances with a constant speed of 1 mile/hour while a hare runs at 5 miles/hour, but in each given 5-minute period a hare has a 90 percent chance of sleeping rather than running. A tortoise will cover the mile of the test in one hour exactly and a hare will have only about an 11 percent chance of arriving faster (the probability that he will be awake for at least three of the 5-minute periods.) If there is a race between the tortoise and one hare, the probability that the hare will win is only 0.11. However, if there are 100 tortoises and 100 hares in the race, the probability that at least one hare will arrive before any tortoise (and thus the race will be won by a hare) is 1– ((0.89)^100), or greater than 0.9999.

The analogy holds up well in the business world. Any one young, aggressive “hare” is unlikely to beat the lumbering “tortoise” that reigns king, but put 100 hares out against 100 tortoises and the result is much different.
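
As a quick check on March’s arithmetic, here is a minimal sketch in Python (the race parameters are taken straight from the quoted model; the script itself is our illustration, not anything from the book):

```python
from math import comb

# March's toy model, as quoted above: a 1-mile race run in 5-minute periods.
# The tortoise plods at 1 mph and finishes in exactly one hour (12 periods).
# A hare runs at 5 mph but sleeps through any given period with probability 0.9.
p_awake = 0.1
periods = 12

# A hare beats the tortoise if it is awake for at least 3 of the 12 periods:
# two awake periods cover only 10/12 of a mile, while three cover 1.25 miles.
p_hare_wins = sum(
    comb(periods, k) * p_awake**k * (1 - p_awake) ** (periods - k)
    for k in range(3, periods + 1)
)
print(f"One hare vs. one tortoise: {p_hare_wins:.2f}")  # ~0.11

# With 100 hares racing 100 tortoises, a hare takes the race if at least one
# hare finishes early.
p_some_hare_wins = 1 - (1 - p_hare_wins) ** 100
print(f"100 hares vs. 100 tortoises: {p_some_hare_wins:.5f}")  # > 0.9999
```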

This means that any organization must conduct itself in such a way that hares have a chance to succeed internally. It means becoming open to divergence and allowing erratic genius to rise, while keeping the costs of failure manageable. It means having the courage to create an “army of hares” inside of your own organization rather than letting tortoises have their way, as they will if given the opportunity.

For a small young organization, the cost of failure isn’t all that high, comparatively speaking — you can’t fall far off a pancake. So hares tend to get a lot more leash. But for a large organization, the cost of failure tends to rise to such a painful point that it is simply no longer tolerated. At that point, real innovation ceases.

But if we have the will and ability to create small teams and projects with “hare-like” qualities, in ways that allow the “talent + luck” equation to surface truly better and different work, while necessarily tolerating (and encouraging) failure and disruption, then we might have a shot at overcoming homeostasis, in the same way that a specific combination of engineering and fuel allows rockets to overcome the equally strong force of gravity.

***

Still Interested? Check out our notes on James March’s books On Leadership and The Ambiguities of Experience, and an interview March did on the topic of leadership.

Our Genes and Our Behavior

“But now we are starting to show genetic influence on individual differences using DNA. DNA is a game changer; it’s a lot harder to argue with DNA than it is with a twin study or an adoption study.”
— Robert Plomin

***

It’s not controversial to say that our genetics help explain our physical traits. Tall parents will, on average, have tall children. Overweight parents will, on average, have overweight children. Irish parents have Irish-looking kids. This is true to the point of banality, and only the committedly ignorant would dispute it.

It’s slightly more controversial to talk about genes influencing behavior. For a long time, it was denied entirely. For most of the 20th century, the “experts” in human behavior had decided that “nurture” beat “nature” with a score of 100-0. Particularly influential was the child’s early life — the way their parents treated them in the womb and throughout early childhood. (Thanks Freud!)

So, where are we now?

Genes and Behavior

Developmental scientists and behavioral scientists eventually got to work with twin studies and adoption studies, which tended to show that certain traits were almost certainly heritable and not reliant on environment, thanks to the natural controlled experiments of twins separated at birth. (This eventually provided fodder for Judith Rich Harris’s wonderful work on development and personality.)

All throughout, the geneticists, starting with Gregor Mendel and his peas, kept on working. As behavioral geneticist Robert Plomin explains, the genetic camp split early on. Some people wanted to understand the gene itself in detail, using very simple traits to figure it out (eye color, long or short wings, etc.) and others wanted to study the effect of genes on complex behavior, generally:

People realized these two views of genetics could come together. Nonetheless, the two worlds split apart because Mendelians became geneticists who were interested in understanding genes. They would take a convenient phenotype, a dependent measure, like eye color in flies, just something that was easy to measure. They weren’t interested in the measure, they were interested in how genes work. They wanted a simple way of seeing how genes work.

By contrast, the geneticists studying complex traits—the Galtonians—became quantitative geneticists. They were interested in agricultural traits or human traits, like cardiovascular disease or reading ability, and would use genetics only insofar as it helped them understand that trait. They were behavior centered, while the molecular geneticists were gene centered. The molecular geneticists wanted to know everything about how a gene worked. For almost a century these two worlds of genetics diverged.

Eventually, the two began to converge. One camp (the gene people) figured out that once the genome could be sequenced, they might be able to understand more complicated behavior by looking directly at the genes of specific people with unique DNA, and contrasting them against one another.

The reason why this whole gene-behavior game is hard is because, as Plomin makes clear, complex traits like intelligence are not like eye color. There’s no “smart gene” — it comes from the interaction of thousands of different genes and can occur in a variety of combinations. Basic Mendel-style counting (the sort of dominant/recessive eye color gene thing you learned in high school biology) doesn’t work in analyzing the influence of genes on complex traits:

The word gene wasn’t invented until 1903. Mendel did his work in the mid-19th century. In the early 1900s, when Mendel was rediscovered, people finally realized the impact of what he did, which was to show the laws of inheritance of a single gene. At that time, these Mendelians went around looking for Mendelian 3:1 segregation ratios, which was the essence of what Mendel showed, that inheritance was discrete. Most of the socially, behaviorally, or agriculturally important traits aren’t either/or traits, like a single-gene disorder. Huntington’s disease, for example, is a single-gene dominant disorder, which means that if you have that mutant form of the Huntington’s gene, you will have Huntington’s disease. It’s necessary and sufficient. But that’s not the way complex traits work.

The importance of genetics is hard to overstate, but until the right technology came along, we could only observe it indirectly. A study might have shown that 50% of the variance in cognitive ability was due to genetics, but we had no idea which specific genes, in which combinations, actually produced smarter people.

But the Moore’s law style improvement in genetic testing means that we can now map out entire genomes quickly and at very low cost. And with that, the geneticists have a lot of data to work with, a lot of correlations to begin sussing out. The good thing about finding strong correlations between genes and human traits is that we know which one is causative: The gene! Obviously, your reading ability doesn’t cause you to have certain DNA; it must be the other way around. So “Big Data” style screening is extremely useful, once we get a little better at it.

***

The problem is that, so far, the successes have been modest. There are millions of “ATCG” base pairs to check. As Plomin points out, we can only pinpoint about 20% of the specific genetic influence for something simple like height, which we know is about 90% heritable. Complex traits like schizophrenia are going to take a lot of work:

We’ve got to be able to figure out where the so-called missing heritability is, that is, the gap between the DNA variants that we are able to identify and the estimates we have from twin and adoption studies. For example, height is about 90 percent heritable, meaning, of the differences between people in height, about 90 percent of those differences can be explained by genetic differences. With genome-wide association studies, we can account for 20 percent of the variance of height, or a quarter of the heritability of height. That’s still a lot of missing heritability, but 20 percent of the variance is impressive.

With schizophrenia, for example, people say they can explain 15 percent of the genetic liability. The jury is still out on how that translates into the real world. What you want to be able to do is get this polygenic score for schizophrenia that would allow you to look at the entire population and predict who’s going to become schizophrenic. That’s tricky because the studies are case-control studies based on extreme, well-diagnosed schizophrenics, versus clean controls who have no known psychopathology. We’ll know soon how this polygenic score translates to predicting who will become schizophrenic or not.
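
To keep those percentages straight, here is a back-of-the-envelope restatement of the height numbers Plomin cites (the figures are his; the framing as a small calculation is ours):

```python
# Height, per the quoted passage: twin and adoption studies put heritability
# at about 90% of the variance, while genome-wide association studies can so
# far account for about 20% of the variance.
heritability = 0.90
gwas_explained = 0.20

fraction_captured = gwas_explained / heritability   # share of the heritable signal found so far
missing = heritability - gwas_explained             # the "missing heritability" gap

print(f"Share of heritability captured: {fraction_captured:.0%}")  # ~22%, roughly a quarter
print(f"Missing heritability (share of variance): {missing:.0%}")  # ~70%
```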

It brings up an interesting question that gets us back to the beginning of the piece: If we know that genetics influences some complex behavioral traits (and we do), and we can, with the continuing progress of science and technology, sequence a baby’s genome and predict to a certain extent their reading level, facility with math, facility with social interaction, and so on, do we do it?

Well, we can’t until we get a general recognition that genes do indeed influence behavior and do have predictive power as far as how children perform. So far, the track record on getting educators to see that it’s all quite real is pretty bad. As with the Freudians before them, there’s a resistance to the “nature” aspect of the debate, probably influenced by some strong ideologies:

If you look at the books and the training that teachers get, genetics doesn’t get a look-in. Yet if you ask teachers, as I’ve done, about why they think children are so different in their ability to learn to read, they know that genetics is important. When it comes to governments and educational policymakers, the knee-jerk reaction is that if kids aren’t doing well, you blame the teachers and the schools; if that doesn’t work, you blame the parents; if that doesn’t work, you blame the kids because they’re just not trying hard enough. An important message for genetics is that you’ve got to recognize that children are different in their ability to learn. We need to respect those differences because they’re genetic. Not that we can’t do anything about it.

It’s like obesity. The NHS is thinking about charging people to be fat because, like smoking, they say it’s your fault. Weight is not as heritable as height, but it’s highly heritable. Maybe 60 percent of the differences in weight are heritable. That doesn’t mean you can’t do anything about it. If you stop eating, you won’t gain weight, but given the normal life in a fast-food culture, with our Stone Age brains that want to eat fat and sugar, it’s much harder for some people.

We need to respect the fact that genetic differences are important, not just for body mass index and weight, but also for things like reading disability. I know personally how difficult it is for some children to learn to read. Genetics suggests that we need to have more recognition that children differ genetically, and to respect those differences. My grandson, for example, had a great deal of difficulty learning to read. His parents put a lot of energy into helping him learn to read. We also have a granddaughter who taught herself to read. Both of them now are not just learning to read but reading to learn.

Genetic influence is just influence; it’s not deterministic like a single gene. At government levels—I’ve consulted with the Department for Education—I don’t think they’re as hostile to genetics as I had feared, they’re just ignorant of it. Education just doesn’t consider genetics, whereas teachers on the ground can’t ignore it. I never get static from them because they know that these children are different when they start. Some just go off on very steep trajectories, while others struggle all the way along the line. When the government sees that, they tend to blame the teachers, the schools, or the parents, or the kids. The teachers know. They’re not ignoring this one child. If anything, they’re putting more energy into that child.

It’s frustrating for Plomin because he knows that eventually DNA mapping will get good enough that real, and helpful, predictions will be possible. We’ll be able to target kids early enough to make real differences — earlier than problems actually manifest — and hopefully change the course of their lives for the better. But so far, no dice.

Education is the last backwater of anti-genetic thinking. It’s not even anti-genetic. It’s as if genetics doesn’t even exist. I want to get people in education talking about genetics because the evidence for genetic influence is overwhelming. The things that interest them—learning abilities, cognitive abilities, behavior problems in childhood—are the most heritable things in the behavioral domain. Yet it’s like Alice in Wonderland. You go to educational conferences and it’s as if genetics does not exist.

I’m wondering about where the DNA revolution will take us. If we are explaining 10 percent of the variance of GCSE scores with a DNA chip, it becomes real. People will begin to use it. It’s important that we begin to have this conversation. I’m frustrated at having so little success in convincing people in education of the possibility of genetic influence. It is ignorance as much as it is antagonism.

Here’s one call for more reality recognition.

***

Still Interested? Check out a book by John Brockman of Edge.org with a curated collection of articles published on genetics.

J.K. Rowling On People’s Intolerance of Alternative Viewpoints

At the PEN America Literary Gala & Free Expression Awards, J.K. Rowling, of Harry Potter fame, received the 2016 PEN/Allen Foundation Literary Service Award. Embedded in her acceptance speech is some timeless wisdom on tolerance and acceptance:

Intolerance of alternative viewpoints is spreading to places that make me, a moderate and a liberal, most uncomfortable. Only last year, we saw an online petition to ban Donald Trump from entry to the U.K. It garnered half a million signatures.

Just a moment.

I find almost everything that Mr. Trump says objectionable. I consider him offensive and bigoted. But he has my full support to come to my country and be offensive and bigoted there. His freedom to speak protects my freedom to call him a bigot. His freedom guarantees mine. Unless we take that absolute position without caveats or apologies, we have set foot upon a road with only one destination. If my offended feelings can justify a travel ban on Donald Trump, I have no moral ground on which to argue that those offended by feminism or the fight for transgender rights or universal suffrage should not oppress campaigners for those causes. If you seek the removal of freedoms from an opponent simply on the grounds that they have offended you, you have crossed the line to stand alongside tyrants who imprison, torture and kill on exactly the same justification.

Too often we look at the world through our own eyes and fail to acknowledge the eyes of others. In so doing we often lose touch with reality.

The quick reaction our brains have to people who disagree with us is often that they are idiots. They shouldn’t be allowed to talk or have a platform. They should lose.

This reminds me of Kathryn Schulz’s insightful view on what we do when someone disagrees with us.

As a result we dismiss the views of others, failing to even consider that our view of the world might be wrong.

It’s easy to be dismissive and intolerant of others. It’s easy to say they’re idiots and wish they didn’t have the same rights you have. It’s harder to map that to the very freedoms we enjoy and relate it to the world we want to live in.

Warren Berger’s Three-Part Method for More Creativity

“A problem well stated is a problem half-solved.”
— Charles “Boss” Kettering

***

The whole scientific method is built on a very simple structure: If I do this, then what will happen? That’s the basic question on which more complicated, intricate, and targeted lines of inquiry are built, across a wide variety of subjects. This simple form helps us push deeper and deeper into knowledge of the world. (On a sidenote, science has become such a loaded, political word that this basic truth of how it works frequently seems to be lost!)

Individuals learn this way too. From the time you were a child, you were asking why (maybe even too much), trying to figure out all the right questions to ask to get better information about how the world works and what to do about it.

Because question-asking is such an integral part of how we know things about the world, both institutionally and individually, it seems worthwhile to understand how creative inquiry works, no? If we want to do things that haven’t been done or learn things that have never been learned — in short, be more creative — we must learn to ask the right questions, ones so good that they’re half-answered in the asking. And to do that, it helps to understand the process.

Warren Berger proposes a simple method in his book A More Beautiful Question, an interesting three-part system to help (partially) solve the problem of inquiry. He calls it The Why, What If, and How of Innovative Questioning, and reminds us why it’s worth learning about.

Each stage of the problem solving process has distinct challenges and issues–requiring a different mind-set, along with different types of questions. Expertise is helpful at certain points, not so helpful at others; wide-open, unfettered divergent thinking is critical at one stage, discipline and focus is called for at another. By thinking of questioning and problem solving in a more structured way, we can remind ourselves to shift approaches, change tools, and adjust our questions according to which stage we’re entering.

Three-Part Method for More Creativity

Why?

It starts with the Why?

A good Why? seeks true understanding. Why are things the way they are currently? Why do we do it that way? Why do we believe what we believe?

This start is essential because it gives us permission to continue down a line of inquiry fully equipped. Although we may think we have a brilliant idea in our heads for a new product, or a new answer to an old question, or a new way of doing an old thing, unless we understand why things are the way they are, we’re not yet on solid ground. We never want to operate from a position of ignorance, wasting our time on an idea that hasn’t been pushed and fleshed out. Before we say “I already know” the answer, maybe we need to step back and look for the truth.

At the same time, starting with a strong Why also opens up the idea that the current way (whether it’s our way or someone else’s) might be wrong, or at least inefficient. Let’s say a friend proposes you go to the same restaurant you’ve been to a thousand times. It might be a little agitating, but a simple “Why do we always go there?” allows two things to happen:

A. Your friend can explain why, and this gives him/her a legitimate chance at persuasion. (If you’re open minded.)

B. The two of you may agree you only go there out of habit, and might like to go somewhere else.

This whole Why? business is the realm of contrarian thinking, which not everyone enjoys doing. But Berger cites the case of George Lois:

George Lois, the renowned designer of iconic magazine covers and celebrated advertising campaigns, was also known for being a disruptive force in business meetings. It wasn’t just that he was passionate in arguing for his ideas; the real issue, Lois recalls, was that often he was the only person in the meeting willing to ask why. The gathered business executives would be anxious to proceed on a course of action assumed to be sensible. While everyone else nodded in agreement, “I would be the only guy raising his hand to say, ‘Wait a minute, this thing you want to do doesn’t make any sense. Why the hell are you doing it this way?’”

Others in the room saw Lois as slowing the meeting and stopping the group from moving forward. But Lois understood that the group was apt to be operating on habit–trotting out an idea or approach similar to what had been done in similar situations before, without questioning whether it was the best idea or the right approach in this instance. The group needed to be challenged to “step back” by someone like Lois–who had a healthy enough ego to withstand being the lone questioner in the room.

The truth is that a really good Why? type question tends to be threatening. That’s also what makes it useful. It challenges us to step back and stop thinking on autopilot. It also requires what Berger calls a step back from knowing — that recognizable feeling of knowing something but not knowing how you know it. Forcing perspective this way is, of course, one of the most valuable things you can do.

Berger describes a valuable exercise that’s sometimes used to force perspective on people who think they already have a complete answer. After showing a drawing of a large square (seemingly) divided into 16 smaller squares, the questioner asks the audience “How many squares do you see?”

The easy answer is sixteen. But the more observant people in the group are apt to notice–especially after Srinivas allows them to have a second, longer, look–that you can find additional squares by configuring them differently. In addition to the sixteen single squares, there are nine two-by-two squares, four three-by-three squares, and one large four-by-four square, which brings the total to thirty squares.

“The squares were always there, but you didn’t find them until you looked for them.”

Point being, until you step back, re-examine, and look a little harder, you might not have seen all the damn squares yet!
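
The counting itself generalizes neatly: an n-by-n grid contains (n - k + 1)^2 squares of side k, for each k from 1 to n. A tiny sketch (our illustration, not Berger’s) tallies the four-by-four case:

```python
def count_squares(n: int) -> int:
    # Each k-by-k square can start at any of (n - k + 1) positions
    # horizontally and vertically.
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

# The drawing from the exercise: 16 + 9 + 4 + 1 = 30 squares.
print(count_squares(4))  # 30
```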

What If?

The second part is where a good questioner, after using Why? to understand as deeply as possible and open a new line of inquiry, proposes a new type of solution, usually an audacious one — all great ideas tend to be, almost by definition — by asking What If…?

Berger illustrates this one well with the story of Pandora Music. The founder Tim Westergren wanted to know why good music wasn’t making it out to the masses. His search didn’t lead to a satisfactory answer, so he eventually asked himself, What if we could map the DNA of music? The result has been pretty darn good, with something close to 80 million listeners at present:

The Pandora story, like many stories of inquiry-driven startups, started with someone’s wondering about an unmet need. It concluded with the questioner, Westergren, figuring out how to bring a fully realized version of the answer into the world.

But what happened in between? That’s when the lightning struck. In Westergren’s case, ideas and influences began to come together; he combined what he knew about music with what he was learning about technology. Inspiration was drawn from a magazine article, and from a seemingly unrelated world (biology). A vision of the new possibility began to form in the mind. It all resulted in an audacious hypothetical question that might or might not have been feasible–but was exciting enough to rally people to the challenge of trying to make it work.

The What If stage is the blue-sky moment of questioning, when anything is possible. Those possibilities may not survive the more practical How stage; but it’s critical to innovation that there be time for wild, improbable ideas to surface and to inspire.

If the word Why has penetrative power, enabling the questioner to get past assumptions and dig deep into problems, the words What if have a more expansive effect–allowing us to think without limits or constraints, firing the imagination.

Clearly, Westergren had engaged in serious combinatorial creativity, pulling from multiple disciplines, which led him to ask the right kind of questions. This seems to be a pretty common feature at this stage of the game, and an extremely common feature of all new ideas:

Smart recombinations are all around us. Pandora, for example, is a combination of a radio station and search engine; it also takes the biological method of genetic coding and transfers it to the domain of music […] In today’s tech world, many of the most successful products–Apple’s iPhone being just one notable example–are hybrids, melding functions and features in new ways.

Companies, too, can be smart recombinations. Netflix was started as a video-rental business that operated like a monthly membership health club (and now it has added “TV production studio” to the mix). Airbnb is a combination of an online travel agency, a social media platform, and a good old-fashioned bed-and-breakfast (the B&B itself is a smart combination from way back.)

It may be that the Why? –> What if? line of inquiry is common to all types of innovative thinking because it engages the part of our brain that starts turning over old ideas in new ways by combining them with other unrelated ideas, many of them previously sitting idle in our subconscious. That churning is where new ideas really arise.

The idea then has to be “reality-tested”, and that’s where the last major question comes in.

How?

Once we think we’ve hit on a brilliant new idea, it’s time to see if the thing actually works. Usually and most frequently, the answer is no. But enough times to make it worth our while, we discover that the new idea has legs.

The most common problem here is that we try to perfect a new idea all at once, leading to stagnation and paralysis. That’s usually the wrong approach.

Another, often better, way is to try the idea quickly and start getting feedback. As much as possible. In the book, Berger describes a fun little experiment that drives home the point, and serves as a fairly useful business metaphor besides:

A software designer shared a story about an interesting experiment in which the organizers brought together a group of kindergarten children who were divided into small teams and given a challenge: Using uncooked spaghetti sticks, string, tape, and a marshmallow, they had to assemble the tallest structure they could, within a time limit (the marshmallow was supposed to be placed on top of the completed structure.)

Then, in a second phase of the experiment, the organizers added a new wrinkle. They brought in teams of Harvard MBA grad students to compete in the challenge against the kindergartners. The grad students, I’m told, took it seriously. They brought a highly analytical approach to the challenge, debating among themselves about how best to combine the sticks, the string, and the tape to achieve maximum altitude.

Perhaps you’ll have guessed this already, but the MBA students were no match for the kindergartners. For all their planning and discussion, the structures they carefully conceived invariably fell apart–and then they were out of time before they could get in more attempts.

The kids used their time much more efficiently by constructing right away. They tried one way of building, and if it didn’t work, they quickly tried another. They got in a lot more tries. They learned from their mistakes as they went along, instead of attempting to figure out everything in advance.

This little experiment gets run in the real world all the time by startups looking to outcompete ponderous old bureaucracies. They simply substitute velocity for scale and see what happens — it often works well.

The point is to move along the axis of Why?–>What If–>How? without too much self-censoring in the last phase. Being afraid to fail can often mean a great What If? proposition gets stuck there forever. Analysis paralysis, as it’s sometimes called. But if you can instead enter the testing of the How? stage quickly, even by showing that an idea won’t work, then you can start the loop over again, either asking a new Why? or proposing a new What If? to an existing Why?

Thus moving your creative engine forward.

***

Berger’s point is that there is an intense practical end to understanding productive inquiry. Just like “If I do this, then what will happen?” is a basic structure on which all manner of complex scientific questioning and testing is built, so can a simple Why, What If, and How structure catalyze a litany of new ideas.

Still Interested? Check out the book, or check out some related posts: Steve Jobs on Creativity, Seneca on Gathering Ideas And Combinatorial Creativity, or for some fun with question-asking, What If? Serious Scientific Answers to Absurd Hypothetical Questions.

Atul Gawande and the Mistrust of Science

Continuing on with Commencement Season, Atul Gawande gave an address to the students of Caltech last Friday, delivering a message to future scientists, but one that applies equally to all of us as thinkers:

“Even more than what you think, how you think matters.”

Gawande addresses the current growing mistrust of “scientific authority” — the thought that because science creaks along one mistake at a time, it isn’t to be trusted. The misunderstanding of what scientific thinking is and how it works is at the root of much problematic ideology, and it’s up to those who do understand it to promote its virtues.

It’s important to realize that scientists, individually, are as fallible as the rest of us. Thinking otherwise only sets you up for disappointment. The point of science is the collective, the forward advance of the hive, not the bee. It’s sort of a sausage-making factory when seen up close, but when you pull back the view, it looks like a beautifully humming engine, steadily giving us more and more information about ourselves and the world around us. Science is, above all, a method of thought. A way of figuring out what’s true and what we’re just fooling ourselves about.

So explains Gawande:

Few working scientists can give a ground-up explanation of the phenomenon they study; they rely on information and techniques borrowed from other scientists. Knowledge and the virtues of the scientific orientation live far more in the community than the individual. When we talk of a “scientific community,” we are pointing to something critical: that advanced science is a social enterprise, characterized by an intricate division of cognitive labor. Individual scientists, no less than the quacks, can be famously bull-headed, overly enamored of pet theories, dismissive of new evidence, and heedless of their fallibility. (Hence Max Planck’s observation that science advances one funeral at a time.) But as a community endeavor, it is beautifully self-correcting.

Beautifully organized, however, it is not. Seen up close, the scientific community—with its muddled peer-review process, badly written journal articles, subtly contemptuous letters to the editor, overtly contemptuous subreddit threads, and pompous pronouncements of the academy— looks like a rickety vehicle for getting to truth. Yet the hive mind swarms ever forward. It now advances knowledge in almost every realm of existence—even the humanities, where neuroscience and computerization are shaping understanding of everything from free will to how art and literature have evolved over time.

He echoes Steven Pinker in the thought that science, traditionally left to the realm of discovering “physical” reality, is now making great inroads into what might have previously been considered philosophy, by exploring why and how our minds work the way they do. This can only be accomplished by deep critical thinking across a broad range of disciplines, and by the dual attack of specialists uncovering highly specific nuggets and great synthesizers able to suss out meaning from the big pile of facts.

The whole speech is worth a read and reflection, but Gawande’s conclusion is particularly poignant for an educated individual in a Republic:

The mistake, then, is to believe that the educational credentials you get today give you any special authority on truth. What you have gained is far more important: an understanding of what real truth-seeking looks like. It is the effort not of a single person but of a group of people—the bigger the better—pursuing ideas with curiosity, inquisitiveness, openness, and discipline. As scientists, in other words.

Even more than what you think, how you think matters. The stakes for understanding this could not be higher than they are today, because we are not just battling for what it means to be scientists. We are battling for what it means to be citizens.

Still Interested? Read the rest, and read a few of this year’s other commencement addresses, by Nassim Taleb and Gary Taubes. Or read about E.O. Wilson, the great Harvard biologist, and what he thought it took to become a great scientist. (Hint: The same stuff it takes for anyone to become a great critical thinker.)