
Double Loop Learning: Download New Skills and Information into Your Brain

We’re taught single loop learning from the time we are in grade school, but there’s a better way. Double loop learning is the quickest and most efficient way to learn anything that you want to “stick.”

***

So, you’ve done the work necessary to have an opinion, learned the mental models, and considered how you make decisions. But how do you now implement these concepts and figure out which ones work best in your situation? How do you know what’s effective and what’s not? One solution to this dilemma is double loop learning.

We can think of double loop learning as learning based on Bayesian updating — the modification of goals, rules, or ideas in response to new evidence and experience. It might sound like another piece of corporate jargon, but double loop learning cultivates creativity and innovation for both organizations and individuals.
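The phrase "Bayesian updating" has a precise form: Bayes' rule tells us exactly how much a new piece of evidence should shift a prior belief. A minimal sketch, with invented numbers:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) via Bayes' rule.

    prior:                   P(H), belief before the evidence
    p_evidence_given_h:      P(E | H)
    p_evidence_given_not_h:  P(E | not H)
    """
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

# Start 60% confident a strategy works; a good outcome is twice as
# likely if it does work. Belief rises, but not to certainty.
posterior = bayes_update(prior=0.6,
                         p_evidence_given_h=0.8,
                         p_evidence_given_not_h=0.4)
```

Running the update moves the belief from 0.6 to 0.75: the evidence counts, but one observation never settles the question, which is why the loop has to keep running.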

“Every reaction is a learning process; every significant experience alters your perspective.”

— Hunter S. Thompson

Single Loop Learning

The first time we aim for a goal, follow a rule, or make a decision, we are engaging in single loop learning. This is where many people get stuck and keep making the same mistakes. If we question our approaches and make honest self-assessments, we shift into double loop learning. It’s similar to the Orient stage in John Boyd’s OODA loop. In this stage, we assess our biases, question our mental models, and look for areas where we can improve. We collect data, seek feedback, and gauge our performance. In short, we can’t learn from experience without reflection. Only reflection allows us to distill the experience into something we can learn from.

In Teaching Smart People How to Learn, business theorist Chris Argyris compares single loop learning to a typical thermostat. It operates in a homeostatic loop, always seeking to return the room to the temperature at which the thermostat is set. A thermostat might keep the temperature steady, but it doesn’t learn. By contrast, double loop learning would entail the thermostat’s becoming more efficient over time. Is the room at the optimum temperature? What’s the humidity like today and would a lower temperature be more comfortable? The thermostat would then test each idea and repeat the process. (Sounds a lot like Nest.)
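Argyris's thermostat analogy maps neatly onto code. A single-loop controller only corrects deviations from a fixed setpoint; a double-loop controller also revises the setpoint itself in light of feedback. This is a hypothetical sketch of the distinction, not anything Argyris wrote:

```python
def single_loop(temp, setpoint=20.0):
    """Single loop: correct deviations from a fixed goal."""
    if temp < setpoint:
        return "heat"
    if temp > setpoint:
        return "cool"
    return "idle"

def double_loop(temp, setpoint, comfort_feedback):
    """Double loop: question the goal itself, then act on the revised goal.

    comfort_feedback is occupant feedback ("too warm", "too cold", "fine"),
    used to revise the setpoint before the inner loop runs.
    """
    if comfort_feedback == "too warm":
        setpoint -= 0.5  # revise the goal in light of new evidence
    elif comfort_feedback == "too cold":
        setpoint += 0.5
    return single_loop(temp, setpoint), setpoint
```

The inner function never changes its goal; the outer one treats the goal as just another variable to test, which is the whole difference between the two loops.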

Double Loop Learning

Double loop learning is part of action science — the study of how we act in difficult situations. Individuals and organizations need to learn if they want to succeed (or even survive). But few of us pay much attention to exactly how we learn and how we can optimize the process.

Even smart, well-educated people can struggle to learn from experience. We all know someone who’s been at the office for 20 years and claims to have 20 years of experience, but they really have one year repeated 20 times.

Not learning can actually make you worse off. The world is dynamic and always changing. If you’re standing still, then you won’t adapt. Forget moving ahead; you have to get better just to stay in the same relative spot, and not getting better means you’re falling behind.

Many of us are so focused on solving problems as they arise that we don’t take the time to reflect on them after we’ve dealt with them, and this omission dramatically limits our ability to learn from the experiences. Of course, we want to reflect, but we’re busy and we have more problems to solve — not to mention that reflecting on our idiocy is painful and we’re predisposed to avoid pain and protect our egos.

Reflection, however, is an example of an approach I call first-order negative, second-order positive. It has very visible short-term costs — it takes time and honest self-assessment about our shortcomings — but it pays off in spades in the future. The problem is that the future payoff is invisible today, so slowing down now in order to go faster later seems like a bad idea to many. And because the payoff is so far off, it’s hard to connect it to the reflection we do today.

The Learning Dilemma: How Success Becomes an Impediment

Argyris wrote that many skilled people excel at single loop learning. It’s what we learn in academic situations. But if we are accustomed only to success, double loop learning can ignite defensive behavior. Argyris found this to be the reason learning can be so difficult. It’s not because we aren’t competent, but because we resist learning out of a fear of seeming incompetent. Smart people aren’t used to failing, so they struggle to learn from their mistakes and often respond by blaming someone else. As Argyris put it, “their ability to learn shuts down precisely at the moment they need it the most.”

In the same way that a muscle strengthens at the point of failure, we learn best after dramatic errors.

The problem is that single loop processes can be self-fulfilling. Consider managers who assume their employees are inept. They deal with this by micromanaging and making every decision themselves. Their employees have no opportunity to learn, so they become discouraged. They don’t even try to make their own decisions. This is a self-perpetuating cycle. For double loop learning to happen, the managers would have to let go a little. Allow someone else to make minor decisions. Offer guidance instead of intervention. Leave room for mistakes. In the long run, everyone would benefit. The same applies to teachers who think their students are going to fail an exam. The teachers become condescending and assign simple work. When the exam rolls around, guess what? Many of the students do badly. The teachers think they were right, so the same thing happens the next semester.

Many of the leaders Argyris studied blamed any problems on “unclear goals, insensitive and unfair leaders, and stupid clients” rather than making useful assessments. Complaining might be cathartic, but it doesn’t let us learn. Argyris explained that this defensive reasoning happens even when we want to improve. Single loop learning just happens to be a way of minimizing effort. We would go mad if we had to rethink our response every time someone asked how we are, for example. So everyone develops their own “theory of action—a set of rules that individuals use to design and implement their own behavior as well as to understand the behavior of others.” Most of the time, we don’t even consider our theory of action. It’s only when asked to explain it that the divide between how we act and how we think we act becomes apparent. Identifying the gap between our espoused theory of action and what we are actually doing is the hard part.

The Key to Double Loop Learning: Push to the Point of Failure

The first step Argyris identified is to stop getting defensive. Justification gets us nowhere. Instead, he advocates collecting and analyzing relevant data. What conclusions can we draw from experience? How can we test them? What evidence do we need to prove a new idea is correct?

The next step is to change our mental models. Break apart paradigms. Question where conventions came from. Pivot and make reassessments if necessary.

Problem-solving isn’t a linear process. We can’t make one decision and then sit back and await success.

Argyris found that many professionals are skilled at teaching others, yet find it difficult to recognize the problems they themselves cause (see Galilean Relativity). It’s easy to focus on other people; it’s much harder to look inward and face complex challenges. Doing so brings up guilt, embarrassment, and defensiveness. As John Gray put it, “If there is anything unique about the human animal, it is that it has the ability to grow knowledge at an accelerating rate while being chronically incapable of learning from experience.”

When we repeat a single loop process, it becomes a habit. Each repetition requires less and less effort. We stop questioning or reconsidering it, especially if it does the job (or appears to). While habits are essential in many areas of our lives, they don’t serve us well if we want to keep improving. For that, we need to push the single loop to the point of failure, to strengthen how we act in the double loop. It’s a bit like the Feynman technique — we have to dismantle what we know to see how solid it truly is.

“Fail early and get it all over with. If you learn to deal with failure… you can have a worthwhile career. You learn to breathe again when you embrace failure as a part of life, not as the determining moment of life.”

— Rev. William L. Swig

One example is the typical five-day, 9-to-5 work week. Most organizations stick to it year after year. They don’t reconsider the efficacy of a schedule designed for Industrial Revolution factory workers. This is single loop learning. It’s just the way things are done, but not necessarily the smartest way to do things.

The decisions made early on in an organization have the greatest long-term impact. Changing them in the months, years, or even decades that follow becomes a non-option. How to structure the work week is one such initial decision that becomes invisible. As G.K. Chesterton put it, “The things we see every day are the things we never see at all.” Sure, a 9-to-5 schedule might not be causing any obvious problems. The organization might be perfectly successful. But that doesn’t mean things cannot improve. It’s the equivalent of a child continuing to crawl because it gets them around. Why try walking if crawling does the job? Why look for another option if the current one is working?

A growing number of organizations are realizing that conventional work weeks might not be the most effective way to structure work time. They are using double loop learning to test other structures. Some organizations are trying shorter work days or four-day work weeks or allowing people to set their own schedules. Managers then keep track of how the tested structures affect productivity and profits. Over time, it becomes apparent whether the new schedule is better than the old one.

37signals is one company using double loop learning to restructure its work week. CEO Jason Fried began experimenting a few years ago. He tried out a four-day, 32-hour work week. He gave employees the whole of June off to explore new ideas. He cut back on meetings and created quiet spaces for focused work. Rather than following conventions, 37signals became a laboratory looking for ways of improving. Over time, what worked and what didn’t became obvious.

Double loop learning is about data-backed experimentation, not aimless tinkering. If a new idea doesn’t work, it’s time to try something else.

In an op-ed for The New York Times, Camille Sweeney and Josh Gosfield give the example of David Chang. Double loop learning turned his failing noodle bar into an award-winning empire.

After apprenticing as a cook in Japan, Mr. Chang started his own restaurant. Yet his early efforts were ineffective. He found himself overworked and struggling to make money. He knew his cooking was excellent, so how could he make it profitable? Many people would have quit or continued making irrelevant tweaks until the whole endeavor failed. Instead, Mr. Chang shifted from single to double loop learning and began making honest self-assessments. One of his foundational beliefs was that the restaurant should serve only noodles, but he decided to change the menu to reflect his skills. In time, it paid off; “the crowds came, rave reviews piled up, awards followed and unimaginable opportunities presented themselves.” This is what double loop learning looks like in action: questioning everything and starting from scratch if necessary.

Josh Waitzkin’s approach (as explained in The Art of Learning) is similar. After reaching the heights of competitive chess, Waitzkin turned his focus to martial arts. He began with tai chi chuan. Martial arts and chess are, on the surface, completely different, but Waitzkin used double loop learning for both. He progressed quickly because he was willing to lose matches if doing so meant he could learn. He noticed that other martial arts students had a tendency to repeat their mistakes, letting fruitless habits become ingrained. Like the managers Argyris worked with, students grew defensive when challenged. They wanted to be right, even if it prevented their learning. In contrast, Waitzkin viewed practice as an experiment. Each session was an opportunity to test his beliefs. He mastered several martial arts, earning a black belt in jujitsu and winning a world championship in tai ji tui shou.

Argyris found that organizations learn best when people know how to communicate. (No surprise there.) Leaders need to listen actively and open up exploratory dialogues so that problematic assumptions and conventions can be revealed. Argyris identified some key questions to consider.

  • What is the current theory in use?
  • How does it differ from proposed strategies and goals?
  • What unspoken rules are being followed, and are they detrimental?
  • What could change, and how?
  • Forget the details; what’s the bigger picture?

Meaningful learning doesn’t happen without focused effort. Double loop learning is the key to turning experience into improvements, information into action, and conversations into progress.

What You Can Learn from Fighter Pilots About Making Fast and Accurate Decisions

“What is strategy? A mental tapestry of changing intentions for harmonizing and focusing our efforts as a basis for realizing some aim or purpose in an unfolding and often unforeseen world of many bewildering events and many contending interests.”

— John Boyd

What techniques do people use in the most extreme situations to make decisions? What can we learn from them to help us make more rational and quick decisions?

If these techniques work in the most drastic scenarios, they have a good chance of working for us. This is why military mental models can have such wide, useful applications outside their original context.

Military mental models are constantly tested in the laboratory of conflict. If they weren’t agile, versatile, and effective, they would quickly be replaced by others. Military leaders and strategists invest a great deal of time in developing and teaching decision-making processes.

One strategy that I’ve found repeatedly effective is the OODA loop.

Developed by strategist and U.S. Air Force Colonel John Boyd, the OODA loop is a practical concept designed to be the foundation of rational thinking in confusing or chaotic situations. OODA stands for Observe, Orient, Decide, and Act.

Boyd developed the strategy for fighter pilots. However, like all good mental models, it can be extended into other fields. We used it at the intelligence agency I used to work at. I know lawyers, police officers, doctors, businesspeople, politicians, athletes, and coaches who use it.

Fighter pilots have to work fast. Taking a second too long to make a decision can cost them their lives. As anyone who has ever watched Top Gun knows, pilots have a lot of decisions and processes to juggle when they’re in dogfights (close-range aerial battles). Pilots move at high speeds and need to avoid enemies while tracking them and keeping a contextual knowledge of objectives, terrains, fuel, and other key variables.

Dogfights are nasty. I’ve talked to pilots who’ve been in them. They want the fights to be over as quickly as possible. The longer they go, the higher the chances that something goes wrong. Pilots need to rely on their creativity and decision-making abilities to survive. There is no game plan to follow, no schedule or to-do list. There is only the present moment when everything hangs in the balance.

Forty-Second Boyd

Boyd was no armchair strategist. He developed his ideas during his own time as a fighter pilot. He earned the nickname “Forty-Second Boyd” for his ability to win any fight in under 40 seconds.

In a tribute written after Boyd’s death, General C.C. Krulak described him as “a towering intellect who made unsurpassed contributions to the American art of war. Indeed, he was one of the central architects of the reform of military thought…. From John Boyd we learned about competitive decision making on the battlefield—compressing time, using time as an ally.”

Reflecting Robert Greene’s maxim that everything is material, Boyd spent his career observing people and organizations. How do they adapt to changeable environments in conflicts, business, and other situations?

Over time, he deduced that these situations are characterized by uncertainty. Dogmatic, rigid theories are unsuitable for chaotic situations. Rather than trying to rise through the military ranks, Boyd focused on using his position as colonel to compose a theory of the universal logic of war.

Boyd was known to ask his mentees the pointed question, “Do you want to be someone, or do you want to do something?” In his own life, he certainly focused on the latter path and, as a result, left us ideas with tangible value. The OODA loop is just one of many.

The Four Parts of the OODA Loop

Let’s break down the four parts of the OODA loop and see how they fit together.

OODA stands for Observe, Orient, Decide, Act. The description of it as a loop is crucial. Boyd intended the four steps to be repeated again and again until a conflict finishes. Although most depictions of the OODA loop portray it as a superficial idea, there is a lot of depth to it. Using it should be simple, but it has a rich basis in interdisciplinary knowledge.

1. Observe

The first step in the OODA loop is to observe. At this stage, the main focus is to build a comprehensive picture of the situation with as much accuracy as possible.

A fighter pilot needs to consider: What is immediately affecting me? What is affecting my opponent? What could affect us later on? Can I make any predictions, and how accurate were my prior ones? A pilot’s environment changes rapidly, so these observations need to be broad and fluid.

And information alone is not enough. The observation stage requires awareness of the overarching meaning of the information. It also necessitates separating the information which is relevant for a particular decision from that which is not. You have to add context to the variables.

The observation stage is vital in decision-making processes.

For example, faced with a patient in an emergency ward, a doctor needs to start by gathering as much foundational knowledge as possible. That might be the patient’s blood pressure, pulse, age, underlying health conditions, and reason for admission. At the same time, the doctor needs to discard irrelevant information and figure out which facts are relevant for this precise situation. Only by putting the pieces together can she make a fast decision about the best way to treat the patient. The more experienced a doctor is, the more factors she is able to take into account, including subtle ones, such as a patient’s speech patterns, his body language, and the absence (rather than presence) of certain signs.

2. Orient

Orientation, the second stage of the OODA loop, is frequently misunderstood or skipped because it is less intuitive than the other stages. Boyd referred to it as the schwerpunkt, a German term which loosely translates to “the main emphasis.” In this context, to orient is to recognize the barriers that might interfere with the other parts of the process.

Without an awareness of these barriers, the subsequent decision cannot be a fully rational one. Orienting is all about connecting with reality, not with a false version of events filtered through the lens of cognitive biases and shortcuts.

“Orientation isn’t just a state you’re in; it’s a process. You’re always orienting.”

— John Boyd

Including this step, rather than jumping straight to making a decision, gives us an edge over the competition. Even if we are at a disadvantage to begin with, having fewer resources or less information, Boyd maintained that the Orient step ensures that we can outsmart an opponent.

For Western nations, cyber-crime is a huge threat — mostly because for the first time ever, they can’t outsmart, outspend, or out-resource the competition. Boyd has some lessons for them.

Boyd believed that four main barriers prevent us from seeing information in an unbiased manner:

  1. Our cultural traditions
  2. Our genetic heritage
  3. Our ability to analyze and synthesize
  4. The influx of new information — it is hard to make sense of observations when the situation keeps changing

Boyd was one of the first people to discuss the importance of building a toolbox of mental models, prior to Charlie Munger’s popularization of the concept among investors.

Boyd believed in “destructive deduction” — taking note of incorrect assumptions and biases and then replacing them with fundamental, versatile mental models. Only then can we begin to garner a reality-oriented picture of the situation, which will inform subsequent decisions.

Boyd employed a brilliant metaphor for this — a snowmobile. In one talk, he described how a snowmobile comprises elements of different devices. The caterpillar treads of a tank, skis, the outboard motor of a boat, the handlebars of a bike — each of those elements is useless alone, but combining them creates a functional vehicle.

As Boyd put it: “A loser is someone (individual or group) who cannot build snowmobiles when facing uncertainty and unpredictable change; whereas a winner is someone (individual or group) who can build snowmobiles, and employ them in an appropriate fashion, when facing uncertainty and unpredictable change.”

To orient ourselves, we have to build a metaphorical snowmobile by combining practical concepts from different disciplines.

Although Boyd is regarded as a military strategist, he didn’t confine himself to any particular discipline. His theories encompass ideas drawn from various disciplines, including mathematical logic, biology, psychology, thermodynamics, game theory, anthropology, and physics. Boyd described his approach as a “scheme of pulling things apart (analysis) and putting them back together (synthesis) in new combinations to find how apparently unrelated ideas and actions can be related to one another.”

3. Decide

No surprises here. Having gathered information and oriented ourselves, we have to make an informed decision. The previous two steps should have generated a plethora of ideas, so this is the point where we choose the most relevant option.

Boyd cautioned against first-conclusion bias, explaining that we cannot keep making the same decision again and again. This part of the loop needs to be flexible and open to Bayesian updating. In some of his notes, Boyd described this step as the hypothesis stage. The implication is that we should test the decisions we make at this point in the loop, spotting their flaws and including any issues in future observation stages.

4. Act

While technically a decision-making process, the OODA loop is all about action. The ability to act upon rational decisions is a serious advantage.

The other steps are mere precursors. With a decision made, now is the time to act on it. Also known as the test stage, this is when we experiment to see how good our decision was. Did we observe the right information? Did we use the best possible mental models? Did we get swayed by biases and other barriers? Can we disprove the prior hypothesis? Whatever the outcome, we then cycle back to the first part of the loop and begin observing again.
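The four steps, and the cycle back from Act to Observe, can be sketched as a literal feedback loop. The function names and stop condition here are hypothetical scaffolding to show the structure, not anything Boyd specified:

```python
def ooda_loop(observe, orient, decide, act, done, max_cycles=100):
    """Repeat Observe -> Orient -> Decide -> Act until done() is satisfied.

    Each act() feeds back into the next observe(): the loop's output
    becomes part of its own next input, which is what makes it a loop.
    done() must handle None, the state before any action has been taken.
    """
    outcome = None
    for _ in range(max_cycles):
        if done(outcome):
            break
        data = observe(outcome)       # gather raw information
        picture = orient(data)        # filter it through mental models
        hypothesis = decide(picture)  # choose the most relevant option
        outcome = act(hypothesis)     # test the decision in the world
    return outcome
```

A toy run with trivial stand-ins (each cycle nudges the state forward by one until a target is reached) shows the loop repeating until its stop condition holds, rather than running once.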

Why the OODA Loop Works

The OODA loop has four key benefits.

1. Speed

Fighter pilots must make many decisions in fast succession. They don’t have time to list pros and cons or to consider every available avenue. Once the OODA loop becomes part of their mental toolboxes, they should be able to cycle through it in a matter of seconds.

Speed is a crucial element of military decision making. Using the OODA loop in everyday life, we probably have a little more time than a fighter pilot would. But Boyd emphasized the value of being decisive, taking initiative, and staying autonomous. These are universal assets and apply to many situations.

Take the example of modern growth hacker marketing.

“The ability to operate at a faster tempo or rhythm than an adversary enables one to fold the adversary back inside himself so that he can neither appreciate nor keep up with what is going on. He will become disoriented and confused…”

— John Boyd

The key advantage growth hackers have over traditional marketers is speed. They observe (look at analytics, survey customers, run A/B tests, etc.) and orient themselves (consider vanity versus meaningful metrics, assess interpretations, and ground themselves in the reality of a market) before making a decision and then acting. The final step serves to test their ideas, and they have the agility to switch tactics if the desired outcome is not achieved.

Meanwhile, traditional marketers are often trapped in lengthy campaigns which do not offer much in the way of useful metrics. Growth hackers can adapt and change their techniques every single day depending on what works. They are not confined by stagnant ideas about what worked before.

So, although they may have a small budget and fewer people to assist them, their speed gives them an advantage. Just as Boyd could defeat any opponent in under 40 seconds (even starting at a position of disadvantage), growth hackers can grow companies and sell products at extraordinary rates, starting from scratch.
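The observe-and-test cycle growth hackers rely on can be as simple as comparing conversion rates between two variants. A toy sketch with invented numbers (a real test would also check statistical significance before switching tactics):

```python
def conversion_rate(conversions, visitors):
    """Fraction of visitors who converted."""
    return conversions / visitors

def pick_winner(a, b):
    """a and b are (conversions, visitors) tuples for two variants.

    Returns "A", "B", or "tie" by raw conversion rate alone; this is
    the observe step, after which the loop acts and observes again.
    """
    rate_a = conversion_rate(*a)
    rate_b = conversion_rate(*b)
    if rate_a > rate_b:
        return "A"
    if rate_b > rate_a:
        return "B"
    return "tie"

# Variant B converts at 6% vs. A's 4%: act on B, then keep observing.
winner = pick_winner((40, 1000), (60, 1000))
```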

2. Comfort With Uncertainty

Uncertainty does not always equate to risk. A fighter pilot is in a precarious situation, where there will be gaps in their knowledge. They cannot read the mind of the opponent and might have incomplete information about the weather conditions and surrounding environment. They can, however, take into account key factors such as the opponent’s nationality, the type of airplane they are flying, and what their maneuvers reveal about their intentions and level of training.

If the opponent uses an unexpected strategy, is equipped with a new type of weapon or airplane, or behaves in an irrational, ideologically motivated way, the pilot must accept the accompanying uncertainty. However, Boyd stressed that uncertainty is irrelevant if we have the right filters in place.

If we don’t, we can end up stuck at the observation stage, unable to decide or act. But if we do have the right filters, we can factor uncertainty into the observation stage. We can leave a margin of error. We can recognize the elements which are within our control and those which are not.

Three key principles supported Boyd’s ideas. In his presentations, he referred to Gödel’s Proof, Heisenberg’s Uncertainty Principle, and the Second Law of Thermodynamics.

Gödel’s theorems indicate that any mental model we have of reality will omit certain information and that Bayesian updating must be used to bring it in line with reality. Our understanding of science illustrates this.

In the past, people’s conception of reality missed crucial concepts such as criticality, relativity, the laws of thermodynamics, and gravity. As we have discovered these concepts, we have updated our view of the world. Yet we would be foolish to think that we now know everything and our worldview is complete. Other key principles remain undiscovered. The same goes for fighter pilots — their understanding of what is going on during a battle will always have gaps. Identifying this fundamental uncertainty gives it less power over us.

The second concept Boyd referred to is Heisenberg’s Uncertainty Principle. In its simplest form, this principle describes a limit on the precision with which certain pairs of physical properties can be known. The more precisely we measure a particle’s position, the less precisely we can know its momentum, and vice versa. Although the uncertainty principle was initially used to describe particles, Boyd’s ability to combine disciplines led him to apply it to planes. If a pilot focuses too hard on where an enemy plane is, they will lose track of where it is going, and vice versa. Trying harder to pin down both variables at once only increases the inaccuracy. Boyd drew the broader lesson that excessive observation proves detrimental in myriad areas. Reality is imprecise.

Finally, Boyd made use of the Second Law of Thermodynamics. In a closed system, entropy always increases and everything moves towards chaos. Energy spreads out and becomes disorganized.

Although Boyd’s notes do not specify the exact applications, his inference appears to be that a fighter pilot must be an open system or they will fail. They must draw “energy” (information) from outside themselves or the situation will become chaotic. They should also aim to cut their opponent off, forcing them to become a closed system. Drawing on his studies, Boyd developed his Energy Maneuverability theory, which recast maneuvers in terms of the energy they used.

“Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.”

— Sun Tzu

3. Unpredictability

Using the OODA loop should enable us to act faster than an opponent, thereby seeming unpredictable. While they are still deciding what to do, we have already acted. This resets their own loop, moving them back to the observation stage. Keep doing this, and they are either rendered immobile or forced to act without making a considered decision. So, they start making mistakes, which can be exploited.

Boyd recommended making unpredictable changes in speed and direction, and wrote, “we should operate at a faster tempo than our adversaries or inside our adversaries[’] time scales. … Such activity will make us appear ambiguous (non predictable) [and] thereby generate confusion and disorder among our adversaries.” He even helped design planes better equipped to make those unpredictable changes.

For the same reason that you can’t run the same play 70 times in a football game, rigid military strategies often become useless after a few uses, or even one iteration, as opponents learn to recognize and counter them. The OODA loop can be endlessly used because it is a formless strategy, unconnected to any particular maneuvers.

We know that Boyd was influenced by Sun Tzu (he owned seven thoroughly annotated copies of The Art of War), and he drew many ideas from the ancient strategist. Sun Tzu depicts war as a game of deception where the best strategy is that which an opponent cannot pre-empt. Apple has long used this strategy as a key part of their product launches. Meticulously planned, their launches are shrouded in secrecy and the goal is for no one outside the company to see a product prior to the release.

When information has been leaked, the company has taken serious legal action as well as firing associated employees. We are never sure what Apple will put out next (just search for “Apple product launch 2017” and you will see endless speculation based on few facts). As a consequence, Apple can stay ahead of their rivals.

Once a product launches, rival companies scramble to emulate it. But by the time their technology is ready for release, Apple is on to the next thing and has taken most of the market share. Although Apple’s launches are inexpensive compared to the drawn-out campaigns other companies run, their unpredictability makes us pay attention. Stock prices rise the day after, tickets to launches sell out in seconds, and the media reports launches as if they were news events, not marketing events.

4. Testing

A notable omission in Boyd’s work is any sort of specific instructions for how to act or which decisions to make. This is presumably due to his respect for testing. He believed that ideas should be tested and then, if necessary, discarded.

“We can’t just look at our own personal experiences or use the same mental recipes over and over again; we’ve got to look at other disciplines and activities and relate or connect them to what we know from our experiences and the strategic world we live in.”

— John Boyd

Boyd’s OODA is a feedback loop, with the outcome of actions leading back to observations. Even in Aerial Attack Study, his comprehensive manual of maneuvers, Boyd did not describe any particular one as superior. He encouraged pilots to have the widest repertoire possible so they could select the best option in response to the maneuvers of an opponent.

We can incorporate testing into our decision-making processes by keeping track of outcomes in decision journals. Boyd’s notes indicate that he may have done just that during his time as a fighter pilot, building up the knowledge that went on to form Aerial Attack Study. Rather than guessing how our decisions lead to certain outcomes, we can get a clear picture to aid us in future orientation stages. Over time, our decision journals will reveal what works and what doesn’t.

Applying the OODA Loop

In sports, there is an adage that carries over to business quite well: “Speed kills.” If you are nimble, able to assess the ever-changing environment and adapt quickly, you’ll always carry the advantage over your opponent.

Start applying the OODA loop to your day-to-day decisions and watch what happens. You’ll start to notice things that you would have been oblivious to before. Before jumping to your first conclusion, you’ll pause to consider your biases, take in additional information, and be more thoughtful of consequences.

As with anything you practice, if you do it right, the more you do it, the better you’ll get. You’ll start making better decisions more quickly. You’ll see more rapid progress. And as John Boyd would prescribe, you’ll start to DO something in your life, and not just BE somebody.

***


The Generalized Specialist: How Shakespeare, Da Vinci, and Kepler Excelled

“What do you want to be when you grow up?” Do you ever ask kids this question? Did adults ask you this when you were a kid?

Even if you managed to escape this question until high school, by the time you got there, you were probably expected to answer it, if only to choose a college and a major. Maybe you took aptitude tests, along with the standard academic tests, in high school. This is when the pressure to go down a path to a job commences. Increasingly, the education system seems to want to reduce the time it takes for us to become productive members of the workforce, so instead of exploring more options, we are encouraged to start narrowing them.

Any field you go into, from finance to engineering, requires some degree of specialization. Once you land a job, the process of specialization only intensifies. You become a specialist in certain aspects of the organization you work for.

Then something happens. Maybe your specialty is no longer needed or gets replaced by technology. Or perhaps you get promoted. As you go up the ranks of the organization, your specialty becomes less and less important, and yet the tendency is to hold on to it longer and longer. If it’s the only subject or skill you know better than anything else, you tend to see it everywhere. Even where it doesn’t exist.

Every problem is a nail and you just happen to have a hammer.

Only this approach doesn’t work. Because you don’t know the big ideas, you start making decisions that don’t take into account how the world really works. These decisions ripple outward, and you have to spend time correcting your mistakes. If you’re not careful about self-reflection, you won’t learn, and you’ll make one version of the same mistakes over and over.

Should we become specialists or polymaths? Is there a balance we should pursue?

There is no single answer.

The decision is personal. And most of the time we fail to see the life-changing implications of it. Whether we’re conscious of this or not, it’s also a decision we have to make and re-make over and over again. Every day, we have to decide where to invest our time — do we become better at what we do or learn something new?

If you can’t adapt, changes become threats instead of opportunities.

There is another way to think about this question, though.

Around 2700 years ago, the Greek poet Archilochus wrote: “the fox knows many things; the hedgehog one big thing.” In the 1950s, philosopher Isaiah Berlin used that sentence as the basis of his essay “The Hedgehog and the Fox.” In it, Berlin divides great thinkers into two categories: hedgehogs, who have one perspective on the world, and foxes, who have many different viewpoints. Although Berlin later claimed the essay was not intended to be serious, it has become a foundational part of thinking about the distinction between specialists and generalists.

Berlin wrote that “…there exists a great chasm between those, on one side, who relate everything to a single central vision, one system … in terms of which they understand, think and feel … and, on the other hand, those who pursue many ends, often unrelated and even contradictory, connected, if at all, only in some de facto way.”

A generalist is a person who is a competent jack of all trades, with lots of divergent useful skills and capabilities. This is the handyman who can fix your boiler, unblock the drains, replace a door hinge, or paint a room. The general practitioner doctor whom you see for any minor health problem (and who refers you to a specialist for anything major). The psychologist who works with the media, publishes research papers, and teaches about a broad topic.

A specialist is someone with distinct knowledge and skills related to a single area. This is the cardiologist who spends their career treating and understanding heart conditions. The scientist who publishes and teaches about a specific protein for decades. The developer who works with a particular program.

In his original essay, Berlin writes that foxes “lead lives, perform acts and entertain ideas that are centrifugal rather than centripetal; their thought is scattered or diffused, moving on many levels, seizing upon the essence of a vast variety of experiences and objects … without … seeking to fit them into, or exclude them from, any one unchanging, all embracing … unitary inner vision.”

The generalist and the specialist are on the same continuum; there are degrees of specialization in a subject. There’s a difference between someone who specializes in teaching history and someone who specializes in teaching the history of the American Civil War, for example. Likewise, there is a spectrum for how generalized or specialized a certain skill is.

Some skills — like the ability to focus, to read critically, or to make rational decisions — are of universal value. Others are a little more specialized but can be used in many different careers. Examples of these skills would be design, project management, and fluency in a foreign language.

The distinction between generalization and specialization comes from biology. Species are referred to as either generalists or specialists, as with the hedgehog and the fox.

A generalist species can live in a range of environments, utilizing whatever resources are available. Often, these critters eat an omnivorous diet. Raccoons, mice, and cockroaches are generalists. They live all over the world and can eat almost anything. If a city is built in their habitat, then no problem; they can adapt.

A specialist species needs particular conditions to survive. In some cases, they are able to live only in a discrete area or eat a single food. Pandas are specialists, needing a diet of bamboo to survive. Specialist species can thrive if the conditions are correct. Otherwise, they are vulnerable to extinction.

A specialist who is outside of their circle of competence and doesn’t know it is incredibly dangerous.

The distinction between generalist and specialist species is useful as a point of comparison. Generalist animals (including humans) can be less efficient, yet they are less fragile amidst change. If you can’t adapt, changes become threats instead of opportunities.

While it’s not very glamorous to take career advice from a raccoon or a panda, we can learn something from them about the dilemmas we face. Do we want to be like a raccoon, able to survive anywhere, although never maximizing our potential in a single area? Or like a panda, unstoppable in the right context, but struggling in an inappropriate one?

Costs and Benefits

Generalists have the advantage of interdisciplinary knowledge, which fosters creativity and a firmer understanding of how the world works. They have a better overall perspective and can generally perform second-order thinking in a wider range of situations than the specialist can.

Generalists often possess transferable skills, allowing them to be flexible with their career choices and adapt to a changing world. They can do a different type of work and adapt to changes in the workplace. Gatekeepers tend to cause fewer problems for generalists than for specialists.

Managers and leaders are often generalists because they need a comprehensive perspective of their entire organization. And an increasing number of companies are choosing to have a core group of generalists on staff, and hire freelance specialists only when necessary.

The métiers at the lowest risk of automation in the future tend to be those which require a diverse, nuanced skill set: construction vehicle operators, blue-collar workers, therapists, dentists, and teachers among them.

When their particular skills are in demand, specialists experience substantial upsides. The scarcity of their expertise means higher salaries, less competition, and more leverage. Nurses, doctors, programmers, and electricians are currently in high demand where I live, for instance.

Specialists get to be passionate about what they do — not in the usual “follow your passion!” way, but in the sense that they can go deep and derive the satisfaction that comes from expertise. Garrett Hardin offers his perspective on the value of specialists: 

…we cannot do without experts. We accept this fact of life, but not without anxiety. There is much truth in the definition of the specialist as someone who “knows more and more about less and less.” But there is another side to the coin of expertise. A really great idea in science often has its birth as apparently no more than a particular answer to a narrow question; it is only later that it turns out that the ramifications of the answer reach out into the most surprising corners. What begins as knowledge about very little turns out to be wisdom about a great deal.

Hardin cites the development of probability theory as an example. When Blaise Pascal and Pierre de Fermat sought to devise a means of dividing the stakes in an interrupted gambling game, their expertise created a theory with universal value.
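The Pascal and Fermat solution to that interrupted-game puzzle (the “problem of points”) can be sketched concretely: each player’s fair share of the pot is the probability that they would have gone on to win had play continued. A minimal sketch, assuming each remaining round is a fair 50/50 trial (the function name and example numbers are illustrative, not from the source):

```python
from math import comb

def share_of_stakes(a_needs: int, b_needs: int) -> float:
    """Fraction of the pot player A should receive when a game stops
    with A needing `a_needs` more wins and B needing `b_needs` more,
    each remaining round being a fair coin flip."""
    # At most a_needs + b_needs - 1 further rounds settle the match.
    n = a_needs + b_needs - 1
    # A wins overall iff A takes at least `a_needs` of those n rounds.
    favorable = sum(comb(n, k) for k in range(a_needs, n + 1))
    return favorable / 2**n

# Classic case: A needs 1 more win, B needs 2 -> A's fair share is 3/4.
print(share_of_stakes(1, 2))  # 0.75
```

The key move, which generalized into probability theory, is counting equally likely future continuations rather than arguing from the score so far.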

The same goes for many mental models and unifying theories. Specialists come up with them, and generalists make use of them in surprising ways.

The downside is that specialists are vulnerable to change. Many specialist jobs are disappearing as technology changes. Stockbrokers, for example, face the possibility of replacement by AI in coming years. That doesn’t mean no one will hold those jobs, but demand will decrease. Many people will need to learn new work skills, and starting over in a new field can set them back decades. That’s a serious knock, both psychologically and financially.

Specialists are also subject to “man with a hammer” syndrome. Their area of expertise can become the lens through which they see everything.

As Michael Mauboussin writes in Think Twice:

…people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view and one that does not come naturally. To be sure, the future for experts is not all bleak. Experts retain an advantage in some crucial areas. The challenge is to know when and how to use them.

Understanding and staying within their circle of competence is even more important for specialists. A specialist who is outside of their circle of competence and doesn’t know it is incredibly dangerous.

Philip Tetlock performed an 18-year study to look at the quality of expert predictions. Could people who are considered specialists in a particular area forecast the future with greater accuracy than a generalist? Tetlock tracked 284 experts from a range of disciplines, recording the outcomes of 28,000 predictions.

The results were stark: predictions coming from generalist thinkers were more accurate. Experts who stuck to their specialized areas and ignored interdisciplinary knowledge fared worse. The specialists tended to be more confident in their erroneous predictions than the generalists. The specialists made definite assertions — which we know from probability theory to be a bad idea. It seems that generalists have an edge when it comes to Bayesian updating, recognizing probability distributions, and long-termism.
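The Bayesian updating mentioned here has a simple mechanical core: revise a prior belief in proportion to how much more likely the evidence is under the hypothesis than under its negation. A minimal sketch (the prior, likelihoods, and numbers are illustrative, not from Tetlock’s data):

```python
def bayes_update(prior: float,
                 p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis H after one piece of
    evidence E, via Bayes' rule:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# A forecaster starts at 30% confidence; new evidence is twice as
# likely if the hypothesis is true. Confidence rises to about 46%.
belief = bayes_update(0.30, 0.8, 0.4)
print(round(belief, 2))  # 0.46
```

A forecaster who updates this way moves in measured steps as evidence arrives, rather than jumping to the definite assertions Tetlock found among the overconfident specialists.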

Organizations, industries, and the economy need both generalists and specialists. And when we lack the right balance, it creates problems. Millions of jobs remain unfilled, while millions of people lack employment. Many of the empty positions require specialized skills. Many of the unemployed have skills which are too general to fill those roles. We need a middle ground.

The Generalized Specialist

The economist, philosopher, and writer Henry Hazlitt sums up the dilemma:

In the modern world knowledge has been growing so fast and so enormously, in almost every field, that the probabilities are immensely against anybody, no matter how innately clever, being able to make a contribution in any one field unless he devotes all his time to it for years. If he tries to be the Rounded Universal Man, like Leonardo da Vinci, or to take all knowledge for his province, like Francis Bacon, he is most likely to become a mere dilettante and dabbler. But if he becomes too specialized, he is apt to become narrow and lopsided, ignorant on every subject but his own, and perhaps dull and sterile even on that because he lacks perspective and vision and has missed the cross-fertilization of ideas that can come from knowing something of other subjects.

What’s the safest option, the middle ground?

By many accounts, it’s being a specialist in one area while retaining a few general, transferable skills. That might sound like a contradiction, but it isn’t: specialists and generalists sit on a continuum, not in separate boxes.

A generalizing specialist has a core competency which they know a lot about. At the same time, they are always learning and have a working knowledge of other areas. While a generalist has roughly the same knowledge of multiple areas, a generalizing specialist has one deep area of expertise and a few shallow ones. We have the option of developing a core competency while building a base of interdisciplinary knowledge.

“The fox knows many things, but the hedgehog knows one big thing.”

— Archilochus

As Tetlock’s research shows, for us to understand how the world works, it’s not enough to home in on one tiny area for decades. We need to pull ideas from everywhere, remaining open to having our minds changed, always looking for disconfirming evidence. Joseph Tussman put it this way: “If we do not let the world teach us, it teaches us a lesson.”

Many great thinkers are (or were) generalizing specialists.

Shakespeare specialized in writing plays, but his experiences as an actor, poet, and part owner of a theater company informed what he wrote. So did his knowledge of Latin, agriculture, and politics. Indeed, the earliest known reference to his work comes from a critic who accused him of being “an absolute Johannes factotum” (jack of all trades).

Leonardo da Vinci was a quintessential generalizing specialist. As well as the art he is best known for, da Vinci dabbled in engineering, music, literature, mathematics, botany, and history. These areas informed his art — note, for example, the rigorous application of botany and mathematics in his paintings. Some scholars consider da Vinci to be the first person to combine interdisciplinary knowledge in this way, or to recognize that a person can branch out beyond their defining trade.

Johannes Kepler revolutionized our knowledge of planetary motion by combining physics and optics with his main focus, astronomy. Military strategist John Boyd designed aircraft and developed new tactics, using insights from divergent areas he studied, including thermodynamics and psychology. He could think in a different manner from his peers, who remained immersed in military knowledge for their entire careers.

Shakespeare, Da Vinci, Kepler, and Boyd excelled by branching out from their core competencies. These men knew how to learn fast, picking up the key ideas and then returning to their specialties. Unlike their forgotten peers, they didn’t continue studying one area past the point of diminishing returns; they got back to work — and the results were extraordinary.

Many people seem to do work which is unrelated to their area of study or their prior roles. But dig a little deeper and it’s often the case that knowledge from the past informs their present. Marcel Proust put it best: “the real act of discovery consists not in finding new lands, but in seeing with new eyes.”

Interdisciplinary knowledge is what allows us to see with new eyes.

When Charlie Munger was asked whether to become a polymath or a specialist at the 2017 shareholders meeting for the Daily Journal, his answer surprised a lot of people. Many expected the answer to be obvious. Of course, he would recommend that people become generalists. Only this is not what he said.

Munger remarked:

I don’t think operating over many disciplines, as I do, is a good idea for most people. I think it’s fun, that’s why I’ve done it. And I’m better at it than most people would be, and I don’t think I’m good at being the very best at handling differential equations. So, it’s been a wonderful path for me, but I think the correct path for everybody else is to specialize and get very good at something that society rewards, and then to get very efficient at doing it. But even if you do that, I think you should spend 10 to 20% of your time [on] trying to know all the big ideas in all the other disciplines. Otherwise … you’re like a one-legged man in an ass-kicking contest. It’s not going to work very well. You have to know the big ideas in all the disciplines to be safe if you have a life lived outside a cave. But no, I think you don’t want to neglect your business as a dentist to think great thoughts about Proust.

In his comments, we can find the underlying approach most likely to yield exponential results: Specialize most of the time, but spend time understanding the broader ideas of the world.

This approach isn’t what most organizations and educational institutions provide. Branching out isn’t in many job descriptions or in many curricula. It’s a project we have to undertake ourselves, by reading a wide range of books, experimenting with different areas, and drawing ideas from each one.

Still curious? Check out the biographies of Leonardo da Vinci and Ben Franklin.

