
Simple Rules: How to Thrive in a Complex World


“Simple rules are shortcut strategies that save time and effort by focusing our attention and simplifying the way we process information. The rules aren’t universal— they’re tailored to the particular situation and the person using them.”

***

We use simple rules to guide decision making every day.

In fact, without them, we’d be paralyzed by the sheer brainpower required to sift through the complicated messiness of our world. You can think of them as heuristics: most of the time they work, but some of the time they don’t.

Simple Rules: How to Thrive in a Complex World, a book by Donald Sull and Kathleen Eisenhardt, explores the understated power that comes from using simple rules. As they define them, simple rules refer to “a handful of guidelines tailored to the user and the task at hand, which balance concrete guidance with the freedom to exercise judgment.” These rules “provide a powerful weapon against the complexity that threatens to overwhelm individuals, organizations, and society as a whole. Complexity arises whenever a system— technical, social, or natural— has multiple interdependent parts.”

They work, the authors argue, because they do three things well.

First, they confer the flexibility to pursue new opportunities while maintaining some consistency. Second, they can produce better decisions. When information is limited and time is short, simple rules make it fast and easy for people, organizations, and governments to make sound choices. They can even outperform complicated decision-making approaches in some situations. Finally, simple rules allow the members of a community to synchronize their activities with one another on the fly.

Effective simple rules share four common traits …

First, they are limited to a handful. Capping the number of rules makes them easy to remember and maintains a focus on what matters most. Second, simple rules are tailored to the person or organization using them. College athletes and middle-aged dieters may both rely on simple rules to decide what to eat, but their rules will be very different. Third, simple rules apply to a well-defined activity or decision, such as prioritizing injured soldiers for medical care. Rules that cover multiple activities or choices end up as vague platitudes, such as “Do your best” and “Focus on customers.” Finally, simple rules provide clear guidance while conferring the latitude to exercise discretion.

Simple Rules for a Complex World

People often attempt to address complex problems with complex solutions. Governments, for example, tend to manage complexity by trying to anticipate every possible scenario that might arise, and then promulgating regulations to cover every case.

Consider how central bankers responded to increased complexity in the global banking system. In 1988 bankers from around the world met in Basel, Switzerland, to agree on international banking regulations, and published a 30-page agreement (known as Basel I). Sixteen years later, the Basel II accord was an order of magnitude larger, at 347 pages, and Basel III was twice as long as its predecessor. When it comes to the sheer volume of regulations generated, the U.S. Congress makes the central bankers look like amateurs. The Glass-Steagall Act, a law passed during the Great Depression, which guided U.S. banking regulation for seven decades, totaled 37 pages. Its successor, Dodd-Frank, is expected to weigh in at over 30,000 pages when all supporting legislation is complete.

Meeting complexity with complexity can create more confusion than it resolves. The policies governing U.S. income taxes totaled 3.8 million words as of 2010. Imagine a book that is seven times as long as War and Peace, but without any characters, plot points, or insight into the human condition. That book is the U.S. tax code.

[…]

Applying complicated solutions to complex problems is an understandable approach, but flawed. The parts of a complex system can interact with one another in many different ways, which quickly overwhelms our ability to envision all possible outcomes.

[…]

Complicated solutions can overwhelm people, thereby increasing the odds that they will stop following the rules. A study of personal income tax compliance in forty-five countries found that the complexity of the tax code was the single best predictor of whether citizens would dodge or pay their taxes. The complexity of the regulations mattered more than the highest marginal tax rate, average levels of education or income, how fair the tax system was perceived to be, and the level of government scrutiny of tax returns.

Simple Rules Focus on the Critical Variables

Simple rules don’t trump complicated ones all the time, but they work more often than we think. Gerd Gigerenzer is a key contributor in this space; his research argues that simple rules can enable better decision making.

Why can simpler models outperform more complex ones? When underlying cause-and-effect relationships are poorly understood, decision makers often look for patterns in historical data under the assumption that past events are a good indicator of future trends. The obvious problem with this approach is that the future may be genuinely different from the past. But a second problem is subtler. Historical data includes not only useful signal, but also noise— happenstance correlations between variables that do not reveal an enduring cause-and-effect relationship. Fitting a model too closely to historical data hardwires error into the model, which is known as overfitting. The result is a precise prediction of the past that may tell us little about what the future holds.
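To make overfitting concrete, here is a minimal sketch in Python (my illustration, not the authors’; the data, models, and numbers are all hypothetical). A flexible model fits the historical noise almost perfectly, then fails badly on the future:

```python
import numpy as np

rng = np.random.default_rng(0)

# "History": a simple linear trend plus noise. The noise is happenstance,
# not an enduring cause-and-effect relationship.
x_past = np.linspace(0, 1, 20)
y_past = 2.0 * x_past + rng.normal(0, 0.3, size=x_past.size)

# "Future": the same underlying trend, but with fresh noise.
x_future = np.linspace(1, 2, 20)
y_future = 2.0 * x_future + rng.normal(0, 0.3, size=x_future.size)

for degree in (1, 9):
    coeffs = np.polyfit(x_past, y_past, degree)  # fit a polynomial to history
    err_past = np.mean((np.polyval(coeffs, x_past) - y_past) ** 2)
    err_future = np.mean((np.polyval(coeffs, x_future) - y_future) ** 2)
    print(f"degree {degree}: past error {err_past:.3f}, future error {err_future:.3f}")

# The degree-9 polynomial "explains" the past better than the simple line,
# but its prediction of the future is far worse: it has hardwired the
# noise into the model.
```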

Simple rules focus on the critical variables that govern a situation and help you ignore the peripheral ones. Of course, in order to identify the key variables, you need to be operating in your circle of competence. When we pay too much attention to irrelevant or otherwise unimportant information, we fail to give the most important variables the weight they deserve. Simple rules also make it more likely that people will act on them. This is something Napoleon intuitively understood.

When instructing his troops, Napoleon realized that complicated instructions were difficult to understand, explain, and execute. So, rather than complicated strategies he passed along simple ones, such as: Attack.

Making Better Decisions

The book mentions three types of rules that “improve decision making by structuring choices and centering on what to do (and what not to do)”: boundary, prioritizing, and stopping rules.

Boundary Rules cover what to do …

Boundary rules guide the choice of what to do (and not do) without requiring a lot of time, analysis, or information. Boundary rules work well for categorical choices, like a judge’s yes-or-no decision on a defendant’s bail, and decisions requiring many potential opportunities to be screened quickly. These rules also come in handy when time, convenience, and cost matter.

Prioritizing rules rank options to help decide which of multiple paths to pursue.

Prioritizing rules can help you rank a group of alternatives competing for scarce money, time, or attention. … They are especially powerful when applied to a bottleneck, an activity or decision that keeps individuals or organizations from reaching their objectives. Bottlenecks represent pinch-points in companies, where the number of opportunities swamps available resources, and prioritizing rules can ensure that these resources are deployed where they can have the greatest impact. In business settings, prioritizing rules can be used to assign engineers to new-product-development projects, focus sales representatives on the most promising customers, and allocate advertising expenditure across multiple products, to name only a few possibilities.

Stopping rules help you learn when to reverse a decision. Nobel Prize-winning economist Herbert Simon argued that we lack the information, time, and mental capacity to determine the single best path when faced with a slew of options. Instead, we rely on a heuristic to help us stop searching when we find something that’s good enough. Simon called this satisficing. If you think that’s hard, it’s even harder to stop doing something we’re already doing. Yet when it comes to our key investments of time, money, and energy, we have to know when to pull the plug.

Sometimes we pursue goals at all costs and ignore our self-imposed stopping rule. This goal-induced blindness can be deadly.

A cross-continental team of researchers matched 145 Chicagoans with demographically similar Parisians. Both the Chicagoans and Parisians used stopping rules to decide when to finish eating, but the rules themselves were very different. The Parisians employed rules like “Stop eating when I start feeling full,” linking their decision to internal cues about satiation. The Chicagoans, in contrast, were more likely to follow rules linked to external factors, such as “Stop eating when I run out of a beverage,” or “Stop eating when the TV show I’m watching is over.” Stopping rules that rely on internal cues— like when the food stops tasting good or you feel full— decrease the odds that people eat more than their body needs or even wants.

Stopping rules are particularly critical in situations when people tend to double down on a losing hand.

These three decision rules—boundary, prioritizing, and stopping—provide guidelines on what to do: “what is acceptable to do, what is more important to do, and what to stop doing.”
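As a rough illustration (mine, not the authors’), here is how the three rule types might combine in code. The screening task, names, and thresholds below are all hypothetical:

```python
# A toy illustration of the three decision-rule types, applied to
# screening investment opportunities. All data is made up.
opportunities = [
    {"name": "A", "sector_known": True,  "expected_return": 0.12},
    {"name": "B", "sector_known": False, "expected_return": 0.30},
    {"name": "C", "sector_known": True,  "expected_return": 0.08},
    {"name": "D", "sector_known": True,  "expected_return": 0.15},
]

# Boundary rule: only consider what falls inside the circle of competence.
candidates = [o for o in opportunities if o["sector_known"]]

# Prioritizing rule: rank the survivors by expected return.
candidates.sort(key=lambda o: o["expected_return"], reverse=True)

# Stopping rule (satisficing): commit until the budget runs out or the
# remaining options fall below a "good enough" threshold.
budget, chosen = 2, []
for o in candidates:
    if budget == 0 or o["expected_return"] < 0.10:
        break
    chosen.append(o["name"])
    budget -= 1

print(chosen)  # ['D', 'A'] -- B never entered; C was never good enough
```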

Doing Things Better

Process rules, in contrast to boundary rules, focus on how to do things better.

Process rules work because they steer a middle path between the chaos of too few rules that can result in confusion and mistakes, and the rigidity of so many rules that there is little ability to adapt to the unexpected or take advantage of new opportunities. Simply put, process rules are useful whenever flexibility trumps consistency.

The most widely used process rule is the how-to rule. How-to rules guide the basics of executing tasks, from playing golf to designing new products. The other process rules, coordination and timing, are special cases of how-to rules that apply in particular situations. Coordination rules center on getting something done when multiple actors— people, organizations, or nations— have to work together. These rules orchestrate the behaviors of, for example, schooling fish, Zipcar members, and content contributors at Wikipedia. In contrast, timing rules center on getting things done in situations where temporal factors such as rhythms, sequences, and deadlines are relevant. These rules set the timing of, for example, when to get up every day and when dragonflies migrate.

While I was skeptical, the book is well worth reading. I suggest you check it out.

Atul Gawande: The Building Industry’s Strategy for Getting Things Right in Complexity


Checklists establish a higher level of baseline performance.

***

A useful reminder from Atul Gawande, in The Checklist Manifesto:

In a complex environment, experts are up against two main difficulties. The first is the fallibility of human memory and attention, especially when it comes to mundane, routine matters that are easily overlooked under the strain of more pressing events. (When you’ve got a patient throwing up and an upset family member asking you what’s going on, it can be easy to forget that you have not checked her pulse.) Faulty memory and distraction are a particular danger in what engineers call all-or-none processes: whether running to the store to buy ingredients for a cake, preparing an airplane for takeoff, or evaluating a sick person in the hospital, if you miss just one key thing, you might as well not have made the effort at all.

A further difficulty, just as insidious, is that people can lull themselves into skipping steps even when they remember them. In complex processes, after all, certain steps don’t always matter. … “This has never been a problem before,” people say. Until one day it is.

Checklists seem to provide protection against such failures. They remind us of the minimum necessary steps and make them explicit. They not only offer the possibility of verification but also instill a kind of discipline of higher performance.

***

How you employ the checklist is also important. In the face of complexity, most organizations tend to centralize decisions, which reduces the risk of egregious error. But the costs of this approach are high, too. Most employees loathe feeling like they need a hall pass to use the washroom. That’s why these next comments were so inspiring.

There is a particularly tantalizing aspect to the building industry’s strategy for getting things right in complex situations: it’s that it gives people power. In response to risk, most authorities tend to centralize power and decision making. That’s usually what checklists are about—dictating instructions to the workers below to ensure they do things the way we want. Indeed, the first building checklist I saw, the construction schedule on the right-hand wall of O’Sullivan’s conference room, was exactly that. It spelled out to the tiniest detail every critical step the tradesmen were expected to follow and when—which is logical if you’re confronted with simple and routine problems; you want the forcing function.

But the list on O’Sullivan’s other wall revealed an entirely different philosophy about power and what should happen to it when you’re confronted with complex, nonroutine problems—such as what to do when a difficult, potentially dangerous, and unanticipated anomaly suddenly appears on the fourteenth floor of a thirty-two-story skyscraper under construction. The philosophy is that you push the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise. All you ask is that they talk to one another and take responsibility. That is what works.

The strategy is unexpectedly democratic, and it has become standard nowadays, O’Sullivan told me, even in building inspections. The inspectors do not recompute the wind-force calculations or decide whether the joints in a given building should be bolted or welded, he said. Determining whether a structure like Russia Wharf or my hospital’s new wing is built to code and fit for occupancy involves more knowledge and complexity than any one inspector could possibly have. So although inspectors do what they can to oversee a building’s construction, mostly they make certain the builders have the proper checks in place and then have them sign affidavits attesting that they themselves have ensured that the structure is up to code. Inspectors disperse the power and the responsibility.

“It makes sense,” O’Sullivan said. “The inspectors have more troubles with the safety of a two-room addition from a do-it-yourselfer than they do with projects like ours. So that’s where they focus their efforts.” Also, I suspect, at least some authorities have recognized that when they don’t let go of authority they fail.

The Book of Trees: Visualizing Branches of Knowledge

“There certainly have been many new things in the world of visualization; but unless you know its history, everything might seem novel.”

— Michael Friendly

***

It’s tempting to consider information visualization a relatively new field that rose in response to the demands of the Internet generation. “But,” argues Manuel Lima in The Book of Trees: Visualizing Branches of Knowledge, “as with any domain of knowledge, visualization is built on a prolonged succession of efforts and events.”

This book is absolutely gorgeous. I stared at it for hours.

While it’s tempting to look at the recent work, it’s critical we understand the long history. Lima’s stunning book helps, covering the fascinating 800-year history of the seemingly simple tree diagram.

Trees are some of the oldest living things in the world. The sequoias of Northern California, for example, can reach a height of nearly 400 feet and a trunk diameter of 26 feet, and can live for more than 3,500 years. “These grandiose, mesmerizing lifeforms are a remarkable example of longevity and stability and, ultimately, are the crowning embodiment of the powerful qualities humans have always associated with trees.”

Because trees are such an important part of natural life on earth, tree metaphors have become deeply embedded in the English language, as in the “root” of the problem or “branches” of knowledge. In the Renaissance, philosophers such as Francis Bacon and René Descartes used tree diagrams to describe dense classification arrangements. As we shall see, trees really became popular as a method of communicating and changing minds with Charles Darwin.


In the introduction Lima writes:

In a time when more than half of the world’s population live in cities, surrounded on a daily basis by asphalt, cement, iron, and glass, it’s hard to conceive of a time when trees were of immense and tangible significance to our existence. But for thousands and thousands of years, trees have provided us with not only shelter, protection, and food, but also seemingly limitless resources for medicine, fire, energy, weaponry, tool building, and construction. It’s only normal that human beings, observing their intricate branching schemas and the seasonal withering and revival of their foliage, would see trees as powerful images of growth, decay, and resurrection. In fact, trees have had such an immense significance to humans that there’s hardly any culture that hasn’t invested them with lofty symbolism and, in many cases, with celestial and religious power. The veneration of trees, known as dendrolatry, is tied to ideas of fertility, immortality, and rebirth and often is expressed by the axis mundi (world axis), world tree, or arbor vitae (tree of life). These motifs, common in mythology and folklore from around the globe, have held cultural and religious significance for social groups throughout history — and indeed still do.

[…]

The omnipresence of these symbols reveals an inherently human connection and fascination with trees that traverse time and space and go well beyond religious devotion. This fascination has seized philosophers, scientists, and artists, who were drawn equally by the tree’s inscrutabilities and its raw, forthright, and resilient beauty. Trees have a remarkably evocative and expressive quality that makes them conducive to all types of depiction. They are easily drawn by children and beginning painters, but they also have been the main subjects of renowned artists throughout the ages.


Our relationship with trees is symbiotic, which helps explain why they permeate our language and thought.

As our knowledge of trees has grown through this and many other scientific breakthroughs, we have realized that they have a much greater responsibility than merely providing direct subsistence for the sheltered ecosystems they support. Trees perform a critical role in moderating ground temperature and preventing soil erosion. Most important, they are known as the lungs of our planet, taking in carbon dioxide from the atmosphere and releasing oxygen. As a consequence, trees and humans are inexorably intertwined on our shared blue planet.

Our primordial, symbiotic relationship with the tree can elucidate why its branched schema has provided not only an important iconographic motif for art and religion, but also an important metaphor for knowledge-classification systems. Throughout human history the tree structure has been used to explain almost every facet of life: from consanguinity ties to cardinal virtues, systems of laws to domains of science, biological associations to database systems. It has been such a successful model for graphically displaying relationships because it pragmatically expresses the materialization of multiplicity (represented by its succession of boughs, branches, twigs, and leaves) out of unity (its central foundational trunk, which is in turn connected to a common root, source, or origin.)

While we can’t go back in time, it certainly appears that Charles Darwin changed the trajectory of the tree diagram forever when he used it to change minds about one of our most fundamental beliefs.

Darwin’s contribution to biology—and humanity—is of incalculable value. His ideas on evolution and natural selection still bear great significance in genetics, molecular biology, and many other disparate fields. However, his legacy of information mapping has not been highlighted frequently. During the twenty years that led to the 1859 publication of On the Origin of Species by Means of Natural Selection, Darwin considered various notions of how the tree could represent evolutionary relationships among species that share a common ancestor. He produced a series of drawings expanding on arboreal themes; the most famous was a rough sketch drawn in the midst of a few jotted notes in 1837. Years later, his idea would eventually materialize in the crucial diagram that he called the “tree of life” (below) and featured in the Origin of Species.

Darwin was cognizant of the significance of the tree figure as a central element in representing his theory. He took eight pages of the chapter “Natural Selection,” where the diagram is featured, to expand in considerable detail on the workings of the tree and its value in understanding the concept of common descent.

[Darwin’s “tree of life” diagram from On the Origin of Species]

A few months before the publication of his book, Darwin wrote his publisher, John Murray: “Enclosed is the Diagram which I wish engraved on Copper on folding out Plate to face latter part of volume. — It is an odd looking affair, but is indispensable to show the nature of the very complex affinities of past & present animals. …”

The illustration was clearly a “crucial manifestation of his thinking,” and of central importance to Darwin’s argument.

As it turned out it was the tree diagram, accompanied by Darwin’s detailed explanations, that truly persuaded a rather reluctant and skeptical audience to accept his groundbreaking ideas.

Coming back to the metaphor: before we go on to explain and show some of the different types of tree diagrams, Lima argues that, given how long the tree has lasted and how deeply it has penetrated our lives as a way to organize, describe, and understand, we can use it as a prism to better understand our world.

As one of the most ubiquitous and long-lasting visual metaphors, the tree is an extraordinary prism through which we can observe the evolution of human consciousness, ideology, culture, and society. From its entrenched roots in religious exegesis to its contemporary secular digital expressions, the multiplicity of mapped subjects cover almost every significant aspect of life throughout the centuries. But this dominant symbol is not just a remarkable example of human ingenuity in mapping information; it is also the result of a strong human desire for order, balance, hierarchy, structure, and unity. When we look at an early twenty-first-century sunburst diagram, it appears to be a species entirely distinct from a fifteenth-century figurative tree illustration. However, if we trace its lineage back through numerous tweaks, shifts, experiments, failures, and successes, we will soon realize there’s a defined line of descent constantly punctuated by examples of human skill and inventiveness.

Types of Tree Diagrams

Figurative Trees

Trees have been not only important religious symbols for numerous cultures through the ages, but also significant metaphors for describing and organizing human knowledge. As one of the most ubiquitous visual classification systems, the tree diagram has through time embraced the most realistic and organic traits of its real, biological counterpart, using trunks, branches, and offshoots to represent connections among different entities, normally represented by leaves, fruits, or small shrubberies.

Even though tree diagrams have lost some of their lifelike features over the years, becoming ever more stylized and nonfigurative, many of their associated labels, such as roots, branches, and leaves, are still widely used. From family ties to systems of law, biological species to online discussions, their range of subjects is as expansive as their time span.

 

[Examples: Joachim of Fiore’s tree-eagle; a tree of consanguinity; a tree of the common law]

Vertical Trees


The transition from realistic trees to more stylized, abstract constructs was a natural progression in the development of hierarchical representations, and a vertical scheme splitting from top or bottom was an obvious structural choice. … Of all visualization models, vertical trees are the ones that retain the strongest resemblance to figurative trees, due to their vertical layout and forking arrangement from a central trunk. In most cases they are inverted trees, with the root at the top, emphasizing the notion of descent and representing a more natural writing pattern from top to bottom. Although today they are largely constrained to small digital screens and displays, vertical trees in the past were often designed in larger formats such as long parchment scrolls and folding charts that could provide a great level of detail.

[Example: La Chronique Universelle]

Horizontal Trees

With the adoption of a more schematic and abstract construct, deprived of realistic arboreal features, a tree diagram could sometimes be rotated along its axis and depicted horizontally, with its ranks arranged most frequently from left to right.

Horizontal trees probably emerged as an alternative to vertical trees to address spatial constraints and layout requirements, but they also provide unique advantages. The nesting arrangement of horizontal trees resembles the grammatical construct of a sentence, echoing a natural reading pattern that anyone can relate to. This alternative scheme was often deployed on facing pages of a manuscript, with the root of the tree at the very center, creating a type of mirroring effect that is still found in many digital and interactive executions. Horizontal trees have proved highly efficient for archetypal models such as classification trees, flow charts, mind maps, dendrograms, and, notably, in the display of files on several software applications and operating systems.

[Examples: a tree of jurisprudence; web trigrams]

The Book of Trees: Visualizing Branches of Knowledge goes on to explore multi-directional, radial, hyperbolic, rectangular, Voronoi, and circular tree maps as well as sunbursts and icicle trees.

An Introduction to Complex Adaptive Systems

Let’s explore the concept of complex adaptive systems and see how this model might apply in various walks of life.

To illustrate what a complex adaptive system is, and just as importantly, what it is not, let’s take the example of a “driving system” – or as we usually refer to it, a car. (I have cribbed some parts of this example from the excellent book by John Miller and Scott Page.)

The interior of a car, at first glance, is complicated. There are seats, belts, buttons, levers, knobs, a wheel, etc. Removing the passenger seats would make this system less complicated. However, the system would remain essentially functional. Thus, we would not call the car interior complex.

The mechanical workings of a car, however, are complex. The system has interdependent components that must all simultaneously serve their function in order for the system to work. The higher order function, driving, derives from the interaction of the parts in a very specific way.

Let’s say instead of the passenger seats, we remove the timing belt. Unlike the seats, the timing belt is a necessary node for the system to function properly. Our “driving system” is now useless. The system has complexities, but they are not what we would call adaptive.

To understand complex adaptive systems, let’s put hundreds of “driving systems” on the same road, each with the goal of reaching its destination within an expected amount of time. We call this traffic. Traffic is a complex system whose participants adapt to one another’s actions. Let’s see it in action.

***

On a popular route into a major city, we observe a car in flames on the side of the road, with firefighters working to put out the fire. Naturally, cars will slow to observe the wreck. As the first cars slow, the cars behind them slow in turn. The cars behind them must slow as well. With everyone becoming increasingly agitated, we’ve got a traffic jam. The jam emerges from the interaction of the parts of the system.

With the traffic jam formed, potential entrants to the jam—let’s call them Group #2—get on their smartphones and learn that there is an accident ahead which may take hours to clear. Upon learning of the accident, they predictably begin to adapt by finding another route. Suppose there is only one alternate route into the city. What happens now? The alternate route forms a second jam! (I’m stressed out just writing about this.)

Now let’s introduce a third group of participants, which must choose between jams. Predicting the actions of this third group is very hard to do. Perhaps so many people in Group #2 have altered their route that the second jam is worse than the first, causing the majority of the third group to choose jam #1. Perhaps, anticipating that others will follow that same line of reasoning, they instead choose jam #2. Perhaps they stay home!

What we see here are emergent properties of the complex adaptive system called traffic. By the time we hit this third layer of participants, predicting the behavior of the system has become extremely difficult, if not impossible.

The key element of complex adaptive systems is the social element. The belts and pulleys inside a car do not communicate with one another and adapt their behavior to the behavior of the other parts in an endless loop. Drivers, on the other hand, do exactly that.
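A minimal agent-based sketch can make this adaptive loop visible. The simulation below is my illustration (not from Miller and Page’s book); the capacities and switching probabilities are arbitrary assumptions:

```python
import random

random.seed(42)

N_DRIVERS, CAPACITY, DAYS = 100, 50, 10

# Each driver starts with a belief about which route is faster.
routes = [random.choice(("main", "alternate")) for _ in range(N_DRIVERS)]

for day in range(DAYS):
    load = {"main": routes.count("main"), "alternate": routes.count("alternate")}
    print(f"day {day}: main={load['main']}, alternate={load['alternate']}")
    # Adaptation: drivers stuck on a jammed (over-capacity) route sometimes
    # switch -- each reacting to the aggregate behavior of all the others.
    for i, route in enumerate(routes):
        if load[route] > CAPACITY and random.random() < 0.3:
            routes[i] = "alternate" if route == "main" else "main"

# The loads never settle: every adaptation changes the very conditions the
# other drivers are adapting to, which is what makes prediction so hard.
```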

***

Where else do we see this phenomenon? The stock market is a great example. Instead of describing it myself, let’s use the words of John Maynard Keynes, who brilliantly related the nature of the market’s complex adaptive parts to that of a beauty contest in chapter 12 of The General Theory.

Or, to change the metaphor slightly, professional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view. It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.

Like traffic, the complex, adaptive nature of the market is very clear. The participants in the market are interacting with one another constantly and adapting their behavior to what they know about others’ behavior. Stock prices jiggle all day long in this fashion. Forecasting outcomes in this system is extremely challenging.
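Keynes’s contest is often formalized as the “guess 2/3 of the average” game: each player picks a number in [0, 100], and the winner is whoever comes closest to 2/3 of the average pick. A small sketch (my own illustration, not Keynes’s) shows how layered anticipation spirals:

```python
# Each extra level of reasoning anticipates the previous level's play:
# level 0 picks the midpoint, level 1 picks 2/3 of that, and so on.
guess = 50.0  # a naive (level-0) player picks the midpoint
for level in range(1, 6):
    guess *= 2 / 3  # best response to players reasoning one level below
    print(f"level {level}: guess {guess:.1f}")

# Infinitely many levels of reasoning drive the guess to 0, yet real
# players stop after a few levels -- so the winning answer depends on how
# deeply you think everyone else is thinking.
```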

To illustrate, suppose that a very skilled, influential, and perhaps lucky, market forecaster successfully calls a market crash. (There were a few in 2008, for example.) Five years later, he publicly calls for a second crash. Given his prescience in the prior crash, market participants might decide to sell their stocks rapidly, causing a crash for no other reason than the fact that it was predicted! Like traffic reports on the radio, the very act of observing and predicting has a crucial impact on the behavior of the system.

Thus, although we know that over the long term, stock prices roughly track the value of their underlying businesses, in the short run almost anything can occur due to the highly adaptive nature of market participants.

***

This understanding also helps us identify things that are not complex adaptive systems. Take the local weather. If the Doppler 3000 forecast on the local news predicts rain on Thursday, is the rain any less likely to occur? No. The act of predicting has not influenced the outcome. Although near-term weather is extremely complex, with many interacting parts leading to higher-order outcomes, it does have an element of predictability.

On the other hand, we might call the Earth’s climate partially adaptive, due to the influence of human beings. (Have the cries of global warming and predictions of its worsening not begun affecting the very behavior causing the warming?)

Thus, behavioral dynamics indicate a key difference between weather and climate, and between systems that are simply complex and those that are also adaptive. Failure to use higher-order thinking when considering outcomes in complex adaptive systems is a common cause of overconfidence in prediction making.

***

Complex Adaptive Systems are part of the Farnam Street latticework of Mental Models.

How Complex Systems Fail

A bit of a preface to this post. Please read the definition of Antifragile first. While the article below is interesting, the reader should read with a critical mind. Complexity ‘solved’ with increased complexity generally only creates a lot of hidden risks, slowness, or fragility.

What follows is a short treatise by Richard I. Cook on the nature of failure: how failure is evaluated, how failure is attributed to proximate cause, and the resulting new understanding of patient safety.

1. Complex systems are intrinsically hazardous systems

All of the interesting systems (e.g. transportation, healthcare, power generation) are inherently and unavoidably hazardous by their own nature. The frequency of hazard exposure can sometimes be changed but the processes involved in the system are themselves intrinsically and irreducibly hazardous. It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems.

2. Complex systems are heavily and successfully defended against failure

The high consequences of failure lead over time to the construction of multiple layers of defense against failure. These defenses include obvious technical components (e.g. backup systems, ‘safety’ features of equipment) and human components (e.g. training, knowledge) but also a variety of organizational, institutional, and regulatory defenses (e.g. policies and procedures, certification, work rules, team training). The effect of these measures is to provide a series of shields that normally divert operations away from accidents.

3. Catastrophe requires multiple failures – single point failures are not enough

The array of defenses works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure. Put another way, there are many more failure opportunities than overt system accidents. Most initial failure trajectories are blocked by designed system safety components. Trajectories that reach the operational level are mostly blocked, usually by practitioners.
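The arithmetic behind this point is worth seeing. Here is a toy sketch (my illustration, not Cook’s) with a hypothetical failure probability for independent layers of defense:

```python
# Toy model: a fault becomes a catastrophe only if every independent layer
# of defense fails to block it. The per-layer failure probability is a
# hypothetical number chosen for illustration.
P_LAYER_FAILS = 0.1

for n_layers in (1, 2, 3, 4):
    p_catastrophe = P_LAYER_FAILS ** n_layers
    print(f"{n_layers} layer(s): P(fault becomes catastrophe) = {p_catastrophe:.4f}")

# With several layers, single faults are almost always caught -- which is
# exactly why overt accidents require multiple small failures to line up.
```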

4. Complex systems contain changing mixtures of failures latent within them

The complexity of these systems makes it impossible for them to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations. Eradication of all latent failures is limited primarily by economic cost but also because it is difficult before the fact to see how such failures might contribute to an accident. The failures change constantly because of changing technology, work organization, and efforts to eradicate failures.

5. Complex systems run in degraded mode

A corollary to the preceding point is that complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. After-accident reviews nearly always note that the system has a history of prior ‘proto-accidents’ that nearly generated catastrophe. Arguments that these degraded conditions should have been recognized before the overt accident are usually predicated on naïve notions of system performance. System operations are dynamic, with components (organizational, human, technical) failing and being replaced continuously.

6. Catastrophe is always just around the corner

Complex systems possess potential for catastrophic failure. Human practitioners are nearly always in close physical and temporal proximity to these potential failures – disaster can occur at any time and in nearly any place. The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

7. Post-accident attribution to a ‘root cause’ is fundamentally wrong

Because overt failure requires multiple faults, there is no isolated ‘cause’ of an accident. There are multiple contributors to accidents. Each of these is necessarily insufficient in itself to create an accident. Only jointly are these causes sufficient to create an accident. Indeed, it is the linking of these causes together that creates the circumstances required for the accident. Thus, no isolation of the ‘root cause’ of an accident is possible. The evaluations based on such reasoning as ‘root cause’ do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes.

8. Hindsight biases post-accident assessments of human performance

Knowledge of the outcome makes it seem that events leading to the outcome should have appeared more salient to practitioners at the time than was actually the case. This means that ex post facto accident analysis of human performance is inaccurate. The outcome knowledge poisons the ability of after-accident observers to recreate the view of practitioners before the accident of those same factors. It seems that practitioners “should have known” that the factors would “inevitably” lead to an accident. Hindsight bias remains the primary obstacle to accident investigation, especially when expert human performance is involved.

9. Human operators have dual roles: as producers & as defenders against failure

The system practitioners operate the system in order to produce its desired product and also work to forestall accidents. This dynamic quality of system operation, the balancing of demands for production against the possibility of incipient failure is unavoidable. Outsiders rarely acknowledge the duality of this role. In non-accident filled times, the production role is emphasized. After accidents, the defense against failure role is emphasized. At either time, the outsider’s view misapprehends the operator’s constant, simultaneous engagement with both roles.

10. All practitioner actions are gambles

After accidents, the overt failure often appears to have been inevitable and the practitioner’s actions as blunders or deliberate willful disregard of certain impending failure. But all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes. The degree of uncertainty may change from moment to moment. That practitioner actions are gambles appears clear after accidents; in general, post hoc analysis regards these gambles as poor ones. But the converse, that successful outcomes are also the result of gambles, is not widely appreciated.

11. Actions at the sharp end resolve all ambiguity

Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.

12. Human practitioners are the adaptable element of complex systems

Practitioners and first line management actively adapt the system to maximize production and minimize accidents. These adaptations often occur on a moment by moment basis. Some of these adaptations include: (1) Restructuring the system in order to reduce exposure of vulnerable parts to failure. (2) Concentrating critical resources in areas of expected high demand. (3) Providing pathways for retreat or recovery from expected and unexpected faults. (4) Establishing means for early detection of changed system performance in order to allow graceful cutbacks in production or other means of increasing resiliency.

13. Human expertise in complex systems is constantly changing

Complex systems require substantial human expertise in their operation and management. This expertise changes in character as technology changes but it also changes because of the need to replace experts who leave. In every case, training and refinement of skill and expertise is one part of the function of the system itself. At any moment, therefore, a given complex system will contain practitioners and trainees with varying degrees of expertise. Critical issues related to expertise arise from (1) the need to use scarce expertise as a resource for the most difficult or demanding production needs and (2) the need to develop expertise for future use.

14. Change introduces new forms of failure

The low rate of overt accidents in reliable systems may encourage changes, especially the use of new technology, to decrease the number of low consequence but high frequency failures. These changes may actually create opportunities for new, low frequency but high consequence failures. When new technologies are used to eliminate well understood system failures or to gain high precision performance they often introduce new pathways to large scale, catastrophic failures. Not uncommonly, these new, rare catastrophes have even greater impact than those eliminated by the new technology. These new forms of failure are difficult to see before the fact; attention is paid mostly to the putative beneficial characteristics of the changes. Because these new, high consequence accidents occur at a low rate, multiple system changes may occur before an accident, making it hard to see the contribution of technology to the failure.

15. Views of ‘cause’ limit the effectiveness of defenses against future events

Post-accident remedies for “human error” are usually predicated on obstructing activities that can “cause” accidents. These end-of-the-chain measures do little to reduce the likelihood of further accidents. In fact, the likelihood of an identical accident is already extraordinarily low because the pattern of latent failures changes constantly. Instead of increasing safety, post-accident remedies usually increase the coupling and complexity of the system. This increases the potential number of latent failures and also makes the detection and blocking of accident trajectories more difficult.

16. Safety is a characteristic of systems and not of their components

Safety is an emergent property of systems; it does not reside in a person, device or department of an organization or system. Safety cannot be purchased or manufactured; it is not a feature that is separate from the other components of the system. This means that safety cannot be manipulated like a feedstock or raw material. The state of safety in any system is always dynamic; continuous systemic change insures that hazard and its management are constantly changing.

17. People continuously create safety

Failure free operations are the result of activities of people who work to keep the system within the boundaries of tolerable performance. These activities are, for the most part, part of normal operations and superficially straightforward. But because system operations are never trouble free, human practitioner adaptations to changing conditions actually create safety from moment to moment. These adaptations often amount to just the selection of a well-rehearsed routine from a store of available responses; sometimes, however, the adaptations are novel combinations or de novo creations of new approaches.

18. Failure free operations require experience with failure

Recognizing hazard and successfully manipulating system operations to remain inside the tolerable performance boundaries requires intimate contact with failure. More robust system performance is likely to arise in systems where operators can discern the “edge of the envelope”. This is where system performance begins to deteriorate, becomes difficult to predict, or cannot be readily recovered. In intrinsically hazardous systems, operators are expected to encounter and appreciate hazards in ways that lead to overall performance that is desirable. Improved safety depends on providing operators with calibrated views of the hazards. It also depends on providing calibration about how their actions move system performance towards or away from the edge of the envelope.

Here is a video of Richard Cook talking about how complex systems fail.

What Is Complexity?

While complexity seems more and more common these days, it’s important to determine when you’re operating in it. In a complex domain, little things can have a big effect and big things can have no impact. Complexity also renders some of the ways we think about problems useless, at best.

In The Black Swan: The Impact of the Highly Improbable, Nassim Taleb writes:

I will simplify here with a functional definition of complexity—among many more complete ones. A complex domain is characterized by the following: there is a great degree of interdependence between its elements, both temporal (a variable depends on its past changes), horizontal (variables depend on one another), and diagonal (variable A depends on the past history of variable B). As a result of this interdependence, mechanisms are subjected to positive, reinforcing feedback loops, which cause “fat tails.” That is, they prevent the working of the Central Limit Theorem that, as we saw in Chapter 15, establishes Mediocristan thin tails under summation and aggregation of elements and causes “convergence to the Gaussian.” In lay terms, moves are exacerbated over time instead of being dampened by counterbalancing forces. Finally, we have nonlinearities that accentuate the fat tails.

So, complexity implies Extremistan. (The opposite is not necessarily true.)
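A quick simulation (my sketch, not Taleb’s; the distributions and parameters are illustrative) shows what “preventing the working of the Central Limit Theorem” looks like in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_terms = 100_000, 30

def excess_kurtosis(x):
    """Roughly 0 for a Gaussian; large positive values signal fat tails."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3

# Mediocristan: sums of thin-tailed variables converge toward the Gaussian.
thin = rng.normal(0, 1, (n_samples, n_terms)).sum(axis=1)

# Extremistan: a Pareto with tail exponent 1.5 has infinite variance, so
# aggregation never tames it -- the fat tails survive summation.
fat = rng.pareto(1.5, (n_samples, n_terms)).sum(axis=1)

print(f"sums of Gaussian draws: excess kurtosis {excess_kurtosis(thin):.2f}")
print(f"sums of Pareto draws:   excess kurtosis {excess_kurtosis(fat):.2f}")
```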

Based on this definition, complexity highlights flaws in the way we approach problems, especially through inductive reasoning.

How do we know what we know? How do we know that what we have observed from given objects and events suffices to enable us to figure out their other properties? There are traps built into any kind of knowledge gained from observation.

Consider the turkey that is fed every day.

Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race “looking out for its best interests,” as a politician would say. On the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.

If the hand that feeds you can wring your neck, you’re a turkey.
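The turkey’s error can be written down in a few lines. This sketch (mine, not Taleb’s) uses Laplace’s rule of succession as the turkey’s naive inductive model:

```python
# The turkey's data set is an unbroken run of "fed" days. Under Laplace's
# rule of succession, its confidence in being fed tomorrow is
# (fed_days + 1) / (fed_days + 2) -- and it only ever grows.
for fed_days in (10, 100, 1000):
    confidence = (fed_days + 1) / (fed_days + 2)
    print(f"after {fed_days} fed days: P(fed tomorrow) = {confidence:.4f}")

# The day before Thanksgiving, the model is more confident than it has
# ever been -- and exactly wrong. Observation alone never reveals the
# property "will be slaughtered."
```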

If you haven’t read it already, The Black Swan: The Impact of the Highly Improbable is a must-read.