Tag: Complexity

5 Design-Informed Approaches to Good Learning

John Maeda offers five design-informed approaches for learning.

  1. BASICS are the beginning.
  2. REPEAT yourself often.
  3. AVOID creating desperation.
  4. INSPIRE with examples.
  5. NEVER forget to repeat yourself.

John Maeda is a graphic designer and computer scientist. His book, The Laws of Simplicity, proposes ten laws for simplifying complex systems in business and life. Think of it as simplicity 101.

Maeda has some interesting things to say on learning:

Learning occurs best when there is a desire to attain specific knowledge. Sometimes that need is edification, which is itself a noble goal. Although in the majority of cases, having some kind of palpable reward, whether a letter grade or a candy bar, is necessary to motivate most people. Whether there is an intrinsic motivation like pride or an extrinsic motivation like a free cruise to the Caribbean waiting at the very end, the journey one must take to reap the reward is better when made tolerable.

Maeda believes that the best motivator to learn is giving students a seemingly insurmountable challenge.

1. Basics are the beginning

The first step in conveying the BASICS is to assume the position of the first-time learner. As the expert, playing this role is not impossible, but it is best ceded to a focus group or any other gathering of external participants. Observing what fails to make sense to the non-expert, and then following that trail successively to the very end of the knowledge chain is the critical path to success. Gathering these truths is worthwhile but can be time consuming or else done poorly.

This echoes the first habit of effective thinking, understand deeply.

Be brutally honest about what you know and don’t know. Then see what’s missing, identify the gaps, and fill them in.

The easiest way to learn the basics is to teach them to yourself. Maeda tells this story to illustrate the point:

A few years ago, I visited the master of Swiss typographic design, Wolfgang Weingart, in Maine to give a lecture for his then regular summer course. I marveled at Weingart’s ability to give the exact same introductory lecture each year. I thought to myself, “Doesn’t he get bored?” Saying the same thing over and over had no value in my mind, and I honestly began to think less of the Master. Yet it was upon maybe the third visit that I realized how although Weingart was saying the exact same thing, he was saying it simpler each time he said it. Through focusing on the basics of basics, he was able to reduce everything that he knew to the concentrated essence of what he wished to convey. His unique example rekindled my excitement for teaching.

A quick way to figure out what basics you’re missing is the Feynman Technique.

2. Repeat yourself often. Repeat yourself often.

REPEAT-ing yourself can be embarrassing, especially if you are self-conscious, which most everyone is. But there's no need to feel ashamed, because repetition works and everyone does it, including the US President and other leaders.

3. Avoid creating desperation

A gentle, inspired start is the best way to draw students, or even a new customer, into the immersive process of learning.

4. Inspire with examples

INSPIRATION is the ultimate catalyst for learning: internal motivation trumps external reward. Strong belief in someone, or else some greater power like God, helps to fuel belief in yourself and gives you direction.

5. NEVER forget to repeat yourself

forget to repeat yourself. Never forget to repeat yourself. Never …

Eric Drexler on taking action in the face of limited knowledge


Science pursues answers to questions, but not always the questions that engineering must ask.

Eric Drexler, the founding father of nanotechnology, comments on the central differences between how science and engineering approach solutions in a world of limited knowledge.

Drexler’s explanation, found in his insightful book Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, discusses the ignorance that pervades everything. How, then, should we respond? Engineers apply a margin of safety.

Drexler writes:

When faced with imprecise knowledge, a scientist will be inclined to improve it, yet an engineer will routinely accept it. Might predictions be wrong by as much as 10 percent, and for poorly understood reasons? The reasons may pose a difficult scientific puzzle, yet an engineer might see no problem at all. Add a 50 percent margin of safety, and move on.

Safety margins are standard parts of design, and imprecise knowledge is but one of many reasons.
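Drexler’s rule of thumb can be made concrete with a small sketch. The numbers and the helper function here are illustrative assumptions, not figures from the book: take the worst case the imprecise model allows, then add an engineering margin on top of it.

```python
# Illustrative only: the estimate, error, and margin values below are
# assumptions chosen to mirror Drexler's "add 50 percent and move on."

def required_capacity(estimated_load, model_error, safety_margin):
    """Size a component for the worst case the imprecise model allows,
    then add an engineering margin of safety on top of that."""
    worst_case = estimated_load * (1 + model_error)
    return worst_case * (1 + safety_margin)

# A model predicts a 1000 kg load but may be wrong by 10 percent, for
# reasons nobody fully understands. The engineer doesn't need to resolve
# the scientific puzzle; the margin absorbs the ignorance.
capacity = required_capacity(1000, model_error=0.10, safety_margin=0.50)
print(f"design capacity: {capacity:.0f} kg")  # prints 1650 kg
```

The point of the sketch is that the scientific question ("why is the model off by 10 percent?") never has to be answered for the design to be safe.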

Engineers and scientists ask different questions:

… Accuracy can only be judged with respect to a purpose and engineers often can choose to ask questions for which models give good-enough answers.

The moral of the story: Beware of mistaking the precise knowledge that scientists naturally seek for the reliable knowledge that engineers actually need.


Nature presents puzzles that thwart human understanding.

Some of this is necessary fallibility—some things we simply cannot understand or predict. Just because we want to understand something doesn’t mean it’s within our capacity to do so.

Other problems represent limited understanding and predictability — there are things we simply cannot do yet, for a variety of reasons.

… Predicting the weather, predicting the folding of membrane proteins, predicting how particular molecules will fit together to form a crystal— all of these problems are long-standing areas of research that have achieved substantial but only partial success. In each of these cases, the unpredictable objects of study result from a spontaneous process— evolution, crystallization, atmospheric dynamics— and none has the essential features of engineering design.

What leads to system-level predictability?

— Well-understood parts with predictable local interactions, whether predictability stems from calculation or testing
— Design margins and controlled system dynamics to limit the effects of imprecision and variable conditions
— Modular organization, to facilitate calculation and testing and to insulate subsystems from one another and the external

… When judging engineering concepts, beware of assuming that familiar concerns will cause problems in systems designed to avoid them.

Seeking Unique Answers vs. Seeking Multiple Options

Expanding the range of possibilities plays opposite roles in inquiry and design.

If elephantologists have three viable hypotheses about an animal’s ancestry, at least two hypotheses must be wrong. Discovering yet another possible line of descent creates more uncertainty, not less— now three must be wrong. In science, alternatives represent ignorance.

If automobile engineers have three viable designs for a car’s suspension, all three designs will presumably work. Finding yet another design reduces overall risk and increases the likelihood that at least one of the designs will be excellent. In engineering, alternatives represent options. Not knowing which scientific hypothesis is true isn’t at all like having a choice of engineering solutions. Once again, what may seem like similar questions in science and engineering are more nearly opposite.

Knowledge of options is sometimes mistaken for ignorance of facts.

Remarkably, in engineering, even scientific uncertainty can contribute to knowledge, because uncertainty about scientific facts can suggest engineering options.

Simple, Specific Theories vs. Complex, Flexible Designs

Engineers value what scientists don’t: flexibility.

Science likewise has no use for a theory that can be adjusted to fit arbitrary data, because a theory that fits anything forbids nothing, which is to say that it makes no predictions at all. In developing designs, by contrast, engineers prize flexibility — a design that can be adjusted to fit more requirements can solve more problems. The components of the Saturn V vehicle fit together because the design of each component could be adjusted to fit its role.

In science, a theory should be easy to state and within reach of an individual’s understanding. In engineering, however, a fully detailed design might fill a truck if printed out on paper.

This is why engineers must sometimes design, analyze, and judge concepts while working with descriptions that take masses of detail for granted. A million parameters may be left unspecified, but these parameters represent adjustable engineering options, not scientific uncertainty; they represent, not a uselessly bloated and flexible theory, but a stage in a process that routinely culminates in a fully specified product.

Beware of judging designs as if they were theories in science. An esthetic that demands uniqueness and simplicity is simply misplaced.

Curiosity-Driven Investigation vs. Goal-Oriented Development

Organizational structure differs between scientific and engineering pursuits. The coordination of work isn’t interchangeable.

In science, independent exploration by groups with diverse ideas leads to discovery, while in systems engineering, independent work would lead to nothing of use, because building a tightly integrated system requires tight coordination. Small, independent teams can design simple devices, but never a higher-order system like a passenger jet.

In inquiry, investigator-led, curiosity-driven research is essential and productive. If the goal is to engineer complex products, however, even the most brilliant independent work will reliably produce no results.

The moral of the story: Beware of approaching engineering as if it were science, because this mistake has opportunity costs that reduce the value of science itself.

In closing, Drexler comments on applying the engineering perspective.

Drawing on established knowledge to expand human capabilities, by contrast, requires an intellectual discipline that, in its fullest, high-level form, differs from science in almost every respect.

Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization is worth reading in its entirety.

The Laws of Simplicity


“Simplicity is about subtracting the obvious,
and adding the meaningful.”


Simplicity = Sanity

Technology has made our lives more full, yet at the same time we’ve become uncomfortably “full.”

Ten Laws

1 REDUCE The simplest way to achieve simplicity is through thoughtful reduction.
2 ORGANIZE Organization makes a system of many appear fewer.
3 TIME Savings in time feel like simplicity.
4 LEARN Knowledge makes everything simpler.
5 DIFFERENCES Simplicity and complexity need each other.
6 CONTEXT What lies in the periphery of simplicity is definitely not peripheral.
7 EMOTION More emotions are better than less.
8 TRUST In simplicity we trust.
9 FAILURE Some things can never be made simple.
10 THE ONE Simplicity is about subtracting the obvious, and adding the meaningful.

Three Keys

In addition to the ten Laws, Maeda offers three Keys to achieving simplicity in the technology domain.

1 AWAY More appears like less by simply moving it far, far away.
2 OPEN Openness simplifies complexity.
3 POWER Use less, gain more.

Let’s take a look at a few of these.

The simplest way to achieve simplicity.

The simplest way to achieve simplicity is through thoughtful reduction. When in doubt, just remove. But be careful of what you remove. … When it is possible to reduce a system’s functionality without significant penalty, true simplification is realized.

What is simplicity about?

Simplicity is about the unexpected pleasure derived from what is likely to be insignificant and would otherwise go unnoticed.

The relationship between space and clutter

At first, a larger home lowers the clutter to space ratio. But ultimately, the greater space enables more clutter.

Small changes in organization create big differences in a design.

Squint at the world.

The best designers in the world all squint when they look at something. They squint to see the forest from the trees, to find the right balance. Squint at the world. You will see more, by seeing less.

Time Savings

When forced to wait, life seems unnecessarily complex. Savings in time feel like simplicity. … A shot from the doctor hurts less when it happens quickly, and even less when we know that the shot will save our lives.

Knowledge makes everything simpler.

This is true for any object, no matter how difficult. The problem with taking time to learn a task is that you often feel you are wasting time, a violation of the third Law. We are well aware of the dive-in-head-first approach: “I don’t need the instructions, let me just do it.” But in fact this method often takes longer than following the directions in the manual.

Good Design

The best designers marry function with form to create intuitive experiences that we understand immediately, no lessons (or cursing) needed. Good design relies to some extent on the ability to instill a sense of instant familiarity.

Need to Know vs. Nice to Know

Difficult tasks seem easier when they are “need to know” rather than “nice to know.” A course in history, mathematics, or chemistry is nice to know for a teenager, but completing driver’s education satisfies a fundamental need for autonomy.

Simplicity and complexity need each other.

The more complexity there is in the market, the more that something simpler stands out.

It’s best to preserve emptiness because nothing is an important something.

The opportunity lost by increasing the amount of blank space is gained back with enhanced attention on what remains. More white space means that less information is presented. In turn, proportionately more attention shall be paid to that which is made less available. When there is less, we appreciate everything much more.

Some things can never be made simple. Complexity can be beautiful.

Concentrate on the deep beauty of a flower. Notice the many thin, delicate strands that emanate from the center and the sublime gradations of hue that occur even in the simplest white blossom. Complexity can be beautiful. At the same time, the beautiful simplicity of planting a seed and just adding water lies at even the most complex flower’s beginning.

Remember, “Technology and life only become complex if you let it be so.”

Learn More

Near the end of The Laws of Simplicity, Maeda offers “a few books that inspired each of the sections that I owe the debt of inspiration to mention here.”

The Tipping Point, by Malcolm Gladwell. The need for simplicity has reached the tipping point.

The Paradox of Choice, by Barry Schwartz. Provides a grounding in why few can be better than many.

Notes on the Synthesis of Form, by Christopher Alexander (1964). Ideas about organization as originated in architecture.

Toyota Production System, by Ohno Taiichi (1988). Dry treatise on optimizing production from the Toyota Master.

Motivation and Personality, by Abraham Maslow (1970). What really motivates people?

The Innovator’s Solution, by Clay Christensen (2003). Simple explanation of changeover effects led by technology.

Six Memos for the Next Millennium, by Italo Calvino (1993). Brilliantly beautiful thoughts on simply everything.

Emotional Design, by Donald Norman (2003). Usability guru makes a case for the useless.

The Long Tail, by Chris Anderson (2006). Adding up all the little things really matters.

Technics and Civilization, by Lewis Mumford (1963). Prescient work by a man in touch with his time.

The Wisdom of Crowds, by James Surowiecki (2004). Supports the group outweighing the individual.

Cradle to Cradle, by W. McDonough and M. Braungart (2002). We’re running out of power and something has to be done.

Disabling Professions, by Ivan Illich (1978). Reminds you that you’re becoming increasingly useless.

Why Catastrophes Happen

From Ubiquity: Why Catastrophes Happen

There are many subtleties and twists in the story … but the basic message, roughly speaking, is simple: The peculiar and exceptionally unstable organization of the critical state does indeed seem to be ubiquitous in our world. Researchers in the past few years have found its mathematical fingerprints in the workings of all the upheavals I’ve mentioned so far [earthquakes, eco-disasters, market crashes], as well as in the spreading of epidemics, the flaring of traffic jams, the patterns by which instructions trickle down from managers to workers in the office, and in many other things. At the heart of our story, then, lies the discovery that networks of things of all kinds – atoms, molecules, species, people, and even ideas – have a marked tendency to organize themselves along similar lines. On the basis of this insight, scientists are finally beginning to fathom what lies behind tumultuous events of all sorts, and to see patterns at work where they have never seen them before.


In this simplified setting of the sandpile, the power law also points to something else: the surprising conclusion that even the greatest of events have no special or exceptional causes. After all, every avalanche large or small starts out the same way, when a single grain falls and makes the pile just slightly too steep at one point. What makes one avalanche much larger than another has nothing to do with its original cause, and nothing to do with some special situation in the pile just before it starts. Rather, it has to do with the perpetually unstable organization of the critical state, which makes it always possible for the next grain to trigger an avalanche of any size.

To this, John Mauldin adds:

Now, let’s couple this idea with a few other concepts. First, Hyman Minsky (who should have been a Nobel laureate) points out that stability leads to instability. The more comfortable we get with a given condition or trend, the longer it will persist and then when the trend fails, the more dramatic the correction. The problem with long term macroeconomic stability is that it tends to produce unstable financial arrangements. If we believe that tomorrow and next year will be the same as last week and last year, we are more willing to add debt or postpone savings in favor of current consumption. Thus, says Minsky, the longer the period of stability, the higher the potential risk for even greater instability when market participants must change their behavior.

Relating this to our sandpile, the longer that a critical state builds up in an economy, or in other words, the more “fingers of instability” that are allowed to develop a connection to other fingers of instability, the greater the potential for a serious “avalanche.”

Couple this with Didier Sornette’s Why Stock Markets Crash (incidentally, a book Nassim Taleb recommends: “I learned more from this book than any other on disequilibrium.”)

[T]he specific manner by which prices collapsed is not the most important problem: a crash occurs because the market has entered an unstable phase and any small disturbance or process may have triggered the instability. Think of a ruler held up vertically on your finger: this very unstable position will lead eventually to its collapse, as a result of a small (or an absence of adequate) motion of your hand or due to any tiny whiff of air. The collapse is fundamentally due to the unstable position; the instantaneous cause of the collapse is secondary.

Mauldin continues:

When things are unstable, it isn’t the last grain of sand that causes the pile to collapse or the slight breeze that causes the ruler on your fingertip to fall. Those are the “proximate” causes. They’re the closest reasons at hand for the collapse. The real reason, though, is the “remote” cause, the farthest reason. The farthest reason is the underlying instability of the system itself.

A fundamentally unstable system is exactly what we saw in the recent credit crisis. Consumers all through the world’s largest economies borrowed money for all sorts of things, because times were good. Home prices would always go up and the stock market was back to its old trick of making 15% a year. And borrowing money was relatively cheap. You could get 2% short-term loans on homes, which seemingly rose in value 15% a year, so why not buy now and sell a few years down the road?

Greed took over. Those risky loans were sold to investors by the tens and hundreds of billions of dollars, all over the world. And as with all debt sandpiles, the fault lines started to appear. Maybe it was that one loan in Las Vegas that was the critical piece of sand; we don’t know, but the avalanche was triggered.

The Future Is Not Like The Past

From Everything Is Obvious: How Common Sense Fails Us:

The ubiquity of complex systems in the social world is important because it severely restricts the kinds of predictions we can make. In simple systems, that is, it is possible to predict with high probability what will actually happen—for example when Halley’s Comet will next return or what orbit a particular satellite will enter. For complex systems, by contrast, the best that we can hope for is to correctly predict the probability that something will happen. At first glance, these two exercises sound similar, but they’re fundamentally different. To see how, imagine that you’re calling the toss of a coin. Because it’s a random event, the best you can do is predict that it will come up heads, on average, half of the time. A rule that says “over the long run, 50% of coin tosses will be heads, and 50% will be tails” is, in fact, perfectly accurate in the sense that heads and tails do, on average, show up exactly half the time. But even knowing this rule, we still can’t predict the outcome of a single coin toss any more than 50% of the time, no matter what strategy we adopt. Complex systems are not really random in the same way that a coin toss is random, but in practice it’s extremely difficult to tell the difference.
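Watts’s coin-toss distinction can be checked in a few lines. This is a sketch, not code from the book, and the two prediction strategies are illustrative: a rule can be perfectly accurate about long-run frequencies while no strategy beats chance on any single toss.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
tosses = [random.random() < 0.5 for _ in range(100_000)]

# The rule "50% of tosses will be heads" is accurate in aggregate...
heads_rate = sum(tosses) / len(tosses)

# ...but per-toss prediction strategies all hover at 50% accuracy.
always_heads = sum(tosses) / len(tosses)
follow_last = sum(prev == cur for prev, cur in zip(tosses, tosses[1:])) / (len(tosses) - 1)

print(f"long-run heads frequency:           {heads_rate:.3f}")
print(f"'always call heads' accuracy:       {always_heads:.3f}")
print(f"'call the previous toss' accuracy:  {follow_last:.3f}")
```

All three numbers come out near 0.5: predicting the probability is easy, predicting the outcome is impossible, and for complex social systems the former is usually the best we can do.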

Still curious? Read How To Predict Everything next.

“Much complexity has been deliberately created”

More on the theme of intentional complexity.

Many of the products we buy today are, by their nature, complicated. If you want to buy a television or a car or a computer, you are faced with a bewildering range of types, sizes and optional features.

Such complexity means some consumers will inevitably pay more than they need because they make mistakes or do not take the time to understand the options. Some complexity is unavoidable, but much of the complexity consumers experience is not.

For its consumers, energy is the simplest of products. All electricity or gas on the public network is the same. The only material difference between providers is the length of time it takes them to respond to complaints about your bill. The only persuasive claim a salesman can make is that his electricity or gas is cheaper. The time he spends trying to persuade you of this is a clue that it probably isn’t. But to establish the truth may require a computer.

Much complexity has been deliberately created, to encourage consumers to pay more than they need, or expected. Or to reduce the likelihood that they will switch to another supplier. …


The problem is less the weakness of competition than its effectiveness. Whatever one supplier does, others are forced to follow if they are to protect their markets and their returns.