
The Heart of Humanity

“What distinguishes the risks I’m interested in from mere bravado is that they are taken in the interest of what one is committed to…”


Hubert Dreyfus is the preeminent expert on Heidegger, so much so that the various copies of Being and Time in his office are held together with rubber bands. In 1965 he took on the entire computer science department at MIT and, in so doing, explained the key differences between humans and computers.

The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems explains the problem:

Dreyfus claimed that symbolic representational artificial intelligence would never succeed because the algorithmic design— so skilled at following rule sets—had no ability to infer or intuit. To use anthropologist Clifford Geertz’s phrase, artificial intelligence was forever in the realm of thin description, completely incapable of understanding the “thick description” of our humanity. Today, such a claim seems commonplace, but at the time, Dreyfus was considered a maverick. He had never programmed a computer in his life, but his training in phenomenology and his deep knowledge of philosophy convinced him that our greatest asset as humans had nothing to do with our ability to follow rules. Humans are human because they have a perspective: they care about things. One might call it our ability to give a damn. And it is this quality that allows us to determine what matters and where we stand. A computer can’t do that.

The ability to distinguish between what is relevant and what is not is the key; this is perspective.

“What is relevant right now is that I am sitting here talking to you in this room,” Dreyfus told us. “What is not relevant is that the room may have ten billion specks of dust on the floor and two screws in the left corner and tiles that weigh a half pound each.”

The ability to have a perspective—to respond to what matters and what is meaningful— is at the heart of humanity and, by extension, at the heart of all successful businesses. A perspective implies that you have prioritized certain things— relevant things—and by consequence let some things go. This risk—letting profitable opportunities go for the sake of others— is the essence of all value propositions. We can’t solve all the problems for all the consumers all the time. Nor can we design products that meet all the needs of all the people everywhere. What we can do is risk responding to what calls us. We can find ourselves committed to a perspective. We can build a successful business that will sustain us.

Dreyfus summed it up by saying, “What distinguishes the risks I’m interested in from mere bravado is that they are taken in the interest of what one is committed to, what they have defined themselves in terms of, and what makes meaningful differences in their lives. This is the kind of risk that is a necessary step in becoming a master at anything.”

The Default-Thinking Method of Problem Solving

You’ve been here before. It’s Monday morning and you walk into the office only to have your boss call an urgent meeting to “streamline processes.” You haven’t thought about this enough to have an opinion, but you go anyway.

You know how to deal with this. You’ve done it before. You turn on your default brain and start solving the problem. You build a hypothesis to determine the problem, find some data to analyze, and presto, out comes some efficiency.

Most of the time this works well enough, but not always. Sometimes—more often than we’d like to admit—things change: markets shift and consumers behave in unpredictable ways. Now we’re rudderless.

In the wonderful book The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems, the authors write:

We are forever in the midst of change, but not all of it is seismic. It’s vital for a business to understand the difference between the uncertainties present on an average day and the uncertainties of a major cultural shift. … Business issues can be categorized along a problem scale within three levels of complexity. This framework is useful for distinguishing very complex problems from those that are actually manageable.

The Three Levels of Business Problems

1. A clear-enough future with a relatively predictable business environment. You know what the problem is, and you can apply a proven algorithm to fix it. “If I invest $1 in media spending for advertising, I know that I will get something like $1.50 back because of market stimulation.” “The industry has average admin costs of 8 percent of total revenue. Mine are 10 percent. We should cut that back.”

2. Alternative futures with a set of options available. You have a feel for the problem and might have seen something like it before. It makes sense to test your hunch as a hypothesis. For example, “Our sales numbers are down even when we invest in more salespeople, but we have seen the same pattern in the European Union and China. We might be hiring too many new salespeople too quickly and expecting them to deliver the same payback that the existing salespeople are delivering.”

3. High level of uncertainty, with no understanding of the problem. You simply don’t know what the problem is, let alone the solution. You can see that something is wrong, but have no clear idea about what to do. For example, “Our media division is losing business to internet start-ups,” “We are investing more in customer service, but our customers are becoming increasingly dissatisfied with us,” and “We are designing products that seem right for the marketplace, but the marketplace isn’t interested.”

Most of our problems tend to fall into levels 1 or 2. Level 3 is different. Uncertainty, remember, happens when we fail to know the range of possible outcomes (and, correspondingly, their probabilities). Those are the really messy problems.

Solving the problems of levels 1 and 2 is generally much easier. We use default thinking.

The default problem-solving model has its roots in what can be called instrumental rationalism. At the heart of the model is the belief that business problems can be solved through objective and scientific analysis and that evidence and facts should prevail over opinions and preferences. To get to the right answer, so the thinking goes, you should adhere to the following principles of problem solving:

1. All business uncertainties are defined as problems. Something in the past caused the problem, and the facts should be analyzed to clarify what the problem is and how to solve it.

2. Problems are deconstructed into quantifiable and formal problem statements (issues). For example, “Why is our profitability falling?”

3. Each problem is atomized into the smallest possible bits that can be analyzed separately— for example, breaking down the causes of profitability into logical issues. This analysis would include “issue trees” for all the hundreds of potential levers for either decreasing costs or growing revenue (customer segments, markets, market share, price, sales channels, operations, new business development, etc.).

4. A list of hypotheses to explain the cause of the problem is generated. For example, “We can increase profitability by lowering the cost of our operations.”

5. Data is gathered and processed to test each hypothesis— all possible stones are turned and no data source is left untouched.

6. Induction and deduction are used to test hypotheses, clarify the problem, and find the areas of intervention with the highest impact, or what is commonly called “bang for the buck.”

7. A well-organized structure of the analysis is deployed to build a logical and fact-based argument of what should be done. The structure is built like a pyramid that develops the supporting facts, some subconclusions, and an overall conclusion and then ends with a prioritized list of interventions to which the company should adhere.

8. All proposed actions are described as manageable work streams or must-win battles for which a responsible committee, or person, is assigned.

9. Performance metrics and a proposed time frame with follow-up monitoring are put in place for each committee to complete the task.

10. When all work streams have been completed, the problem is solved.

When done correctly by competent people, this can be a thing of beauty. This is, in part, why we hire consultants like me, McKinsey, or Bain. We believe they can solve any problem. The idea that management is a type of science with a repeatable formula in the face of any problem is not a new idea. “It can be traced back to the nineteenth century, when positivism, the prevalent philosophy of the day, argued that you could objectively measure reality.” The founding father of the idea of management science, if there was one, was Frederick Winslow Taylor.

Taylor left a prestigious education at Harvard to work at steel companies throughout Pennsylvania. Whereas most manufacturing and factory plants had cobbled together their organization through rules of thumb and common sense, Taylor was the quintessential positivist, seeking scientifically validated measurements, or properties. He followed workers, clicking his stopwatch every time they started and stopped, measuring the time it took to complete each discrete action of hauling their large iron ore loads. Through his enormously successful tenure at steel companies, he extracted generalized principles of management that he used to create the world’s first business case study. It wasn’t long before a partnership between Harvard’s School of Applied Science and its brand-new business school came calling. Might Taylor bring together his experience into something the school could teach its young students about productivity? Taylorism, based on the following premise, was born:

To work according to scientific laws, the management must take over and perform much of the work which is now left to the men; almost every act of the workman should be preceded by one or more preparatory acts of the management which enable him to do his work better and quicker than he otherwise could.

Today’s problems seem infinitely more complex than counting iron ore hauls, and yet we still attack them with the same general approach of Taylorism. This is what most MBA programs, including mine, teach: people work harder with the right incentives, optimize and perfect workflow, analyze every movement looking for efficiencies, remove discretion where possible because it creates variance, and so on. Of course, we’ve evolved Taylorism; today we call it “lean” and “six sigma” and whatever else.

For most of us, default thinking is so familiar to us— the very air we breathe— that we are no longer able to explain it or even to see it. For that reason, if we really want to understand why we continue to get people wrong, we need to unpack the fundamental assumptions that make up the culture of most of our days.

Is this really how we approach problems? What are the assumptions we’re making when we take this approach?

Assumption 1: People Are Rational and Fully Informed

One of the unintentional consequences of solving problems by testing logical hypotheses is that you are forced to assume that people are rational decision makers: aware of their needs, fully informed of all their choices, and capable of making the best choice. The reason is simple: it is very difficult to test a hypothesis about things that you can’t measure objectively. It’s even harder to test something that is deeply personal, cannot be decoded into explicit descriptions, and requires a lot of interpretation. Think about the question “Are you a good parent?” or “Do you have good taste?”

A simple answer misses most of what matters about parenting and good taste. To deal with this problem, companies base their problem solving on what can objectively be described, quantified, and analyzed without too much interpretation.

So we default to measuring perceptions and desires; more specifically, we end up with people’s perceptions of reality. There is nothing wrong with this, but it is limited, and we should be aware of its limitations. These are not the only two aspects of humanity that matter. And even if they were, the way default thinking solves problems rarely offers us any understanding of how they work. We find some spurious relationship and assume causation when, in reality, it’s merely a correlation. When it changes, we have no idea why. We’re more complex than that.

Most recent studies of how people buy reveal us to be far more chaotic creatures. We rarely know what we want. We almost never fully grasp the market and, most important, we almost always buy something at a different price than we thought we would. Even studies of people with written shopping lists (milk, eggs, apples, etc.) reveal that shoppers stray far from their original intentions once they reach the grocery store.

We study intentions because they are relatively easy to study. But as Dr. House says, “everybody lies.”

People think they cook a lot, but they really don’t. It’s not that they want to lie to other people; they are simply lying to themselves.

There is often a wide gulf between what people say and what people do.

It’s not that people don’t care about anything. They just don’t care as much as most companies assume that they do. And most often, people couldn’t care less. When they buy one kind of chocolate bar rather than another, it is rarely because they have a strong brand preference. More often than not, it is because the chocolate was closer on the counter, it had a color that fit the mood, or it simply came packaged as a “two for one.” The good news for companies is that we buy a lot of stuff. The bad news is that we don’t always know why.

Assumption 2: Tomorrow Will Look Like Today

A good example of this attitude can be found in a 2006 article in the McKinsey Quarterly. In identifying trends that will shape the business environment, the article says that management itself will shift from an art to a science:

Long gone is the day of the “gut instinct” management style. Today’s business leaders are adopting algorithmic decision-making techniques and using highly sophisticated software to run their organizations. Scientific management is moving from a skill that creates competitive advantage to an ante that gives companies the right to play the game.

We’re bombarded with the word “science.”

When thought leaders use the word science to describe a business discipline like marketing, retail design, negotiation skill, or strategy, we are led to believe that these disciplines can be predicated on scientific truths. Does the science of shopping have the same universal laws as Darwin’s theory of natural selection?

This is part of the reason we trick ourselves.

Rarely do we have to ask, “Where does the hypothesis come from?” But by assuming that the hypothesis is based on some kind of universal law, we fool ourselves into believing that the assumptions of the current moment will also hold true in the future. In these situations, the idea that management is a kind of natural science blinds us rather than enlightens us.

Assumption 3: Hypotheses Are Objective and Unbiased

Here is a great example:

In the toy industry, the dominating idea is that children have a short attention span and need toys that stimulate their desire for instant gratification. A toy, it is assumed, must grab the attention of the child in the store, and he or she should not need any skills to play with it. Another assumption is that physical toys are losing ground to digital toys because the former are too tedious and not stimulating enough.

In reality, when you study children— and if you read the majority of academic literature about children—you will probably reach the opposite conclusion: children are highly motivated by play experiences that require skill and mastery and that can give them a sense of hierarchy and accomplishment. Digital play is gaining in popularity precisely because it requires a very sophisticated skill set; it can be played for thousands of hours and it gives the players clear feedback with levels and hierarchies.

Over time, companies and people create “commonsense” ideas about the world and how it works. We take things as given and rarely challenge them.

The French anthropologist Pierre Bourdieu coined the term habitus to describe the somehow hidden but always present dispositions that shape our perceptions, thoughts, and actions. In his view, many things that we regard as common sense are in fact shaped by the social context we are in. Over time we learn what is normal and taken as a given through our social interaction with the world— our family, our society, our friends, our work— and our perceptions become a kind of automatic understanding of the world. This understanding enables us to act normally without really thinking about it. Over time, companies similarly create commonsense ideas about the world. Certain things are simply taken as a given, no longer contested: for example, the idea that designers and engineers will never see eye-to-eye, or that open offices provide more opportunities for collaboration.

This is one reason consultants can be effective: they come in with a different understanding and offer opinions, intentionally or not, that challenge some of these commonsensical views.

In terms of default thinking:

A company might think that it has created an objective set of possible hypotheses to test. But in reality each hypothesis is always based on something. Very often, that thing is a product of culture, not of science. And once our assumptions are firmly rooted in our cultural understanding, they have a way of becoming ever more entrenched.

Then confirmation bias kicks in. We look for opinions, ideas, and facts that support our beliefs.

In the end, our hypotheses are “almost never based on objective truth.” How could they be? The point, of course, is to know the limitations of the tools we’re working with and hope that this awareness allows us to make better decisions by using better tools.

In Leo Tolstoy’s nonfiction magnum opus The Kingdom of God Is Within You, he writes:

“The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him.”

If you can’t question assumptions at your company, you probably can’t question anything.

Assumption 4: Numbers Are the Only Truth

“Not everything that can be counted counts, and not everything that counts can be counted.” — Albert Einstein

The heart of default problem solving is quantitative analysis.

It has become so dominant that companies tend to forget that the world consists not only of quantities but also of qualities. Roger Martin, the dean of the Rotman School of Management, argues that companies will simply lack the ability to find the full potential of growth opportunities if they focus only on quantitative models: “The greatest weakness of the quantitative approach is that it decontextualizes human behavior, removing an event from its real-world setting and ignoring the effects of variables not included in the model.”

Default thinking catalogs the world into properties: how big is the market, how many people will buy our products, how many people know our brand, which category is growing fastest, which geography is the most profitable, which customers have the highest loyalty, and which technologies have the highest adoption.

Yes, all of those have a numbers side, but they also have a qualitative side that can shed light on things. If you know that a certain percentage of your customers are happy with their interactions with your company, that’s different from knowing what the experience of interacting with your company is like. Both are needed to inform decisions.

Numbers are great for covering your ass, so they tend to trump everything else. Numbers, however, limit ideas and solutions to only one right answer.

For obvious reasons, the past does not include data on things that haven’t happened or ideas that have not yet been imagined. As a result, data-driven analysis of the future tends to underestimate or even ignore past events or conditions that can’t be measured, while overestimating those that can. Nowhere is this more visible than in business case studies.

“In our view,” the authors write, “the quantitative obsession leads to a sorely diminished approach to future planning. It tends to be conservative rather than creative because it implicitly favors what can be measured over what cannot.”

Assumption 5: Language Needs to Be Dehumanizing

Business and management science has become a world in itself, and the language of business has become increasingly technical, introverted, and coded. You don’t fire people anymore; you “right-size the organization.” You don’t do the easiest things first; you “pick the low-hanging fruit.” You don’t look at where you sell your products; you “evaluate your channel mix.” You don’t promote people; you “leverage your human resources.” You don’t give people a bonus check; you “incentivize.” You don’t do stuff; you “execute.” You “synergize, optimize, leverage, simplify, utilize, transform, enhance, and reengineer.” You avoid “boiling the ocean, missing the paradigm shift, having tunnel vision, and increasing complexity.” You make sure that “resources are allocated to leverage synergies across organizational boundaries and with a customer-centric mind-set that can secure a premium position while targeting white spots in the blue ocean to ensure that there is bang for the buck.” It can become almost poetic.

Talk about jargon.

The German philosopher Jürgen Habermas has developed an extensive analysis of what happens when technical language outstrips the language of everyday life. He argues that the change from a normal, everyday language to a technical, specific language signals a shift in power. When technical language conquers the simple language of the everyday, it is a sign that the system is gaining ground and everyday human reality, what he calls the lifeworld, is losing ground. He goes so far as to call this shift a colonization of the lifeworld: everyday life is colonized by forces of bureaucratization and rationalization that it cannot defend itself against. Such a shift leads to a far more systematic, rule-based, and technical idea of the world. It widens the gap between who we really are and the systems that we have become.

* * *

Of course, default thinking doesn’t always work. You know you’ve stepped out of default-thinking territory when leaders say, “think outside the box.” Problems arise when you try to solve the third type of problem (where there is a high level of uncertainty) with the same thinking you use to fix problems in levels 1 and 2.

If you enjoyed this post, you’d love the book The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems.

Sensemaking as a Complement to Default Thinking

Most of the time we devise strategies in the default mode of problem-solving, prioritizing maximum growth and profit through rational and logical analysis. But we already know that rational and logical analysis doesn’t always result in the best decisions.

In The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems Christian Madsbjerg and Mikkel Rasmussen write:

The ideal is to turn strategy work into a rigorous discipline with the use of deductive logic, a well-structured hypothesis, and a thorough collection of evidence and data. Such problem solving has dominated most research and teaching in business schools over the last decades and has formed the guiding principles of many global management consultancies. Slowly but steadily, this mind-set has gained dominance in business culture over the last thirty years. Today it is the unspoken default tool for solving all problems.

This mindset is almost scientific in its quest for precision.

… learn from past examples to create a hypothesis you can test with numbers. As it uses inductive reasoning for its foundation, it is enormously successful at analyzing information extrapolated from a known set of data from the past. Default thinking helps us create efficiencies, optimize resources, balance product portfolios, increase productivity, invest in markets with the shortest and biggest payback, cut operational complexity, and generally get more bang for the buck. In short, it works extraordinarily well when the business challenge demands an increase in the productivity of a system.

But this method often falls short. And one area where it falls short is people’s behavior. “When it comes to cultural shifts, the use of a hypothesis based on past examples will give us a false sense of confidence, sending us astray into unknown waters with the wrong map,” the authors write.

Certain problems benefit from a linear and rational approach, while other, less straightforward challenges—navigating in a fog—benefit from the problem solving utilized in the human sciences like philosophy, history, the arts, and anthropology. We call this problem-solving method sensemaking.

Sensemaking is really about finding how things are experienced through culture.

The hard sciences involving mathematics and universal laws tell us the way things are and tend to take the main spotlight when we discuss our understanding of the world. This tendency is so common, we often disregard the wide range of sciences that are used to shed light on other phenomena, or the way things are experienced in culture. If default thinking shows us what exists in the foreground (e.g., “we are losing our market share in competitive athletic apparel”), the human sciences investigate the invisible background— the layered nuance behind what we perceive (e.g., “well-being, not competition, is the main motivating factor for many people participating in sports”).

How we experience the world may be as important as, or more important than, the hard, objective facts about the world. This is especially true for the specific set of problems where past data or scenarios no longer seem relevant.

Default thinking and sensemaking are complementary tools.

How default thinking and sensemaking complement one another. Source: The Moment of Clarity

But we tend to see leadership more through the default-thinking lens, which makes perfect sense. Who doesn’t want to be hypothesis-driven, quantitative, and linear in their approach to solving problems? Most of these things are also visible, which is an added benefit. But if this is the only tool in our toolbox, we’re going to fall short.

The difference between decision makers and sensemakers. Source: The Moment of Clarity

It was perfectly rational, yet wrong, for the steel companies in The Innovator’s Dilemma to cede low-margin market share to the mini-mills. This is the decision we’d all make if we looked at it through the lens of finance and default thinking. In my interview with Forbes last year, I elaborated on this concept with the example of a textile company. It’s only when you approach problems through different lenses that you can solve them.

The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems goes on to explain how the humanities help solve some of our toughest business problems.

Genevieve Bell on the Value of Humanities in an Executive Role

Genevieve Bell is one of the most powerful and influential social scientists in the tech industry. Speaking to Christian Madsbjerg in an excerpt from The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems, she says something quite profound about executive management and cognitive dissonance.

I’ve been really struck by what it takes to be an executive at a company like Intel. Increasingly, much like in my own training in the social sciences, it requires holding these multiple competing realities in one’s head at the same time. An executive has to be able to hold the reality of what the company needs to be now with what it needs to be ten years from now, and these concepts are often at odds with one another. You also have to hold the realities of different markets in your head that have completely different formulations of success. In the US, you have to think about miles per gallon and environmentally sensitive processes, and in China you just have to go really fast. For Intel executives from a culture of engineering, this is really hard. They are taught to think that dissonance should be resolved in the design: “There is one answer, and we have to get it right.”

And a few sentences later, she explained how we lose track of what’s important to our customer. “Moore’s Law stated that semiconductors were going to get smaller,” Bell explained, “but it didn’t tell us anything about what people were going to do with them or why a consumer should be interested. It started to become increasingly clear to all of us that consumers just didn’t care about the same things that we cared about. They weren’t necessarily engaged in our narrative.”

While not a silver bullet, one thing I see more and more through my engagements with companies is the value of the humanities. The humanities offer a different perspective on the same problems, often with different (and better) results; they bring a better sense of people and their behaviors. When you’re solving difficult problems, you want cognitive diversity.