
Will your next mistake be fatal? Avoiding the chain of mistakes that can destroy your organisation

by Robert E. Mittelstaedt, Jr., Wharton School Publishing, 2005.

Abstract

Catastrophes don’t just happen. From Enron to the Space Shuttle disaster to 9/11, nearly every disaster is the result of a series of mistakes. Each one is easy to overlook and each one occurs because people refuse to believe the evidence before them. This book presents a systematic approach to managing multiple mistakes so they don’t lead to disaster and shows how to build internal systems that trigger loud and actionable alarms before "failure chains" accelerate beyond control.

(Reviewed by Kevin Barham in April 2006)

(These book reviews offer a commentary on some aspects of the contribution the authors are making to management thinking. Neither Ashridge nor the reviewers necessarily agree with the authors’ views and the authors of the books are not responsible for any errors that may have crept in.

We aim to give enough information to enable readers to decide whether a book fits their particular concerns and, if so, to buy it. There is no substitute for reading the whole book and our reviews are no replacement for this. They can give only a broad indication of the value of a book and inevitably miss much of its richness and depth of argument. Nevertheless, we aim to open a window on to some of the benefits awaiting readers of management literature.)

Introduction

Catastrophes don’t just happen. As this highly original book points out, nearly every disaster, from Enron to the Space Shuttle disaster to 9/11, is the result of a series of mistakes. Each one is easy to overlook, and each one is set in motion because people refuse to believe the evidence staring them in the face.

Robert Mittelstaedt is Dean of the Arizona State University Business School and former Director of the Wharton Innovation Centre. (He was also once a US Navy officer on nuclear submarines, so he brings some particular personal insight into how to prevent disasters.)

He identifies the common factors beneath massive failures ranging from the Titanic to Three Mile Island and "New Coke". He considers why they happened, what might have prevented them and why they spiralled out of control. Based on these lessons, he presents a systematic approach to managing multiple mistakes so they don’t lead to disaster. He examines errors in preparation, execution, strategy and organisational culture and shows how to build internal systems that trigger loud and actionable alarms before "failure chains" accelerate beyond control.

These techniques don’t just apply to high-profile disasters. They can also help you avert mistakes in operations, market analysis, strategy design and implementation, and capital investment.

Some of the important messages include:

  • Disasters are waiting to happen, but you can prevent them.
  • Your record of success can be your worst enemy – don’t get too confident or too comfortable.
  • Don’t listen to people who say "it can’t happen".
  • Identify and train for your "impossible" worst-case scenarios.

The power of M3 and the need to understand mistakes

As the author says, the problem with mistakes is that they creep up on you. Some of the companies that not so long ago were described as "visionary" and "excellent" have fallen on hard times. They made serious mistakes or a chain of mistakes that accelerated their fall from the status of highly admired organisations.

One of the problems is that disasters grow exponentially – they escalate very fast. We must learn to recognise the patterns of mistakes that precede most business disasters and take action to eliminate the threat or reduce it to something that does not require full-scale crisis management. These patterns are surprisingly similar across physical and business disasters and across industries. Although this should make it easier to learn how to deal with dangerous situations, we rarely take the time to see the parallels in what we think are unrelated experiences. If we did, it might help us to learn and change our behaviour. We can learn to see patterns that will help us anticipate, prevent, minimise or control the potentially exponential damage of a disaster.

The concept of Managing Multiple Mistakes (M3) is based on the observation that nearly all serious accidents, whether physical or business, are the result of more than one mistake. If we don’t break the chain of mistakes early, the damage and the cost will rise exponentially until the situation is irreparable (the Watergate scandal is a classic example).

The most important element in handling the unexpected in business is prior mental preparation. This may take the form of training, expert consultation and communication, or cultural values for guidance. The author quotes Louis Pasteur, the pioneering microbiologist and vaccine developer, who said: "Half of scientific discovery is by chance, but chance favours the prepared mind."

Mental preparation is critical because organisations and individuals are rarely good at learning by drawing parallels. They need to be taught to recognise types and patterns of mistakes and learn to extrapolate implications from other situations into their own.

Execution mistakes

The author draws on case studies of a number of companies, including Eastern Airlines, Coca-Cola, American Express, Air Florida and Webvan (the online grocer), to investigate why we fail to learn and why execution mistakes occur. The biggest enemy of learning is that we do not realise that an event is, or could be, more than an isolated incident. We should learn from it whether or not it takes place in our own company or industry. Training yourself and others in your organisation to look at accidents, incidents or disasters and the mistakes that led to them may be one of your most important functions as a manager.

It is easy to get distracted and there are times when you need to have a "stern" talk with yourself and ask if you are spending your time on the most important things.

You cannot afford even a "whiff" of an ethical lapse. Issues of trust are serious and strategic in today’s world, largely because there have been so many ethical lapses in recent times that many consumers don’t trust the actions of corporations and their executives any more. The slightest sense of uncertainty or lack of openness can create suspicion that mushrooms into a very costly lack of confidence.

Execution mistakes can occur through a lack of resources or knowledge. Even a good strategy will fail without adequate resources, training and discipline around implementation.

Some possible solutions:

  • Establish and enforce standard operating procedures. You need to look for everything that can be standardised, make the procedures widely known in the organisation, train for them, and hold people accountable when they don’t follow them.
  • Make responsibilities clear. You are more likely to catch and stop mistakes if you know who is responsible for what and if you know who should be providing additional oversight and advice.
  • Seek to understand assumptions. There is often a "disconnect" between the views of customers and those of insiders, or a lack of sensitivity to local cultural and political views. Failing to seek, or disregarding, advice and data on customer behaviour is a significant cause of mistakes of all types.
  • If something does not make sense, stop and figure out what is going on. In many of the cases investigated in the book, there is evidence of confusion or lack of information at some point that troubled those involved. Call a "timeout" to understand what is happening.
  • People are usually at the root of the problem. You have to look at mishaps as system problems. Multiple causes are far more likely than single causes, but multiple causes nearly always involve some set of mistakes that were directly people-related. Don’t look for simple answers that blame only one cause – this will probably produce an inadequate understanding of the problem and lead to repetition of the mistakes that caused the accident. It is critical to focus on the people-related issues of process, training and knowledge-building that will help people think their way through when technology or process fails.

Mistakes as catalysts of change

Many execution-related mistakes occur because criteria for measuring progress and performance have not been identified or communicated explicitly. This includes the need to understand not only what the measures are, but how frequently they should be checked and what the priorities and actions should be when an "out-of-specification" condition occurs.

Failure to analyse data points and ask what they mean is a major source of mistakes. We block our interest and ability to be analytic with time pressures, distractions, and organisational cultures that are not "curious" or enquiring. The question: "I see it, but what does it mean?" is the most important thing you can ask to break a mistake chain. The answer will not always be obvious but starting the inquiry process is critical.

Ignoring data is dangerous; misinterpreting customer data can be catastrophic. Intel initially ignored customer concerns about a flaw in a new chip, damaging public opinion of the firm and giving an opportunity to competitors. When Coca-Cola introduced New Coke, it did not adequately test the depth of its data with hard-core users of its product. It underestimated the cultural attachment to the traditional product and missed the real market. On the other hand, when Johnson & Johnson suspected that its drug Tylenol had been tampered with, it never lost sight of its responsibility to customers. It recalled the product at huge cost to itself and used the situation not only as an opportunity to put things right for its customers, but also as a catalyst for improving the safety of its packaging.

Across industries and situations, ineffective communication can accelerate the deterioration of a mistake chain. Conversely, effective communication is one of the keys to breaking a mistake chain. Spending time and money to build a culture that takes mistakes seriously may have the highest return on investment of anything you can do as a manager.

Look for the opportunity for an accident or even a major success to be a rallying cry for change and transformation. This is a unique opportunity that should not be ignored. It is the "silver lining" in an accident – your ability to identify some greater benefit that comes from the learning.

Strategic mistakes

The author uses a variety of examples to illustrate the difficulties involved in getting strategy "right" and in detecting mistakes in strategy.

As the history of Xerox shows, a very successful business can blind you to opportunity. This is because you make comparative judgements on the basis of current business criteria that may not last, while underestimating the potential of new businesses that have not yet grown far enough to show their full potential. Being successful also raises, often inappropriately, your confidence in your own decision-making.

Your competitors may not be who you think they are. Until recently, Xerox did not realise that the biggest threat to the copier business was not Canon or Minolta but Hewlett-Packard and Xerox’s own laser printer. On the other hand, Xerox’s indecision and its perceived mistake in failing to enter the computer business seriously may not, with hindsight, have been a mistake at all, given the level of competition in the computer market. So, paradoxically, sometimes a mistake is not a mistake. If a mistake is a wrong action, then we have to make some judgement about whether a strategic business decision is "right" or "wrong", and that may not be obvious as quickly as we think. This reinforces the importance of continuing analysis of decisions after the fact and of potential future scenarios.

As Motorola demonstrates, even companies that have successfully reinvented themselves have to work hard, perhaps even harder, to understand when it is time to do so again.

Kodak almost never made a mistake and did everything right for over 100 years. But disruptive technology in the shape of digital photography arrived. Although Kodak saw this coming, it failed to commercialise its own digital technologies aggressively enough. It focused too much on physical products in the traditional business and gave too little attention to helping consumers by expanding services. The insight here is that with disruptive technology, prices usually drop and value shifts to customers. This is a normal part of the economic cycle that you should anticipate and use proactively to advantage.

Kodak also shows that many changes happen without your permission. Learn to recognise the signs and "get on board" early. Many more industries and companies will see the value continue to shift from hardware to software and services. Even companies that thought of themselves as primarily manufacturers will move more deeply into services for growth.

Recognising strategy mistakes and competitive changes is not easy but some "strategy deficiency" clues should set off alarms for further investigation. These include:

  • Employees don’t understand the strategy.
  • "Surprise" competitors or competitive products.
  • Missed opportunities.
  • Looking outside for growth or technology because organic growth has become difficult.
  • Timing issues – "early to market, early to lose".
  • Loss of pricing power on flat or declining volume – the first stage of commoditisation.
  • Indirect loss of pricing power, indicating increased competitive pressure.
  • Feeling your competencies are undifferentiated.
  • Declining price/earnings ratio (P/E) and other financials. By the time you see this, you have serious problems.

Cultural causes of physical disasters

The author looks at the examples of the Titanic, Three Mile Island (TMI) and NASA to identify the cultural foundations of business disasters and their business implications. Assumptions, he believes, are the core of mistakes in physical systems and business. The problem is that we often make assumptions and draw what we think are conclusions on the basis of limited data. If nothing bad happens, we begin to view the assumptions as truth.

TMI was assumed to be fail-safe under all conditions. NASA assumed that since foam had been coming off the shuttle’s external fuel tank for more than 100 launches and had never caused serious damage, it could not cause serious damage. The investigation of the Columbia accident showed, however, that a piece of foam moving at over 500 mph relative to the shuttle’s wing could do enough damage to bring the shuttle down if it hit in just the right place. The problem with assumptions is that they are just that, and they have limitations we may not realise. Once we believe them, we close the doors to understanding, killing curiosity and analysis. Assumptions must be tested and retested until they are proven beyond doubt.

Push or ignore engineered safety at your peril. Engineers and designers who build systems of all types include features designed to enhance the system’s ability to perform its intended function, but they also include features to minimise the chance of damage in the event of partial or full failure. This is true both of physical systems such as airplanes and of the business world, where "systems" include complex human/software processes. Because of the complexity of the design and of the operational interface between people and machines, a business system offers the same opportunity for damage as a physical one.

Although they try to anticipate adverse conditions that threaten the success of the system, designers will not always successfully design for every circumstance. Even if they do, human intervention can often defeat the most rigorous safety design. Recognising that built-in safety features are there for a reason should prompt respect for a system’s limits. Pushing systems to their limits, or ignoring threats that test or evade safety systems, should be undertaken only with the greatest care and a full understanding of the extreme risk involved. The Titanic, Three Mile Island and Columbia all pushed engineering limits and lost.

Believe the data. The failure to believe information that is staring you in the face is one of the most common causes of catastrophe. On the Titanic, at TMI and at NASA, operators had warnings of danger and did not heed them. In each case, disaster could have been averted. The same is true of many of the business situations discussed in the book.

Train for the "can’t happen" scenario. People obviously train for known situations but you need to think about how you would handle something "they" say "can’t happen".

Open your mind past your "blinders". This is very difficult – how do you know whether your response has been conditioned by your experiences? The only defence may be to play "what if" games with yourself and your colleagues. If you find yourself in a confusing situation, the question when you have exhausted all other avenues is: "What’s the other right answer?" By discarding what you have already thought about without success, this question can open the mind to considering whether there is another answer.

Cultures that create accidents

The author identifies some common traits in the physical disasters of Titanic, TMI and Columbia that are seen over and over in business situations as well:

  • Each organisation thought – indeed was very confident – that it knew what it was doing.
  • Each was operating a system that was state-of-the-art for its time.
  • Each ended up in a disaster of its own making, created or amplified by multiple mistakes.
  • Each failed to act on numerous warning signs and so guaranteed that exponential damage would result from the mistake chain.
  • Each created a disaster so significant that it will always be remembered in history in a negative light.

In many business situations, the entire culture of a company becomes supportive of a mistake waiting to happen. Culture is powerful – what creates success may kill you. Many successful companies have cultures, typically in the early stages, that help them see things in markets that others miss, grow faster, and deal with competitive threats. The same powerful force that binds an organisation together for success can also be a catalyst or cause of failure. Myopia is a fatal business disease.

There is a fine line between world-class motivation to achieve and destructive arrogance. Enron is a classic example of a complex mistake chain fostered by the organisation’s culture of "supremacy". Enron’s management came to believe that they were unique in business and that their company had extraordinary capabilities possessed by no others. Its "we can do no wrong" culture contributed to its impressive growth and success, but somewhere along the way legitimate pride in accomplishment turned into arrogance and a feeling of invincibility. Enron’s management began to believe they could enter virtually any business and dominate it through clever trading and market-making. A policy of aggressive financial management led them to take high risks. They pushed boundaries to win – pushing their auditors to acquiesce and pushing the banks to rate their stock favourably. There was an "unbelievable" lack of oversight by the board and its committees, which asked few questions about the transactions put before them. The destruction could have been stopped earlier, in time to save something of the company, but that would have involved personal admissions of imperfection and poor judgement.

No matter what the culture or circumstances, it still generally takes multiple mistakes to cause serious damage. This means there are still chances to break the chain but it is very difficult when the automatic cultural reaction is to reject any effort to break the chain.

Mistakes as catalysts for cultural change

Culture is powerful but be sure you understand where to extend it. McDonald’s core culture, for example, is built on attention to detail, standardisation and discipline in operations and marketing. This is a tremendous strength but it cannot easily be extended to other businesses, even food businesses. Going into new businesses requires rapid learning and adaptation. It means that McDonald’s efforts in new businesses will be less efficient or that it will have to develop teams with different competencies.

Rapid culture change designed to obliterate mistakes in highly critical areas is possible, but sharp focus, extra diligence and continuous training are necessary for success. The problem is that if you do not have a history of being a high-performance organisation, it is almost impossible to invent this capability on demand. But these standards cannot be relaxed if you wish to maintain performance.

Most cultures develop by accident. The author believes that those designed to accomplish a purpose are more effective (he is thinking of the US Navy’s submarine force amongst other organisations). Successful companies design cultures through consistent priorities and behaviours. This does not mean that priorities remain unchanged; the changes take place in a considered and deliberate fashion and are communicated very well.

How entire industries lose it

Don’t ignore economics, says the author. Economic forces and laws are real, and industry changes are real. They are not as unexpected as many people think; it is usually only a matter of timing. Where an entire industry changes, the mistake chain is driven by a failure to recognise the need to make fundamental changes in the business model early enough to avoid being consumed by the natural laws of economics.

Long-cycle economics can hide mistakes temporarily. If your analysis is not deep enough, you can blame poor performance on any number of current factors. But mistake chains often start when executives make short-term decisions that seem expedient without realising that the symptoms they see may be part of a longer-run industry or country economic cycle. Understanding and evaluating the potential impact of the economic cycle provides context that makes it easier to see mistakes before they lead to economic destruction.

The world automobile industry tried to defy the laws of economics and ended up with global manufacturing capacity that exceeds demand by 20 million vehicles a year. A failure to anticipate the economic cycle of the industry has brought the Big Three (Ford, GM and the Chrysler portion of DaimlerChrysler) to a point where the profit they make comes primarily from financing and selling spare parts. They are still large companies that produce a lot of cars, but (according to the author) they do not even earn their cost of capital on a consistent basis.

Every industry has two breakeven points. The first occurs early in the evolution of the industry, when competitors are all losing money while figuring out how to make the industry work. From there, companies improve production, increase volume and produce profits. The second occurs as profits decline because of excessive competition and/or changed markets. Business people understand the first breakeven point and pour investment into companies that are trying to accelerate the time to industry profitability. But executives are loath to admit that the second, downward breakeven point exists.

In the first case, if you are reasonably competitive you can ride the growth wave and survive for a period. But in the second case, late in an industry’s life, you have to be great because only one or two firms usually survive the shakeout.

So being number one or two in an industry really does matter (as GE knows). It is a reflection of the laws of economics that will punish you if you are not a leader in your field. You need a vision that includes an understanding of the forces at work and the time you have available.

The ability to avoid the mistakes that lead to economic destruction depends on the ability to understand economic cycles, synthesise signals and sense changes while developing ideas about what might work in future. This is not so much being "visionary" as being curious and analytic about what you see around you. Leaders of companies such as GKN or Dell have not been visionary so much as observant. They see, a little earlier than others, what will later be described as obvious after the fact. They see signals and react a little quicker than others – but not too far in advance. (There are plenty of examples of people who were too far ahead of their time because the market was not ready.)

The ability to be ahead just enough to win is what the author calls "economic business visioning" (EBV). This involves an understanding of the economic cycle and what is driving cost, price, volume and the level of competition. It calls for an understanding of what customers value at different points in the cycle, of the role of productivity in the cycle, of which costs are truly fixed and variable, and of the economic factors driving R&D. It requires competitive modelling and gaming of product cycles, strategic intent and vulnerabilities. It also depends on a commitment that the objective is not to stay in the business you are in, but to be able to generate value in a business that will be in demand, where you have a plan for superiority, and where you have or can develop the competencies required to win.

EBV is not optional, says the author, but he points out that few companies factor such a process into their planning. For both the short and the long term, designing a process for analysis and spending time understanding shifts in the industry’s economics will help you avoid mistakes that are deadly.

Mistakes are not just for big companies

Startups and small businesses make mistakes in the same way that larger organisations do. However, they usually have fewer resources with which to avoid mistakes or recover from them, and less flexibility to survive them with alternative plans or products. While the patterns are similar, some mistakes or sequences are unique to small firms; these involve fundraising and the mechanics of getting things done in the early stages.

Many of the principles for dealing with larger situations apply to smaller companies, including looking for signals, developing standard procedures, evaluating scenarios and risk factors, and developing an EBV process. The author suggests that smaller companies may also find it useful to appoint an advisory board composed of individuals with a range of skills and backgrounds who can help to identify blind spots, push management to do the things they have been avoiding, and provide a wider perspective.

Making M3 part of your culture for success

Do you really want to trust "saving the business" to your last line of defence? That, says the author, is what you will do if you do not develop systems for detecting and correcting mistakes before there is any damage of consequence.

There are two ways for early intervention to stop preventable disasters:

  1. Heeding early warnings of specific dangers.
  2. Detecting dangerous patterns of action in operations and strategy.

Learning to evaluate and believe early warnings starts with the firm’s operating culture, which should be market-driven. The author recommends an "E3" approach whereby: Everyone knows the customer; Everyone improves quality; Everyone markets and sells. The ability to spot warnings also depends on:

  • Delineation of responsibilities, standard operating procedures, and metrics.
  • Training, simulation and designed safety.
  • Believing your indications and acting on them. Never ignore customer data. Communicate information whether it is good or bad.
  • Mental preparation and scenario analysis.
  • The last line of defence – this is the place where top and senior management have done all they can do and those in the front line are on their own. This is where all the training, preparation, procedures and cultural development become the context for an individual to make decisions in a confusing situation. It might mean approaching a much more senior individual to let them know you think the organisation is making a big mistake. You should train the members of your team to understand they have a responsibility to exercise this option when the situation calls for it.

Learning to detect dangerous patterns and strategic blunders depends firstly on developing a strategic culture that assumes disruptive change will occur in future. Guidelines here include:

  • Be suspicious of success.
  • Track and understand micro-trends.
  • Understand competitors in detail.
  • Analyse current and past decisions and decision-making processes to learn from past mistakes.
  • Test and retest assumptions.
  • Train people to speak up, identify what is happening and seek help or advice.

The last line of defence is the ability of those on the front line to save the organisation. Have you, therefore, built a questioning culture that will, when confronted with a tough situation, somehow find a way to save the day?

The author offers a final insight. If you don’t make any mistakes, he says, you may not be taking enough risk. But failing to take risks at all may be the most dangerous mistake a business can make. A firm can become so risk averse that it dooms itself to failure because innovation by others renders it irrelevant. This does not mean you should seek mistakes for the sake of making them, but the lack of mistakes does not always correlate with the highest level of success. Risk-reward is an economic principle that underlies the whole business system.

Perhaps the very first step you should take to avoid the potential disaster lurking around the corner is to read this book.
