Six Sigma captured the imagination of CEOs around the world. There have been many claims of its successes, yet these have been at least partly attributed to the Hawthorne effect, which implies that if enough money is thrown at any methodology, at least some short-term results can reasonably be expected. However, a growing number of articles, such as those in Fortune, The Wall Street Journal, and Fast Company, suggest that Six Sigma companies are failing. The market share performance of many companies—including Ford, General Electric, Motorola, Delphi, Home Depot, 3M, Eastman Kodak, Xerox, and Larson—has fallen dramatically, and 91 percent of Six Sigma companies have trailed the S&P 500 since adopting Six Sigma.
Individual examples include the Detroit News’s account of quality problems with the new Ford Edge, despite Ford Motor Co. having trained 10,000 Six Sigma Black Belts and spent hundreds of millions of dollars on the methodology. Quality problems caused Ford to recruit quality expert Kathi Hanley away from Toyota Motor Corp. Hanley pored over the Edge and found more than 70 significant issues and hundreds of minor concerns.
The Ritz-Carlton is the only service company to have won the Malcolm Baldrige Award twice, and its success is based on total quality management (TQM). This may be compared to Home Depot, a Six Sigma company with falling profits and what has been described in an investigative report by Los Angeles NBC affiliate, NBC4, as the “Worst Service Ever.” What has gone wrong with Six Sigma and what lessons can be learned from the past, particularly from the great W. Edwards Deming?
The appeal of Six Sigma to management was a change from a nebulous “continuous improvement” to a specific target of 3.4 defects per million opportunities (dpmo). The number 3.4 provided a carrot, a stick, and a means of comparing processes and competing companies. The appeal was irresistible to thousands of companies, which leaped onto the Six Sigma bandwagon unquestioningly, despite a focus on defect reduction having been criticized by Deming.
As few as 3.4 defects in a million operations appeared to be the ultimate in quality. Many thought it unattainable, and for service industries it was. The very best that humans can do is about 5 errors in 1,000 operations—in other words, about 5,000 dpmo. Service industries account for 78 percent of U.S. economic output and a similar percentage of employees. Hence, for the vast majority of employees, 3.4 dpmo is meaningless, and for nonservice industries there are much more serious problems.
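The size of that gap can be checked with ordinary normal-tail arithmetic. A minimal sketch (my illustration, not from the cited error-rate studies), converting a defect rate into the unshifted sigma level it corresponds to:

```python
from statistics import NormalDist

def sigma_level(dpmo: float) -> float:
    """Unshifted, one-sided sigma level corresponding to a defects-per-million rate."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000)

# Best-case human performance: ~5 errors per 1,000 operations = 5,000 dpmo
print(round(sigma_level(5_000), 2))  # about 2.58 sigma
# The Six Sigma target of 3.4 dpmo
print(round(sigma_level(3.4), 2))    # about 4.5 sigma
```

On this arithmetic, even the best human performance sits almost two full sigma levels short of the 3.4-dpmo target, which is why the target is unreachable for labor-intensive service work.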
A defect occurs when a product doesn’t meet its specification. The most obvious way to reduce defects is to change the specification: a broader specification means fewer defects. This may sound silly, but it’s exactly what was advocated by the founder of Six Sigma, Bill Smith. Deming’s approach to quality was to reduce variation. Smith suggested that changing the specification “influences the quality of product as much as the control of process variation does.” Six Sigma’s basis in the product specification is perhaps its most fundamental flaw.
The “3.4” is curious. Why not zero or some other number? One of my previous articles, “Sick Sigma,” gives a detailed description of the origins of 3.4. Bill Smith obtained the number in error and used it in a comment about uncontrolled processes. Mikel Harry then offered a “proof” that the number represents a drift that all processes experience (see figure 1). Harry based his proof on errors in the height of stacks of discs, which of course bear no relation whatsoever to manufacturing processes. He later said that the shift was empirical and not needed.
Figure 1: The consequences of Six Sigma’s +/–1.5 sigma shift that Mikel Harry claims for every process—a process wildly out of control.
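The arithmetic behind the shift is easy to check for yourself. A minimal sketch (my illustration), assuming a normal process with specification limits 6 sigma from target, with and without Harry's claimed 1.5-sigma mean shift:

```python
from math import erfc, sqrt

def tail_dpmo(z: float) -> float:
    """Defects per million beyond z standard deviations (one-sided normal tail)."""
    return 0.5 * erfc(z / sqrt(2)) * 1_000_000

# Process centered on target: the nearest spec limit is 6 sigma away
print(tail_dpmo(6.0))        # about 0.001 dpmo, i.e., roughly 2 per billion counting both tails
# Mean shifted 1.5 sigma toward one limit: that limit is now only 4.5 sigma away
print(tail_dpmo(6.0 - 1.5))  # about 3.4 dpmo -- the famous Six Sigma number
```

In other words, “3.4 dpmo at six sigma” is really the 4.5-sigma tail of a process assumed to wander 1.5 sigma off target; without that assumed shift, a true 6-sigma process would produce only about two defects per billion.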
In 2003, Harry produced a new proof, on a completely different basis, and this time called it a “correction” (see figure 2). Harry’s partner Reigle Stewart changed the proof yet again, to a “dynamic mean offset.” All of these proofs are easily shown to be invalid.
Figure 2: Mikel Harry’s 2003 attempt to justify his new +/–1.5 sigma correction. He arbitrarily chooses the pink line at 99 percent to get 1.5. By choosing other plots and P values, a wide range of other corrections is possible. In reality, control charts aren’t probability plots, and no correction is needed.
While the fundamentals of Six Sigma may be flawed, consultants will claim that it’s more than just a methodology focused on defects and more than the nonsensical 3.4 dpmo. Six Sigma experts claim that Six Sigma is “80 percent identical to TQM.” What are the differences?
In part 2, I will examine these differences in detail, and we will look at the lessons that Six Sigma companies can learn from the teachings of Deming.
1. Mike Micklewright, “Lean Six Sigma—An Oxymoron?”
2. Betsy Morris, “New Rule: Look Out, Not In,” Fortune, July 11, 2006
3. Karen Richardson, “The ‘Six Sigma’ Factor for Home Depot,” The Wall Street Journal
4. Martin Kihn, “Six Sigma Stigma,” Fast Company, September 2005
5. W. Edwards Deming, The New Economics, 2nd edition, p. 31
6. Error rate studies: http://panko.shidler.hawaii.edu/HumanErr/index.htm
7. Bill Smith, “Making War on Defects,” IEEE Spectrum, 1993
8. A. Burns, “Sick Sigma,” Quality Digest, April 2006: http://qualitydigest.com/IQedit/QDarticle_text.lasso?articleid=8819
9. A. Burns, “Sick Sigma Part 2: Tail Wagging Its Dog,” Quality Digest, February 2007: http://qualitydigest.com/IQedit/QDarticle_text.lasso?articleid=11905
10. Mikel Harry: http://www.mikeljharry.com/story.php?cid=6
11. Daniel Goleman, “Working With Emotional Intelligence”: http://www.danielgoleman.info/blog/emotional-intelligence/
12. Alan Ramias, “The Mist of Six Sigma,” BPTrends, October 2005
13. Peter F. Drucker, The Practice of Management, 1954
14. W. Edwards Deming, Out of the Crisis, 1982
15. The Nielson Group: http://www.nielsongroup.com/articles/articles_climateformotivation.shtml
16. Donald Wheeler, Advanced Topics in Statistical Process Control, SPC Press, 1995
17. Donald Wheeler, Normality and the Process Behaviour Chart, SPC Press, 2006