



Published: 04/18/2006
In 1633, opposition to the common viewpoint could mean death. This was the case with Galileo when he proposed that the Earth revolved around the sun. He was tried for heresy. Of course things are different today. People who question dogma are no longer burnt at the stake. Instead, they’re branded as having suspect motives, as reactionaries or simply as nutcases.
Consider global warming. The common viewpoint is that man-made carbon dioxide is to blame, but the Earth has been through ice ages and periods of global warming for millions of years. As recently as 1,000 years ago, the Earth was a degree warmer in the “Medieval Warm Period” and the Vikings could grow crops in Greenland. No one questions how this could happen so many years before our recent fuel consumption excesses. No one questions why man-made carbon dioxide would have any effect on global warming when it constitutes less than 1 percent of greenhouse gases (the major greenhouse gas is water vapor). No one questions the recent Antarctic ice cores from Dome Concordia, with ice up to 700,000 years old, which show increases in atmospheric carbon dioxide concentration occurring about 1,000 years after global temperature rises, thus suggesting that high carbon dioxide levels are a result of global warming, not a cause.
In recent years, the common viewpoint has been that Six Sigma is the ultimate quality improvement methodology, and it has been unquestioningly accepted by major companies around the world. Sadly, this has happened in a field that actually teaches us to ask questions, collect data and verify hypotheses. It’s time to look at Six Sigma with critical eyes.
“Look at the results,” you might say. Thousands of companies have saved thousands of dollars with Six Sigma programs. It’s equally true that placebos have cured thousands of sick people. Could Six Sigma be a placebo?
Six Sigma is different from programs that have gone before it, such as quality circles, TQM, quality improvement and continuous improvement. Past programs have typically been driven by a quality manager with no line authority and little if any budget. Even “Vice President of Quality” has often been a title lacking in real power. Six Sigma is different in that it’s been driven from the top, with senior executives such as Jack Welch playing a central role. Consequently, expenditure on quality has been at unprecedented levels, with companies like General Electric spending more than half a billion dollars per annum on Six Sigma. Any program that’s driven with such dedication and force is likely to produce results.
Why has Six Sigma received such support from senior management? Unlike previous approaches, Six Sigma is a proprietary methodology driven by a private company. Clever marketing has produced great support.
Rather than accepting the corporate world revolving around the Six Sigma sun, we’ve got to risk accusations of heresy and start asking questions. First, the Six Sigma methodology says we should count defects, and if we have fewer than 3.4 defects per million opportunities, we have a six sigma process. The number 3.4 comes from assuming processes are normally distributed and then allowing a shift of +/- 1.5 sigma to account for the drift that processes inevitably experience over time. In other words, suppose a process has a target value of 10.0, a sigma of 1.0 and control limits that work out to be 13.0 and 7.0. Process drift implies that the mean will move to 11.5 (or 8.5), with the control limits moving to 14.5 and 8.5 (or 11.5 and 5.5). This is terrible news for customers expecting a product that stays on target: they’re being told that the Six Sigma process will produce exceptional quality, and at the same time that they can expect large shifts in the process mean.
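As a check on that arithmetic, here’s a minimal sketch (Python, standard library only) that computes the one-sided normal tail area at 6 sigma for a centered process and at 6 − 1.5 = 4.5 sigma for a shifted one. The second figure is where the famous 3.4 defects per million opportunities comes from; without the assumed shift, a true six sigma process would produce only about 0.001 defects per million.

```python
from math import erfc, sqrt

def dpmo(z):
    """Defects per million opportunities for a one-sided normal tail beyond z sigma."""
    return 0.5 * erfc(z / sqrt(2)) * 1_000_000

print(f"Centered process, 6-sigma spec limit: {dpmo(6.0):.4f} DPMO")  # about 0.001
print(f"Mean shifted by 1.5 sigma (z = 4.5):  {dpmo(4.5):.1f} DPMO")  # about 3.4
```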
You may wish to vary the numbers above to suit your particular product, but the result is the same. Drifting process averages imply poor quality. If I ask for a product with a certain target value, I want that target value to remain. I don’t want to be told by a supplier that it’s “inevitable” that the mean will drift considerably. How did this extraordinary situation arise? Where did the 1.5 sigma drifting means originate?
The +/-1.5 shift was introduced by Mikel Harry. Where did he get it? Harry refers to a paper written in 1975 by Evans, “Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts.” The paper is about tolerancing, that is, how the overall error in an assembly is affected by the errors in its components. Evans in turn refers to a paper by Bender from 1962, “Benderizing Tolerances—A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups.” Bender looked at the classical situation of a stack of disks and how the overall error in the size of the stack relates to the errors in the individual disks. Based on probability, approximations and experience, he suggested inflating the usual root-sum-of-squares estimate of the stack tolerance by a factor of 1.5.
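The sketch below (Python; the disk tolerances are hypothetical, chosen only for illustration) compares a worst-case additive stack, the root-sum-of-squares stack and Bender’s inflated version, to show the kind of problem the 1.5 was originally invented for.

```python
from math import sqrt

# Hypothetical +/- tolerances for five disks in a stack (illustrative values only).
tolerances = [0.05, 0.03, 0.04, 0.02, 0.05]

worst_case = sum(tolerances)                 # simple additive stack-up
rss = sqrt(sum(t * t for t in tolerances))   # root-sum-of-squares stack-up
benderized = 1.5 * rss                       # Bender's allowance for shifts and drifts

print(f"Worst case:  +/-{worst_case:.3f}")
print(f"RSS:         +/-{rss:.3f}")
print(f"Benderized:  +/-{benderized:.3f}")
```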
How is any of this related to monitoring the myriad processes that people are concerned about? Very little. Harry then takes things a step further. Imagine a process where a subgroup of five samples is taken every half hour and plotted on a control chart. Harry treated the “instantaneous” initial five samples as “short term” (Harry’s n = 5) and the samples collected throughout the day as “long term” (Harry’s g = 50 subgroups). Because of random variation in the first five points, the mean of the initial subgroup differs from the overall mean. Harry derived a relationship between short-term and long-term capability and used it to produce a capability shift, or “Z shift,” of 1.5. Over time, the original meanings of the instantaneous “short term” and the 50-subgroup “long term” have been stretched into the notion of long-term drifting means.
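To put a rough number on that sampling effect, here’s a toy simulation (Python, standard library only). It isn’t Harry’s derivation, merely an illustration of the random variation described above: for a stable, undrifting process, the mean of the initial five-point subgroup typically sits only a fraction of a sigma away from the long-term mean.

```python
import random
import statistics

# Toy simulation: a stable normal process (mean 0, sigma 1) sampled as
# g = 50 subgroups of n = 5. How far does the "short-term" mean of the
# first subgroup sit from the "long-term" mean of the whole day?
random.seed(1)
n, g = 5, 50
gaps = []
for _ in range(10_000):
    data = [random.gauss(0.0, 1.0) for _ in range(n * g)]
    short_term_mean = statistics.mean(data[:n])  # first subgroup of five
    long_term_mean = statistics.mean(data)       # all 250 observations
    gaps.append(short_term_mean - long_term_mean)

# The spread of the gap is roughly sigma / sqrt(5), i.e., about 0.45 sigma:
# ordinary sampling variation, not evidence of a sustained 1.5-sigma drift.
print(f"Standard deviation of the gap: {statistics.pstdev(gaps):.2f} sigma")
```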
Harry has clung tenaciously to the “1.5,” but over the years its derivation has been modified. In a recent note, Harry writes, “We employed the value of 1.5 since no other empirical information was available at the time of reporting.” In other words, 1.5 has now become an empirical rather than theoretical value. A further softening from Harry: “… the 1.5 constant would not be needed as an approximation.”
The above shakes the very foundation of Six Sigma, but there’s a more fundamental problem: Six Sigma is a specification-driven methodology. It’s based on counting defects, and defects are defined by the specification. It’s easy for consultants to claim they’ll halve defects; they simply change the specification. Specifications tell us nothing about what the process is doing. Specifications are the voice of the customer, not of the process. If we’re to improve processes, we must listen to the process, and the voice of the process is the control limit. Control limits have been, and will always be, based on three sigma.
There’s a perception in many quarters that six sigma is better than three sigma. Three-sigma control limits have a basis in theory, but principally they’re economic limits. They aren’t probability limits. They give signals as to when process variation is likely to be caused by other than chance and when it’s appropriate to take action. Three-sigma control limits apply no matter what kind of process distribution exists. Six sigma bears no relation to the behavior of processes.
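For readers who haven’t computed them before, here’s a minimal sketch of how three-sigma limits are obtained for an X-bar chart from subgroup data, using the standard Shewhart constant A2 = 0.577 for subgroups of five. The data are simulated purely for illustration; the point is that the limits come from the behavior of the process itself, not from any specification.

```python
import random
import statistics

# Simulate 25 subgroups of five measurements from a process centered at 10.0.
random.seed(2)
subgroups = [[random.gauss(10.0, 1.0) for _ in range(5)] for _ in range(25)]

xbars = [statistics.mean(s) for s in subgroups]   # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges

x_double_bar = statistics.mean(xbars)             # grand average
r_bar = statistics.mean(ranges)                   # average range

A2 = 0.577                                        # Shewhart constant for n = 5
ucl = x_double_bar + A2 * r_bar                   # upper three-sigma limit for averages
lcl = x_double_bar - A2 * r_bar                   # lower three-sigma limit for averages

print(f"Grand average: {x_double_bar:.2f}")
print(f"Three-sigma limits for subgroup averages: {lcl:.2f} to {ucl:.2f}")
```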
Next, heretics should ask, “Why so many tools?” There’s no doubt that teaching 30 or 40 tools helps justify the cost and time of training a Black Belt, but is it necessary for improving quality and cutting costs? Do more tools make a better tradesperson? Professor Ishikawa stated in 1989: “The seven quality tools, if used skilfully, will enable 95 percent of workplace problems to be solved.” Nothing has changed. If there were more focus on the primary tools, time could be spent teaching the correct way to estimate sigma and how to deal with real-world processes, most of which don’t follow the Six Sigma assumption of normality. More time could also be spent helping people understand the meaning of three-sigma control limits.
We should also ask, “Why do we need Black Belts?” Do companies need elitism of this kind? Deming taught us in point 8 of his 14 points to “drive out fear” and in point 9 to “break down barriers between departments.” Reducing elitism leads to better communication and allows people from different areas to work together to solve problems in the workplace.
This article isn’t the first attempt to highlight the failings of Six Sigma. In 1990, a group of 15 IBM statisticians and quality managers published a position paper on Six Sigma. They expressed “concern with Motorola’s misuse of statistical terms, the thin theoretical and practical evidence for the 1.5 sigma shift, and the dubious means of counting defects and opportunities for defects.” The paper was regarded as “too disruptive” to IBM’s progress in defect reduction, which management wanted to attribute to Six Sigma policies. Furthermore, a paper by Donald Wheeler, “The Six Sigma Zone,” suggests that “the best that can be said is that these numbers (defects per million opportunities) represent a triumph of computation over common sense.”
Perhaps the above gives some clue as to why Toyota continues to make the number-one quality-rated car in the United States (J.D. Power 2005). Unlike Ford and General Motors, which are strong followers of Six Sigma, Toyota does NOT use Six Sigma. GM had a loss of $8.6 billion for 2005. Ford lost $4 billion in the first nine months of 2005 and plans to eliminate 30,000 jobs and close 14 plants across the United States over the next six years.
At the risk of being burnt at the stake, people should question what’s put before them rather than blindly accepting common viewpoints at face value.