Six Sigma: Fad or Fundamental?

The much-debated program can be central to quality improvement.

by John S. Ramberg

 

The creators of the term "Six Sigma" were a bold bunch: They actually included a Greek symbol, σ, in the name of their plan to achieve high-quality processes, products and services. But what is Six Sigma? In the narrow statistical sense, Six Sigma is a quality objective that specifies the variability required of a process, relative to the product's specifications, so that product quality and reliability meet or exceed today's demanding customer requirements.

 The originators of Six Sigma went far beyond this narrow definition, which was actually derived from their own reliability assessment of product performance. They defined a program to accomplish Six Sigma that included leadership, infrastructure, and tools and methods. Thus they have followed the advice of Joseph M. Juran and Peter Senge to make quality part of the corporate business plan. According to General Electric (GE)--an early adopter of the program--Six Sigma is a "disciplined methodology of defining, measuring, analyzing, improving and controlling the quality in every one of the company's products, processes and transactions--with the ultimate goal of virtually eliminating all defects." Six Sigma advocates continue to develop this methodology to improve organizational performance.

Tools and methods

Although Six Sigma's tools and methods include many of the statistical tools used in earlier quality movements, here they are applied in a systematic, project-oriented fashion through the define, measure, analyze, improve and control (DMAIC) cycle. In addition, advances that facilitate the application of these tools have accumulated over time. Empowering all employees with Kaoru Ishikawa's seven quality tools creates a workforce capable of solving many problems, as total quality management demonstrated. Combining these tools with other statistical methods embodied in the scientific method, and with modern statistical software offering graphical output, reduces the drudgery and helps statistically oriented personnel ply their trade. It frees problem solvers to lead their teams to improve quality, reduce cost and shorten time to delivery simultaneously.

 Each organization should tailor its Six Sigma program, with the help of specialists, to meet its particular needs. Some organizations may wish to include other topics such as lean thinking. However, the training materials should not be simply a collection of defunct programs.

The statistical methods employed in Six Sigma differ substantially from those taught in run-of-the-mill engineering or statistics programs. Six Sigma emphasizes observational methods and experimentation in the scientific context--for example, two-level factorial experiments are standard, and graphical methods are emphasized for analyzing them. Experimentation is not simply analysis of variance. While the latter helps statisticians understand experimental results, for engineers it can be an impediment to planning and carrying out experiments.
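To make the factorial idea concrete, here is a minimal Python sketch of estimating main effects from a two-level (2x2) factorial experiment. The factor roles and response values are hypothetical illustrations, not data from any study cited in this article.

```python
# A minimal sketch of analyzing a two-level (2x2) factorial experiment.
# Factor names and yield values are hypothetical illustrations.
import itertools

# Coded levels: -1 (low) and +1 (high) for two factors,
# e.g., temperature and pressure.
runs = list(itertools.product([-1, 1], repeat=2))

# Hypothetical response (yield) observed at each of the four combinations.
yields = {(-1, -1): 54.0, (1, -1): 61.0, (-1, 1): 57.0, (1, 1): 68.0}

def main_effect(factor_index):
    """Average response at the high level minus average at the low level."""
    high = [yields[r] for r in runs if r[factor_index] == 1]
    low = [yields[r] for r in runs if r[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

print("factor 1 (e.g., temperature) effect:", main_effect(0))  # 9.0
print("factor 2 (e.g., pressure) effect:", main_effect(1))     # 5.0
```

A plot of these effects (rather than a formal analysis of variance) is exactly the kind of graphical summary the Six Sigma training emphasizes.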

Figure 1: Six-Sigma Process With +1.5σ Shift vs. Centered Three-Sigma Process

How Six Sigma works

Let's take a closer look at the difference between three-sigma and six-sigma processes under the assumption of normality, which is critical to the calculations that follow. (The assumption of stability is also critical. Without it, one cannot predict the operation of the process or state probabilities.) Figure 1 illustrates a three-sigma (centered) process and a six-sigma (+1.5σ-shifted) process. Both process distributions appear to be entirely within the product specifications.

We define LSL as the lower specification limit and USL as the upper specification limit. The target, T, equals (USL + LSL)/2. For this example, we chose T = 0, LSL = –6 and USL = +6 for ease of explanation.

We have traditionally operated at the three-sigma level. Given the specifications, the process variation (σ) must be small enough that the base of the normal distribution fits within the specifications when the mean equals the target. That is, the span from μ – 3σ to μ + 3σ, a length of 6σ, must be less than USL – LSL. Hence σ = (USL – LSL)/6 is the largest variation allowable. The figure illustrates this situation with a normal distribution where the mean, μ, equals 0, and the variation, σ, equals 2. A process operating in this mode will produce 2,700 parts per million (PPM) defective, with 1,350 PPM beyond each specification limit. And should the process mean shift to μ = T ± 1.5σ (that is, to ±3), it would produce 66,807 PPM defective.

Following the same reasoning as for the three-sigma process, a process variation of σ = 1 or smaller is required to achieve the six-sigma objective [σ = (USL – LSL)/12]. If the six-sigma process mean were centered on the target value, the process would produce defectives at a rate of two parts per billion (PPB), one PPB beyond each specification limit. Should the process mean shift by 1.5 sigma, the defective rate would increase to 3.4 PPM. The figure illustrates the latter situation, with a normal distribution where the mean, μ, equals 1.5, and the variation, σ, equals 1.
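As a check on these figures, here is a minimal Python sketch (using scipy, an assumption of this illustration) that computes the quoted defect rates directly from the normal model, given a stable process with LSL = –6 and USL = +6.

```python
# A sketch verifying the defect rates quoted above, assuming a stable,
# normally distributed process with LSL = -6 and USL = +6.
from scipy.stats import norm

LSL, USL = -6.0, 6.0

def ppm_defective(mu, sigma):
    """Parts per million falling outside [LSL, USL] for an N(mu, sigma) process."""
    p_out = norm.cdf(LSL, mu, sigma) + norm.sf(USL, mu, sigma)
    return p_out * 1e6

print(ppm_defective(0.0, 2.0))  # centered three-sigma: ~2,700 PPM
print(ppm_defective(3.0, 2.0))  # three-sigma shifted 1.5 sigma: ~66,811 PPM
                                # (the quoted 66,807 counts only the nearer tail)
print(ppm_defective(0.0, 1.0))  # centered six-sigma: ~0.002 PPM (2 PPB)
print(ppm_defective(1.5, 1.0))  # six-sigma shifted 1.5 sigma: ~3.4 PPM
```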

Note that every process has a variation that can be estimated through a process capability study. It isn't determined by the specifications of the product being produced. Here we are determining how precise a process must be to accomplish a three- or six-sigma objective for the product. P.R. Tadikamalla showed that the 3.4 PPM rate could be obtained in other ways depending upon the process spread and the process shift.
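A capability study estimates this inherent variation from the process itself, not from the specifications. One common short-term estimate divides the average subgroup range by the constant d2; the sketch below assumes hypothetical subgroup data.

```python
# A minimal sketch of estimating short-term process variation from
# subgroup ranges (R-bar / d2), as in a capability study. The subgroup
# measurements are hypothetical; d2 = 2.326 is the standard constant
# for subgroups of size 5.
subgroups = [
    [0.2, -0.5, 1.1, 0.4, -0.8],
    [0.9, 0.3, -0.2, 1.4, 0.0],
    [-1.0, 0.6, 0.8, -0.3, 0.5],
]

ranges = [max(g) - min(g) for g in subgroups]
r_bar = sum(ranges) / len(ranges)
sigma_hat = r_bar / 2.326  # d2 for subgroups of size 5

print(f"estimated short-term sigma: {sigma_hat:.2f}")
```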

Origin of Six Sigma: Designing for Performance Excellence

The late Bill Smith, a reliability engineer at Motorola, is widely credited with originating Six Sigma and selling it to Motorola's legendary CEO, Robert Galvin. Smith noted that system failure rates were substantially higher than predicted by final product test. He suggested a number of possible causes for this phenomenon, including a dramatic increase in system complexity and the resulting opportunities for failure and a fundamental flaw in traditional quality thinking. He concluded that a much higher level of internal quality was required and convinced Galvin of the importance of setting Six Sigma as a quality goal. Smith's holistic view of reliability (as measured by mean time to failure) and quality (as measured by process variability and defect rates) was indeed new, as was the Six Sigma quality objective.

 Prior to Smith's analysis, a number of gurus, including Joseph M. Juran, Dorian Shainin, Genichi Taguchi and Eliyahu Goldratt, had presented their programs for quality and productivity improvement at Motorola. Mikel Harry, president of the Six Sigma Academy and co-author of Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, attended some of these programs and developed a program for the Government Electronics Division of Motorola that included Juran's quality journey, statistical process control (SPC) and Shainin's advanced diagnostic tools (ADT) and planned experimentation (PE).

 Harry later teamed with Smith on the Six Sigma initiative and created Motorola's Six Sigma Institute prior to forming his own firm. Smith and Harry's initial Six Sigma umbrella included SPC, ADT and PE. Later, they added design for manufacture (product capability and product complexity) and, as quality was linked to business performance, accomplishing quality through projects.

Motorola's design margin had been 25 percent (or 4σ, or Cp = 1.33). When Smith noted that escaping and latent defects under this strategy were far too high, he reasoned that the disparity between actual reliability and the reliability expected at final test could be accounted for by increased product complexity and deviations of the process mean from the target value, arriving at a value of 1.5 sigma. The complexity phenomenon had been noted previously by Wernher von Braun in the U.S. space program: If a large number of components must function for a system to accomplish its objective, the probability of system success diminishes rapidly as the number of components increases unless the reliability of each is essentially perfect. The 1.5-sigma deviation remains controversial, but it's not a fundamental issue. What is important is that Smith recognized that a process mean could not be maintained exactly on target, and when it deviated from target, the traditional three-sigma process produced large numbers of parts that exceeded specifications. This break with the three-sigma quality tradition was a major contribution, as was the recognition of the role of complexity, which dramatically increases the number of opportunities for (and thus the probability of) defects and the likelihood of subsequent system failure.
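Von Braun's complexity observation is easy to quantify: If all n components must function independently, system reliability is roughly the product of the component reliabilities. The sketch below uses illustrative component yields--one near centered three-sigma quality, one at the 3.4-PPM six-sigma level--rather than figures from Motorola.

```python
# A sketch of the complexity effect: if all n components must work,
# system reliability is (approximately) the product of component
# reliabilities. Yields are illustrative: a centered three-sigma
# process (~0.9973) vs. a 3.4-PPM six-sigma process (~0.9999966).
def system_reliability(r_component, n_components):
    return r_component ** n_components

for r in (0.9973, 0.9999966):
    for n in (100, 1_000, 10_000):
        print(f"r = {r}, n = {n:>6}: "
              f"system reliability = {system_reliability(r, n):.6f}")
```

At 10,000 parts, the three-sigma-quality system almost never works, while the six-sigma-quality system still succeeds about 97 percent of the time--the heart of Smith's argument.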

 But what about the then-existing theory of optimal quality levels? Motorola observed that Japanese products were of much higher quality than was predicted by the traditional optimal quality level curves. Independently, Robert Cole investigated this issue and noted several reasons for this change in the quality viewpoint. Japanese quality professionals, he asserted, realized that the costs of poor quality were far larger than had been supposed; recognized that focusing on quality improvement as a companywide effort improved a wide range of performance measures; established a system that moved toward quality improvement and low-cost solutions simultaneously; shifted the focus of quality improvement from product attributes to operational procedures; developed a dynamic model in which customer demands for quality rise along with their willingness to pay for these improvements; and focused on preventing error at the source, thereby dramatically reducing appraisal costs.

As Motorola set out on its quality journey, Harry noted that the company ran into a five-sigma wall. Motorola found that it could attain a three-sigma level by installing process improvement and control in its own facilities, and improve this to the four- or five-sigma level through the education of its suppliers. However, Six Sigma only became possible once the company had attained a better understanding of the role of robust design--systems design, parameter design and tolerance design.

 Not coincidentally, Motorola won the Malcolm Baldrige National Quality Award shortly after the rollout of Six Sigma. Receiving the Baldrige Award requires the winning company to present its concepts to the world. Thus, as Six Sigma was approaching adolescence, quality professionals at Motorola were describing their methods to their colleagues and learning how far Motorola had advanced in comparison to other companies. At this point Harry wrote a strategic vision for accelerating Six Sigma. This included a change in focus, anchoring quality by dollars and seeking a business transformation. It included a description of different competence levels in the Six Sigma methods, which, in the karate tradition, were designated by belts--Green Belt, Black Belt and Master Black Belt.

 Elsewhere, GE's Jack Welch and AlliedSignal's Larry Bossidy (first at GE Financial) led their organizations' cultural change through Six Sigma initiatives. In 1998, Business Week reported that GE saved $330 million through Six Sigma, doubling its CEO's previous prediction. Welch has predicted a savings of $10 billion over five years. It's no wonder Six Sigma has gained industry's attention.

A look at some criticisms

Despite the evidence that it does work, Six Sigma still has its critics. One of the more common criticisms is that it has little to offer that can't be found elsewhere--that it's simply a marketing ploy. It's true that Six Sigma programs have incorporated tools that proved useful in previous quality initiatives. It also includes some older measures, such as defects per unit, recast in a modern quality context as defects per million opportunities. Although the quality movement has attempted to move from defect measures to variable measures of product and process performance, the older methods don't necessarily represent a primitive way of thinking about quality. After all, defect and defective counts provide tangible, measurable results that we can use.
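For illustration, defects per million opportunities is a simple normalization of the raw defect count; the counts in the sketch below are hypothetical.

```python
# A sketch of the defects-per-million-opportunities (DPMO) metric.
# All counts are hypothetical illustrations.
defects = 12
units = 500
opportunities_per_unit = 20  # distinct ways each unit could be defective

dpu = defects / units
dpmo = defects / (units * opportunities_per_unit) * 1e6

print(f"defects per unit: {dpu:.3f}")  # 0.024
print(f"DPMO: {dpmo:.0f}")             # 1200
```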

 Strangely, another criticism is that Six Sigma doesn't take enough of a preventive, proactive approach to problems, that it's more of an appraisal system or a corrective action system. There's much more to Six Sigma than appraisal, including the education and training of the entire workforce and the creation of an infrastructure of experts who focus on projects to improve quality, timeliness and cost. Further, this criticism unfairly assumes that appraisal programs aren't useful. In fact, appraisal can be a great starting point--it makes clear the terrible consequences of current practices. Appraisals also provide a foundation for identifying and tracking improvements, which is critical to the well-being of any program. Any program director who believes a supervisor who tells him or her to just do a great job and not worry about tracking accomplishments will be out on the street sooner rather than later.

 As with appraisals and assessments, prescriptive corrective actions aren't the evil they're made out to be. All organizations reside in some state of operation. How can that situation be improved if it's not assessed, with the assessment followed by corrective actions? Although it may be preferable to design a whole system, all too often that cure ignores the original problems.

 Some have made the argument that the success of quality programs, whether they have a corrective or a preventive approach, depends more on the establishment of the right organizational culture than on the program--in other words, no quality program will result in improvement if it can't be internalized. This is one area where Six Sigma truly stands out: Unlike many of the earlier programs, Six Sigma's training system addresses the importance of companywide program buy-in. Indeed, Six Sigma recognizes that cultural issues, including leadership development, are among the most important issues to be addressed in the improvement of any organization.

 Leadership is key to the success of any plan that attempts to change the way an organization does business. Without the support, participation and leadership of top- and mid-level management and the development of an appropriate infrastructure, any program is destined to become just another fad or the latest flavor-of-the-month program.

For example, let's imagine that an important component of a system is regularly shipped on schedule for installation in the main system. Each time, following installation, inspectors find that the component, while functional, isn't quite ready for deployment. It's removed from the system, returned to the supplier, repaired, shipped back, installed and retested. Look at the wasted steps, the costs and the quality losses caused by removal, shipping and reinstallation. Why does this continue? Perhaps the supplier records the component as complete when first shipped and receives payment or credit, but no record is kept of the return. This hypothetical situation is commonplace. Six Sigma can facilitate resolution of such situations through its fact-finding approach.

Other criticisms are more technical. Some argue that assuming a process mean to be 1.5 sigma off target is ridiculous. (See "Origin of Six Sigma" and "How Six Sigma works" above.) Perhaps 1.5 sigma is a bit large, but even more ridiculous is the assumption that one could keep the process mean exactly on target. Furthermore, sigma, as defined in process capability studies, is the short-term, within-sample variability. Thus the 1.5-sigma shift allows for variation of the mean about the target. A process's long-term variation is often larger than its short-term variation because of other sources of variability introduced by operators, materials and operating conditions. In this setting, the shift serves as a proxy for that variation, reflected in a Cpk substantially smaller than the corresponding Cp.
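The effect of the shift on the capability indexes can be shown in a few lines. The sketch below reuses the six-sigma example from Figure 1 (LSL = –6, USL = +6, σ = 1): A 1.5-sigma shift leaves Cp unchanged while pulling Cpk down.

```python
# A sketch of how a 1.5-sigma mean shift shows up as Cpk falling
# below Cp, using the six-sigma example from Figure 1.
LSL, USL, SIGMA = -6.0, 6.0, 1.0

def cp(sigma):
    """Potential capability: spec width over six sigma."""
    return (USL - LSL) / (6 * sigma)

def cpk(mu, sigma):
    """Actual capability: penalizes an off-center mean."""
    return min(USL - mu, mu - LSL) / (3 * sigma)

print(cp(SIGMA))        # 2.0 -- short-term (potential) capability
print(cpk(0.0, SIGMA))  # 2.0 -- centered process: Cpk equals Cp
print(cpk(1.5, SIGMA))  # 1.5 -- 1.5-sigma shift: Cpk drops below Cp
```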

Another technical criticism is that the normal distribution is just a model and doesn't represent most processes. This is true--but it doesn't mean the model can't still be useful. The normal distribution is reasonable for many situations. Furthermore, any other realistic process model (log-normal, Student's t, normal mixture) makes the case for Six Sigma even more emphatically, because all of them predict a much larger probability of producing product outside the specifications. Regardless, process stability is perhaps more important than the choice of distribution, because stability is a prerequisite for any of these predictions. Many industrial processes cannot be regarded as stable, and insufficient effort has been directed toward this fundamental insight of Walter A. Shewhart's.
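To see why, the sketch below compares tail probabilities beyond k standard deviations under the normal model and under a heavier-tailed Student's t with 5 degrees of freedom, rescaled to unit standard deviation so the comparison is fair. (The choice of 5 degrees of freedom is an illustrative assumption.)

```python
# A sketch comparing tail probabilities beyond k standard deviations
# under the normal model vs. a heavier-tailed Student's t with 5
# degrees of freedom (rescaled so both have standard deviation 1).
import math
from scipy.stats import norm, t

DF = 5
scale = math.sqrt(DF / (DF - 2))  # standard deviation of a t(5) variate

for k in (3.0, 4.5, 6.0):
    p_normal = norm.sf(k)
    p_heavy = t.sf(k * scale, DF)  # P(T > k standard deviations)
    print(f"beyond {k} sd: normal = {p_normal:.2e}, t(5) = {p_heavy:.2e}")
```

At every distance the heavy-tailed model predicts orders of magnitude more out-of-spec product, which argues for the wide six-sigma margin rather than against it.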

Putting it all together

 Ultimately, Six Sigma is a superb strategy that addresses leadership, tools and infrastructure issues, some of which were neglected by previous programs. CEOs of leading U.S. firms praise the accomplishments of their Six Sigma initiatives. Outstanding quality professionals and industrial statisticians concur and describe potential roles for their colleagues. Engineering programs have begun to incorporate elements of Six Sigma into their curricula. When have quality professionals ever had such a splendid opportunity to contribute? Constructive criticism of Six Sigma or any other quality program can be useful--but it should be informed and fair.

 

References

    Cole, R.E., "The Quality Revolution," Production and Operations Management, Vol. 1, No. 1, 1992, pp. 118–120.

    Hahn, G.J., W.J. Hill, R.W. Hoerl and S.A. Zinkgraf, "The Impact of Six Sigma Improvement--A Glimpse into the Future of Statistics," The American Statistician, Vol. 53, Aug. 1999, pp. 208–215.

    Harry, M. and R. Schroeder, Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's Top Corporations, Doubleday, 1999.

    Pignatiello, J.J. Jr. and J.S. Ramberg, "Process Capability: Engineering and Statistical Issues," Chapter 19 in Statistical Applications in Process Control, J. Bert Keats and Douglas C. Montgomery, eds., Marcel Dekker, 1996, pp. 407–434.

    Pignatiello, J.J. Jr. and J.S. Ramberg, "Top Ten Triumphs and Tragedies of Genichi Taguchi," Quality Engineering, Vol. 4, No. 2, 1991, 211–226.

    Pyzdek, T., The Complete Guide to Six Sigma, Quality Press, 2000.

    Tadikamalla, P.R., "The Confusion over Six Sigma," Quality Progress, Nov. 1994, pp. 83–85.

 

About the author

John S. Ramberg, Ph.D., is a fellow of the Institute of Industrial Engineers, the American Society for Quality and the American Statistical Association. He is also a member of the founding committee of the International Quality Federation and teaches engineering statistics and quality management at the University of Arizona. Ramberg is the principal investigator of an NSF grant on the implementation and deployment issues of modern quality initiatives and a GE Foundation grant that features the introduction of Six Sigma into engineering curricula, and is also a principal of the newly formed Arizona Partnership for Six Sigma. He has served as editor of the Journal of Quality Technology and head of the University of Arizona's Systems and Industrial Engineering Department. Ramberg has received the Craig, Ott, Saty and Shewell awards for his research and presentations. E-mail him at jramberg@qualitydigest.com.

 Ramberg wishes to thank Felix Harjanto, Aamir Kazi and Santosh Shetty, master's candidates at the University of Arizona, for their assistance in the preparation of Figure 1 and their helpful input.
