Featured Product
This Week in Quality Digest Live
Six Sigma Features
Douglas C. Fair
Part 3 of our series on SPC in a digital era
Scott A. Hindle
Part 2 of our series on SPC in a digital era
Donald J. Wheeler
Part 2: By trying to do better, we can make things worse
Douglas C. Fair
Introducing our series on SPC in a digital era
Donald J. Wheeler
Part 1: Process-hyphen-control illustrated

More Features

Six Sigma News
How to use Minitab statistical functions to improve business processes
Sept. 28–29, 2022, at the MassMutual Center in Springfield, MA
Elsmar Cove is a leading forum for quality and standards compliance
Is the future of quality management actually business management?
Too often process enhancements occur in silos where there is little positive impact on the big picture
Collect measurements, visual defect information, simple Go/No-Go situations from any online device
Good quality is adding an average of 11 percent to organizations’ revenue growth
Floor symbols and decals create a SMART floor environment, adding visual organization to any environment
A guide for practitioners and managers

More News

John Flaig

Six Sigma

Is the 1.5 Sigma Shift an ‘Ill-Conceived Safety Margin?’

The hidden use of safety margins in quality engineering

Published: Monday, February 24, 2014 - 15:02

Engineers have used safety margins for centuries to protect their companies and customers from the consequences of product degradation and failure. Sometimes the safety margins are fairly obvious (e.g., maximum-load limits posted in elevators), and other times they’re not.

Design margins are often hidden from view. For example, some computers have cooling fans that are used to reduce the CPU’s failure rate. A heat sensor in the computer automatically increases the fan’s speed, which increases air flow and cools the CPU, thus reducing the failure rate of this expensive computer component.

In quality engineering there are some popular statistics that have less-than-obvious safety margins included. Consider the process capability index Cpk, which is defined for a stable normal process as:

Cpk = min{(USL – mean), (mean – LSL)}/ 3 sigma

where sigma is the within the rational subgroup estimate of variation.

Because the minimum is used, the metric reflects only half the distribution, and hence the fraction nonconforming for the distribution can range from p to 2p, where p corresponds to the tail area of the minimum distance of the mean to the specification limits for a normal distribution.

And because p is the worst case for half of the distribution, Cpk has a hidden safety margin in terms of the possible fraction nonconforming that ranges from zero to p. The fraction nonconforming is considered by many to be an empirical measure of process incapability; therefore, Cpk is a nonspecific but conservative measure of process capability.

Another example of a hidden safety margin in quality engineering is the assumed ±1.5 sigma shift that is part of the Six Sigma methodology. Of course, it’s not called a safety margin, but a rose by any other name is still a rose.

According to historical anecdotes, engineers at Motorola observed that their process means tended to drift around by about ±1.5 sigma. They wanted to build in a safety margin to compensate for this variation in case other processes exhibited similar instability. At least this is the story that we are left with today. However, I think the real story might be a little different.

Let’s assume we have a stable process (i.e., one that is not drifting around by ±1.5 sigma) given by X = {x1, x2, x3, …, xN} and further assume that the xi are approximately normally distributed. The xi ~ N(µ, σ) can be mapped into N(0, 1) using the linear transform:

zi = (xi – µ)/σ, which results in Z ~ N(0, 1).

Forming moving subgroups of Z of size n = 2, the estimated sigma of the mean (m) is given by:

Where, for the standard normal distribution:

Then,

Finally, the α = 0.1 two-sided 90-percent confidence interval for the mean µ is given by:

However, this is for the Z distribution where , so for the original X distribution we have:

Then,

I think that this may be how the Motorola people arrived at the ±1.5 sigma shift.

Now, I agree that it’s good engineering practice to specify a safety margin to mitigate risk, but I really think the type-1 error choice should be based on an analysis of the ramifications of failure rather than set at an arbitrary ±1.5 sigma. This decision is just like buying an insurance policy: You may not be paying enough, or you could be paying too much to protect yourself against your risk exposure. 

Discuss

About The Author

John Flaig’s picture

John Flaig

John J. Flaig, Ph.D., is a fellow of the American Society for Quality and is managing director of Applied Technology at www.e-at-usa.com, a training and consulting company. Flaig has given lectures and seminars in Europe, Asia, and throughout the United States. His special interests are in statistical process control, process capability analysis, supplier management, design of experiments, and process optimization. He was formerly a member of the Editorial Board of Quality Engineering, a journal of the ASQ, and associate editor of Quality Technology and Quantitative Management, a journal of the International Chinese Association of Quantitative Management.

Comments

Ill-used, anyway...

I've been doing some research for a paper on this subject. Michael Harry originally conceived the 1.5-sigma shift as a design margin. He told his engineers to run their simulations with all the critical component metrics shifted 1.5 sigma in the worst-case direction, to simulate the potential effects on the system of a worst-case tolerance stack nightmare. This makes a lot of sense, from a robust-design perspective.

What doesn't make sense is that for whatever reason, this idea ended up being extrapolated to an assumption that a process could somehow sustain an undetected 1.5-sigma shift indefinitely (or, at least, over a statistically stable production run of 1,000,000). That absurd claim is probably the worst of several fundamentally flawed assumptions for the "Process Sigma Table" in most Six Sigma training materials, and its claim that a process operating at "Six Sigma" levels of quality produce no more than 3.4 defects per million opportunities.