Published: Tuesday, April 4, 2006  21:00
I’ve been reading about the 1.5 sigma shift for some time. Several different interpretations have been published, and people are losing sleep over this matter. Some have even driven a stake into the ground to fervidly defend their position. My intent in writing this column is to simplify the understanding of the 1.5 sigma shift and thereby make the Six Sigma methodology easier to use. Following are the two best definitions of the 1.5 sigma shift I could find:
"1.5sigma shifts and drifts: The theory that over time any process in control will shift from its target by a value of up to 1.5 sigma. Allowing for the 1.5 sigma shift results in the generally accepted six sigma value of 3.4 defects per million opportunities. Ignoring the 1.5 sigma shift results in a six sigma value of 2 defects per billion opportunities." (from ASQ’s Six Sigma Forum Magazine)
"By offsetting normal distribution by a 1.5 standard deviation on either side, the adjustment takes into account what happens to every process over many cycles of manufacturing. … Simply put, accommodating shift and drift is our ’fudge factor,’ or a way to allow for unexpected errors or movement over time. Using 1.5 sigma as a standard deviation gives us a strong advantage in improving quality not only in industrial process and designs, but in commercial processes as well. It allows us to design products and services that are relatively impervious, or ’robust,’ to natural, unavoidable sources of variation in processes, components, and materials."
(From Six Sigma: The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations [Mikel Harry and Richard Schroeder, Currency, 1999])
Understanding 1.5 sigma shift
Reflecting back to 1986, when I moved from Austin, Texas, to Schaumburg, Illinois, and started working with Bill Smith, the inventor of Six Sigma, I learned from him the ideas behind the sigma shift. I had already designed a sigma program in Austin, but that was without the shift. I remember Smith saying that it had something to do with process control charts.
When it all started
Back in 1986, PCs had just become popular. Bill Smith had a Radio Shack computer in his office, but PCs weren’t yet in widespread use. In the 1920s, during the days of Walter Shewhart, people didn’t think about computers or calculators; calculations were all done manually. When Shewhart developed his statistical control model, he must have thought about simplifying calculations. When he studied variation, he recommended 25 subgroups with a sample size of four. By doing this, he could easily calculate percentages to determine probabilities by counting, rather than performing tedious calculations or integrating the probability density function within specified limits by parts.
Some of the arguments for using a sample size of four or five were:
The cost of sampling four or five units, in terms of the amount of inspection and the amount of computation, is less than that of sampling larger numbers.
With a sample size of four or five, there’s less chance of the process shifting while the sample is being taken. Averages of smaller samples also mask the effect of assignable causes less than averages of larger samples do.
Of the two, a sample of five makes it easier to calculate the sample average, while a sample size of four makes it easier to calculate the standard error of the mean, because √4 = 2.
Let’s look at control charts. The most commonly used control charts are Xbar, R (or Xbar, s) charts, rather than X, R charts. Xbar, R charts are distribution-independent because of the central limit theorem, which states that sample averages tend to follow a normal distribution irrespective of the distribution of the population. The measure of dispersion among means is known as the "standard error of the mean." Thus, the larger the sample size, the closer the sample means will be to one another; in other words, sample-to-sample variation will be less. That’s why sample size matters. Without computers, standard deviations were prone to calculation errors, so control limits were determined using sample range values instead.
The typical formula for determining control limits for the Xbar chart is:

$$\text{Control limits} = \bar{\bar{X}} \pm \frac{3\hat{\sigma}}{\sqrt{n}}$$

where $\hat{\sigma}$ can be estimated using $\hat{\sigma} = \bar{R}/d_2$. Similarly, if the population standard deviation $\sigma$ is known, the standard error of the mean, i.e., the dispersion among means, is calculated as follows:

$$\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}$$
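To make the arithmetic concrete, here is a minimal Python sketch (mine, not from the column) that computes Xbar-chart control limits from subgroup data, estimating sigma from the average range via the standard d2 constants; the data and function name are hypothetical:

```python
# A minimal sketch (not from the original column) of Xbar-chart control
# limits computed from subgroup data. Sigma is estimated from the average
# range using the standard d2 constants.
import statistics

D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 10: 3.078}  # standard SPC constants

def xbar_limits(subgroups):
    """Return (LCL, UCL) for the Xbar chart from equal-size subgroups."""
    n = len(subgroups[0])
    xbars = [statistics.mean(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    grand_mean = statistics.mean(xbars)
    sigma_hat = statistics.mean(ranges) / D2[n]  # sigma estimated as Rbar/d2
    half_width = 3 * sigma_hat / n ** 0.5        # 3 standard errors of the mean
    return grand_mean - half_width, grand_mean + half_width

# Hypothetical data: five subgroups with Shewhart's sample size of four
data = [[9.8, 10.1, 10.0, 9.9], [10.2, 10.0, 9.7, 10.1],
        [9.9, 10.3, 10.0, 9.8], [10.0, 9.9, 10.2, 10.1],
        [9.7, 10.0, 10.1, 9.9]]
lcl, ucl = xbar_limits(data)
print(f"LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```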
The following table summarizes Xbar, R chart control limits for various sample sizes:

Sample size (n)   √n          Control limits for Xbar, R chart   Control limits (multiples of σ)
1                 1           Xbar ± 3σ/√1                       3
2                 1.414214    Xbar ± 3σ/√2                       2.121320344
3                 1.732051    Xbar ± 3σ/√3                       1.732050808
4                 2           Xbar ± 3σ/√4                       1.5
5                 2.236068    Xbar ± 3σ/√5                       1.341640786
10                3.162278    Xbar ± 3σ/√10                      0.948683298
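The last column is simply 3/√n, so the table is easy to verify, or to extend to other sample sizes, with a few lines of Python:

```python
# Verify the table's last column: the control-limit half-width for the
# subgroup mean is 3/sqrt(n), in units of the population sigma.
import math

for n in (1, 2, 3, 4, 5, 10):
    print(f"n = {n:2d}   sqrt(n) = {math.sqrt(n):.6f}   limit = {3 / math.sqrt(n):.9f} sigma")
```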
As we can see, when the sample size changes from four to 10, the control limits for the Xbar, R chart tighten from 1.5 sigma to 0.95 sigma. The worst out-of-control condition occurs when a data point (i.e., a subgroup mean) exceeds its control limits. When the sample size is four, the allowed shift before the process goes out of control is 1.5 sigma; when the sample size is 10, it is 0.95 sigma. The larger the sample size, the tighter the control limits, so the chart signals smaller shifts and the process is interrupted more frequently; sensitivity to process variation thus depends on the sample size as well as on the sigma level of the limits. With six sigma limits, the allowed defect rate would be 3.4 PPM for a sample size of four (a 1.5 sigma shift) and roughly 0.2 PPM for a sample size of 10 (a 0.95 sigma shift). Because the Six Sigma model chose a sample size of four, it allows a 1.5 sigma shift in the process mean; larger samples allow smaller shifts and would therefore yield a more conservative sigma measure.
The table above shows that a process can shift by any number of sigma; with a sample size of four or more, the worst-case allowable shift is 1.5 sigma. This doesn’t mean that a process will always have a 1.5 sigma shift. The Six Sigma model is simply based on certain assumptions in order to standardize the process. Today, it’s easy to calculate the precise PPM corresponding to various values of sigma shift in the process mean and to make economic decisions accordingly.
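As an illustration of such a calculation, here is a short Python sketch; it assumes the usual one-sided normal-tail convention that produces the familiar 3.4 PPM figure, and the helper name ppm is hypothetical:

```python
# A sketch of the PPM calculation, assuming the one-sided normal-tail
# convention behind the familiar 3.4 PPM figure. Requires scipy.
from scipy.stats import norm

def ppm(sigma_level, shift):
    """Defects per million beyond the nearer spec limit after a mean shift."""
    return norm.sf(sigma_level - shift) * 1e6

print(f"{ppm(6.0, 1.5):.2f} PPM")   # ~3.40 PPM: six sigma with a 1.5 sigma shift
print(f"{ppm(6.0, 0.95):.2f} PPM")  # ~0.22 PPM: worst case for subgroups of 10
```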
Six Sigma was defined based on benchmarked failure rates representing virtual perfection, on control chart theory for the maximum allowed shift in the process mean, and on Shewhart’s recommendation of effective sample sizes for monitoring variation due to assignable causes. According to Shewhart’s theory, for a typical sample size of four the maximum allowed shift in the process mean is 1.5 sigma of the population. That doesn’t mean the process must have this much variation. Given today’s computing capability, people can calculate the exact sigma level of a process based on any sample size, although it would be difficult to benchmark among companies. If that were the case, it wouldn’t be Six Sigma.
Comments
Thank you sir, it was a very good article.
I wonder if there is a relation between confidence intervals and the 1.5 sigma shift?