Praveen Gupta


The 1.5 Sigma Shift Explained

Standard deviation shouldn’t be a controversial matter.

Published: Tuesday, April 4, 2006 - 22:00

I’ve been reading about the 1.5 sigma shift for some time. Several different interpretations have been published, and people are losing sleep over this matter. Some have even driven a stake into the ground to fervidly defend their position. My intent in writing this column is to simplify the understanding of the 1.5 sigma shift and thereby make the Six Sigma methodology easier to use. Following are the two best definitions of the 1.5 sigma shift I could find:

"1.5-sigma shifts and drifts: The theory that over time any process in control will shift from its target by a value of up to 1.5 sigma. Allowing for the 1.5 sigma shift results in the generally accepted six sigma value of 3.4 defects per million opportunities. Ignoring the 1.5 sigma shift results in a six sigma value of 2 defects per billion opportunities." (from ASQ’s Six Sigma Forum Magazine)


"By offsetting normal distribution by a 1.5 standard deviation on either side, the adjustment takes into account what happens to every process over many cycles of manufacturing. … Simply put, accommodating shift and drift is our 'fudge factor,' or a way to allow for unexpected errors or movement over time. Using 1.5 sigma as a standard deviation gives us a strong advantage in improving quality not only in industrial process and designs, but in commercial processes as well. It allows us to design products and services that are relatively impervious, or 'robust,' to natural, unavoidable sources of variation in processes, components, and materials."
(From Six Sigma: The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations [Mikel Harry and Richard Schroeder, Currency, 1999])

Understanding 1.5 sigma shift
Reflecting back to 1986: when I moved from Austin, Texas, to Schaumburg, Illinois, and started working with Bill Smith, the inventor of Six Sigma, I learned the ideas around the sigma shift from him. I had already designed a sigma program in Austin, but that was without the shift. I remember Smith saying that it had something to do with process control charts.

When it all started
Back in 1986, PCs had just become popular; Bill Smith had a Radio Shack computer in his office, but PCs weren’t yet in widespread use. In the 1920s, in Walter Shewhart’s day, people didn’t think about computers or calculators, and calculations were all done manually. When Shewhart developed his statistical control model, he must have thought about simplifying calculations. When he studied variation, he recommended 25 subgroups with a sample size of four. This way he could determine probabilities by counting and taking simple percentages, rather than by tedious hand calculation, such as integrating a probability density function between specified limits by parts.

Some of the arguments for using a sample size of four or five were:

  • The cost, in terms of the amount of inspection and the amount of computation, of sampling four or five is less than that of sampling larger numbers.

  • With a sample size of four or five, there’s less chance of the process shifting while the sample is being taken. Averages of smaller samples also mask the effect of assignable causes less than averages of larger samples do.

  • Of the two, a sample of five makes it easier to calculate the sample average, while a sample size of four makes it easier to calculate the standard error, because √4 = 2.

Let’s look at control charts. The most commonly used control charts are X-bar, R charts (or X-bar, s charts), rather than X, R charts. X-bar, R charts are distribution-independent because of the central limit theorem, which states that sample averages tend to follow a normal distribution irrespective of the distribution of the population. The measure of dispersion among the sample means is known as the standard error of the mean. Thus, with a larger sample size, the sample means will be closer to one another; in other words, sample-to-sample variation will be less. That’s why sample size matters. Without computers, standard deviation calculations were prone to error, so control limits were determined using sample range values instead.
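The central limit theorem claim above is easy to check numerically. A minimal sketch (my own illustration, not from the article), drawing subgroup means from a clearly non-normal population:

```python
import random
import statistics

# Illustrative sketch: averages of subgroups drawn from a non-normal
# (exponential) population still cluster around the population mean,
# with a spread close to sigma / sqrt(n) -- the standard error.
random.seed(42)

n = 4                    # subgroup size, as in Shewhart's recommendation
population_sigma = 1.0   # an exponential with mean 1 also has sigma 1

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(20000)
]

observed = statistics.stdev(sample_means)        # spread of the means
expected = population_sigma / n ** 0.5           # sigma / sqrt(4) = 0.5

print(round(observed, 2), round(expected, 2))
```

With 20,000 subgroups the observed dispersion of the means comes out very close to σ/√n, even though the underlying population is heavily skewed.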

The typical formula for determining the control limits of the X-bar chart is:

UCL, LCL = X̿ ± A₂R̄

where σ can be estimated using σ ≈ R̄ / d₂ (A₂ and d₂ are tabulated constants that depend on the subgroup size). Similarly, if the population standard deviation σ is known, the standard error of the mean, i.e., the dispersion among the sample means, is calculated as follows:

σx̄ = σ / √n, so the control limits become X̿ ± 3σ / √n.
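A sketch of these control-limit calculations in code, under the usual X-bar chart conventions (the function names are mine; the d₂ values are the standard tabulated constants):

```python
# d2 constants for estimating sigma from the average subgroup range R-bar.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def estimate_sigma(r_bar: float, n: int) -> float:
    """Estimate the population sigma as R-bar / d2."""
    return r_bar / D2[n]

def xbar_control_limits(grand_mean: float, sigma: float, n: int):
    """Return (LCL, UCL) = grand mean +/- 3 * sigma / sqrt(n)."""
    half_width = 3 * sigma / n ** 0.5
    return grand_mean - half_width, grand_mean + half_width

# With sigma known and n = 4, the limits sit 1.5 sigma from the grand mean:
lcl, ucl = xbar_control_limits(grand_mean=10.0, sigma=2.0, n=4)
print(lcl, ucl)  # 7.0 13.0
```

Note that for n = 4 the half-width is 3σ/2 = 1.5σ, which is exactly the figure at the center of this discussion.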

The following table summarizes the X-bar chart control limits for various sample sizes:

Sample size    Square root of n    Control limits for X-bar chart
                                   (multiples of population sigma, 3/√n)
1              1                   3
2              1.414214            2.121320344
3              1.732051            1.732050808
4              2                   1.5
5              2.236068            1.341640786
10             3.162278            0.948683298

As we can see, when the sample size changes from four to 10, the control limits for the X-bar chart move from 1.5 sigma to about 0.95 sigma. The worst out-of-control condition occurs when a data point (i.e., a sample mean) exceeds its control limit. When the sample size is four, the shift allowed before the process goes out of control is 1.5 sigma; when the sample size is 10, it is about 0.95 sigma. The larger the sample size, the tighter the limits and the more frequently the chart signals, making the control chart more sensitive to process variation. The allowed parts per million (PPM) with six sigma limits would be 3.4 PPM and about 3 PPM for sample sizes of four and 10, respectively. Because the Six Sigma model chooses a sample size of four, it allows a 1.5 sigma shift in the process mean; larger samples allow a smaller shift and thus present a more conservative measure of sigma.
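The table's third column can be reproduced in one line; a quick check (my own, for verification):

```python
# The X-bar chart control limits, expressed in multiples of the
# population sigma, are simply 3 / sqrt(n) for subgroup size n.
limits = {n: round(3 / n ** 0.5, 9) for n in (1, 2, 3, 4, 5, 10)}

for n, limit in limits.items():
    print(n, limit)
```

For n = 4 this gives exactly 1.5, and for n = 10 about 0.949, matching the table.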

As shown above, a process can shift by any amount of sigma; with a sample size of four or more, the worst-case allowed shift is 1.5 sigma. It doesn’t mean that the process will always have a 1.5 sigma shift. The Six Sigma model is based on certain assumptions in order to standardize the measure. Today, it’s easy to precisely calculate the PPM corresponding to various values of sigma shift in the process mean and make economic decisions accordingly.
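Such a shift-to-PPM calculation is a one-liner with the normal tail probability. A hedged sketch (my own helper, computing the one-sided tail beyond the nearer specification limit):

```python
import math

def ppm_beyond_limit(sigma_level: float, shift: float) -> float:
    """Defects per million beyond the nearer spec limit when the mean
    shifts by `shift` sigma toward it (one-sided normal tail)."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for standard normal
    return tail * 1e6

print(round(ppm_beyond_limit(6.0, 1.5), 1))  # the familiar 3.4 PPM
print(round(ppm_beyond_limit(6.0, 0.0), 4))  # ~0.001 PPM with no shift
```

A six sigma process with a 1.5 sigma shift leaves a 4.5 sigma tail, which is where the well-known 3.4 PPM figure comes from.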

Six Sigma was defined based on benchmarked failure rates representing virtual perfection, on control chart theory for the maximum allowed shift in the process mean, and on Shewhart’s recommendation of effective sample sizes for monitoring variation due to assignable causes. According to Shewhart’s theory, for a typical sample size of four the maximum allowed shift in the process mean is 1.5 times the population sigma. That doesn’t mean the process must have this much variation. Given today’s computing capability, one can calculate the exact sigma level of a process for any sample size, although that would make benchmarking among companies difficult. If that were the case, it wouldn’t be Six Sigma.


About The Author


Praveen Gupta

Praveen Gupta is the founding president of Accelper Consulting (www.accelper.com), has worked at Motorola and AT&T Bell Laboratories, and consulted with nearly 100 small- to large-size companies including CNA, Abbott Labs, Superior Essex, Dentsply, Hexel, Experian, Sloan Valves, Weber Markings, Wayne State (Ford), and Telular. Gupta taught Operations Management at DePaul University, and Business Innovation at Illinois Institute of Technology, Chicago. He has conducted seminars worldwide for over 20 years.

He is the author of several books including Business Innovation in the 21st Century, Stat Free Six Sigma, Six Sigma Performance Handbook, and Service Scorecard.