Douglas C. Fair

Quality Insider

A Dose of Reality

Using the right statistical tool

Published: Monday, August 27, 2007 - 21:00

Just weeks after earning my industrial statistics degree, I hired on with a major aerospace company. My first “real” job entailed working with engineers and operators to deploy statistical process control (SPC) in a large machine shop. I quickly found out that the warm, coddling confines of a university classroom had done little to prepare me for the complexities of a short-run, aerospace SPC deployment. I found myself confronted with a bewildering array of different machines, jigs, tooling aids, part numbers, and union rules. As a newly hired statistician, I was stunned to find that the aerospace world lacked any similarity to what I had been reading in my statistical texts. Although 20 years have passed, I remember a great deal about my experience because that shop floor taught me a lot about the practical application of SPC.

My first lesson remains very clear: Traditional X-bar and range control charts don’t work in a high-mix, low-volume environment. Too bad, especially because that was what I studied in school. Instead, we learned to apply control charts that were process-specific. Rather than create a separate control chart for each part/feature we measured, we had control charts that were unique to the machines and features we were trying to control. Each of these machine-centric control charts showed how the process changed through time and across many part numbers. Believe me, this practice wasn’t easy, especially using paper and pencil. Even if each part had different engineering specifications, we placed them on the same chart. Doing this required that we apply data-normalization techniques to account for the different specifications. Regardless of the extra work involved, I learned that normalization was crucial for using true process control charts.

When I mention true process control, I’m not referring to the control of process-specific parameters, such as speeds, feeds, and temperatures. Tracking those parameters is fairly straightforward: Create a control chart for temperatures and see how they vary throughout the day or week. Instead, I’m focusing this column on the complex practice of applying a process control chart across different parts with different engineering specifications.

Suppose we wish to use SPC at a lathe. The lathe is used to turn and cut stainless steel shafts. On each shaft we measure outside diameters (ODs) in three different important locations—OD location A, OD location B, and OD location C. Our shaft part numbers are generically identified by their color (such as blue part, yellow part, or green part). ODs measured on each part at the same location have different target values from one colored part to the next. While each part’s engineering target value is different, the tolerance of ±0.005 in. is identical.

Here is a summary of our example:

  1. A machine (lathe 167) is the process we wish to control

  2. A part feature (OD location B) is common to our parts

  3. Two different parts (blue part and yellow part) have been run on lathe 167. Both parts are similar in configuration but have different target values for OD location B.

The control chart for our example is shown in figure 1.

The chart’s subgroup data values are displayed in time order, while the part manufacturing sequence is highlighted with vertical blue lines (the blue part was made first, then the yellow part, then the blue part once again).

The plotted data values in figure 1 have been normalized on the X-bar chart as deviation from target. That is, if the subgroup average for OD location B were, say, 2.7497 and its target were 2.7500, then the value which is plotted on the X-bar chart is –0.0003 (2.7497 – 2.7500).

There’s no normalization applied to range values. The mathematical formula for deviation from target normalization follows:

Plotted Value = (Subgroup Average – Target)

By using this simple normalization technique, one can plot parts with different target values on the same chart. However, simple isn’t always correct.
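Before getting to the catch, here is a minimal sketch of this simple technique in Python. The part names, targets, and subgroup measurements are hypothetical, invented purely for illustration; only the plotted-value formula comes from the discussion above.

```python
# Deviation-from-target normalization: plot (subgroup average - target)
# so parts with different targets can share one X-bar chart.
# All data below are hypothetical, for illustration only.

from statistics import mean

# Hypothetical subgroups: (part, target for OD location B, measurements)
subgroups = [
    ("blue part",   2.7500, [2.7497, 2.7502, 2.7499]),
    ("blue part",   2.7500, [2.7503, 2.7498, 2.7501]),
    ("yellow part", 3.1250, [3.1249, 3.1251, 3.1250]),
    ("yellow part", 3.1250, [3.1252, 3.1250, 3.1251]),
]

for part, target, values in subgroups:
    subgroup_avg = mean(values)
    plotted_value = subgroup_avg - target          # deviation from target
    subgroup_range = max(values) - min(values)     # range chart is not normalized
    print(f"{part:12s}  plotted={plotted_value:+.4f}  range={subgroup_range:.4f}")
```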

You see, there’s a catch. Isn’t there always? Sometimes normalization techniques cannot be used for creating common control limits. When standard deviations are significantly different between parts, control limits cannot be shared. In figure 1, notice that the control limits for the yellow part look closer together on the X-bar chart than those for the blue part. This happens because the average range for the yellow part is much lower than that of the blue part, an indication that the yellow part has less variability. Is the variability significantly less? Yes, as confirmed by the Kruskal-Wallis nonparametric test for differences in variability (Handbook of Parametric and Nonparametric Statistical Procedures, David J. Sheskin, CRC Press LLC, 1997).
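One way to run such a check is sketched below, using SciPy’s kruskal function on each part’s subgroup ranges. The subgroup ranges are hypothetical, and applying the test to ranges is one reasonable reading of “differences in variability,” not necessarily the exact procedure used for figure 1.

```python
# Compare variability between parts by testing their subgroup ranges
# with the Kruskal-Wallis nonparametric test (scipy.stats.kruskal).
# Subgroup ranges below are hypothetical.

from scipy.stats import kruskal

blue_ranges   = [0.0006, 0.0007, 0.0005, 0.0008, 0.0006]   # hypothetical
yellow_ranges = [0.0002, 0.0003, 0.0002, 0.0002, 0.0003]   # hypothetical

statistic, p_value = kruskal(blue_ranges, yellow_ranges)
print(f"H = {statistic:.3f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Variability differs significantly; don't share control limits.")
else:
    print("No significant difference detected; shared limits may be reasonable.")
```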

Because the variability is significantly different, identical control limits cannot be shared across parts. So what can be done? Well, we need a more robust means of normalizing the data. Calculating and plotting Z-values is a wonderful way of managing different standard deviations between parts. In fact, even if means, standard deviations, and/or specification limits differ from part to part, Z-values can be used to make fair comparisons between those parts. They also result in common control limits and give us what we really want: a control chart for assessing true process control.

An X-bar chart’s Z-value plot points are the distance from a part’s overall mean, expressed in standard deviations. The mathematical formula for calculating the X-bar chart’s Z-values should be based upon each part’s unique mean and standard deviation:

Plotted Z-value = (X-bar – Mean)/Standard Deviation

Figure 2 is a display of the same data set as figure 1; however, plot points have been normalized using the Z-value transformation.

Figure 2 shows that the control limits are identical between the two different part numbers. Note that, like any Shewhart control chart, control limits are set at ±3 standard deviations. The Z-value control limits are actually ±3.0000, making it very simple to determine if a plotted value falls outside of the control limits. Because the plotted values have been converted to standard deviations, this common scale can accommodate any feature or part, no matter how statistically different they might be. The Z-value normalization is very handy for short-run situations, or, as in our case, for assessing machine consistency regardless of the parts it makes.
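Here is a minimal sketch of the Z-value transformation with hypothetical subgroup averages. Each part’s mean and standard deviation are estimated directly from its own subgroup averages, which is one simple choice among several (an X-bar chart would more typically use an estimate based on the average range).

```python
# Z-value normalization: standardize each subgroup average by its own part's
# mean and standard deviation so all parts share +/-3 control limits.
# Data are hypothetical, for illustration only.

from statistics import mean, stdev

# Hypothetical subgroup averages for OD location B, keyed by part
subgroup_averages = {
    "blue part":   [2.7497, 2.7503, 2.7499, 2.7502, 2.7498],
    "yellow part": [3.1249, 3.1251, 3.1250, 3.1252, 3.1250],
}

for part, averages in subgroup_averages.items():
    part_mean = mean(averages)
    part_sd = stdev(averages)          # each part's own standard deviation
    for xbar in averages:
        z = (xbar - part_mean) / part_sd
        flag = "OUT" if abs(z) > 3 else "ok"
        print(f"{part:12s}  x-bar={xbar:.4f}  z={z:+.2f}  {flag}")
```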

If you choose to use normalization procedures for assessing true process control, make certain your software can:

  1. Display unique control limits for each part on the chart. This helps highlight whether the variability or means differ by part.

  2. Determine if the variability is significantly different from part to part. Statistical tools such as the Kruskal-Wallis test can help determine the optimal normalization technique.

  3. Change normalization techniques on the fly. There are many normalization techniques that can be used. Make sure that various techniques are available and that they can be changed and applied to pre-existing data at any time (see the sketch after this list).

  4. Modify the normalization technique even after data have been collected. The use of a particular normalization technique is typically based upon empirical evidence. Therefore, you might not know which normalization is best in the beginning. Instead, collect some data to learn what you are up against. Then, change the normalization to suit your unique situation. The bottom line is, make certain the normalization technique can be modified at any time even after data have been collected. Otherwise, if you guess wrong, you might have to start all over.
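As referenced in item 3, switching techniques after the fact can be pictured as a single re-normalization step over already-collected subgroup averages. The sketch below is an assumption about how such a switch might look in Python, not a description of any particular SPC package.

```python
# Re-normalize already-collected subgroup averages under a selectable
# technique -- "deviation from target" or "z-value" -- so the choice can be
# changed after the fact. Structure and data are hypothetical.

from statistics import mean, stdev

def normalize(subgroup_avgs, target, technique="deviation"):
    """Return chart plot points for one part under the chosen technique."""
    if technique == "deviation":
        return [x - target for x in subgroup_avgs]
    if technique == "z":
        m, s = mean(subgroup_avgs), stdev(subgroup_avgs)
        return [(x - m) / s for x in subgroup_avgs]
    raise ValueError(f"unknown technique: {technique!r}")

# Hypothetical data collected earlier; switch techniques without recollecting.
blue_avgs = [2.7497, 2.7503, 2.7499, 2.7502, 2.7498]
print(normalize(blue_avgs, target=2.7500, technique="deviation"))
print(normalize(blue_avgs, target=2.7500, technique="z"))
```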

Summary
To apply true process control, you need the right statistical tool. Standard control charts whose control limits are based upon just a single part won’t suffice. Instead, consider using a control chart to control the process. The deviation-from-target and Z-value normalizations allow you to assess process control through time. Isn’t that the point anyway? Let’s not forget that the “P” in SPC stands for “process.”


About The Author


Douglas C. Fair

A quality professional with 30 years’ experience in manufacturing, analytics, and statistical applications, Douglas C. Fair serves as chief operating officer for InfinityQS. Fair’s career began at Boeing Aerospace, and he worked as a quality systems consultant before joining InfinityQS in 1997. Fair earned a bachelor’s degree in industrial statistics from the University of Tennessee, and a Six Sigma Black Belt from the University of Wisconsin. He’s a regular contributor to various quality magazines and has co-authored two books on industrial statistics: Innovative Control Charting (ASQ Quality Press, 1998), and Quality Management in Health Care (Jones and Bartlett Publishing, 2004).