
Steven Ouellette

Six Sigma

Know the Process Before Altering Its Specifications

You must balance risk and benefits when determining acceptability.

Published: Monday, June 21, 2010 - 05:00

So I thought I was done with measurement system analysis after my last column, but I just finished reading Don Wheeler’s June 1 column, “Is the Part in Spec?” and the first thing I thought was, “Well, that was… complicated and ultimately unhelpful in answering the article’s title question.” I like a diversity of viewpoints, but they have to make sense. Does Wheeler’s? Let’s take a closer look.


To be fair, I corresponded with Wheeler as I wrote this response to make sure I was accurately portraying his viewpoint: namely, that the article is narrowly about whether a particular part conformed, given a particular measurement. That is exactly the point I found missing from the article. The way I see it, the article talks about (and would be interpreted as) changing production limits for the process based on measurement error. However, doing so could drive some bad decisions, even if this was not what Wheeler intended. I showed the article to a number of people, and that is how they interpreted it as well.

To determine if a part is conforming to spec, we must understand the measurement system, among other important things. Repeated measurements will give us an estimate of the measurement error, σE = 54.458. So far so good. But then (to my way of thinking), the line of reasoning goes off the rails.
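(For reference, the "probable error" that shows up later, 36.759, is just 0.675 × σE, the median absolute measurement error when the errors are normally distributed. Here is a minimal sketch in Python of where such numbers come from; the repeated readings are hypothetical, invented only to show the mechanics, since Wheeler computes σE = 54.458 from his own repeated-measures data.)

```python
# Sketch: estimating measurement error from repeated measurements of
# one part, and converting it to a probable error. The readings are
# hypothetical; Wheeler's sigma_E = 54.458 comes from his own data.
import numpy as np

repeats = np.array([2531.0, 2449.0, 2588.0, 2502.0, 2417.0])  # hypothetical
sigma_e_est = repeats.std(ddof=1)   # sample std. dev. = measurement error

# Probable error = 0.675 * sigma_E (median absolute error, normal errors)
print(round(0.675 * 54.458, 3))     # 36.759, the value used below
```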

Wheeler’s article could very well give someone the idea that they need to set their manufacturing specifications to some tighter limit to ship conforming product, given a certain measurement error. Nothing could be further from the truth. As process engineers know, changing a manufacturing specification doesn't affect the process output; changing the process affects the process output.

There are two processes that affect the as-measured product conformance to customer specification—the manufacturing and the measurement processes. The manufacturing process could be helped by first attaining control if you haven’t already, and second, by decreasing variability around the customer target by designed experimentation. But before you do that, you probably should deal with the measurement process. Although that is not completely necessary to make improvements, a gauge that is highly variable (but in control) will increase your sample size needed to detect improvements and thus make improvement more expensive.

Like any other process, a measurement system must show its capability to meet requirements. It also must show control, as I describe in my measurement system analysis (MSA) articles listed at the end of this column. Capability for measurement systems is determined in regard to being able to measure within the specification limits.

If you have a customer specification to which your gauge is not capable of measuring, setting tighter manufacturing limits will only result in you internally scrapping a lot of product that is probably within specification. Instead, if you have a gauge that is incapable of measuring to the customers’ specification—here comes a radical idea—consider fixing the measurement system or buying a new one.

This course of action should be obvious. Why is it not when using Wheeler’s approach and generating manufacturing limits? Because there is no calculation of the measurement system’s capability to measure to a specification. All the calculations of watershed specifications and probable errors so far remove the reader’s focus from the reality of measurement error that by the time you get through calculating such things and someone walks by and suggests buying a new gauge, you say, “Wait, what?”

Wheeler’s notion seems to allow for the backward view that changing a specification will magically cause the process to produce product that meets that specification. This is a throwback to the days before the “Red Bead Experiment” showed that changing the specification regardless of inherent process variation only stresses out our “willing workers.” In my experience, an earnest desire to meet a tighter specification does not, in fact, have an effect on the process output.

Wheeler's article doesn’t give a specification for the viscosity, so let’s make some up to play with and see how it works. Let’s say that, regardless of viscosity target, customers expect ±175 centistokes around that target as the tolerance. Let’s say one of the products is targeted to 2,500 centistokes.

Now that you have read Wheeler’s article, is your gauge even capable of making a decision about conformance? Or as your boss would ask, “Can you use it?”

If you are like most people, at this point you will realize that you don’t know how to answer that question, given the information in the article. Wheeler doesn’t answer that question. And that is the question your boss is going to want answered.

Wheeler tells us, “If you tighten the watershed specifications by three probable errors on each end, then all of the product you will end up shipping will have at least a 99-percent chance of conforming.” (This is where I get the idea that someone might think that tightening up the specifications is the way to ship conforming parts—a leap, I’ll grant you.)

All right, let’s follow his advice and see where it leads. You go to production and find out that the current product is in control with a capability as measured of Cp = Cpk = Cpm = 1 (mean = 2,500, as-measured standard deviation = 58.324, normal distribution). This means that the as-measured variability, directly from the lab database and including both product and measurement error, is exactly equal to the customer’s specification width and is centered and on-target. You are currently making 99.73 percent of the product in specification.

Now we will use Wheeler’s calculations to generate the watershed specifications—2,328.5 and 2,671.5. Coming in by three probable errors on each side, based on our short repeated-measures study (3 × 36.759 = 110.277), means moving our internal rejection specifications to a lower limit of 2,438.777 and an upper limit of 2,561.223. If we use those as our internal rejection limits, we will now be making 29.385-percent scrap, assuming a normal distribution.
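If you want to check my arithmetic, here is a minimal sketch in Python (using scipy, and assuming the normal distribution stated above) that reproduces both the 99.73 percent currently in spec and the scrap rate under the tightened limits:

```python
# Reproduces the in-spec and scrap percentages above, assuming a
# normal distribution. All numbers come from the example in the text.
from scipy.stats import norm

mean, sigma_as_measured = 2500.0, 58.324   # in control, on target
lsl, usl = 2325.0, 2675.0                  # customer spec: 2,500 +/- 175
pe = 36.759                                # probable error = 0.675 * sigma_E

# Fraction within the customer's specification today.
in_spec = norm.cdf(usl, mean, sigma_as_measured) - norm.cdf(lsl, mean, sigma_as_measured)
print(f"{in_spec:.2%}")                    # ~99.73%

# Tighten the watershed specs (2,328.5 and 2,671.5) by three probable errors.
lsl_tight = 2328.5 + 3 * pe                # 2438.777
usl_tight = 2671.5 - 3 * pe                # 2561.223

scrap = norm.cdf(lsl_tight, mean, sigma_as_measured) + norm.sf(usl_tight, mean, sigma_as_measured)
print(f"{scrap:.1%}")                      # ~29.4% internal scrap
```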

Wait, what?

The process going in was already making 99.73-percent in specification with the current gauge variability, so that 29.385-percent scrap is almost certainly all in specification. So why are we scrapping it, when it neither measures out of spec nor do we expect that the product itself is really out of spec? Because the manufacturing specs told us to.

How can this be? It fundamentally doesn’t make any sense to me to move internal rejection limits based on gauges with high measurement error in the absence of understanding the process variability. This approach is confusing, backwards, and, it turns out, unnecessary.

I would use that measurement error of 54.458 and compare it to the customer’s specifications by calculating a %R&R.

We would discover that the viscosity gauge had a %R&R of 80.1 percent. So, about 80 percent of the customer specification is taken up by the measurement device itself. Still, the process as it stands, variable gauge and all, is minimally capable of producing product that meets the specification. Presumably, that is because the true product variability in the absence of measurement error is pretty low, so even when thrown together with that large measurement error, we still make almost everything in specification, as measured.
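(The 80.1 percent figure is consistent with the common 5.15σ, or 99-percent gauge-spread, convention for %R&R. A quick sketch, assuming that convention:)

```python
# %R&R sketch, assuming the common 5.15*sigma (99% spread) convention,
# which reproduces the 80.1 percent figure in the text.
sigma_e = 54.458                 # measurement error from repeated measures
tolerance = 2 * 175.0            # customer spec width, +/-175 centistokes

pct_rr = 100 * (5.15 * sigma_e) / tolerance
print(f"{pct_rr:.1f}%")          # 80.1% of the spec consumed by the gauge
```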

By way of illustration, something like the graph in figure 1 is really going on.

Figure 1: Product viscosity

Note that the “real” product viscosity is tightly conforming to the nominal, that 80 percent of the spec is taken up by the measurement error alone, and that the as-measured variability is minimally capable at Cp = Cpk = Cpm = 1. (At this point, you should be thinking, “Thank goodness standard deviations are not additive!”)
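(Variances, on the other hand, are additive, so we can back out the “real” product variation implied by the numbers above; a quick check:)

```python
import math

# Variances add; standard deviations do not. Back out the "real"
# product standard deviation implied by the numbers in the example.
sigma_total = 58.324             # as-measured: product + measurement
sigma_e = 54.458                 # measurement error alone

sigma_product = math.sqrt(sigma_total**2 - sigma_e**2)
print(round(sigma_product, 1))   # ~20.9: the product itself is tight
```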

Hint: if the customer is going to tighten the specification, your actual product is performing better than the measurements would indicate, so look at the measurement process before thinking you need to improve the manufacturing process. With the low actual product variability and the high measurement error still resulting in minimal capability, this gauge might be acceptable for production.

Side note: Acceptability is different from capability. Capability is gauge error compared to specification width, as measured by %R&R, and is a calculation. Acceptability is whether it is reasonable to use that gauge in production, and is a business decision. If my product variability were higher (i.e., I was in control and making more product out of specification) or the process were out of statistical control, I would run the chance of misclassifying product as conforming or nonconforming. I describe this in the MSA articles listed below. In such a case, this gauge would definitely be too risky. Right now, based on our historical data, the real product value doesn’t often get anywhere near the specifications, but—“Danger, Will Robinson!”—if you do have an excursion, you may not detect it right away, which might be bad. Perhaps very, very bad. However, if it goes on long enough, you will eventually detect it on your control chart. You must balance risk and benefits when determining acceptability.

If this process is a high priority for improvement or at risk for excursions, we would probably first investigate the measurement system for opportunities to decrease measurement error. If that failed, we might consider purchasing a new one. At no point would we consider moving in the internal rejection limits. That is just crazy talk.

If, on the other hand, the process were highly capable, say Cp = Cpk = Cpm = 2, both the process and the actual product variability would be low compared to the specification. We would be there if our tolerance were ±350, so it would look something like the graph in figure 2.

Figure 2: Product viscosity in capable process

If I go out of control, I had better be investigating it, regardless of whether it is in or outside the manufacturing limits. I still don’t need tightened manufacturing limits, although they are not as damaging as with the earlier example.

If the process is highly variable compared to the specification, say Cp = Cpk = Cpm = 0.5, then I have real problems because I have promised to do something (i.e., make specification) that I clearly am incapable of doing. In such a case, something like the graph in figure 3 is going on.

Figure 3: Viscosity is highly variable compared to specification.

Now this might be the one place where the tightened specifications make sense mathematically. However, I would not enjoy watching you tell your boss that not only is your process totally incapable of making the specification, but you also have to scrap 59.97 percent of what you make, as opposed to the 13.36 percent that, as measured, falls outside the customer’s specification.
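The same arithmetic as before, sketched under the same normality assumption, reproduces both percentages:

```python
from scipy.stats import norm

# Low-capability case: Cp = 0.5 against the +/-175 spec implies an
# as-measured sigma of 175 / (3 * 0.5) = 116.67.
mean, sigma = 2500.0, 175.0 / 1.5
lsl, usl = 2325.0, 2675.0                  # customer spec
lsl_t, usl_t = 2438.777, 2561.223          # tightened internal limits

out_of_spec = norm.cdf(lsl, mean, sigma) + norm.sf(usl, mean, sigma)
scrap = norm.cdf(lsl_t, mean, sigma) + norm.sf(usl_t, mean, sigma)
print(f"{out_of_spec:.2%} vs {scrap:.2%}") # ~13.36% vs ~59.97%
```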

Fortunately, I don’t often see processes with such low capability anymore. And really, do you need tightened specifications to tell you that you had better get off your duff and work on reducing the process and measurement variability for this process? On your way to do that, don’t forget to go punish sales for taking this business. (Make them do the math above; they hate that.)

My point is that you cannot efficiently decide if a part is in or out of specification without knowing more about the process itself than the measurement error. Following the recommendations from Wheeler’s article produces results ranging from meh to disastrous.

Rather than using watershed specifications and coming in by a certain number of probable errors (which to me is just one step up from Acceptance Sampling), I would recommend doing a proper measurement system analysis followed by a capability analysis to answer the question, “Is this part in spec?”

So don’t forget: Test for and attain control in your measurement process first, assess the capability of the measurement system to measure within the specification, then determine if the gauge is acceptable for use based on factors such as the total variability, cost to make multiple measures, cost of misclassification, and cost to replace the gauge. Move on and do a traditional capability study so that you can build a smart reaction plan. Based on what you find, assess your need for reducing variability of the gauge and/or the process, and use the tools in the quality sciences to do so. That sounds pretty straightforward, and I have given you the tools for much of that in the MSA articles listed at the end of this column.

Wheeler’s column, I fear, has not provided a framework to answer the question, “Is this part in spec?” It could mislead someone into making a bad decision by tightening the manufacturing limits in an effort to answer Wheeler's final question, “How can we be sure we are shipping conforming product?” Following the process his column seems to promote could encourage bad behavior—and it would be hard to “gauge” the costs of doing so.

The approach I use to perform an MSA is detailed in this sequence of articles:

Letting You In On a Little Secret

The Mystery Measurement Theatre

Performing a Short-Term MSA Study

Performing a Long-Term MSA Study

Destructive Gauges and Measurement System Analysis


About The Author


Steven Ouellette

Steve Ouellette, ME, CMC started his career as a metallurgical engineer. It was during this time that his eyes were opened to find that business was not a random series of happenings, but that it could be a knowable enterprise. This continues to fascinate him to this day.

He started consulting in 1996 and is a Certified Management Consultant through the Institute for Management Consulting. He has worked in heavy and light industry, service, aerospace, higher education, government, and non-profits. He is the President of The ROI Alliance, LLC. His website can be found at steveouellette.com.

Steve has a black belt in aikido, a non-violent martial art, and spent a year in Europe on a Thomas J. Watson Fellowship studying the “Evolution, Fabrication, and Social Impact of the European Sword.”


Comments

Specs

I think whether to use Dr. Wheeler's approach for setting the specs "depends on" the type of industry you are in... If you are in the healthcare, aerospace, or auto safety industry, the ramifications of a product failure are not worth the risk... this approach is applicable there. However, we must at the same time work to improve: 1. the measurement system and 2. the process. We can use this approach until we have some success. In the meantime we understand that we are throwing away a lot of conforming product due to misclassification errors.

But if you are in a consumer goods industry like adhesive tape, we don't have to kill ourselves by tightening the specs, because it is worth taking the risk in order to minimize the producer's risk; otherwise we will scrap or rework lots of good product through misclassification (type 1 error).

The answer to the question is (every engineer's and statistician's favorite answer): "It depends."

Robin Francis

Hi RFRANCIS2 - thanks for

Hi RFRANCIS2 - thanks for reading and commenting!
.
It makes more sense to me to use that approach, at most, as a stopgap. If I am in a high-risk industry, why should I tolerate high gauge variation as a proportion of spec?
.
Anyway, that is my thought, which hopefully the article conveys.

Personal sniping at Don ?

Don Wheeler has provided another of his excellent, informative and well explained articles. Steve's reply seems almost like personal sniping at Don, with nonsensical comments like "Wheeler’s notion seems to allow for the backward view that changing a specification will magically cause the process to produce product that meets that specification."

Hi ADB, thanks for reading

Hi ADB, thanks for reading and commenting.
.
I hope this does not come across as a personal attack. When I write these Six Sigma Heretic articles, they tend to come out snarky for some reason - ever since the beginning - but hopefully not disrespectful. I think it is becoming clear that people who are familiar with Wheeler's approach (as I am too) see past the article to what they think he meant, whereas people who are not familiar with the approach could read that article and get the wrong idea.
.
Consider the possibility and see what you think.

Education...

Wheeler provides access to other publications on his website that provide the theory and rationale for this latest Quality Digest article. Those who have read these corresponding publications should recognize the profound and extremely important points he is trying to make with respect to the improper estimation and application of traditional gauge R&R.

However, I must admit that I have found the path from paper to practice much clearer with his previous articles (in fact we have implemented his teachings from nearly all of his past articles over the years); in contrast, for this article, the path is a bit muddy... and so I tend to agree with Steven that the intent of this latest article could be misunderstood and misapplied.

Perhaps Dr. Wheeler may find this feedback useful and offer a clarification to his fan base...

Thanks for reading the

Thanks for reading the article VPSCHROEDER, and for your feedback!

Trend in Quality Digest Articles

I see a trend in Quality Digest articles, i.e., criticize Don Wheeler and try to become famous by doing so :)

Just kidding!

Guru
(Gurbachan Chadha)

Hi Guru - Jeez, I hope not.

Hi Guru,
.
Jeez, I hope not. I respect Dr. Wheeler quite a bit; though I think he is wrong about a few things, we are very collegial about it.

Guardbanding

I don't think this is an either/or situation. Wheeler answered a narrow question with a viable narrow answer.
He never said we shouldn't improve our processes or our measurement systems - in fact, he is one of the longest-standing (living) proponents of doing exactly that. He simply addressed a means of dealing with results that are at the limit(s) in the presence of significant measurement error.

I too advocate and have been very successful at improving process performance and measurement system performance, BUT I've also used guardbanding and still use it to protect the customer until the process or measurement system can be improved. And in several industries that I've been in, the measurement systems may be difficult to improve to the point where guardbanding can be eliminated quickly, even when the process itself is greatly improved. Processes go out of control, vendors go out of control, physics happens. 'Marginal' material will be produced. And while the author is correct that simply changing the limit at which we accept or reject product doesn't improve the PROCESS OUTPUT, it does improve the quality of the SHIPPED product. A little bit of scrap or rework is far less expensive than failures in the customer's hands...

Hi BDaniels, and thanks for

Hi BDaniels, and thanks for reading and commenting!
.
My position is that Wheeler's article does not convey, to those reading it alone without the context of other materials, what the correct actions should be. So this is partly an issue of communication, partly a difference of approach. I know Wheeler knows this stuff, but you can't judge the impact of an article on what you think the author meant; you must judge it on what is in the article. The problem is that the article tells the reader that in order to ship high-conformance product, you must tighten the manufacturing specs, and this is simply not true. Or rather, it is only true in narrow circumstances and is not true across the board. And we are not talking about a "little" scrap and rework... in the example above, we know that around 30 percent will be scrapped or reworked despite in fact being really close to target and far away from the spec limits. I call that unnecessary waste.
.
Had the article been about the very narrow question "How do I determine if a part is in spec, regardless of process understanding and in the presence of known gauge error?" then it would have been a lot shorter and would have mentioned the applicable caveats. But when multiple people read it and conclude that it is an article about a process used to ship conforming product (which I contend is a possible reading), we have a problem.
.
As you say, in the few circumstances where this applies, this should be a temporary (and expensive) holding action. You and I and maybe Wheeler know this, and yet the concept is not in the article.
.
As you say, it is only relevant in the situation where you have high gauge variability and a process that is either out of control or with a Cpk < 1 (or possibly high-risk as other comments present). You and I and maybe Wheeler know this, but it too is not in the article.
.
I read Wheeler's article and thought, "Oh my goodness, people are going to read that and think they need to decrease internal rejection specs in order to get conformance." I base this reaction on specific sentences that say exactly that. And this interpretation drives incorrect (and unnecessarily expensive) behavior.
.
I fully concede that I may be interpreting the article incorrectly, but if so I am not the only one and I think it is valuable to point out how this interpretation could be devastating.
.
Anyway, I hope my article is entertaining and useful!

A more thorough reading might be needed

Having read and used all of Wheeler's material on EMP, I might be biased, but I didn't see any problem with his column. He is not talking about changing the specifications; he's talking about how much of the specification is useful when the measurement error is shown to be of some certain value. If I measure something that's technically "in spec" at 6.1 units, but I have 0.5 units of probable measurement error, there is a chance that I am actually out of spec. All Wheeler is talking about is how that can be quantified and adjusted for. He's not talking about changing the specs; he's talking about playing the slice in the measurement system so we can be sure that what we are measuring is in specification.

Hi Rip - You might be right

Hi Rip,
.
You might be right - that is Dr. Wheeler's position as to what the article is about. But I just don't see it, and I spent weeks looking at it and had multiple people read it, and they all concluded that he was saying to tighten rejection limits in response to high gauge variability in order to ship conforming product. So at least a few of us are reading it that way, and I thought it valuable to point out the fallacy of that approach.
.
But note: if that thing you measure at 6.1 comes from a process that is in control with a Cpk > 1, it is almost certainly in spec. If that 6.1 is taken without any process context, sure, it might be out of spec or in spec. My article is trying to get you to look at that extra context in order to make informed conformance decisions.
.
Thanks for reading!!

Altering Specs

I also used guardbanding when I worked in the semiconductor industry. The test specs would be tightened up by 2 PE, which would provide additional assurance that what we were shipping to our customer would meet their specification requirements. The thought process was that it was better to scrap some borderline product than have a customer return, complaint, SCAR, factory visit... you get the idea.

Rich

Guardbands [mfg limits]

In a previous automotive manufacturing environment we established 'guardbands' set at 3 standard deviations on the gauge error. This was to ensure we did not pass any product that might be rejected by the customer if tested. The product was crash sensors. We were willing to make a class one error to avoid a class two. In all the millions we made, we have never had one come back or lost in litigation, because when retested, they were always within spec. We have had lawyers bringing product back in plastic bags that we could not touch unless under their supervision. They always passed within specification. We used horizontal linear accelerators to simulate a crash speed. We could measure in units of 0.001 MPH or less, but we had multiple test machines. The specification was to 0.1. All were correlated back to a master thruster that was traceable to the NBS. We always used two accelerometers that were compared to each other. If one started to drift, the test would abort and the thruster would shut down until re-correlated with new accelerometers.

We were aware we could be calling good product bad, but due to the nature of the product we could not afford to pass bad product. If the reject rate increased statistically, the thruster would be shut down and maintenance performed until the variation was reduced.

Although I am now in a different industry, I still believe the method we used was valid. All correlations used were statistically based using ANOVAs. The exact method is proprietary.

Guardbanding

Hey guys, thanks for reading and posting!
.
As far as guardbands, I have no quarrel with these at all, as long as you understand the costs. We too have had to do similar things over the years, but my point is that you have to make such decisions with an understanding of the process as well as the costs involved.
.
For the example with crash sensors, if you had a control chart on this product characteristic and it showed that you were in control, you probably never made an accelerometer anywhere near a spec limit in your life, and every single one you scrapped was not just perfectly OK, but darn near the target. On the other hand, if your control chart was out of control (as it sounds like from your description), long-term you should have been working to attain control so that you didn't have the anxiety of knowing that measurement error could be allowing bad stuff to ship, at the potential cost of someone's life. Short-term, you might use temporary rejection limits to make up for your lack of control.
.
It all has to do with decomposing the sources of variation (as I did in those graphs above). Which I bet was involved in your ANOVA-based approach. ;-)
.
That said, in some cases (e.g. aerospace, auto safety) due to non-mathematical considerations (and nothing is ever just a cost-benefit analysis), you might very well have crazy rejection limits. I am only saying you had better understand those costs and build them into the price.