Most quality practitioners are familiar with the Taguchi loss function, which contends that the cost of any deviation from the nominal follows a quadratic model. This is in contrast to the traditional goalpost model, where anything inside the specification limits is good, and anything outside them is bad.


This article will show where the quadratic loss model came from, how it is used to derive the economic safety factor, and its significant limitations. The Taguchi model is, in fact, a one-size-fits-all model for applications in which we lack probabilistic models for the quality characteristic in question. The Taguchi loss function is overly pessimistic in the absence of substantial variation, and its use may lead to excessive costs. Here we address only the nominal-is-best situation.

### Background

Matthew Barsalou cites a case study in which transmissions produced by the Ford Motor Co. were in specification but often caused warranty problems due to relatively high variation in comparison to transmissions produced by Mazda. “Although Ford-assembled transmissions were in specification, the deviations from nominal could be a problem when the tolerances stack up or are on opposite sides of the specification,” he writes. This doesn’t tell us there is a problem with the goalpost model, but rather that the goalposts may have been too wide. Tolerance stack-up shouldn’t be an issue for in-specification parts, and there are well-established statistical rules for determining where those specifications should be.

The Quality Council of Indiana offers an outstanding explanation of how the Taguchi loss function is actually used to locate the goalposts or acceptance limits. We’ll start, however, by showing where the loss function actually comes from.

### The Taguchi loss function

The equation below shows the Taguchi loss function for the nominal-is-best situation, where *x* is the measurement, *m* is the nominal, and *k* is a constant to be determined:

L(x) = k(x – m)²

On Page 441 of *Taguchi’s Quality Engineering* (John Wiley & Sons, 2004), authors Taguchi, Chowdhury, and Wu explain the functional limit as follows: “The exact middle condition is where the standard value cuts the distribution in half, for example, median and mode, and it is often abbreviated as LD50 (lethal dosage), that is, there is a 50-50 chance of life (L) or death (D)—50% of the people die if the quantity is less than the middle value, and the other 50% die if the quantity is greater.” It’s fair to set the cost of shipping a product at the functional limit as half the cost of an external nonconformance (in the hands of a customer) because there is a 50% chance of failure.

For example, consider a shaft with a nominal diameter of 0.500 in. It must fit a hole with a mean diameter of 0.504 in.; if the shaft is bigger than the hole, it won’t fit and the assembly will be nonconforming. However, if the shaft is 0.008 in. (8 mil) smaller than the hole, the fit will be too loose. Assume the hole diameter and shaft diameter follow the normal distribution, and both have a standard deviation of 1 mil. The result is a nominal-is-best application, and the functional limit for the shaft is a 4 mil deviation from the nominal, as shown in Figure 1. That is, if the shaft is exactly 4 mil above or below its nominal, there is a 50% chance it will not fit properly.
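Under the stated assumptions (hole diameter normally distributed with a 1 mil standard deviation), the 50-50 failure rate at the functional limit can be checked with a short Monte Carlo sketch. This is illustrative only; the helper name `failure_rate` and the trial count are my own choices, not from the article:

```python
# Monte Carlo sketch of the shaft-and-hole example: hole ~ N(0.504 in., 1 mil).
# A shaft fails if it is larger than the hole (won't fit) or more than
# 8 mil smaller than the hole (fit too loose).
import random

MIL = 0.001  # 1 mil = 0.001 in.
HOLE_MEAN, HOLE_SD = 0.504, 1 * MIL

def failure_rate(shaft_dia, trials=200_000, seed=1):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        hole = rng.gauss(HOLE_MEAN, HOLE_SD)
        too_big = shaft_dia > hole              # interference: won't fit
        too_loose = shaft_dia < hole - 8 * MIL  # excessive clearance
        failures += too_big or too_loose
    return failures / trials

# At the functional limit (4 mil above nominal), roughly half the shafts fail.
print(failure_rate(0.500 + 4 * MIL))  # ≈ 0.5
```

At the nominal itself, the same function returns a failure rate near zero, which is the other anchor point of the loss model.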

Assume the cost of a nonconforming assembly is $40, and it costs $2 to rework or replace a shaft that is outside whatever acceptance limits we select. Where should we set the acceptance limits for the shaft to minimize total costs? The Taguchi approach seeks to minimize the cost of poor quality by setting the probability of nonconformance *P(y)* times the cost of nonconformance equal to the cost of rework or replacement. In this case, we want to find *y* such that *P(y)* = 0.05, because this times the $40 cost of nonconformance equals $2.

### Source of the quadratic loss function

Where does the quadratic loss function come from in the first place? The Taguchi reference provides the explanation, starting on page 173. Define the loss function, which is unknown at the start, as *L(x)*. A Taylor series can model any function over a relatively short interval around a selected point, in this case the nominal *m*. We use the equation below to model the loss function, where *L'(m)* is the first derivative at *m*, *L"(m)* is the second derivative at *m*, and so on:

L(x) = L(m) + L'(m)(x – m) + [L"(m)/2!](x – m)² + [L'"(m)/3!](x – m)³ + …

We can simplify this by defining *y* = *x – m* as the deviation from the nominal to get the equation below. *L(0)* and all the derivatives are then evaluated at zero, which simplifies matters enormously:

L(y) = L(0) + L'(0)y + [L"(0)/2!]y² + [L'"(0)/3!]y³ + …

We now have a series in which two quantities, *L(0)* and *L'(0)*, are known, along with a reasonable assumption about the higher-order terms.

The loss function ought to be zero at the nominal so *L(0)=0*. The reference indicates that, even if this isn’t the case, it’s a constant that can therefore be ignored.

The loss function is minimized when there’s no deviation from the nominal, so the first derivative *L'(0)* also is zero.

The cubic and higher terms also can be ignored, which is generally a valid assumption. This leaves us with the following equation, in which half of the second derivative at zero is the constant *k*:

L(y) = [L"(0)/2]y² = ky²

Now recall that A₀ is the cost of shipping a product at the functional limit Δ₀, which is half the cost of a nonconformance. Setting L(Δ₀) = A₀ gives k = A₀/Δ₀², and thus the following equation:

L(y) = (A₀/Δ₀²)y²
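As a quick numeric check, the quadratic loss L(y) = (A₀/Δ₀²)y² can be evaluated with the article's figures (A₀ = $20, Δ₀ = 4 mil). This is a minimal sketch; the function name is my own:

```python
# Taguchi quadratic loss for the nominal-is-best shaft example:
# A0 = $20 at the functional limit delta0 = 4 mil.
A0, DELTA0 = 20.0, 4.0

def taguchi_loss(y, a0=A0, delta0=DELTA0):
    """Expected loss, in dollars, for a deviation y (in mil) from nominal."""
    return (a0 / delta0**2) * y**2

print(taguchi_loss(0.0))  # 0.0  -> no loss at the nominal
print(taguchi_loss(4.0))  # 20.0 -> A0 at the functional limit
```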

This fully explains the origin of the Taguchi quadratic loss model for the nominal-is-best application, which is, in turn, the source of the economic safety factor. First, however, let's look at what we are really modeling here. If we remove the cost element from the loss function, what we have left is simply the probability *P(y)* that the item will be nonconforming:

P(y) = y²/(2Δ₀²)

The 2 gets into the denominator when we recall that A₀ is half the cost of a nonconformance, so the full cost is 2A₀.

This quadratic probability model for the chance of an external failure as a function of the deviation from the nominal becomes the loss model *L(y)* after we multiply it by the cost of the external failure in question. The elephant in the room is the fact that we know the probability model to be accurate only at the nominal, where it’s negligible; and the functional limit, where it’s 50% by definition. There’s a lot of room between those two points for substantial discrepancies.
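The quadratic probability model implied by the loss function, P(y) = y²/(2Δ₀²), takes only a few lines to sketch with the shaft example's numbers (the function name is mine):

```python
# Probability of nonconformance implied by the Taguchi loss function:
# P(y) = y^2 / (2 * delta0^2), where y is the deviation from nominal
# and delta0 is the functional limit (4 mil in the shaft example).
DELTA0 = 4.0  # functional limit, mil

def p_taguchi(y, delta0=DELTA0):
    """Implied probability of nonconformance at deviation y (mil)."""
    return y**2 / (2 * delta0**2)

print(p_taguchi(0))       # 0.0 -> no failures at the nominal
print(p_taguchi(DELTA0))  # 0.5 -> 50% failure at the functional limit
```

Note that the model is anchored only at those two points; everything in between is an extrapolation of the quadratic form.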

### The economic safety factor

Suppose *A* is the cost of an internal failure, such as reworking the item or scrapping it before it reaches the customer. Where should we set acceptance limits, tolerance limits, or safety limits to minimize the cost of poor quality? Another way to say this is, “Where should we put the specification limits, which must be well inside the functional limits, to minimize the total cost of internal and external failures?” The deliverable is Taguchi’s economic safety factor.

The Taguchi reference defines Δ as the deviation from nominal at which the Taguchi loss function equals the cost *A* of rejecting a part prior to shipment. Setting L(Δ) = A and solving gives the equation below; the deliverable is the economic safety factor, depicted by the Greek letter φ:

Δ = Δ₀√(A/A₀) = Δ₀/φ, where φ = √(A₀/A)

Let’s return to the shaft and its functional limits of 0.500 +/– 0.004 in. Recall that the cost of an external failure is $40, and the cost to scrap or rework the part is $2. The functional limit Δ₀ is therefore 4 mil, and the expected cost A₀ of an item at the functional limit is $20. The equation above delivers the following:

φ = √(20/2) ≈ 3.162, so Δ = 4/3.162 ≈ 1.265 mil

This tells us we should reject any part that is more than 1.265 mil from its nominal.
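The economic safety factor calculation, using the article's numbers, can be sketched as follows:

```python
import math

# Economic safety factor with the article's figures:
# A0 = $20, the loss at the functional limit (half the $40 failure cost)
# A  = $2,  the cost of scrapping or reworking a part before shipment
A0, A = 20.0, 2.0
DELTA0 = 4.0  # functional limit, mil

phi = math.sqrt(A0 / A)  # economic safety factor
delta = DELTA0 / phi     # economic acceptance limit, mil

print(round(phi, 3))    # 3.162
print(round(delta, 3))  # 1.265
```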

Again, the quadratic probability model shows that the probability that such a part will be too large or too small is P(1.265) = 1.265²/(2 × 4²) ≈ 0.05 which, when multiplied by the $40 cost, is $2, the same as it costs to rework or replace the shaft. This is not, however, the best possible model, because the Taylor series is just an approximation of the actual situation.

### Taguchi vs. probabilistic models

In *Reliability in Engineering Design* (Wiley, 1991), authors Kapur and Lamberson devote an entire chapter to probabilistic engineering design that warns, “In reality, the failure probability may vary from a low to an intolerably high value for the same safety factor.” Page 79 of this reference gives some examples of strength vs. stress and the resulting reliability. The Taguchi economic safety factor may be equally unsuitable for a given application. In our example, we can find the probability, given any shaft diameter *x*, that it will be too large or too small as shown by

P(y) = Φ((y – Δ₀)/σ) + Φ((–y – Δ₀)/σ)

where *y* is the deviation from the nominal in mil, σ is the standard deviation of the hole diameter, Δ₀ (the functional limit) is 4 mil, and Φ is the operator for the cumulative standard normal distribution.

The expected cost of accepting a shaft *y* mil from the nominal is then given by the equation below, where *G(y)* is the probabilistic cost of accepting the shaft:

G(y) = $40 × [Φ((y – Δ₀)/σ) + Φ((–y – Δ₀)/σ)]

This is exactly $20 at the functional limit, the same as we get from the Taguchi model, because the cumulative standard normal distribution at zero is 50%.

Recall that the Taguchi economic safety factor gave us plus or minus 1.26 mil, where the chance of a nonconformance is purportedly 0.05. However, the probabilistic model

P(1.26) = Φ((1.26 – 4)/1) + Φ((–1.26 – 4)/1) ≈ 0.003072

shows it to be 0.003072 when the standard deviation of the hole is 1 mil. The Taguchi acceptance limits are therefore, in practice, far too tight for this application.
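The probabilistic failure probability can be computed with Python's standard-library normal distribution. A sketch with the article's numbers (the function name `p_actual` is my own):

```python
from statistics import NormalDist

# Probabilistic failure model for the shaft example:
# functional limit 4 mil, hole standard deviation 1 mil.
DELTA0, SIGMA = 4.0, 1.0
Z = NormalDist()  # standard normal distribution

def p_actual(y, delta0=DELTA0, sigma=SIGMA):
    """P(too large) + P(too loose) for a shaft y mil from nominal."""
    return Z.cdf((y - delta0) / sigma) + Z.cdf((-y - delta0) / sigma)

# The Taguchi limit of 1.26 mil purportedly carries a 5% failure risk,
# but the actual probability there is an order of magnitude smaller.
print(round(p_actual(1.26), 6))  # 0.003072
```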

Figure 2 compares the Taguchi and probabilistic models side by side. They have two points in common, namely at the functional limit, where the chance of failure is 50%; and zero, where no losses are expected. Those are, however, the only points they have in common, and the Taguchi model overstates the expected probability of failure and therefore loss inside the range of interest.

In this case, the probabilistic model begins to resemble the goalpost model. If there were no variation whatsoever in the hole size, the expected loss would be zero at less than 4 mil deviation from nominal and $40 at more than 4 mil deviation from nominal. If the standard deviation of the hole diameter increases to 2 mil, the Taguchi and probabilistic models become almost indistinguishable, as shown in Figure 3. The chief difference between them is, in fact, the nonzero probabilistic loss when the shaft is at its nominal, where there is now a 0.02275 chance (the tail area two standard deviations from the mean of the hole diameter) that it will not fit. Remember that the Taguchi model assumes zero loss at nominal, which holds unless the process capability is very poor. In practice, we would probably never even consider operating a noncapable process of this nature.

### Probabilistic acceptance limit

Return to the premise that the hole size has a 1 mil standard deviation. Where should we put the shaft's acceptance limit so the expected cost of a nonconformance (5% times $40) equals the cost of scrapping or reworking the shaft ($2)? Solving Φ((y – 4)/1) = 0.05 gives y = 4 – 1.645 = 2.355 mil; that is, the shaft at the upper acceptance limit is about 1.645 standard deviations below the mean diameter of the hole. The acceptable deviation from nominal is therefore 2.355 rather than 1.26 mil. The Taguchi limits are therefore far too conservative.
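The probabilistic acceptance limit can be found directly with the inverse normal distribution. A sketch under the article's assumptions (the too-loose tail is negligible at this deviation, so only the too-large tail is solved):

```python
from statistics import NormalDist

# Probabilistic acceptance limit for the shaft: set the deviation y so
# the failure probability is 5%, making the expected failure cost
# (0.05 * $40) equal the $2 rework cost.
DELTA0, SIGMA = 4.0, 1.0  # functional limit and hole std. dev., mil
TARGET_P = 2.0 / 40.0     # rework cost / external failure cost

# Only the too-large tail matters here; the too-loose tail is negligible.
y_limit = DELTA0 + SIGMA * NormalDist().inv_cdf(TARGET_P)
print(round(y_limit, 3))  # 2.355
```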

### Summary

The quadratic Taguchi loss function for nominal-is-best is based on a Taylor series approximation that fits two points: 1) zero deviation from nominal, where the loss function is minimized, and 2) the functional limit, where 50% of the items will fail. Unlike probabilistic reliability models, it requires no information about the actual probability of failure between those points. Under these conditions, the Taguchi model is overly conservative when there is little variation, but it corresponds more closely to the probabilistic model as the variation increases.

We should therefore never take any model for granted. We can’t assume that if we put numbers into one end of the model, the deliverable that comes out of the other end will meet our real-world requirements. We must always look at the situation at hand and select the appropriate model for the job.

## Comments

## It's nice to see the Taguchi…

It's nice to see the Taguchi loss function discussed again. Don Wheeler wrote extensively on this subject; I highly recommend *The Process Evaluation Handbook*. It discusses not only the quadratic loss function but several other loss functions that are common in manufacturing and service, and has tables for calculating loss.

When I first attended Wheeler's *Advanced Topics* seminar, one of the exercises we did was estimating the Taguchi loss from one segment of the *Japanese Control Chart* record. This was a process operating well within the specs (Cpk > 2), but it had a run where the process center shifted off nominal for a period of time. The entire class gasped when we came up with the final number. I don't remember what the number was, but it was in the thousands of dollars.

I do disagree with the statement that "tolerance stack-up shouldn't be an issue for in-specification parts." This is the thinking that caused U.S. auto manufacturers to lose out to the Japanese manufacturers for years until Deming, Wheeler, and others taught them the error of their ways. That Batavia transmission study was a shock to Ford. When they went to gage the IDs in the valve body, the Japanese transmissions were so good that the inspector thought his gage must be off. He tried a second one, which was identical to the first; now he knew his gage was broken. He called a gage repairman, who came and verified that the gage was working and calibrated. The inspector checked the rest of the sample (10 valve bodies altogether), and they all looked identical and nearly perfect. This was when Ford finally had to admit that they had no idea how the Japanese were able to build the parts so much better. As one Ford executive put it, "While we were working hard to make all the parts in spec, they were working to make them all identical."

I worked at Ford in the 70s, and can tell you from personal experience that tolerance stack can matter a great deal. It was common practice in the production shops then to set up the machines at the high end of the spec. We knew it would take about an hour for tool wear to allow the parts to drift down to the lower end of the spec; we would check and adjust to bring the readings back up, then let it drift. We knew nothing at the time about SPC or Taguchi, but we did get curious one day about the tolerances. I happened to be working in an area that ground IDs on a circular part that also had an OD. We wondered, "What if our ID was as wide as it could be (still in spec), and the OD was as small as it could be? What if the part that fits inside it had an OD that was as small as it could be, and the part that our part fits inside had an ID that was as wide as it could be?" Pure tolerance stack-up.

So we tried it. The subassembly sounded like a washing machine full of bolts, and seized up after just about 40 seconds.
