
Published: Monday, September 30, 2019 - 12:02

I’ve written a lot about how to evaluate the uncertainty of measurements. My articles have ranged from basic introductions to metrology and uncertainty budgets to more advanced topics such as sensitivity coefficients and Monte Carlo simulation. To date, all of the examples I’ve used have been for variable gauges. These are measurement instruments that give a numerical measurement result on a scale, dial, or digital display.

Attribute gauges are a different type of instrument that gives a binary pass/fail measurement result. Examples of attribute gauges include go/no-go plug gauges, slip gauges, and many visual inspection processes. If you were to apply my previous instructions on uncertainty evaluation directly to an attribute gauge, it’s likely that you would have some difficulty and possibly even conclude that this is not possible.

The difficulty arises in evaluating the repeatability and reproducibility (R&R) of these measurements. Because they do not produce a numerical result, it is not possible to calculate a standard deviation directly from repeated measurements of a single part. By extension, the standard gauge R&R ANOVA methods will not work either. However, as you will learn in this article, it is still possible to estimate repeatability from a series of repeated measurements.

Attribute gauges are common in manufacturing quality control. It is therefore important to understand the accuracy and capability of these measurements. As I have discussed previously, uncertainty evaluation is the most rigorous way to evaluate the “accuracy” of a measurement.

Uncertainty evaluation involves considering all the quantities, or factors, influencing the measurement result. The uncertainty of each individual influence is evaluated. A mathematical model is used to evaluate how these influences combine to produce the final measurement result. There are a number of ways to perform these calculations, but the simplest and most intuitive is generally to use an uncertainty budget.

An uncertainty budget is a table that lists each influence quantity on an individual row. There are individual columns for:

• The names of each influence quantity

• The numerical value of the uncertainty for each influence. For example, a standard deviation, range, or half-width.

• The probability distribution that describes each uncertainty

• Divisors that convert all of the values into standard deviations

• Sensitivity coefficients that represent the effect that the influence has on the final measurement result

• The standard uncertainty for each influence. This is the uncertainty of the measurement that results from the influence being considered.

Most uncertainty budgets will include some standard influences, such as calibration uncertainty, repeatability, and environmental factors. Each source’s uncertainty may be evaluated using Type A or Type B methods. Type A simply means the statistical evaluation of repeated measurements, while Type B is any other method. Repeatability is almost always evaluated as a Type A uncertainty. For a simple repeatability study of a variable gauge, this means measuring the same part a number of times and calculating the standard deviation of the results. Calibration uncertainty is usually evaluated as a Type B uncertainty by taking the uncertainty value from a calibration certificate.
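To make the budget columns concrete, here is a minimal Python sketch of how the rows combine into a single standard uncertainty. The influence names, values, and divisors are hypothetical placeholders, not figures from this article; the arithmetic (divide by the divisor, multiply by the sensitivity coefficient, then root-sum-square) is the standard budget calculation.

```python
import math

# Hypothetical uncertainty budget rows: (influence, value in mm, divisor, sensitivity).
# Divisors convert each value to a standard deviation, e.g. sqrt(3) for a
# rectangular half-width, or 2 for a k=2 value from a calibration certificate.
budget = [
    ("Calibration",        0.0020, 2.0,          1.0),  # certificate value, k=2
    ("Repeatability",      0.0015, 1.0,          1.0),  # Type A standard deviation
    ("Temperature effect", 0.0010, math.sqrt(3), 1.0),  # rectangular half-width
]

# Standard uncertainty for each row, then combine in quadrature (root sum of squares).
standard_uncertainties = [value / divisor * sens for _, value, divisor, sens in budget]
combined = math.sqrt(sum(u ** 2 for u in standard_uncertainties))
expanded = 2 * combined  # expanded uncertainty at k=2 (approx. 95 % coverage)

print(f"Combined standard uncertainty: {combined:.4f} mm")
print(f"Expanded uncertainty (k=2):    {expanded:.4f} mm")
```

Each row of a real budget would carry its own probability distribution and divisor; the quadrature sum at the end is what the final column of an uncertainty budget delivers.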

Uncertainty evaluation for attribute gauges follows the same process as for variable gauges. The major difference is that the statistical evaluation of repeated measurements to give numerical values for repeatability, and also bias, is considerably more complex.

The basic principle of a repeatability study for an attribute gauge is best understood through an example with a simplified statistical analysis. This can be considered as an accuracy test because it does not go as far as evaluating all sources of uncertainty in the gauge. Remember that an attribute gauge produces a binary pass/fail measurement result. A given attribute gauge, therefore, has a single threshold value—i.e., parts with dimensions on one side of this threshold should be passed, and parts with dimensions on the other side of the threshold should be failed. The direction of pass and fail depends on the type of gauge.

It is common for gauges to be arranged in pairs to measure whether a feature is within a tolerance. For example, when checking the diameter of a hole, a go/no-go plug gauge may be used. This has a slightly smaller plug at one end, which will “go” into a hole that is larger than the lower specification limit. At the other end it has a larger plug, which will “no-go” (it won’t fit) if the hole is smaller than the upper specification limit. Therefore, if a hole is within tolerance, the go end will go into the hole, and the no-go end will not go into the hole. Each end of a go/no-go gauge must be considered as a separate gauge, requiring its own uncertainty evaluation.
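The pass/fail logic of the two ends can be captured in a few lines. The function below is a hypothetical, idealized sketch (it ignores gauge-maker tolerances and wear allowances); it simply encodes the rule that an in-tolerance hole admits the go end but not the no-go end.

```python
# Idealized go/no-go plug gauge logic for a hole diameter.
# lsl/usl are the lower and upper specification limits of the hole.
def go_nogo_check(diameter: float, lsl: float, usl: float) -> bool:
    """Return True if the hole passes both ends of the gauge."""
    go_fits = diameter > lsl    # go end enters any hole larger than the LSL
    nogo_fits = diameter > usl  # no-go end enters only oversize holes
    return go_fits and not nogo_fits

# A nominally 12 mm H8 hole (12.000 mm to 12.027 mm):
print(go_nogo_check(12.010, 12.000, 12.027))  # in tolerance -> True
print(go_nogo_check(11.998, 12.000, 12.027))  # undersize: go end won't fit -> False
print(go_nogo_check(12.030, 12.000, 12.027))  # oversize: no-go end fits -> False
```

Note that each of the two comparisons corresponds to one end of the gauge, which is exactly why each end needs its own uncertainty evaluation.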

In this example we will consider the calibration of a go/no-go plug gauge designed to test a nominally 12 mm diameter hole with an H8 tolerance. This means that to be in tolerance, the hole diameter must be between 12.000 mm and 12.027 mm. The go end of a go/no-go gauge should, therefore, fit into any hole that is greater than 12.000 mm in diameter. This is the threshold value that must be tested.

The first step in the test is to obtain a number of calibrated holes that are close to the threshold value. We might, therefore, inspect a number of machined holes with a high-accuracy coordinate measuring machine (CMM) and select 11 holes, ranging in diameter from 11.995 mm to 12.005 mm with 0.001 mm increments. We would then randomly sort the test pieces containing the holes and present them to an inspector for measurement one at a time. The inspector would insert the gauge into each hole and state whether the gauge goes into the hole or not, i.e., whether the gauge would pass or fail the hole. We would record this result next to the calibrated value of the hole. This process is repeated a number of times for each test piece, typically 25 times. (Yes, I know; this is a pretty tedious exercise.)

The final data obtained, in the form of a table or tally chart, record how many times each size of hole is passed or failed by the gauge. If there is negligible uncertainty associated with the repeatability of the gauge, then we would expect all of the holes that are smaller than 12 mm to fail and all of the holes that are larger than 12 mm to pass. We will also get this result if the increments between our test pieces were too large. In such an event, we could either repeat the test with improved test pieces or assume a worst-case uncertainty.
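To see what such a tally chart looks like without the tedium, the study can be simulated. The sketch below assumes a hypothetical gauge whose probability of passing a hole follows a normal CDF around the 12.000 mm threshold with an assumed repeatability of 0.0015 mm; the 11 calibrated holes and 25 repeats match the procedure described above.

```python
import math
import random

random.seed(1)  # reproducible hypothetical data

THRESHOLD = 12.000  # mm, nominal go threshold
SIGMA = 0.0015      # mm, assumed repeatability (hypothetical)
TRIALS = 25         # repeats per calibrated hole

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# 11 calibrated holes, 11.995 mm to 12.005 mm in 0.001 mm steps.
holes = [round(11.995 + 0.001 * i, 3) for i in range(11)]

# Simulate the inspector: each insertion passes with probability Phi((d - T) / sigma).
tally = {}
for d in holes:
    p_pass = phi((d - THRESHOLD) / SIGMA)
    tally[d] = sum(random.random() < p_pass for _ in range(TRIALS))

for d, passes in tally.items():
    print(f"{d:.3f} mm: {passes:2d}/{TRIALS} passed ({100 * passes / TRIALS:.0f} %)")
```

Holes well below the threshold fail every time, holes well above pass every time, and the interesting behavior is confined to the few holes near 12.000 mm.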

If we have chosen sensible increments for the calibrated test pieces, and there is a significant repeatability uncertainty, then we will see a result something like the chart below:

It is clear from these results that when the diameter of the hole is very close to the threshold value, the gauge does not give a repeatable result. The simplest statistical analysis of these results would be to take the range between the largest diameter with a 0-percent pass rate and the smallest diameter with a 100-percent pass rate. In this case, this would give an accuracy of ±0.004 mm.
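This simple range analysis is easy to automate. The pass rates below are hypothetical values chosen to resemble the worked example; the calculation just brackets the transition zone between certain fail and certain pass.

```python
# Hypothetical pass rates (fraction of 25 trials passed) per calibrated diameter.
results = {
    11.995: 0.00, 11.996: 0.00, 11.997: 0.04, 11.998: 0.24, 11.999: 0.40,
    12.000: 0.52, 12.001: 0.68, 12.002: 0.84, 12.003: 0.96, 12.004: 1.00,
    12.005: 1.00,
}

# Range analysis: the transition zone runs from the largest hole with a 0 %
# pass rate to the smallest hole with a 100 % pass rate.
low = max(d for d, p in results.items() if p == 0.0)
high = min(d for d, p in results.items() if p == 1.0)
accuracy = (high - low) / 2

print(f"Transition zone: {low:.3f} mm to {high:.3f} mm")
print(f"Worst-case accuracy: +/-{accuracy:.3f} mm")
```

With these numbers the transition zone spans 11.996 mm to 12.004 mm, giving the ±0.004 mm figure quoted above.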

Looking at the test results given above, it isn’t obvious that a standard deviation can be calculated from these data. It is, however, possible to estimate the standard deviation by fitting a probability distribution to the observations. If we plot the results, an “S” curve can be seen, which closely follows the cumulative distribution function (CDF) of the normal distribution.

If we can find the mean and standard deviation for the normal distribution that fits to our data, then we can determine the repeatability and bias of the gauge. This can be achieved using Excel.

In the above example, the fitted distribution uses the Excel function NORM.DIST with the cumulative option (the final argument set to TRUE), which returns the normal CDF. The formulas in column F reference the corresponding calibrated value in column B, as well as the estimated values for the mean (C16) and standard deviation (C17). Note the use of dollar signs for the estimated values: these absolute references allow the formulas to be dragged down the rows while still pointing at the same cells.

Note that the fitted distribution in the above example is not a very close fit to the test results. This is because initial estimates have been used for the mean and standard deviation. One way to improve the fit would be to manually adjust these values and observe the effect on the chart. If you want to get a feel for this, you could duplicate the above spreadsheet and give it a go. It’s quite difficult to do well by hand.

A more efficient way to fit the distribution to the test data is to minimize the sum of the squared differences, or residuals. To do this, add another column and enter a simple formula to calculate the difference between the fitted distribution and the test results. In this example, cell G3 contains the formula *=F3-E3*, and this formula is then copied down column G. In row 14, the sum of the squares of these residuals is then calculated using the formula *=SUMSQ(G3:G13)*. The distribution can then be fitted by optimizing the values of the mean and standard deviation so that the sum of the squared residuals is minimized.
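The same least-squares fit can be sketched outside Excel as a cross-check. The version below uses only the Python standard library and a crude stand-in for a solver (a shrinking grid search around the current best estimates); the pass-rate data are hypothetical values resembling the worked example, and the initial mean and standard deviation play the role of the manual estimates in C16:C17.

```python
import math

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical observed pass rates for 11 calibrated holes (11.995-12.005 mm).
diameters = [11.995 + 0.001 * i for i in range(11)]
pass_rates = [0.00, 0.00, 0.04, 0.24, 0.40, 0.52, 0.68, 0.84, 0.96, 1.00, 1.00]

def sum_sq(mean: float, sd: float) -> float:
    """Sum of squared residuals between the fitted CDF and the observed pass rates."""
    return sum((phi((d - mean) / sd) - p) ** 2
               for d, p in zip(diameters, pass_rates))

# Crude stand-in for a solver: grid search around the best point, halving the
# step sizes on each pass so the estimates are progressively refined.
best_mean, best_sd = 12.000, 0.002  # initial manual estimates
step_m = step_s = 0.001
for _ in range(30):
    candidates = [(best_mean + i * step_m, max(best_sd + j * step_s, 1e-6))
                  for i in range(-5, 6) for j in range(-5, 6)]
    best_mean, best_sd = min(candidates, key=lambda ms: sum_sq(*ms))
    step_m *= 0.5
    step_s *= 0.5

print(f"Fitted mean (threshold estimate):  {best_mean:.4f} mm")
print(f"Fitted std. dev. (repeatability):  {best_sd:.4f} mm")
```

The fitted mean estimates the gauge’s actual threshold (its distance from 12.000 mm is the bias), and the fitted standard deviation is the repeatability estimate that goes into the uncertainty budget.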

This type of optimization can be carried out automatically using the *Solver*, found on the *DATA* tab. Note that the Solver is an add-in that is included with Excel but not turned on by default. If you don’t see the option, you will need to turn it on.

After selecting the solver button, a dialogue box will open to enter the solver parameters. The objective is to minimize the sum of the squared residuals in G14. This is to be performed by changing variable cells in C16:C17 (the mean and standard deviation). Other options can be left as default using a GRG nonlinear solver and no constraints.

When you select *Solve*, you should see the sum of squares value reduce and the fit visibly improve on the chart. A dialogue will open asking whether you want to keep the solver solution; click *OK*. In the example, the estimated standard deviation has changed significantly and now appears to be a good estimate for the repeatability of the gauge. The difference between the estimated mean and the nominal threshold value is orders of magnitude smaller than the standard deviation. Therefore, it is safe to consider the bias as not significant without applying any more sophisticated tests for significance. This is an area I will cover in more detail later.

If you didn’t get a good solution, there are a few possible reasons for this:

1. The calibrated values do not cover the range of the probability distribution with sufficient resolution. It is usually possible to spot this by looking at the plot of the test results. If this is because the repeatability uncertainty is very small, then it may be acceptable to use a worst-case estimate, as described above, rather than fitting a probability distribution. If this is not appropriate, then additional calibrated samples will need to be obtained and further testing carried out using these.

2. The initial starting values are not sufficiently close to the optimal solution. You should always start with a manual estimate with a mean roughly in the middle of the test results, and a standard deviation that is of the right order of magnitude. Using the chart as a visual sanity check is useful here.

3. The uncertainty does not follow a normal distribution. If you’re still struggling to get a good fit, it is worth attempting to fit some other distributions.

I’ll cover some of the finer points of attribute gauge studies and uncertainty evaluation in part two. This will include fitting additional probability distributions, refining the resolution of calibrated test samples, and testing for the significance of bias detected by the analysis.

*First published May 2, 2019, on the engineering.com blog.*

## Comments

## Best explanation I have seen yet

This is the best explanation I have seen so far of attribute gage R&R. It shows that, even though the attribute gage returns only pass/fail data, if one measures the same part (or known standard) enough times, it is possible to get a percentage of items that pass in each case, and use the normal distribution to estimate the gage's mean and also its standard deviation.