Jody Muelaner

Metrology

Attribute Gauge Uncertainty, Part 2

Refining resolution of calibrated samples, testing and fitting non-Gaussian probability

Published: Tuesday, October 1, 2019 - 12:02

Attribute gauges are a type of measurement instrument or process that gives a binary pass/fail measurement result. Examples of attribute gauges include go/no-go plug gauges, feeler gauges, and many other types of special-purpose hard gauges. Many visual-inspection processes may also be considered attribute gauges. They are commonly used in manufacturing for product verification. Knowing the accuracy and capability of these measurements is therefore vital for a comprehensive understanding of quality in manufactured goods.

Uncertainty evaluation must consider all the quantities, or factors, that might influence the measurement result. The uncertainties of these individual influence quantities are first evaluated and then combined to give the uncertainty of the measurement result. Calculating the combined uncertainty requires a mathematical model that gives the sensitivity of the measurement result to changes in the influence quantities. This can be done using either an uncertainty budget or a numerical simulation. In most cases important influences to consider will include calibration uncertainty, repeatability, and environmental factors.
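For uncorrelated influences, the standard uncertainties in an uncertainty budget are typically combined in quadrature (root sum of squares) after being multiplied by their sensitivity coefficients. As a minimal Excel sketch, assuming a hypothetical layout with three standard uncertainties in cells B2:B4 and their sensitivity coefficients in C2:C4, the combined standard uncertainty could be calculated as:
=SQRT(SUMSQ(B2*C2,B3*C3,B4*C4))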

The uncertainty of an individual influence quantity can be determined by making repeated measurements and performing a statistical evaluation of the results, usually to calculate the standard deviation. This is known as a Type A evaluation. Repeatability is almost exclusively determined in this way. For other influence quantities, such as calibration uncertainty, it may not be practical to evaluate the uncertainty in this way, so some other method must be used; this is referred to as a Type B evaluation. It could mean referencing an uncertainty value on a calibration certificate or, in some cases, simply estimating the value.
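In Excel, a Type A evaluation is simply the sample standard deviation of the repeated results. Assuming a hypothetical layout with 25 repeat measurements in cells A2:A26:
=STDEV.S(A2:A26)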

In general, the evaluation of uncertainty for an attribute gauge is the same as for a variable gauge: The individual sources of uncertainty are evaluated and then combined using a mathematical model of the measurement. The key difference is the way repeatability is evaluated. Because attribute gauges give only a pass or fail result rather than a numerical value, a standard deviation cannot be calculated directly from a series of results.

I explained the basics of evaluating uncertainty for an attribute gauge in part one of this two-part series. It is possible to carry out a Type A evaluation of the repeatability uncertainty for an attribute gauge by first obtaining a number of calibrated reference parts that can be measured by the attribute gauge. These references must have a range of values close to the transitional value that separates pass from fail for the attribute gauge. In theory, if a reference were exactly at this transitional value, an attribute gauge could be expected to pass the reference 50 percent of the time and fail it 50 percent of the time.

References that are significantly larger or smaller than the transitional value would be either passed or failed by 100 percent of measurements. References close to the transitional value will have some other frequency. For each reference, the conventional true value can be obtained by calibration, and the frequency at which the attribute gauge passes it can be determined by repeated measurement.
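As a simple worked example with hypothetical numbers: if a reference calibrated at 12.001 mm is measured 25 times and the gauge passes it on 18 of those measurements, its observed pass frequency is 18/25 = 0.72.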

A cumulative distribution function can therefore be fitted to the test results to determine the actual transitional value for the gauge and its repeatability, expressed as a standard deviation. Part one presented the basic method but did not show how to refine the resolution of reference samples, determine the significance of bias, or fit non-Gaussian probability distributions.

Example attribute gauge

The examples in this article will be based on the same gauge as the previous example—a “go” plug gauge used to test a 12 mm hole with an H8 tolerance (a diameter of between 12.000 mm and 12.027 mm). The “go” end of the go/no-go gauge should fit into a hole that is greater than 12 mm in diameter and should not fit into a hole smaller than 12 mm in diameter.

Because 12 mm is the transitional value, calibrated reference holes close to this size must be used. In the previous example, the range of references initially selected allowed a good fit to the probability distribution for the gauge, and the analysis was relatively straightforward. Each reference was measured 25 times; the frequency with which the attribute gauge passed it was recorded, and calibrated size was plotted against pass frequency. The Excel Solver was then used to fit a normal distribution to this frequency plot using least squares minimization. For further details, see part one.
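As a reminder of the mechanics, here is a minimal sketch of that setup, assuming a hypothetical layout with the calibrated reference sizes in column A, the observed pass frequencies in column B, and trial values for the mean and standard deviation in cells F1 and F2. Each row's modeled frequency (column C) and squared error (column D) are then:
=NORM.DIST(A2,$F$1,$F$2,TRUE)
=(B2-C2)^2
A cell such as =SUM(D2:D12) totals the squared errors, and Solver minimizes that cell by varying F1 and F2.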


Figure 1: Results of repeatability study given as pass rate (frequency) and normal distribution fitted by least squares minimization

Refining the resolution of calibrated references

Imagine what would have happened if a different set of calibrated references had been used. Suppose that the references were at 0.005 mm increments. The results would likely look something like this:


Figure 2: Results with calibrated references not spaced closely enough around the transitional value

Clearly, it would not be possible to obtain any meaningful estimate of the standard deviation from these results. The standard deviation must be less than 0.005 mm, but not much else can be determined; it could just as easily be 0.002 mm or 0.00002 mm. Carrying out a full gauge study with this many samples was a complete waste of time because sensible increments had not yet been established.

The trick here is to subdivide the region between 0-percent pass and 100-percent pass until more meaningful results are seen. There are various ways to approach this, but probably the most efficient is a binary search, in which the intervals are halved until something sensible is seen. Halving the 0.005 mm interval gives 0.0025 mm, which we round to 0.003 mm to allow for the resolution of the available references. There is no need to try many values at this stage because we are only trying to identify the approximate range of the distribution. This might produce the following results:


Figure 3: Results of binary search to find approximate limits of distribution

It seems that we are getting an idea of the size of the distribution now, but we might try a few more values to be sure:


Figure 4: Results with further references

Based on these results, we could safely proceed with a full attribute gauge study, choosing reference values with 0.001 mm increments between 11.995 mm and 12.005 mm. This might produce the results seen in the previous example, to which a distribution can be fitted with confidence:


Figure 5: Attribute gauge study results with well-chosen reference values allowing a distribution to be fitted with confidence

It is also possible that you might initially start with values that are too close together to give any meaningful results. In this case you should expand the values. In many cases you will already have a good idea of the uncertainty of the gauge you are studying, and this can guide the initial selection of references. In any case it’s always best to test a few samples and experiment a bit in this way before investing the time in a full study.

Determining the significance of bias in the gauge

Before we can determine the significance of any bias in the gauge, we must first fit a distribution to the study results. This is carried out using regression—for example, a least squares minimization, as explained in part one. A normal distribution has two parameters, mean and standard deviation, which are estimated in this process. The difference between the nominal dimension being measured by the gauge (or the gauge’s transitional value) and the mean for this fitted distribution is the bias. However, it is possible that this bias is not caused by any inherent bias in the gauge but is simply a result of random variation in the gauge. If the gauge study were repeated, random variation would produce a slightly different set of results. The mean for the fitted distribution would, therefore, be slightly different. In the case where there is no inherent bias in the gauge at all, these random variations would still produce some small bias.

Based on the standard deviation and the number of measurements used to evaluate it, we can estimate how much random variation in the mean we could expect. This is called the “standard error of the mean.” If the bias we see in the results could be expected based purely on the standard error, then the bias is said to be not significant. If the bias is larger than this, it is said to be significant.

Significant bias should be corrected, where possible. If the bias is not significant, then you should not attempt to correct for it because you will be simply chasing random variations and may actually make the gauge less accurate.

When dealing with values estimated by regression, determining standard errors can become quite complex, because the standard error of the regression may need to be considered as well as the standard error of the mean. However, if we assume that we have obtained a good fit to our data, then we don't need to worry too much about that. The standard error of the mean is given by:

SE = σ / √n

In other words, it is the standard deviation divided by the square root of the number of repeat measurements, n, used for each reference.

Typically, if the bias is less than two standard errors of the mean, it would not be considered significant (at 95% confidence).
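As a worked example with hypothetical numbers: if the fitted standard deviation is 0.002 mm and each reference was measured 25 times, the standard error of the mean is 0.002/√25 = 0.0004 mm, so a bias smaller than about 0.0008 mm (two standard errors) would not be considered significant. Using the hypothetical layout above, with the fitted standard deviation in F2:
=F2/SQRT(25)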

Fitting different probability distributions

Often a normal or Gaussian distribution will explain the repeatability of a gauge well, but sometimes the random variation follows some other probability distribution. If you are unable to obtain a good fit to the normal distribution, then it is worth first looking graphically at the distribution and then trying some other distribution that might provide a better fit.

When working with the normal distribution, Excel makes life easy by providing two different functions. We can work directly with the standard normal distribution, which has a mean of zero and a standard deviation of one, using the function:
=NORM.S.DIST(z,1)

This function has a single variable, z, which is the z-score, or the number of standard deviations from the mean, and a flag that is set to 1 to give the cumulative distribution function. In the previous example, we used a slightly different function for the parametric normal distribution, with the mean and standard deviation as its two parameters. This meant that the value of the calibrated reference could be entered directly into the Excel function, together with the mean and the standard deviation.

To use the standard normal distribution, we could modify the previous example so that the z-score is calculated from the calibrated reference (x), the mean (μ), and the standard deviation (σ). The z-score is given by:

z = (x − μ) / σ

Using this method, a column for the z-score can be added, and the standard normal distribution can then be used instead of the parameterized form. Taking this approach means that other standard distributions can also be used.
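Continuing the hypothetical layout above, with the calibrated reference size in A2 and the fitted mean and standard deviation in F1 and F2, a z-score column (say, column E) would contain:
=(A2-$F$1)/$F$2
The modeled frequency then becomes =NORM.S.DIST(E2,1) in place of the parameterized =NORM.DIST(A2,$F$1,$F$2,TRUE).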


Figure 6: Column added for the z-score allowing the standard normal distribution to be used as well as other standard distributions

The Student's t-distribution, or simply t-distribution, is similar to the normal distribution but takes account of the increased uncertainty when a small sample has been used to determine the standard deviation. With fewer than 30 samples, a t-distribution should be used. This requires an additional parameter, the degrees of freedom, which is derived from the number of samples, n (typically n − 1).

With the z-score now calculated, the fitted distribution column can use the formula for the t-distribution in place of the one for the normal distribution.
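Assuming the same hypothetical z-score column, with 25 repeat measurements per reference and therefore 24 degrees of freedom (Excel's T.DIST takes the degrees of freedom as its second argument), the fitted frequency column would contain:
=T.DIST(E2,24,TRUE)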


Figure 7: Student’s T distribution fitted to the gauge study data

Some other distributions that may be relevant in gauge studies include the exponential distribution, which provides the best estimate for non-negative quantities:
=EXPON.DIST(x,lambda,cumulative)

And the gamma distribution, which is used where the measurement result is a number of objects counted:
=GAMMA.DIST(x,alpha,beta,cumulative)

This article has provided some more technical details on the important subject of attribute gauge uncertainty. This builds on part one, which presented the basic method but did not show how to refine the resolution of reference samples, determine the significance of bias, or fit non-Gaussian probability distributions.

First published Sept. 9, 2019, on the engineering.com blog.


About The Author


Jody Muelaner

Jody Muelaner is a mechanical engineer with expertise in metrology and advanced manufacturing. Muelaner's website provides information on topics ranging from the basics of metrology and measurement systems analysis to specific guides, such as how to perform a gauge R&R study in Excel.