Defining uncertainty Merriam-Webster defines uncertainty as “lack of sureness about something.” That isn’t something you want hanging over a critical measurement. You want to be sure of your measurement, especially when it determines the success or failure of a part or the system it goes into.
Accuracy is defined by the same source as “degree of conformity of a measure to a standard or a true value.” This definition is based on the use of standards against which other things are compared. The accuracy of a measuring device is the degree to which it can match the true value of the standard.
Length is an easy parameter to consider when thinking about the uncertainty of measuring a known standard. If a steel gauge block standard is supposedly 50.0000 mm long and the device used to measure it reports 50.0020 mm, is the measuring device inaccurate, or is the part actually 2 µm longer than claimed? This error creates uncertainty in the measurement, and it isn’t clear whether the uncertainty lies in the part or in the measuring device.
Before I go much further, it’s important to know the difference between error and uncertainty. A measurement result that differs from the “true value” is an error. In theory, therefore, adding or subtracting the error from the measurement result can derive the true value. Uncertainty, on the other hand, is a range of values likely to be encountered during a series of measurements.
Interpreting errors My simple 2-µm error example is much more complex than it might appear. Let’s assume that the gauge block is, in fact, exactly 50.0000 mm long. Obviously there are conditions attached, because the gauge block had to be measured for its true length to be determined. We’re simply trying to measure it to confirm that our measuring device is accurate. Variation of the measurement from the true length is measurement error. But is that error due to inaccuracies in the measuring device, or is the part not actually the length it’s supposed to be?
For example, at the time of measurement the standard may not be at its true length; that value is based on a measurement at a specific temperature. Consider the effect thermal influences have on the length of steel gauge blocks. Even if the measuring environment is held at a nominal temperature, is the block itself at that temperature? Crudely applying the thermal expansion coefficient of steel shows that a 4°C temperature increase will make the 50-mm gauge block about 2 µm longer.
If the gauge block was actually 4°C warmer than it was when its true value was derived, then due to thermal expansion, the measurement reading of 50.0020 mm is right on. But would you know that, or would you think that the measuring device simply had a 2-µm error?
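The back-of-the-envelope thermal arithmetic above can be sketched in a few lines of Python. The expansion coefficient below is an assumed handbook value of roughly 11.5 × 10⁻⁶ per °C; actual gauge-block steels vary slightly, so treat this as an illustration rather than a calibration formula:

```python
# Thermal expansion sketch: change in length = alpha * length * delta-T.
# ALPHA_STEEL is an assumed typical handbook value, not a device spec.
ALPHA_STEEL = 11.5e-6  # per degree C

def thermal_growth_um(length_mm: float, delta_t_c: float) -> float:
    """Return the change in length, in micrometres, for a temperature change."""
    return length_mm * ALPHA_STEEL * delta_t_c * 1000.0  # convert mm to um

# A 50-mm block warmed by 4 deg C grows by roughly 2 um:
print(f"{thermal_growth_um(50.0, 4.0):.1f} um")  # ~2.3 um
```

The exact answer comes out nearer 2.3 µm than 2 µm, which is why the article hedges with “crudely”: the point is the order of magnitude, not the last decimal.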
Now suppose the part really is exactly 50.0000 mm long at the moment we measure it, yet the measured result is 50.0020 mm. Can we assume that the measuring device has a 2-µm error for all measurements? Not necessarily.
Linearity of accuracy? We’re measuring the 50-mm gauge block on a fictitious device with a measuring range of 0–500 mm. We verified performance by measuring the 50-mm block, which is well within the device’s measuring range. But what does that tell us? Only that parts 50 mm long will be reported as 2 µm longer. Does this systematic error mean that a 400-mm gauge block will also measure 2 µm longer? Don’t count on it.
First of all, the same environmental concerns we have about the gauge block apply to the measuring device. Simple things like powering up a cold device can cause it to expand as electronic components heat up, for example. Variations in the measuring device lead to variations in measurements from that device.
Now consider the measuring device and its environment to be at thermal equilibrium. Verifying its performance near one end of its measuring range doesn’t confirm its performance throughout its entire range. Are you willing to assume its operation is linear? How can you know that? The best way is to measure several standards of lengths within the measuring range. What you may find is that the value of the measurement uncertainty increases with part size, but the error as a percentage of the part length is constant. Let me explain.
Our 2-µm error is 0.004 percent of 50 mm. If that percentage were constant across the measurement range, the 400-mm gauge block would measure 16 µm longer. This kind of variation might be expected, as it can be difficult to measure ever-increasing lengths to the same resolution. However, it wouldn’t be unusual for a user of the device to assume a constant 2-µm offset across the entire measurement range.
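As a sanity check on that arithmetic, here is the proportional-error calculation sketched in Python. The constant-percentage behavior is the article’s hypothetical, not a property of any real device:

```python
# If the 2-um error at 50 mm is a constant fraction of length rather than
# a constant offset, scale it to the 400-mm block.
error_um_at_50mm = 2.0
error_fraction = error_um_at_50mm / (50.0 * 1000.0)   # 2 um over 50,000 um

error_pct = error_fraction * 100.0                    # 0.004 percent
error_um_at_400mm = error_fraction * 400.0 * 1000.0   # scales to 16 um

print(f"{error_pct:.3f}% of length -> {error_um_at_400mm:.0f} um at 400 mm")
```

A constant-offset model would predict 2 µm at 400 mm; the constant-percentage model predicts 16 µm. Only measuring standards at several points in the range tells you which model (if either) fits the device.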
What about uncertainty? According to the National Institute of Standards and Technology, “a measurement is only complete when accompanied by a quantitative statement of its uncertainty.” Why is this important? Because the uncertainty tells you whether the measurement result is appropriate for its intended purpose and lets you judge whether the result is consistent with similar results.
A measurement accompanied by a statement of uncertainty is a more trustworthy measurement, because the uncertainty quantifies the confidence you can place in the result.
Does uncertainty = error? At this point you might think that uncertainty and measurement error mean the same thing. They don’t. As noted earlier, an error is the difference between a measurement result and the “true value,” and in theory the true value can be derived by adding or subtracting the error. There are actually two kinds of error. Systematic errors are repeatable. Take the earlier 2-µm example: if every measurement read 2 µm high, that error is systematic and can be adjusted out to derive the true value. Random errors, on the other hand, fluctuate from measurement to measurement, making them more difficult to deal with. Electrical noise is one common source of random error.
Systematic errors offset measurements in one direction, while random errors displace them arbitrarily. In other words, if the errors are random, remeasuring a part will yield results both larger and smaller than the true value.
Uncertainty, on the other hand, is a range of values likely to be encountered during a measurement. A simple example is a statement such as 10 mm ± 0.005 mm, meaning that measurements of the nominally 10-mm part can be expected to fall between 9.995 and 10.005 mm. That range of results may be made up of both systematic and random errors.
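The distinction between the two kinds of error can be illustrated with a toy simulation. The offset and noise magnitudes below are invented for illustration; they aren’t the specifications of any real instrument:

```python
import random

TRUE_LENGTH_MM = 10.0
SYSTEMATIC_OFFSET_MM = 0.002  # repeatable bias, e.g. a calibration offset
RANDOM_SIGMA_MM = 0.001       # fluctuating noise, e.g. vibration or electronics

def measure() -> float:
    """One simulated reading: true value + systematic bias + random noise."""
    return TRUE_LENGTH_MM + SYSTEMATIC_OFFSET_MM + random.gauss(0.0, RANDOM_SIGMA_MM)

readings = [measure() for _ in range(1000)]
mean_error = sum(readings) / len(readings) - TRUE_LENGTH_MM

# Averaging many readings exposes the systematic part (about +0.002 mm here),
# while the min-to-max spread reflects the random part.
print(f"mean error: {mean_error:+.4f} mm")
print(f"spread:     {min(readings):.4f} .. {max(readings):.4f} mm")
```

Averaging drives the random component toward zero but leaves the systematic offset untouched, which is why a systematic error can be calibrated out while a random one can only be characterized.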
Because there are many contributors to the repeatability of measurement, and thus to uncertainty, measuring devices carry specifications that take uncertainty into account.
Next time: There are international standards for expressing uncertainty. Next time, more on that. In the meantime, try not to be uncertain about the errors of your ways. And as always, yes, measurement matters!