Henry Zumbrun


What Is Measurement Risk?

How to understand it and lower it

Published: Wednesday, April 5, 2017 - 12:03

Imagine that a satellite is launched into space and communications are intermittent. This happens because the satellite is wobbling, which causes connection problems in the receiver. The cause of the wobbling is identified: It is the result of not using a calibration provider with a low enough uncertainty.

The load cells used to measure the amount of fuel stored in the satellite must be almost perfect. However, if a calibration provider doesn’t have the right measurement capability, the load cells won’t be accurate enough to make the measurement. In this case, the result is a wobbling satellite and significant resources needed to fix the problem.

Understanding measurement risk

AS9100C defines risk as “[a]n undesirable situation or circumstance that has both a likelihood of occurring and a potentially negative consequence.” It further states that, “The focus of measurement quality assurance is to quantify and/or manage the ‘likelihood’ of incorrect measurement-based decisions. When doing so, there must be a balance between the level of effort involved in, and the risks resulting from, making an incorrect decision. In balancing the effort vs. the risks, the decision (direct risk) and the consequences (indirect risk) of the measurement must be considered.”

ANSI/NCSLI Z540.3-2006 defines measurement decision risk as the probability that an incorrect decision will result from a measurement.

What does this really mean?


All measurements carry some probability of calling something good when it is bad, and something bad when it is good. You might be familiar with the terms “consumer’s risk” and “producer’s risk.” Consumer’s risk refers to the possibility of a defect reaching the consumer: a product that doesn’t meet quality standards passes undetected through a manufacturer’s quality control system and enters the consumer market.

An example of this would be the batteries in the Samsung Note 7 phone, which could overheat and cause the phone to catch fire. In this case, the faulty battery/charging system was approved through the manufacturer’s quality control process, a “false accept” decision. If you owned one of these phones, there was a risk of fire and potential damage and injury.




Figure 1: Measurement risk graph

In metrological terms, consumer’s risk is like the false accept risk, or probability of false accept. The biggest difference is that in the metrology field, the false accept risk is usually limited to a maximum of 2 percent. In cases where the estimation of this probability is not feasible, there is a requirement for a test uncertainty ratio (TUR) to be 4:1 or greater to ensure lowering the probability of false accept to a low-risk level.

So, what does this mean for a metrology laboratory? It means that any lab making a statement of compliance, calling an instrument “in tolerance,” must consider measurement uncertainty and properly calculate TUR by considering the location of the measurement. In simplistic terms:

TUR = Tolerance required/Uncertainty of the measurement (at a 95% confidence interval)

If the uncertainty of the measurement is not less than the tolerance required, there will be a significant risk of false accept. In simplistic terms, a TUR that produces less than ±2 percent upper and lower risk would be required to ensure the measurement is valid (see figure 1).
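As a minimal sketch (in Python; the function name and example values are illustrative, not from the article), the TUR calculation above looks like:

```python
# TUR = (USL - LSL) / (2 * U), where U = k * u. With k = 2 for a 95%
# confidence interval, this is the "tolerance / 4u" form used in this article.

def tur(usl: float, lsl: float, std_uncertainty: float, k: float = 2.0) -> float:
    """Test uncertainty ratio for a two-sided tolerance."""
    expanded_u = k * std_uncertainty
    return (usl - lsl) / (2 * expanded_u)

# A 10,000 lbf load cell with a 0.1% (±10 lbf) tolerance, calibrated with a
# standard uncertainty of 1.25 lbf:
print(tur(10_010, 9_990, 1.25))  # 4.0, meets the 4:1 requirement
```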

How to lower your measurement risk

Keys to lowering measurement risk include having your calibration provider replicate how the instrument is used in the field, employing competent technicians, using the right equipment, and choosing a calibration provider with low overall uncertainties. There is quite a bit of difference between force measurement labs with calibration and measurement capabilities (CMCs) of 0.1 percent, 0.05 percent, 0.02 percent, 0.01 percent, 0.005 percent, and 0.002 percent of applied force. Not using a laboratory with the right capability to meet your requirements is like using a ruler to calibrate a gauge block.


Figure 2: TUR table

Figure 2 shows the test uncertainty ratios (TUR) that force calibration labs with different calibration capabilities can provide for various levels of required tolerances. The far-left column represents the calibration standard required for force measurements. Deadweight primary standards are often required to achieve CMCs of better than 0.01 percent of applied force. A high-end load cell calibrated by deadweights would be required to achieve CMCs of better than 0.05 percent. This table indicates the best TUR that the labs can provide for the same load cell at similar conditions. Per this table, only calibration labs with CMCs around 0.02 percent or better can calibrate devices with a tolerance of 0.1 percent. They may still need to adjust the device to read closer to the nominal value.

The table was derived from TUR and uncertainty formulas found in JCGM 100:2008 and ANSI/NCSLI Z540.3-2006. The formulas used to determine TUR and Uncertainty are as follows:



TUR = test uncertainty ratio
USL = upper specification limit
LSL = lower specification limit
u = standard uncertainty

(Note: We are using a divisor of 4, which assumes k = 2; the proper divisor is two times whatever the actual k value is for a 95-percent confidence interval.)


The calculation of TUR for a two-sided tolerance:

TUR = (USL − LSL) / (4 × standard uncertainty)

Combined uncertainty (u) = The square root of the sum of the squares of all the input quantity uncertainty components.



CMC = Calibration and measurement capability. This should be found on the calibration report.

Res = Resolution of the unit under test (UUT). The divisor for resolution will be either 3.464 or 1.732 (depending on how the UUT’s least significant digit resolves).

Rep = Repeatability of the unit under test (UUT). The repeatability of the UUT must be included if repeatability studies were not already accounted for in the CMC; if accounted for in the CMC, it is not required.

Expanded uncertainty = Typically 2 times the standard uncertainty. However, the appropriate k value should be used to ensure a coverage probability of 95 percent, based on the effective degrees of freedom using the Welch-Satterthwaite formula.
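The component definitions above combine by root sum of squares. The following sketch (Python; component values and k = 2 are assumptions for illustration) shows the combination:

```python
import math

# Illustrative root-sum-square (RSS) combination of the components listed
# above: CMC, UUT resolution, and UUT repeatability. Values are assumptions.

def combined_std_uncertainty(cmc_u: float, resolution: float,
                             repeatability: float,
                             res_divisor: float = 3.464) -> float:
    """RSS of standard-uncertainty components."""
    res_u = resolution / res_divisor  # resolution reduced to a standard uncertainty
    return math.sqrt(cmc_u**2 + res_u**2 + repeatability**2)

u = combined_std_uncertainty(cmc_u=2.0, resolution=2.0, repeatability=1.5)
U = 2 * u  # expanded uncertainty, k = 2 (assumes adequate degrees of freedom)
print(round(u, 3), round(U, 3))  # 2.566 5.131
```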

Is your calibration provider reporting pass/fail criteria properly?

If the calibration provider is accredited, it must follow the requirements of ISO/IEC 17025. ISO/IEC 17025:2005 states that “When statements of compliance are made, the uncertainty of measurement shall be taken into account.”

This translates to minimizing the probability of false accept by applying a guard-banding method. The ANSI/NCSLI Z540.3-2006 Handbook discusses guard banding in section 3.3, paragraph two of which states:

“As used in the National Standard, a guard band is used to change the criteria for making a measurement decision, such as pass or fail, from some tolerance or specification limits to achieve a defined objective, such as a 2-percent probability of false accept. The offset may either be added to or subtracted from the decision value to achieve this objective.”

Measurement risk with guard banding

Assume we are testing a load cell at 10,000 pound-force (lbf). The accuracy specification is 0.1 percent of reading (±10 lbf at this force), and the measured value was 9,990 lbf. Is the device in tolerance? After all, the calibration laboratory applied 10,000 lbf, and the unit under test (UUT) read 9,990. The bias is 10 lbf, and the device meets its accuracy specification (i.e., the lab would accept the device without taking the uncertainty of measurement into account). The report is issued, and the end user is happy.

However, the problem is that the end user should not be happy. If the CMC of the calibration laboratory using a specific reference standard was not considered, the end user will not know whether the device meets the accuracy specification required. Basically, this measurement was passed based on the assumption that the calibration provider’s reference was perfect, and the lab applied exactly 10,000 lbf to the load cell. However, this is a false assumption that neglected the uncertainty in the calibration provider’s measurement. No measurements are perfect, and that is why we estimate the uncertainty of measurement to quantify this “imperfection of the measurement.” Let’s assume that the standard uncertainty was calculated at 6.5 lbf for k = 1.

In the graph in figure 3, the item being calibrated would normally be considered “in tolerance” by a large percentage of calibration laboratories because the accuracy specification is 0.1 percent of reading, or ±10 lbf, and the measured value was within the accuracy specification at 9,990 lbf. There is a 50.1-percent chance of the calibration being accepted when it is not in tolerance.



Figure 3: A calibrated item considered “in tolerance” by a large percentage of calibration laboratories

The graph in figure 4 shows the risk when the measured value of the UUT reads 10,000 lbf. In this scenario, the bias or measurement error is 0. However, there is still a 12.39-percent chance that the UUT is not “in tolerance.” Simply put, there is too much risk. We need to lower the standard uncertainty to reduce the risk. Note that the TUR remains the same since it is a ratio not dependent on the location of the measurement.


 Figure 4: Risk when the measured value of the UUT reads 10,000 lbf
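The risk figures quoted for figures 3 and 4 can be reproduced under a normal-distribution assumption. This is an illustrative sketch, not the article’s software; the tolerance and uncertainty values come from the example above.

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def prob_out_of_tolerance(measured: float, lsl: float, usl: float, u: float) -> float:
    """Probability the true value lies outside [lsl, usl], assuming a normal
    distribution centered on the measured value with standard uncertainty u."""
    return normal_cdf((lsl - measured) / u) + (1 - normal_cdf((usl - measured) / u))

# Figure 3: measured value on the lower limit (9,990 lbf), u = 6.5 lbf
print(round(100 * prob_out_of_tolerance(9_990, 9_990, 10_010, 6.5), 1))   # 50.1
# Figure 4: zero bias (measured 10,000 lbf), u = 6.5 lbf
print(round(100 * prob_out_of_tolerance(10_000, 9_990, 10_010, 6.5), 2))  # 12.39
```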

Lowering risk (probability of false accept) by lowering uncertainty

1. Use better equipment with a finer resolution and/or better repeatability, e.g., a higher-quality load cell for force measurement.
2. Use a better calibration provider with a CMC low enough to reduce the measurement risk.
3. Pay attention to the uncertainty values listed in the calibration report issued by your calibration provider. Make sure to get proper TUR values for every measurement point (but pay attention to the location of the measurement).

The graph in figure 5 shows the same test instrument with a lower standard uncertainty. This was a real scenario where an instrument was modified from a 10 lbf resolution to a 2 lbf resolution. The total risk is now 0, and the device will be “in tolerance” with less than 2 percent total risk from a reading of 9,996 through 10,006 lbf. There are several acceptable methods for applying a guard band to obtain what the measured value needs to be in order to maintain less than 2 percent total risk.

These graphs follow Method 5, “Guard bands based on expanded uncertainty,” in the ANSI/NCSLI Z540.3 Handbook; the method is also described in ISO 14253-1 and included in ILAC-G8 and various other guidance documents.
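One common guard-banding approach (the form described in ILAC-G8, used here as an assumed illustration rather than the article’s exact method) tightens the acceptance limits by the expanded uncertainty U:

```python
# Guard banding by subtracting the expanded uncertainty U from each tolerance
# limit. Values are assumptions for illustration, not from the article's data.

def guard_banded_limits(lsl: float, usl: float, expanded_u: float) -> tuple:
    """Acceptance limits after applying a guard band equal to U."""
    return lsl + expanded_u, usl - expanded_u

# Tolerance ±10 lbf at 10,000 lbf, with U = 5 lbf (k = 2, u = 2.5 lbf):
print(guard_banded_limits(9_990, 10_010, 5.0))  # (9995.0, 10005.0)
```

A reading inside the guard-banded limits passes; a reading between a guard-banded limit and the original tolerance limit is the zone where false accepts are most likely.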


Figure 5: Test instrument with a lower standard uncertainty

After reading this article, you may find yourself at a crossroads, wondering if any of this extra work is necessary. To the left is the same rough path you’ve been traveling all along. This is the path that says, “If it’s not broken, why fix it?” You might be thinking that measurement risk has not been an issue before, or that you’ll just wait until an auditor questions you about it (or there is a train wreck). Yet to the right is the road that fewer people realize will help solve their measurement problems today. This road is not more difficult; it’s just different from the way you may currently be doing things.

Choosing to consider the effect of not doing things right—and making the decision to select the best calibration provider—will make all the difference. The rest is just putting formulas in place to report and know your measurement risk.

References

JCGM 100:2008—“Evaluation of measurement data—Guide to the expression of uncertainty in measurement”
ILAC P14:01/2013—“Policy for uncertainty in calibration”
ANSI/NCSLI Z540.3-2006—“Requirements for the calibration of measuring and test equipment”
ISO/IEC 17025:2005—“General requirements for the competence of testing and calibration laboratories”
AS9100C—“Quality management systems—Requirements for aviation, space, and defense organizations”


About The Author


Henry Zumbrun

Henry Zumbrun is president of Morehouse Instrument Co., where he has managed the force and torque calibration lab and services in the family-owned business since the 1990s. Morehouse helps labs lower their force and torque measurement uncertainties, resulting in more accurate measurements, which lowers costs, reduces risk, and increases quality. Morehouse designs and manufactures products in line with customer requirements, lean, Six Sigma, and best-practices guidelines.

