Inside Metrology

Keith Bennett and Howard Zion  |  02/09/2005

Calibrating Test and Measurement Equipment

Understanding test uncertainty ratio helps improve calibration service.

Test and measurement equipment (T&ME) must be periodically calibrated to ensure it’s operating within its specified parameters. Of course, the uncertainty of the system used to calibrate the equipment shouldn’t add appreciable error to this process. The calibration process usually involves comparing the T&ME to a standard that performs similar functions with better accuracy. The ratio of the accuracy of the unit under test (UUT) to the accuracy of the standard is known as the test accuracy ratio (TAR). However, this ratio doesn’t consider other potential sources of error in the calibration process.

Errors in the calibration process aren’t associated only with the specifications of the standard; they can also originate from sources such as environmental variations, other devices used in the calibration process and technician error. These errors should be identified and quantified to arrive at an estimate of the calibration uncertainty, which is typically stated at a 95 percent confidence level (k = 2). The ratio of the accuracy of the UUT to the estimated calibration uncertainty is known as the test uncertainty ratio (TUR). This ratio is more reliable because it accounts for possible sources of error in the calibration process that the TAR does not.
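
As a rough illustration of the difference between the two ratios, the short sketch below computes a TAR and a TUR for a hypothetical 10 V test point. The tolerance, the standard’s specification and the individual uncertainty components are invented for the example; a real uncertainty budget would be built from a lab’s own data.

from math import sqrt

# Hypothetical example: a meter with a ±0.050 V tolerance at a 10 V test point,
# calibrated against a standard specified at ±0.005 V. All values are invented
# for illustration.
uut_tolerance = 0.050   # UUT accuracy specification, volts
std_accuracy = 0.005    # reference standard accuracy specification, volts

# Test accuracy ratio: compares the UUT accuracy with the standard's accuracy alone.
tar = uut_tolerance / std_accuracy
print(f"TAR = {tar:.1f}:1")   # 10.0:1

# Test uncertainty ratio: compares the UUT accuracy with the expanded uncertainty
# of the whole calibration process. Assumed standard uncertainties (1 sigma) for
# the standard, environment, repeatability and resolution are combined by
# root-sum-square and expanded with a coverage factor of k = 2 (about 95 percent).
components = [0.0025, 0.004, 0.003, 0.002]   # volts, assumed values
expanded_uncertainty = 2 * sqrt(sum(u**2 for u in components))
tur = uut_tolerance / expanded_uncertainty
print(f"TUR = {tur:.1f}:1")   # about 4.2:1, noticeably lower than the TAR

The point of the sketch is that the same calibration can look comfortable as a TAR yet be considerably tighter once the other error sources are counted and expressed as a TUR.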

Also important is the selection of the test points, which should be chosen carefully to give a high degree of confidence that the UUT is operating within its specified parameters. The TUR should also be large enough to provide reliability of the calibration.

Some quality standards attempt to define what this ratio should be. ANSI/NCSL Z540-1-1994 states, “The laboratory shall ensure that calibration uncertainties are sufficiently small so that the adequacy of the measurement isn’t affected.” It also states, “Collective uncertainty of the measurement standards shall not exceed 25 percent of the acceptable tolerance (e.g., manufacturer specifications).” Because the tolerance is then four times the uncertainty (1/0.25), this 25 percent equates to a TUR of 4:1. Other quality standards have recommended TURs as high as 10:1. For some, a TUR of 3:1, 2:1 or even 1:1 is acceptable. Any of these may be acceptable to a specific user who understands the risks involved with lower TURs and builds them into the measurement process. When accepting a TUR of less than 4:1, it’s important to consider where within the UUT’s tolerance band its “as found” reading lies. It’s even more important to know where the UUT is left during the calibration process.

A 4:1 TUR is the point for which most high-quality calibration labs strive. It’s the point at which the level of in-tolerance probability stays at 100 percent the longest, with the best economies of scale.

In some cases, a 4:1 TUR may be unachievable. Factors that could cause a situation where the TUR is < 4:1 include:

  • Limited availability of adequate standards
  • The technology of the respective T&ME approaching the intrinsic level of the specific discipline

The user may accept the higher risk associated with the achievable TUR (e.g., 2:1) rather than demanding a 4:1 TUR. In cases where a 4:1 TUR is necessary, the calibration provider may incur a substantial capital investment to purchase the appropriate laboratory standards. This could lead to an increase in the calibration price, which is the other alternative: paying higher costs for better measurement assurance (and reduced risk).

With a TUR of 1:1, the total uncertainty of the calibration process is as good as (but not better than) the tolerance of the UUT. If the two instrument readings match exactly and the UUT has no error, the risk of the UUT making a measurement outside its specification is limited to its drift, which is more prevalent with electronic T&ME. It then becomes important to estimate how long the UUT will maintain (repeat) the measured value. Most manufacturers determine the drift, or instability, of their products and match it with a recommended calibration cycle to ensure the UUT’s drift doesn’t exceed its specified tolerance during that calibration interval.

The user needs to be aware of this increased risk. If drift occurs between calibrations, the potential for the UUT to operate outside of its specifications increases with lower TURs. T&ME that drifts outside of its design specifications could proliferate incorrect measurements, which could have detrimental effects on products or systems. The result is that a 1:1 TUR carries a higher risk of the UUT operating outside its design specifications and increases the probability of making bad measurements.

However, some situations can’t provide better than a 1:1 TUR. This is typically seen at the higher levels of the traceability chain, where complex statistical evaluations and calibration-cycle algorithms are performed. Metrologists, physicists and engineers work at this level to mitigate the risks involved in transferring measurements between national measurement institutes such as the National Institute of Standards and Technology.

Figure 1: In-Tolerance Probability for 1:1 TUR

Users of general-purpose T&ME expect their calibration provider to verify that their instruments are operating within design specifications. Quality standards recommend minimum TURs (such as 4:1 or 10:1) to ensure that the calibration process can provide a high level of confidence that the instrument is operating within its design specifications. Providing acceptable TURs isn’t always the practice of calibration providers. If the provider can’t deliver an acceptable TUR, the customer should be made aware of the achievable TUR so the customer can decide whether to proceed with the calibration.

When an instrument is calibrated, it’s found either within or outside of its design tolerance. With a 1:1 TUR and the instrument reading exactly at nominal, there’s a high probability (approaching 100 percent) that the instrument is in tolerance. If the instrument is found at the very top of its tolerance band, there’s nearly a 50 percent chance that it’s outside of its design specification, regardless of the TUR. Instruments are rarely found exactly at nominal. As Figure 1 demonstrates, for a 1:1 TUR, as soon as the UUT reading deviates from nominal there’s a significant probability that the instrument is actually outside its design specification, even though the reading implies that it’s within specification. Even if the instrument is found and left at nominal, there’s no allowance for drift over time, and the UUT will likely fail its subsequent calibration or drift outside its design specification during use.
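
The in-tolerance probability curves in Figures 1 through 4 can be approximated with a simple model: treat the calibration-process error as normally distributed, with an expanded uncertainty (k = 2) equal to the UUT tolerance divided by the TUR. The sketch below uses that assumption (plus a flat prior on the true value); the published figures may rest on different assumptions, so its numbers are indicative only.

from math import erf, sqrt

def normal_cdf(z: float) -> float:
    # Cumulative distribution function of the standard normal distribution.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def in_tolerance_probability(reading_fraction: float, tur: float) -> float:
    # Probability that the UUT's true error lies inside its tolerance band, given
    # where the observed reading falls in the band (0 = nominal, 1 = at the limit)
    # and the TUR. Assumes a normal calibration-process error with expanded
    # uncertainty (k = 2) of tolerance / TUR, i.e. a standard uncertainty of
    # tolerance / (2 * TUR), and a flat prior on the true value.
    u = 1.0 / (2.0 * tur)   # standard uncertainty in units of the tolerance
    return (normal_cdf((1.0 - reading_fraction) / u)
            - normal_cdf((-1.0 - reading_fraction) / u))

# Reproduce the qualitative behavior described in the text and figures.
for tur in (1, 2, 4, 10):
    at_nominal = in_tolerance_probability(0.0, tur)
    at_limit = in_tolerance_probability(1.0, tur)
    print(f"TUR {tur}:1  at nominal: {at_nominal:.1%}   at the tolerance limit: {at_limit:.1%}")

Under this model the probability is about 50 percent whenever the reading sits right at the tolerance limit, regardless of TUR, and it stays near 100 percent at nominal only for the higher ratios, which is the behavior the figures illustrate.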

At a 2:1 TUR, the in-tolerance probability stays high for readings off nominal out to about 35 percent of the tolerance band, drops below 90 percent at 40 percent of the tolerance band and gradually decreases to 50 percent thereafter.

Figure 2: In-Tolerance Probability for 2:1 TUR

A 4:1 TUR is probably the most widely used guideline in the calibration industry. As noted earlier, it’s the point at which the in-tolerance probability stays at 100 percent the longest, with the best economies of scale.

At a 4:1 TUR, the in-tolerance probability stays flat at 100 percent (to about 50% of the instrument’s tolerance band), drops below 90 percent (at 70% of the tolerance band) and then cuts off sharply to 50 percent thereafter. A guard band adjustment level when the “as received” reading exceeds 70 percent of the tolerance would benefit the reliability of the calibration.

Figure 3: In-Tolerance Probability for 4:1 TUR

At a 10:1 TUR, the in-tolerance probability stays flat at 100 percent (through approximately 80% of the instrument’s tolerance band), at which point it drops very sharply to 50 percent. A guard band adjustment level when the “as received” reading exceeds 85 percent of the tolerance would benefit the reliability of the calibration.

A 10:1 TUR gives higher confidence than the others mentioned, but in many situations it may be impractical because of costs or limits in the technology. It took a change from 4:1 to 10:1 to gain a mere 15 percent improvement in the guard band level.

Figure 4: In-Tolerance Probability for 10:1 TUR

Furthermore, a TUR of 100:1 gives approximately 100 percent confidence that the unit is within its design specification throughout 98 percent of its tolerance band. However, this is quite impractical because of the cost associated with getting to this level of uncertainty, or the limits in the technology. To this point, it took a factor of 10 to gain only a 15 percent improvement in the guard band level.

Guard band adjustment by a calibration provider should be a defined process whereby the provider adjusts the UUT once its “as received” reading reaches 70 percent of its tolerance band when a 4:1 TUR is used. This isn’t common practice among calibration providers. As a user of T&ME, you should ask your provider whether this guard band adjustment process is part of its standard practice and, if not, whether it could be provided.
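
As a sketch of what such a defined process might look like, the example below continues the simplified normal model from the earlier sketch and searches for the largest “as received” reading that still meets a chosen in-tolerance confidence. The 99 percent target is an assumption made here for illustration; with it, the computed levels come out broadly comparable to (though not identical with) the 70 percent and 85 percent adjustment points mentioned above, since the published figures rest on their own assumptions.

from math import erf, sqrt

def normal_cdf(z: float) -> float:
    # Cumulative distribution function of the standard normal distribution.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def guard_band_level(tur: float, target_confidence: float = 0.99, steps: int = 10000) -> float:
    # Largest "as received" reading, as a fraction of the tolerance band, at which
    # the in-tolerance probability still meets the target confidence. Same
    # assumptions as before: normal process error with a standard uncertainty of
    # tolerance / (2 * TUR) and a flat prior on the true value.
    u = 1.0 / (2.0 * tur)
    level = 0.0
    for i in range(steps + 1):
        x = i / steps   # reading as a fraction of the tolerance band
        p = normal_cdf((1.0 - x) / u) - normal_cdf((-1.0 - x) / u)
        if p >= target_confidence:
            level = x   # probability falls as x grows, so this keeps the largest passing x
    return level

for tur in (4, 10):
    print(f"TUR {tur}:1: adjust when the reading exceeds about {guard_band_level(tur):.0%} of tolerance")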

It isn’t always possible to adjust T&ME. Some T&ME with fixed values, such as gage blocks, fixed masses and fixed resistors, aren’t practical to adjust. For this type of equipment, the user must understand the implications and either use the certified values in the measurement process or adjust the confidence levels of the measurement process. There are also situations in which adjusting T&ME isn’t possible because adjustment affects an entire range of the instrument; the lower end of the range may need to be adjusted to 80 percent of its tolerance so that the higher end of the range meets its tolerance. In these situations, it would be advisable to demand a higher TUR from the calibration provider.

What values of TUR can realistically be achieved in practice?
This depends on the parameter being measured, the standard being used and the current technology available for the measurement. If the technology hasn’t been developed to achieve better than a 2:1 TUR for a specific parameter, that’s the best measurement that can be achieved, and it’s left to the end user of the UUT to apply this information properly to the process. As statistical process control becomes more prevalent in industry, appropriate TURs can be established by identifying the confidence levels required by a process and, in turn, the limitations of the T&ME used to control that process. The assumption that T&ME always gives exact measurements within its design specifications must be questioned. Each user must understand how the uncertainties associated with the measurements in their process relate through the chain of measurement traceability. Understanding and properly applying the right TUR can help users ensure measurements adequate to the requirements of the process, without spending more than needed to achieve accurate results.

Calibration service providers that have invested heavily in ISO/IEC 17025 understand the application of uncertainties in their measurement processes. In most cases, best practice is to strive for a 4:1 TUR with adjustment made at 70 percent of the tolerance band. In some instances, the TUR of a given measurement process, as related to the UUT’s tolerance, will be less than 4:1, particularly because of the limitations of the technology inherent in the standards available today. In these cases, when the TUR falls below the accepted 4:1 level, the actual TUR should be stated and adjustments made so that the in-tolerance probability is greater than 95 percent. As ISO/IEC 17025 gains momentum, more and more calibration labs will begin to report the TUR. It’s always good to question what the TUR is when receiving calibration services.

About The Author

Keith Bennett and Howard Zion

Keith Bennett has been in the metrology field for 25 years and is proficient in multiple disciplines within the field. Keith spent 10 years in the U.S. Air Force, primarily working in the physical/dimensional, primary DC/low frequency, and RF/microwave areas. After the military, Bennett spent 10 years at Compaq Computer Corp. focused on analytic metrology. He is currently the director of metrology for Transcat Inc., a leading North American provider of calibration services. He is responsible for planning, initiating and directing all activities associated with achieving and maintaining accreditation to ISO/IEC 17025 for all 10 commercial calibration laboratories.

Howard Zion has served in progressive roles of responsibility over the course of 22 years in metrology. From his fundamental and advanced PMEL training in the U.S. Air Force, through his on-the-bench experience with Martin Marietta and with NASA contractors at the Kennedy Space Center, to his engineering experience with Philips Electronics, Howard has collected a wealth of knowledge in many metrology disciplines. He is currently technical director of operations for Transcat Inc.
