Several years back, I was asked this question: Can measurement software determine scale without the use of scale bars? On numerous occasions, in a variety of ambient conditions, I have tested and compared the results of initially setting up a system using temperature/material software compensation. Next, I would check the accuracy by measuring a NIST-traceable scale bar (length adjusted for ambient temperature). Most of the time, the results agreed within 0.0005 in. to 0.0015 in. So here is the million-dollar question: Which one is truly correct?
Most likely neither. Swearing by either one as “metrology law” is most likely grounds for an argument.
Second question: Is either one sufficient to use with confidence? Probably yes. After all, this is done on a daily basis. Currently, I am unaware of any in-depth scientific studies comparing the two methods. Each method has its pros and cons, and the variables seem endless when everything is taken into consideration and factored into the “equation.”
Scale bars are calibrated in a controlled environment under lab conditions and are truly accurate and repeatable under those same conditions. In the "real" world, however, their length must be adjusted relative to ambient temperature. If the temperature changes throughout the day, the accuracy of the system changes as well, and scale must be reset. For example, if the temperature changes a mere 3°F, an 8 ft. steel scale bar will change almost 0.002 in., and an aluminum scale bar roughly double that.
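The arithmetic behind that 3°F example is simple linear thermal expansion, ΔL = L·α·ΔT. The sketch below uses typical handbook coefficients of expansion for carbon steel and aluminum; these are illustrative values, not calibration data for any particular bar.

```python
# Linear thermal expansion: delta_L = L * alpha * delta_T
# CTE values are typical handbook figures (not vendor calibration data).
ALPHA_STEEL_PER_F = 6.5e-6   # in/in/°F, typical carbon steel
ALPHA_ALUM_PER_F = 12.8e-6   # in/in/°F, typical aluminum alloy

def length_change(length_in, alpha_per_f, delta_t_f):
    """Change in length (inches) for a temperature swing of delta_t_f (°F)."""
    return length_in * alpha_per_f * delta_t_f

bar_in = 8 * 12  # 8 ft scale bar, expressed in inches

print(length_change(bar_in, ALPHA_STEEL_PER_F, 3.0))  # ~0.0019 in
print(length_change(bar_in, ALPHA_ALUM_PER_F, 3.0))   # ~0.0037 in
```

Run for a 3°F swing, the steel bar grows about 0.0019 in. and the aluminum bar about 0.0037 in., matching the "almost 0.002 in., roughly double for aluminum" figures above.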
Now let’s look at the scale bar itself. Does the bar expand and contract in a straight manner? Does it bend, twist and bow with temperature change? Does it come back to its exact position after such movements? Does the scale bar use CBs (construction balls) mounted in the ends as the reference medium, or a bushing to hold a pin nest? CBs have a manufacturing tolerance of several tenths. Using two, one on each end, has the potential to double this and create stack-up error. The SMR (spherically mounted retroreflector), if one is being used, has multiple components: a hollowed-out ball bearing and three mirrors mounted orthogonal to each other. The ball itself carries a tolerance, as does the positioning of the mirrors relative to true center.
What about the pin nest? Is it fixed, or does it fit into bushings? Is it a light press fit, or do the bushings have a little "slop" to them? Is the pin truly centered in the nest? Was the bar calibrated using the same pin nest and SMR that you are using? The bushings and pins all have tolerances. Again, tolerance stack-up could exceed the stated accuracy specs of the instrument being used.
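A quick way to see how these component tolerances accumulate is to compare a worst-case sum against a root-sum-square (RSS) statistical stack-up. The tolerance values below are illustrative assumptions chosen to match the "several tenths" scale discussed above, not specs for any actual hardware.

```python
import math

# Hypothetical per-end component tolerances, in inches (illustrative only).
tolerances = {
    "construction_ball": 0.0003,  # "several tenths" on the CB
    "pin_nest_bushing": 0.0002,   # bushing/pin fit
    "smr_centering": 0.0002,      # SMR optical center vs. ball center
}

# Both ends of the bar contribute, so each term appears twice.
worst_case = 2 * sum(tolerances.values())
rss = math.sqrt(2 * sum(t**2 for t in tolerances.values()))

print(f"worst-case stack-up: {worst_case:.4f} in")
print(f"RSS stack-up:        {rss:.4f} in")
```

With these assumed values the worst case is 0.0014 in. and the RSS estimate is about 0.0006 in., both large enough to rival the stated accuracy of many trackers, which is the point of the paragraph above.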
Now factor in the shop atmosphere. As you can see, the variables are numerous just in reading the scale bar. There is one more piece to the puzzle: if the tool is a mix of different materials, say an aluminum-steel composite, which type of scale bar do you use? Flip a coin? I am not discounting the use of scale bars by any means. I use them religiously; I only want to point out that a lot must be taken into consideration when setting system scale, maintaining it, and repeating it at a later date.
What about software compensation for setting scale? What can you use to "prove" it when you are on the job site? Hmmm, a traceable scale bar! Which one will be truly correct? They can both be argued for. One real plus of software scaling is that it is a continual, real-time process: as temperature changes, so does the scaling. But as the scaling changes, does the point of origin for the reference system change as well? If you re-measure a control point within 0.001 in. or so, what is really correct? Did the point of origin indeed change? Do you need to reset the system? Again, tolerance stack-up may be influential.
Let’s consider another component in the mix: thermometers. Calibrated thermometers are used both in a tracker’s weather station and to read part temperature. They, too, have a tolerance and can introduce inaccuracies. One must also look at the material compensation for the scale bar itself. Steel, aluminum, iron, and composites each come in many compositions, each with its own expansion behavior. Is the algorithm being used a close approximation, or spot on?
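Thermometer tolerance feeds directly into the compensation math: if the reading is off by some amount, the software corrects for the wrong temperature, and the error scales with bar length and CTE. A minimal sketch, assuming an 8 ft. steel bar and a hypothetical ±0.5°F thermometer tolerance:

```python
# Length uncertainty caused by thermometer tolerance alone.
# Assumptions (illustrative): 8 ft steel bar, +/-0.5 °F thermometer tolerance.
ALPHA_STEEL_PER_F = 6.5e-6   # in/in/°F, typical handbook value for steel
bar_in = 96.0                # 8 ft in inches
thermometer_tol_f = 0.5      # assumed thermometer tolerance, °F

# A temperature error of delta_T produces a compensation error of L*alpha*delta_T.
comp_error_in = bar_in * ALPHA_STEEL_PER_F * thermometer_tol_f
print(f"compensation uncertainty: +/-{comp_error_in:.5f} in")  # ~0.0003 in
```

Even a half-degree thermometer error contributes roughly three tenths over an 8 ft. length before any of the mechanical tolerances above are counted.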
Laser trackers push the envelope of large-scale, close-tolerance measurement, and many claims of accuracy are made for them. But if you account for the tolerances of all the components, plus ambient and environmental conditions, most claims of true accuracy in the real world become somewhat questionable. Even so, laser trackers continue to be one of the key instruments for large-scale metrology with a high degree of accuracy.
Accuracies typically are stated relative to lab conditions. It would be impossible to factor in every possible tolerance and stack-up that can affect a system. Nothing is perfect, yet accuracies are stated with definition. We live in a "material," cut-and-dried world when it comes to stating accuracy and repeatability. So, is there a clear-cut answer? Not in my opinion. But using both methods in conjunction with each other (I often do) can demonstrate both accuracy and repeatability with a high degree of certainty, proving that an acceptable scale has been achieved.