



Published: 04/24/2017
When I entered college in the fall of 1970, I had a nice slide rule (or “slipstick,” as some of us called it) that I proudly carried in a leather case to my engineering and chemistry classes. Virtually everyone at North Carolina State University had a slide rule then, but by the time I was a senior, if you didn’t have an electronic calculator, you would have had a hard time competing.
Unfortunately, the demise of the slide rule brought with it the demise of “significant digits.” The topic of significant digits was actually taught in many classes because the scales on a slide rule could only be read to about three significant digits. As a result, answers to homework or test problems were rendered in “scientific notation,” such as 7.13 x 10^3. The electronic calculator would give the answer as 7,132.435674... wow! The calculator is “more accurate” than the slide rule! Or is it? I actually had professors who would deduct points if you gave answers that, like the latter example, exceeded the proper number of significant digits.
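To see the difference, here is a quick sketch in Python (purely illustrative; the number is simply the calculator value from the example above):

```python
# The "calculator" answer from the example above.
value = 7132.435674

# Full calculator precision
print(value)           # 7132.435674

# Three significant digits, the way a slide rule (and scientific
# notation) would report it
print(f"{value:.2e}")  # 7.13e+03, i.e., 7.13 x 10^3
```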
As a somewhat absurd but illustrative example, suppose a geology professor takes an old fossil to class and announces, “This fossil is two million years old.” The following year, is the professor going to bring the same fossil to class and announce, “This fossil is two million and one years old”? Well, it depends on what the professor meant. Does “two million years old” mean 2,000,000 years or 2,000,000.00 years?
In the former case, the fossil is 2,000,000 years old with just one significant digit (the 2). All other digits are essentially random numbers. Depending on rounding conventions, the fossil could be anywhere between one and a half million and two and a half million years old. In the latter case, we have nine significant digits, and the fossil was having its “two millionth birthday” while being presented to the class for the first time. The following year the professor would be justified in adding one year to the fossil’s age.
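If you want a quick sanity check of that rounding interval, here is one in Python (rounding to the nearest million with round() is just a stand-in for the one-significant-digit reading):

```python
# Any age in roughly this range rounds to 2,000,000 at the nearest million,
# which is all that "one significant digit" really tells us.
for age in (1_500_000, 1_999_999, 2_000_000, 2_499_999):
    print(age, round(age, -6))   # every line shows 2000000 as the rounded value
```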
As a more practical example, suppose a test lab measures some physical property of a sample submitted to it by a manufacturer, and the test is repeated five times. The test results are: 25.3, 24.9, 25.1, 24.9, and 25.0. The lab reports the average as 25.040 because sometime in the past, someone wanted to know the value to three decimal places to be “more accurate” or “more precise.” The best value for the answer is 25.0; 25.040 is neither more accurate nor more precise. The 4 and the last zero are random numbers.
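Here is that calculation in Python, using just the five readings above:

```python
readings = [25.3, 24.9, 25.1, 24.9, 25.0]

raw_mean = sum(readings) / len(readings)
print(raw_mean)            # ~25.04 -- the "calculator" answer (plus floating-point noise)

# Each reading was measured to one decimal place, so report the mean the same way.
print(round(raw_mean, 1))  # 25.0
```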
The following are the most commonly used rules for determining the number of significant digits:
• All nonzero digits are significant (e.g., 87 and 56.98 have two and four significant digits, respectively).
• All zeroes between nonzero digits are significant (e.g., 12.012 has five significant digits).
• Leading zeroes are not significant (e.g., 0.0123 has three significant digits).
• Trailing zeroes after a decimal point are significant (e.g., 108.0200 has seven significant digits).
• Trailing zeroes in a number without a decimal point can be ambiguous in terms of the number of significant digits. As a general rule, do not count the trailing zeroes as significant (e.g., 2,000,000 has one significant digit).
• Scientific notation can be used to eliminate some ambiguity by denoting 1300 as either 1.3 x 10^3 (two significant digits) or 1.300 x 10^3 (four significant digits), depending on the context of the number 1300.
• When computing (multiplying, dividing, taking trig functions, etc.), the result should be reported with no more significant digits than the input having the fewest significant digits (e.g., 4.4 x 6.64 = 29, not 29.216, because 4.4 has only two significant digits; see the short sketch after these rules).
• When adding or subtracting quantities, the answer should contain no more decimal places (not significant digits) than the least number of decimal places found in any of the numbers being added or subtracted.
• When multiplying or dividing by a whole number, the whole number is considered to have an indeterminate (or infinite) number of significant digits. For example, if one paper clip weighs 1.02 g, then two identical paper clips weigh 1.02 g x 2 = 2.04 g.
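Here is a minimal Python sketch of those last three rules. The helper name round_sig is just illustrative, not a standard library function, and the addition example uses made-up values chosen only to show the decimal-places rule:

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

# Multiplication: 4.4 (two sig digits) x 6.64 (three) -> report two sig digits.
print(round_sig(4.4 * 6.64, 2))        # 29.0, not 29.216

# Addition: report no more decimal places than the least-precise term.
# (12.11, 18.0, and 1.013 are illustrative values; the answer keeps one decimal place.)
print(round(12.11 + 18.0 + 1.013, 1))  # 31.1

# A whole-number multiplier is treated as exact: 1.02 g per paper clip, times 2.
print(round_sig(1.02 * 2, 3))          # 2.04
```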
The advent of calculators and high-speed computers has caused many of us to ignore the simple fact that a calculated quantity cannot be more “accurate” or “precise” than the least accurate or precise quantity used in the calculation. Is it just human nature to deliver an answer like 2.34543 to appear intelligent, when the best answer is just 2.3?