Statistics Article


By: Anthony Chirico

Aerospace standard AS9138, “Quality management systems statistical product acceptance requirements,” was issued this year (2018), a few years after its accompanying guidance material in section 3.7 of the International Aerospace Quality Group’s (IAQG) Supply Chain Management Handbook. The new aerospace standard supersedes the aerospace recommended practice ARP9013 and, relative to MIL-STD-105 (ANSI/ASQ Z1.4), claims to shift focus from the producer’s risk to the consumer’s risk by using sampling plans with an acceptance number of zero (c = 0).

Somewhere along this evolutionary path, the sampling procedures of MIL-STD-105 have fallen out of favor, even though the consumer risks of MIL-STD-105 at their designed lot tolerance percent defective (LTPD) point are superior to most plans found within AS9138.
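Under the usual binomial model, a plan’s consumer risk is simply the probability of accepting a lot at a given fraction nonconforming. The sketch below illustrates the calculation; the plan parameters (n = 22, c = 0) and the 10-percent LTPD are illustrative values, not taken from AS9138 or MIL-STD-105.

```python
from math import comb

def prob_accept(n, c, p):
    """Probability a lot is accepted: the chance of finding at most c
    nonconforming items in a random sample of n, assuming a binomial
    model with lot fraction nonconforming p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Consumer's risk of a hypothetical c = 0, n = 22 plan at a 10% LTPD.
# For c = 0 this reduces to (1 - p)**n.
print(round(prob_accept(22, 0, 0.10), 3))  # -> 0.098
```

Comparing `prob_accept` at the LTPD for two candidate plans is one way to check the claim that a given c = 0 plan actually protects the consumer better.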


By: Dirk Dusharme @ Quality Digest

In this all-manufacturing episode, we look at the STEM pipeline into manufacturing, supplier development, how to make sense of manufacturing data and, no, manufacturing is not dead.

“Strengthening the STEM Workforce Pipeline Through Outreach”

NIST does more than just research and come up with really cool technology. It also has STEM outreach programs designed to help young people and minorities get excited by all this coolness.

“Making Sense of Manufacturing Data”

Are you collecting enough data? Are you collecting too much data? Do you know the difference between data and information? Doug Fair of InfinityQS does. He told us.


By: Donald J. Wheeler

Parts One and Two of this series illustrated four problems with using a model-building approach when data snooping. This column will present an alternative approach for data snooping that is of proven utility. This approach is completely empirical and works with all types of data.

The model-building approach

When carrying out an experimental study we want to know how a set of input variables affect some response variable. In this scenario it makes sense to express the relationships with some appropriate model (such as a regression equation). The experimental runs will focus our attention on these specific input and response variables, and the model will answer our questions about the relationships.
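The model-building idea can be sketched with an ordinary least-squares fit of a single input variable to a response. The data values below are synthetic, purely for illustration:

```python
# Fit a simple linear model y = intercept + slope * x by least squares.
# Synthetic input/response data, for illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope and intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"y = {intercept:.2f} + {slope:.2f}x")  # -> y = 0.14 + 1.96x
```

The fitted equation answers the planned question (“how does x affect y?”), which is exactly the setting where a model-building approach is appropriate, as opposed to the data-snooping situation the column goes on to address.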


By: Davis Balestracci

During the early 1990s, I was president of the Twin Cities Deming Forum. I had a wonderful board whose members were full of great ideas. One member, Doug Augustine, was a 71-year-old retired Lutheran minister and our respected, self-appointed provocateur. He never missed an opportunity to appropriately pull us right back to reality with his bluntly truthful observations and guaranteed follow through on every commitment he made.

After W. Edwards Deming’s death in 1993, we tried to keep the forum alive, but monthly meeting attendance started to drop off significantly. We were offering innovative meetings to grow in practice of Deming’s philosophy, but our members wanted static rehashing and worship of “the gospel according to Dr. Deming.”

The last straw was a meeting where a self-appointed Deming expert/consultant went too far: He lobbed one too many of his predictable, pedantic “gotcha! grenades” at the speaker for alleged deviations from “the gospel.” His persistence blindsided and visibly upset our wonderful guest speaker. I was furious and publicly told him to stop. It did not go over well—except with my board.


By: Laurel Thoennes @ Quality Digest

In the foreword of Mark Graban’s book, Measures of Success: React Less, Lead Better, Improve More (Constancy Inc., 2018), renowned statistician Donald J. Wheeler writes about Graban: “He has created a guide for using and understanding the data that surround us every day.”

“These numbers are constantly changing,” explains Wheeler. “Some of the changes in the data will represent real changes in the underlying system or process that generated the data. Other changes will simply represent the routine variation in the underlying system when nothing has changed.”

The problem is in deciding whether data changes are “noise” or signals of real changes in the system.

“Mark presents the antidote to this disease of interpreting noise as signals,” adds Wheeler. “And that is why everyone who is exposed to business data of any type needs to read this book.”
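The tool Wheeler is best known for in this context is the process behavior chart. A minimal sketch of its individuals (XmR) limits, using the conventional 2.66 moving-range factor and synthetic data:

```python
def xmr_limits(values):
    """Natural process limits for an individuals (XmR) chart:
    mean +/- 2.66 * average moving range. Points outside these
    limits are treated as signals; points inside, as routine noise."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Synthetic monthly metric, for illustration only
data = [20, 22, 19, 21, 23, 20, 22, 35]
lower, upper = xmr_limits(data)
print(any(v < lower or v > upper for v in data))  # -> True (35 is a signal)
```

The final value of 35 falls above the upper limit, so it would be treated as a signal worth investigating; the routine wiggles between 19 and 23 would not.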


By: William A. Levinson

Quality and manufacturing practitioners are most familiar with the effect of variation on product quality, and this is still the focus of the quality management and Six Sigma bodies of knowledge. Other forms of variation are, however, equally important in their ability to cause what Prussian general Carl von Clausewitz called friction (a form of muda, or waste) in production control and in many service-related activities. This article shows how variation affects production control and service activities as well as product quality.

W. Edwards Deming’s Red Bead experiment was an innovative, hands-on exercise that demonstrated conclusively the drawbacks of blaming or rewarding workers for unfavorable and favorable variation, respectively, in a manufacturing process. The exercise consisted of using a sampling paddle to withdraw a certain number of beads from a container. White beads represented good parts, and red beads nonconforming parts. Results for five workers might, for example, be as follows for 200 parts, of which 3 percent are nonconforming. (You can simulate this yourself with Excel by means of the Data Analysis menu and Random Number Generation. Use a binomial distribution with 200 as the number of trials, and nonconforming fraction p = 0.03.)
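For readers without Excel handy, the same binomial simulation can be sketched in a few lines of Python. The seed and parameter values are illustrative, matching the 200-part, 3-percent-nonconforming scenario above:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def red_beads(workers=5, parts=200, p=0.03):
    """Simulate each worker drawing `parts` beads, where each bead is
    red (nonconforming) with probability p. This is the analogue of
    Excel's binomial Random Number Generation."""
    return [sum(random.random() < p for _ in range(parts))
            for _ in range(workers)]

# Red-bead counts for five "workers" on an identical process
print(red_beads())
```

Running it a few times makes Deming’s point vividly: the counts differ from worker to worker even though nothing about the process, or the workers, has changed.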


By: Douglas C. Fair

A few months back, I was reading a really good article from The Wall Street Journal, titled “Stop Using Excel, Finance Chiefs Tell Staffs.” Even though it was geared toward accounting and corporate operations, the message of the article struck home: Excel shouldn’t be used as an enterprisewide financial platform. That’s as true for the manufacturing quality world as it is for finance.

One quote from the article stuck with me. An executive working on cutting Excel out of the process at his firm stated, “I don’t want financial planning people spending their time importing and exporting and manipulating data, I want them to focus on what is the data telling us.” And I thought, “That’s what manufacturing organizations need to start thinking.” Manufacturers need to stop juggling spreadsheets and paper and start focusing on the information that can be found in the data they collect.

By: NACS
The phrase “Are we there yet?” has long been associated with boring summer road trips. However, a new study challenges that notion: It finds that 69 percent of people say traveling to their destination is often as much fun as the destination itself. The Summer Drive Study by NACS, a trade association that represents the convenience store industry, provides a glimpse into the typical American’s car during a summer road trip, including what they eat, why they argue, and how they spend their time.

The survey reveals that most vacationers will travel by car (85%), followed by airplane (36%) and train (8%). Long hours don’t deter road trippers; of those traveling by car, the highest percentage of Americans plan to travel 12 or more hours round trip (32%), followed by 4–6 hours (24%), 7–11 hours (23%) and 0–3 hours (21%).


By: Scott A. Hindle

I recently got hold of the set of data shown in figure 1. How to analyze and make sense of these 65 data values is the theme of this article. Read on to see what is exceptional about these data, and not only in a statistical sense.

Figure 1: Example data set.

A good start?

In a training class I attended several years ago, the recommended starting point for an analysis was the “Graphical Summary” in the statistical software, found under the options for “Basic Statistics.” The default output for figure 1’s data set is shown in figure 2.

Figure 2: Output of the Graphical Summary.
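Since the 65 values of figure 1 are not reproduced here, the following sketch shows the kind of descriptive summary such software reports, computed with Python’s standard library on a small illustrative sample:

```python
import statistics as stats

def graphical_summary(values):
    """Descriptive statistics of the kind a 'Graphical Summary' reports:
    count, mean, standard deviation, and the five-number extremes."""
    return {
        "n": len(values),
        "mean": stats.mean(values),
        "stdev": stats.stdev(values),
        "min": min(values),
        "median": stats.median(values),
        "max": max(values),
    }

# Small illustrative sample (not the figure 1 data)
print(graphical_summary([9.8, 10.1, 10.0, 9.9, 10.3, 10.2]))
```

As the article goes on to discuss, a summary like this describes the data but says nothing about whether the underlying process was stable while the data were collected.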


By: Scott Weaver

The Atlantic hurricane season is now upon us, and the National Oceanic and Atmospheric Administration (NOAA) has just released its 2018 seasonal hurricane outlook, which calls for a slightly above-average season. The potential range of activity indicates that we could expect 10 to 16 named storms, with five to nine becoming hurricanes and one to four becoming major hurricanes.

But remember, it only takes one bad storm to wreak havoc, or in the case of the 2017 hurricane season, three.

Wait a minute. Why are you reading about hurricanes and NOAA in a NIST blog post? Let’s back up for a moment and start at the beginning.

Hurricane Harvey, seen here from the NOAA GOES-16 satellite on Aug. 25, 2017, was the first Category 4 hurricane to make landfall along the Texas Coast since Carla in 1961. An average Atlantic hurricane season produces 12 named tropical storms, six of which become hurricanes, and two of which become major hurricanes. Credit: NOAA
