Statistics Article

By: Laurel Thoennes @ Quality Digest

In the foreword of Mark Graban’s book, Measures of Success: React Less, Lead Better, Improve More (Constancy Inc., 2018), renowned statistician Donald J. Wheeler writes about Graban: “He has created a guide for using and understanding the data that surround us every day.

“These numbers are constantly changing,” explains Wheeler. “Some of the changes in the data will represent real changes in the underlying system or process that generated the data. Other changes will simply represent the routine variation in the underlying system when nothing has changed.”

The problem is in deciding whether data changes are “noise” or signals of real changes in the system.

“Mark presents the antidote to this disease of interpreting noise as signals,” adds Wheeler. “And that is why everyone who is exposed to business data of any type needs to read this book.”
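The standard tool for separating signal from routine variation is the process behavior (XmR) chart that Wheeler has long advocated. As a rough sketch of the arithmetic involved, the following computes an individuals chart's natural process limits from the average moving range; the data values here are hypothetical, purely for illustration:

```python
import statistics

# Hypothetical metric values (e.g., daily counts); any numeric series works.
data = [52, 48, 50, 53, 47, 51, 49, 50, 54, 46]

mean = statistics.fmean(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = statistics.fmean(moving_ranges)

# 2.66 is the standard XmR scaling constant (3 / d2, with d2 = 1.128).
ucl = mean + 2.66 * mr_bar  # upper natural process limit
lcl = mean - 2.66 * mr_bar  # lower natural process limit
print(f"Center {mean:.1f}, limits [{lcl:.1f}, {ucl:.1f}]")
```

Points falling inside the limits are routine variation ("noise"); points outside them are likely signals of a real change.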

By: William A. Levinson

Quality and manufacturing practitioners are most familiar with the effect of variation on product quality, and this is still the focus of the quality management and Six Sigma bodies of knowledge. Other forms of variation are, however, equally important—in terms of their ability to cause what Prussian general Carl von Clausewitz called friction, a form of muda, or waste—in production control and also many service-related activities. This article shows how variation affects production control and service activities as well as product quality.

W. Edwards Deming’s Red Bead experiment was an innovative, hands-on exercise that demonstrated conclusively the drawbacks of blaming or rewarding workers for unfavorable and favorable variation, respectively, in a manufacturing process. The exercise consisted of using a sampling paddle to withdraw a certain number of beads from a container. White beads represented good parts, and red beads nonconforming parts. Results for five workers might, for example, be as follows for 200 parts, of which 3 percent are nonconforming. (You can simulate this yourself with Excel by means of the Data Analysis menu and Random Number Generation. Use a binomial distribution with 200 as the number of trials, and nonconforming fraction p = 0.03.)
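The same simulation described above for Excel can be sketched in Python. This minimal example uses the parameters given in the text (five workers, 200 parts each, 3 percent nonconforming) and draws each worker's count of red beads from a binomial process:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

N_PARTS = 200           # parts sampled per worker
P_NONCONFORMING = 0.03  # true nonconforming fraction of the process

def red_bead_draw(n=N_PARTS, p=P_NONCONFORMING):
    """Simulate one worker's paddle draw: count of red (nonconforming) beads."""
    return sum(1 for _ in range(n) if random.random() < p)

results = [red_bead_draw() for _ in range(5)]
print(results)
```

Run it a few times with different seeds: the worker-to-worker differences come entirely from the process, which is exactly Deming's point about blaming or rewarding individuals for common-cause variation.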

By: Douglas C. Fair

A few months back, I was reading a really good article from The Wall Street Journal, titled “Stop Using Excel, Finance Chiefs Tell Staffs.” Even though it was geared toward accounting and corporate operations, the message of the article struck home: Excel shouldn’t be used as an enterprisewide financial platform. That’s as true for the manufacturing quality world as it is for finance.

One quote from the article stuck with me. An executive working on cutting Excel out of the process at his firm stated, “I don’t want financial planning people spending their time importing and exporting and manipulating data, I want them to focus on what is the data telling us.” And I thought, “That’s what manufacturing organizations need to start thinking.” Manufacturers need to stop juggling spreadsheets and paper and start focusing on the information that can be found in the data they collect.

By: NACS

The phrase “Are we there yet?” has long been associated with boring summer road trips. However, a new study shatters that myth: It shows that 69 percent of people say traveling to their destination is often as much fun as the destination itself. The Summer Drive Study by the NACS, a trade association that represents the convenience store industry, provides a glimpse into the typical American’s car during a summer road trip—including what they eat, why they argue, and how they spend time.

The survey reveals that most vacationers will travel by car (85%), followed by airplane (36%) and train (8%). Long hours don’t deter road trippers; of those traveling by car, the highest percentage of Americans plan to travel 12 or more hours round trip (32%), followed by 4–6 hours (24%), 7–11 hours (23%), and 0–3 hours (21%).

By: Scott A. Hindle

I recently got hold of the set of data shown in figure 1. What can be done to analyze and make sense of these 65 data values is the theme of this article. Read on to see what is exceptional about these data, not only statistically speaking.

Figure 1: Example data set.

A good start?

In a training class I attended several years ago, the recommended starting point for an analysis was the “Graphical Summary” in the statistical software, found under the options for “Basic Statistics.” The default output for figure 1’s data set is shown in figure 2.
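A bare-bones version of the numeric portion of such a summary is easy to build by hand. The values below are hypothetical stand-ins (the article's 65 values are shown only in figure 1); swap in any numeric list:

```python
import statistics

# Hypothetical stand-in data; replace with the actual measurements.
data = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4, 10.0, 9.9]

summary = {
    "n": len(data),
    "mean": statistics.mean(data),
    "stdev": statistics.stdev(data),   # sample standard deviation
    "min": min(data),
    "median": statistics.median(data),
    "max": max(data),
}
for name, value in summary.items():
    print(f"{name}: {value}")
```

Note that a summary like this describes the data as one static batch; it says nothing about behavior over time, which is where the analysis in this article heads next.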

Figure 2: Output of the Graphical Summary.

By: Scott Weaver

The Atlantic hurricane season is now upon us, and the National Oceanic and Atmospheric Administration (NOAA) has just released its 2018 seasonal hurricane outlook, which calls for a slightly above-average season. The potential range of activity indicates that we could expect 10 to 16 named storms, with five to nine becoming hurricanes and one to four becoming major hurricanes.

But remember, it only takes one bad storm to wreak havoc, or in the case of the 2017 hurricane season, three.

Wait a minute. Why are you reading about hurricanes and NOAA in a NIST blog post? Let’s back up for a moment and start at the beginning.

Hurricane Harvey, seen here from the NOAA GOES-16 satellite on Aug. 25, 2017, was the first Category 4 hurricane to make landfall along the Texas Coast since Carla in 1961. An average Atlantic hurricane season produces 12 named tropical storms, six of which become hurricanes, and two of which become major hurricanes. Credit: NOAA

By: Anthony D. Burns

Quality is related to processes. A process is “a series of actions or steps taken in order to achieve a particular end.” It doesn’t matter whether the process is the handling of invoices, customers in a bank, the manufacture or assembly of parts, insurance claims, the sick passing through a hospital, or any one of thousands of other examples. A process involves movement and action in a sequential fashion.

Every quality professional is concerned about the improvement of processes. By making processes better, we get less waste, lower costs, and happier customers.

Photo used with permission of Samuel Ferrara.

The image above depicts two opposed states: a dynamic, changing state and a static state. The lake is static, unchanging. We might take temperature measurements in the lake at different depths and come back tomorrow to find no difference. We might take samples of the lake water to compare with other lakes when we traveled to them at a later date.

By: Cheryl Pammer

Confidence intervals show the range of values we can be fairly, well, confident that our true value lies in, and they are very important to any quality practitioner. I could be 95-percent confident the volume of a can of soup will be 390–410 ml. I could be 99-percent confident that less than 2 percent of the products in my batch are defective.

Demonstrating an improvement to the process often involves proving a significant improvement in the mean, so that’s what we tend to focus on—the center of a distribution.

Defects don’t fall in the center, though. They fall in the tails. You can find more beneficial insights in many situations through examining the tails of the distribution rather than just focusing on the mean.

Let’s take a look at some nontraditional confidence intervals that are particularly useful in estimating the percent that fall inside or outside the specification limits.
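As one concrete flavor of this, a confidence interval can be put on the percent defective itself. The sketch below uses the Wilson score interval, a common choice for a binomial proportion (the counts here are hypothetical, not from the article):

```python
import math

def wilson_interval(defects, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.

    z = 1.96 gives an approximate 95% two-sided interval.
    """
    phat = defects / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical inspection result: 4 defectives found in 400 parts.
lo, hi = wilson_interval(4, 400)
print(f"95% CI for percent defective: {100 * lo:.2f}% to {100 * hi:.2f}%")
```

The point estimate is 1 percent defective, but the interval shows how much wider the plausible range is with only 400 parts inspected; that spread is exactly the tail information a mean-centered analysis misses.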

By: Rip Stauffer

A lot of people in my classes struggle with conditional probability. Don’t feel alone, though. A lot of people get this (and simple probability, for that matter) wrong. If you read Innumeracy by John Allen Paulos (Hill and Wang, 1989), or The Power of Logical Thinking by Marilyn vos Savant (St. Martin’s Griffin, 1997), you’ll see examples of how a misunderstanding or misuse of this has put innocent people in prison and ruined many careers. It’s one of the reasons I’m passionate about statistics, but it’s hard for me, too, because it’s not easy to work out in your head. I always have to build a table.

The best thing to do is to be completely process-driven; identify what’s given, then follow the process and the formulas religiously. After a while, you can start to see it intuitively, but it does take a while.
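The table-building approach is easy to demonstrate with the classic diagnostic-test setup. The numbers below are hypothetical, chosen only to illustrate the method: start from a whole population, fill in the counts row by row, then read the conditional probability straight off the table:

```python
# Hypothetical counts for a population of 10,000: a condition with 1%
# prevalence, a test that catches 95% of true cases, and a 5%
# false-positive rate among the healthy.
population = 10_000
has_condition = 100                                    # 1% prevalence
true_pos = 95                                          # 95% of 100 test positive
false_pos = int(0.05 * (population - has_condition))   # 5% of 9,900 -> 495

# P(condition | positive) is just the "condition" share of the positive column.
total_pos = true_pos + false_pos
p_condition_given_pos = true_pos / total_pos
print(f"P(condition | positive test) = {p_condition_given_pos:.3f}")
```

Even with a seemingly accurate test, fewer than one in six positives actually has the condition here, which is exactly the kind of result intuition gets wrong until the table is in front of you.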

In my MBA stats class, one problem that always stumped the students was a conditional one:

By: Mike Richman

There are many subjects that we cover regularly here at Quality Digest. Chief among these are standards (ISO 9001 or IATF 16949, for example), methodologies (such as lean, Baldrige, or Six Sigma), and test and measurement systems (like laser trackers or micrometers). One topic, however, is consistently at the very top of the list when it comes to audience popularity—industrial statistics, including statistical process control (SPC).

It’s no secret why statistics hold such a place of honor among QD readers. Quite simply, without exquisite measurement and the ability to understand if a process is in or out of control, continuous improvement is impossible. Statistical analysis is the very underpinning of all things quality, and it touches on everything from proper management to understanding how to best leverage technological innovation.

With that in mind, I recently had the opportunity to exchange thoughts with Neil Polhemus, Ph.D., the chief technology officer for Statgraphics Technologies Inc. Late last year he released the book Process Capability Analysis: Estimating Quality. A slightly edited version of our conversation follows.
