All Features

Donald J. Wheeler
Fourteen years ago, I published “Do You Have Leptokurtophobia?” Judging by the reaction to that column, the message was needed. In this column, I would like to explain the symptoms of leptokurtophobia and the cure for this pandemic affliction.
Leptokurtosis is a Greek word that literally means “thin…

Douglas C. Fair, Scott A. Hindle
Data overload has become a common malady. Modern data collection technologies and low-cost database storage have motivated companies to collect data on almost everything. The result? Data overload. Unfortunately, few companies leverage the information hidden away in those terabytes of data.
There…

Scott A. Hindle, Douglas C. Fair
We are one year away from the 100th anniversary of the control chart: Walter Shewhart created it in 1924 as an aid to Western Electric’s manufacturing operations. Since it’s almost prehistoric, is it now time to leave the control chart technique—that started out using…

Donald J. Wheeler
In last month’s column, we looked at how process-hyphen-control algorithms work with a process that is subject to occasional upsets. This column will consider how they work with a well-behaved process.
Last month we saw that process adjustments can reduce variation when they are reacting to real…
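A minimal simulation of that comparison, assuming a purely hypothetical adjust-every-time rule rather than the specific algorithms discussed in the column: adjusting a well-behaved process in response to nothing but noise inflates the variation instead of reducing it.

```python
import numpy as np

rng = np.random.default_rng(1)
target, sigma, n = 10.0, 1.0, 10_000

# A well-behaved process: nothing but common-cause noise around the target.
noise = rng.normal(0.0, sigma, n)
hands_off = target + noise

# Hypothetical adjustment rule, for illustration only: after each observation,
# move the process aim by the negative of the deviation just observed.
adjusted = np.empty(n)
aim = target
for i in range(n):
    adjusted[i] = aim + noise[i]
    aim -= adjusted[i] - target

print("std with no adjustment:", hands_off.std(ddof=1))
print("std with adjustment:   ", adjusted.std(ddof=1))  # roughly 1.4 times larger
```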

Douglas C. Fair, Scott A. Hindle
Today’s manufacturing systems have become more automated, data-driven, and sophisticated than ever before. Visit any modern shop floor and you’ll find a plethora of IT systems, HMIs, PLC data streams, machine controllers, engineering support, and other digital initiatives, all vying to improve…

Donald J. Wheeler
Many articles and some textbooks describe process behavior charts as a manual technique for keeping a process on target. For example, in Norway the words used for SPC (statistical process control) translate as “statistical process steering.” Here, we’ll look at using a process behavior chart to…

Donald J. Wheeler
As we learned last month, the precision to tolerance ratio is a trigonometric function multiplied by a scalar constant. This means that it should never be interpreted as a proportion or percentage. Yet the simple P/T ratio is being used, and misunderstood, all over the world. So how can we properly…
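For the algebra behind that description (a paraphrase using the usual definitions, not a quotation from the column), write σe for measurement error, σp for product variation, and σx² = σp² + σe². Then

\[
\frac{P}{T} \;=\; \frac{6\,\sigma_e}{USL - LSL}
\;=\; \frac{6\,\sigma_x}{USL - LSL}\,\sqrt{1-\rho}
\;=\; \frac{6\,\sigma_x}{USL - LSL}\,\sin\theta ,
\]

where ρ = σp²/σx² is the intraclass correlation and θ is defined by cos θ = √ρ, so the P/T ratio is a scalar constant multiplied by a trigonometric function rather than a simple proportion.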

Harish Jose
The success run theorem is one of the most common statistical rationales for sample sizes used for attribute data.
It is typically stated in the form:
Having zero failures out of 22 samples, we can be 90% confident that the process is at least 90% reliable (or at least 90% of the population is conforming).
Or…
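The 22 comes from a short calculation, assuming independent pass/fail trials: with zero failures in n samples, confidence C that the reliability is at least R requires

\[
R^{\,n} \le 1 - C
\quad\Longrightarrow\quad
n \ge \frac{\ln(1-C)}{\ln(R)} = \frac{\ln(0.10)}{\ln(0.90)} \approx 21.9 ,
\]

so n = 22 gives a 90% confidence, 90% reliability statement.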

Donald J. Wheeler
A simple approach for quantifying measurement error that has been around for over 200 years has recently been packaged as a “Type 1 repeatability study.” This column considers various questions surrounding this technique.
A Type 1 repeatability study starts with a “standard” item. This standard…
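As a rough sketch of the arithmetic involved, using one common convention for the Cg and Cgk statistics (an assumption here, not necessarily the column’s treatment), with illustrative numbers:

```python
import numpy as np

# Hypothetical repeated measurements of a known standard (illustrative values).
reference = 10.000                      # accepted value of the standard
tolerance = 0.200                       # USL - LSL for the characteristic
x = np.array([10.003, 9.998, 10.001, 10.004, 9.997,
              10.002, 10.000, 9.999, 10.003, 10.001,
              9.998, 10.002, 10.000, 10.001, 9.999])

mean = x.mean()
s = x.std(ddof=1)                       # repeatability standard deviation
bias = mean - reference

# One common convention: 20% of the tolerance compared against a
# six-sigma spread of the repeated readings.
Cg  = (0.2 * tolerance) / (6 * s)
Cgk = (0.1 * tolerance - abs(bias)) / (3 * s)

print(f"bias = {bias:+.4f}, s = {s:.4f}, Cg = {Cg:.2f}, Cgk = {Cgk:.2f}")
```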

Kari Miller
Since 2010, citations for insufficient corrective and preventive action (CAPA) procedures have topped the list of the most common issues found in U.S. Food and Drug Administration (FDA) inspections, particularly in the medical device industry. Issues can occur while…

Donald J. Wheeler
Chunky data can distort your computations and result in an erroneous interpretation of your data. This column explains the signs of chunky data, outlines the nature of the problem that causes it, and suggests what to do when it occurs.
When the measurement increments used are too large for the job…
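A minimal sketch of one way to screen for the problem on an XmR chart. The data and the measurement increment are illustrative assumptions; the column gives the actual guideline for how few possible range values signal chunkiness.

```python
import numpy as np

def possible_mr_values(values, increment):
    """Count how many possible moving-range values fit below the upper range limit.

    A small count (on the order of three or four) is the usual sign of chunky
    data; see the column for the exact guideline. `increment` is the measurement
    increment, i.e., the smallest step between recorded values.
    """
    x = np.asarray(values, dtype=float)
    mr = np.abs(np.diff(x))                 # moving ranges
    url = 3.268 * mr.mean()                 # upper range limit for moving ranges
    # Possible moving ranges are 0, increment, 2*increment, ...
    return int(np.floor(url / increment)) + 1

data = [10.2, 10.4, 10.2, 10.6, 10.4, 10.2, 10.4, 10.6, 10.2, 10.4]
print(possible_mr_values(data, increment=0.2))
```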

Donald J. Wheeler
The keys to effective process behavior charts are rational sampling and rational subgrouping. As implied by the word rational, we must use our knowledge of the context to collect and organize data in a way that answers the interesting questions. This column will show the role that sample frequency…

Harish Jose
I’m looking at a topic in statistics. I’ve had a lot of feedback on one of my earlier posts on OC curves and how one can use them to generate a reliability/confidence statement based on sample size (n) and the number of rejects (c). I provided an Excel spreadsheet that calculates the reliability/confidence…
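A minimal Python equivalent of that kind of calculation, sketching the usual binomial OC-curve argument (not the spreadsheet itself; the function name and example plans are illustrative):

```python
from scipy.stats import binom

def confidence(n, c, reliability):
    """Confidence that the process is at least `reliability` reliable,
    given a plan that accepts up to c rejects in n samples.

    If the true nonconforming fraction were 1 - reliability, the chance of
    seeing c or fewer rejects is binom.cdf(c, n, 1 - reliability); the
    stated confidence is its complement.
    """
    return 1.0 - binom.cdf(c, n, 1.0 - reliability)

print(confidence(n=22, c=0, reliability=0.90))   # roughly 0.90
print(confidence(n=45, c=1, reliability=0.95))   # illustrative second plan
```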

Donald J. Wheeler
Ever since 1935, people have been trying to fine-tune Walter Shewhart’s simple but sophisticated process behavior chart. One of these embellishments is the use of two-sigma “warning” limits. This column will consider the theoretical and practical consequences of using two-sigma warning limits.…
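For scale, assuming a normal model for a predictable process (only an approximation):

\[
P\!\left(|X-\mu| > 2\sigma\right) \approx 0.0455 ,
\qquad
P\!\left(|X-\mu| > 3\sigma\right) \approx 0.0027 ,
\]

so a single point gives a false alarm roughly 17 times as often with two-sigma limits as with three-sigma limits.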

Donald J. Wheeler
As the foundations of modern science were being laid, the need for a model for the uncertainty in a measurement became apparent. Here we look at the development of the theory of measurement error and discover its consequences.
The problem may be expressed as follows: Repeated measurements of one…

Donald J. Wheeler, Al Pfadt
In memory of Al Pfadt, Ph.D.
This article is a reprint of a paper Al and I presented several years ago. It illustrates how the interpretation and visual display of data in their context can facilitate discovery. Al’s integrated approach is a classic example not only for clinical practitioners but…

Donald J. Wheeler
The shape parameters for a probability model are called skewness and kurtosis. While skewness at least sounds like something we might understand, kurtosis simply sounds like jargon. Here we’ll use some examples to visualize just what happens to a probability model as kurtosis increases. Then we’ll…
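For reference, the kurtosis of a probability model with mean μ and standard deviation σ is the standardized fourth moment,

\[
\text{kurtosis} \;=\; \frac{E\!\left[(X-\mu)^4\right]}{\sigma^4} ,
\]

which equals 3 for the normal model (“excess kurtosis” subtracts that 3).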

Alan Metzel
Almost seven years ago, Quality Digest presented a short article by Matthew Barsalou titled “A Worksheet for Ishikawa Diagrams.” At the time, I commented on enhancements that would provide greater granularity. Indicating that he would probably have little time to devote to such a project,…

Donald J. Wheeler
The computation for skewness does not fully describe everything that happens as a distribution becomes more skewed. Here we shall use some examples to visualize just what skewness does—and does not—involve.
The mean for a probability model describes the balance point. The standard deviation…
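For reference, the skewness being discussed is the standardized third moment,

\[
\text{skewness} \;=\; \frac{E\!\left[(X-\mu)^3\right]}{\sigma^3} ,
\]

which is zero for any symmetric probability model.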

Tony Boobier
Does your use of probabilities confuse your audience? Sometimes even using numbers can be misleading. The notion of a 1-in-100-year flood doesn’t prevent the possibility of flooding occurring in consecutive years. This description is no more than a statistical device for explaining the likelihood…
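A quick calculation, assuming independent years purely to illustrate the point:

\[
P(\text{floods in two given consecutive years}) = 0.01^2 = 0.0001 ,
\qquad
P(\text{at least one flood in 30 years}) = 1 - 0.99^{30} \approx 0.26 .
\]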

Donald J. Wheeler
There are four major questions in statistics. These can be listed under the headings of description, probability, inference, and homogeneity. An appreciation of the relationships between these four areas is essential for successful data analysis. This column outlines these relationships and…

Donald J. Wheeler
The cumulative sum (or Cusum) technique is occasionally offered as an alternative to process behavior charts, even though they have completely different objectives. Process behavior charts characterize whether a process has been operated predictably. Cusums assume that the process is already being…
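For reference, the standard tabular form of the cusum (not necessarily the notation used in the column) tracks, for a target T and reference value k,

\[
C_i^{+} = \max\!\left(0,\; x_i - (T + k) + C_{i-1}^{+}\right),
\qquad
C_i^{-} = \max\!\left(0,\; (T - k) - x_i + C_{i-1}^{-}\right),
\]

with a signal when either sum exceeds a decision interval h; k ≈ 0.5σ and h ≈ 4σ to 5σ are common choices.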

Donald J. Wheeler
Last month we found that capability and performance indexes have no inherent preference for one probability model over another. However, whenever we seek to convert these indexes into fractions of nonconforming product, we have to make use of some probability model. Here, we’ll look at the role…
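A minimal sketch of that conversion when a normal probability model is assumed (the function name and example numbers are illustrative, not from the column):

```python
from scipy.stats import norm

def fraction_nonconforming(mean, sigma, lsl, usl):
    """Fraction outside the specs under a normal model for a predictable process.

    Equivalently, p = Phi(-3*Cpl) + Phi(-3*Cpu), where
    Cpl = (mean - lsl) / (3 * sigma) and Cpu = (usl - mean) / (3 * sigma).
    """
    return norm.cdf(lsl, mean, sigma) + norm.sf(usl, mean, sigma)

# Example: specs placed three sigma on either side of the mean -> about 0.27%.
print(fraction_nonconforming(mean=10.0, sigma=0.5, lsl=8.5, usl=11.5))
```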

Donald J. Wheeler
Many people have been taught that capability indexes only apply to “normally distributed data.” This article will consider the various components of this idea to shed some light on what has, all too often, been based on superstition.
Capability indexes are statistics
Capability and performance…
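For reference, the usual formulas are

\[
C_p = \frac{USL - LSL}{6\,\hat\sigma_{within}} ,
\qquad
C_{pk} = \frac{\min\!\left(USL - \bar{X},\; \bar{X} - LSL\right)}{3\,\hat\sigma_{within}} ,
\]

with the performance indexes Pp and Ppk computed the same way using the global standard deviation statistic. Nothing in these computations requires any particular probability model.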

Donald J. Wheeler
Walter Shewhart made a distinction between common causes and assignable causes based on the effects they have upon the process outcomes. While Shewhart’s distinction predated the arrival of chaos theory by 40 years, chaos theory provides a way to understand what Shewhart was talking about.…