Content By Donald J. Wheeler

By: Donald J. Wheeler

Lean production is built on the explicit assumption that each step is operated predictably and capably. Predictable operation can only be achieved and maintained by using process behavior charts. But short production runs and multiple products make it hard to see how to use process behavior charts in a lean environment. So what can be done? This paper will show how to use a zed chart for individual values to track your process while it is making different products.
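As a rough sketch of what that looks like in practice, the zed transformation converts each individual value into a dimensionless zed value by subtracting the product's target and dividing by an estimate of that product's Sigma(X). The snippet below assumes Sigma(X) is estimated from each product's own moving ranges, and its data values are hypothetical placeholders rather than the Unit 12 data.

```python
import numpy as np

def sigma_x(values):
    """Estimate Sigma(X) as the average moving range divided by d2 = 1.128."""
    return np.abs(np.diff(values)).mean() / 1.128

def zed(values, target):
    """Convert individual values into zed values: (X - target) / Sigma(X)."""
    v = np.asarray(values, dtype=float)
    return (v - target) / sigma_x(v)

# Hypothetical short runs (placeholders, not the figure 1 data)
product_1201 = [19.4, 18.7, 19.1, 19.6, 18.9]   # target value 19.0
product_1202 = [ 8.2,  7.8,  8.1,  7.9,  8.3]   # target value 8.0

# The zed values from both products can share one chart, plotted in
# production order, with a central line at 0 and limits at -3 and +3.
for z in np.concatenate([zed(product_1201, 19.0), zed(product_1202, 8.0)]):
    print(f"zed = {z:6.2f}", "<-- outside limits" if abs(z) > 3 else "")
```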

Unit 12

Unit 12 is used to make Products 1201 and 1202. The final characteristic for Product 1201 has a target value of 19.0, while the final characteristic for Product 1202 has a target value of 8.0. Batches 43 through 72 contain short runs of both products.


Figure 1: Data for Unit 12

Of course, a naive approach would be to simply place the data from Unit 12 on an XmR chart, as shown in figure 2.
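For reference, the arithmetic behind an XmR chart for individual values is simple enough to sketch. The mixed stream below uses placeholder values rather than the batches in figure 1, and with the two targets about eleven units apart, the changes from one product to the other, rather than routine variation, will dominate the moving ranges.

```python
import numpy as np

def xmr_limits(values):
    """Natural process limits for the X chart plus the upper range limit
    for the moving range chart of an XmR chart."""
    x = np.asarray(values, dtype=float)
    mr_bar = np.abs(np.diff(x)).mean()            # average moving range
    return {"LNPL": x.mean() - 2.66 * mr_bar,     # lower natural process limit
            "CL":   x.mean(),                     # central line
            "UNPL": x.mean() + 2.66 * mr_bar,     # upper natural process limit
            "URL":  3.268 * mr_bar}               # upper range limit

# Placeholder values alternating between the two products
print(xmr_limits([19.4, 8.2, 18.7, 7.8, 19.1, 8.1, 19.6, 7.9]))
```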

By: Donald J. Wheeler

In the past two months we have looked at how three-sigma limits work with skewed data. This column finds the power functions for the probability limits of phase two charts with skewed probability models, and compares the trade-offs made by three-sigma limits with the trade-offs made by the probability limits.

Phase two charts

Ever since 1935, there have been two approaches to finding limits for process behavior charts. There is Walter Shewhart’s approach using fixed-width limits, and there is Egon Pearson’s fixed-coverage approach based on probability models. (For more on these two schools of thought, see “The Normality Myth,” Quality Digest, Sept. 19, 2019.) About the year 2000, some of my fellow statisticians tried to reconcile these two approaches by talking about “phase one and phase two control charts.”

Phase one charts use Shewhart’s fixed-width, three-sigma limits. These charts are used to help identify assignable causes of exceptional variation so that the process can be adjusted or fixed as needed. Then, under the assumption that once a process is fixed it will stay fixed, it is time for phase two.
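As a sketch of the computational difference between the two approaches, the snippet below assumes the common convention of placing 0.135 percent in each tail for the fixed-coverage probability limits, and uses a gamma distribution purely as an example of a skewed model; it is an illustration, not the computation used in this column.

```python
from scipy.stats import gamma

# A skewed probability model used purely as an example (gamma, shape = 2).
model = gamma(a=2.0, scale=1.0)
mean, sigma = model.mean(), model.std()

# Phase one: fixed-width, three-sigma limits around the mean.
three_sigma_limits = (mean - 3 * sigma, mean + 3 * sigma)

# Phase two: fixed-coverage probability limits, here the 0.135% and 99.865%
# quantiles of the assumed model, giving a nominal coverage of 99.73%.
probability_limits = (model.ppf(0.00135), model.ppf(0.99865))

print("three-sigma limits: ", three_sigma_limits)
print("probability limits: ", probability_limits)
```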

By: Donald J. Wheeler

Last month I looked at how the fixed-width limits of a process behavior chart filter out virtually all of the routine variation regardless of the shape of the histogram. In this column I will look at how effectively these fixed-width limits detect signals of economic importance when skewed probability models are used to compute the power function.

Power functions

A power function provides a mathematical model for the ability of a statistical procedure to detect signals. Here we shall use power functions to define the theoretical probabilities that an X chart will detect different-sized shifts in the process average. To compute a power function we begin with a probability model and a specified shift in location for that model. Figure 1 shows these elements for a traditional standard normal probability model.


Figure 1: Normal model with a 1.0-sigma shift in location
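As a reference point under the normal model of figure 1 (the skewed models come later, and this is not the column's own computation), the probability that a single point falls beyond Shewhart's three-sigma limits after a shift of delta sigma, and the probability of at least one such point within k points, can be sketched as:

```python
from scipy.stats import norm

def prob_point_beyond_limits(shift, limit=3.0):
    """P(a single point falls beyond fixed-width limits) after the process
    average shifts by `shift` sigma, assuming a normal probability model."""
    return norm.sf(limit - shift) + norm.cdf(-limit - shift)

def power_within_k(shift, k, limit=3.0):
    """P(at least one of k successive points falls beyond the limits)."""
    p = prob_point_beyond_limits(shift, limit)
    return 1.0 - (1.0 - p) ** k

print(round(prob_point_beyond_limits(1.0), 4))   # about 0.0228 per point for a 1.0-sigma shift
print(round(power_within_k(1.0, 10), 2))         # about 0.21 within 10 points
```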


By: Donald J. Wheeler

The oldest myth about process behavior charts is the myth that they require “normally distributed data.” If you have ever heard this idea, or if you have ever taught this to others, then you need to read this article.

While this myth dates back to 1935, and while Walter Shewhart exposed this idea as a myth in 1938, it continually reappears in various forms even today. For example, a white paper put out by a software company recently called the process behavior chart a “normality control chart.” And a blurb for a workshop advertised the Western Electric zone tests as “depending upon a normal distribution.”

As I consider how these myths are perpetuated, I do not detect any malicious intent, just unconscious confusion. Nevertheless, those who continue to spread these myths fall into three groups: one group spreads them because of special interests, another because of unexamined assumptions, and a third because they were taught that the myths are true.

By: Donald J. Wheeler

Recently I have had several questions about which bias correction factors to use when working with industrial data. Some books use one formula, other books use another, and the software may use a third formula. Which one is right? This article will help you find an answer.

Before we can meaningfully discuss different bias correction factors we need to understand what they do. To this end we must make a distinction between parameters for a probability model and statistics computed from the data. So we shall go back to the origin of our data and move forward.

A statistic is simply a function of the data. Data plus arithmetic equals a statistic. Since arithmetic cannot create meaning, it is the context for the data that gives specific meaning to any statistic. Thus, we will have to begin with the progression from a physical process to a probability model, and then we can look at how the notion of a probability model frames the way we use our statistics.
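Before following that progression, a small numerical sketch (with hypothetical subgroups of size n = 5, not data from this article) may help show what the bias correction factors do: the average range and the average standard deviation are statistics, and dividing them by d2 and c4 converts them into estimates of the same dispersion parameter.

```python
import numpy as np

# Hypothetical subgroups of size n = 5 (placeholders only).
subgroups = np.array([
    [10.1,  9.8, 10.3, 10.0,  9.9],
    [10.2, 10.4,  9.7, 10.1, 10.0],
    [ 9.9, 10.0, 10.2,  9.8, 10.3],
])

r_bar = np.ptp(subgroups, axis=1).mean()         # statistic: average subgroup range
s_bar = subgroups.std(axis=1, ddof=1).mean()     # statistic: average subgroup std. deviation

d2, c4 = 2.326, 0.9400                           # bias correction factors for n = 5

print("Sigma(X) estimated from ranges:             ", r_bar / d2)
print("Sigma(X) estimated from standard deviations:", s_bar / c4)
```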

Assume that we have a process that is producing some product, and assume that periodic checks are made upon some product characteristic. These checks will result in a sequence of values that could be written as:

By: Donald J. Wheeler

During the past three months James Beagle and I presented columns that made extensive use of analysis of means techniques. Since these techniques may be new to some, this column explains when to use each technique and where to find tables of the appropriate scaling factors.

In 1967, Ellis R. Ott published his analysis of means technique (ANOM) for comparing treatment averages with their grand average. This technique is a generalized version of the average and range chart. However, the assumption that allows this generalization also imposes a restriction on where the technique can be used. The generalization allows us to compute limits with a fixed overall alpha level (the user-specified risk of a false alarm). The restriction is that we can only use ANOM for the one-time analysis of a finite amount of data (such as occurs in experimental studies).
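For readers who want to see the mechanics, one common form of Ott's decision limits can be sketched as follows. The treatment averages, pooled standard deviation, and critical value h below are placeholders; the actual h must come from the published ANOM tables for your alpha level, number of treatments, and degrees of freedom.

```python
import numpy as np

def anom_limits(treatment_means, n, pooled_sd, h):
    """Decision limits for comparing k treatment averages with their grand average.

    h is the ANOM critical value for the chosen alpha level, k treatments,
    and k*(n-1) degrees of freedom; it comes from published ANOM tables.
    """
    k = len(treatment_means)
    grand = float(np.mean(treatment_means))
    half_width = h * pooled_sd * np.sqrt((k - 1) / (k * n))
    return grand - half_width, grand, grand + half_width

# Hypothetical experiment: k = 4 treatments, n = 5 observations each.
means = [10.2, 9.6, 10.9, 9.9]
lower, grand, upper = anom_limits(means, n=5, pooled_sd=0.8, h=2.85)  # h is a placeholder
print((round(lower, 2), round(grand, 2), round(upper, 2)))
print([m for m in means if m < lower or m > upper])   # treatments detectably different
```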

By: James Beagle III, Donald J. Wheeler

In Parts One and Two we defined the equivalence of instruments in terms of bias and measurement error based on studies using a single standard. Here we look at comparing instruments for differences in bias or differences in measurement error while using multiple standards.

By: Donald J. Wheeler, James Beagle III

Last month we provided an operational definition of when measurement systems are equivalent in terms of bias. Here we will look at comparing the within-instrument measurement error between two or more systems.

Once again we must emphasize that it makes no sense to seek to compare measurement systems that do not display a reasonable degree of consistency. Consistency must be demonstrated; it cannot be assumed, and a consistency chart is the simplest way to demonstrate it.

By: Donald J. Wheeler, James Beagle III

As soon as we have two or more instruments for measuring the same property the question of equivalence raises its head. This paper provides an operational definition of when two or more instruments are equivalent in practice. 

Churchill Eisenhart, Ph.D., while working at the National Bureau of Standards in 1963, wrote: “Until a measurement process has been ‘debugged’ to the extent that it has attained a state of statistical control it cannot be regarded, in any logical sense, as measuring anything at all.” Before we begin to talk about the equivalence of measurement systems we need to know whether we have yardsticks or rubber rulers. And the easiest way to answer this question is to use a consistency chart.
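A consistency chart is essentially an XmR chart for repeated measurements of the same standard, so a minimal check can be sketched as follows; the readings below are hypothetical.

```python
import numpy as np

# Hypothetical repeated measurements of one standard on one instrument.
readings = np.array([5.03, 5.01, 5.04, 4.99, 5.02, 5.00, 5.03, 5.01])

mr_bar = np.abs(np.diff(readings)).mean()        # average moving range
center = readings.mean()                         # central line
limits = (center - 2.66 * mr_bar, center + 2.66 * mr_bar)

consistent = np.all((readings > limits[0]) & (readings < limits[1]))
print(limits, "consistent so far" if consistent else "inconsistent")
```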

By: Donald J. Wheeler

Managers the world over want to know if things are “in control.” This usually is taken to mean that the process is producing 100-percent conforming product, and to this end an emphasis is placed upon having a good capability or performance index. But a good index by itself does not tell the whole story. So, if you want to learn how to be sure that you are shipping 100-percent conforming product, read on.

Capability and performance indexes

There are four capability and performance indexes that are in common use today. While many other ratios have been proposed, these four indexes effectively summarize the relationship between a process and the product specifications.

The capability ratio uses the difference between the watershed specifications to define the space available and compares this with the generic space required by any process that is operated with minimum variance. This generic space is computed as six times an appropriate within-subgroup measure of dispersion, Sigma(X).
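In symbols, if USL and LSL denote the watershed specification limits, this is the familiar capability ratio:

Cp = (USL - LSL) / (6 x Sigma(X))

So, for a hypothetical process whose watershed specifications are 12 units apart and whose Sigma(X) is 1.5 units, Cp = 12 / 9 = 1.33.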