Steve Moore

Quality Insider

The Misuse and Abuse of Process Behavior Charts

Nine common pitfalls to avoid

Published: Thursday, December 6, 2012 - 11:46

More than 30 years ago, I became aware of the power of statistical process control (SPC) and what we used to call control charts (now called process behavior charts). As technical director of the company where I was employed at the time, I promoted the idea of SPC, got upper management “support,” developed training materials, trained people, and waited for wonderful things to happen… and waited… and waited… and waited.

Since that time, I have seen many “improvement efforts,” “programs,” and “initiatives” come in like a lion and go out like a lamb in various organizations. Until about 15 years ago, the lack of portable computer hardware and the scarcity of user-friendly software applications were the two primary barriers to widespread use of SPC. When I got started with SPC in the late 1970s, most of my work had to be done with a No. 2 pencil, a pad of paper, a slide rule, and the appropriate tables of bias factors. No wonder people were reluctant to institute the use of process behavior charts in the real world. It was a lot of work.

Today, anyone with a laptop or desktop computer can generate process behavior charts in seconds, once the data has been entered. (My slide rule cannot compete with this, and thus has not seen the light of day in many years.) The majority of process behavior charts that I’ve seen pinned to bulletin boards or illustrated in monthly reports and company newsletters are useless at best and misleading at worst. The aim of this article is to help you avoid the most common forms of process behavior chart abuse.

Too many data points

One side effect of the data acquisition and computing power we have at our fingertips is that we often want to plot hundreds or thousands of data points. The result is a mess offering little or no insight into the process. Figure 1 shows a chart constructed from important quality data collected eight to 10 times a day over several months. Such a chart is virtually useless except as a report card (maybe). Process behavior charts are most helpful when used by management to aid decision making or by operators in real time to understand when special causes are present and need to be removed or made permanent.

Figure 1: Process behavior chart with too many data points

Too many charts

Virtually any time-ordered data collected from a process can be plotted on a process behavior chart, and with the computing power we now have, it is easy to go wild and post so many charts that nobody pays much attention to them. It is far better to post a few charts representing results that were directly influenced by the intended audience (presumably, the workforce).

Autocorrelated data

Another side effect of today’s data acquisition power is autocorrelation. Autocorrelation occurs when successive data points from time-ordered data are too similar to each other—they are highly correlated. Virtually all data from a production process displays some degree of autocorrelation. Process behavior charts work quite well with data from production processes unless the autocorrelation is severe, with a coefficient of, say, 0.70 or greater.
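As a quick check (not part of the original article), the lag-1 autocorrelation coefficient can be estimated directly from the time-ordered data and compared against the 0.70 rule of thumb. A minimal Python sketch; the function name is illustrative:

```python
def lag1_autocorrelation(data):
    """Estimate the lag-1 autocorrelation of a time-ordered series."""
    n = len(data)
    mean = sum(data) / n
    numerator = sum((data[i] - mean) * (data[i + 1] - mean)
                    for i in range(n - 1))
    denominator = sum((x - mean) ** 2 for x in data)
    return numerator / denominator

# A slowly drifting series is highly autocorrelated, because each
# reading is close to the one before it.
drift = [i * 0.1 for i in range(100)]
print(lag1_autocorrelation(drift))
```

A coefficient near 1.0 suggests the sampling interval is shorter than the time scale on which the process itself can change.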

Sometimes we induce autocorrelation by sampling the process too frequently—more frequently than the process itself can change. For example, suppose your plant has a large chemical reactor requiring a temperature control within a 10° span, and a process behavior chart has been constructed to monitor the reactor’s temperature. An operator checks the temperature every hour and updates the chart to make sure the temperature is stable and within the specified limits. A new process engineer joins the company and decides that with some new software the temperature can be monitored and plotted every 15 seconds. Surely this is a huge advancement in controlling the reactor temperature. No.

When taking the temperature of a large reactor every 15 seconds, the data will be highly autocorrelated. Plotted on a process behavior chart, autocorrelated data create upper and lower natural process limits that are too tight. Why? Because the average moving range from which the limits are calculated will be very small. With limits that are too tight, over-control and increased variation will occur.
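To make the limit calculation concrete, here is a minimal sketch of the standard XmR computation (the 2.660 and 3.268 scaling factors are the usual XmR constants; the function name is mine):

```python
def xmr_limits(data):
    """Natural process limits for an XmR (individuals) chart."""
    x_bar = sum(data) / len(data)
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, len(data))]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "center": x_bar,
        "lcl": x_bar - 2.660 * mr_bar,  # lower natural process limit
        "ucl": x_bar + 2.660 * mr_bar,  # upper natural process limit
        "mr_ucl": 3.268 * mr_bar,       # upper limit for the range chart
    }
```

Because the average moving range sits in both limit formulas, the tiny ranges produced by 15-second sampling squeeze the limits toward the center line—exactly the over-control problem described above.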

Figure 2 is an example of highly autocorrelated data on a process behavior chart. In this chart I show the daily closings of the Dow Jones Industrial Average for 2012. This data is obviously autocorrelated (autocorrelation coefficient = 0.95), partly because each day’s activity starts from the previous day’s final number. Therefore, each daily number is not randomly distributed between two limits.

Figure 2: Process behavior chart of highly autocorrelated data

Irrational subgrouping

Irrational subgrouping is very common. Sampling the temperature of a large reactor every 15 seconds may be irrational subgrouping. Perhaps randomly selecting a data point from a five-minute span at the end of every hour would make more sense and eliminate the problem of autocorrelation.
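The hourly sampling scheme suggested above could be sketched as follows (illustrative Python; the helper name and the 15-second reading interval are assumptions for the example):

```python
import random

def hourly_subsample(readings):
    """From readings taken every 15 seconds, pick one reading at random
    from the last five minutes of each complete hour."""
    per_hour = 3600 // 15   # 240 readings per hour
    per_5min = 300 // 15    # 20 readings per five-minute span
    sampled = []
    for start in range(0, len(readings) - per_hour + 1, per_hour):
        last_five = readings[start + per_hour - per_5min : start + per_hour]
        sampled.append(random.choice(last_five))
    return sampled
```

One point per hour, drawn at random from a short window, keeps the chart’s sampling rate below the rate at which a large reactor’s temperature can actually change.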

In general there are no hard and fast rules to avoid irrational sampling—all processes are different. Sometimes trial and error will help select the right subgrouping. It’s prudent to consider how rapidly the process can change, and by how much, to assure that sampling is not more frequent than the ability of the process to change.

For another example, suppose you manage production where large industrial manufacturing equipment runs 24/7. One of your key metrics is the overall equipment effectiveness (OEE), which is found by the simple formula: % standard rate × % uptime × % first pass quality. A process behavior chart would be very useful in understanding if improvement efforts are truly having an effect or if the OEE is deteriorating.
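The OEE formula above is simple enough to state in a few lines (a sketch; the percentages in the example are made-up values):

```python
def oee(pct_standard_rate, pct_uptime, pct_first_pass_quality):
    """Overall equipment effectiveness, as a percentage."""
    return (pct_standard_rate / 100.0
            * pct_uptime / 100.0
            * pct_first_pass_quality / 100.0) * 100.0

# e.g., 90% of standard rate, 95% uptime, 98% first-pass quality
print(oee(90, 95, 98))
```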

The question becomes, how often do you calculate OEE? Is it calculated once per year, once per quarter, or monthly, weekly, or daily? The answer depends on the ability of the system (and management) to respond to special causes, whether improvements or problems. In general, most companies plot OEE numbers every month. I have seen a few production managers calculate OEE on a weekly basis.

The wrong chart for the job

Some types of process behavior charts require data to be characterized by a particular distribution (i.e., normal, binomial, Poisson). The most common are c charts and u charts, which require the Poisson distribution, and p charts and np charts, which require the binomial distribution. The best rule is to start with the XmR (individuals) chart. This chart requires no distribution assumptions and is considered the “Swiss Army Knife” of process behavior charts. Use the XmR chart and you will almost always be right—almost always. There are no guarantees in the world of statistics.

Chunky data

I once visited an industrial plant that had a very successful safety system in place. Only a few OSHA-recordable injuries occurred each year. Figure 3 represents a process behavior chart that was posted at the employee entrance. Because the data is “chunky,” the chart indicated a special cause when there was none. Data for the XmR chart is chunky when there are fewer than four possible range values (including zero) on the range chart below the upper control limit of that chart.

Figure 3: XmR chart with chunky safety data

Chunky data can often be remedied by looking for a more rational subgrouping of the data. The safety data in figure 3 could be converted from “Recordable Injuries Per Month” to “Man-Hours Worked Between Recordable Injuries” or “Days Between Recordable Injuries.” The moving ranges can then take many more than three possible values, and the data will no longer be chunky.
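The “fewer than four possible range values” criterion can be checked programmatically. A sketch, assuming the measurement increment is the smallest gap between distinct recorded values (the function name is mine):

```python
def is_chunky(data):
    """Check whether XmR data is chunky: fewer than four possible
    moving-range values (including zero) below the range chart's
    upper limit."""
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, len(data))]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    mr_ucl = 3.268 * mr_bar
    # Measurement increment: smallest gap between distinct recorded values.
    distinct = sorted(set(data))
    increment = min(b - a for a, b in zip(distinct, distinct[1:]))
    # Possible range values are 0, increment, 2*increment, ...
    possible_below_ucl = int(mr_ucl / increment) + 1  # +1 counts zero
    return possible_below_ucl < 4

# Monthly injury counts of 0 or 1 yield only a couple of possible
# moving-range values, so the chart is chunky.
print(is_chunky([0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]))
```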

There are other causes of chunky data, such as too much rounding off, measurement tools with inadequate resolution, or the wrong measurement units. As an example, you probably would not control the weights of 6 oz. packets of candy by measuring to the nearest pound. Chunky is good for chocolate, but not for process behavior charts. Donald Wheeler has written more on this in his Quality Digest column, “What Is Chunky Data?”

Too many rules

Many SPC software packages contain a dozen or more rules for out-of-control signals. With enough rules, even a data set from a random number generator can have a false signal frequency that is too high for practical use. A good remedy is to turn off all rules in the software package except the four Western Electric zone rules, which strike a good balance between too many false signals and too many missed signals. Even better, turn off all rules initially and use the grey matter between your ears to examine the process’s behavior in light of your intimate knowledge of the process being studied. Then reconstruct the chart with the Western Electric rules to help determine where signals might exist.
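For reference, the four zone rules can be sketched as a simple detector (illustrative Python, not any particular SPC package’s implementation; the thresholds follow the classic Western Electric rules):

```python
def western_electric_signals(data, center, sigma):
    """Indices where one of the four Western Electric zone rules fires."""
    def beyond(points, mult, side):
        # Count points more than mult*sigma beyond center on one side.
        return sum(1 for p in points if side * (p - center) > mult * sigma)

    signals = []
    for i, x in enumerate(data):
        recent = lambda k: data[max(0, i - k + 1): i + 1]
        for side in (+1, -1):
            if side * (x - center) > 3 * sigma:
                signals.append((i, "rule 1"))  # one point beyond 3 sigma
            elif len(recent(3)) == 3 and beyond(recent(3), 2, side) >= 2:
                signals.append((i, "rule 2"))  # 2 of 3 beyond 2 sigma
            elif len(recent(5)) == 5 and beyond(recent(5), 1, side) >= 4:
                signals.append((i, "rule 3"))  # 4 of 5 beyond 1 sigma
            elif len(recent(8)) == 8 and beyond(recent(8), 0, side) == 8:
                signals.append((i, "rule 4"))  # 8 in a row on one side
            else:
                continue
            break  # record at most one signal per point
    return signals
```

Restricting the software to these four rules keeps the false-alarm rate manageable while still catching sudden excursions and sustained shifts.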

Use of action limits and spec limits

I have often seen the two-sigma limits on process behavior charts labeled as “action limits.” In this context, what they really are is “tampering limits,” which I have written about in a previous article. Walter Shewhart noted several times in his landmark book, Economic Control of Quality of Manufactured Product (ASQ, 1980), that the three-sigma limits provide the most economic decision points for when to look for special causes. Taking action without first knowing if there is a true signal of special-cause variation will increase variation and will itself be a special cause of variation.

Sometimes the specification limits are placed on a process behavior chart and used as natural process limits—a major mistake. While adding the spec limits to an XmR process behavior chart can help visualize the capability of the process to meet the specs, the spec limits themselves have nothing to do with where the process is centered and how much variation is present, and should never be used as the upper and lower control limits.

Lack of annotation

The main purpose of process behavior charts is to gain insight into the underlying process. A chart posted on a bulletin board without annotation to explain process upsets or positive shifts in performance does not help the intended audience (again, presumably the workforce) understand how their efforts affected the process. I once saw a great example at a plant that tracked the efficiency of each production machine. A positive shift in efficiency on one machine was noted on the chart with the words “Task team implemented new changeover procedure—congratulations!”

Conclusion

My aim in this article was to help you avoid common mistakes made when constructing process behavior charts, especially for public view. If carefully done, process behavior charts can be a great tool for communicating key process metrics to a wide audience such as the workforce that ultimately produces those metrics. However, with the computing power most of us now find at our fingertips, new pitfalls to the use and understanding of the charts have become common.

The nine pitfalls I have outlined here are not an exhaustive list. I invite you to provide other examples in the comments section.


About The Author

Steve Moore

After 47 years, Steve Moore is retired from the pulp and paper industry. He is a graduate of North Carolina State University with a pulp and paper degree, and holds a master's degree from the Institute of Paper Chemistry in Appleton, Wisconsin. He has held various research and development, technical, engineering, and manufacturing positions in the paper industry. He has been a student, teacher, and practitioner of statistical methods applied to real-world processes for the past 35 years.

Comments

Another Pitfall - not understanding "important" shifts to detect

I/MR charts are frequently touted on this board as the best overall chart to start with (for continuous data). I prefer to ask the question, “How much of a process shift are we hoping to detect with our specific charting application?” and then select an appropriate chart. Depending on the size of the shift we need to detect (which may depend on current process capability, or the effect of variation on product performance) as well as the natural variation in the process, an individuals chart may be completely inadequate to detect the types of process changes we desire to detect. In these cases, Xbar (if rational samples are possible) or CUSUM charts (among others) may be much more appropriate.

So I suggest that a pitfall of SPC is not understanding or asking what the purpose of the chart even is. How much of a process change are we trying to detect (from a practical perspective)? The chart design should align with these goals.