Now There’s a Voice From the Past
Recently I was asked to comment on an excellent summary of Dorian Shainin’s work put together by Professor Jock MacKay of the University of Waterloo. I chuckled as MacKay recalled the “exaggerated claims and hyperbole” of the results attributed to Shainin’s methods and his infamous “precontrol,” a discussion of which would cause fist fights.
Why haven’t we heard much of his methods since the Six Sigma/lean craze took hold? Could it be that customers who are aware of those philosophies aren’t willing to accept product that’s merely “good enough”? Shainin was obviously well-respected by many, accomplished much, and came up with a solution that worked relatively well. But has his time passed--along with the underlying conditions that made his methods successful?
His Shainin System (SS) was developed for, and is best suited to, problem solving for medium- to high-volume processes--mostly in parts and assembly operations--where data are cheaply available, statistical methods are widely used, intervention in the process is difficult, and “conforming to specification” thinking is the norm.
Shainin’s basic theory is that of a “dominant cause,” but it seems to rely on an ad hoc accompanying set of tools that, to me, are simplistic, naïve, and overly prescriptive. However, they’re packaged well with slick names (red x, pink x, pale-pink x, green y) and a very seductive simplicity that relies mainly on observational studies and small, off-line experiments, which is fine. Just don’t expect deep answers. SS will get the product out the door, and customers, for the most part, will probably be satisfied--except, maybe, those who might be trying to implement Six Sigma or lean.
The Shainin System relies heavily on observational studies, which isn’t necessarily a bad thing if they’re planned well. Whether they qualify, as his system alleges, as “statistically designed experiments” is open to question. In fact, a hidden benefit might be that it stops a lot of impulsive, poor experimentation. However, nowhere in his technique of “progressive search” did I hear mention of the simple tool of plotting the observational data in their naturally occurring time order (in essence, a process capability study) and how this allows one to mine a hidden wealth of information--if planned correctly. Shainin’s tendency seems to be to jump right to a histogram. You need that green y, you know… which would be invalid unless the process were in statistical control.
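The kind of time-ordered look advocated here can be sketched with an individuals (XmR) chart, which flags points outside the natural process limits before anyone reaches for a histogram. Everything below--the data, the function names--is illustrative, not part of the Shainin System:

```python
# Minimal sketch of examining observational data in its naturally
# occurring time order via an individuals (XmR) chart, rather than
# jumping straight to a histogram. Data and names are illustrative.

def xmr_limits(data):
    """Center line and natural process limits for an individuals chart.

    Uses the standard XmR constant 2.66 (= 3 / 1.128, the d2 value
    for moving ranges of size 2).
    """
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def out_of_control_points(data):
    """Indices of points falling outside the natural process limits."""
    _, lower, upper = xmr_limits(data)
    return [i for i, x in enumerate(data) if x < lower or x > upper]

if __name__ == "__main__":
    # Hypothetical measurements in time order; the spike at index 7
    # signals a special cause that a histogram alone would hide.
    series = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 14.5, 10.1, 9.9]
    print(out_of_control_points(series))  # prints [7]
```

A histogram of this series would simply show one fat tail; the time-order view pinpoints *when* the process shifted, which is exactly the information a progressive search needs.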
Speaking of which, I didn’t see any mention of the concept of statistical control. Understanding Statistical Process Control, by Donald J. Wheeler and David S. Chambers (SPC Press Inc., 1992), is second to none for planning an observational study to evaluate one’s process and examine the myriad sources of underlying variation.
If you’d like to incorporate specification thinking into it, then use their four states of a process:
• Ideal state--in statistical control (i.e., stable and predictable) and meeting specifications
• Threshold state--in statistical control but producing some out-of-specification product
• Brink of chaos--not in statistical control but producing all product within specification
• Chaos--not in statistical control and producing out-of-specification product
Note that a key output of the evaluation is an assessment of your current state of statistical control. This also provides a baseline against which one can measure the effects of interventions.
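Once those two judgments have been made from a control chart--is the process in statistical control, and is its product within specification--the four states reduce to a simple lookup. This sketch, names and all, is purely illustrative:

```python
# A minimal sketch of Wheeler and Chambers' four states, assuming
# you have already judged (from a control chart) whether the process
# is in statistical control and whether all product meets spec.

def process_state(in_control: bool, all_in_spec: bool) -> str:
    """Map the two yes/no judgments to one of the four states."""
    if in_control and all_in_spec:
        return "ideal state"
    if in_control:
        return "threshold state"
    if all_in_spec:
        return "brink of chaos"
    return "chaos"

if __name__ == "__main__":
    # In spec today, but not in statistical control: living on borrowed time.
    print(process_state(False, True))  # prints brink of chaos
```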
Experimental designs eventually factor heavily into the fourth and fifth stages of the SS algorithm, and it’s here that I was truly perplexed. In spite of itself, SS would be set up for mild-to-moderate success if it utilized the potential power of a three-to-five-variable response surface design, which would provide an indisputable road map of the process terrain and show the most stable regions within which to operate, all in a very reasonable number of experiments. (See my article, “Using Design of Experiments as a Process Road Map,” in the February 2006 issue of Quality Digest.) Yet Shainin’s preferred strategy is a series of small two-factor experiments that are naïve in their consideration of potential interactions and neglect to take advantage of the hidden replication and sequential nature inherent in factorial and response surface designs. If Shainin were so statistically enlightened, how could he have missed that?
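The “hidden replication” being contrasted with Shainin’s small two-factor experiments can be illustrated with a two-level full factorial: in a 2³ design, every main effect is estimated from all eight runs at once, and the interactions come along free. The design, response function, and helper names below are hypothetical, not drawn from SS:

```python
# Sketch of hidden replication in a two-level full factorial design:
# each effect (main or interaction) is estimated from every run.
from itertools import product

def full_factorial(k):
    """All 2^k runs of a two-level design in coded (-1/+1) units."""
    return list(product([-1, 1], repeat=k))

def effect(runs, responses, cols):
    """Estimate the effect for the factor index(es) in `cols`.

    One index gives a main effect; several give an interaction.
    effect = (sum of response * contrast) / (n / 2), so all n runs
    contribute to every estimate -- the hidden replication.
    """
    def contrast(run):
        c = 1
        for j in cols:
            c *= run[j]
        return c
    total = sum(y * contrast(run) for run, y in zip(runs, responses))
    return 2 * total / len(runs)

if __name__ == "__main__":
    runs = full_factorial(3)
    # Hypothetical noiseless response: y = 5 + 2A + 3B + 1.5AB
    ys = [5 + 2 * a + 3 * b + 1.5 * a * b for a, b, c in runs]
    print(effect(runs, ys, [0]))     # A main effect: prints 4.0
    print(effect(runs, ys, [0, 1]))  # AB interaction: prints 3.0
```

A sequence of separate two-factor experiments would need dedicated runs for each comparison and could miss the AB interaction entirely; the factorial recovers it from the same eight runs.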
The next part of the algorithm covers the use of precontrol, which can be dismissed with the following comment by MacKay: “In SS language, if the dominant cause resides in the part-to-part family, no form of feedback control can be effective in reducing variation. If this is the case, then precontrol will not be effective, and the algorithm provides no guidance as to how to proceed.” Which is probably most of the time.
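For readers who haven’t encountered precontrol, its zone scheme can be sketched as follows; the function and specification limits are illustrative only, and the full technique layers sampling, qualification, and adjustment rules on top of these zones:

```python
# Sketch of precontrol's zone scheme: the tolerance band is quartered,
# with the middle half as the green zone, the outer quarters as
# yellow, and anything outside specification as red.

def precontrol_zone(x, lsl, usl):
    """Classify one measurement against quartered tolerance limits."""
    quarter = (usl - lsl) / 4.0
    if lsl + quarter <= x <= usl - quarter:
        return "green"
    if lsl <= x <= usl:
        return "yellow"
    return "red"

if __name__ == "__main__":
    # Hypothetical spec limits of 0 and 8: the green zone is [2, 6].
    print([precontrol_zone(x, 0, 8) for x in (4, 1, 9)])
    # prints ['green', 'yellow', 'red']
```

Note that the scheme watches only the output against its tolerance--which is exactly why, per MacKay’s comment, it offers no leverage when the dominant cause lives in the part-to-part family.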
The need for simple answers continues. We may be statistically more sophisticated, but are we any better off?
Davis Balestracci is a member of the American Society for Quality and past chair of its statistics division. He would love to wake up your conferences with his dynamic style and unique, entertaining insights into the places where process, statistics, and quality meet. Visit his web site at www.dbharmony.com .