
Davis Balestracci


Despite 35+ Years of Evidence to the Contrary...

Ingenious ways to mess up factorial designs

Published: Monday, July 18, 2016 - 16:32

Today I want to concentrate on the foundation of what is most commonly taught as design of experiments (DOE)—factorial designs.

Elsewhere I’ve mentioned three of C.M. Hendrix’s “ways to mess up an experiment.” After 35 years of teaching DOE, I’ve concluded that these additional ways to mess up pretty much capture the universal initial class experience I described:
• No comprehensive strategy, i.e., attacking one dependent variable at a time
• Too many experiments concentrated in one region of experimental space, or too few variables
• Attempting to optimize by manipulating one variable at a time
• Failure to appreciate the role of interactions

Regarding interactions, main effects may predominate, but interactions tend to be more the rule than the exception. To explain interaction and the dangers of one-variable-at-a-time, here’s the real tar contour plot again (based on a real situation).

What if one group decided to initially hold the copper sulfate concentration constant at 26 percent and experiment in the 55° to 65° range? From the contour plot, the ultimate “effect” (for the quadratic French curve equation approximation) would be the difference of the tar value at 65° (~14) minus the tar value at 55° (~50): a 36-percent drop in tar as temperature goes from 55° to 65° (with copper sulfate held at 26 percent for all runs). Quite a temperature effect!

What if another group decided to initially hold the copper sulfate concentration constant at 31 percent? Regardless of the temperature, the true tar value is 11 percent: a difference over the range of... zero! What? No temperature effect?

Who’s right? They’re both right. The effect of going from 55° to 65° depends on the specific copper sulfate concentration. This is known as an “interaction.” One-at-a-time experiments can’t detect interactions efficiently; when they do, it’s usually due to luck.
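The two groups’ contradictory conclusions can be reproduced with a small sketch. The response function below is hypothetical, constructed only to match the tar values quoted above (it is not the article’s actual fitted equation), but it shows how an interaction makes the “temperature effect” depend entirely on where copper sulfate is held:

```python
# Hypothetical tar response surface, built to match the numbers in the text:
# tar ~50% at (55 deg, 26%), ~14% at (65 deg, 26%), flat 11% at 31% CuSO4.
# Illustration of an interaction only -- not the article's real equation.
def tar(temp, cuso4):
    # The temperature effect scales with (31 - cuso4): large at 26%, zero at 31%.
    return 11 + (31 - cuso4) / 5 * (39 - 3.6 * (temp - 55))

# Group 1 holds CuSO4 at 26 percent and sees a huge temperature "effect."
effect_at_26 = tar(65, 26) - tar(55, 26)   # -36: tar drops 36 points

# Group 2 holds CuSO4 at 31 percent and sees no temperature effect at all.
effect_at_31 = tar(65, 31) - tar(55, 31)   # 0

print(effect_at_26, effect_at_31)
```

Both effects are “true” for the slice of the surface each group explored; only a design that varies both factors together can reveal that they disagree.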

To complicate matters, the results above are all based on the true values from the contour plot. Each experimental result is this value clouded with the additional process standard deviation (±4), which in this case manifests as ±8–10 on this true value for any condition.

Especially in the case of the second group, interpreting the effect would be merely reacting to the random variation (like the class did during the initial exercise with the nitrite concentration, which had no effect).

To further complicate matters, it wouldn’t be unusual for two temperatures’ results to differ by as much as 14!
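A quick simulation shows why a difference of 14 is plausible pure noise. Assuming (as stated above) a process standard deviation of about 4 and a normal error model, the difference of two independent runs at the same true value has standard deviation 4√2 ≈ 5.7, so 14 is only about 2.5 standard deviations out:

```python
import math
import random

random.seed(1)
sigma = 4.0        # process standard deviation from the text
true_value = 11.0  # a flat "no effect" condition

# The difference of two independent results has sd sigma * sqrt(2) ~ 5.7.
sd_of_difference = sigma * math.sqrt(2)

# Simulate many pairs of runs at the SAME true value and look at the gaps.
diffs = [abs(random.gauss(true_value, sigma) - random.gauss(true_value, sigma))
         for _ in range(10_000)]

print(round(sd_of_difference, 2), max(diffs))
```

In a run of this sketch, plenty of pairs differ by well over 14 even though the true effect is exactly zero, which is precisely the variation people mistake for a real temperature effect.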

People have often come to me with similar results and asked me to fit a high-order polynomial equation. So, if they performed experiments at five different temperatures, they expect a fifth-order polynomial—“because our process is very complicated.”

People hate variation, don’t understand it, and try to model it. Hendrix calls this mess-up “too much emphasis on possible curvature,” i.e., a belief in complexity rather than simplicity. It’s what I sometimes call “I have the most complicated process in the world” syndrome.

Enter factorial designs

By running the eight points (2 × 2 × 2) formed by the cube of all possible “low-high” combinations of the three variables, one can obtain the three main effects and the three interactions of the quadratic French curve equation approximation, in fewer experiments. But remember: You have to finish all eight runs to benefit.

In this case of three variables, eight experiments do the work of 12, with the additional benefit of efficiently getting all three interactions.
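As a sketch of how the eight-run analysis works: code each variable’s low and high levels as −1 and +1; each main effect and two-factor interaction is then just the difference of the average response at +1 versus −1 along the appropriate contrast column. The response values below are hypothetical placeholders (not data from the tar example):

```python
from itertools import product

# A 2 x 2 x 2 full factorial in coded units (-1 = low, +1 = high).
runs = list(product([-1, 1], repeat=3))   # the eight corner points of the cube
y    = [50, 44, 30, 18, 46, 38, 24, 11]   # hypothetical responses, in run order

def effect(signs):
    """Average of y where the contrast is +1, minus the average where it is -1."""
    plus  = [yi for s, yi in zip(signs, y) if s > 0]
    minus = [yi for s, yi in zip(signs, y) if s < 0]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

# Main effects: the contrast column is the factor's own coded level.
main = {name: effect([r[i] for r in runs]) for i, name in enumerate("ABC")}

# Two-factor interactions: the contrast column is the product of two levels.
inter = {a + b: effect([r[i] * r[j] for r in runs])
         for (i, a), (j, b) in [((0, "A"), (1, "B")),
                                ((0, "A"), (2, "C")),
                                ((1, "B"), (2, "C"))]}

print(main, inter)
```

The same eight responses yield all three main effects and all three two-factor interactions, which is the efficiency the text is pointing at.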

There’s a common tendency for people to think:
• Two variables? Run a 2 × 2 factorial (four experiments). “We can study the other variables later.” (What did I say about interactions?)
• Three variables? Run a 2 × 2 × 2 factorial (eight experiments). “We can study the other variables later.” (What did I say about interactions?)
• Four variables? Run a 2 × 2 × 2 × 2? “No way my boss would let me do 16 experiments!”

The analysis? Their handy computer package calculates the main effects and interactions, then gives the p-value of each result to determine statistical significance. Many come to me expecting my blessing of their analysis. I then do a couple of calculations in my head and ask a question that usually gets the incredulous response: “What do you mean, ‘Repeat the design?’”

More about that next time, which will address the famous question, “How many experiments should I run?”  (Answer:  It depends!)


About The Author


Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.


Messing with Production

I do wonder, with such an abysmal general lack of understanding of control charts, what a mess folks must be making of production with ANOVA. For Pete’s sake, many still believe in the uncontrolled process drift/shift of ±1.5 sigma in the “long term” of 25 samples. With this happening, there are certainly more important things to do than ANOVA.