Steven Ouellette

Six Sigma

Don’t Design the Experiment Until You Research the Process

Applied research the smart way

Published: Tuesday, June 14, 2011 - 05:30

Although we may use the define, measure, analyze, improve, control (DMAIC) mnemonic to help guide us through our problem solving, that doesn’t really give us a lot of specific direction (as I bemoan in my Top 10 Stupid Six Sigma Tricks No. 4). Good experimental design technique is critical to being able to turn problems into solutions, and in my experience Black Belts have not been introduced to a good process to do this. If you know someone whose first thought is, “Let’s go collect some data to see what is going on,” then read on to avoid losing millions of dollars in experimental mistakes.

OK, so we all know that Six Sigma is mostly repackaging things that have been around for a while. The stated intention was to make using statistics and experimental design easy enough so that process experts could use them to solve problems in their processes. A laudable goal, but as you probably can guess by now, I think we have gone too far at times and ended up with “Black Box Black Belts” who have only been trained to enter data, click on buttons, and go with the answer that comes out of the software. This will work occasionally (giving the poor Black Belts a false sense of confidence), but will eventually turn around and bite them in the… back.

One of those areas that got dropped was the process of performing research—industrial or otherwise. If someone doesn’t know about this process (or even that there is one), he is at grave risk of doing a lot of work only to have nothing in the end to show for it, or a “solution” that is worse than the problem.

When do we use an experimental design process? Well, as you will see below, since it is compatible with DMAIC (or whatever problem-solving process you prefer), it is applicable to the entire project. But let's say that during the course of solving the big problem (scrap rate, for instance), you also find that you need to study a smaller component of the process with data (maybe the concordance of those scrapping the product). So here is the rule: Any time you set out to collect data, you should be more than halfway through your research design process.

Yes, I said “any time,” and I mean it. It doesn’t take long to do, and it will save you tons of time and money.

So let’s go over a process for designing research. There are others, but this is the one I think is best. In later articles, I’ll explore some of the specific (and overlooked) tools used in the steps. This time, I’ll just do an overview of the process.

To start with, here is the research design process I’ll be using:


Figure 1: The research design process (from Design of Experiments in Quality Engineering, by Jeffrey T. Luftig and Victoria S. Jordan, McGraw-Hill, 1998)

As Douglas Adams wrote, “Don’t panic!” There is nothing here that is conceptually difficult.

First off, take a look at the size of each of the slices, which are more or less proportional to the time that should be spent on each phase during research. In Black Belt training, we spend a lot of time on the statistical tools because it takes time for people to learn everything they need to know to be effective and not dangerous. But in terms of actually working on a project, a far higher proportion of time should be spent on planning the experiment. If you do your planning correctly, the analysis is a very small part of the study.

Note also that the data are not collected until after the planning phase. I know it seems obvious, but I have talked with more than one Black Belt who told me stories of starting off collecting data or designing an experiment before they really understood what the problem was.

Another thing that should be obvious now that we are looking at a picture of the process is that you don’t design the experiment until you have figured out the variables you want to vary (treatments) as well as the variables you need to control for (all other independent variables). Many an experiment has not been confirmed because of a failure to do the latter.
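
To make the treatments-vs.-controls distinction concrete, here is a minimal sketch (not from the article; every factor name and level is invented for illustration) that enumerates a full-factorial set of runs in Python, with the controlled variables pinned to the same value for every run:

```python
# Illustrative only: enumerate a full-factorial design once the
# treatments (varied) and controls (held fixed) have been identified.
from itertools import product

# Hypothetical treatments: factors we deliberately vary
treatments = {
    "temperature_C": [150, 175],
    "line_speed_mpm": [30, 45],
}

# Hypothetical controls: other independent variables held fixed in every run
controls = {"alloy_lot": "A-7", "operator": "day shift"}

# One run per combination of treatment levels; controls ride along unchanged
runs = [
    dict(zip(treatments, levels), **controls)
    for levels in product(*treatments.values())
]

for i, run in enumerate(runs, 1):
    print(i, run)  # 2 factors x 2 levels -> 4 runs, identical controls in each
```

The point of listing the controls explicitly is exactly the one made above: if they are not identified before the design is built, they cannot be held constant (or randomized against), and the experiment may never confirm.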

Finally, everything in the research process follows the plan-do-study-act process popularized by W. Edwards Deming. Of course this is no surprise because PDSA and DMAIC are nothing more than different interpretations of the scientific method.

Well, that gets us started, anyway. Next month I’ll delve more into the individual steps and highlight some underutilized tools that will save your sanity, as well as a lot of money. If you can’t wait, well, buy yourself a copy of the book I referenced. It doesn’t teach you how to do the stats, just how to actually run an experiment in the real world once you do know the stats.


About The Author

Steven Ouellette

Steven Ouellette is the Lead Projects Consultant in the Office for Performance Improvement at the University of Colorado, Boulder. He has extensive experience implementing the systems that allow companies and organizations to achieve performance excellence, as well as teaching master’s-level students the tools used in BPE. He is the co-editor of Business Performance Excellence with Dr. Jeffrey Luftig. Ouellette earned his undergraduate degree in metallurgical and materials science engineering at the Colorado School of Mines and his Master of Engineering from the Lockheed-Martin Engineering Management Program at the University of Colorado, Boulder.

Comments

What is your definition of "data?"

What is your definition of "data?" Does the process of observation result in data?

Understanding the Process...
I'm sure that, as in everything, the quality of Six Sigma BB training varies dramatically. I have been to several DOE training classes and many other statistics-related classes. But the Six Sigma training I received through the Whirlpool Corporate Training Center was a phenomenal, mind-blowing experience.

The basic premise of the entire five weeks of training centered on the following saying: "Practical, Graphical, and Analytical."

It is my belief that if the project planning is done properly, and at a level of detail befitting a "Black Belt," many of the problems could be solved using the "Practical" aspect of the Black Belt approach, without wasting time on ineffective DOEs. But that depends on the level of detail in the investigation into the process. If done properly, the BB will have identified not just the inputs, transformations, and outputs, but also the interactions, interdependencies, controllable variables, and noise variables. If a process mapping approach is used, then the knowns and unknowns will also be identified. This also includes a detailed investigation into what the perceived problem or project really is. I believe that if this phase is done correctly, the scope of the project, or even what problem is being addressed, might change some from the initial starting point.

I also believe that another common mistake is that organizations are convinced a single DOE will provide a magical answer or the absolute perfect process. One of the first things I learned in the Whirlpool training program was that doing a DOE is a process: an iterative process that might give you only a hint of the direction you need to head, and rarely the optimal answer on the first DOE.

Too many times I have seen people quick to run a DOE to solve a problem they didn't even fully understand. Then, when the analytical aspect of the DOE did not yield any magical solution, they would become skeptical of DOEs in general.

To me, the Black Belt training I received would still have been well worth the effort if we had spent almost no time on the analytical aspect of the program. The planning, the understanding of the process, and the critical thinking involved in a well-done thought map have benefited me on an almost daily basis. The statistical and analytical knowledge I gained has been extremely helpful, but the initial aspects of approaching a problem, the planning, and the process investigation have forever changed the way I approach things daily.

I'm not trying to negate the value of the analytical aspect, but so many times the analytical tools alone will not be of much value without the initial part of the process.

thanks,

Terry A. Henson

Necessary, but Sometimes not Sufficient

  • Terry, that should be a really important part of the training - you are absolutely correct, as I think I have written about in a number of other articles.
  • I think you will see as we go forward through this process that it is consistent with what you have described.
  • The "seven basic tools" by themselves can find huge savings in areas that have not typically been thought of as a process before - support processes, service provision, transactional industries, etc.  In my training I refer to this as not "low-hanging fruit" but "fruit that we are tripping over!"  Now nothing in this process precludes using just those - it is applicable to every investigation.  But I caution you - if you skip any of these steps you are opening yourself up for trouble.  There are a lot of words there, but I am confident that you will agree you probably do at least some of this already no matter how limited in scope the investigation is.  My point is that as a discipline, something like this research process should be used at all times.  Really, it is nothing more than the Scientific Method once again.  And you will see me raise the point (probably in the next article) that DOE (as we commonly refer to it) and "experiment" or "study" are not equal.
  • Also, you will note that the process I present is a PDSA *cycle* - you may have to go through it more than once to achieve your objective.
  • At some point though the seven basic tools are not enough.  Even confirming that a simple process is affected by a two-way interaction is impossible without the use of ANOVA (and damn hard to spot in a column of data or as you run a process).  So for the short-term, those basic graphical and quasi-analytical tools will work, but at some point the company will need more advanced tools (well I suppose one *could* keep messing up your process so it is always straightforward to fix, but hopefully most organizations learn better than that!).
  • Anyway, I think we agree.  I am just intending to give you a discipline that will serve you well for any investigation, whether it is a DOE or not.
  • Let me know if I accomplish that!
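
The point above about two-way interactions can be made concrete with a small sketch (all numbers invented for illustration). In a 2×2 design, a pure crossover interaction can have main effects that are exactly zero, so looking at one factor at a time, or scanning a column of data, shows nothing at all:

```python
# Illustrative only: a 2x2 crossover interaction, decomposed by hand.
# Cell means for factor A (rows) and factor B (columns), numbers invented.
cell_means = [[10.0, 20.0],   # A low:  B low, B high
              [20.0, 10.0]]   # A high: B low, B high

grand = sum(sum(row) for row in cell_means) / 4

# Main effects: deviation of each marginal mean from the grand mean
a_effect = [sum(row) / 2 - grand for row in cell_means]
b_effect = [sum(cell_means[i][j] for i in range(2)) / 2 - grand
            for j in range(2)]

# Interaction: what is left after removing grand mean and main effects
interaction = [[cell_means[i][j] - grand - a_effect[i] - b_effect[j]
                for j in range(2)] for i in range(2)]

print(a_effect)     # [0.0, 0.0] -- factor A looks inert on its own
print(b_effect)     # [0.0, 0.0] -- factor B looks inert on its own
print(interaction)  # +/-5 in every cell: the whole story is the interaction
```

One-factor-at-a-time experimentation would conclude that neither factor matters here, which is exactly why a formal analysis such as ANOVA is needed once interactions are in play.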

A very good question!

(Bullets used to separate paragraphs...)

  • This is actually a really good question, and the answer is maybe not obvious.
  • The way I define data is "numbers that are related by a measurement system to an event of interest."
  • So a list of random numbers is not data since it is unrelated to an event.  Importantly, this definition also implies that your measurement system links to the event somehow, highlighting the importance of good measurement system analysis.
  • So, the process of observation can result in data, but without understanding the relationship your observations have back to the property of interest (as determined by your measurement system), observation does not necessarily result in data.
  • Any measurement system (discrete or continuous) needs to be assessed for stability through time (control), agreement between systems (operator/machine etc), and agreement with a standard.
  • Say a process generates discrete data (say a 1-10 rating on surface quality). You would need to analyze the measurement system that generates that number. (Do the four inspectors using the Mark I Eyeball to generate the rating agree with each other? Do they agree with a standard? Are these ratings repeatable through time?)
  • I have seen measurement systems that themselves were not much better than random number generators, or whose measurements changed with time.  Those numbers were not data, even though they might have come from multi-million dollar instruments with digital readouts to four decimal places!
  • Oh, as an aside, the relationship that the numbers (data) have with the event of interest, as recorded by the measurement system, dictates the measurement level of the data (nominal, ordinal, interval, ratio, absolute), which in turn determines what you are allowed to do with the data statistically. So it is critical that you understand that relationship!

<edited to correct ambiguities>
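
The inspector-agreement check described above can be sketched in a few lines of Python. This is illustrative only: the inspector names, ratings, and standard values are all made up, and a real assessment would also look at stability through time:

```python
# Illustrative only: do four inspectors' discrete 1-10 ratings agree
# with a reference standard for the same five parts?
ratings = {
    "inspector_1": [7, 3, 9, 5, 6],
    "inspector_2": [7, 4, 9, 5, 6],
    "inspector_3": [6, 3, 8, 5, 7],
    "inspector_4": [7, 3, 9, 4, 6],
}
standard = [7, 3, 9, 5, 6]  # hypothetical reference ratings

results = {}
for name, r in ratings.items():
    exact = sum(a == b for a, b in zip(r, standard))        # exact matches
    within1 = sum(abs(a - b) <= 1 for a, b in zip(r, standard))  # near misses
    results[name] = (exact, within1)
    print(f"{name}: {exact}/5 exact matches, {within1}/5 within +/-1")
```

If an inspector's exact-match rate is no better than chance, the numbers that inspector produces are not data in the sense defined above, no matter how precisely they are recorded.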