
Steven Ouellette

Quality Insider

Planning the Research Study: Part 1

Six Sigma practitioners are not trained in a process for research design, or don’t use one. They should.

Published: Wednesday, July 13, 2011 - 05:30

Last month I showed you a process to use to save money, time, and sanity when doing any type of research, including applied problem solving and quality improvement (“Don’t Design the Experiment Until You Research the Process”). However, I didn’t have room to go through the steps to show you how it works. That’s exactly what I am going to do now.

Again, here is the cycle I’ll be using.


Figure 1: The research design process (from Design of Experiments in Quality Engineering, by Jeffrey T. Luftig and Victoria S. Jordan, McGraw-Hill, 1998)

Remember, my experience is that Six Sigma practitioners are not trained in, or don’t use, a process for research design. This process is more than define, measure, analyze, improve, control (DMAIC), although you will see how it integrates into DMAIC.

And although this process pertains to the “big” project that you are working on, it also is needed for those “little” questions that come up during the course of the main project. For example, in your quest to reduce yield losses, you might find that you need to bring your gauging into control. At both levels, you should use the research design process to guide you through the necessary steps. As you will see, it won’t take long to follow the steps, and the discipline of doing so will save you time, frustration, and money.

Plan phase overview, part 1

All right, let’s move into the steps. I am only going to hit the major outputs of these; for more detail I recommend the Luftig and Jordan book itself.

Step No. 1: Refining and developing the problem statement. In this step, you create a short description of what you are researching so that anyone who comes along later will know exactly what you intended to do. It is phrased in a way that lends itself to developing the research you will need to answer the question you are faced with. 

Luftig and Jordan give a number of examples of good research problem statements, so here is one of theirs:

“The purpose of this study is to determine whether average axial load values (a measure of strength) for cans produced with chrome tooling are different from those for cans produced with standard steel tooling. The cans of interest for this study are NewCola and NewCola Light cans produced on all bodymakers (machines used to draw material into the shape of beverage cans) with material supplied by two suppliers: USAL and CanCo.”

Even though we were not part of this team, we know what it is trying to understand, and if necessary (and given further process knowledge) we could develop a plan to find this out. If the research problem statement had instead been something like, “The purpose of this study is to improve our can strength,” we could enjoy endless debates about what exactly the team was working on. Those team members probably did, too, wasting time, money, and their limited reserves of sanity.
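The comparison that problem statement sets up (average axial load under chrome vs. standard steel tooling) can be sketched as a simple two-sample comparison of means. Here is a minimal Python sketch using simulated data; the numbers, sample sizes, and the use of Welch’s t statistic are illustrative assumptions, not values from the actual study:

```python
import random
import statistics as stats

random.seed(42)

# Hypothetical axial-load readings (lbs) for illustration only; real data
# would be sampled across bodymakers and both suppliers per the statement.
chrome = [random.gauss(285, 8) for _ in range(30)]   # chrome tooling
steel = [random.gauss(280, 8) for _ in range(30)]    # standard steel tooling

def welch_t(a, b):
    """Welch's t statistic for comparing two sample means
    without assuming equal variances."""
    ma, mb = stats.mean(a), stats.mean(b)
    va, vb = stats.variance(a), stats.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(chrome, steel)
print(f"mean chrome = {stats.mean(chrome):.1f}, mean steel = {stats.mean(steel):.1f}")
print(f"Welch t = {t:.2f}")
```

The point is not the particular test but that a well-phrased problem statement translates directly into a computable comparison: what is measured, which groups are compared, and on what basis.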

Step No. 2: Defining the framework of the study. This next step is a really critical one that I think most practitioners miss (kudos to you if you don’t), because skipping it is the source of many misunderstandings about applied research in business.

Defining the framework of the study means to understand the theoretical basis for what you are planning to do. If you don’t understand this, you are very likely to make significant errors in designing, running, and interpreting your study. 

Not every research study will be an experiment. (As I have harped on before, an experiment allows you to make statements about causality. Nonexperiments can give you clues to investigate, or supporting evidence for hypotheses, but are not sufficient by themselves to rule out other causes.) Nonexperimentally, you might be data mining to find potential factors for an experiment by looking at historical data and finding associations, or you might be trying to determine if two different labs give you the same test results.

Luftig and Jordan break down the major types of studies as follows in figure 2.

General category → type of research design → category of study/inquiry:

Analytical Research
    Agreement Research: Consensus Analysis; Instrument or System Concordance
    Descriptive Research: Status Study; Longitudinal Study; Case Study; Cross-Sectional Study
    Relational Research: Concurrent Correlation; Predictive Correlation; Causal Comparative

Single Factor Research Studies (Comparative and Blocked)

Multiple Factor Research Studies

Figure 2: Types of research designs—historical and philosophical (from Design of Experiments in Quality Engineering, by Jeffrey T. Luftig and Victoria S. Jordan, McGraw-Hill, 1998)


I’ll leave more detailed discussion about these for another time, but after glancing at this you can see that a lot of what we do on our journey to solve our problem is nonexperimental. That is not to denigrate the usefulness of the work—it is often a necessary step to even get to a list of factors to include in an experiment, and is often more complicated and requires a tighter research design to do properly.

Why do you need to know the basis for the study? Well, if you were to data-mine some existing process data (say, with CHAID and correlation analyses), you might find any number of significant factors that are associated with the output you are trying to optimize. If you didn’t realize that you were doing a nonexperimental analytical study, you might put process changes into place that really have nothing to do with solving the problem. (You could read my article “Statistical Correlation Does Not Always Prove Cause” for examples.)
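The trap described here is easy to demonstrate. In this hypothetical sketch (variable names and numbers are invented for illustration), a lurking factor, line speed, drives both oven temperature and the defect rate. Temperature and defects then correlate strongly even though changing the temperature would do nothing:

```python
import random

random.seed(1)

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Lurking factor: line speed drives BOTH variables below.
speed = [random.uniform(100, 200) for _ in range(200)]
temp = [s * 0.5 + random.gauss(0, 3) for s in speed]     # driven by speed
defects = [s * 0.1 + random.gauss(0, 1) for s in speed]  # also driven by speed

r = pearson_r(temp, defects)
print(f"r(temperature, defects) = {r:.2f}")  # strong, but not causal
```

A nonexperimental study like this surfaces temperature as a candidate factor to test; it does not license changing the oven setpoint and expecting defects to fall.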

Step No. 3: Write the research questions and hypotheses. Writing the research questions and hypotheses is absolutely critical in designing the correct study. Basically, we are translating the research problem statement into one or more specific things to test. This is a powerful aid to help maintain external validity. It is important to note that we may very well need multiple research questions and hypotheses in order to investigate our original research problem.

A research question is created when we are doing nonexperimental investigation. Because we can’t make statements about causality, we don’t state what we expect: We ask a question that we hope to answer. If we are using an experimental approach, we would write one or more research hypotheses. Since it is an experiment and can establish causality, these are statements of what we expect to find and include what is to be compared and the basis of the comparison.

Teams that skip this step end up missing assumptions that can have huge consequences. For example, if we were designing an experiment for our can strength, we might be tempted to say that our only research hypothesis was something like, “Cans manufactured with chrome tooling yield a significantly higher average axial load than cans manufactured with the standard steel tooling,” and we could run an experiment that only tested that.

This leads to a classic experimental blunder, though. The team could do all its stats correctly and find a statistical difference between the chrome tooling and the steel tooling, and could assume that the tooling type was the cause. Because we didn’t think to test if the two aluminum suppliers were equal, it could be that the increased strength was due to, say, the fact that we ran all the chrome tooling with USAL aluminum, which just happens to have a higher strength. If we were to make the process change over to chrome, we would find that the added cost made not one whit of difference, and we would have to start all over again.
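That confounding is easy to simulate. In this hypothetical sketch (the effect sizes are invented), every chrome run happens to use USAL stock and every steel run uses CanCo stock; the supplier alone drives strength, yet a tooling-only comparison still shows a convincing gap:

```python
import random
import statistics as stats

random.seed(7)

# Fully confounded (hypothetical) data: chrome tooling always paired with
# USAL aluminum, steel tooling always with CanCo.
def axial_load(supplier):
    base = 290 if supplier == "USAL" else 280  # supplier drives strength
    return random.gauss(base, 5)               # tooling has NO effect here

runs = [("chrome", "USAL")] * 30 + [("steel", "CanCo")] * 30
data = [(tool, sup, axial_load(sup)) for tool, sup in runs]

chrome_mean = stats.mean(x for t, s, x in data if t == "chrome")
steel_mean = stats.mean(x for t, s, x in data if t == "steel")
print(f"chrome mean = {chrome_mean:.1f}, steel mean = {steel_mean:.1f}")
# The gap is real, but it belongs to the supplier, not the tooling.
```

A design that crosses tooling with supplier (running each tooling type with each aluminum source) would expose the confound before anyone spends money on chrome.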

To be continued…

There might be one or two readers out there who will think that our job is to get answers and get them quickly, that this is too complicated, and that it’s unrealistic to expect a team to follow it or a Six Sigma Black Belt to know it. My response to such thinking is that it’s much more realistic to plan at this level than to just hope that things turn out OK, and that once you get used to it, it goes pretty quickly. The more important and urgent a problem is, the more you don’t want to go all the way through a project only to come up with nada and have to go back to the beginning and start again.

Following this process will keep you from making really bad mistakes, so catch up with me next month for more on this subject.


About The Author


Steven Ouellette

Steven Ouellette is the Lead Projects Consultant in the Office for Performance Improvement at the University of Colorado, Boulder. He has extensive experience implementing the systems that allow companies and organizations to achieve performance excellence, as well as teaching Master’s-level students the tools used in BPE. He is the co-editor of Business Performance Excellence with Dr. Jeffrey Luftig. Ouellette earned his undergraduate degree in metallurgical and materials science engineering at the Colorado School of Mines and his Master of Engineering from the Lockheed-Martin Engineering Management Program at the University of Colorado, Boulder.