Steven Ouellette

Quality Insider

From Seemingly Unsolvable to Solved with Methodical Research

How it’s done is not typically how it’s taught

Published: Friday, December 16, 2011 - 10:49

In my last article, I wrote about the importance of correctly classifying variables as part of the research design process, and discussed the benefits of the hugely useful, but oft-neglected, blocked variables. As part of my ongoing crusade against poor experimental designs, and the people who love them, let’s finish this one up.

If you were otherwise engaged in a covert mission to preserve the free world and missed my other articles on this, you can catch up here:
Don’t Design the Experiment Until You Research the Process
Planning the Research Study: Part 1
Planning the Research Study: Part 2
Blocking Out the Nuisance, Part One
Blocking Out the Nuisance, Part Two

Again, here’s a reminder of the research design process I am using:

Figure 1: The research design process from Design of Experiments in Quality Engineering, by Jeffrey T. Luftig and Victoria S. Jordan (McGraw-Hill, 1998).

Last time I showed how properly classifying all variables (i.e., treatment, independent, and nuisance variables) can make it a lot easier to see what is really going on. This is a step that is, in my experience, often left out by people who learn experimental design as an end in itself, instead of as a means to an end. Ironically, leaving this out can vastly decrease the chance that you learn something from your experiment.

I kind of skipped ahead to show you the consequences of blocking or not blocking, so let’s go back to the next step.

Efficient experimental design

Now that you have identified all possible factors, selected those you want to include in your experiment, and controlled for the others, you are ready to think about the design. This is a complicated task, so I’ll only mention a few things to consider… this is not a complete list.

If you are working on a fairly mature process where the problem has been around for a while, you are probably looking for an interaction. Interaction effects are notoriously difficult to detect just by looking at the data. (Take a look at my article, “Will Google Earn a Black Belt” for an example.) Use a design that allows you to examine as many of these potential interactions as you can.
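As a minimal sketch of what "examining an interaction" looks like in the analysis (the file name and the columns temp, pressure, and expansion are hypothetical placeholders, not from any real study):

```python
# Sketch: fit a two-factor model that includes the interaction term and test it.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_runs.csv")   # hypothetical table of experimental runs

# C(temp) * C(pressure) expands to both main effects plus the temp:pressure interaction
model = smf.ols("expansion ~ C(temp) * C(pressure)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))    # check the p-value on the interaction row
```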

If you are faced with a long list of potential factors, you are probably looking at using a screening experiment, like a fractional factorial. These are useful for winnowing the list down to something more manageable. Keeping in mind the previous paragraph, include as many likely interactions as you can.
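As a rough illustration, here is one way to construct a 2^(4-1) half-fraction by hand, using the generator D = ABC; the factor names are placeholders:

```python
# Sketch: a half-fraction screening design built from the full 2^3 design in
# A, B, C, with D = A*B*C (defining relation I = ABCD).
# Levels are coded -1 (low) and +1 (high).
from itertools import product

runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append({"A": a, "B": b, "C": c, "D": a * b * c})

for run in runs:
    print(run)

# Eight runs instead of sixteen. Main effects are clear of each other, but each
# two-factor interaction is aliased with another (AB with CD, AC with BD, AD
# with BC), so pick the generator with your most likely interactions in mind.
```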

While you could design an experiment by randomly choosing factors (and maybe even learn something), it is probably not very efficient. Leverage the knowledge of your process experts to choose factors and potential interactions to include in your design. I have been around a number of times when the “expert” was wrong and the hourly frontline workers’ ideas proved to be correct. It gets even more interesting when the list of factors identified by your experts is tested, and nothing shows up to be significant—interesting because at this point the real learning can begin.

If it is possible, a dependent design (e.g., repeated measures) can increase the sensitivity to detect subtle effects.

Don’t forget blocking (either to handle nuisance factors or repeated measures) as a way to reduce the error term, as we talked about last time.
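As a sketch of what blocking looks like in the analysis, assuming a hypothetical data file with columns y (response), treatment, and batch (the nuisance block):

```python
# Sketch: randomized block analysis. Including the block pulls batch-to-batch
# variation out of the error term, so the treatment effect is tested against a
# smaller, cleaner error.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("blocked_study.csv")     # hypothetical data set

blocked = smf.ols("y ~ C(treatment) + C(batch)", data=df).fit()
print(sm.stats.anova_lm(blocked, typ=2))
```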

Design the sampling plan

There are a variety of techniques for generating a useful random sample—too many to cover here. But if you refer to Luftig & Jordan p. 134, you can see a table listing the different types of sampling, as well as their advantages and disadvantages. How you take your sample has a massive impact on what you can infer from it. Just going out and grabbing a sample “randomly” in no way allows you to infer anything about a process. Randomness, a prerequisite for statistical inference, is often quite hard to attain, and requires planning and process knowledge to achieve.
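As one small example from that menu, here is a sketch of a stratified random sample, assuming a hypothetical sampling frame with a shift column (and pandas 1.1 or later):

```python
# Sketch: stratified random sample. Draw a fixed fraction from every stratum
# so each shift is represented, rather than trusting whatever a single grab
# sample happens to catch.
import pandas as pd

frame = pd.read_csv("process_frame.csv")          # hypothetical sampling frame
sample = frame.groupby("shift").sample(frac=0.10, random_state=42)
print(sample["shift"].value_counts())
```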

The size of the sample is a calculation, not an opinion, as I talk about in “(Sample) Size Matters.” If you choose too small a sample size, you risk not being able to see what you are running the experiment to detect: an effect, if one is present. You would be making a Type II or β error. Making a β error is costly, not only because you ran an experiment and learned nothing, but also because you move on to try other solutions, when the one you tried and rejected actually had the effect you wanted to find.
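Here is a minimal sketch of that calculation for a two-group comparison, using illustrative numbers rather than anything from a real study:

```python
# Sketch: solve for n per group given the smallest effect worth detecting,
# the alpha risk, and the beta risk you will accept.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # smallest shift you care about, in standard deviations (illustrative)
    alpha=0.05,        # Type I (alpha) risk
    power=0.80,        # 1 - beta, i.e., accepting a 20% Type II risk
)
print(round(n_per_group))   # about 64 per group under these assumptions
```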

Other critical concerns for the sampling plan are who will do the sampling, where it will be done, and how the data are to be gathered; make sure everyone collecting the data knows all three. These concerns, as well as the design selected, will be very important for the logistics of performing the study.

Assess the data collection instruments

Measurement is the process of turning an event in which you are interested into some sort of number. The stronger the relationship between the real event (aka the dependent variable) and the numbers, the more statistical procedures you can use to suss out what is happening.

Of course, the greatest statistical tools in the world are not going to help you if you have a poor measurement system. If your measurement system is out of control (unstable through time), any conclusions you draw from it are related to reality only by accident. I was almost involved in a situation like this: we were preparing to design an experiment based on a test that later turned out to be, at best, a random number generator. What it measured had no relationship to what we were trying to improve. Sadly, we had been using that measurement system for about 10 years to pass or fail product. It would have been even worse had we based an experimental analysis on that gauge; we might have found nothing, or changed the process only to see no effect, or an unexpected one. Even if you demonstrate control, you might have high measurement error compared to the process variation. That makes it harder to detect process shifts, thus requiring larger sample sizes and more cost (and risk).
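As a rough illustration (not a substitute for a proper MSA, which the articles below cover), here is a sketch comparing gauge repeatability to overall spread, with hypothetical file and column names:

```python
# Sketch: with repeated readings of the same parts, compare the gauge's
# repeatability to the overall spread of the readings.
import pandas as pd

msa = pd.read_csv("gauge_study.csv")   # hypothetical repeated readings per part

repeatability_sd = msa.groupby("part")["reading"].var().mean() ** 0.5  # pooled within-part spread
total_sd = msa["reading"].std()                                        # overall spread
print(f"repeatability / total spread: {repeatability_sd / total_sd:.0%}")
# When that ratio is large, the gauge blurs real process shifts, which is
# exactly what drives up the sample sizes (and cost) needed to detect them.
```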

For more on how I do measurement system analysis, see my articles:
The Mystery Measurement Theatre
Performing a Short-Term MSA Study
Performing a Long-Term MSA Study
Destructive Gauges and Measurement System Analysis

Conduct the study

Finally, at this point, you have properly planned your experiment. Notice how far into the process data collection actually occurs. Communication and well-planned logistics are key factors in conducting successful studies. Experiments are complicated departures from business as usual, so ensure that you have full participation in the planning process by those who will be affected by the experiment.

Design or select the appropriate analysis plan

As you perform the study, you may gain additional information about the process, or you may lose samples, either of which can change how you do the analysis. You might find that certain assumptions you made earlier are not met, so you might choose to move from a parametric to a nonparametric analysis. (Don’t fear this: Nonparametric analyses do lose a little bit of power compared to their parametric brethren, but eliminate the unquantifiable risk of violating critical assumptions. The trade-off is worth it.) A study that is planned for equal sample sizes might have to be changed to account for unequal sample sizes. Worst case, you might lose an entire cell (unique combination of settings) and have to replan what it is you are actually able to test. Do try to avoid that.
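Here is a minimal sketch of that kind of mid-course correction, assuming a hypothetical data file with treatment and y columns: check the shape assumption in each group and drop to a nonparametric test if it looks doubtful.

```python
# Sketch: test normality within each group; fall back from one-way ANOVA to the
# nonparametric Kruskal-Wallis test if the shape assumption looks violated.
import pandas as pd
from scipy import stats

df = pd.read_csv("study_results.csv")    # hypothetical data with treatment and y columns
groups = [g["y"].to_numpy() for _, g in df.groupby("treatment")]

if all(stats.shapiro(g).pvalue > 0.05 for g in groups):
    result = stats.f_oneway(*groups)     # parametric route, assumptions look OK
else:
    result = stats.kruskal(*groups)      # nonparametric fallback
print(result)
```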

Analyze the data

With the analysis plan, modified if needed, you can sit down and see what you have learned. There are a thousand different tests, but at this stage, remember that every analysis should be accompanied by an appropriate visual representation and a statistical conclusion (with p-values). Also remember that, in order to answer one research question, you may have several statistical hypotheses to answer first. To answer our previous research question about the effect of different process factors on expansion, we will need to test for shape: normality is an assumption of analysis of variance (ANOVA), though not critical if the sample size is large, and more important, knowing the shape is necessary when calculating conformance or capability. We will also need to test for spread: do the different factors affect the variation, which likewise affects capability? Homogeneity of variance is another ANOVA assumption, though ANOVA is robust to violations when the sample sizes are equal. And then we test for means. Performing a pre-test-post-test control group design would lead to even further hypotheses.
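A sketch of that chain of hypotheses, with hypothetical file and column names standing in for the expansion example:

```python
# Sketch: test shape (normality of residuals), spread (equal variances across
# factor levels), and then means, all for one research question.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("expansion_study.csv")        # hypothetical data
groups = [g["expansion"].to_numpy() for _, g in df.groupby("factor")]

model = smf.ols("expansion ~ C(factor)", data=df).fit()
print("shape :", stats.shapiro(model.resid))   # normality of residuals
print("spread:", stats.levene(*groups))        # homogeneity of variance
print("means :")
print(sm.stats.anova_lm(model, typ=2))
```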

If you reject the null hypothesis, you would perform any required post-hoc (after the fact) tests needed to completely answer the research question. (For example, if you reject an ANOVA test on means, you know that one or more of the means are different, but not which ones. Post-hoc tests tell you this and allow you to make appropriate point estimates.) Regardless of whether you accept or reject the null, you should report results as confidence intervals, not just point estimates. This is fairer to those making a decision based on your work, because they will be able to see what the precision of your estimate really is.
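For example, here is a minimal sketch of Tukey’s HSD after a rejected ANOVA, which reports each pairwise difference as a confidence interval rather than a bare point estimate (file and column names again hypothetical, matching the sketch above):

```python
# Sketch: Tukey's HSD post-hoc test, with pairwise differences and their
# confidence intervals.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("expansion_study.csv")        # hypothetical data
tukey = pairwise_tukeyhsd(df["expansion"], df["factor"], alpha=0.05)
print(tukey.summary())                         # pairwise differences with lower/upper CI bounds
```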

Answer the research question

With all the statistical hypotheses answered, you can now answer the original research question. At this point, if you have followed the entire research process, you have minimized sources of unquantified systematic error. No experiment is perfect, of course, but you will have eliminated or controlled all of these that you were able to identify. You still have to recognize α or β error (depending on whether you reject or accept the null hypothesis, respectively), but at least those are at your selected levels.

Provide results, conclusions, and recommendations

Writing up the research so that others in the future know exactly what you did and why you made those decisions is critical for the institution to learn, and for keeping those problems fixed.

Failing to do this last step means that, sooner or later, someone comes along after you who has no clue why the standard operating procedure requires pre-dipping the piece, eliminates that step to save money, and gets the pleasure of going through the entire process again.

By assessing the progress you have made at this point, you can also determine if you are done (time for a party!) or if you need to go through the process again to capture further improvements in order to complete your mission.

Also, when you are involved in this type of activity, you are bound to see a lot of other opportunities for future improvements that are outside the scope of this particular project. A final report is an opportunity to capture this information for future use. Early in my career, I was involved in a capability study where we happened to notice the way materials were moved around the plant was really inefficient. We joked that the material should get frequent flyer miles. We were there to quantify a product characteristic, but that material-movement issue was later addressed with its own project, which saved the company money through increased efficiency and a reduction of scrap due to damage from navigating tight corners. This is the way that the knowledge you gained doing the research project feeds into future activities.

Conclusions

Well, that brings us to the end of this high-level overview of a tried-and-true process for performing research. As I said, many experiments fail before they are even started because the people involved know how to analyze an experiment, but not how to actually design and perform it in a real situation. There is a huge amount of detail and knowledge involved in each step; knowledge that is not typically taught to future practitioners of industrial research.

Please follow a methodical, well-planned process when performing your research. Don’t fall into the engineer’s trap of “ready, fire, aim!” By solving seemingly unsolvable problems, you probably stand to make your company more money than an Indy race-car driver makes winning a race. Spend a portion of your time making sure that your research goes as smoothly as the service performed by a well-trained pit crew.


About The Author


Steven Ouellette

Steven Ouellette is the Lead Projects Consultant in the Office for Performance Improvement at the University of Colorado, Boulder. He has extensive experience implementing the systems that allow companies and organizations to achieve performance excellence, as well as teaching Master's-level students the tools used in BPE. He is the co-editor of Business Performance Excellence with Dr. Jeffrey Luftig. Ouellette earned his undergraduate degree in metallurgical and materials science engineering at the Colorado School of Mines and his Master of Engineering degree from the Lockheed-Martin Engineering Management Program at the University of Colorado, Boulder.