



Published: 06/07/2017
In part one of this two-part series, I described the need for empiricism in root cause analysis (RCA). Now I’ll explain how to achieve empiricism when performing an RCA by combining the scientific method with graphical exploration of data.
The statistician John Tukey believed data should be explored graphically to generate ideas that can serve as the basis for further testing. He called this exploratory data analysis (EDA), in contrast with confirmatory data analysis (CDA), where the objective is to evaluate a hypothesis.1 Tukey’s EDA supports the scientific method: exploring data graphically yields new insights and candidate explanations that can then be investigated empirically.
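As a minimal illustration of this kind of graphical exploration, consider the short Python sketch below. The data, column names, and plots are hypothetical and stand in for whatever failure records an actual investigation would have; the pandas and matplotlib libraries are assumed to be available.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical returns data: one row per failed unit (values invented for illustration)
df = pd.DataFrame({
    "production_line": ["A", "A", "B", "B", "B", "C", "A", "B"],
    "days_in_service": [12, 400, 35, 28, 41, 380, 15, 33],
})

# Pareto-style view: which production line contributes the most failures?
df["production_line"].value_counts().plot(kind="bar", title="Failures by line")
plt.show()

# Distribution view: are these early-life failures or wear-out failures?
df["days_in_service"].plot(kind="hist", bins=10, title="Days in service at failure")
plt.show()

A bimodal histogram or a single dominant production line in plots like these is exactly the kind of pattern that suggests a tentative hypothesis worth testing.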
The available data are used to form a tentative hypothesis or several competing hypotheses. The principle of Occam’s razor is used to choose among competing hypotheses: the simplest one is selected. Remember that a good hypothesis should be simple, general, and avoid making too many assumptions. It should also be refutable, that is, testable.2 A hypothesis that can’t be refuted can’t be evaluated. Indeed, Karl Popper warns against accepting any hypothesis as truly proven, because it can always be disproved later when new evidence emerges. However, a hypothesis is more likely to be correct if it has survived rigorous testing that attempted to disprove it.3 The hypothesis should also make a prediction that can be evaluated.
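To make the contrast with EDA concrete, here is a minimal confirmatory sketch in Python. It assumes a hypothesis that predicts suspect parts are softer than known-good parts; the hardness values are invented for illustration, and SciPy’s two-sample t-test is used to check whether the predicted difference actually appears.

from scipy import stats

# Hypothetical hardness measurements (HRC); the hypothesis predicts that
# suspect parts are softer than known-good parts.
good_parts = [61.2, 60.8, 61.5, 60.9, 61.1, 61.3]
suspect_parts = [58.4, 59.0, 58.7, 59.2, 58.9, 58.6]

t_stat, p_value = stats.ttest_ind(good_parts, suspect_parts, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# If the predicted difference does not show up (large p-value), the hypothesis
# has failed its test and should be revised rather than defended.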
Experimentation is sometimes necessary during an RCA. Attempting to re-create a failure under simulated conditions can often be informative. The experiment may not lead directly to the root cause, but it could eliminate potential root causes that are not the actual cause of the problem under investigation. It is essential to control your variables when performing an experiment; don’t change all variables at once.
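One simple way to keep variables under control is to plan the experimental runs before touching anything, changing a single factor from the baseline in each trial. The sketch below illustrates the idea; the factor names and levels are hypothetical.

# One-factor-at-a-time run plan: each trial changes exactly one factor from the
# baseline, so a change in the outcome can be attributed to that factor.
# Factor names and levels are invented for illustration.
baseline = {"torque_Nm": 8, "temperature_C": 23, "lubricant": "standard"}
alternatives = {
    "torque_Nm": [6, 10],
    "temperature_C": [-10, 60],
    "lubricant": ["none"],
}

runs = [("baseline", dict(baseline))]
for factor, levels in alternatives.items():
    for level in levels:
        run = dict(baseline)
        run[factor] = level
        runs.append((f"{factor} = {level}", run))

for name, run in runs:
    print(f"{name:20s} {run}")

A designed experiment that varies several factors systematically can be more efficient when interactions between factors are suspected, but even this simple plan prevents the all-at-once changes that make results impossible to interpret.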
George Box’s iterative inductive-deductive process uses cycles of deduction and induction for discovery.4 Deduction forms a conclusion based on a general premise, and induction uses empirical data to form a general conclusion. This means deduction is used to form a hypothesis based on what is known; the hypothesis is then evaluated empirically, and then induction is used to form a general conclusion based on empirical data. The process is repeated until the root cause is discovered.
W. Edwards Deming’s plan, do, check, act (PDCA), also known as plan, do, study, act (PDSA), is an iterative process that is often used for quality improvement. It can also be applied to RCA as a framework for the scientific method. The four steps of PDCA for RCA are:
• Plan: Describe the problem and gather data to form a tentative hypothesis.
• Do: Test the hypothesis.
• Check: Check the results and form conclusions.
• Act: Repeat or verify the root cause and begin improvements.
Tukey’s EDA and the scientific method can be combined with Box’s iterative inductive-deductive process as a part of PDCA.
The first step is to collect data. Quoting Sherlock Holmes, “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.” The data are then explored graphically, and deduction is used to form a hypothesis, which is evaluated empirically. If the root cause is not identified, induction is used to form a new hypothesis based on the information gained during the evaluation.
The combination of PDCA, the scientific method, EDA, and the iterative inductive-deductive process is shown as the RCA helix in figure 1. Data are collected and explored graphically in the plan phase. A tentative hypothesis is then formed and evaluated empirically during the do phase. The method of evaluation varies depending on the problem being investigated. A quick look at a failed component may be sufficient to identify an obvious cause of failure; other situations may require long-term testing of many sample parts.
The results of the evaluation are then interpreted. If the root cause is not identified, the plan phase is repeated using any new information gained from the evaluation. If a root cause has been found, it must be verified. Once the root cause is confirmed, improvement actions are planned and implemented. Improvements may also be necessary for other products or processes that could have the same problem.
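The control flow of the helix can be summarized in a short Python sketch. The step functions below are placeholders standing in for the real investigative work of each PDCA phase; they are not part of any actual library.

# The RCA helix as an iterative loop: each pass through the loop is one PDCA
# cycle, and induction from the previous cycle feeds the next plan phase.

def plan(knowledge):
    """Explore the available data graphically and deduce a tentative hypothesis."""
    return f"hypothesis deduced from: {knowledge}"

def do(hypothesis):
    """Evaluate the hypothesis empirically (inspection, test, or experiment)."""
    return {"hypothesis": hypothesis, "root_cause_found": False,
            "new_facts": "observations from the evaluation"}

def check(result):
    """Interpret the results and decide whether the root cause was identified."""
    return result["root_cause_found"]

def act(result):
    """Verify the root cause, then plan and implement improvements."""
    print("Verify root cause and implement improvements for:", result["hypothesis"])

knowledge = "initial failure data"
for cycle in range(5):                # repeat until the root cause is verified
    result = do(plan(knowledge))      # Plan and Do
    if check(result):                 # Check
        act(result)                   # Act
        break
    knowledge = result["new_facts"]   # induction: carry new facts into the next cycle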
Figure 1: RCA helix. © 2014 Matthew Barsalou
The investigation into the failure of vibration sensors was facilitated by the use of a hammer. The sensor consisted of a metal casing with a spring-mounted magnetic mass moving within a coil to generate a signal. Multiple units were returned from the customer due to lack of a signal, and the leading hypothesis was “a dent in the casing from the mounting screw is restricting movement of the magnetic mass.” This hypothesis was quickly rejected by intentionally denting a functioning unit with a hammer. The hammer dent was far deeper than those left by the mounting screws, yet the unit continued to generate a signal. This quick and crude test made it possible to discard an incorrect hypothesis before more time and effort were spent on a line of inquiry that would have been a dead end.
In another situation, a plastic bushing mounted in a metal bracket was failing. The failure resulted in free play in the system, but it was not possible to directly observe the bushing being mounted in a bracket. The mounting process was therefore simulated by fixing a bracket in a vise and pressing a bushing in. The simulated insertion showed the bushing was losing material as it entered the bracket. Minimal material loss did not cause free play; however, repeated trials showed that free play resulted when there was heavy material loss on both sides of the bushing. This knowledge led to a containment action until a more robust bushing was introduced.
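Repeated trials like these produce data that are easy to tabulate and explore in the same EDA spirit. A minimal sketch, with entirely hypothetical trial results:

import pandas as pd

# Hypothetical record of simulated insertion trials: where material loss was
# observed and whether free play resulted.
trials = pd.DataFrame({
    "material_loss": ["none", "one side", "both sides", "both sides",
                      "one side", "none", "both sides", "one side"],
    "free_play":     [False, False, True, True, False, False, True, False],
})

# Cross-tabulate the observations: does free play line up with heavy loss?
print(pd.crosstab(trials["material_loss"], trials["free_play"]))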
There are many potential approaches to RCA. It is essential that some methodology is used, and that the methodology includes empirical methods. Simply brainstorming potential causes and then implementing a solution without empirical evidence risks a recurrence of the problem, because the RCA team’s favorite root cause may not be the actual root cause. The RCA helix, with its combination of PDCA, the scientific method, EDA, and the iterative inductive-deductive process, provides an empirical methodology for RCA.
This article is based on a conference paper for the 2015 ASQ World Conference on Quality and Improvement.
References
1. Tukey, John W. Exploratory Data Analysis. Reading, MA: Addison-Wesley, 1977.
2. Quine, W. V. and J. S. Ullian. The Web of Belief (10th ed.). New York: Random House, 1978.
3. Popper, Karl. The Logic of Scientific Discovery. London: Routledge, 2007.
4. Box, George E. P., Stuart Hunter, and William G. Hunter. Statistics for Experimenters: An Introduction to Design, Data Analysis and Model Building (2nd ed.). Hoboken, NJ: John Wiley & Sons, 2005.
Link:
Part one of this series: https://www.qualitydigest.com/inside/six-sigma-article/empirical-root-cause-analysis-part-1-060617.html