
I have written about sample size calculations many times before. One of the most common questions a statistician is asked is, “How many samples do I need—is a sample size of 30 appropriate?” The appropriate answer to such a question is always, “It depends!”

In today’s column, I have attached a spreadsheet that calculates reliability based on Bayesian inference. Ideally, one would want to have some confidence that the widgets being produced are *X*-percent reliable—in other words, that it is *X*-percent probable that a widget will function as intended. The ubiquitous 90/90 or 95/95 confidence/reliability sample size table is used for this purpose.
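One common formula behind such confidence/reliability tables is the success-run theorem: with zero failures allowed, a sample of *n* = ln(1 − *C*)/ln(*R*) demonstrates reliability *R* at confidence *C*. A minimal sketch (the function name is my own, not from the attached spreadsheet):

```python
import math

def zero_failure_sample_size(confidence, reliability):
    """Samples needed, with zero allowed failures, to demonstrate
    `reliability` at `confidence` (success-run theorem)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(zero_failure_sample_size(0.90, 0.90))  # 22
print(zero_failure_sample_size(0.95, 0.95))  # 59
```

These are the familiar table entries: 22 samples for 90/90, and 59 for 95/95.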

I have been writing about kaizen a lot recently. It is a simple idea: change for the better. Generally, kaizen stands for small incremental improvements. Here I’m going to look at what the best kind of kaizen is.

A few posts back, I talked about the order for kaizen, including the idea of equipment kaizen or *setsubi* kaizen. To introduce the concept of the best kind of kaizen I will share a story from Masayasu Tanaka, dealing with equipment kaizen. He tells of a plant that manufactured steamed dumplings (*manju* in Japanese). They were trying to automate the entire process of making steamed dumplings, a directive that had come directly from the president of the company.

The last step of the process was to make a twist on the top of the dumpling. All the previous steps were easily automated; however, the twisting of the top stumped them. Finally, company engineers were successful in creating a machine that could indeed twist the top of the dumpling. Everybody was happy, and they cheered the smart engineers for their hard work.

However, in the midst of all the celebration, someone asked, “Why is there a twist on the dumpling, anyway?”

In today’s column, I will be looking at kaizen and *kaikaku* through the lens of the explore/exploit model. Kaizen is often translated from Japanese as “continuous improvement” or “change for better.” *Kaikaku,* another Japanese term, is translated as “radical change or improvement.” *Kakushin* is another Japanese word that means “innovation” and is used synonymously with *kaikaku*.

*Kaikaku* got more attention from lean practitioners when Katsuaki Watanabe, Toyota’s former president and CEO, said in 2007, “Toyota could achieve its goals through kaizen. In today’s world, however, when the rate of change is too slow, we have no choice but to resort to drastic changes or reform: *kaikaku*.”

It’s not easy to find topics to write about, and even when I find good topics, they have to pass my threshold level. As I was meditating on this, I started to think about procrastination and ambiguity. So my column today is about the importance of “fuzzy concepts.” I am using the term in a loose sense and will not go into depth or specifics.

We like to think in boxes or categories. It makes it easy for us to make inferences and aids in decision making. “She is tall” or “He is short”; “This is hard” or “This is easy.” This is a reductionist approach and from a logic standpoint, this type of thinking is called “Boolean logic” and is based on a dichotomy of true or false (0 or 1). Something is either “X” or “not X.” This type of thinking has its merits sometimes.

In contrast, fuzzy logic helps us in seeing the in between. The fuzzy logic approach utilizes a spectrum viewpoint. It starts as 0 at one end and slowly increases bit by bit all the way to 1. We can express any point between 0 and 1 as a decimal value.
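The “tall or short” example above can be made concrete with a membership function that ramps from 0 to 1. This is only an illustrative sketch; the cutoff heights are arbitrary values I chose, not part of any standard:

```python
def tall_membership(height_cm, short_cutoff=160.0, tall_cutoff=190.0):
    """Degree (0 to 1) to which a height counts as 'tall',
    ramping linearly between two illustrative cutoffs."""
    if height_cm <= short_cutoff:
        return 0.0
    if height_cm >= tall_cutoff:
        return 1.0
    return (height_cm - short_cutoff) / (tall_cutoff - short_cutoff)

print(tall_membership(150))  # 0.0 -- clearly not tall
print(tall_membership(175))  # 0.5 -- somewhere in between
print(tall_membership(195))  # 1.0 -- clearly tall
```

Boolean logic would force 175 cm into one box or the other; the fuzzy view lets it be “tall to degree 0.5.”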

I recently read Jordan Ellenberg’s wonderful book, *How Not To Be Wrong: The Power of Mathematical Thinking* (Penguin Books, 2014). I found the book to be enlightening and a great read. Ellenberg has the rare combination of being knowledgeable and capable of teaching in a humorous and engaging way. One of the gems in the book is, “Which way you should *go* depends on where you *are*.”

This lesson is about the dangers of misapplying linearity. When we are thinking in terms of abstract concepts, the path from point A to point B may appear to be linear. After all, the shortest path between two points is a straight line. This type of thinking is linear thinking.

To illustrate this, let’s take the example of poor quality issues on the line. The first instinct to improve quality is to increase inspection. In this case, point A = poor quality, and point B = higher quality. If we plot the relationship between quality and inspection, we might incorrectly assume it is linear, i.e., that more inspection always results in better quality.

Today I will look at epistemology at the *gemba.* Epistemology is the part of philosophy that deals with the theory of knowledge. It tries to answer the questions, “How do we know things, and what are the limits of our knowledge?” I have been learning about epistemology for a while now, and I find it an enthralling subject.

The best place to start this topic is with Meno’s paradox. Plato wrote about Meno’s paradox as a conversation between Socrates and Meno in the book aptly called *Meno.* This is also called the “paradox of inquiry.” The paradox starts with the statement that if you know something, then you do not need to inquire about it. Further, if you do *not* know something, then the inquiry is not possible, because you do not know what you are looking for. Thus, in either case inquiry is useless. Plato believed that we are all born with complete knowledge, and all we need to do is recollect what we know as needed.

It’s been a while since I’ve written about statistics. So in this column, I will be looking at the rules of three and five. These are heuristics, or rules of thumb, that can help us out. They are associated with sample sizes.

Let’s assume that you are looking at a binomial event (pass or fail). You took 30 samples and tested them to see how many passes or failures you get. The results yielded no failures. Then, based on the rule of three, you can state at a 95-percent confidence level that the upper bound on the failure rate is 3/30 = 10 percent; in other words, the reliability is at least 90 percent. The rule is written as:

*p* = 3/*n*

where *p* is the upper bound of failure, and *n* is the sample size.

Thus, if you used 300 samples, then you could state with 95-percent confidence that the process is *at least* 99-percent reliable based on p = 3/300 = 1%. Another way to express this is to say that with 95-percent confidence, fewer than 1 in 100 units will fail under the same conditions.

This rule can be derived from the binomial distribution. The 95-percent confidence comes from the alpha value of 0.05. The rule-of-three formula becomes more accurate as the sample size grows; it works reasonably well for sample sizes of 20 or more.
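The derivation above can be checked numerically. With zero failures in *n* trials, the exact 95-percent upper bound on the failure probability solves (1 − *p*)^*n* = 0.05, and 3/*n* is its approximation. A minimal sketch (function names are mine, for illustration):

```python
def rule_of_three(n):
    """Approximate 95% upper bound on failure rate, zero failures seen."""
    return 3.0 / n

def exact_upper_bound(n, alpha=0.05):
    """Exact upper bound: solve (1 - p)^n = alpha for p."""
    return 1.0 - alpha ** (1.0 / n)

for n in (30, 300):
    print(n, rule_of_three(n), round(exact_upper_bound(n), 4))
```

For n = 30 the exact bound is about 9.5 percent versus the rule’s 10 percent, so the heuristic is slightly conservative, and the gap shrinks as n grows.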

In today’s column, I will be looking at process validation and the problem of induction. Yesterday, I looked at process validation from another philosophical angle, using the lesson of the Ship of Theseus.

The U.S. Food and Drug Administration (FDA) defines process validation as “the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product.”

In the FDA’s definition, my emphasis falls on two words: “capable” and “consistently.” One of the misconceptions about process validation is that once a process is validated, it achieves an almost immaculate status. One of the horror stories I have heard from my friends in the medical devices field is about a manufacturer that stopped inspecting its product because the process had been validated.

There is a great Greek paradox/puzzle called the Ship of Theseus. There are multiple versions and derivations of it. My favorite (highly watered-down) version is as follows.

Theseus bought a new ship. Each day he replaced one part of the ship. Plank by plank, sail by sail, and oar by oar. Finally, no part of the original ship remained. Now the paradox is this: Is the ship the same as the original ship now that every part has been replaced? This is a great thought experiment about identity and understanding of self. If we go one step further and build a new ship with all the parts that were replaced from the original ship, is the new ship the same as the original ship?

When I read about this great paradox, my mind started thinking about process validation. We get a new piece of equipment, say a pouch sealer, and during the course of multiple years, many of the parts get worn down and replaced. Is the sealer the same as the original sealer? *Is the original validation still valid?*

I have been reading a lot these days about Western philosophy. The most recent book, *All Life is Problem Solving* (Routledge, 2001), is by Karl Popper, one of the great philosophers of the 20th century. This is a collection of Popper’s writings. One of the great teachings from Popper is the concept of “falsification,” which means that as a scientist one should always try to disprove a theory rather than try to confirm it.

A classic example is the case of black swans (not Nassim Nicholas Taleb’s black swan). If one were to theorize that all swans are white, based on the empirical evidence of observing only white swans, then that is simply confirming the theory. The observer is not actively trying to disprove his theory. When a black swan is discovered, his theory breaks down.