# Quality Digest

## Deriving the Success Run Theorem

### Any statistical statement we make should reflect our lack of knowledge

Published: Monday, August 28, 2023 - 11:03

The success run theorem is one of the most common statistical rationales for sample sizes used for attribute data.

It is typically stated in the form:

Having zero failures out of 22 samples, we can be 90% confident that the process is at least 90% reliable (or at least 90% of the population is conforming).

Or:

Having zero failures out of 59 samples, we can be 95% confident that the process is at least 95% reliable (or at least 95% of the population is conforming).

The formula for the success run theorem is given as:

n = ln(1 – C)/ln(R), where n is the sample size, ln is the natural logarithm, C is the confidence level, and R is the reliability.
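The formula can be checked numerically. Here is a minimal sketch (the function name is my own) that rounds up to the next whole sample and reproduces the 22- and 59-sample figures quoted above:

```python
import math

def success_run_sample_size(confidence, reliability):
    """Smallest n such that zero failures in n samples supports the claim
    of at least `reliability` at the given `confidence` level."""
    # n = ln(1 - C) / ln(R), rounded up to a whole number of samples
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_sample_size(0.90, 0.90))  # 22
print(success_run_sample_size(0.95, 0.95))  # 59
```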

The derivation is straightforward, and we can use the multiplication rule of probability to derive it. Let’s assume that we have a lot of infinite size, and we are testing random samples out of the lot. The infinite size of the lot ensures independence of the samples. If the lot were finite and small, then the probability of finding good (i.e., conforming) or bad (nonconforming) parts would change from sample to sample unless we replaced each tested sample back into the lot.

Let’s assume that q is the conforming rate (probability of finding a good part).

Let’s calculate the probability of finding 22 conforming products in a row. In other words, we’re testing 22 random samples, and we want to determine the probability of finding 22 good parts. This is also the probability of not finding any bad product in the 22 random samples. For ease of explanation, let’s assume that q = 0.9, or 90%. This rate of conforming product can also be notated as the reliability, R.

Using the multiplication rule of probability:

p(22 conforming products in a row) = 0.9 × 0.9 × … × 0.9 = 0.9^22 ≈ 0.10 = 10%

If we find zero rejects in the 22 samples, we’re also going to accept the lot. Therefore, this is also the probability of accepting the lot.

The complement of this is the probability of not finding 22 conforming products in a row, or the probability of finding at least one nonconforming product in the 22 samples. This is also the probability of rejecting the lot.

p(rejecting the lot) = 1 – p(22 conforming products in a row) ≈ 1 – 0.10 = 0.90 = 90%
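The two probabilities above follow directly from the multiplication rule; a quick sketch:

```python
# Multiplication rule: probability of 22 conforming parts in a row
# when the conforming rate (reliability) R is 0.9.
R = 0.9
n = 22
p_all_good = R ** n          # 0.9^22, roughly 0.10 (10%)
confidence = 1 - p_all_good  # complement, roughly 0.90 (90%)
print(round(p_all_good, 4), round(confidence, 4))
```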

This can also be stated as the confidence that if the lot passes our inspection (i.e., we found zero rejects), then the lot is at least 90% conforming.

In other words, C = 1 – R^n.

Or R^n = 1 – C

Taking logarithms of both sides:

n * ln(R) = ln(1 – C)

Or n = ln(1 – C)/ln(R)

Using the example, if we tested 22 samples from a lot, and there were zero rejects, then we can with 90% confidence say that the lot is at least 90% conforming. This is also a form of lot tolerance percent defective (LTPD) sampling in acceptance sampling. We can get the same results using an OC curve.

Using a similar approach, we can derive a one-sided, nonparametric tolerance interval. If we test 22 samples, we can say with 90% confidence level that at least 90% of the population is above the smallest value of the samples tested.
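This tolerance-interval claim can be checked by simulation. The sketch below assumes a Uniform(0, 1) population purely for illustration (the result is distribution-free for any continuous distribution): the smallest of 22 samples falls at or below the population’s 10th percentile, so that at least 90% of the population lies above it, in about 1 – 0.9^22 ≈ 90% of trials.

```python
import random

# Monte Carlo check of the one-sided nonparametric tolerance interval:
# with n = 22 samples from Uniform(0, 1), the sample minimum should land
# at or below the 10th percentile (0.10) in ~90% of trials.
random.seed(1)
n, trials = 22, 50_000
hits = sum(
    min(random.random() for _ in range(n)) <= 0.10
    for _ in range(trials)
)
print(hits / trials)  # close to 1 - 0.9**22, i.e. about 0.90
```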

Any statistic we calculate should reflect our lack of knowledge of the population parameter. Using a confidence/reliability statement is one way of doing so. I’m calling this the epistemic humility dictum:

Any statistical statement we make should reflect our lack of knowledge of the “true” value/nature of the parameter we’re interested in.

First published July 4, 2023, in Harish’s Notebook.
