Deriving the Success Run Theorem

Any statistical statement we make should reflect our lack of knowledge

Credit: Mathieu Turle on Unsplash

Harish Jose
Mon, 08/28/2023 - 12:03

The success run theorem is one of the most common statistical rationales for sample sizes used for attribute data.


It typically takes the form:

Having zero failures out of 22 samples, we can be 90% confident that the process is at least 90% reliable (or at least 90% of the population is conforming).

Or:

Having zero failures out of 59 samples, we can be 95% confident that the process is at least 95% reliable (or at least 95% of the population is conforming).

The formula for the success run theorem is given as:

n = ln(1 – C) / ln(R), where n is the sample size, ln is the natural logarithm, C is the confidence level, and R is the reliability (the minimum proportion of the population that is conforming).
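The formula is easy to check numerically. A minimal sketch in Python (not from the article), rounding n up to the next whole sample:

```python
import math

def success_run_sample_size(confidence, reliability):
    """Zero-failure sample size for demonstrating `reliability`
    at the given `confidence` level: n = ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_sample_size(0.90, 0.90))  # 22
print(success_run_sample_size(0.95, 0.95))  # 59
```

These reproduce the two sample sizes quoted above: 22 samples for 90/90 and 59 samples for 95/95.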

The derivation is straightforward: we can use the multiplication rule of probability. Assume we have a lot of infinite size and we are testing random samples drawn from it. The infinite size of the lot ensures independence of the samples. If the lot were finite and small, the probability of finding good (i.e., conforming) or bad (nonconforming) parts would change from sample to sample unless we replaced each tested sample back into the lot.
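The multiplication rule invoked here can be illustrated by a small simulation (a sketch of my own, not the author's code): if each independent sample is conforming with probability R, the chance of seeing zero failures in n samples is R^n. For R = 0.90 and n = 22, that is 0.90^22 ≈ 0.098, just under 1 − C = 0.10 — which is why 22 zero-failure samples support 90% confidence in 90% reliability.

```python
import random

random.seed(42)

R, n, trials = 0.90, 22, 100_000

# Count how often a lot that is exactly 90% conforming passes
# a zero-failure test of 22 independent samples.
passes = sum(
    all(random.random() < R for _ in range(n))
    for _ in range(trials)
)
frac = passes / trials
print(round(frac, 3))  # close to 0.90**22, roughly 0.098
```

The simulated pass rate converges on R^n, matching the multiplication-rule argument for independent samples.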

 …

© 2025 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute Inc.
