Statistics Article

By: Minitab LLC

Process validation is vital to the success of companies that manufacture pharmaceutical drugs, vaccines, test kits, and a variety of other biological products for people and animals. According to FDA guidelines, process validation is “the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering a quality product.”

The FDA recommends three stages for process validation. Let’s explore the stage goals and the types of activities and statistical techniques typically conducted within each. You can use Minitab Statistical Software to run any of the analyses here. If you don’t yet have Minitab, try it free for 30 days.

Stage 1: Process design

Goal: Design a process suitable for routine commercial manufacturing that can consistently deliver a product that meets its quality attributes.

Within process design, it is important to demonstrate an understanding of the process and to characterize how it responds to various inputs.
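
As a rough illustration of what that characterization can look like (a minimal sketch with hypothetical factors and data, not an example from the article; Minitab provides DOE tools for this directly), the code below builds a small two-factor full factorial experiment and fits a linear model to estimate how each input moves the response.

# Minimal sketch, assuming two hypothetical inputs (temperature, pressure)
# and a hypothetical yield response; illustrates characterizing a process
# response with a 2^2 full factorial design and a least-squares fit.
import numpy as np

# Coded factor levels: -1 = low setting, +1 = high setting
temperature = np.array([-1, 1, -1, 1])
pressure = np.array([-1, -1, 1, 1])
yield_pct = np.array([78.2, 84.5, 80.1, 90.3])  # hypothetical responses

# Fit yield = b0 + b1*temperature + b2*pressure by least squares
X = np.column_stack([np.ones(4), temperature, pressure])
coef, *_ = np.linalg.lstsq(X, yield_pct, rcond=None)

print(f"intercept:          {coef[0]:.2f}")
print(f"temperature effect: {coef[1]:.2f} per coded unit")
print(f"pressure effect:    {coef[2]:.2f} per coded unit")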

By: David Currie

This is the second article in a three-part series to help readers distinguish good metrics from bad. In part one we discussed good metrics. Here, we will look at a bad metric and consider how to change it into a useful, good metric. A bad metric is one that fails in one or more of the attributes of a good metric and is often unusable for its intended purpose.

Attributes of a good metric

A good metric:
• Supports the goals and objectives of the quality system
• Contains data with sufficient detail to allow analysis of specific defects
• Contains data that have been carefully collected and checked for accuracy and completeness
• Contains data that are combined in a way that clearly represents the process
• Uses a data-collection process that is clearly understood
• Demonstrates a clear relationship between the process and the data being used
• Has a metric-review interval that matches the response time for corrections
• Results in process improvement and overall cost savings

By: Anthony Chirico

Perhaps the reader recognizes d2 as slang for “designated driver,” but quality professionals will recognize it as a control chart constant used to estimate short-term variation of a process. The basic formula shown below is widely used in control charting for estimating the short-term variation using the average range of small samples. But what exactly is d2 and why should we care?
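
For reference, that formula is typically written as

σ̂ = R̄ / d₂

where R̄ is the average range of small rational subgroups and d₂ is the constant tabulated for the subgroup size.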

L.H.C. Tippett

To find some answers to these questions, we need to consult the 1925 work of L.H.C. Tippett.1 Leonard Henry Caleb Tippett was a student of both Professor K. Pearson and Sir Ronald A. Fisher in England. Tippett pioneered “Extreme Value Theory,” and while advancing the ideas of Pearson’s 1902 paper on Galton’s Difference Problem,2 he noted that the prior work on the distribution of the range for a large number of samples was deficient.

Tippett proceeded to use calculus and hand calculations to integrate and determine the first, second, third, and fourth moments of the range for samples drawn from a standard normal distribution. That is, he calculated the mean, variance, skewness, and kurtosis of the range for sample sizes from two through 1,000 by hand.
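
What Tippett derived by hand can now be approximated in a few lines of code. The sketch below (an illustration added here, not part of Tippett’s work) estimates d₂, the expected range of n independent standard normal observations, by simulation and compares it with the published constants.

# Minimal Monte Carlo sketch (illustrative, not Tippett's derivation):
# d2 is the expected range of n independent standard normal observations.
import numpy as np

rng = np.random.default_rng(1)

def estimate_d2(n, reps=200_000):
    """Estimate d2 = E[max - min] for samples of size n from N(0, 1)."""
    samples = rng.standard_normal((reps, n))
    ranges = samples.max(axis=1) - samples.min(axis=1)
    return ranges.mean()

for n in (2, 3, 4, 5):
    print(f"n = {n}: d2 is approximately {estimate_d2(n):.3f}")
# Tabulated values for comparison: 1.128, 1.693, 2.059, 2.326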

By: Minitab LLC

Choosing the correct linear regression model can be difficult. Trying to model a relationship using only a sample of the data doesn’t make it any easier. Let’s review some common statistical methods for selecting models, the complications you may face, and some practical advice for choosing the best regression model.

It starts when a researcher wants to mathematically describe the relationship between some predictors and the response variable. The research team tasked with the investigation typically measures many variables but includes only some of them in the model. The analysts try to eliminate the variables that are not related to the response and include only those with a true relationship. Along the way, the analysts consider many possible models.

They strive to achieve a Goldilocks balance with the number of predictors they include:
• Too few: An underspecified model tends to produce biased estimates.
• Too many: An overspecified model tends to have less precise estimates.
• Just right: A model with the correct terms has no bias and the most precise estimates.
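
As a rough numerical illustration of that balance (a sketch with simulated data, not taken from the article; it assumes the statsmodels package is available), the code below fits an underspecified, a correctly specified, and an overspecified model to the same data and compares their adjusted R-squared values.

# Illustrative sketch with simulated data: comparing underspecified,
# correctly specified, and overspecified regression models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3_irrelevant = rng.normal(size=n)              # unrelated to the response
y = 3 + 2 * x1 + 1.5 * x2 + rng.normal(size=n)  # true model uses x1 and x2

candidates = {
    "underspecified (x1 only)": np.column_stack([x1]),
    "correct (x1, x2)": np.column_stack([x1, x2]),
    "overspecified (x1, x2, x3)": np.column_stack([x1, x2, x3_irrelevant]),
}

for name, X in candidates.items():
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(f"{name}: adjusted R-squared = {fit.rsquared_adj:.3f}")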

By: Dirk Dusharme @ Quality Digest

In this episode we look at a history of quality, how you serve your customer in the housing industry, and what makes a good review.

“Young couples ‘trapped in car dependency’”

Building entry-level housing along highways may give couples the chance to buy a home, but at what cost to them and the environment?

“The Quality Profession: Where It's Been, Where It's Going,” an interview with Barbara Cleary of PQ Systems

We talk to a 35-year veteran of the quality industry about the changes she has seen in the profession.

“Perfect Information: Customer Reviews That Influence Purchasing Decisions”

Want to leave a customer review that is meaningful and gets readers’ attention? Here’s how.

By: David Currie

Metrics are an important part of an effective quality management system (QMS). They are necessary to understand, validate, and course-correct the QMS, and they should be used to verify that it is achieving the goals and objectives defined by management. In an ISO 9001 system, metrics must be available to assess risk and to validate changes made to the QMS and individual processes. Metrics are also used during management review to validate improvements and to verify that corrective actions have been implemented.

I have seen and used many metrics in the past, and in my experience not all metrics are equally good; in fact, many are totally inappropriate for the purpose for which they are being used. This article, the first in a three-part series, will help readers distinguish good metrics from bad—or as the title suggests, the downright ugly. Once the characteristics of a good metric are known, a bad metric can be converted into a good metric. This series is divided into three parts: Part one explains what a good metric is, part two identifies bad metrics and explains how to convert them, and part three looks at ugly metrics and explains why they have no hope for conversion.

By: Dirk Dusharme @ Quality Digest

In this episode we look at data, data, more data, and then... engineering the perfect human?

“Your Data Are Your Most Valuable Assets”

Just what the heck is Quality 4.0? Remember this acronym: CIA. No, not that CIA. Nicole Radziwill explains.

“Applying Smart Manufacturing Technology to Conduct Smart Inspections”

There is an easier way to do inspections on the shop floor than using a clipboard and pencil (remember those skinny yellowy-orange things?). It's called your mobile device.

By: Anthony Chirico

In my first article, the merits and cautions of AS9138 c=0 sampling plans were discussed, and a simple formula was provided to determine the sample size required to detect nonconforming units. In the second article, the process control properties of MIL-STD-105 c>0 sampling plans were demonstrated, and their connectivity to other process control techniques was discussed. Here, a third alternative will be explored: applying the procedures of MIL-STD-105 to “imaginary limits” that are set proportionally inside the real engineering specification. This imaginary limit procedure does not allow nonconforming units in the sample, and it has superior detection capabilities.
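
To give a rough sense of the mechanics (a hedged sketch with hypothetical limits, sample size, and acceptance number, not the procedure as written in any standard), the code below assumes a normally distributed characteristic, pulls an imaginary limit one standard deviation inside the upper specification, and compares the chance of accepting the lot when the acceptance number is applied at the imaginary limit versus a c=0 plan applied at the real specification.

# Hedged illustration with hypothetical numbers, not the standard's procedure:
# acceptance sampling against an "imaginary" limit set inside the specification.
from scipy.stats import binom, norm

usl = 10.0                          # upper engineering specification (hypothetical)
sigma = 1.0                         # process standard deviation (assumed known)
imaginary_usl = usl - 1.0 * sigma   # imaginary limit pulled inside the spec

def accept_prob(n, c, p):
    """P(accept): no more than c of the n sampled units exceed the limit."""
    return binom.cdf(c, n, p)

# Evaluate a few hypothetical process means drifting toward the specification.
for mean in (7.5, 8.0, 8.5):
    p_real = 1 - norm.cdf(usl, mean, sigma)            # fraction beyond the real spec
    p_imag = 1 - norm.cdf(imaginary_usl, mean, sigma)  # fraction beyond the imaginary limit
    pa_real = accept_prob(n=13, c=0, p=p_real)         # c=0 plan on the real spec
    pa_imag = accept_prob(n=13, c=2, p=p_imag)         # c=2 plan on the imaginary limit
    print(f"mean = {mean:.1f}: Pa(c=0 at spec) = {pa_real:.3f}, "
          f"Pa(c=2 at imaginary limit) = {pa_imag:.3f}")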

By: Anthony Chirico

In my previous article, I discussed the merits and cautions of the “acceptance number” equal zero (c=0) sampling plans contained within AS9138. A simple formula was provided to determine appropriate sample size, and it was illustrated that twice the inspection does not provide twice the consumer protection. Although there is an undeniable emotional appeal to implementing sampling procedures that have an acceptance number of zero, readers must not jump to the conclusion that c=0 sampling procedures provide better consumer protection at the designed lot tolerance percent defective (LTPD) point.
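
As a hedged numerical illustration of that caution (the two plans below are hypothetical, not values taken from AS9138 or MIL-STD-105 tables), the probability of acceptance for a single sampling plan is the binomial probability of finding no more than c nonconforming units in a sample of n, so consumer protection at a chosen lot quality can be compared directly.

# Hedged illustration with hypothetical plans (not tabled values from any
# standard): comparing probability of acceptance at a chosen lot quality.
from scipy.stats import binom

def accept_prob(n, c, p):
    """P(accept) for a single sampling plan (n, c) at lot fraction defective p."""
    return binom.cdf(c, n, p)

lot_fraction_defective = 0.10   # quality level at which protection is compared
plans = {
    "c=0, n=13 (hypothetical)": (13, 0),
    "c=1, n=32 (hypothetical)": (32, 1),
}

for name, (n, c) in plans.items():
    pa = accept_prob(n, c, lot_fraction_defective)
    print(f"{name}: Pa at 10% defective = {pa:.3f}")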

In this article the merits and limitations of MIL-STD-105 will be illustrated, and its link to process control will be demonstrated. Before discussing its technical merits, however, the standard’s impressive evolution deserves some recognition.

By: Anthony Chirico

Aerospace standard AS9138, “Quality management systems statistical product acceptance requirements,” was issued this year (2018), a few years after its accompanying guidance materials in section 3.7 of the International Aerospace Quality Group’s (IAQG) Supply Chain Management Handbook. The new aerospace standard supersedes the aerospace recommended practice ARP9013 and, relative to MIL-STD-105 (ANSI/ASQ Z1.4), claims to shift focus from the producer’s risk to the consumer’s risk with sampling plans having an acceptance number of zero (c=0).

Somewhere along this evolutionary path, the sampling procedures of MIL-STD-105 have fallen out of favor, even though the consumer risks of MIL-STD-105 at its designed lot tolerance percent defective (LTPD) point are superior to those of most plans found within AS9138.
