Statistics Article

By: Dirk Dusharme @ Quality Digest

In this episode we look at a history of quality, how you serve your customer in the housing industry, and what makes a good review.

“Young couples ‘trapped in car dependency’”

Building entry-level housing along highways may give couples the chance to buy a home, but at what cost to them and the environment?

“The Quality Profession: Where It’s Been, Where It’s Going,” an interview with Barbara Cleary of PQ Systems

We talk to a 35-year veteran of the quality industry about the changes she has seen in the industry.

“Perfect Information: Customer Reviews That Influence Purchasing Decisions”

Want to leave a customer review that is meaningful and gets readers’ attention? Here’s how.

By: David Currie

Metrics are an important part of an effective quality management system (QMS). They are necessary to understand, validate, and course-correct the QMS, and they should be used to verify that the QMS is achieving the goals and objectives defined by management. In an ISO 9001 system, metrics must be available to assess risk and to validate changes made to the QMS and to individual processes. Metrics are also used during management review to validate improvements and verify that corrective actions have been implemented.

I have seen and used many metrics in the past, and in my experience not all metrics are equally good; in fact, many are totally inappropriate for the purpose for which they are being used. This article, the first in a three-part series, will help readers distinguish good metrics from bad—or as the title suggests, the downright ugly. Once the characteristics of a good metric are known, a bad metric can be converted into a good metric. This series is divided into three parts: Part one explains what a good metric is, part two identifies bad metrics and explains how to convert them, and part three looks at ugly metrics and explains why they have no hope for conversion.

By: Dirk Dusharme @ Quality Digest

In this episode we look at data, data, more data, and then... engineering the perfect human?

“Your Data Are Your Most Valuable Assets”

Just what the heck is Quality 4.0? Remember this acronym: CIA. No, not that CIA. Nicole Radziwill explains.

“Applying Smart Manufacturing Technology to Conduct Smart Inspections”

There is an easier way to do inspections on the shop floor than using a clipboard and pencil (remember those skinny yellowy-orange things?). It’s called your mobile device.

By: Anthony Chirico

In my first article, the merits and cautions of AS9138 c=0 sampling plans were discussed and a simple formula was provided to determine the required sample size to detect nonconforming units. In the second article, the process control properties of MIL-STD-105 c>0 sampling plans were demonstrated, and the connectivity to other process control techniques was discussed. Here, a third alternative will be explored that applies the procedures of MIL-STD-105 to “imaginary limits” which are set proportionally inside the real “engineering specification.” This imaginary limit procedure thereby does not allow nonconforming units in the sample and has superior detection capabilities.
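The imaginary-limit idea can be sketched in a few lines. Note that the proportional tightening rule and the fraction k below are illustrative placeholders, not values taken from MIL-STD-105 or AS9138:

```python
def imaginary_limits(lsl, usl, k=0.15):
    """Pull each engineering limit inward by a fraction k of the
    half-spec width. (The proportional rule and default k are
    hypothetical, for illustration only.)"""
    half = (usl - lsl) / 2.0
    return lsl + k * half, usl - k * half

def count_rejects(sample, lsl, usl, k=0.15):
    """Count units falling outside the tightened 'imaginary' limits;
    this count is what the MIL-STD-105 acceptance number would be
    compared against under such a procedure."""
    ilo, ihi = imaginary_limits(lsl, usl, k)
    return sum(1 for x in sample if x < ilo or x > ihi)
```

Because units are judged against the tightened limits, a unit can be "rejected" by the sampling plan while still conforming to the real engineering specification, which is how the procedure avoids accepting samples that contain truly nonconforming units.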

By: Anthony Chirico

In my previous article, I discussed the merits and cautions of the “acceptance number” equal zero (c=0) sampling plans contained within AS9138. A simple formula was provided to determine appropriate sample size, and it was illustrated that twice the inspection does not provide twice the consumer protection. Although there is an undeniable emotional appeal to implement sampling procedures that have an acceptance number of zero, readers must not jump to the conclusion that c=0 sampling procedures provide better consumer protection at the designed lot tolerance percent defective (LTPD) point.
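For a c=0 plan the lot-acceptance probability is Pa = (1 − p)^n, so the sample size that holds consumer risk β at a given LTPD follows directly. A minimal sketch (this is the standard c=0/LTPD relationship, not a formula quoted from AS9138):

```python
import math

def c0_sample_size(ltpd, beta=0.10):
    """Smallest n for a c=0 plan such that a lot with fraction
    nonconforming equal to the LTPD is accepted with probability
    at most beta. Solves (1 - ltpd)**n <= beta for n."""
    return math.ceil(math.log(beta) / math.log(1.0 - ltpd))

# Classic result: 10% LTPD at 10% consumer risk needs n = 22.
n = c0_sample_size(0.10)
```

The logarithmic form also makes the article's caution visible: doubling n does not halve the consumer risk in any simple proportional sense, so "twice the inspection" does not translate into "twice the protection."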

In this article the merits and limitations of MIL-STD-105 will be illustrated, and its link to process control will be demonstrated. Before discussing the technical merits of MIL-STD-105, its impressive evolution deserves some recognition.

By: Anthony Chirico

Aerospace standard AS9138, “Quality management systems statistical product acceptance requirements,” was issued this year (2018), a few years after its accompanying guidance material in section 3.7 of the International Aerospace Quality Group’s (IAQG) Supply Chain Management Handbook. The new aerospace standard supersedes the aerospace recommended practice ARP9013 and, relative to MIL-STD-105 (ANSI/ASQ Z1.4), claims to shift focus from the producer’s risk to the consumer’s risk with sampling plans having an acceptance number of zero (c=0).

Somewhere along this evolutionary path, the sampling procedures of MIL-STD-105 have fallen out of favor, even though the consumer risks of MIL-STD-105 at their designed lot tolerance percent defective (LTPD) point are superior to most plans found within AS9138.

By: Dirk Dusharme @ Quality Digest

In this all-manufacturing episode, we look at the STEM pipeline into manufacturing, supplier development, how to make sense of manufacturing data and, no, manufacturing is not dead.

“Strengthening the STEM Workforce Pipeline Through Outreach”

NIST does more than just research and come up with really cool technology. It also has STEM outreach programs designed to help young people and minorities get excited by all this coolness.

“Making Sense of Manufacturing Data”

Are you collecting enough data? Are you collecting too much data? Do you know the difference between data and information? Doug Fair of InfinityQS does. He told us.

By: Donald J. Wheeler

Parts One and Two of this series illustrated four problems with using a model-building approach when data snooping. This column will present an alternative approach for data snooping that is of proven utility. This approach is completely empirical and works with all types of data.

The model-building approach

When carrying out an experimental study we want to know how a set of input variables affect some response variable. In this scenario it makes sense to express the relationships with some appropriate model (such as a regression equation). The experimental runs will focus our attention on these specific input and response variables, and the model will answer our questions about the relationships.

By: Davis Balestracci

During the early 1990s, I was president of the Twin Cities Deming Forum. I had a wonderful board whose members were full of great ideas. One member, Doug Augustine, was a 71-year-old retired Lutheran minister and our respected, self-appointed provocateur. He never missed an opportunity to appropriately pull us right back to reality with his bluntly truthful observations and guaranteed follow through on every commitment he made.

After W. Edwards Deming’s death in 1993, we tried to keep the forum alive, but monthly meeting attendance started to drop off significantly. We were offering innovative meetings to grow in the practice of Deming’s philosophy, but our members wanted static rehashing and worship of “the gospel according to Dr. Deming.”

The last straw was a meeting where a self-appointed Deming expert/consultant went too far: He lobbed one too many of his predictable, pedantic “gotcha! grenades” at the speaker for alleged deviations from “the gospel.” His persistence blindsided and visibly upset our wonderful guest speaker. I was furious and publicly told him to stop. It did not go over well—except with my board.

By: Laurel Thoennes @ Quality Digest

In the foreword of Mark Graban’s book, Measures of Success: React Less, Lead Better, Improve More (Constancy Inc., 2018), renowned statistician Donald J. Wheeler writes about Graban: “He has created a guide for using and understanding the data that surround us every day.”

“These numbers are constantly changing,” explains Wheeler. “Some of the changes in the data will represent real changes in the underlying system or process that generated the data. Other changes will simply represent the routine variation in the underlying system when nothing has changed.”

The problem is in deciding whether data changes are “noise” or signals of real changes in the system.

“Mark presents the antidote to this disease of interpreting noise as signals,” adds Wheeler. “And that is why everyone who is exposed to business data of any type needs to read this book.”
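The noise-versus-signal decision Wheeler describes is conventionally made with a process behavior (XmR) chart: values outside the natural process limits are treated as signals, everything inside as routine variation. A minimal sketch using Wheeler’s 2.66 moving-range constant (the data values are illustrative):

```python
def xmr_limits(values):
    """Natural process limits for an individuals (XmR) chart:
    mean +/- 2.66 * average moving range, where the moving range
    is the absolute difference between successive values."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    mean = sum(values) / len(values)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def signals(values):
    """Values falling outside the natural process limits --
    candidates for 'real change' rather than routine noise."""
    lo, hi = xmr_limits(values)
    return [x for x in values if x < lo or x > hi]
```

For example, `signals([10, 11, 9, 10, 12, 10, 11, 30])` flags only the 30, while the same series without the spike produces no signals at all, which is exactly the "react less" discipline the book argues for.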