Statistics Article

By: Donald J. Wheeler

Most of the world’s data are obtained as byproducts of operations. These observational data track what happens over time and have a structure that requires a different approach to analysis than that used for experimental data. An understanding of this approach will reveal how Shewhart’s generic, three-sigma limits are sufficient to define economic operation for all types of observational data.
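
Shewhart's generic three-sigma limits are commonly computed from an XmR (individuals and moving range) chart. The following is a minimal sketch of that standard calculation; the data values are invented for illustration, and the 2.66 scaling factor is the conventional 3/d2 constant with d2 = 1.128 for moving ranges of size two.

```python
def xmr_limits(values):
    """Return (LCL, center line, UCL) for an individuals (X) chart."""
    n = len(values)
    mean = sum(values) / n
    # Moving ranges: absolute differences between successive values
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # Three-sigma limits via the standard 2.66 * average moving range
    ucl = mean + 2.66 * mr_bar
    lcl = mean - 2.66 * mr_bar
    return lcl, mean, ucl

data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4]
lcl, center, ucl = xmr_limits(data)
print(f"LCL={lcl:.3f}, CL={center:.3f}, UCL={ucl:.3f}")
```

Points outside these limits signal exceptional variation; points inside them are consistent with routine variation, so extrapolation to future production is reasonable only when the chart shows no such signals.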

Management requires prediction, yet all data are historical. To use historical data to make predictions, we will have to use some sort of extrapolation. We might extrapolate from the product we have measured to a product not measured, or we might even extrapolate from the product measured to a product not yet made. Either way, the problem of prediction requires that we know when these extrapolations are reasonable and when they are not.

The structure of observational data

Before we talk about prediction, we need to consider the structure of observational data. For any one product characteristic we can usually list dozens, or even hundreds, of cause-and-effect relationships that affect that characteristic. Some of these causes will have larger effects than others. So, if we had perfect knowledge, we could arrange the causes in order according to the size of their effects to obtain a Pareto diagram like the one in figure 1.

By: Anthony D. Burns

I’m a chemical engineer. The fundamentals of the chemical engineering profession were laid down 150 years ago by Osborne Reynolds. Although chemical engineering has seen many advances, such as digital process control and evolutionary process optimization, every engineer understands and uses Reynolds’ work. Most people have heard of the Reynolds number, which plays a key role in calculating air and liquid flows. There are no fads. Engineers use the fundamentals of the profession.
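
The Reynolds number is the dimensionless ratio of inertial to viscous forces, Re = ρvD/μ. A minimal sketch of the calculation follows; the fluid properties (roughly water at 20 °C) and pipe dimensions are illustrative assumptions, not values from the article.

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * D / mu for flow in a pipe (all SI units)."""
    return density * velocity * diameter / viscosity

rho = 998.0   # kg/m^3, water at ~20 C (assumed)
v = 1.5       # m/s, mean flow velocity (assumed)
d = 0.05      # m, pipe inner diameter (assumed)
mu = 1.0e-3   # Pa*s, dynamic viscosity (assumed)

re = reynolds_number(rho, v, d, mu)
# For pipe flow, Re below ~2300 is generally laminar; above ~4000, turbulent
print(f"Re = {re:.0f}")
```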

Fads, fads, fads

By contrast, in the past 70 years, “quality” has seen more than 20 fads. The fundamentals have been forgotten and corrupted. Quality has been lost. Quality managers engage in an endless pursuit of magic pudding that will fix all their problems.

Alarmingly, the latest “quality” fad, Agile, has nothing to do with quality. It’s a software development fad that evolved from James Martin’s rapid application development (RAD) fad of the 1980s. This in turn grew into the rapid iterative processing (RIP) fad. When it comes to quality today, anything will do, no matter how unrelated.

By: W. Edwards Deming

Editor’s note: The following is from a transcript of a forgotten speech given in Tokyo in 1978 by W. Edwards Deming for the Union of Japanese Scientists and Engineers (JUSE). Because the original was a poor photocopy, there are small portions of text that could not be transcribed. Transcript courtesy of Mike McLean.

The spectacular leap in quality of most Japanese manufactured products, from third-rate to top quality and dependability, with astounding economy in production, started off in 1950 with a meteoric flash, and still continues. The whole world knows about Japanese quality and the sudden surge upward that began in 1950, but few people have any idea how it happened.

It seems worthwhile to collect in one place the statistical principles of administration that made possible the revolution of quality in Japan, as even at this date, most of these principles are not generally understood or practiced in America. It is for this reason that the title speaks of new principles.

The relative importance of some of the principles explained here has, of course, changed over the years since 1950. Some principles stated here have emerged as corollaries of earlier principles. Other corollaries could be added, almost without end.

By: William A. Levinson

Part one of this article showed that it is possible, by means of a Visual Basic for Applications program in Microsoft Excel, to calculate the fraction of in-specification product that is rejected by a non-capable gage, as well as the fraction of nonconforming product that is accepted. This calculation requires only 1) the process performance metrics, including the parameters of the distribution of the critical-to-quality characteristic, which need not be normal; and 2) the gage variation as assessed by measurement systems analysis (MSA).
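
The idea behind that calculation can be sketched by numerical integration: for each true value of the characteristic, weight the probability of a wrong accept/reject decision by the process density. This is not the article's VBA program; it is a simplified sketch assuming a normal process and normal gage error, with invented parameters (the article's method also handles non-normal distributions).

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def misclassification(mu, sigma, sigma_gage, lsl, usl, n=20000):
    """Return (false_reject, false_accept): the fraction of all product that
    is in spec but rejected, and out of spec but accepted, respectively."""
    lo, hi = mu - 6 * sigma, mu + 6 * sigma
    dx = (hi - lo) / n
    false_reject = false_accept = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        density = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        # Probability the gage's reading of a part at x falls inside the specs
        p_accept = norm_cdf((usl - x) / sigma_gage) - norm_cdf((lsl - x) / sigma_gage)
        if lsl <= x <= usl:
            false_reject += density * (1.0 - p_accept) * dx
        else:
            false_accept += density * p_accept * dx
    return false_reject, false_accept

fr, fa = misclassification(mu=0.0, sigma=1.0, sigma_gage=0.3, lsl=-3.0, usl=3.0)
print(f"in-spec rejected: {fr:.4%}, nonconforming accepted: {fa:.4%}")
```

Both error fractions grow as the gage standard deviation grows relative to the tolerance, which is the motivation for optimizing the acceptance limits in part two.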

Part two of the series shows how to optimize the acceptance limits to either minimize the cost of wrong decisions or assure the customer that it will receive no more than a specified fraction of nonconforming work.

By: William A. Levinson

A clause of IATF 16949:2016 requires measurement systems analysis (MSA) to quantify gage and instrument variation. The deliverables of the generally accepted procedure are the repeatability, or equipment variation, and the reproducibility, or appraiser variation. The Automotive Industry Action Group1 adds an analytic process with which to quantify the equipment variation (repeatability) of go/no-go gages if these come in specified dimensions, or can be adjusted to selected dimensions.

The anvils of a snap gage can, for example, be set accurately to specified dimensions with Johansson gage blocks. Pin gages (also known as plug gages), on the other hand, come in small but discrete increments. If the precision-to-tolerance (P/T) ratio is greater than the generally accepted target, the gage cannot distinguish reliably between good and nonconforming product near the specification limits. This means nonconforming work will reach internal or external customers, while good items will be rejected, as shown in figure 1 below.
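
The P/T ratio itself is a one-line calculation. The sketch below uses the common six-sigma convention (some references use 5.15 sigma instead), and the gage standard deviation and specification limits are invented for illustration.

```python
def p_to_t(sigma_gage, lsl, usl, k=6.0):
    """Precision-to-tolerance ratio: P/T = k * sigma_gage / (USL - LSL)."""
    return k * sigma_gage / (usl - lsl)

# Illustrative values: gage sigma of 0.002 against a 0.1-wide tolerance
ratio = p_to_t(sigma_gage=0.002, lsl=9.95, usl=10.05)
# A common rule of thumb treats P/T above 0.10 as marginal
print(f"P/T = {ratio:.2f}")
```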

By: Saligrama Agnihothri

Health-tracking devices and apps are becoming part of everyday life. More than 300,000 mobile phone applications claim to help with managing diverse personal health issues, from monitoring blood glucose levels to conceiving a child.

But so far the potential for health-tracking apps to improve healthcare has barely been tapped. Although they allow a user to collect and record personal health data, and sometimes even share it with friends and family, these apps typically don’t connect that information to a patient’s digital medical chart, or make it easier for healthcare providers to monitor or share feedback with their patients.

By: William A. Levinson

The first part of this series introduced measurement systems analysis for attribute data, or attribute agreement analysis. AIAG1 provides a comprehensive overview, and Jd Marhevko2 has done an outstanding job of extending it to judgment inspections as well as go/no-go gages. Part two will cover the analytical method, which allows more detailed quantification of the gage standard deviation and also bias, if any, with the aid of parts that can be measured in terms of real numbers.

Part one laid out the procedure for data collection as well as the signal detection approach, which identifies and quantifies the zone around the specification limits where inspectors and gages will not obtain consistent results. The signal detection approach can also deliver a rough estimate of the gage’s repeatability or equipment variation. Go/no-go gages that can be purchased in specific dimensions, or set to specific dimensions (e.g., with gage blocks) do indeed have gage standard deviations even though they return pass/fail results.
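
The core idea of the analytic method can be sketched as a probit regression: if a go gage accepts a part of true dimension x with probability Φ((L_eff − x)/σ_gage), then regressing the probit of the observed acceptance fraction on the reference dimension recovers both the gage standard deviation (from the slope) and the bias (from where acceptance crosses 50 percent). The data, specification limit, and acceptance fractions below are invented for illustration; this is a simplified sketch, not the AIAG procedure verbatim.

```python
from statistics import NormalDist

# (reference dimension, fraction of trials accepted by the go/no-go gage)
observations = [(9.97, 0.95), (9.98, 0.85), (9.99, 0.65),
                (10.00, 0.45), (10.01, 0.25), (10.02, 0.10)]
lower_limit = 10.00  # hypothetical specification limit for this gage

xs = [x for x, _ in observations]
zs = [NormalDist().inv_cdf(p) for _, p in observations]  # probits of acceptance

# Ordinary least squares fit: z = a + b*x
n = len(xs)
xbar = sum(xs) / n
zbar = sum(zs) / n
b = (sum((x - xbar) * (z - zbar) for x, z in zip(xs, zs))
     / sum((x - xbar) ** 2 for x in xs))
a = zbar - b * xbar

sigma_gage = -1.0 / b   # from P(accept|x) = Phi((L_eff - x)/sigma), slope = -1/sigma
l_eff = -a / b          # dimension at which acceptance is 50 percent
bias = l_eff - lower_limit
print(f"sigma_gage={sigma_gage:.4f}, bias={bias:+.4f}")
```

A steeper probit slope means a smaller gage standard deviation, i.e., a sharper transition from accept to reject near the limit.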

By: James Bossert

When we talk about measurement system analysis (MSA), people tend to focus on attribute agreement analysis because it is usually quicker and easier to do than a gauge repeatability and reproducibility (gauge R&R) study. This article is a review of the fundamentals for gauge R&R to remind us why it is so critical. We will review the basic definitions, go through a process for preparing a study, and then review the output in Minitab to make sure we understand what is going on in the analysis.

Why do we do a gauge R&R study in the first place? We do it for two reasons. One is to validate that the measurement process is acceptable. The second is to feel comfortable about the data. We want to avoid the potential embarrassment of presenting data in a meeting and having it challenged. We want to be able to show what we did to validate the data and to have confidence that they are good. Doing MSA also helps us understand what is in the collection process and helps convince the people collecting the data why it’s important to do so. If the data collectors understand why the study is being done, they will be more likely to identify problems when they occur.
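
The headline number in a gauge R&R report combines the repeatability and reproducibility variance components into %GRR, the measurement system's share of total study variation. The sketch below shows that final combination step with invented variance components; a real study would estimate these from the ANOVA table that software such as Minitab produces.

```python
import math

# Illustrative variance components (not from a real study)
var_repeatability = 0.0009    # equipment variation (EV) squared
var_reproducibility = 0.0004  # appraiser variation (AV) squared
var_part = 0.0150             # part-to-part variation

var_grr = var_repeatability + var_reproducibility
var_total = var_grr + var_part
pct_grr = 100.0 * math.sqrt(var_grr / var_total)
# Common AIAG guideline: under 10% acceptable, 10-30% marginal, over 30% unacceptable
print(f"%GRR = {pct_grr:.1f}% of total study variation")
```

Note that %GRR is computed on standard deviations (the square root of the variance ratio), which is why the percentages of total variation for GRR and part-to-part do not add to 100.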

By: William A. Levinson

Measurement systems analysis (MSA) for attributes, or attribute agreement analysis, is a lot like eating broccoli or Brussels sprouts: We must often do things we don't like because they are necessary or good for us. While the "Measurement systems analysis" clause of IATF 16949:2016 does not mention attribute agreement analysis explicitly, it does say that MSA shall be performed to assess "variation present in the results of each type of inspection, measurement, and test equipment system identified in the control plan." It does not limit this requirement to the familiar real-number measurements with which we are comfortable.

Common sense says, meanwhile, that it is beneficial to understand the capabilities and limitations of inspections for attributes. The last thing we want to hear from a customer is, for example, “Your ANSI/ASQ Z1.4 sampling plan with an acceptable quality level of 0.1 percent just shipped us a lot with 2-percent nonconforming work.” Samuel Windsor describes how an attribute gage study saved a company $400,000 a year, which is a powerful incentive to learn about this and use it where applicable.1 Jd Marhevko has done an outstanding job of extending attribute agreement analysis to judgment inspections as well as go/no-go gages.2

By: Ryan McKenna

To date, this series has focused on relatively simple data analyses, such as learning one summary statistic about our data at a time. In reality, we're often interested in slightly more sophisticated analyses that let us learn multiple trends and takeaways at once and paint a richer picture of our data.

In this article, we will look at answering a collection of counting queries—which we call a workload—under differential privacy. This has been the subject of considerable research effort because it captures several interesting and important statistical tasks. By analyzing the specific workload queries carefully, we can design very effective mechanisms for this task that achieve low error.
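
A baseline approach is to answer every query in the workload independently with the Laplace mechanism. The sketch below does exactly that; the dataset and the predicate queries are toy examples, and the budget split (dividing epsilon by the workload size, a worst-case bound on L1 sensitivity) is a deliberately crude baseline that workload-aware mechanisms improve upon.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def answer_workload(data, predicates, epsilon):
    """Answer each counting query with independent Laplace noise.

    A single record can contribute to up to len(predicates) counts, so the
    workload's L1 sensitivity is at most len(predicates); the noise scale
    reflects that worst case.
    """
    scale = len(predicates) / epsilon
    return [sum(1 for row in data if pred(row)) + laplace_noise(scale)
            for pred in predicates]

ages = [23, 35, 41, 29, 52, 64, 37, 45]
workload = [lambda a: a < 30, lambda a: 30 <= a < 50, lambda a: a >= 50]
noisy = answer_workload(ages, workload, epsilon=1.0)
print([round(v, 1) for v in noisy])
```

When the queries are disjoint, as in this toy histogram, the true sensitivity is only 1, so this baseline adds far more noise than necessary; exploiting such structure is precisely what the workload-aware mechanisms discussed here do.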
