Statistics Article

By: NIST

NIST scientists have devised and improved a prototype instrument the size of a loaf of bread that can substantially increase the accuracy of length measurements in commerce. Perhaps surprisingly, it does so by achieving the most accurate measure of the refractive index of air reported to date.

An enormous share of U.S. industry depends on precise dimensional measurements, frequently on the scale of nanometers (nm, billionths of a meter) or even smaller. Such measurements can only be made by using light, typically laser light, employing a device called an interferometer, which measures lengths in terms of the wavelength of light. For example, the diameter of a pinhead is about 2,400 wavelengths of red light.
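That figure is easy to verify with a rough back-of-the-envelope check. The sketch below assumes a pinhead roughly 1.5 mm across and red helium-neon laser light at 633 nm; both values are illustrative assumptions, not from the article.

```python
# Rough check: a pinhead about 1.5 mm across, divided by the wavelength of
# 633 nm red (helium-neon) laser light. Both values are assumptions.
pinhead_diameter_m = 1.5e-3
red_wavelength_m = 633e-9
print(round(pinhead_diameter_m / red_wavelength_m))  # about 2,370 wavelengths, i.e. roughly 2,400
```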

By: TÜV SÜD America

The revised ISO 9001:2015 standard was published on Sept. 23, 2015. The release of this standard has many organizations preparing for the transition from ISO 9001:2008, and with transition comes uncertainty: How long will it take? What will it cost my organization? Where do I start? These and other questions are far from uncommon.

By: Fred Schenkelberg

The planning of environmental or reliability testing becomes a question of sample size at some point. It’s probably the most common question I hear as a reliability engineer: How many samples do we need?

Also, when evaluating supplier-run test results, we need to understand the implications of the results, again based on the number of samples in the test. If the supplier runs 22 samples without failure over a test that replicates the shipping set of stresses, then we need a way to interpret those results.

We often use success testing (no expected or actual failures during the testing) to minimize the number of samples required for a test while still demonstrating some level of confidence for a specified reliability level. The basis for success testing is the binomial distribution. Under the applied stress, the product either works or it doesn't: binary results.

Recently I received a request to explain where the success-testing sample size formula comes from, or its derivation. First, here's the formula:

n = ln(1 - C) / ln(R)

where C is the confidence and R is the lower limit of the reliability.
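As a quick sanity check, here is a minimal Python sketch (the function name and example values are mine, not from the original article) that implements the formula and rounds up to a whole number of samples. For 90-percent confidence of 90-percent reliability it returns 22, matching the zero-failure supplier test mentioned above.

```python
import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) sample size: n = ln(1 - C) / ln(R)."""
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)  # round up to the next whole sample

# Demonstrate 90% reliability at 90% confidence with no failures allowed
print(success_run_sample_size(0.90, 0.90))  # -> 22
```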

By: Douglas C. Fair

Sponsored Content

Statistical process control (SPC) software has been around for decades, used by manufacturers across industries to help monitor process behavior and control quality on the shop floor. Like any technology, the software has evolved over the years into something much more than a tool for collecting quality data from a single manufacturing site.

In addition to its intrinsic ability to collect quality data, today’s most advanced SPC software can serve as a solution for managing global, enterprisewide quality. However, many manufacturers are hesitant to embrace this expanded functionality. They are either reluctant to move away from their familiar, yet likely antiquated, SPC system, or are simply unsure what other value the technology has to offer. So, what else can (and should) your quality software do? Consider the following checklist.

By: Eston Martz

I read trade publications that cover everything from banking to biotech, looking for interesting perspectives on data analysis and statistics, especially where they pertain to quality improvement.

Recently I read a great blog post by Tony Taylor, an analytical chemist with a background in pharmaceuticals. In it, he discusses the implications of the FDA's updated guidance for industry analytical procedures and methods validation. His audience comprises analytical chemists and pharmaceutical researchers, people who are technologically savvy and adept at solving problems. The kind of people you’d imagine are very capable and eager to collect some data and figure out what they mean.

Or maybe not.

What Taylor’s post makes clear is that even a highly educated, scientifically inclined audience like this doesn’t necessarily appreciate the value of statistical analysis—or at least, doesn’t really enjoy actually doing it.

By: Eston Martz

Whatever industry you’re in, you’re going to need to buy supplies. If you’re a printer, you’ll need to purchase inks, various types of printing equipment, and paper. If you’re in manufacturing, you’ll need to obtain parts that you don’t make yourself. But how do you know you’re making the right choice when you have multiple suppliers vying to fulfill your orders? How can you be sure you’re selecting the vendor with the highest quality, or eliminating the supplier whose products aren’t meeting your expectations?

Let’s take a look at an example from automotive manufacturing to see how we can use data to make an informed decision about the options.

Camshaft problems

Thanks to camshafts that don’t meet specifications, too many of your company’s engines are failing. It’s harming your reputation and revenue. Your company has two different camshaft suppliers, and it’s up to you to figure out if camshafts from one or both of them are failing to meet standards.
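One way to start such a comparison is sketched below. The measurement values are hypothetical, and the choice of a Welch two-sample t-test plus a look at the spreads is my own, not necessarily the author's method.

```python
# Minimal sketch: compare hypothetical camshaft length measurements (mm)
# from two suppliers against a 600 mm target.
import numpy as np
from scipy import stats

supplier_a = np.array([599.8, 600.2, 600.1, 599.9, 600.0, 600.3, 599.7, 600.1])  # hypothetical
supplier_b = np.array([600.9, 599.1, 601.2, 598.8, 600.5, 599.4, 601.0, 598.9])  # hypothetical

t_stat, p_value = stats.ttest_ind(supplier_a, supplier_b, equal_var=False)
print(f"Means:    A={supplier_a.mean():.2f}, B={supplier_b.mean():.2f}")
print(f"Std devs: A={supplier_a.std(ddof=1):.2f}, B={supplier_b.std(ddof=1):.2f}")
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.3f}")
# Similar means but a much larger spread for one supplier would point to that
# supplier's camshafts drifting outside specification more often.
```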

By: Scott A. Hindle

When considering how good a production process is, it’s important to ask, “Can we expect the output to be fully conforming?” An assessment of process capability can answer this. Data are needed, but how many? Is “30” the right number? This article examines these last two questions.

First, why 30?

There’s an old joke about statisticians not knowing the difference between 30 and infinity, and figure 1 should shed light on its origin. Degrees of freedom, shown on the x-axis and hereafter referred to as “d.f.,” help to determine how precise, or “solid,” an estimate of standard deviation is, given its estimated uncertainty (the y-axis).1 Figure 1 shows that by the time an estimate of standard deviation is based on 30 d.f., it’s about as precise an estimate as it’s likely to get. (If 30 d.f. aren’t sufficient, getting up to 120 d.f.—a fourfold increase—is necessary to reduce the uncertainty by half.) This is potentially important because an estimate of standard deviation is essential to make an assessment of process capability possible.
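For readers without figure 1 in front of them, here is a minimal Python sketch (my own illustration, not from the article) using the standard chi-square interval for a standard deviation. It shows the interval narrowing quickly up to about 30 d.f. and only slowly beyond that.

```python
# 95% confidence limits for the true standard deviation, expressed as
# multiples of the estimate s, for several degrees of freedom.
from scipy import stats

for df in (5, 10, 30, 60, 120):
    lower = (df / stats.chi2.ppf(0.975, df)) ** 0.5  # sigma could be as small as lower*s
    upper = (df / stats.chi2.ppf(0.025, df)) ** 0.5  # ...or as large as upper*s
    print(f"d.f.={df:>3}: sigma likely between {lower:.2f}*s and {upper:.2f}*s")
```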

By: Douglas Allen

It's a cold winter's night in northern New Hampshire. You go out to the woodshed to grab a couple more logs, but as you approach, you hear a rustling inside the shed. You've gotten close enough to know you have a critter in the woodpile. You run back inside, bolt the door, hunker down with your .30-06, and prepare for a cold, fireless night.

Analyzing data using common tools like F-tests, t-tests, transformations, and ANOVA methods is a lot like that scenario. They can tell you that you've got a critter in the woodshed, but they can't tell you whether it's a possum or a black bear. You need to take a look inside to figure this out. Limiting data analysis to the results you get from the tools cited above almost always leads to missed information and, often, to wrong decisions. Charting is the way to take a look inside your data.
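As a minimal illustration of that point (the simulated data and run charts below are my own, not the author's): two data series can have very similar summary statistics and still behave completely differently over time.

```python
# Two series with comparable means and standard deviations; summary numbers
# alone cannot tell them apart, but their run charts look nothing alike.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
stable = rng.normal(10, 1.4, 60)                  # steady process, sd about 1.4
shifted = np.concatenate([rng.normal(9, 1, 30),   # comparable overall mean and spread,
                          rng.normal(11, 1, 30)]) # but shifted mid-run

for name, data in (("stable", stable), ("shifted", shifted)):
    print(f"{name}: mean={data.mean():.2f}, std={data.std(ddof=1):.2f}")

# The run charts reveal what the summary numbers hide.
fig, axes = plt.subplots(2, 1, sharey=True)
axes[0].plot(stable, marker="o")
axes[0].set_title("Stable process")
axes[1].plot(shifted, marker="o")
axes[1].set_title("Process with a mid-run shift")
plt.tight_layout()
plt.show()
```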

Donald J. Wheeler

The shape parameters for a probability model are called skewness and kurtosis. While skewness at least sounds like something we might understand, kurtosis simply sounds like jargon. Here we’ll use...

Almost seven years ago, Quality Digest presented a short article by Matthew Barsalou titled “A Worksheet for Ishikawa Diagrams.” At the time, I commented concerning enhancements that provide greater granularity. Indicating that he would probably have little time to devote to such a project,...

The computation for skewness does not fully describe everything that happens as a distribution becomes more skewed. Here we shall use some examples to visualize just what skewness does—and does not—involve. The mean for a probability model describes the balance point. The standard deviation...

Does your use of probabilities confuse your audience? Sometimes even using numbers can be misleading. The notion of a 1-in-a-100-year flood doesn’t prevent the possibility of flooding occurring in consecutive years. This description is no more than a statistical device for explaining the likelihood...

There are four major questions in statistics. These can be listed under the headings of description, probability, inference, and homogeneity. An appreciation of the relationships between these four areas is essential for successful data analysis. This column outlines these relationships and...

By: CorDEX Instruments


NFPA 70E, whose full title is “Standard for Electrical Safety in the Workplace,” is a standard written by the National Fire Protection Association (NFPA). NFPA 70E was created primarily to assist companies and their personnel in avoiding potential fatalities and injuries from electrocution, arc-flash incidents, and arc blasts in the workplace. It covers subjects such as selecting appropriate personal protective equipment (PPE), maintenance, employee training, risk assessments, and safe working practices.

Recent regulation changes

The 2015 edition of NFPA 70E introduces a major change in how electrical risk is evaluated. In the 2012 edition, quantifying a potential electrical risk was done via shock and arc-flash analysis. The 2015 edition streamlines requirements for arc and shock protection, and outlines revised program requirements with a greater emphasis on risk assessment—now referred to as a “flash risk assessment.”
