Statistics Article

By: Jody Muelaner

In a general sense, capability is the ability to do something. Within manufacturing, capability is given a much more specific definition. It is an expression of the accuracy of a process or equipment, in proportion to the required accuracy.

This can be applied to production processes, in which case any random variation and bias in the process must be significantly smaller than the product tolerance. It can also be applied to measurements, where any uncertainties in the measurement must be significantly smaller than the product tolerance or process variation that is being measured.
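The idea of comparing process variation to the tolerance is commonly expressed through capability indices such as Cp and Cpk. As a rough sketch (the function name and sample data are illustrative, not from the article):

```python
def capability_indices(data, lsl, usl):
    """Estimate Cp and Cpk from sample data and specification limits.

    Cp compares the tolerance width to the process spread (6 sigma);
    Cpk also accounts for how far the process mean sits from the limits.
    """
    n = len(data)
    mean = sum(data) / n
    # Sample standard deviation (n - 1 denominator).
    sigma = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    cp = (usl - lsl) / (6 * sigma)                    # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # actual capability
    return cp, cpk
```

For a centered process the two indices coincide; a Cpk noticeably below Cp signals that the process mean has drifted toward one specification limit.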

By: Rohit Mathur

Whatever the process or type of data collected, all data display variation. This is also true in software development. Any measure or parameter of interest to our business will vary from time period to time period, e.g., number of incidents per week or month, time taken in resolving incidents, number of tickets encountered in a production support environment per month, and defect density in code.

Understanding variation is about being able to describe the behavior of processes or systems over time. This variation can be stable, predictable, and routine, or unstable, unpredictable, and exceptional. Being able to distinguish between stable or common-cause variation, and unstable or special-cause variation, helps us to decide the type of action needed to improve the process. The control chart, developed by Walter Shewhart, is the tool that enables us to do so.

By: Romesh Saigal, Abdullah AlShelahi

Soon after the Great Recession, the U.S. stock markets plunged—and rebounded within 36 minutes. The Dow Jones Industrial Average dropped more than 9 percent, losing more than 1,000 points before suddenly recovering.

This May 6, 2010, event was the first recorded “flash crash.” Although it didn’t have long-term effects, it raised concerns among investors about the stability of the stock market.

By: Scott A. Hindle

In everyday language, “in control” and “under control” are synonymous with “in specification.” Requirements have been met. Things are OK. No trouble.

“Out of control,” on the other hand, is synonymous with “out of specification.” Requirements have not been met. Things are not OK. Trouble.

Using this language, an obvious axiom would be: Take action when the process is out of control.

The everyday use of in and out of control is, however, unfortunate for control charts, the major tool of statistical process control (SPC). Why? Because in SPC these terms speak of processes as being stable or unstable. To characterize a process as stable or unstable, process limits, from process data, are needed. Specification limits are not needed.

Given the easy-to-understand basis for taking action on meeting or not meeting requirements, coupled with the risk of confusion over the terms in control and out of control, why use control charts? If you are curious to see some of the benefits of doing so, read on. Two case studies are used.

Case one: Part thickness

During a regular review meeting in Plant 17, in- and out-of-specification data on the thickness of part 64 were reviewed.

By: Chad Kymal, Gregory F. Gruska

During the early 1980s, GM, Ford, and Chrysler established the Automotive Industry Action Group (AIAG), a not-for-profit organization with the mission “To improve its members’ competitiveness through a cooperative effort of North American vehicle manufacturers and their suppliers.” In the late 1980s, U.S. automotive suppliers, through the auspices of the American Society for Quality (ASQ), approached the VPs of purchasing for GM, Ford, and Chrysler and explained the burden of multiple standards that were being imposed on the supply base. Not only were there multiple OEM standards, there were hundreds of tier-one standards as well.

By: Jay Arthur—The KnowWare Man

When I first learned quality improvement back in 1989 at Florida Power and Light, the consultants who trained us taught a very specific way to draw a Pareto chart. They’d been trained in Japan, the place where quality improvement first took root during the 1950s, so I took it for granted that the way they drew Pareto charts was the authentic and best way to do so.

A Pareto chart combines a bar graph with a cumulative line graph. Using the way we were taught to draw a Pareto chart (figure 1), the bars are touching, making it extremely easy to visually compare levels from one bar to the next. The bars span the entire available space along the x axis. The cumulative line graph springs from the bottom left corner of the first big bar, and each subsequent point is plotted from the corresponding top right corner of its bar.
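The underlying arithmetic of a Pareto chart is simply sorting categories by frequency and accumulating percentages. A minimal sketch (function name and sample data are illustrative):

```python
def pareto_points(counts):
    """Sort categories by frequency (descending) and compute the
    cumulative-percentage values for a Pareto chart's line graph."""
    items = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(v for _, v in items)
    running, cum_pct = 0, []
    for _, v in items:
        running += v
        cum_pct.append(100.0 * running / total)
    return [k for k, _ in items], [v for _, v in items], cum_pct
```

To reproduce the drawing style described above with a plotting library such as matplotlib, one would set the bar width to 1.0 so the bars touch, and start the cumulative line at the bottom-left corner of the first bar rather than at its center.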

By: Ryan E. Day

Current business conversation often focuses on data and big data. Data are the raw information from which statistics are created; statistics, in turn, provide an interpretation and summary of those data. Statistics make it possible to analyze real-world business problems and measure key performance indicators that enable us to set quantifiable goals. Control charts and capability analysis are key tools in these endeavors.

Control charts

Developed in the 1920s by Walter A. Shewhart, control charts are used to monitor industrial or business processes over time. Control charts are invaluable for determining if a process is in a state of control. But what does that mean?
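One common form is the individuals (XmR) chart, whose natural process limits are computed from the data themselves using the average moving range. A hedged sketch, assuming the standard XmR scaling factor of 2.66:

```python
def xmr_limits(values):
    """Natural process limits for an individuals (XmR) chart.

    Limits are mean +/- 2.66 * average moving range, the conventional
    three-sigma limits used in Shewhart-style charts.
    """
    mean = sum(values) / len(values)
    # Moving ranges: absolute differences between consecutive points.
    mr = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(mr) / len(mr)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar
```

Points falling outside these limits (or systematic patterns within them) signal special-cause variation; note that the limits come from process data, not from specifications.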

By: William A. Levinson

Anthony Chirico1 describes how narrow-limit gauging (NLG, aka compressed limit plans) can reduce enormously the required sample size, and therefore the inspection cost, of a traditional attribute sampling plan. The procedure consists of moving acceptance limits t standard deviations inside the engineering specifications, which increases the acceptable quality level (AQL) and therefore reduces the sample size necessary to detect an increase in the nonconforming fraction.
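The compression step itself is straightforward: each acceptance limit is moved t standard deviations inside its engineering specification. A minimal sketch (function name and numbers are illustrative):

```python
def compressed_limits(lsl, usl, sigma, t):
    """Narrow-limit (compressed-limit) gauging: move the acceptance
    limits t standard deviations inside the engineering specification."""
    return lsl + t * sigma, usl - t * sigma
```

Gauging parts against these tighter limits makes a shift in the process mean produce a much larger change in the observed fail rate, which is what allows the smaller sample size.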

By: Teofilo Cortizo

Within maintenance management, the term MTBF (mean time between failures) is the most important key performance indicator after physical availability.

Unlike MTTF (mean time to failure), which relates directly to available equipment time, MTBF also adds the time spent under repair. That is, it starts its count at a failure and only stops when the fault is remedied and the equipment is restarted and performing again. According to ISO 12849, this indicator can only be used for repairable equipment; MTTF is the equivalent for nonrepairable equipment.

The graphic below illustrates these occurrences:


Figure 1: Mean time between failures

To calculate the MTBF in figure 1, we add the times T1 and T2 and divide by two. That is, we take the average of all the intervals from one failure to the next, repair and restart included. It is, therefore, a simple calculation. But what does MTBF mean?
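The averaging described above can be sketched in a few lines; given the timestamps of successive failures (with repair time included in each interval, as in figure 1), MTBF is just the mean gap. Names and numbers here are illustrative:

```python
def mtbf(failure_times):
    """Mean time between failures: the average of the gaps T1, T2, ...
    between consecutive failures, repair time included."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)
```

With failures at hours 0, 100, and 250, the two gaps are 100 and 150 hours, giving an MTBF of 125 hours.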

By: Scott A. Hindle

“Process Capability: What It Is and How It Helps,” parts one, two, three, and four, discussed Alan’s development in the field of process capability.1 He’d learned about the mistakes that can be made and how to avoid them in practice to become better at his job. Alan had since passed on his learning to colleagues, one of whom, Owen, had led some successful assessments of process capability.
