
By: William A. Levinson

The Automotive Industry Action Group’s (AIAG’s) and German Association of the Automotive Industry’s (VDA’s) new Failure Mode and Effects Analysis Handbook (AIAG, 2019) offers significant advances over FMEA as practiced 15 or 20 years ago. The publication is definitely worth buying because the new approach includes valuable methodology; this article covers the most important points and highlights.

New features

The new process is qualitative rather than quantitative, which overcomes a major drawback of the previous approach. The older occurrence ratings were based on the probability of a failure, and the older AIAG manuals even tabulated recommended nonconforming fraction ranges. If, for example, the failure was 50 percent or more likely, the occurrence rating was 10 (worst possible on a 1 to 10 scale), while one or fewer per 1.5 million opportunities earned a rating of 1. These probabilities can be estimated from a process capability study, assuming that one is available; otherwise, one might easily have to guess.
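To see what that older quantitative step looked like in practice, here is a minimal sketch in Python. The thresholds are illustrative, not the actual AIAG tables; only the two anchor points quoted above (50 percent or more maps to 10, one or fewer per 1.5 million maps to 1) come from the text, and the capability-study estimate assumes an approximately normal, stable process.

```python
# Sketch of the older, probability-based occurrence rating.
# Illustrative thresholds only; not the AIAG tables.
from scipy.stats import norm

def estimated_nonconforming_fraction(mean, sigma, lsl, usl):
    """Estimate the out-of-specification fraction from a capability study,
    assuming an approximately normal, stable process."""
    return norm.cdf(lsl, loc=mean, scale=sigma) + norm.sf(usl, loc=mean, scale=sigma)

def occurrence_rating(p_fail):
    """Map an estimated failure probability to a 1-10 occurrence rating."""
    if p_fail <= 1 / 1.5e6:              # one or fewer per 1.5 million opportunities
        return 1
    thresholds = [                       # (minimum probability, rating) -- illustrative
        (0.50, 10), (0.10, 9), (0.02, 8), (0.005, 7), (0.001, 6),
        (2e-4, 5), (5e-5, 4), (1e-5, 3),
    ]
    for p_min, rating in thresholds:
        if p_fail >= p_min:
            return rating
    return 2

p = estimated_nonconforming_fraction(mean=10.02, sigma=0.01, lsl=9.97, usl=10.03)
print(f"Estimated nonconforming fraction {p:.2e} -> occurrence rating {occurrence_rating(p)}")
```

The drawback the article identifies is visible here: without a capability study, the probability feeding this mapping is little more than a guess.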


By: Quality Digest

As usual with Quality Digest’s diverse audience, this year’s top stories covered a wide range of topics applicable to quality professionals: from hardware to software, from standards to risk management, from China trade to FDA regulations. It’s always fun to see what readers gravitate to, and this year was no different.

Below are five articles that garnered a lot of interest from our readers. As you can see, the topics are quite diverse.

Improve Risk Management and Quality Across the Value Chain by Increasing Visibility
by Kelly Kuchinski


By: Donald J. Wheeler

In the past two months we have looked at how three-sigma limits work with skewed data. This column finds the power functions for the probability limits of phase two charts with skewed probability models, and compares the trade-offs made by three-sigma limits with the trade-offs made by the probability limits.

Phase two charts

Ever since 1935, there have been two approaches to finding limits for process behavior charts. There is Walter Shewhart’s approach using fixed-width limits, and there is Egon Pearson’s fixed-coverage approach based on probability models. (For more on these two schools of thought, see “The Normality Myth,” Quality Digest, Sept. 19, 2019.) About the year 2000, some of my fellow statisticians tried to reconcile these two approaches by talking about “phase one and phase two control charts.”

Phase one charts use Shewhart’s fixed-width, three-sigma limits. These charts are used to help identify assignable causes of exceptional variation so that the process can be adjusted or fixed as needed. Then, under the assumption that once a process is fixed it will stay fixed, it is time for phase two.
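As a minimal illustration of the two schools of thought (a gamma model chosen arbitrarily here; this is not a reproduction of the column’s power-function calculations), the sketch below computes Shewhart-style fixed-width limits and Pearson-style fixed-coverage limits for the same skewed model and reports the coverage each actually delivers:

```python
# Fixed-width three-sigma limits vs. fixed-coverage probability limits
# for a skewed (gamma) probability model. Illustrative only; the limits here
# come from the model's own mean and sigma rather than from data.
from scipy.stats import gamma

model = gamma(2.0)                     # a moderately skewed model

mean, sigma = model.mean(), model.std()

# Shewhart-style fixed-width limits
three_sigma_lo, three_sigma_hi = mean - 3 * sigma, mean + 3 * sigma

# Pearson-style fixed-coverage (probability) limits: 0.135% in each tail
prob_lo, prob_hi = model.ppf(0.00135), model.ppf(0.99865)

# Coverage actually delivered by the three-sigma limits under this model
coverage = model.cdf(three_sigma_hi) - model.cdf(max(three_sigma_lo, 0.0))

print(f"Three-sigma limits: ({three_sigma_lo:.3f}, {three_sigma_hi:.3f}), coverage {coverage:.4%}")
print(f"Probability limits: ({prob_lo:.3f}, {prob_hi:.3f}), coverage 99.73% by construction")
```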


By: Jody Muelaner

In a general sense, capability is the ability to do something. Within manufacturing, capability is given a much more specific definition. It is an expression of the accuracy of a process or equipment, in proportion to the required accuracy.

This can be applied to production processes, in which case any random variation and bias in the process must be significantly smaller than the product tolerance. It can also be applied to measurements, where any uncertainties in the measurement must be significantly smaller than the product tolerance or process variation that is being measured.
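As a minimal sketch (hypothetical numbers and the standard Cp/Cpk formulas, which the article itself does not quote), capability can be expressed as the ratio of the tolerance to the process spread, with a penalty for bias:

```python
# Minimal capability calculation: compares process spread and bias to the tolerance.
# Hypothetical values; Cp ignores bias, Cpk penalizes an off-center process.

def capability_indices(mean, sigma, lsl, usl):
    cp = (usl - lsl) / (6 * sigma)                   # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)  # actual capability, including bias
    return cp, cpk

cp, cpk = capability_indices(mean=10.02, sigma=0.01, lsl=9.95, usl=10.05)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cp = 1.67, Cpk = 1.00 for these numbers
```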


By: Rohit Mathur

Whatever the process or type of data collected, all data display variation. This is also true in software development. Any measure or parameter of interest to our business will vary from time period to time period, e.g., the number of incidents per week or month, the time taken to resolve incidents, the number of tickets encountered in a production support environment per month, and defect density in code.

Understanding variation is about being able to describe the behavior of processes or systems over time. This variation can be stable, predictable, and routine, or unstable, unpredictable, and exceptional. Being able to distinguish between stable or common-cause variation, and unstable or special-cause variation, helps us to decide the type of action needed to improve the process. The control chart, developed by Walter Shewhart, is the tool that enables us to do so.
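A minimal sketch of that idea, using hypothetical weekly incident counts and the standard individuals-chart calculation (2.66 times the average moving range), shows how limits computed from the data separate routine variation from a value that warrants investigation:

```python
# Individuals (XmR) chart sketch for a software metric such as incidents per week.
# Data are hypothetical; 2.66 is the standard scaling factor for the average moving range.

incidents = [12, 9, 11, 10, 13, 11, 12, 10, 14, 11, 28, 12]   # weekly counts (made up)

moving_ranges = [abs(a - b) for a, b in zip(incidents[1:], incidents[:-1])]
average = sum(incidents) / len(incidents)
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper_limit = average + 2.66 * avg_mr
lower_limit = max(average - 2.66 * avg_mr, 0)

for week, x in enumerate(incidents, start=1):
    flag = "routine" if lower_limit <= x <= upper_limit else "special cause?"
    print(f"week {week:2d}: {x:3d}  {flag}")
print(f"limits: ({lower_limit:.1f}, {upper_limit:.1f}), average {average:.1f}")
```

In this made-up series, only the spike of 28 falls outside the computed limits; the rest is routine variation that calls for no special action.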


By: Scott A. Hindle

In everyday language, “in control” and “under control” are synonymous with “in specification.” Requirements have been met. Things are OK. No trouble.

“Out of control,” on the other hand, is synonymous with “out of specification.” Requirements have not been met. Things are not OK. Trouble.

Using this language, an obvious axiom would be: Take action when the process is out of control.

The everyday use of in and out of control is, however, unfortunate for control charts, the major tool of statistical process control (SPC). Why? Because in SPC these terms speak of processes as being stable or unstable. To characterize a process as stable or unstable, process limits, from process data, are needed. Specification limits are not needed.

Given the easy-to-understand basis for the action of meeting or not meeting requirements, coupled with the risk of confusion over the terms in control and out of control, why use control charts? If you are curious to see some of the benefits of doing so, read on. Two case studies are used.

Case one: Part thickness

During a regular review meeting in Plant 17, in- and out-of-specification data on the thickness of part 64 were reviewed.
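To make the distinction concrete, here is a minimal sketch with made-up thickness readings rather than the actual part 64 data: the specification limits come from the requirements, while the natural process limits are computed from the data themselves.

```python
# Voice of the customer vs. voice of the process, sketched with made-up thickness data.
# Specification limits are given; natural process limits are computed from the data
# using the individuals-chart formula (2.66 times the average moving range).

thickness = [4.02, 4.05, 3.98, 4.01, 4.04, 3.99, 4.03, 4.00, 4.06, 4.02]  # hypothetical
lsl, usl = 3.90, 4.10    # specification limits (hypothetical)

mrs = [abs(a - b) for a, b in zip(thickness[1:], thickness[:-1])]
center = sum(thickness) / len(thickness)
avg_mr = sum(mrs) / len(mrs)
lnpl, unpl = center - 2.66 * avg_mr, center + 2.66 * avg_mr   # natural process limits

print(f"Specification limits: ({lsl}, {usl})   <- define in/out of specification")
print(f"Natural process limits: ({lnpl:.3f}, {unpl:.3f})   <- define stable/unstable")
```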


By: Brian Lagas

“Why are our changeovers taking so long?”

If you’ve asked this question on the shop floor, more than likely you were met with blank stares from your employees. Open-ended questions like this are overwhelming, so employees try to find quick answers that don’t really address the problem. They don’t have a starting point to form an answer.

But what if you asked a question with a specific, achievable goal, such as:

“What steps can we take to reduce changeover time by 15 minutes?”

You’ve then provided your employees with a measurable goal in the form of a question. Your workers may feel empowered to answer with some hands-on suggestions for incremental changes, such as reducing setup steps or combining workstations. This in turn could not only reduce changeover time, but also significantly reduce wait times and inventories.

This approach is often described as kaizen, or “continuous improvement,” which serves as the backbone for lean manufacturing. Kaizen uses the plan, do, check, act (PDCA) problem-solving cycle to encourage manufacturers to use small ideas to solve big problems, such as costly, time-intensive changeovers.

By: Chad Kymal, Gregory F. Gruska

During the early 1980s, GM, Ford, and Chrysler established the Automotive Industry Action Group (AIAG), a not-for-profit organization with the mission “To improve its members’ competitiveness through a cooperative effort of North American vehicle manufacturers and their suppliers.” In the late 1980s, U.S. automotive suppliers, through the auspices of the American Society for Quality (ASQ), approached the VPs of purchasing for GM, Ford, and Chrysler and explained the burden of multiple standards that were being imposed on the supply base. Not only were there multiple OEM standards, there were hundreds of tier one standards as well.


By: Ryan E. Day

Current business conversation often focuses on data and big data. Data are the raw information from which statistics are created; statistics provide an interpretation and summary of those data, making it possible to analyze real-world business problems and measure key performance indicators that enable us to set quantifiable goals. Control charts and capability analysis are key tools in these endeavors.

Control charts

Developed in the 1920s by Walter A. Shewhart, control charts are used to monitor industrial or business processes over time. Control charts are invaluable for determining if a process is in a state of control. But what does that mean?
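As a rough sketch of an answer (hypothetical data; individuals-chart limits plus one common run rule, not necessarily the rules discussed in the article): a process is in a state of control when its current values stay consistent with limits computed from its own history.

```python
# Sketch: a process is "in a state of control" when its values behave consistently
# with limits computed from its own history. Hypothetical KPI data; individuals-chart
# limits plus one common run rule (eight successive points on one side of the average).

history = [51, 49, 52, 50, 48, 51, 50, 49, 52, 50, 51, 49]   # made-up baseline values

mrs = [abs(a - b) for a, b in zip(history[1:], history[:-1])]
center = sum(history) / len(history)
avg_mr = sum(mrs) / len(mrs)
ucl, lcl = center + 2.66 * avg_mr, center - 2.66 * avg_mr

def in_state_of_control(values):
    """Check new values against the historical limits and a simple run rule."""
    if any(v > ucl or v < lcl for v in values):
        return False                       # a point beyond the limits
    for i in range(len(values) - 7):
        window = values[i:i + 8]
        if all(v > center for v in window) or all(v < center for v in window):
            return False                   # eight in a row on one side of the center
    return True

print(in_state_of_control([50, 52, 49, 51, 50, 48, 51, 50]))   # True: consistent
print(in_state_of_control([53, 54, 53, 55, 54, 53, 54, 55]))   # False: sustained shift
```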


By: Nicola Olivetti

According to a report by PwC, industrial sectors worldwide plan to invest $900 billion in Industry 4.0 each year. Despite these growing technology investments, only a few technologies are sufficiently mature to drive measurable quality impact. Digital visual management (DVM) is one of them: the fundamental link that bridges lean culture and quality management in the digital age.

What is digital visual management?

The vast majority of the information we take in and communicate is visual. The human brain processes visual information significantly faster than text. When a relevant image is paired with audio material, two-thirds of people retain the information three days later.

Organizations dedicated to continuous improvement take advantage of this reality and use DVM to engage staff, provide insight into key information, and to ensure improvement projects are moving forward as scheduled.

DVM collaboration consists of a well-structured and interconnected series of stand-up meetings that take place regularly before a board, where the team posts (on paper or digitally) all the information it needs to steer and improve its activity. This is applicable to any team: from permanent shop-floor teams to top management, including (geographically spread) project teams.
