



© 2023 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute, Inc.
Published: 03/13/2007
How do we conclude that a nagging, recurring blip has become serious enough to justify taking action? How do we assess the significance of anomalies—isolated occurrences and scattered, seemingly unrelated events? Can we objectively distinguish those factors that suggest an escalating level of risk? When do we know that we’ve achieved a desired goal?
An organization’s ability to make decisions and take appropriate action relies on the integrity of the available data. That mass of data doesn’t spring magically from the quality manager, production supervisor, or finance director. It comes from myriad individuals throughout the organization. It’s founded on every keystroke, hash mark, check-off, plotted point, and counted bean. It resides in the records of inspections, corrections, remedial actions, deviation notices, customer concessions, corrective actions, decisions to take no action, communications, refresher training, equipment maintenance logs, internal audits, and so forth.
Subclause 8.4 of ISO 9001 relates to analysis of data. I’ve seen companies try to develop metrics to fulfill the requirements of this subclause simply to have something to show the registrar’s auditor during a surveillance visit. They go through a laborious, meaningless exercise to demonstrate conformance. What they don’t realize is that the raw materials needed to fulfill the requirements of 8.4 are already scattered throughout the facility.
The records we keep abound with the data that enable us to analyze and assess our status vis-à-vis the various objectives we have established. And yet, many of those records never see the light of day after a task is completed or a job has shipped. Rather than invent new metrics, organizations need to look at the information they already have and ask themselves, “What does this tell us?” For example, what is indicated if defects continue to be uncovered at final test? The customer isn’t complaining of bad product, because the problems always get caught in-house. You haven’t missed a delivery date in a while, because you keep authorizing overtime to deal with the reworks and keep shipments on schedule. In fact, you’re getting rave reviews from your customers. So, what’s the big deal?
Because you’re not looking at the data for process performance, you’re missing the obvious indicators that:
• You have process breakdowns, as bad product keeps making its way undetected all the way to final test
• Your plans and schedules are held hostage to random reworks of questionable priority
• Your bottom line is being needlessly whittled away by overtime and scrap expenditures
• You’re at heightened risk of doing exactly what you’re trying to avoid—shipping bad product
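That whittling of the bottom line is easy to put a number on. The figures below are purely hypothetical, but they sketch the kind of back-of-the-envelope tally the article is describing: add up the overtime and scrap that catching defects at final test is costing you each month.

```python
# Hypothetical figures -- a sketch of tallying the hidden monthly cost
# of relying on final test (and overtime) to catch defects in-house.
overtime_hours = 120       # monthly overtime hours spent on rework (assumed)
overtime_rate = 35.0       # fully loaded hourly rate in dollars (assumed)
scrap_cost = 4_800.00      # material scrapped this month in dollars (assumed)

hidden_cost = overtime_hours * overtime_rate + scrap_cost
print(f"Monthly cost of catching defects at final test: ${hidden_cost:,.2f}")
```

Even rough numbers like these turn “the big deal” from a feeling into a figure you can track month over month.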
The other thing to remember is that gathering and analyzing data don’t have to be complicated. Deciding what information to gather is more important than the method used to gather it. Many things that are easy to track simply go unrecorded. When conducting corrective action training, I often explain that if you decide not to initiate a corrective action (CA) request, you should keep some kind of record of what you did instead, or your justification for taking no action at all. That way, if there are multiple occurrences, there are data to analyze. In effect, you track and trend.
Again, this doesn’t need to be complicated. Hash marks may seem terribly unsophisticated, but sometimes they’re the most effective method you can use. Spreadsheets and simple charts are also easy ways to keep track of things. With consistency and vigilance, monitoring becomes second nature. This gives you the information you need when you need it, so you can react before a trend becomes a crisis.
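The “track and trend” idea above can be sketched in a few lines. This is only an illustration, assuming a hypothetical log of no-action decisions and an arbitrary recurrence threshold; the dates and causes are invented.

```python
from collections import Counter

# A minimal sketch of "track and trend": log each decision NOT to open a
# corrective-action request, then tally occurrences by cause.
# All entries below are hypothetical examples.
no_action_log = [
    ("2007-01-09", "label misprint"),
    ("2007-01-23", "label misprint"),
    ("2007-02-04", "loose fastener"),
    ("2007-02-17", "label misprint"),
]

# Tally by cause -- the electronic equivalent of hash marks on a clipboard.
tally = Counter(cause for _, cause in no_action_log)
for cause, count in tally.most_common():
    print(f"{cause}: {count}")

# An assumed threshold: once a cause recurs this many times,
# the data say it's time to open a corrective action after all.
THRESHOLD = 3
recurring = [cause for cause, n in tally.items() if n >= THRESHOLD]
print("Open corrective action for:", recurring)
```

Whether the tally lives in a script, a spreadsheet, or pencil marks on a form matters far less than the habit of recording each occurrence consistently.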