I travel too much. Invariably, though, it enriches me. I typically return with some unusual experience or a new story with which to regale my colleagues. One day, while sitting in another aluminum tube with wings, I struck up a conversation with the person next to me, an engaging, interesting woman who made me laugh. I enjoyed her company and asked her name. She smiled and replied, “Lawrence.” Looking at her, I paused just long enough that she must have sensed the wheels turning in my head. “My dad wanted a boy,” she offered. “My friends call me ‘Larry.’”
I’m not sure about you, but until I boarded that aircraft, I had never met a woman named “Larry.” It was oh-so-different, unique, and incongruous.
Likewise, until recently, I considered statistical process control (SPC) and acceptance sampling to be different from one another. Their differences seemed to indicate that they were from different worlds, with different methodologies and objectives.
While those differences are real, recent experience has shown me that these are actually complementary statistical tools. Although very different, SPC and acceptance sampling can be used in concert as a formidable quality improvement tool.
Take, for example, a company that receives large numbers of vendor-supplied “widgets.” The received widget lots are sampled to ensure that the product meets certain quality standards. Lots typically are received in shipments of 10,000. Because the manufacturer requires so many widgets, two vendors are used.
So let’s create an acceptance sampling plan. The keys to determining accept and reject numbers are the lot tolerance percent defective (LTPD), the acceptable quality level (AQL), and alpha and beta risks (known, respectively, as producer risk and consumer risk). Assume these are the company’s standards for receipt of widgets:
LTPD: 4.25 percent
AQL: 2.0 percent
Alpha: .05
Beta: .05
Given these values, and the fact that the inspector performs a visual inspection of the product, we apply an attribute acceptance sampling plan. Of the 10,000 widgets, only 624 must be randomly sampled and inspected. Under this plan, the acceptance number is 18 and the rejection number is 19. This means that if the inspector finds 18 or fewer defective parts among the 624 inspected, the widget shipment may be accepted; 19 or more, and the shipment is rejected. A part with even a single defect is classified as "defective."
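Plans like this are normally pulled from published tables or software, but as a sanity check, the stated risks can be verified directly from the binomial distribution. A minimal sketch in Python:

```python
from math import comb

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

N, C = 624, 18            # sample size and acceptance number from the plan
AQL, LTPD = 0.02, 0.0425

# Producer's risk (alpha): the chance a lot running at the AQL is rejected.
alpha = 1 - binom_cdf(C, N, AQL)
# Consumer's risk (beta): the chance a lot running at the LTPD is accepted.
beta = binom_cdf(C, N, LTPD)

print(f"producer's risk = {alpha:.3f}, consumer's risk = {beta:.3f}")
```

Both computed risks land close to the .05 targets, which is exactly what the plan promises.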
The ability to inspect a small number of items out of an enormous shipment makes sense economically and can be of great benefit to organizations that need a feasible means of reducing inspection costs.
For years, though, I have been reluctant to recommend acceptance sampling. My main objection isn't how it's used or its inspection-based focus. Instead, it's that once an accept or reject decision has been made, nothing is done with the data. That seems wasteful. Because the accept/reject outcome is so important, the actual defect codes and related data are typically ignored. There's information available to us in the inspection data, if we use it.
Given the above criticism, I suggest making two small amendments to our widget example:
- The inspector counts the number of defectives and keeps track of the defective codes and their frequency.
- The inspector keeps a detailed log identifying:
- The lot numbers inspected
- The vendor from whom the lot was received
- The number of defectives present in each lot
- The defective codes themselves
- The time and date when the inspection was performed
If the two amendments above are followed rigorously, data will accumulate in time order. Therefore, control charts can be created for defective frequencies. If defective codes are retained, then Pareto charts could be created. In other words, the data can be treated as quality information that would be useful for reducing defectives.
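A minimal sketch of one such log record, with hypothetical field and code names (they're illustrative, not from any particular system), might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record structure mirroring the log fields listed above.
@dataclass
class InspectionRecord:
    lot_number: str
    vendor: str
    defectives: int                       # defective parts found in the sample
    defect_codes: dict[str, int] = field(default_factory=dict)  # code -> count
    inspected_at: datetime = field(default_factory=datetime.now)

log: list[InspectionRecord] = []
log.append(InspectionRecord("LOT-0001", "Vendor A", 7,
                            {"scratches": 3, "flashing": 4}))
```

Because each record is time-stamped and carries the per-code counts, both the control charts and the Pareto charts fall straight out of the same list.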
Lots of 10,000 items are received weekly. Visual inspections have been performed and data-logging has been meticulously followed. An example of the resulting analysis is shown in figure 1.
Figure 1: P-charts of defectives for each of two vendors.
Take a close look at the two P-charts in figure 1. The product (widgets) is the same on each chart, but the process (vendor) is different. No points fall outside the control limits on either chart, indicating that each vendor's proportion defective is consistent over time. Vendor A's average proportion defective is .0107, while Vendor B's is .0164. Is that difference significant? Possibly, but even if it is, what would we do about it?
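Both the control limits and the "is it significant?" question can be checked with a few lines of arithmetic. The sketch below assumes a hypothetical 12 lots per vendor, a figure chosen only to be consistent with the averages and the defective totals (80 and 123) quoted in this article:

```python
from math import sqrt, erf

N = 624  # parts inspected per lot

def p_chart_limits(p_bar, n):
    """Three-sigma control limits for a p-chart with constant subgroup size n."""
    sigma = sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma

lcl_a, ucl_a = p_chart_limits(0.0107, N)   # Vendor A's average
lcl_b, ucl_b = p_chart_limits(0.0164, N)   # Vendor B's average

# Two-proportion z-test on the vendors' overall defective counts,
# assuming a hypothetical 12 lots each (so 12 * 624 parts per vendor).
n_a = n_b = 12 * N
x_a, x_b = 80, 123                  # total defectives per vendor
p_pool = (x_a + x_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (x_b / n_b - x_a / n_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal p-value
```

With the assumed lot counts, the difference does come out statistically significant, which only sharpens the point: significance alone doesn't tell us what to fix.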
You might call the vendors and report their defective levels to them. But what help would that be? Not much, because the P-charts show the production of defectives to be consistent through time. We need to identify ways to eliminate the common causes of defectives in each vendor's process.
To do so, a Pareto chart is created using all defective codes. Figure 2 reveals all defectives found during the inspections.
Figure 2: Pareto of total defectives for both vendors, all lots, batch codes, etc.
The largest bars on the Pareto chart indicate that scratches and flashing are the most common defectives, followed by paint issues (paint contamination and paint color inconsistencies). This information by itself could be helpful. However, we may want to know, specifically, the defectives that each vendor is generating. To do so, figure 1's defectives will need to be categorized.
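Building a Pareto like figure 2's from the logged codes is a simple tally-and-sort. The counts below are hypothetical, chosen only to be consistent with the totals quoted in this article (203 defectives overall):

```python
from collections import Counter

# Hypothetical defect-code tallies pooled across both vendors and all lots.
codes = Counter({"scratches": 61, "flashing": 54, "paint contamination": 33,
                 "paint color inconsistent": 26, "stopper not installed": 21,
                 "other": 8})

total = sum(codes.values())
cumulative = 0.0
for code, count in codes.most_common():        # Pareto order: largest bar first
    cumulative += 100 * count / total
    print(f"{code:26s} {count:3d}  cum {cumulative:5.1f}%")
```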
Two such categorizations are found in figure 3. The left Pareto's two yellow bars show the total defectives for each vendor. Below each yellow bar are blue bars showing the defective codes and their frequencies. The blue bars sum to each yellow bar's total, creating, in effect, a Pareto within a Pareto. Vendor B's total of 123 defectives is more than 50 percent greater than Vendor A's.
Figure 3: Pareto charts categorized by vendor and lot number.
Focus on the blue bars and note the differences between the vendors. Vendor B's most common defective is "scratches," whereas vendor A's is "flashing." The third most common defective for vendor B is "stopper not installed," representing more than 16 percent of all of vendor B's defectives. By comparison, vendor A's widgets had only one "stopper not installed" among their 80 defectives. Clearly, the two vendors generate different types of defectives at different rates.
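The Pareto-within-a-Pareto amounts to grouping the same tallies by vendor first. Again, the counts are hypothetical, chosen only to match the figures quoted above (80 vs. 123 totals, each vendor's leading defective, and vendor B's "stopper not installed" share of more than 16 percent):

```python
from collections import Counter

# Hypothetical per-vendor tallies consistent with the article's figures.
by_vendor = {
    "Vendor A": Counter({"flashing": 30, "scratches": 21,
                         "paint contamination": 15,
                         "paint color inconsistent": 10,
                         "stopper not installed": 1, "other": 3}),
    "Vendor B": Counter({"scratches": 40, "flashing": 24,
                         "paint contamination": 18,
                         "paint color inconsistent": 16,
                         "stopper not installed": 20, "other": 5}),
}

for vendor, codes in sorted(by_vendor.items(),
                            key=lambda kv: -sum(kv[1].values())):
    total = sum(codes.values())
    print(f"{vendor}: {total} defectives")      # the "yellow bar"
    for code, count in codes.most_common():     # the "blue bars" beneath it
        print(f"  {code:26s} {count:3d} ({100 * count / total:4.1f}%)")
```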
The Pareto chart on the right in figure 3 displays the identical defective codes but categorizes them by lot number. This may give the vendor additional information to further refine its search for ways to eliminate defectives. It's important to note that none of the lots inspected in this example failed the acceptance sampling requirements. All lots passed. And yet, exceedingly useful information can be extracted from the data.
Summary
So what’s the point? By integrating SPC and acceptance sampling, critical information concerning process improvement can be uncovered. Doing so can help identify process changes that could lead to defect reduction, improved process performance, and lower costs. Acceptance sampling procedures should not stop with an “accept” or “reject” decision. Instead, acceptance-sampling data can be viewed as a source of powerful information, allowing quality professionals to better understand the source and frequency of defectives. As a separate benefit, imagine the collaborative work that could be performed between vendor and customer. By using these seemingly different statistical disciplines in concert, information sharing between vendors and customers could help improve quality throughout the entire supply chain. And what’s wrong with that? Nothing that I can see. But it might seem different, unique, even incongruous. Kind of like meeting a woman named “Larry.”