Glenn S. Wolfgang


Using Flow Quality Management With Inspection Sampling

Flow QM balances inspection costs against quality improvement during planning and mid-production

Published: Wednesday, August 1, 2018 - 12:03

Flow quality management (Flow QM) is a logistical alternative to handling product in lots for the purpose of assessing and mitigating defects. It features a streamlined, automated acceptance sampling methodology, is built on empirical metrics, and facilitates timely, meaningful performance monitoring and management.

Flow QM also expedites output. It offers more precise and immediate, intentional control over the balance of inspection costs and quality improvement (essentially producer and consumer risks). Further, it is enabled by technological advances in computing speed, data storage, and software applications.

This methodology is an adaptation of traditional acceptance sampling with rectification. Rather than deciding acceptance of lots or subsets of product, it identifies and rectifies defects among inspected units individually and adjusts the selection of future inspections as needed to satisfy quality goals. Each product unit flows directly through production and inspection, if designated, to distribution. Its progress is not delayed or complicated by awaiting other within-lot inspection outcomes. In this processing flow, quality acceptance is implemented by a simple automated calculation for each product unit based on a few tallies accumulated over prior units.

Flow QM employs a computerized system to track product units by serial number, accumulate tallies of inspections and defects found, compute the current need for unit inspection, produce monitoring dashboards or reports, and enable quick performance adjustments using configuration parameters. If existing production systems are computerized, these functions may be simple enhancements, obviating other burdensome or troublesome subroutines.
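A minimal Python sketch of the tally-keeping this paragraph describes might look as follows; the class and member names are illustrative assumptions, not part of any actual Flow QM system, and the outgoing-error calculation anticipates the empirical metrics defined later in the article:

```python
from dataclasses import dataclass

@dataclass
class StreamTally:
    """Running tallies for one operator stream (names are illustrative)."""
    total_units: int = 0   # N: all units completed in the stream
    inspected: int = 0     # n: units given a second, independent inspection
    errors: int = 0        # e: defects found among inspected units

    def record_unit(self, inspected: bool, error_found: bool = False) -> None:
        """Update tallies as each unit completes production (and inspection)."""
        self.total_units += 1
        if inspected:
            self.inspected += 1
            if error_found:
                self.errors += 1

    @property
    def error_out_rate(self) -> float:
        """Current empirical outgoing error: e/n - e/N."""
        if self.inspected == 0:
            return 0.0
        return self.errors / self.inspected - self.errors / self.total_units
```

A dashboard or per-unit acceptance test would simply read these tallies at each new unit assignment.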

Conditions for Flow QM

As described here, Flow QM was developed for and best fits operations that transform or augment unique items. Each production unit remains available in original input form for subsequent independent production attempts and adjudication of correct results. Because inspection finds correct results, the methodology is called “corrective rectification,” and fixable defects may also be called errors. Defective units are made good rather than discarded after inspection. Correcting errors is essential when each unit has unique, irreplaceable value, without which overall quality would diminish.

Another condition relevant to Flow QM is the error structure, or pattern of defect occurrence. Ideally, we anticipate that defects cluster systematically by identifiable conditions but also vary randomly within those clusters. Degrading trends or sudden drops in quality level are not typical. Tasks based on human judgment or perception, rather than precise mechanical measurements, typically show this defect pattern. Work may be conducted in clerical or automated operations, but the final arbiter of correctness is human adjudication.

Examples of such tasks are prominent in survey data processing: converting handwritten questionnaire responses into electronic form, or categorically coding raw data to aid summary or analysis. Data science might also use Flow QM methodology when individual data points must be transformed and their quality checked.

Corrective rectification of a human performance task is the simplest setting to illustrate Flow QM. It is the specific context developed below. Flow QM metrics are not suitable if inspections destroy or require discarding both good and defective units. In less extreme conditions, when quality rectification involves replacing only defects, Flow QM procedures may be adapted. Metrics may be adjusted to reflect varieties of operational implementation or simply viewed as approximate.

The quality management orientation

Quality is managed most beneficially when there is a focus on process improvement. In Introduction to Statistical Quality Control (Wiley, 2005), Douglas Montgomery asserts that “the underlying philosophy is that acceptance sampling is not a substitute for adequate process monitoring and control....” A fitting quality methodology fosters effective and efficient detection of defects so they may be diagnosed and mitigated. Inspections with rectification, by themselves, do improve the quality of output, but deliberately using inspection insights to review, evaluate, and improve the production process generates quality in units not yet worked. How to conduct and sustain that process review is another worthy topic in quality management. This article focuses on Flow QM methodology, which can feed process review efficiently.

The Flow QM orientation aims to optimize intentional, immediate, and precise control of quality outcomes in balance with inspection costs, and combines several innovative features to do so. Metrics of cost and quality are empirical, computed from observable tallies updated with each new unit in production. Derivations mathematically quantify how inspection costs relate to quality. Automated acceptance tests compare the current empirical measure of output quality precisely to a meaningfully understood, target-output quality criterion, rather than comparing small numbers loosely associated with a quality limit much greater than the value of interest.

Acceptance testing consequences are also automated. If output quality is too poor, future inspections are immediately increased. Flow methods contribute to metrics’ statistical precision as tallies accumulate over more units than would be found in a lot. Managers and stakeholders choose the quality performance criterion with awareness of associated cost, deliberately managing consumer and producer risks. A computer system helps them monitor current performance and, if necessary, allows mid-operation adjustments of the performance parameters.

Handling lots vs. flow

Acceptance sampling in lots (also called batches) entails extra handling and delay by holding back product units in groups until each unit has completed the current processing step:
1. Assignment: Units are accumulated into batch lots.
2. Production: The batch production operator works each unit in the lot.
3. Inspection: Another operator independently works a sample of units in the lot.
4. Adjudication: Algorithms and an adjudicator identify errors and decide lot acceptance.
5. Remainder inspection: If there are too many errors in the lot, all units are inspected.
6. Remainder adjudication: Correcting all errors completes rectification of the lot.
7. Release: Units in the lot are released together when all steps are completed.

Flow processing takes a unit through the first four essential steps individually without holding it back in batch groups. Step implementations differ in other ways:
1. Assignment: Units are assigned individually, in a flow, to distinct operator streams rather than lots, accumulating tally values as long as the operator continues working.
2. Production: At the time of the first production, an algorithm determines if the unit needs to be inspected.
3. Inspection: Inspections are done by assigning the unit blindly to another operator stream, where it is handled just like another production unit.
4. Adjudication: Algorithms and an adjudicator determine correctness for both operators’ results, gaining inspection tallies for the second operator as well as the first.
5. Release: Rectified product is distributed.

Streaming considerations

Traditionally, product units are assigned to lots or batches. Flow QM assigns units to streams that are homogeneous in error patterns. Each stream operates independently with its own set of tallies and computations. Homogeneous streaming optimizes the efficiency and effectiveness of inspections. Streams with few errors will not engender extended inspections, while the extra work will be focused on dense defect clusters, where corrections and diagnosis for process improvement are most needed and fruitful.

Montgomery made this point in recommending that units be homogeneous within lots and varied between lots. Homogeneity is accomplished by using characteristics associated with differing defect levels as the basis for assigning units. Human operators perform at varied error levels, which can be automatically measured and mitigated in separate streams. Data capture of numeric, alphabetic, or checkbox responses is best streamed separately. Automated triage on relatively difficult input characteristics, such as length or clarity, could separate cases into their own stream.

Although a lot has a fixed size throughout an operation, streams continue to grow in overall size, number of cases inspected, and number of errors found among inspections. When a stream grows beyond the size a lot would be, metric computations and acceptance testing become more statistically precise, just as any estimate becomes more precise with increased sample size.
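As one way to picture homogeneous streaming, the sketch below keys streams on the kinds of characteristics the article mentions; the function and parameter names are hypothetical, and real keying choices are operation-specific:

```python
def stream_key(operator_id: str, field_type: str, difficulty: str) -> tuple:
    """Key characteristics for homogeneous streaming (illustrative choices).

    Operator identity, data-capture field type, and triaged difficulty
    follow the article's examples of conditions associated with
    differing defect levels.
    """
    return (operator_id, field_type, difficulty)

# Units with the same key share one stream and one set of tallies:
streams: dict = {}
key = stream_key("op-07", "checkbox", "hard")
tallies = streams.setdefault(key, {"N": 0, "n": 0, "e": 0})
tallies["N"] += 1
```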

Empirical metrics

Essentially, acceptance sampling inspects some product units to test if more inspections or mitigation actions are needed to ensure adequate quality in units overall. Implementing that test requires quantifying performance metrics and choosing a test criterion. The details of traditional acceptance sampling methodologies used with lots, such as average outgoing quality limit (AOQL), acceptance quality limit (AQL), or continuous sampling plans (CSP), are fully described in other sources; see Montgomery, cited above, as well as Eugene Grant and Richard Leavenworth’s Statistical Quality Control (McGraw-Hill, 2004), and Edward Schilling and Dean Neubauer’s Acceptance Sampling in Quality Control (Chapman and Hall/CRC, 2009). Flow QM often parallels yet contrasts with AOQL methodologies.

In a lot or stream of product, there are three observable data points on which calculations are based. They are the count of all units in the group (N), the number inspected (n), and the number of defects or errors (e) found in the inspections. Empirical estimates of important quality and cost measures derive from just these three count values. The estimated proportion of initial production error before corrections due to inspection, called “error-in rate” (EIR), is simply e/n. The outgoing error, called “error-out rate” (EOR), subtracts the overall rate of error-corrected units, or e/N, from EIR. The proportion of overall units inspected, or n/N, is the work rate (WorkR), an index of the cost of inspection.

Multiply WorkR by a monetary cost per inspection to get a budget figure. Using the explicit definition for EOR and substituting EIR and WorkR definitions, we get one form of the relationship between quality and cost related to inspections:

EOR = e/n – e/N = (e/n)(1 – n/N) = (EIR)(1 – WorkR)

In another form that quality-cost relationship is even more meaningful:

WorkR = 1 – EOR/EIR

As inspection rates (WorkR) increase, corrections lower both EOR and EOR/EIR. Thus, improvement in quality (1 – EOR/EIR) increases with the rate of inspections. Even independent of the enhanced benefits of process improvement or homogeneous streaming designs, increased inspections with corrective rectification do yield greater quality. These empirical metrics of quality and cost are meaningful and useful in operation design, monitoring, and management.
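A quick numeric check of these identities, using hypothetical tallies (the values of N, n, and e below are made up for illustration, not data from the article):

```python
N, n, e = 1000, 200, 10   # hypothetical tallies: units, inspections, errors

EIR = e / n               # error-in rate:  10/200  = 0.05
EOR = e / n - e / N       # error-out rate: 0.05 - 0.01 = 0.04
WorkR = n / N             # inspection work rate: 200/1000 = 0.20

# Both forms of the quality-cost relationship agree:
assert abs(EOR - EIR * (1 - WorkR)) < 1e-12
assert abs(WorkR - (1 - EOR / EIR)) < 1e-12
```

Here inspecting 20 percent of units cuts the outgoing error from 5 percent to 4 percent before any process improvement is considered.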

Traditional acceptance sampling methodologies are based on probability-based projections parallel to these empirical metrics:
• The AOQL methodology uses hypergeometric probability distributions, where p represents an unknown initial production error and is not estimated from observed data like the corresponding EIR.
• The average outgoing quality (AOQ) value derives from parameters specifying lot size, initial inspection sample size, a selected p, and an arbitrary acceptance criterion (c); EOR is its empirical counterpart.
• The average fraction of inspection (AFI) projects the rate of all inspections done, given those specific parameters; it parallels WorkR.

Derivation of the quality-cost relationship expressed in projected metric terms is available upon request. For corrective rectification, the formula is precise. Grant and Leavenworth presented it as an approximation, likely assuming replacement:

AOQ = p (1 – AFI)

Empirical performance criteria

Quality acceptance entails some form of testing based on performance criteria. Traditional quality control tests accept or reject a lot at a time, often with replacement rectification to make the batch acceptable for use. With corrective rectification, a batch may be accepted as is or undergo expensive inspection of all units to ensure overall quality. The criterion rule for lot acceptance is typically inspection errors less than or equal to c, where c is the maximum number of errors associated with a specified error limit.

For example, AOQL methodology’s error limit and corresponding c value are commonly chosen using tables or algorithms relating N, n, c, and AOQL values, where AOQL is the maximum AOQ over all relevant p values. The logic is unassailable and fitting when data from past production are not available and reliable, perhaps due to the difficulty of maintaining adequate records over many operators and much time. However, actual outgoing error can be much lower than that limit. Lay readers and stakeholders may be misled when interpreting quality performance if they gloss over the distinction between an outgoing error estimate and its limit.

In contrast, Flow QM methodology and advanced computing systems can hold and use the whole operation’s prior data to test quality directly against a meaningful, relevant criterion. The current EOR could be compared to a target EOR criterion. That criterion value may be negotiated by stakeholders as an optimal balance between EOR and WorkR, given the EIR anticipated for the operation. The quality-cost relationship is instrumental in those deliberations. If the electronic systems provide immediate, up-to-date dashboards or reports monitoring empirical tallies and metrics, operation managers can see if expected EIR levels are off and consider adjustments to the target EOR configurable parameter even while production progresses.

Instead of testing acceptance once per batch or lot, Flow QM tests acceptance by a simple computation for each assignment of a unit to a stream. Tallies of N, n, and e from prior completed production in that stream provide a current EOR, which is compared to a target EOR. If the current error rate is too high, the newly assigned unit is tagged for inspection.
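The per-unit acceptance computation described here can be sketched in a few lines of Python; the function name and the rule for a stream with no inspection history yet are assumptions for illustration:

```python
def needs_inspection(N: int, n: int, e: int, target_eor: float) -> bool:
    """Tag a newly assigned unit for inspection when the stream's current
    empirical error-out rate (e/n - e/N) exceeds the target criterion.

    N, n, e are tallies of completed prior units in the stream.
    A stream with no inspections yet is inspected by default (assumed rule).
    """
    if n == 0:
        return True
    current_eor = e / n - e / N
    return current_eor > target_eor
```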

Even if errors are within the acceptance criterion, the algorithm checks if there is another reason to inspect the new unit. Random selections ensure that inspections represent units overall. Managers who suspect problems arising in mid-production have the discretion to force, using a configurable parameter, full (i.e., 100 percent) inspection and process analysis of all units for as long as appropriate. After startup of a new production procedure or operator, an initial baseline period of full inspection is often an efficient assurance of quality. Note that whenever a product unit is assigned to a second operator for any of the reasons above, tallies accrue for both operators, yielding, in effect, two operator inspections for the price of adjudicating one unit, which lowers the needed rate of random selections.
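Putting the reasons for inspection together, a sketch of the full selection logic might read as follows; the parameter names (random_rate, force_full) and the 2-percent default are illustrative assumptions, not values from the article:

```python
import random

def select_for_inspection(N: int, n: int, e: int, target_eor: float,
                          random_rate: float = 0.02,
                          force_full: bool = False,
                          rng=random.random) -> bool:
    """Decide whether a newly assigned unit gets a second, independent pass.

    Checks mirror the reasons in the text: forced 100-percent inspection
    (a configurable parameter), the EOR acceptance test against the target
    criterion, then a random selection to keep inspections representative
    of units overall.
    """
    if force_full:
        return True
    if n == 0 or (e / n - e / N) > target_eor:
        return True
    return rng() < random_rate
```

In an operation, the random rate could be lowered because every inspection yields tallies for both operators involved.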

Complications

Just as implementing acceptance sampling with lots has complications, such as when design features like lot and sample sizes are not strictly controlled or acceptance test procedures are not properly conducted, Flow QM has complications of its own. New units often begin processing before e is determined for all prior inspected units. Computations of the new unit’s inspection status should be based on N, n, and e tallies for completed units, leaving a lag in keeping tallies current. Algorithms can adjust for the lag, but the ongoing challenge is to minimize it logistically.

Another complication is adapting Flow QM metrics to reflect a replacement rectification operation’s specific implementation. For example, does N include or exclude units replaced?

Summary

Flow quality management methods expedite production, inform immediate monitoring of meaningful metrics, foster continuous process improvement, and enable a deliberate balance of inspection costs against quality improvement during planning and mid-production. Relative to lot-based methodologies, Flow QM is logistically efficient because it guides each production unit through minimal production and quality steps without delays dependent on other units’ progress, and it gathers inspection results for both first and second operators. Current computing capabilities and systems make all these features possible.

Flow QM employs empirical rather than probability-projected metrics of quality and cost. Acceptance tests compare a target performance criterion to an empirical estimate for final outgoing error. Stakeholders’ interpretation of quality is thereby more precise and meaningful.

The benefits of Flow QM are especially clear under ideal conditions, that is, when defects are correctable errors in human-like performance, so all results are usable and none destroyed, since defects are fixed. When units must be replaced, Flow QM may be the framework for suitable adaptations, tailored to how replacement is implemented, adjusting formulae or logistics to yield better operational outcomes. Homogeneous streaming of error sources further amplifies process improvement arising from persistent diagnosis and mitigation of defects.

Comments and questions are welcome. This overview glosses over much relevant detail, including some mathematical derivations, operational implementations, computer simulation research subroutines, and quality-cost design negotiation worksheets. Contact: wolfgangqc@gmail.com.


About The Author

Glenn S. Wolfgang

Glenn Wolfgang has a PhD in Educational Psychology and an MS in Statistics from Pennsylvania State University. He spent more than 30 years in U.S. government statistical agencies, primarily the U.S. Census Bureau, evaluating accuracy and assuring quality of data collection and processing. This career culminated in a focus on ever more practical, effective, and efficient quality control and management procedures, attending comprehensively to mathematical theory, electronic systems development, and ongoing management of the product. Now retired, he remains interested in sharing the insights of his experience.