Scott A. Hindle

Six Sigma

Why Use Control Charts?

Can they help to do a better job?

Published: Wednesday, June 26, 2019 - 12:03

In everyday language, “in control” and “under control” are synonymous with “in specification.” Requirements have been met. Things are OK. No trouble.

“Out of control,” on the other hand, is synonymous with “out of specification.” Requirements have not been met. Things are not OK. Trouble.

Using this language, an obvious axiom would be: Take action when the process is out of control.

The everyday use of in and out of control is, however, unfortunate for control charts, the major tool of statistical process control (SPC). Why? Because in SPC these terms speak of processes as being stable or unstable. To characterize a process as stable or unstable, process limits, from process data, are needed. Specification limits are not needed.

Given the easy-to-understand basis for action of meeting or not meeting requirements, coupled with the risk of confusion over the terms “in control” and “out of control,” why use control charts? If you are curious to see some of the benefits of doing so, read on. Two case studies are used.

Case one: Part thickness

During a regular review meeting in Plant 17, in- and out-of-specification data on the thickness of part 64 were reviewed.


Thickness of part 64
Number of measurements: 108
In specification: 107
Out of specification: 1
Status: Not OK

The measure was red because of an out-of-specification outcome during production run 9; in everyday language, it was out of control. The agreed action was to find the root cause for this problem, followed by the appropriate corrective action.

Production records indicated a problem with a machine setup before run 9 commenced, and this explanation was given during the next meeting as the root cause for the out-of-specification result. Because this setup problem was not expected to recur, the case was closed.

Case one assumes that the machine setup problem—i.e., something different in the process—caused the out-of-specification point. Did this understanding lead to the best course of action? Before we explore this question, let’s look at case two.

Case two: Time to process

During the biweekly production review meeting, in- and out-of-specification data on the time to process part 76 were presented.


Time to process part 76
Number of measurements: 200
In specification: 200
Out of specification: 0
Status: OK

The measure was green because everything was in specification: In everyday language, it was in control. No questions were asked, and production continued without further use of the data obtained during 10 productions. Did this in/out binary view of the world lead to the best course of action? Before we explore this question, we must revisit case one.

Case one revisited

Figure 1 plots the 108 thickness measurements in a histogram against the specifications. The problematic point, located above the upper specification limit, was sample 100 from production run 9.


Figure 1: Histogram of part thickness data during nine production runs. The blue lines, LSL and USL, are the lower and upper specification limits.

A control chart of individual values, or process behavior chart, of the data is shown in figure 2.

 
Figure 2: Control chart of individual values for the thickness data

Interpretation of figure 2’s control chart allows us to characterize the process as stable. This means there are no unnatural patterns in the data that would signal the occurrence of process changes, i.e., instability. (Postscript 1 below gives an overview of how to interpret a control chart.)

The two red lines at 78.4 and 84.1 in figure 2 are natural process limits, or control limits, which define the “voice of the process.” This voice tells us to expect process outcomes between 78.4 and 84.1:
• During the past nine productions
• In future productions so long as the process continues without evidence of change, i.e., stable process behavior
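The article does not show how natural process limits such as 78.4 and 84.1 are calculated. As a minimal sketch, assuming the standard calculation for a chart of individual values (an XmR chart) with the usual scaling factor of 2.66, the limits come from the data alone; the thickness values below are made up for illustration:

```python
# A minimal sketch, not the author's code: natural process limits for a chart
# of individual values, using the standard XmR scaling factor 2.66 (= 3/1.128).
# The thickness values below are hypothetical, for illustration only.

thickness = [81.3, 80.9, 81.5, 82.0, 80.7, 81.8, 81.1, 81.6, 80.8, 81.4]

# Central line: the average of the individual values
x_bar = sum(thickness) / len(thickness)

# Moving ranges: absolute differences between successive values
moving_ranges = [abs(b - a) for a, b in zip(thickness, thickness[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Natural process limits (control limits) for the individuals chart
unpl = x_bar + 2.66 * mr_bar
lnpl = x_bar - 2.66 * mr_bar

print(f"Central line: {x_bar:.2f}")
print(f"Natural process limits: {lnpl:.2f} to {unpl:.2f}")
```

Note that the specification limits appear nowhere in this calculation: The natural process limits are the voice of the process, not the voice of the customer.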

If we listen to the voice of the process, we learn that the out-of-specification value—in this case, sample 100’s value of 83.7—is not, on its own, a signal of change in the process. Why? Because this single measurement falls within the natural process limits. Consequently, the control chart analysis does not support the team’s conclusion that the out-of-specification value is accounted for by the problematic machine setup before commencing run 9.

Note: This understanding does not mean that a problematic setup should be ignored and left uncorrected; simply that the problem with the machine setup is not, on its own, a valid explanation of sample 100 being out of specification.

Out of specification is also synonymous with trouble. To plan how to get out of trouble, we start by comparing the voice of the process with the voice of the customer, i.e., the specifications (see figure 3).

 
Figure 3: Representation of the voice of the process and the voice of the customer. (LNPL and UNPL are the lower and upper natural process limits.)

With an upper specification limit of 83.5, and with process outcomes as high as 84.1 expected to occur in routine production, figure 3 shows graphically that the current process is not capable of meeting specifications.

Figure 3 also provides the insight to define the options to make the process capable:

Option 1: Lower the process average to ensure the control chart’s upper natural process limit is below the upper specification of 83.5.

Option 2: Reduce the level of variation in the process by changing the process in some fundamental way (e.g., use new materials, operating procedures, or equipment, or measure samples in duplicate or triplicate rather than just once).

Option 3: Do both, i.e., shift the average to a better level and reduce the level of process variation. (See Postscript 2 for a discussion of both improvement strategies.)

In concluding case one, how has the control chart shown itself to be useful?
• By defining the voice of the process, it told us what to expect of the part thickness process.
• By listening to the voice of the process, we learned to not invest time looking for a root cause for the single out-of-specification point.
• By comparing the voice of the process with the voice of the customer, the options to effectively and permanently eliminate out-of-specification trouble from the process were made clear.

Case two revisited

A histogram of the 200 measurements in relation to the lower specification is shown in figure 4. (There is no upper specification.) The histogram suggests all is OK because the measurements are located at a safe-looking distance above the lower specification.


Figure 4: Histogram of the time to process data, including the lower specification limit of 700 seconds

Is all OK, meaning no need to ask further questions? To answer this question, we start with an average and range control chart (see figure 5). Each point on the x-axis uses data from one batch. Five measurements are routinely taken across each batch, and the variation within batches is judged appropriate to study the variation between batches.


Figure 5: Average and range chart for the time to process part 76
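The article does not show how the limits on an average and range chart are computed. A minimal sketch follows, assuming the standard chart constants for subgroups of size five (A2 = 0.577, D3 = 0, D4 = 2.114); the batch data are made up for illustration:

```python
# A minimal sketch, assuming the standard control chart constants for
# subgroups of size five: A2 = 0.577, D3 = 0, D4 = 2.114. The batches below
# are hypothetical processing times in seconds, five measurements per batch.

batches = [
    [770, 782, 765, 791, 776],
    [788, 760, 779, 795, 772],
    [775, 798, 769, 784, 790],
]

A2, D3, D4 = 0.577, 0.0, 2.114

averages = [sum(b) / len(b) for b in batches]   # one point per batch (upper chart)
ranges = [max(b) - min(b) for b in batches]     # one point per batch (lower chart)

grand_average = sum(averages) / len(averages)   # central line of the average chart
r_bar = sum(ranges) / len(ranges)               # central line of the range chart

# Process limits for the average chart and the range chart
ucl_x, lcl_x = grand_average + A2 * r_bar, grand_average - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

print(f"Average chart limits: {lcl_x:.1f} to {ucl_x:.1f} (central line {grand_average:.1f})")
print(f"Range chart limits:   {lcl_r:.1f} to {ucl_r:.1f} (central line {r_bar:.1f})")
```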

With several signals of process change in figure 5, this process must be characterized as unstable over time. (Again, refer to Postscript 1 for some guidance on control chart interpretation.)

Because the process in figure 5 is unstable, we learn that the current process has the potential to operate with a narrower level of variation. (If all the points on the upper average chart fell within the process limits, that is, if the process were stable, it would operate with a lower level of variation.)

Reducing variation results in process improvement. How does the control chart help us to 1) realize this improvement; and 2) express this improvement opportunity as a cost-saving figure?

1. Improvement
We start by making sense of the signals of instability in figure 5: When are the changes in the process—the instability—thought to have occurred?

Batches 2 to 12 on figure 5’s average (upper) chart are all below the central line (which is the average of all 200 measurements). Batch 13’s average jumps up above the central line, and for the rest of the record, the average time to process part 76 looks to have increased. This interpretation is illustrated in figure 6 and tested in figure 7’s average and range chart, with the process limits based on the data from batches 1 to 12 only.


Figure 6: Figure 5’s average chart with annotations

 
Figure 7: Average and range chart of the time to process part 76, with the process limits based on data from batches 1 to 12 only

Figure 7 supports the theory that a reasonable degree of stability was present during the production of batches 1 to 12 because there are no signals in the data. Stronger still, the postulated increase in processing time from batch 13 onward is absolutely clear because 27 of the last 28 points are above the central line on figure 7’s average chart.

This understanding tells us to ask what happened around the time batches 12 and 13 were produced. The team investigated:
• Friday, April 18: Last production of the week
• Monday, April 21 to Wednesday, April 23: Periodic maintenance of the line
• Thursday, April 24: Restart production

The investigation identified an error from the periodic maintenance: A bias in the calibration of a temperature-sensor loop meant that the actual temperature in a heating step was several degrees below the target temperature. This explained the sustained increase in processing time from April 24 to April 30. The loop was recalibrated and the error eliminated.

The steps involved in using the control chart in this way are shown in figure 8.


Figure 8: Detecting and responding to instability in a process

Success with the approach in figure 8 leads to an improved process with leverage for optimization. As discussed below, this leverage can be translated into cost savings.

2. Cost-saving opportunity
For a baseline figure, the plant manager equated a 10-second reduction in time to process part 76 with savings of approximately $14,500 per month. To turn this into an estimated cost-saving figure, two averages are needed:
• Historical average: The best guess of what would happen if nothing in the process changed
• New average: The optimal average to be achieved through improvement

Using the year-to-date data, the historical average was 789.8 seconds.

To get to an estimate of a new, optimal average, we start by estimating the process’s stable standard deviation. Figure 7’s range control chart—the lower chart—contains this information:
• The average range for batches 1 to 12 is 44.58 (the green line in figure 7’s range chart).
• To convert this average range into a standard deviation, we divide by the bias correction factor d2, which is 2.326 for subgroups of size five, giving an estimated standard deviation of 44.58 / 2.326 ≈ 19.2 seconds.

Next, the plant manager would accept a process capability Cpk of 1.33 in the optimization of this operation. For a stable process, this level of capability provides one unit of standard deviation of “safety space” vs. the nearest specification limit, as shown in figure 9.


Figure 9: Visual representation of Cpk = 1.33 with only a lower specification limit in place; the histogram is the data from batches 1 to 12.

As per figure 9, the optimal process average is estimated as the lower specification limit plus four estimated standard deviations (three for the natural spread of a stable process, plus one of safety space):

Optimal average = LSL + (4 × estimated standard deviation)

The estimated time saved by operating the process at the optimal average is:

Time saved = historical average – optimal average ≈ 13.84 seconds

A time saving of 13.84 seconds results in an annual cost saving in the ballpark of $250,000:

(13.84 / 10) × $14,500 per month × 12 months ≈ $241,000 per year

This cost-saving opportunity represents the estimated payback for getting the most from this process, meaning a stable and capable process running on target at its optimal average.
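As a minimal sketch of the arithmetic behind this estimate, using only figures quoted in the article (the 13.84-second saving is taken as given):

```python
# A minimal sketch of the cost-saving arithmetic, using the figures quoted in
# the article; the 13.84-second time saving is taken as given.

r_bar = 44.58            # average range, batches 1 to 12 (seconds)
d2 = 2.326               # bias correction factor for subgroups of size five
sigma_hat = r_bar / d2   # estimated stable standard deviation, about 19.2 s

time_saved_s = 13.84     # estimated reduction vs. the historical average of 789.8 s
saving_per_10s = 14_500  # dollars saved per month for each 10-second reduction

monthly_saving = (time_saved_s / 10) * saving_per_10s
annual_saving = 12 * monthly_saving

print(f"Estimated standard deviation: {sigma_hat:.1f} s")
print(f"Annual cost-saving estimate:  ${annual_saving:,.0f}")  # roughly $241,000
```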

To fully seize this opportunity, control charts are needed because they:
• Provide the insight and clues that are needed to turn an unstable process into a stable one (done by integrating figure 8 into the routine work of production)
• Are the only way to sustain a stable process over time 

The initial question raised in case two was, “Did this in/out binary view of the world lead to the best course of action?” The answer is no. The control chart pointed to a better course of action:
• It identified the potential for the process to do better because the process was unstable, as revealed in figure 5.
• It provided a method to realize this potential, meaning how to make the process stable, as shown in figure 8.
• It facilitated a repackaging of this potential into a near $250,000 cost-saving opportunity by moving the process to a new, optimal average and comparing this with the historical average.

Control chart usage: Four fundamental questions

With regard to control charts:
Question 1: Did a change in the process occur?

Questions two, three, and four follow if question one is answered yes:
Question 2: When did the change in the process occur?
Question 3: What caused the change in the process?
Question 4: How can the cause be economically controlled in production to eliminate (or at least reduce) its effect?

Without a signal of change on a control chart, it is premature to seek a “root cause” for one or more undesired values (e.g., out of specification). Why? Because the data themselves provide no evidence that a root cause is there to be found. The control chart can therefore put the brakes on the losses that come from this course of action. In case one, the team’s belief that the cause of the out-of-specification occurrence had been identified and fixed was not supported by the data.

With a signal of change on a control chart—an unstable process—the green light is given to further interpret the chart. Figure 6 shows an example. Only when a process displays unstable behavior do questions two, three, and four come into play. As shown in case two, the expected payback from the invested effort in tackling questions two, three, and four can often be packaged into a sound dollar figure for effective communication and decision making.

Trouble comes in two flavors

Donald J. Wheeler’s “Two Definitions of Trouble” is based on the combination of 1) being in specification or out of specification, and 2) being stable or unstable, as per a control chart analysis. This results in four possible states for any process, as shown visually in figure 10. (A predictable process is a stable process, and an unpredictable process is an unstable process.)

 
Figure 10: The four possibilities for any process. (Reproduced from Donald J. Wheeler’s article, “Two Definitions of Trouble.”)

Product trouble
Figure 10’s horizontal axis is well-known: Nonconforming means some product is out of specification, which means trouble. This trouble provides a well-established basis for action.

Yet, knowing how to get out of trouble is key because different paths can be taken:

No. 4—State of Chaos: If the process is unstable, with some of the product nonconforming, then responding to process changes detected on the control chart (as per figure 8) is often the path to success and the Ideal State.

No. 2—Threshold State: If the process is stable, with some of the product nonconforming, like case one, taking a different path toward the Ideal State is recommended; this requires a fuller study of the process to learn how to shrink variation and/or relocate the average to a better level.

Process trouble
Figure 10’s vertical axis, which could be called the stability axis, is less well-known.

A stable process is performing up to its potential, i.e., doing the best it currently can. Moreover, with a stable process, the voice of the process defines what the process is expected to deliver today, tomorrow, next week....

An unstable process, on the other hand, is performing below its potential, i.e., the process could do better. Case two’s time-to-process data were used to estimate the payback from getting this process to perform up to its potential.

The most effective path to improvement is guided by the stability axis. As per the two case studies:
• A control chart of case one’s thickness data showed the futility of seeking a root cause for the out-of-specification point; improvement would come from a fuller study of the process.
• A control chart of case two’s time-to-process data showed the benefit of seeking a root cause for the detected process change (between batches 12 and 13); taking action on the identified root cause was the catalyst for improvement.

Wrap-up: Why use control charts?

We started by asking, “Why use control charts?” Control charts are the “voice” of the process. They can be central to process management by helping to ask the right questions. They can be used to give maximal assurance that specifications will be met. They can play a key role in process improvement. And, they can be used to put a sound dollar figure to improvement opportunities.

Might you get more out of your processes with the use of control charts?

Finally, to add value to this article, it would be great if you, the readers, posted in the comments section why you use control charts.

Postscript 1: Control chart interpretation—stable or unstable?

Interpreting a control chart starts with the characterization of process behavior: stable or unstable? In traditional control chart terminology, this refers to the process being in- or out of control, meaning statistical control.

As per the Statistical Quality Control Handbook by the Western Electric Co. (second edition, 1958), a stable process possesses the following three characteristics (defined as a natural pattern on page 24):
• Most of the points are near the solid centerline
• A few of the points spread out and approach the control limits
• None of the points (or at least only a very rare and occasional point) exceeds the control limits 

These three characteristics are shown visually in figure 32 of the SQC Handbook (reproduced below as figure 11).


Figure 11: The three characteristics of a natural pattern—a stable process—as per Western Electric

Control chart interpretation starts out by assuming the process is stable. Evidence of instability is looked for, with instability detected by unnatural patterns on the control chart. From page 24 of the SQC Handbook, “...unnatural patterns tend to fluctuate too widely, or else they fail to balance themselves around the centerline.” And, “unnatural patterns always involve the absence of one or more of the three characteristics of a natural pattern.”

The use of detection rules results in the control chart becoming an operational definition of a stable process. Herein, two detection rules are applied to find the unnatural patterns, which are the signals of instability in the process:
• Detection rule 1: A point that falls beyond a process limit (control limit)
• Detection rule 2: Nine or more consecutive points on either side of the central line

Rule 1 is the first, original detection rule. Many other detection rules have been proposed over the years. For a detailed discussion, see “When Should We Use Extra Detection Rules?” The two rules given above combine simplicity with effectiveness in the author’s opinion.
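As an illustration of how these two rules act as an operational definition of a stable process, the sketch below (my code, not from the article or the SQC Handbook) flags the points where each rule fires, given a central line and process limits; the example values are made up:

```python
# A minimal sketch applying the two detection rules above to a series of
# plotted values, given the central line and process limits from a control chart.

def find_signals(values, central_line, lower_limit, upper_limit, run_length=9):
    """Return the indices (0-based) where detection rules 1 and 2 fire."""
    # Rule 1: a point beyond a process limit
    rule1 = [i for i, x in enumerate(values) if x > upper_limit or x < lower_limit]

    # Rule 2: run_length or more consecutive points on one side of the central line
    rule2 = []
    run, side = 0, 0  # side is +1 above the central line, -1 below, 0 on the line
    for i, x in enumerate(values):
        current = 1 if x > central_line else -1 if x < central_line else 0
        run = run + 1 if current != 0 and current == side else (1 if current != 0 else 0)
        side = current
        if run >= run_length:
            rule2.append(i)
    return rule1, rule2

# Hypothetical values checked against limits like those in figure 2
values = [80.2, 81.5, 79.9, 82.8, 81.1, 80.6, 84.5, 81.0, 80.4, 79.8]
r1, r2 = find_signals(values, central_line=81.2, lower_limit=78.4, upper_limit=84.1)
print("Rule 1 signals at points:", [i + 1 for i in r1])  # point 7: 84.5 is above 84.1
print("Rule 2 signals at points:", [i + 1 for i in r2])  # none in this short series
```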

Figure 2 is an example of a process characterized as stable; there is no unnatural pattern in the data.

Figure 5 is an example of a process characterized as unstable. In figure 5 and figure 7, the points signaled with a 1 and a 2 are those corresponding to detection rules 1 and 2 given above.

Postscript 2: Improvements to the part-thickness process

As seen in figure 1, the specification limits are 74.5 and 83.5, and the histogram is located closer to the upper specification. A standard, and important, practice is to also define a process target value and check if the process is on target or not. (The keen observer will have noticed the absence of a target value in figure 1.)

With regard to defining an effective strategy to improve process capability, two situations are discussed. As above, the target minimum process capability is a Cpk of 1.33. (Figure 9 illustrates a Cpk of 1.33.)

Situation 1: Process target is the center of the specifications
As discussed above, the part thickness process was not capable because process outcomes higher than the upper specification limit were expected. This was illustrated in figure 3 and is shown in the upper histogram in figure 12.

The center of the specifications is (74.5 + 83.5) / 2 = 79.0.
To hit the target of 79.0, the process average must be reduced by 2.26 units (81.26 – 79.0 = 2.26).

Operating the process in the center of the specifications would deliver a capable process, as illustrated in figure 12.


Figure 12: How to center the process in the middle of the specifications and deliver a capable process

Situation 2: Process target is the current average
The only way to achieve a capable process at the current average of 81.26 is to reduce process variation. As shown in figure 9, for a Cpk of 1.33, the distance from the process average to the nearest specification is four standard deviations. An improved process of Cpk 1.33 at the current average is illustrated in figure 13.

 
Figure 13: Illustration of the improvement needed to reduce variation at the current average and achieve a Cpk of 1.33

The formula for Cpk is:

Cpk = minimum of (USL – average) / (3 × standard deviation) and (average – LSL) / (3 × standard deviation)

Because the process has been operating closer to the upper specification limit, the formula becomes:

Cpk = (USL – average) / (3 × standard deviation)
In relation to the upper specification limit, a Cpk of 1.33 is as shown in figure 13’s lower histogram. The improvement envisaged in figure 13 would not be trivial. It would require a near twofold decrease in the process’s standard deviation, from 0.94 to 0.56. As a general rule, it is often easier, even much easier, to relocate the process average than to reduce process standard deviation.
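As a quick check on the figures in this postscript, here is a small sketch (not from the article) of the Cpk arithmetic using the stated averages, standard deviations, and specification limits:

```python
# A minimal sketch of the Cpk arithmetic, using the averages, standard
# deviations, and specification limits given in Postscript 2.

def cpk(average: float, sigma: float, lsl: float, usl: float) -> float:
    """Distance from the average to the nearest specification, in units of 3 sigma."""
    return min(usl - average, average - lsl) / (3 * sigma)

lsl, usl = 74.5, 83.5  # specification limits for part thickness

print(f"Current process (average 81.26, sigma 0.94): Cpk = {cpk(81.26, 0.94, lsl, usl):.2f}")  # ~0.79
print(f"Reduced variation (sigma 0.56, figure 13):   Cpk = {cpk(81.26, 0.56, lsl, usl):.2f}")  # ~1.33
print(f"Centered at 79.0 (sigma 0.94, figure 12):    Cpk = {cpk(79.0, 0.94, lsl, usl):.2f}")   # ~1.60
```

Under these figures, simply centering the process (situation 1) already clears the Cpk 1.33 target, which is consistent with the closing point that relocating the average is often the easier route.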


About The Author


Scott A. Hindle

Scott Hindle supports R&D and factory operations on process capability studies for new products and processes, statistical process control (SPC) for use in routine production, and the use of online measurement devices as a part of both SPC and engineering process control.

Comments

Why Use Control Charts

I found this article especially useful. The Florida Department of Transportation is working with our crushed stone industry to implement statistical process charts. The industry uses a common commercial software package with the capabilities, but SPC is not a familiar tool.

We are teaming up to develop a system that is logical and works for both FDOT and industry. A common case we have come across and are trying to assess is what happens when products are within spec, but the Natural Process Limits are outside the specifications, that is, Cpk < 1. Scott's article makes it clear that this process is not stable or in control, but it is a difficult concept to grasp when we have to codify this situation in a regulatory document. We are considering allowing producers to take note of this situation and take corrective action. If so, FDOT would not take action against the producer. In our current system, we calculate the absolute difference between the average and the nearest spec and divide the result by the standard deviation of the last running 30 test results. This results in a Z score, which is required to be greater than 1.645 to ensure 95% compliance. In the situation described, the Z value would show that the data are out of compliance and the producer would be prohibited from shipping to FDOT projects. This method assumes all data are normally distributed. SPC does not, and more closely matches our scenarios.

The article sheds light on a potential solution to a real problem, and offers a system that might be practical and mutually beneficial.

Excellent material and detailed explanation of two cases

Hi Scott,

Thank you for your article. It is a great publication on two cases that brought the opportunity to discuss the difference between the voice of the customer and the voice of the process. Unfortunately, there are too many cases where there is no action until the trouble is already on our plate. As you mention, there is a missed opportunity in not implementing control charts and using them to translate this information into dollars.

European pharma is working in a similar area using a multivariate approach (https://www.edqm.eu/en/news/ph-eur-commission-consults-stakeholders-general-chapter-multivariate-statistical-process).

I look forward to your next article!

Marçal

  

Why Use Control Charts?

Another great article, Scott.  It is kind of sad there is still so much confusion about statistical control versus within specifications.  Hopefully this article will be a helpful remedy.  The Shewhart control charts put us at a crossroads and enable us to answer a very broad question:  What problems will be best solved by workers (unstable) close to the process and what problems will be charged to management (stable)?  Without this knowledge, management in general believes faults are on the factory floor and improvements are for someone else, not me.  In The New Economics, Dr. Deming estimated 94% of problems belong to management and only 6% are special. 

The big gains are in management awakening:

"Causes of nonrandom variation are called assignable causes or special causes. And those are usually chargeable to particular, local conditions that the workers can recognize and eliminate. And then you have left random variation that defines the system, and from then on only the management can improve it. That was Shewhart's great contribution." - W. Edwards Deming

Taken from:

THE EARLY INFLUENCE OF W. EDWARDS DEMING ON THE DEVELOPMENT OF STATISTICAL QUALITY CONTROL IN THE UNITED STATES AND IN JAPAN, Nancy R. Mann

link:

https://apps.dtic.mil/dtic/tr/fulltext/u2/p001590.pdf

Why Use Control Charts?

Thanks for writing the article, Scott.

It is unusual to see an article that is set in the real world and not the theoretical world of SPC. Your approach is very practical. If only we could get more employers to see that SPC is meant to be used in real time for defect prevention, rather than after the fact for process improvement, the world would be a better place. Fortunately, my employer firmly supports the former.

Control versus Predictable

Nice article Scott!  You have articulated what is needed in production operations of every type.

I too ran up against the thought that "in control" meant within specifications.  That is when I started using predictable and unpredictable to describe the behavior of a process.  It has less emotional baggage than in control and out of control.

The problems in production are like fog or smog, and process behavior charts (control charts) can spotlight what needs attention and where we need to work.  Moreover, without the framework of an approach such as control charts it is difficult to get the most out of any process.  The problems of production are greater than those of research, where we spend a lot of time on developing techniques for learning.  Moreover, the problems of production change with time, and this is why we need a time-oriented technique such as the control chart.

Thanks for this article.

Why use control charts

Hello Scott,

Very, very detailed article; shall need time to go through fully.

I only wish to express my surprise at two points of your article (as far as I have gone).

Sentence 3 i.e “Out of control,” on the other hand, is synonymous with “out of specification.”

AND

Whether figure 2, a plot of individual values of a parameter, ‘should’ and ‘can’ be called a “CONTROL CHART”.

With due regards; I venture to disagree with both.

Kind regards

Virendra gupta

PS I shall read the article fully and try to give more comments.

Virendra, See my comments

Virendra, see my comments below on why I believe there is a common-language usage of out of control in relation to out of specification.

Understanding and reducing variation

A nice article by Scott.

Little's Law guarantees that if we reduce the variation in the flow of materials through a process, then either volume throughput will rise, or work-in-progress will fall, or some combination of both. This is a law.

Scott's article shows how using control charts helps us to understand and reduce variation.  

Better quality, increased throughput, reduced cycle time ... we can have it all if we know how to reduce variation.

John McConnell

Hi John, I've had your books

Hi John,

I've had your books for decades.

I've added a variation slider in the Lean Simulation in Q-Skills3D, together with a graphical output.  I've been rather surprised by the results.  Variation has much less impact than I would have thought.  The simulation variation slider varies the processing speeds randomly at either station.  It would be interesting to see a mathematical analysis.  Small changes in the mean at either station have an obvious effect.  The sim also shows Little's Law graphically.  It takes a long time to settle down.

Tony

SPC

Dear Scott,

Thanks for such a detailed, though simple, way of presenting today's most important topic of manufacturing processes: Why use control charts!

I think adjusting the average to the centre of the specification doesn't demand much effort on the process lines, but many times the reality is that we try to be too safe!

I am sure this article will help many of the factory guys to produce compliant product with a stable process operating at full potential.

Sukumaran 

 

 

Well done Scott. Nice to see

Well done Scott.

Nice to see histograms without the all too common and meaningless normal distributions drawn over them.

Your Fig 6 also shows how the WE Rule 4 (8 points below the CL) is superfluous, as Dr Wheeler describes here: https://www.qualitydigest.com/inside/statistics-column/when-should-we-use-extra-detection-rules-100917.html "Rule one has been the key to process improvement for more than 90 years. Use it and learn about the dominant assignable causes affecting your process. In practice, rule one will usually generate all the signals that most people can realistically investigate."

Keep it simple. KISS. There's plenty of free software to draw control charts.

"In control" does not mean you are meeting specifications

Good article except for your opening statement: "In everyday language, “in control” and “under control” are synonymous with 'in specification.' Requirements have been met."

Control charts tell us if the process is stable and predictable, not if requirements have been met. I've seen a lot of processes that  are stable but not within specifications.

Once you have a stable process (determined by control charts), you can then use capability analysis to determine if the process is meeting specifications. Cp and Cpk can tell you if you're meeting specifications or not, as shown in this article.

Stable isn't always capable.

Two different meanings of in and out of control

Thanks for the comment.  Fully agree that stable doesn't mean capable: In SPC, stable (in control) or unstable (out of control) tells us nothing about whether specs have been met or not.

The control chart meaning of in and out of control is different to the colloquial, everyday usage. I picked up on the potential obstacle of terminology some years back (e.g. https://www.spcpress.com/pdf/DJW129.pdf). My experience is that in control and out of control as terminology risks confusion, which can only reduce the expectation of successful control chart usage.

As an example, in training classes it typically works out that after breaks the sessions restart a few minutes late. I noticed it to be a pretty stable situation, repeating itself year after year and in different locations. I plan for it.

I learnt to make an analogy of re-start time as a “process”, where the goal is stated as an on-time start. This goal isn't met. When asked, a sizeable number of people tend to say this "process" is out of control (not in control). Yet, it is pretty stable, only far from capable. In Figure 10 this “process” would be in the Threshold State.

My emphasis at the start of the article is how in control and out of control are used in everyday, colloquial language. In everyday language, I believe "in control" and "under control" are synonymous with specs/targets having been met.

One set of terms and two meanings is a likely source of confusion. My point in the paper is that the colloquial use of in control / out of control has a meaning that is different to the “SPC meaning”.

Controlling understanding

In discussions with engineers, I've been careful to make a distinction between engineering and statistical control. Which, of course, uses the word "statistical", but one thing at a time.

I have a couple of canned charts showing how engineering control (being in spec, controlled by a feedback loop, etc.) is different from being predictable. To make my case, we have lots of examples of weeding out half of a lot through inspection, and of high variation despite tight control of the wrong parameter for achieving quality. But it takes time and an engineer or a manager willing to stand with me in an elevator and talk about something besides the weather!

In these discussions, I've been forcing a connection between statistical control and predictability. Maybe it's time to rename statistical control to ... predictably controlled? Just predictable?

Thanks, Scott.

Nice article. Working with a couple of colleagues responsible for incoming quality of subassemblies. This says what I've been trying to say much more cogently and interestingly.

We work for what's essentially been a huge job shop. Production articles have rarely been repeatedly produced, so we've been relying on engineering control (meeting spec) to assure we get what we want. Yes, yes, it would have been easy and helpful to use SPC and similar, but what we've done has mostly worked.

We've recently begun working with larger-volume manufacturers, working toward regular purchases involving significant numbers of the same parts, subassemblies, finished assemblies and even services. We've taken our usual approach of certifying an original design, then inspecting the bejeezus out of what comes to us and comparing that to specs. Many of the managers, engineers and others I've been working with are recognizing that we could get even more value if we make sure the manufacturing and operation processes going forward are the same as what got certified in the first place - that the process is predictable, in statistical control.

I'm from heavy industry. I know this will work. It's unfamiliar to my present employer and our suppliers, though. Time will tell whether it catches on in any meaningful way, but so far, the simplicity and head-smackingly visually obvious methods of SPC are piquing interest.

Keep writing. btw - really appreciate the relevant links to other articles.