
Myth or Mythunderstanding

Implications of the economic design of control charts

John Flaig
Thu, 05/12/2011 - 06:00

In the article “Four Control Chart Myths from Foolish Experts,” by Davis Balestracci (Quality Digest Daily, March 30, 2011), the following comments were made regarding what Balestracci considers statistical process control (SPC) myths:

“Myth No. 4: Three standard deviation limits are too conservative.
Reality: Walter A. Shewhart, the originator of the control chart, deliberately chose three standard deviation limits. He wanted limits wide enough so that people wouldn’t waste time interpreting noise as signals (a Type I error). He also wanted limits narrow enough to detect an important signal that people shouldn't miss (avoiding a Type II error). In years of practice he found, empirically, that three standard deviation limits provided a satisfactory balance between these two mistakes. My experience has borne this out as well.

 …
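For readers who want to see the numbers behind Shewhart's balance, a short sketch follows. It assumes a normally distributed in-control statistic purely for illustration (Shewhart's own argument was empirical, not tied to normality); the figures it computes are the textbook false-alarm rates, not anything from the article:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def false_alarm_rate(k: float) -> float:
    """P(point outside +/- k sigma) for an in-control normal process
    (the per-point Type I error rate)."""
    return 2.0 * (1.0 - norm_cdf(k))

alpha_3 = false_alarm_rate(3.0)   # about 0.0027
arl_3 = 1.0 / alpha_3             # average run length between false alarms, ~370 points
alpha_2 = false_alarm_rate(2.0)   # about 0.0455 -- 2-sigma limits alarm ~17x more often

print(f"3-sigma: alpha={alpha_3:.4f}, ARL={arl_3:.0f}")
print(f"2-sigma: alpha={alpha_2:.4f}, ARL={1.0 / alpha_2:.0f}")
```

With 3-sigma limits, a stable process produces roughly one false signal every 370 plotted points; tightening to 2-sigma limits multiplies the noise-chasing by more than an order of magnitude, which is the Type I side of the balance the excerpt describes.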


Comments

Submitted by Steven Ouellette on Thu, 05/12/2011 - 12:36

Well done!

John, nice article.  People who understand a little bit (enough to be dangerous) of what Shewhart was saying end up making all sorts of silly statements justified because "Shewhart's limits are economical, not statistical!"  He and Deming well understood statistics as the foundation of a decision-making heuristic for the real world, not because inferential statistics are "real" but because it is a process to follow to consistently make economical decisions.  This harkens back to Deming's often-misunderstood enumerative vs. analytic dichotomy.  You managed to explain Shewhart's reasoning in a way that is clear and cogent.  Well done!


Submitted by Tom Pyzdek on Wed, 12/05/2018 - 10:42

Thomas

Thanks for presenting a scientific approach. It seems that some of my colleagues are approaching the subject of process control and improvement dogmatically rather than rationally.

Thomas Pyzdek

www.pyzdekinstitute.com


Submitted by Davis Balestracci on Fri, 05/13/2011 - 07:56

"It depends"

To my three distinguished colleagues John, Tom, and Steve,

I hereby anoint all three of you into the rarefied "1-2%" I alluded to in past articles: those who need and, as you've all proven, have advanced statistical knowledge.  I have NO doubt the three of you could blow me out of the water theory-wise (although I do have an M.S. in statistics).

John's article rightfully applies to many, many manufacturing situations...or even some research situations.  There truly is a need for this knowledge...in specific situations where an expert is needed.  This is the type of stuff I did the first 10 years of my career as an industrial statistician....and very few people bothered to listen.  I'm even wondering how many QD readers will listen.

My QD articles mainly address the plague of "statistical training of the masses" caused by Six Sigma for applications to business processes and service industries -- and I have no doubt that all three of you would blow those audiences out of the water, too.  For what purpose?

As Deming wrote to Gerry Hahn (very distinguished applied statistician) in a personal correspondence shown to me in 1984:

"Sorry about your misunderstanding...TOTAL!  When will statisticians wake up?"

But, hey...if I need a Fisher information calculated, you'll be the first ones I'll call.

Who's "right"?  We're BOTH right--"It depends!"

 


Submitted by Steven Ouellette on Fri, 05/13/2011 - 14:39

In reply to "It depends" by Davis Balestracci

I have a different position

(Sorry about the bullets - all the paragraphs get stuck together making an unreadable wall of text.  This way at least it is an unreadable FORMATTED wall of text...)

  • Davis, I guess I see it differently.  I am often faced with people who think they have been trained to use data to make decisions, but actually have only had "Black Box Black Belt" training ("put the number in the software, out pops another number - what's the big deal?").  The problem is that real life is sometimes complicated and not amenable to breezy simplification.  By not understanding what one is doing, one can make really, really bad and expensive decisions, like John described in the article.
  • I don't despair when I meet these people, I see it as an opportunity to improve, yes, the world.  These are people who make or enable real decisions with real consequences.  Doing things the right way is really not all that much more difficult than doing it in the "approximately right except when it is drastically wrong" way.  I think we should ask for MORE from our expert problem-solvers (call the Black Belts or SWAT teams or whatever), not dumb things down so they can do less.
  • This very morning I was teaching a class full of working Black Belts who, to their great credit, were confident enough to take my entry-level Intro to Applied Statistics class and are now in the first of three forty-hour experimental design classes.  What they all said in the first class was that while they had been working as Black Belts, they wanted to take the class because they were smart enough to know that there was something missing, and that "something" had not been in their BB training.  It turns out that they are now learning a lot of really practical and useful information that they are already putting into practice to save the company money, time, and liability exposure.  Heck, my Black Belt training covers most of it in four weeks - it's not like I even need more time to teach them the right way than the "Black Box Black Belts" get.
  • I am an engineer by training, not a statistician, and what I teach (and write about) is practical, not theoretical.  (Or rather not more theoretical than it needs to be to get the job done right.)  The way I learned to do things is one degree of separation from Deming (my mentor and colleague worked with him at Ford), so I am confident I am on solid ground there.  My experience is that whether you are working for a multi-billion dollar company or for a start-up, doing things the right way is the more economical (and less dangerous) way of doing things.  I have just seen too many situations where people skip steps, use approximations and assumptions, and end up paying a high price later for the supposed shortcut.
  • I also get asked, "Can you teach us DOE in 3 days?"  Well, no, it is impossible, at least in a way that would allow you to make business decisions.  I am not willing to incur the karma of training someone, slinking away with the cash, and having my erstwhile students think they know something about data while making terrible decisions.

Submitted by Davis Balestracci on Sat, 05/14/2011 - 07:39

In reply to I have a different position by Steven Ouellette

I guess we agree to disagree

  • One step removed from Deming, eh?  Ever heard of Funnel Rule 4?
  • You certainly like to train, don't you?  As a friend of mine likes to say, "Would you rather your kids had sex 'education' or sex 'training'?"   People don't need statistics, they need to know how to solve their problems. 
  • Ever notice how much "training" Deming did in his seminars?  Virtually none.  It was all about a deep, deep understanding of variation...but mostly about THINKING differently and asking better questions.
  • And then there is the most nontrivial "human" variation in application of any training...which NO statistical technique is prepared to handle.
  • From your articles, I have absolutely NO doubt that you are a good trainer and get great reviews -- like me in my past life as an industrial statistician working with engineers.  So what?  Ultimately, I wasn't very effective -- and the very talented internal statistical consulting group of which I was part (3M) went bust -- as did those of DuPont, Kodak, GE (top of mind), and others.
  • YOU CAN'T MAKE UP what I saw people subsequently do with the material!  As Deming said:

--"You cannot hear what you do not understand."

--"Information is not knowledge.  Let's not confuse the two."

--"We know what we told him but we don't know what he heard."

[Taken from "The Best of Deming" by Ron McCoy]


Submitted by Steven Ouellette on Wed, 05/18/2011 - 12:46

In reply to I guess we agree to disagree by Davis Balestracci

Davis' Experiences Are Different Than Mine

  • Davis, I'm not sure what assumptions you are making about what I do and how I do it, but I would encourage you to consider that those assumptions might not be entirely consistent with what I actually do.  This is not really the place to toot my own horn, but let me point out that what I aim for, and I think achieve, is a practical approach to solving problems and improving processes, and I do that through, yes, training and on-site consulting.
  • (Would you rather your kids had problem-solving education or training?  See I'd vote for training there since it implies some sort of hands-on competency.  Either way, paralleling that to sex education is...special.)
  • I figure that I can help them make clients' process better (giving them fish) and I can help them learn how to make their process better on their own going forward (teaching them to fish) to whatever level they want to learn and that makes sense for that organization.  The analogy we use is that the basic tools of quality are sufficient to capture the low-hanging fruit in a process naive to them (and I have seen even just a flow chart save $10,000 a month, an SOP $14,000 a month) but that once you have captured that, you will need a more and more advanced understanding of what your data are telling you in order to extract actionable knowledge from them (get the higher, sweeter fruit).  And in the real world and in the presence of variation, that means one of the things you need to understand and use are increasingly advanced statistics.  This does not denigrate any of the other tools (in fact some of the most fun I have consulting is working with hourly employees building front-line process management systems) it merely recognizes that to get the "higher sweeter fruit" you need another level of sophistication.  When I helped a client to solve a problem that was endemic to their product for 35 years - and found three different ways to do it - it required of the teams a deep knowledge of what is actually going on 1) in the process (SOP, control charts, creating trust in the workers, etc.), 2) measurement system analysis (repeated measures ANOVA, control theory), and 3) fractional factorials, and all the foundational stats supporting these three things.  I wish all problems could be solved by the seven basic tools, and that control charts are black boxes from which new knowledge comes - it would make my job a lot easier, but the fact is that they can't and they aren't.
  • The purpose of Deming's seminars was not to train - the purpose of the seminars was to drive the recognition of the need for change.  Make no mistake though, the training came after that.  I would not serve my clients well if I were to go in, show them how my amazing stuff could make them more profit, and then say, "OK have fun with that!" and drop it in their lap without actually giving them the tools to do anything to make it happen.  Deming didn't do that either, which is why Deming brought my mentor and colleague into Ford to train, and why Ford manufacturing endowed a chair specifically for him (the only time it did so).   Ever heard of Deming's formula for culture change? D x V x F > R
  • I can't comment on why you and your colleagues were not effective at 3M - I have no idea as there are many variables.  I can tell you that my colleagues and I were extremely effective in a number of manufacturing, transactional and service industries, and that our ROI was over 14,000% as calculated by one of our clients over about five years of consulting.  There are many potential reasons behind why you have not observed advanced statistical tools being effective, but just because that is what you have experienced does not mean that the tools are ineffective, only that you have seen them to be so.  Management ignorance, statistical misapplications, and assumptions can all lead to failures, but that is not to say failure is the only option.  It DOES take FAR more than teaching a stats class to optimize the probability of success, of course.  If that is all that is done, you can bet on spectacular and expensive failure sooner or later - the literature is crowded with examples like that.  In fact, as I recall, I have mentioned this in an article or two for QD.
  • I am not an academician; I am an engineer at heart.  What I teach and how I consult is, I think I can say without fear of contradiction, effective, practical, saves my clients and students money that in some cases they didn't even think they were losing, and it pays off.  Advanced statistics are a tool in the toolbox, certainly not the only one.  But incorrectly applied statistics as a decision-making heuristic can, and eventually will, cause very expensive poor decisions to be made.  And really, that was my only point in my first posting - John's article shows why a surface understanding of SPC can lead one to make dangerous decisions with disastrous consequences.
  • Hmm, also in looking over my response I want to be clear that I am not insulting or insinuating anything about Davis, just that I have seen a very different reality than he has.  If I had his experiences, no doubt I would be arguing his side of the case.  But just because he has not seen a black swan does not mean they don't exist.  "Criticism is the only known antidote to error!"

Submitted by William A. Levinson on Fri, 05/13/2011 - 10:31

Sampling costs vs. failure costs

I recall learning a procedure to calculate the total cost of quality for acceptance sampling, with the following components: (1) Cost of inspection or testing, (2) Cost to replace nonconforming items caught by the inspection (internal failure), and (3) Cost of failure in the customer's hands (external failure). The external failure cost of something like a pacemaker is obviously unacceptable so 100 percent testing is indicated.    Performance of this kind of calculation for SPC requires accurate knowledge of both the false alarm risk and beta risk (cost of reacting to a nonexistent problem vs. cost of allowing the process to drift out of control), which in turn supports the need to model non-normal processes with non-normal distributions.
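The three-component comparison described above can be sketched in a few lines. All the unit costs and defect rates below are made-up illustrative figures, not numbers from the comment; the sketch also assumes, for simplicity, that 100% testing catches every defect:

```python
def expected_cost_per_unit(p_defect, c_inspect, c_internal, c_external, inspect_all):
    """Expected quality cost per unit shipped.

    p_defect:    fraction nonconforming
    c_inspect:   cost to inspect or test one unit
    c_internal:  cost to replace a defect caught in-house (internal failure)
    c_external:  cost of a defect reaching the customer (external failure)
    inspect_all: True -> 100% testing (assumed to catch every defect)
    """
    if inspect_all:
        return c_inspect + p_defect * c_internal
    return p_defect * c_external

# Ordinary widget (assumed costs): external failure is annoying but cheap.
widget_test = expected_cost_per_unit(0.01, 1.00, 5.00, 50.00, inspect_all=True)   # 1.05/unit
widget_ship = expected_cost_per_unit(0.01, 1.00, 5.00, 50.00, inspect_all=False)  # 0.50/unit

# Pacemaker-like item (assumed costs): external failure dwarfs everything else.
pacer_test = expected_cost_per_unit(0.001, 20.0, 200.0, 1_000_000.0, inspect_all=True)   # 20.20/unit
pacer_ship = expected_cost_per_unit(0.001, 20.0, 200.0, 1_000_000.0, inspect_all=False)  # 1000.00/unit
```

Under these assumed numbers, skipping inspection is cheaper for the widget while 100% testing is overwhelmingly cheaper for the pacemaker-like item, which is exactly the "external failure cost is unacceptable" case the comment describes.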


Submitted by Tom Hopper on Sat, 05/14/2011 - 00:58

A Non-Expert's Perspective

John, thank you for an excellent post. I think it is important for both practitioners and managers to recognize that SPC should be about economical control of processes, not just statistical control.

However, in ten years of using and studying SPC in several companies, I have never once had the luxury of being able to estimate the economics of alpha, or the economic consequences of Type I or Type II errors. I've worked with colleagues and managers to try to get them to make estimates, and what I find is that (a) the data don't exist and (b) no one wants to take the time to guesstimate. Indeed, I have seen people use tightened control limits not based on economic analysis but because they wanted to reduce variation using 100% inspection and were using the natural process limits to set tolerance limits (I'll let the reader count how many opportunities for improvement are implied by that statement).

I have, unquestionably, been living in target-rich environments, having spent most of my time in R,D&E with smaller companies where statistical techniques and SPC are new and any application of statistical methods represents a vast improvement over previous conditions. However, I have seen this same problem with much larger and more established companies, too. This is clearly the reason that approaches like the Shainin Red X (or GM's "Statistical Engineering") are popular: most people are not in a position to estimate basic statistical factors, let alone economic ones. At the same time, any standard factor, or even a formulaic approach that pre-calculates all statistical factors, is better than whatever was being done previously.

In short, I think that John's article (and the references) is extremely valuable. You can be sure that I will be saving this article and obtaining the references. Based on my experience and observations, Davis' points are entirely applicable and appropriate to a very broad audience of practitioners who, for better or worse, are not in environments mature enough to benefit from John's more refined approach.
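Even the "guesstimate" Tom describes can be made concrete with a back-of-envelope model. The sketch below is a deliberately simplified expected-cost calculation with assumed costs and shift probabilities, not a full economic design in the Duncan tradition; it assumes normality, individual points, and a fixed 1-sigma shift:

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def expected_cost_per_sample(k, shift, p_shift, c_false_alarm, c_missed_shift):
    """Rough expected cost of one charted point with +/- k sigma limits.

    k:              control limit width in sigma units
    shift:          size of the special-cause mean shift, in sigma units
    p_shift:        probability the process has shifted at this sample
    c_false_alarm:  cost of investigating a false signal (Type I)
    c_missed_shift: cost of one undetected out-of-control period (Type II)
    """
    alpha = 2.0 * (1.0 - norm_cdf(k))                   # false-alarm probability
    beta = norm_cdf(k - shift) - norm_cdf(-k - shift)   # miss probability for the shift
    return (1.0 - p_shift) * alpha * c_false_alarm + p_shift * beta * c_missed_shift

# Assumed figures: 1-sigma shifts present 2% of the time, $200 to chase a
# false alarm, $500 of scrap per undetected shifted period.
for k in (2.0, 2.5, 3.0, 3.5):
    cost = expected_cost_per_sample(k, 1.0, 0.02, 200.0, 500.0)
    print(f"k={k}: expected cost/sample = ${cost:.2f}")
```

With these particular assumed costs, the expected cost drops sharply as the limits widen from 2 to 3 sigma and then flattens; change the cost structure or the shift size and the curve moves, which is precisely why the economic design question has to be answered per process rather than by dogma.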


Submitted by Davis Balestracci on Sat, 05/14/2011 - 07:10

In reply to A Non-Experts Perspective by Tom Hopper

BRAVO! Tom

Very, VERY astute comments -- and, unfortunately, my industrial experience as well. 

Of course, we folks who love statistics are all too eager to apply the (needed) "advanced" stuff.  But, as Tom says, it falls on deaf ears.

As I've broadened my practice to educate people more about "variation," I may be using the "simple" stuff, but, you know what? -- I've never had more fun...or been more effective.

Well done!

Once again, guys, we're BOTH right.

Kind regards,

Davis


Submitted by Kim.niles@cox.net on Sat, 05/14/2011 - 09:19

A good read indeed!

John:

While I question your source of passion (blasting other articles), I always find your articles a good read … this one being up at the top.

You have the potential to develop the next generation of SPC … models, methods, and tools, not just models. Cost/benefit analysis, judging with confidence, flexible sampling and limits based on need, normality, autocorrelation, shifts, drifts, and rules that catch things the Western Electric rules don't catch should all be easily considered, as you seem to suggest. I suggest you connect with a stat software company (Dr. Neil Polhemus at Stat Graphics?) to take the common SPC offerings that they all tend to offer and turn them into next-gen SPC.

Thanks for giving me another article for my “SPC WOW articles” file.

KN – www.KimNiles.com



© 2025 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest" is a trademark owned by Quality Circle Institute Inc.
