Anthony D. Burns

Quality Insider

Six Sigma Psychology

Why do so many people believe in Six Sigma fairy tales?

Published: Friday, October 19, 2012 - 10:21

W. Edwards Deming spoke at length on the importance of psychology to quality. It’s important because psychology provides a theoretical framework for understanding the differences between people. However, Deming did not consider the Asch Effect—the effect of group pressure on the modification and distortion of judgment—which has become very relevant to quality and the way it is seen by people today.

Imagine you have been shown slide 1, below. Then slide 1 is removed, and you are shown slide 2. You are then asked which of the bars in slide 2 was closest to the length of the bar in slide 1. The answer is obviously bar A. However, 75 percent of people will answer incorrectly at least part of the time, if they are placed in a group that intentionally answers incorrectly. That is, a majority of people will stick with the group view, even when they know the group view is ridiculous. This demonstration of conformity is an example of the Asch Effect, named after Solomon Asch’s experiments during the 1950s.

It is easy to think that Asch’s theory could not possibly relate to the technical and scientific aspects of quality. However, consider the following two choices:
• Quality of a product can be improved simply by broadening its specification limits.
• Quality is inherent in the way a product is made.

The correct choice should be obvious. Deming pointed out that you can’t inspect quality into a product. It is already there. Quality can only be improved by reducing variation. Changing specification limits alone does nothing to quality. Sadly, most people follow the belief that it does. They believe that defects, which are product out of specification, are the key. Six Sigma followers adhere to the words of their guru, Motorola engineer Bill Smith, who claimed that simply broadening specification limits improved quality. It doesn’t; it simply allows more product to be passed. This has nothing to do with a product’s inherent quality.

Real product quality has nothing to do with the specification. In fact, most buyers would consider a product with a wide specification band to be worse than one with a tight spec. Any methodology that has a specification-related name has flawed fundamentals. As Donald Wheeler pointed out (and was later quoted by Deming), “Conformance to specifications, zero defects, Six Sigma quality, and all other [specification-based] nostrums all miss the point.”

Here’s another pair of examples:
• It is impossible to keep any process in control for more than a day or so.
• Good quality is only possible if processes are kept in control.

Again, the answer should be obvious. The implication of the first choice is that it is impossible to predict the outcome of any process. It is therefore impossible to ensure that a product meets specification, no matter what specification limits have been set. This would seem utterly ridiculous, but incredibly, most people accept it blindly as part of Six Sigma doctrine.

The claim that all processes drift or shift by ±1.5 sigma was introduced by Mikel Harry. He claimed that in the “long term” of 50 measurements, which is two days if measurements are made hourly, all processes drift uncontrollably. His figure of 1.5 was based on a 1962 paper by Arthur Bender on tolerances in the height of a stack of discs. “Long term drift” of ±1.5 sigma has now been digested and regurgitated by Six Sigma folks as something real. Similarly, no one questioned why Bill Smith pulled his “1.5” out of thin air. No one questioned why Mikel Harry changed his stack of discs “proof” to a “correction” and then to a fudge factor that “would not be needed.”
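For reference, the canonical Six Sigma figure of 3.4 defects per million is simply the upper tail of a normal distribution beyond 4.5 sigma, i.e., the 6-sigma specification limit minus the claimed 1.5-sigma shift. A minimal sketch, assuming normality and a one-sided tail (which are themselves the assumptions in dispute):

```python
from math import erfc, sqrt

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * erfc(z / sqrt(2))

# A 6-sigma spec limit minus the claimed 1.5-sigma shift leaves a 4.5-sigma tail.
ppm = upper_tail(6.0 - 1.5) * 1e6
print(f"{ppm:.2f} defects per million")  # prints 3.40 defects per million
```

Without the 1.5-sigma fudge factor, the same tail beyond a full 6 sigma would be about 0.001 ppm, which shows how much work the shift is doing in the headline number.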

It is not surprising that the above sounds like a fairy tale. As Lewis Carroll wrote in Through the Looking Glass (Macmillan, 1871):

“‘You see,’ he went on after a pause, ‘it’s as well to be provided for every thing. That's the reason the horse has all those anklets round his feet.’

‘But what are they for?’ Alice asked in a tone of great curiosity.

‘To guard against the bites of sharks,’ the Knight replied. ‘It’s an invention of my own.’”

The 1.5 sigma drift, shift, fudge factor—like anklets around the feet—is also an invention. It is not real. There is no theoretical or practical evidence of any kind to support it. It doesn’t exist, except in fairy tales. When uncontrolled processes do drift, they can drift by any amount, even many times Six Sigma’s fictitious 1.5 sigma. We do not need fudge factors to guard against the bites of nonexistent sharks or the impossibility of control. As Wheeler says, “The best that can be said is that these numbers represent a triumph of computation over common sense.” The Black Knight of Six Sigma has taken quality back to the Dark Ages.

Why do so many people believe these Six Sigma fairy tales? The Asch Effect explains it all. People follow the herd, rather than stand up and state the obvious truth.

The above two examples are exactly like Solomon Asch’s bars, and there are many more. When companies start to appreciate that Six Sigma’s fundamentals are so badly flawed, they may more readily see that the rest of Six Sigma doesn’t get any better. Six Sigma’s rejection of Deming and Shewhart, and its preoccupation with normal distributions and the need to normalize, is just one example. Most people make the farcical Six Sigma choice rather than following basic logic and common sense. The reason is psychological: We are herd animals; we depend on the herd to survive. Survival depends on sticking with the group, no matter what our personal views may be.

Take as an example the Six Sigma evaluation that was carried out by the statistics department at IBM. According to Dan Rand, an internal statistical methods consultant and quality engineer at IBM Rochester, “We expressed concern with the thin theoretical and practical evidence for the 1.5 sigma shift, and the dubious means of counting defects and opportunities for defects. Our position paper was finally regarded as too disruptive to IBM’s progress in defect reduction, which management wanted to credit to Six Sigma policies. The position paper was never distributed beyond the team that created it.” In other words, they wanted to keep their jobs, so rather than boldly stating that the emperor has no clothes, they felt it was safer to follow the herd.

Deming was right about the importance of psychology. Although following the Six Sigma herd may be a safe path for individuals, it is very damaging for companies. Six Sigma has drifted aimlessly into lean Six Sigma and is seen as synonymous with total quality management (TQM). Even Mikel Harry claimed Six Sigma was “80-percent TQM.” Sadly, the added 20 percent has put quality into reverse gear. Unfortunately, Deming did not provide a way to steer the herd working in companies around the world, back from the nonsense of Six Sigma to good quality.

About The Author

Anthony D. Burns

Anthony Burns, Ph.D., has a bachelor of engineering and a doctorate in chemical engineering from the University of New South Wales in Sydney, Australia. He has 36 years of experience and his company, MicroMultimedia Pty. Ltd., is responsible for the development of the e-learning quality product Q-Skills and its support tools.

Comments

Six Sigma Psychology - Part 2

Please read more on the utter farce of Six Sigma's 'six sigma' in part 2 on LinkedIn.  I also reply to various comments and the inevitable ad hominem attacks.

What follows the name 'Six Sigma' is even worse.  Why do people turn their back on the world's greatest process statisticians, Dr Wheeler, Dr Shewhart, Dr Taguchi, and Dr Deming, all with PhDs in statistics, and instead follow the lunacy of the psychologist who created Six Sigma?

https://www.linkedin.com/pulse/six-sigma-psychology-part-2-tony-burns/

The following article, "Six Sigma Psychology - Part 2," responds to the comments above: https://www.linkedin.com/pulse/six-sigma-psychology-part-2-tony-burns?tr...

Criticism of Six Sigma Metric

I have studied Six Sigma for quite some time. I have never heard anyone say or imply that simply widening the specification limits improves quality. No knowledgeable person would believe this and there are many sharp people involved with Six Sigma.

In addition, I believe strongly that the Six Sigma metric (3.4 ppm, 1.5 sigma shift, etc.) should be dropped entirely from Six Sigma. This metric is not useful and has become harmful. What matters is the DMAIC and DFSS framework and a good management structure. Nothing in process improvement efforts has to be tied back to the 3.4 ppm. This metric is easily criticized, and most criticisms of Six Sigma, such as those in this paper, are really criticisms of this metric.

In case anyone is interested, our views about Six Sigma are in the paper found at http://filebox.vt.edu/users/bwoodall/2008%20Six%20Sigma%20ISR.pdf

Asch Effect

Dr. Burns states, "However, 75 percent of people will answer incorrectly at least part of the time, if they are placed in a group that intentionally answers incorrectly. That is, a majority of people will stick with the group view, even when they know the group view is ridiculous."  The second sentence of that statement is an old assumption.  More recent testing using brain scans shows the facts are more unnerving: the people actually think they are answering correctly.  See a good discussion of this in Susan Cain's excellent book Quiet: The Power of Introverts in a World That Can't Stop Talking.

Interesting

Congratulations to Dr. Burns on an interesting and clearly provocative article. I will qualify my response below by saying I am still relatively new in the quality field, having transitioned two years ago from my academic position as a theoretical physicist specializing in computational statistical mechanics.  Which is a fancy way of saying I have 15+ years of experience in analyzing small data sets produced by complex processes, and am still learning how to use those skills in a manufacturing environment.

To that end, my company invested in Black Belt training for me.  Not so much for the statistical training, but for the rest of the quality and team leadership toolbox. Also because the customers like it ;) So do I, frankly.  But perhaps that is because I see it as another tool in my analysis arsenal -- "tool" being the operative word.  I use it -- it doesn't use me.  I am perfectly free to pick the right bits to use in my context and leave the rest.

And that is where I think the danger lies -- when the "Six Sigma Psychology" encourages slavish adherence to metrics as opposed to thoughtful consideration of objective process data.  If the core of Six Sigma is DMAIC, then perhaps more emphasis needs to be placed in SS training on the fact that the core of DMAIC is the scientific method.  The only thing you MUST do when examining and improving your process is control the variables and objectively analyze the results as you drive to your goal. Who was it that said "Pick something you care about and plot it over time"? Everything else is window-dressing.

I work in an environment where, for some of our part numbers, it would take months if not years to reach what many would consider a sufficient sample size.  Does this mean I can't use a statistical framework? Of course not.  I just have to look at my problems differently -- DMAIC still works! (And for those in my situation, I highly recommend Donald Wheeler's book Short Run SPC.)  I also have to listen to process experts from the shop floor on up (which we should all do anyway). And there are even some advantages -- I sure don't have to spend much time worrying about the difference between a 1% and 2% false alarm rate on my control charts.

If there is a fairy tale in the land of Six Sigma, it is that we are somehow excused from our obligations as thoughtful professionals -- that we no longer have to respond adaptively to the situations we face because we have some kind of "turn-the-crank" methodology that will let us push a button in Minitab and save our companies millions.  If that worked, we'd all be swimming in team jackets, with no time to work because of all the celebratory pizza parties.

Thanks again to Dr. Burns and all the commenters -- healthy debate is of course also in our list of professional obligations :).

6 Sigma tales

Thank you Ally! Your thoughts and comments are a breath of fresh air in what was becoming a "stuffy" debate. And you are correct in stating that debate is part of our obligations. Much success as you use your new 6 Sigma tool in the way it was intended.

Confused published Body of SS Knowledge

Hi Tony,

I see one comment questioned your Doctorate?

I copied and pasted some confusing explanations of Six Sigma (a Motorola trade and service mark registered with the US Patent Office in 1987, not the "6Sigma" some other folks use). Anyway, Tony, these excerpts may reflect the confused body of knowledge held by many.

In searching the Automotive Industry Action Group (AIAG) SPC manual for auto suppliers, I could only find a requirement to achieve high process capability and process capability indexes, e.g., Cpk > 2.0, not a Six Sigma level of some value, if that helps your argument.
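For context on the Cpk figure: for a process centered between its limits, the sigma level quoted in Six Sigma literature is just three times Cpk, so Cpk = 2.0 corresponds to the spec limits sitting 6 sigma from the mean. A small illustration (the spec and sigma values below are hypothetical):

```python
def cpk(mean: float, sigma: float, lsl: float, usl: float) -> float:
    """Process capability index: distance from the mean to the nearer
    specification limit, in units of three standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical centered process: spec 10 +/- 0.6, standard deviation 0.1
value = cpk(mean=10.0, sigma=0.1, lsl=9.4, usl=10.6)
print(f"Cpk = {value:.2f}")  # prints Cpk = 2.00, i.e., a "6 sigma" process
```

Note that Cpk penalizes off-center processes: shift the same hypothetical mean to 10.2 and Cpk drops to about 1.33, even though the spread is unchanged.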

The extracts follow.

Mike

Thomas Pyzdek, The Six Sigma Handbook, 1999: "A six-sigma process will produce failures at a parts-per-million or even parts-per-billion level. This contrasts with the old three sigma process which produces parts-per-thousand failures. This difference of three to six orders of magnitude is profound" (page xii, preface). ... "Six Sigma Quality is a statistical measure of variation from a desired result" ... "Six Sigma requires the process standard deviation be no more than one twelfth of the total allowable spread" (p. 140).

And the final piece is found in Appendix Table 18, "Process Sigma Levels and Equivalent PPM Quality Levels ... based on the assumption that in the long term the process could drift by plus or minus 1.5 sigma." It then shows the modified table.

Keki Bhote, The Power of Ultimate Six Sigma, 2003, p. 20: "The hyped Six Sigma approach asserts that it is too difficult to hold the average (X-bar) at the target value (i.e., design center) of a parameter's distribution because of inherent shifts in materials and processes, etc."

And on DMAIC, page 13: "DMAIC is muddled in definition, imprecise in measurement, impotent in analysis, incapacitated in improvement, and rudderless in control. Other than that, it is okay." And he's ex-Motorola!

Pande, Neuman, Cavanagh, The Six Sigma Way, 2000, p. 28: "By factoring in opportunities for defects in the calculation (DPMO), Motorola made it more realistic to equate performance across different processes." Page 229: "The 1.5 sigma shift is one of the key bones of contention amongst the statistical experts about how Six Sigma measures are defined. The lucky thing is that when a convention is adopted and applied consistently, it's still valid ... the only challenge comes if you try to equate the accepted Six Sigma scoring system to strict standard deviations under a normal curve."


Six Sigma Psychology

While reading Dr. Burns' article I found many of his points valid and interesting.  Nonetheless, a few comments relative to:

  • Bill Smith
  • Mikel Harry
  • DMAIC
  • Shift (1.5 standard deviations or otherwise)

As I was at Motorola during the early days of Six Sigma, I knew Bill Smith; we often ate lunch together, and to this day I remain in touch with Dr. Harry.  I often represented Motorola Corporate at speaking engagements (e.g., HP, IBM, Ciba-Geigy now Novartis, Johnson & Johnson) discussing Motorola's pursuit of the Malcolm Baldrige Award and the history of Six Sigma.  Therefore, I have a different perspective on the history of Six Sigma.

Bill Smith's main emphasis was that the amount and cost of field failures was directly related to failures in both the design and manufacturing process.  For example, the greater the number of defects caught in the manufacturing process, even if those defects were corrected, the greater the likelihood that defects would escape into the field.  Mr. Smith demonstrated that the better the product design and the better the manufacturing process, the fewer defects would be observed in production, which would then result in fewer defects escaping to the customer and significantly lower warranty and field-failure costs.

As a result of his efforts, Motorola emphasized tracking all defects in a manufacturing process in order to better understand where and why they occurred, with the goal of improving the product design so that products were easier to build while also streamlining the manufacturing process.  In other words, the use of lean (at that time often called Just-in-Time or the Toyota Production System) was highly important.  The focus on reducing defect counts was emphasized because it was a method to Characterize the process and ultimately Optimize it.  Of course, a simple way to reduce the defect counts would be to simply not count defects!

In addition, Bill Smith realized from an engineering/statistics perspective that the shape, center, and spread of the distributions of product performance and dimension data would characterize production better than the specifications alone.  In fact, if the shape of the distribution was unimodal (the normal is just one such distribution), decreasing monotonically away from the mode, then tightening the spread so that the distribution width (e.g., plus or minus 3 standard deviations of the production data) was half the specification width requested by the customer would essentially lead to a product that is "twice as good" as the customer demanded.  He called such a case a 6 Sigma quality level.

Bob Galvin, Motorola's chairman, liked the phrase 6 Sigma.  Often when asked how long we would keep up this improvement effort, the answer was: until we get to at least a 6 Sigma level of performance.  Of course, Mr. Galvin was very smart in asking for a Six Sigma level of performance, as the better the products and processes became, the higher customer expectations would likely become (i.e., the specifications would be assumed to tighten).

The focus was not on the absolute level of Six Sigma, as customer demands would continue to rise.  Instead, the focus was on the relentless pursuit of variation reduction (i.e., the relentless pursuit of consistency at higher levels of performance) in the design and manufacturing processes.  In fact, in our quarterly reviews the emphasis was on maintaining a quarterly improvement rate (approximately a 25% reduction in defect rate per quarter) that would result in a 10-fold improvement every 2 years, a 100-fold improvement in 4 years, etc.  I recall watching a product manager present in front of Mort Topfer, who later became vice chairman of Dell Computer.  Mr. Topfer was upset because the quality level of the product that was shipping was 5.2 Sigma (very good), but unfortunately the product had been shipping at 5.2 Sigma the quarter before, indicating no improvement had been made.  As a result, a $20 million order was not shipped; instead the customer was informed that the product was being delayed until the quality level was improved!  Mr. Topfer was president of the Communications Sector of Motorola and made it clear that product quality improvement was expected and non-negotiable.

As for the sigma shift, the main concern was how much product performance / process variation was inflating over the long term relative to the short term.  Even so, if a process originally in a state of statistical control had its mean shifted halfway to the established control limits, and if the only out-of-control condition to which manufacturing reacted was points outside the control limits, then only 0.0668 of points (approximately 1 in 15) would end up outside the control limits.  The zone of 1 to 2 standard deviations of shift was called the Zone of Indifference, in that manufacturing may not react quickly to such shifts.  If the process shifted further, the control limits would pick up the change quickly.  Of course, this assumes that control charts are used in all processes and that other run rules (i.e., tests for special causes) are not used.  In actuality, it was not that processes shifted; instead, processes may become less consistent (i.e., process variation increases).  The amount of inflation in process variation in the long term, relative to the short term or best case, was used to determine a strategy for addressing process optimization.  Large inflation in variation over time (i.e., probabilistically similar in distribution to a shift of 2 or more standard deviations) would indicate that process controls were not being used effectively, since control charts would flag such an increase in variation.  Similarly, if process variation was consistent over time (i.e., probabilistically similar to a shift of less than 1 standard deviation) and yet the product was not meeting customer requirements, then a focus on product / process design was needed rather than an improvement in process control.
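The 0.0668 figure can be checked directly; a minimal sketch, assuming 3-sigma limits, a sustained 1.5-sigma mean shift, and only the points-beyond-limits rule:

```python
from math import erfc, sqrt

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

# With the mean shifted 1.5 sigma toward the upper control limit, a point
# signals only if it lands beyond 3 sigma, which is now just 1.5 sigma away.
# (The tail beyond the far limit, 4.5 sigma away, is negligible here.)
p_signal = upper_tail(3.0 - 1.5)
print(f"{p_signal:.4f}")  # prints 0.0668, i.e., roughly 1 point in 15
```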

Having a Ph.D. in statistics, I understand that the 1.5 shift is merely dead center in the Zone of Indifference as explained above, and may not itself have a strong theoretical basis.  In practice, the ongoing increase in process variation was often observed to be equivalent to a process shifting 1 to 2 standard deviations.  When a product / process is monitored using statistical process control charts, it can be explained why a shift of 1 to 2 standard deviations may be missed for a short period of time: a greater shift is picked up quickly, while a smaller shift will go on longer before being picked up.  I can provide everyday examples of this, but I think emphasizing the shift and absolute Six Sigma levels of performance misses the point.

Motorola was emphasizing a relentless (some would say ruthless depending on where you worked) pursuit of 10-fold improvement every 2 years, 100-fold every 4 years.  Moreover, many business units had a simultaneous objective of reducing all lead times by 50% every 2 years so time-based improvement was very much part of Six Sigma in the early days of Six Sigma.  Many of us who worked at Motorola in those early years may scratch our head when we hear "Lean Six Sigma" as if Six Sigma was not focused on Lean (i.e., time-based, efficiency focus).  Motorola very much benchmarked the world class manufacturing organizations in Asia, especially Japan, so reduction in both design and product variation was integrated into reducing lead times by eliminating unnecessary processing and streamlining what was left.  No need to improve something that should not be there in the first place (e.g., those steps that add no value).

In any case, I agree with Dr. Burns as to the "fantasy" of Six Sigma, but in my view the fantasy is the lack of understanding of the history and evolution of Six Sigma.  Many do not realize that Design for Six Sigma was actually taught well before DMAIC!  The emphasis was on both product design and process design, to increase product performance consistency (i.e., reduce performance variation) while reducing cost (i.e., increasing product margins through faster, value-added processing and elimination of unnecessary waste).  Again, DMAIC is not Six Sigma but rather one methodology roadmap to help others with improvement and innovation in order to drive Six Sigma performance.

Side note:  Dr. Mikel Harry often discussed Characterization and Optimization as the basis for improvement.  A few years later, well after the launch of Six Sigma, Measure and Analyze were used together to build product / process Characterization, while Improve / Innovate and Control together led to Optimization.  It is obvious that MAIC then evolved into DMAIC.  Keep in mind that DMAIC is not Six Sigma, which began a few years earlier, as much as it is a model for improvement created to facilitate Characterization and Optimization of a product / process.  Dr. Harry "codified" Characterization and Optimization into MAIC while leading the Six Sigma Research Institute.  During the early years of Six Sigma the emphasis was on Design for Six Sigma, so Characterization and Optimization worked to support that effort.  Later, many realized that it is difficult to design better products and processes if it is not understood why the current products or processes are not performing well.  Hence, if one has difficulty improving existing processes, how much more difficult would it be to design better products or processes not inherently having the flaws (defects) that already exist in the current ones?  Therefore, MAIC, and later DMAIC, was further developed to help characterize (understand) and optimize (improve and control) current processes.  Nonetheless, at Motorola there was significant emphasis on Design for Six Sigma from the early days.  Dr. Harry contributed much to the Design for Six Sigma effort, including the Characterization and Optimization emphasis that later evolved into MAIC and then DMAIC.

Question

Thanks, Ed, for this history at Motorola. I would be very interested in a reference for the statement that DFSS preceded DMAIC. Can you point me to such information? Thanks again. Cliff Norman

six sigma fairy tale

I support the spirit and the message of Dr. Burns' publication in QD. He has the courage to raise a dissenting voice on the value and scientific foundations of Six Sigma. I agree that SS foundations are flawed. For example, SS pushes the 3.4 ppm defect target rate that appears in numerous SS articles and books. But this figure is simply the area under the normal distribution curve beyond 4.5 sigma, i.e., the claimed 6 sigma minus the unsubstantiated 1.5 sigma shift.

Many SS claims are based on properties of the tails of the normal distribution, which is a poor model for the majority of real processes anyway, because it is an unbounded, symmetric distribution.

The DMAIC process that SS claims as its novel approach to structured problem solving has been used in some form for centuries in any reputable research since Roger Bacon's time: define the problem, measure the effect under study using a reliable metric system, analyze the data, suggest independent testing, and control the results.

Thus, SS is a combination of naive statistics based on the normal distribution and some general slogans about voice of the customer (VOC) and stakeholder management. SS cannot offer real quantitative root cause analysis and improvement methods. The design of experiments (DOE) included in the Analyze and Improve phases is of limited use beyond relatively small-scale experiments.

No powerful methodologies, such as computer modeling and simulation or mathematical analysis and calculus, are part of the standard SS 'tool box'.

Personally, I am a twice-certified SS Black Belt (from General Electric, GE). Therefore, I know this stuff from the inside. I vividly remember how 'infamous' GE CEO Jack Welch pushed SS into GE with an iron hand, creating a huge new bureaucracy of poorly educated Black Belts, Master Black Belts, quality champions, etc. I remember how very good engineers were fired for dissenting from SS because they rightly advocated for a wide use of mathematics and modeling for design improvement rather than relying on laughable SS claims.

Anyway, I thank Dr. Burns and QD editorial for the courage and for the sobering view of the largely damaging long-term effect of SS on quality and product design.


Wow...just Wow....

I have read this article three or four times trying to understand whether it is a "tongue in cheek" article or is intended to be serious.

If it is "tongue in cheek", I've missed the sarcasm and wit intended, my apologies.

If this is intended to be a serious article, it misses many targets that I was taught and have learned over a few decades of process/product improvement.

First, I do agree that psychology is very important to any choice people make.

Unfortunately, that is where my agreement stops.  The first pair of choices Mr. Burns proposes is that quality is related to specification limits and that quality is inherent in the way a product is made.  I believe the choice is obvious: both are incorrect.  Quality of a product is inherent in the design of the product, a lesson taught to me in engineering school and reinforced in "Six Sigma" training, "Lean" training, business school, and a few decades of experience.

Mr. Burns continues to discuss "real product quality" and concludes, "Most buyers would consider a ... wide specification ... to be worse than ... a tight spec."  I would propose that most buyers have no knowledge of specifications, nor care about them.  Most buyers will recognize variation (either large within-lot variation or large between-lot variation, generally different problems with different corrections!), the ability to meet their use needs, robustness, and, depending on the product, durability and reliability.  Most of these attributes are designed in, and the process used to create the product may have little effect on the customers' perception of "quality."

As I was taught in engineering school, "six sigma" school, and life, design specifications are the knowledge transfer from user needs to manufacturing requirements.  If we bring in Taguchi, we recognize that a production facility concerned with the lowest possible cost strives for two key things in its processes: 1. hit the nominal, and 2. do so with minimal variation.  Reducing variation alone does not accomplish everything.

The next pair of examples goes on about "... impossible to keep ... in control ..." and "good quality is only [possible] ... if ... in control."  Again, I believe the answer is obvious: both are incorrect.  I propose that good quality is only possible if the design is robust to the manufacturing process.  Mr. Burns goes on to discuss "process drift" and "predicting outcomes."  I think Mr. Burns is trying to discuss process control and capability analysis.  What I was taught about "process drift" and "capability analysis" is that all processes have common cause and special cause variation.  The challenge is to identify the special cause variation before it detrimentally impacts the process output.  To accomplish this task, Deming, Wheeler, et al. offer different techniques to monitor a process and recommend establishing "rules" to identify special causes.  There is even a generally accepted starting point, developed by Western Electric, for identifying special cause variation.

As I was taught, these are starting points, and they begin to define the amount of "drift" that a given process is able to detect.  Given a process and a particular rule, a few statistical things are happening.  First are alpha and beta risk: we may conclude we have a special cause when there is none, or vice versa.  We set the "rules" to balance the risk of a wrong decision against the risk to consumers and to the business.  Related to this is the "drift" (also seen as "average run length" and a few other consultantese phrases).  This concept of "drift" is the amount a process can deviate BEFORE it is identified.  Here is where we come full circle back to Motorola.  Many of the Motorola decisions regarding "Six Sigma" were business decisions (the 1.5 standard deviation drift, 3.4 defects per million opportunities, etc.).  MANY consultants and educators have missed this point and mistakenly teach those rules as "six sigma."  Each business, product, and process needs to identify those parameters based on the risk of failure.  The reference to "drift" as a process shift before detection is critical to understand in terms of the potential number of defects created (not found, because I agree that quality is never successfully inspected in) and thus the risk to the customer, the business, etc.  This is far less a "fudge factor" than a clearly calculated potential occurrence, and far less of a fairy tale or make-believe than suggested.
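The "average run length" idea mentioned above can be made concrete. For a Shewhart chart using only the points-beyond-3-sigma rule, with independent normally distributed points, the ARL is just the reciprocal of the per-point signal probability; a sketch:

```python
from math import erfc, sqrt

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

def arl(shift: float) -> float:
    """Average run length to a signal for a sustained mean shift of
    `shift` sigma, using only the points-outside-3-sigma-limits rule."""
    p = upper_tail(3 - shift) + upper_tail(3 + shift)
    return 1 / p

for shift in (0.0, 1.0, 1.5, 2.0, 3.0):
    print(f"shift {shift:.1f} sigma -> ARL ~ {arl(shift):.1f} points")
```

Note how an in-control process (shift 0) false-alarms only about once in 370 points, a 3-sigma shift is caught within a couple of points, and a 1.5-sigma shift sits in between, signaling only about every 15 points; this is the quantitative sense in which a moderate shift can persist "before detection."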

Your example using IBM certainly seems to reinforce your point about psychology influencing "the herd."  I don't see the connection to "Six Sigma," and I would contend that there were far greater problems in that business than the details of "Six Sigma" if factual information was not being considered.  The failure of the company to listen to and rely on factual data violates another tenet of the "Six Sigma" I was taught.

Lastly, I was also taught that there is very little new in "Six Sigma."  The power of the philosophy is in the focus on consistently hitting the target and systematically reducing variation.  To that end, the DMAIC construct becomes more of a project management tool to make sure key areas are addressed (almost like a simple check sheet) and fact-based decisions are made.  Pretty much the same as TQM, Quality Circles, and any other quality fad that "educators" and consultants have rolled out to create their niche.

In the end, there is little more than good science, good engineering, good mathematics, respect for knowledge, and fact-based decisions behind any of the fads broadcast throughout businesses today.

 

Six Sigma Psychology

My mother always told me that if you don't know what you are talking about, stay silent.  Obviously, Mr. Burns should have remained silent.

His article, which to my utter amazement made it through the Quality Digest vetting process, is riddled with untrue statements followed by half-truths.

Statement #1: "A process cannot stay in control for a day or two."  Ridiculous.

It appears Mr. Burns is stuck in the '70s and never progressed into the 21st century with his thinking.

Mr. Burns is obviously an expert in his field, and whatever that is is where he should focus his articles.  He obviously has no clue about Six Sigma and the impact it has had on the industrial world.

The sigma shift is really unimportant in the Six Sigma world.  The DMAIC process is the key to Six Sigma projects.  This is what made Six Sigma such a powerful entity.

I would suggest that Mr. Burns actually become a certified Green Belt or Black Belt through a reputable training program, and then we can discuss the issue.  Until then, shame on Quality Digest for publishing this article.

 

P.S.

Has anyone checked where Mr. Burns might have received his doctorate?

 

Six Sigma Nonsense

I was glad to read an article that debunks the Six Sigma craze.  A real herd mentality if ever there was one.  To the lost souls out there dealing every day with quality: IT'S ALL ABOUT VARIATION!  In most Six Sigma articles, the control chart is an afterthought, if mentioned at all.

Perhaps YOU should take your mother's advice to heart.

Rich DeRoeck

Six Sigma Psychology

I think you are missing the point.  While I agree that the statement in the article you refer to is ridiculous, the author likely agrees.  It is stated as an example of the rationale many Six Sigma practitioners use to justify the 1.5 sigma mean shift.  There are many incorrect statistical methods in use today because of management's fixation on indices and metrics without understanding the underlying principles.  You can always achieve a better DPMO, Cpk, PPM, or whatever metric you choose by fitting different distribution models to the data, widening the specification limits, or arbitrarily eliminating "outliers."  The report will look good, but the customer will still be unhappy.
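A toy illustration of that metric gaming (the data, limits, and "outlier" here are hypothetical, purely to show the arithmetic):

```python
from statistics import mean, stdev

def cpk(xs, lsl, usl):
    """Cpk: distance from the mean to the nearer spec limit, in units of 3 sigma."""
    m, s = mean(xs), stdev(xs)
    return min(usl - m, m - lsl) / (3.0 * s)

# Hypothetical measurements; 11.2 is an inconvenient point.
data = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 11.2]

print(round(cpk(data, 9.5, 10.5), 2))        # honest spec limits
print(round(cpk(data, 9.0, 11.5), 2))        # widened limits: looks better
print(round(cpk(data[:-1], 9.5, 10.5), 2))   # "outlier" dropped: looks better
```

Nothing about the process changed between the three lines; only the reporting did, which is exactly the point.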

I am a black belt, and agree that the DMAIC process has value for finding root causes of problems and evaluating the effect of corrective actions.  However, the premise of the article was that people are often not able to challenge unsound statistical practices because they are afraid of challenging the "flavor of the month", especially if it is favored by management and served up by high priced consultants.

Based on what I have seen in my 30 years in manufacturing, his insight is spot on.

Mitch 

 

 

to Ronald R

I understand your ire with the article and have the following comments:

Most of Dr. Burns' observations are correct, which is to say I agree with him.

He wasn't clear about whether he thought processes could stay stable for more than a day.  If his bigger point is that slavish adherence to specifications is suboptimal, he's right.  Ultimately, companies need to know how much variation is optimal.

I've observed groupthink and siloing at several different companies and heard many "just-so" stories from friends and colleagues about how things are done wrong.  I agree with you, Ronald, that QD's journalistic standards are very low: they let most of their authors print opinions presented as fact.  But I let them off the hook for that, because I'm not aware of any hard data establishing an unassailable assessment of the effectiveness of Six Sigma, and I'm not sure any such assessment is possible given the subjectivity of the topic.  I don't need a study to tell me the sky is blue or that trees drop leaves in the winter; I know it's true.  On the other hand, scientists sometimes find facts that directly oppose common perceptions (the difference between centripetal acceleration, which is real, and centrifugal acceleration, which is imaginary, is a great example), so I agree Dr. Burns should back up his assertions.

Despite my critique, I absolutely agree with the spirit of his article.  Six Sigma is a mess and a sham, and Bill Smith's alleged statement, in this context, is incorrect.  I'll indulge in a bit of ad hominem: what's the big deal about Bill Smith and Six Sigma?  It's just TQM with lipstick.

I think your attack on Dr. Burns' credentials is uncalled for.

Quality Digest Chimes In

Hi David. Regarding your comment about QD's journalistic standards: opinion pieces are just that... opinion. And opinion makers almost always state their opinions as fact. Actually, all the fact-based statements in Burns' opinion piece checked out as far as I can tell. Wheeler did make the statement Burns quoted, and Deming did quote Wheeler. Mikel Harry did make the assertions Burns mentioned, as did Bill Smith. Now, if you want to argue the value, or lack of value, of Six Sigma, or whether the 1.5 sigma shift is important or not... those are matters of opinion, open to discussion in an open forum such as this. So too is whether Six Sigma provides value to an organization. Opinion.

At QD, our job is to provide a forum for people in our industry to express their thoughts on a wide range of topics... of which statistics is by far the most volatile (who woulda thunkit).

If and when you find factual errors (as opposed to differences in interpretation), bring them to our attention. The nice thing about online publishing is that we can go back and correct errors. When we do, we also leave a comment at the top of the story noting that a correction was made and what it was.

Thanks

Chiming back

My claim was ill-conceived.  My concern is the common framing statements by your authors: "This is how things should be / These are desired goals / Great companies do this."  While I'm completely on board with the sentiments, I'd like to see a bit more rigor in the framing and in predictions of benefit.  For example, instead of saying 5S / Kaizen / DOE is good for you and broad-brushing the results, provide some data-driven analyses, and perhaps some examples of when they might not be appropriate.

Fair enough

Dave, point taken. I do agree that nuts and bolts are often lacking in these constructions. QD will try to do better in terms of getting our authors to back up their assertions. It's tough, though, given that most of that backup is usually anecdotal or simply based on the author's own experience... which opens up an entirely new topic: a program is what you make of it.

Thanks for keeping us on our toes.

Dirk Dusharme

Remaining Silent

Ronnie, your mother was probably paraphrasing Abe Lincoln, "Better to remain silent and be thought a fool than to speak out and remove all doubt." Good advice in general but not in this case.

I agree that Dr. Burns' first statement could be problematic, but I would refrain from extreme adjectives in commenting, lest you come across as an extremist.

Your "70's" statement is wholly unsubstantiated.

Dr. Burns and I have corresponded about Six Sigma since 2006. We are still looking for someone from IBM who had this unpublished position paper about the 1.5 sigma shift. I have read extensively about it including the original papers. Neither Harry nor Smith was able to satisfactorily answer my questions about it. I have to agree with Dr. Burns and Dr. Wheeler that this part of the theoretical underpinnings of Six Sigma is seriously flawed. That cannot be dismissed or ignored. The flaw stands out on the nose of Six Sigma like a pimple.

DMAIC is a powerful tool. It is an overtly stated version of PDCA. Six Sigma has many examples of successful process improvement to support its continued use. Dr. Burns and others write to caution everyone: caveat emptor.

1.5 Sigma Shift

I'm not one to write off Six Sigma completely, as Tony is. I was hired in 1999 by a very strong Deming disciple--Lou Schultz--to help him build a Deming-based Six Sigma practice. We decided that the idea of Six Sigma (striving for a process centered on the nominal and reducing variation around it) was aligned with our concept of Quality (very strongly informed by Deming, Wheeler, Taguchi, Kano), but some of the nonsense associated with it (the 1.5-sigma shift, for example) was NOT. I like DMAIC a lot, and have applied it a lot, for solving the problem of what to do when you have good statistical control but you're not meeting specs. It has proved useful in analyzing the common cause system and helping to reveal the system changes needed to shift the average and/or reduce the variation...to get the process centered with less spread. DMADV has helped a number of design organizations come up with better designs, based on Kano modeling and QFD, with processes designed for manufacturability and very high quality.

In a meeting with JUSE scientists and engineers in 2000, the conclusion we came to was that Six Sigma was a "vehicle," or a way to bring Quality to an organization. A more cynical translation of that conclusion might be that it's "marketing," and that would not be an entirely inaccurate reading. So I have been using Six Sigma for at least 12 years to bring quality (including the Red Bead, the Funnel, and the System of Profound Knowledge, as well as Shewhart and Wheeler-based SPC) to organizations in many different industries. My own experience with it has been good, and my clients have gotten a lot of great results and learning from it.

I have also seen and read some of the people that set Dr. Burns' hair on fire, though, and I can't blame him for condemning what he's seen as Six Sigma. It can get pretty ugly. I worked one time with a "GE Certified Master Black Belt" who told me that he had learned at Crotonville that when you have a highly skewed (positively skewed) data set, you should use Q1 as your measure of center (and Q3, of course, for negatively skewed data). I've seen many, many people insist on testing for normality before testing for homogeneity. I've seen people claim that they can judge stability from a histogram. Some of these people were, unfortunately, teaching other people to become "Black Belts" or "Green Belts." And of course, the King of Six Sigma promotion in the late '90s and early noughts, Jack Welch--who mocked earlier classes he'd been to where people were playing with red beads--decided that you should fire the bottom ten percent, and that you know who they are. He should have spent some more time playing with those red beads...maybe he'd have learned a bit about statistical theory.  So there's plenty of ammunition out there for Tony. I'd just like to assure him that there are those of us who have been able to take some good from the approach. On to another stump, though, where I will gladly stand with him full time.

I am in the midst of research for what I hope will be a pretty comprehensive treatment of the 1.5 sigma shift. I hope to put it to rest once and for all. In doing that research, I was able to get a one-hour conversation with Mikel Harry. He claimed in that conversation that he had never intended for 1.5 to become a production metric. What he told me in that interview was that he had used 1.5 as a "worst-case-scenario SWAG" for his design engineers when they were simulating designs. He told them to shift the nominal for each component in their new designs by 1.5 sigma in the worst-case direction, and then run the simulations to see whether they could get to a Cpk that met Motorola's acceptance criteria for new designs (I think it was 1.3, but I'd have to check my notes). Using 1.5 sigma that way probably makes sense; using it as a universal assumption (especially for production, and especially for a process in statistical control) is just silly.
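That design-simulation use of the shift is easy to sketch. All numbers below are hypothetical; this is my illustration of the idea, not Harry's actual procedure:

```python
def cpk(mean, sigma, lsl, usl):
    """Cpk: distance from the process mean to the nearer spec limit, in units of 3 sigma."""
    return min(usl - mean, mean - lsl) / (3.0 * sigma)

nominal, sigma = 10.0, 0.1    # hypothetical component dimension
lsl, usl = 9.4, 10.6          # hypothetical +/-6 sigma specification

centered = cpk(nominal, sigma, lsl, usl)                    # on nominal
worst_case = cpk(nominal + 1.5 * sigma, sigma, lsl, usl)    # after the 1.5-sigma shove

print(centered, worst_case)   # 2.0 drops to 1.5
```

The point of the exercise was design margin: if the worst-case Cpk still clears the acceptance threshold, the design is robust to that much drift. That is a very different claim from asserting every production process actually drifts 1.5 sigma.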

What many people don't understand is that the use of that table (DPMO with the 1.5-sigma shift applied, yielding 3.4 DPMO at "6 sigma") assumes that you have a sustained 1.5 sigma shift that somehow remains undetected. It's not a process just occasionally drifting back and forth between +1.5 sigma and -1.5 sigma, also somehow undetected; that gives you considerably less than 3.4 DPMO. For 3.4, your process mean has to have moved 1.5 sigma away from the nominal and just parked itself there, yet somehow your control charts never signalled. Figure the odds? I don't need to...they're ZERO. You can't even simulate making that happen.
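For what it's worth, the arithmetic behind that table is easy to verify (a quick sketch of my own):

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Truly centered 6-sigma process: fraction outside +/-6 sigma limits.
dpmo_centered = 2.0 * (1.0 - normal_cdf(6.0)) * 1e6    # ~0.002 DPMO

# Sustained, undetected 1.5-sigma shift: the nearer spec limit is now
# only 4.5 sigma away, the farther one 7.5 sigma away.
dpmo_shifted = ((1.0 - normal_cdf(4.5)) + normal_cdf(-7.5)) * 1e6   # ~3.4 DPMO

print(round(dpmo_centered, 3), round(dpmo_shifted, 1))
```

So the famous 3.4 DPMO is simply the one-sided tail beyond 4.5 sigma; it exists only under the sustained-and-undetected-shift assumption the comment above questions.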