# Content By Rip Stauffer

By: Rip Stauffer

I must admit, right up front, that this is not a totally unbiased review. I first became aware of Davis Balestracci in 1998, when I received the American Society for Quality (ASQ) Statistics Division Special Publication, *Data "Sanity": Statistical Thinking Applied to Everyday Data*. At the time, I was still working in the Navy's Total Quality Leadership (TQL) schoolhouse, having spearheaded the statistical process control course we taught to Navy Quality Advisors (TQL's "Black Belts"). I was struck by the apparent depth of his commitment to statistical thinking and data-based decision science, his iconoclastic style, and the simple, clear examples he used to illustrate his points in that publication.

By: Rip Stauffer

A lot of people in my classes struggle with conditional probability. Don't feel alone, though. A lot of people get this (and simple probability, for that matter) wrong. If you read *Innumeracy* by John Allen Paulos (Hill and Wang, 1989) or *The Power of Logical Thinking* by Marilyn vos Savant (St. Martin's Griffin, 1997), you'll see examples of how misunderstanding or misusing conditional probability has put innocent people in prison and ruined many careers. It's one of the reasons I'm passionate about statistics, but it's hard for me, too, because it's not easy to work out in your head. I always have to build a table.

The best thing to do is to be completely process-driven; identify what’s given, then follow the process and the formulas religiously. After a while, you can start to see it intuitively, but it does take a while.

In my MBA stats class, one problem that always stumped the students was a conditional probability problem:

By: Donald J. Wheeler, Rip Stauffer

How do extra detection rules work to increase the sensitivity of a process behavior chart? What types of signals do they detect? Which detection rules should be used, and when should they be used in practice? For the answers read on.
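As a rough illustration (not the authors' treatment), one widely used supplementary rule flags a run of eight successive points on the same side of the central line, which signals a sustained shift too small for the three-sigma limits to catch. A minimal sketch:

```python
def runs_rule(values, center, run_length=8):
    """Return indices where `run_length` successive points fall on the
    same side of the center line (a common supplementary detection rule
    for sustained small shifts)."""
    signals = []
    run = 0        # length of the current same-side run
    last_side = 0  # +1 above center, -1 below, 0 on the line
    for i, v in enumerate(values):
        side = 1 if v > center else (-1 if v < center else 0)
        if side != 0 and side == last_side:
            run += 1
        else:
            run = 1 if side != 0 else 0
        last_side = side
        if run >= run_length:
            signals.append(i)
    return signals

# Eight points in a row above the center line trigger a signal.
print(runs_rule([1, 1, 1, 1, 1, 1, 1, 1], center=0))  # [7]
```

The trade-off the article examines is that every added rule increases sensitivity to real signals at the cost of more false alarms, so the rules must be chosen deliberately.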

By: Rip Stauffer

Recently, in one of the many online discussion groups about quality, Six Sigma, and lean, this question was posed: “Can X-bar R and X-bar S be used interchangeably based on sample size (n) if the subgroup size is greater than one and less than eight?” Answers varied, of course.

In some of these discussion groups, you get to see how far rule four of W. Edwards Deming’s funnel experiment has pushed some training programs off in one direction or another, especially when it comes to statistical process control (SPC). One set of answers that surprised me, though, came from a couple of consultants in France, who said, “Clearly not... the question is about a sample of 1 to 8. [The] response is definitely no. You can’t calculate a standard deviation with a sample of one or two. A sample higher than 8 is highly recommended.”

The point they were trying to make was that for subgroups of size eight or smaller, you could only use X-bar R charts.
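In fact, both charts estimate the within-subgroup sigma from the same data, just by different routes: R-bar divided by d2 for the range chart, and S-bar divided by c4 for the standard-deviation chart. A quick sketch with made-up measurements (subgroup size n = 4; the d2 and c4 values are the standard bias-correction constants for n = 4) shows the two estimates agree closely even at small n:

```python
import statistics

d2 = 2.059   # bias-correction constant for the average range, n = 4
c4 = 0.9213  # bias-correction constant for the average std. dev., n = 4

# Made-up measurements: four subgroups of size four.
subgroups = [
    [10.1, 9.8, 10.3, 10.0],
    [9.9, 10.2, 10.1, 9.7],
    [10.0, 10.4, 9.9, 10.1],
    [10.2, 9.8, 10.0, 10.3],
]

r_bar = sum(max(sg) - min(sg) for sg in subgroups) / len(subgroups)
s_bar = sum(statistics.stdev(sg) for sg in subgroups) / len(subgroups)

sigma_from_R = r_bar / d2   # basis of the X-bar R chart limits
sigma_from_S = s_bar / c4   # basis of the X-bar S chart limits
print(sigma_from_R, sigma_from_S)  # the two estimates agree closely
```

The S chart becomes preferable as subgroups grow (the range discards more information at larger n), but nothing prevents its use at small subgroup sizes.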

By: Rip Stauffer

A number of recent articles in quality literature (and in the quality blogosphere) have posited the death or failure of Six Sigma. More articles, from many of the same sources, discuss the outstanding success of current Six Sigma efforts in manufacturing, healthcare, government, and financial services organizations. Which is it? Is Six Sigma dying, or is it alive and well?

It certainly has died at a number of companies. Several consultants I talk to regularly mention that their clients wanted to implement Six Sigma, but they wanted to make sure that no one called it Six Sigma. I have had this experience with a couple of clients as well. The usual reason given runs something like the following: “We tried that a few years ago, and it left a bad taste in people’s mouths.”

By: Rip Stauffer

In one recent online forum, a Six Sigma Black Belt asked a question about validating samples: how to ensure that samples, when taken, reflect (i.e., represent) the population parameter. His purpose was to understand the baseline for a project. He said he had six months of data on cycle times for handling maintenance tickets.

Among the early suggestions in response to the query was an assortment of options, including cross-validation using regression (comparing randomly selected subsets) and t-tests on small samples. Another person suggested testing the data for normality: “Then any sampling technique will do,” he claimed.

Someone suggested plotting the data on an XmR or X-bar R chart. Someone else suggested simply taking the average and then using process maps and lean techniques to reduce the cycle time. This person asserted that “random sampling is all that is needed to have a representative sample—by definition.” He went on to suggest that stability doesn’t matter; with six months of data, you can just number the tickets from 1 to k and use a random number generator to select a sample. His justification? Classical statistical texts don’t require you to check for stability before taking a random sample.
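The XmR suggestion is the easy one to sketch. Assuming hypothetical cycle times in days (the data below are invented), the individuals-chart limits come from the average moving range, using the standard scaling factor 2.66 = 3 / d2 (with d2 = 1.128 for moving ranges of two):

```python
def xmr_limits(values):
    """Individuals (X) chart limits from the average moving range.
    2.66 is the standard scaling: 3 / d2, where d2 = 1.128 for n = 2."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    x_bar = sum(values) / len(values)
    return x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar

# Hypothetical ticket cycle times, in days.
cycle_times = [5, 7, 6, 8, 5, 9]
lcl, center, ucl = xmr_limits(cycle_times)
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}")
```

The point of plotting first is exactly the one at issue in the thread: a sample can only represent a population if the underlying process is stable over the period sampled.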

By: Rip Stauffer

Editor’s note: In response to Kyle Toppazzini’s article, “Lean Without Six Sigma May Be a Failing Proposition,” published in the Sept. 27, 2012, issue of Quality Digest Daily, Rip Stauffer left the following observant comment.

I started my career in quality when the consulting world hadn’t yet split into specialist camps they called “Six Sigma” and “lean.” At that point in time, if people were doing anything programmatic, they were calling it something like total quality. People studied the seven tools, and the seven new tools, and the Toyota Production System, and statistical process control (SPC). We studied general systems theory and learned to do process research.

By: Rip Stauffer

I recently closed the doors of my own consulting company on the prairie in Minnesota and headed back into the wild, wacky, wonderful world of larger consulting groups, joining a group in Northern Virginia. One of the consequences of that transition was that I was unable to keep a couple of speaking engagements, so I promised to write something about dashboards for the people who had planned to attend. This dashboard issue is one of the more egregious problems I see regularly.

Although I can’t really put that whole presentation in writing, I can at least discuss the problem. The presentation was actually a half-day workshop that included W. Edwards Deming’s Red Bead experiment as an opening, followed by a discussion and brief overview of statistical process control (SPC). I see so few people using SPC anymore; most of the companies I work with measure very little and use their data badly. I wanted to include the Red Bead experiment because I consider it a prerequisite, the price of entry. If you understand the red bead, you’d probably never use an idiot-light dashboard.

By: Rip Stauffer

It’s better to measure things when we can; that’s been well-established in the quality literature over the years. The use of go/no-go gauges will always provide much less information for improvement than measuring the pieces themselves. However, we don’t always have the luxury of using continuous or variables data. Sometimes, the only way to track important events is to count them. Numbers of defectives, exceptions, reschedules, readmissions, rework rates, scrap rates… all of these are vital to our operations, and all are based on counts. The performance of numerous transactional and other business processes can only be assessed using counts.
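For count data like these, the attribute charts apply. A minimal sketch of p-chart limits, assuming a constant subgroup size and invented defective counts (the numbers below are hypothetical), looks like this:

```python
import math

def p_chart_limits(defectives, sample_size):
    """Three-sigma limits for the fraction-defective (p) chart,
    assuming a constant subgroup size. The lower limit is floored
    at zero, since a fraction defective cannot be negative."""
    p_bar = sum(defectives) / (len(defectives) * sample_size)
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    lcl = max(0.0, p_bar - 3 * sigma)
    ucl = p_bar + 3 * sigma
    return lcl, p_bar, ucl

# Hypothetical data: defectives found in six subgroups of 100 items each.
lcl, p_bar, ucl = p_chart_limits([5, 7, 4, 6, 8, 6], sample_size=100)
print(f"LCL={lcl:.3f}  CL={p_bar:.3f}  UCL={ucl:.3f}")
```

Counts carry less information than measurements, but charted this way they still distinguish routine variation from a signal worth investigating.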