Davis Balestracci
Published: Tuesday, September 18, 2012 - 13:17

As you all know, the influence of W. Edwards Deming on my career and thinking has been profound. A criticism always leveled at him was that he was short on specifics, but he would always growl at anyone who alluded to this: “Examples without theory teach nothing!”

Enter Joseph Juran, the other quality giant of the 20th century. When I worked at 3M during the 1980s, the company had several sets of his 16-video Juran on Quality Improvement series. I studied it hard and watched several of the tapes many times. He had a good empirical sense (and sense of humor) and, having been around the block once or twice, a lot of wisdom. So let’s apply some of that wisdom to the two scenarios from part one of this three-part series.

Juran always advised “Exhaust in-house data” as a first strategy. Before doing this, an initial control chart analyzing your process is important. Here’s a key principle of common cause analysis that almost everyone overlooks: The data from any common cause period on a control chart can be aggregated to attempt a “stratification” and apply Juran’s beloved Pareto principle.

For the percent computer-uptime data from part one, the calculation below shows what the chart’s average implies. What if you considered the potential of using the data from all 19 months of the graph, i.e., ~95 hours of downtime? One could then ask the questions listed below, and the “different conversation,” with its resulting action, has begun.

For the “never events” data from part one, which also showed common cause, 29 total events occurred during the plotted period; the bullets below suggest how to proceed.

Key point: Are these suggestions for both scenarios something you could easily do up front, before getting more people involved? As you know, one of my ongoing themes is “getting the respect you deserve.” Might this help?
Here’s a more thorough approach to localizing the interesting “20 percent.” If the control chart exhibits common cause, the process is stable and, in this first cursory analysis, most likely characterizes the reporting process and the results it is perfectly designed to produce.

A deeper issue in the initial analysis of a process, which is in many cases humdrum and routine to the work culture, is the effect of human variation, both in perception (i.e., definition) and in reporting. In some cultures this includes a most nontrivial fear factor. Just because a reporting process exists doesn’t necessarily mean that the right events are being reported, or even that all the events are being reported.

In the case of the “never events” data, based on the current chart, the reporting process is stable: no more, no less. Depending on what the analysis exposes in the context of the previous paragraph, some changes may be needed to reduce variation in the reporting process itself, which would improve the ongoing improvement process as a result. Meanwhile, there is hope that at least some vital areas needing improvement might be exposed, including the reporting process, and that many areas of human variation can be considered and improved by working on these “vital few” (Juran again). Do you already have any data on their impact? Do you have any data at all?

I’ve used Juran’s “exhaust in-house data” advice to introduce the first common cause strategy: stratification. There’s more to stratification than is presented here, and there are two other common cause strategies as well. So, to be continued in part three.
Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended, as an approach to leadership, in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach, as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.

Wasting Time With Vague Solutions, Part 2
Some wisdom from Joseph Juran
From the control chart, the department was averaging 99.3 percent uptime and, hence, (100 – 99.3 =) 0.7 percent downtime. Assuming a 30-day month:
0.007 × 30 days × 24 hours ≈ 5 hours of downtime a month, on average (0 to 16 in any one particular month). The routine meeting described previously usually focused on this monthly number, treating it as a special cause in terms of both its amount and its causes.
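The arithmetic above, and the 19-month aggregation mentioned earlier, can be sketched as follows. This is a minimal illustration; the 30-day month, the 99.3-percent average, and the 19 months all come from the article’s scenario:

```python
# Downtime implied by the control chart's average uptime.
# Assumes the article's figures: 99.3% average uptime, 30-day months,
# and 19 common cause months available to aggregate.
uptime_pct = 99.3
months = 19

downtime_frac = (100 - uptime_pct) / 100      # 0.007
hours_per_month = downtime_frac * 30 * 24     # about 5 hours a month
total_hours = hours_per_month * months        # roughly 95 hours to stratify

print(f"≈{hours_per_month:.1f} hours/month, ≈{total_hours:.0f} hours over {months} months")
```

(Strictly, 5.04 × 19 ≈ 96 hours; the article rounds to ~95. Either way, it is a far richer pile of data than any single month’s 5 hours.)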
1. Is it possible to go back even further to see if there is even more stable behavior to potentially aggregate?
2. Can one think of categories into which these hours can be stratified, such as time of day?
3. Is routine maintenance included in the uptime definition?
• If so, is that maintenance executed consistently?
• At what time(s)?
• How many of these hours are involved?
• How can the other remaining hours be categorized to see what’s significant?
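Once stratification categories like these are chosen, the aggregated hours can be Pareto-ranked. A minimal sketch follows; the category names and hours are invented for illustration, and only the ~95-hour total comes from the article:

```python
from collections import Counter

# Hypothetical stratification of ~95 aggregated downtime hours.
# Category names and hour counts are invented for illustration only.
downtime_hours = Counter({
    "routine maintenance": 41,
    "overnight batch jobs": 27,
    "network outages": 15,
    "hardware failures": 8,
    "other": 4,
})

total = sum(downtime_hours.values())  # 95
cumulative = 0
for category, hours in downtime_hours.most_common():
    cumulative += hours
    print(f"{category:22s}{hours:3d} h  {100 * cumulative / total:5.1f}% cumulative")
```

In this invented breakdown, two categories account for more than 70 percent of the hours, exactly the “vital few” pattern stratification is meant to expose.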
• Given the trendiness of root cause analysis these days, what if 29 individual analyses had been done during this period (Joiner’s Level 2 fix)?
• What does the chart say about the effect of all this activity? No improvement.
• Given that the control chart is common cause, did you know that you can aggregate all 29 events during this time because the same process produced them? Maybe you should do a root cause analysis of your 29 aggregated root cause analyses! (Joiner’s Level 3 fix)
• Can you go back further to get more data? If you were to plot some previous history, might it be indistinguishable (i.e., common cause) from this time period? In that case, you could aggregate any of those additional events into the mix.

Define recurring problems
Assess the impact of each problem
• How often does this problem occur?
• How severe is it when it occurs?
• Would other data be useful to determine its impact? How can you get them?

Localize each major problem
• When does or doesn’t the problem occur?
• Where does it occur, or where is it first observed? Where doesn’t it occur? Where is it not observed?
• Does the problem’s occurrence correlate with any particular vendor’s product, in terms of higher or lower rates?
• Are there other problems that always or often occur together with this problem? Could these be related somehow? Are there problems that you might ordinarily expect to see but don’t?
• Who tends to have the problem most often?
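Localization questions like these turn into simple tallies once the aggregated events are categorized. Here is a minimal sketch for the 29 “never events”; the locations and counts are invented for illustration, and only the total of 29 comes from the article:

```python
from collections import Counter

# Invented localization data: each of the 29 aggregated "never events"
# tagged with where it was first observed. Real categories would come
# from the incident reports themselves.
events_by_location = Counter({
    "OR suite A": 11,
    "ICU": 7,
    "med/surg": 5,
    "OR suite B": 4,
    "other": 2,
})

total = sum(events_by_location.values())  # 29
for location, n in events_by_location.most_common():
    print(f"{location:10s}{n:3d}  ({100 * n / total:.0f}%)")
```

Even a crude tally like this answers “where does it occur, and where doesn’t it?” far faster than 29 separate root cause analyses.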
About The Author
Davis Balestracci
© 2023 Quality Digest. Copyright on content held by Quality Digest or by individual authors. Contact Quality Digest for reprint information.
“Quality Digest” is a trademark owned by Quality Circle Institute, Inc.