Forrest Breyfogle—New Paradigms

Quality Insider

How Control Charts Can Provide Predictive Statements

Value chain functions and metrics should maintain continuity even through leadership change

Published: Wednesday, May 9, 2012 - 13:33

I will first describe how long-lasting business metrics can be created at the business level and then illustrate the 30,000-foot-level scorecard tracking of these measurements, where an individuals control chart is used to determine process stability. Then I will show how to make a prediction statement if a process has a recent region of stability.

In figure 1 below, you see the nine-step Integrated Enterprise Excellence business management system (see the article "Hoshin Planning Issues and Resolution" for a complete explanation). Step 2 of the system is, “Describe value chain, including satellite-level [financial] and 30,000-foot-level [operational] metrics.” I will now elaborate on the value chain portion of this step.


Figure 1: The nine steps of the Integrated Enterprise Excellence business management system

An organization’s value chain, as illustrated in figure 2, provides a visual representation of what the enterprise does (rectangles in the figure) and its performance measures of success (ovals in the figure) from the customer’s and the business’s point of view (i.e., cost, quality, and time).


Figure 2: Value chain with scorecard/dashboard metrics. Shaded areas designate processes that have subprocess drill-downs. (From Integrated Enterprise Excellence, Volume II – Business Deployment: A Leader’s Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard, Citius Publishing, 2008)

In this value chain, the rectangular boxes would provide clickable access to process steps, functional value streams, and procedural documents. The center series of rectangular boxes describes the primary business flow, and the boxes outside this series describe support functions (e.g., legal and finance).

With this approach to describing the enterprise, the organization chart is subordinate to the value chain. The value chain remains long-lasting even through organizational changes, although the functional procedures and metrics within it can evolve over time.

Metrics within a value chain must be aligned to how the business is conducted. This is in contrast to creating metrics around the organizational chart or strategic-plan objectives, both of which can change significantly over time. In addition, it is important not only to determine what should be measured but also to have a reporting methodology that leads to healthy behavior so that the organization as a whole benefits.

Creating good metrics that are predictive

Good metrics provide decision-making insight that leads to the most appropriate conclusion and action, or no action. In the article “Avoiding Company Decline,” the following good-metric characteristics were described: business alignment, honest assessment, consistency, repeatability and reproducibility, actionability, time-series tracking, predictability, and peer comparability.

Organizations often report performance levels using tables of numbers, stacked bar charts, or red-yellow-green reports in which red indicates that a goal or specification is not being met, and green indicates that current performance is satisfactory. These reporting formats lack many of the good-metric characteristics described above. In particular, none of them provides a predictive component, and all of them can lead to expensive, nonproductive firefighting. These limitations can be overcome with a 30,000-foot-level reporting system.

In 30,000-foot-level reporting, there are no calendar boundaries, and a prediction statement can be made when appropriate. For example, one might report that a current metric performance level is predictable because the process has been stable for the last 17 weeks, with an estimated nonconformance rate of 2.2 percent. For predictable processes, we expect this same level of nonconformance to occur in the future unless something is done to improve either the process inputs or the step-by-step execution of the process itself.

With this form of reporting, common-cause variability is separated from special-cause events at a high level. Also, from this 30,000-foot-level business perspective, typical variability from process input differences (e.g., variation in raw material from lot to lot, on different days of the week, from machine to machine, and from person to person) is considered common-cause input variability that should not be reacted to as though it were special cause.

The common management practice of determining what happened by sending someone to “fix the problem” can lead to much firefighting. More often than not, these firefighting activities yield minimal improvement because common-cause variability issues are treated as though they were special cause. Red-yellow-green scorecards, with tracking against goals throughout an organization, can sound attractive but can likewise lead to firefighting.

To illustrate this point, consider the red-yellow-green, continuous-data scorecard shown at the top of figure 3, which is from a corporation’s actual scorecard system, and its comparison to a 30,000-foot-level scorecard reporting system.

This metric reporting system has two steps. The first is to analyze for predictability; the second is to formulate a prediction statement when the process is considered predictable.


Figure 3: Comparison of a red-yellow-green scorecard to 30,000-foot-level predictive measurement reporting (histogram included for illustrative purposes only)

To determine predictability, the process is assessed for statistical stability using a 30,000-foot-level Individuals control chart, which can detect if the process response has changed over time or if it is stable.
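As a rough illustration of this stability assessment (a minimal sketch in Python with made-up weekly values, not the author's software or data), the control limits of an individuals chart can be computed from the average moving range, and any point beyond those limits would signal instability:

```python
# Minimal sketch of an individuals (X) control chart stability check.
# The weekly_metric values below are hypothetical, not from the article.
import numpy as np

def individuals_chart_limits(values):
    """Return (center, lower, upper) limits for an individuals control chart."""
    values = np.asarray(values, dtype=float)
    moving_ranges = np.abs(np.diff(values))  # successive |x[i+1] - x[i]|
    center = values.mean()
    mr_bar = moving_ranges.mean()
    # 2.66 = 3 / d2, with d2 = 1.128 for moving ranges of subgroup size 2
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

def is_stable(values):
    """Treat the process as stable if no point falls beyond the limits."""
    center, lower, upper = individuals_chart_limits(values)
    values = np.asarray(values, dtype=float)
    return bool(np.all((values >= lower) & (values <= upper)))

weekly_metric = [2.6, 1.9, 2.5, 2.1, 2.7, 2.3, 2.0, 2.4, 2.8, 2.2, 2.1, 2.6, 2.3]
print(individuals_chart_limits(weekly_metric))
print("Predictable (stable):", is_stable(weekly_metric))
```

A fuller assessment would also consider run rules; this sketch only checks for points beyond the 3-sigma limits.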

When there is a current region of stability, data from this last region can be considered random samples of the future. For this example, note how the 30,000-foot-level Individuals control chart in figure 3 indicates that nothing fundamental in the process has changed, even though the traditional red-yellow-green scorecard showed the metric frequently transitioning among red, yellow, and green. For the traditional scorecard, the performance level was red 5 out of the 13 recorded times.

Included in this figure is a probability plot that can be used to make a prediction statement. Much can be learned about a process through a probability plot. Let’s next examine some of these benefits.

The x-axis in this probability plot is the magnitude of a process response over the region of stability, while the y-axis is the percentage of values less than a given magnitude. A very important advantage of probability plotting is that data do not need to be normally distributed for a prediction statement to be made. The y-axis scale depends on the distribution type (e.g., normal or log-normal).

If the data on a probability plot closely follow a straight line, we act as though the data are from the distribution represented by the probability plot’s coordinate system. Estimated population percentages below a specification limit can be made by simply examining the y-axis percentage value, as shown in figure 3. For this case, we estimate that about 33 percent of the time, now and in the future, we will be below our specified criterion or goal of 2.2.
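As a companion sketch (again with hypothetical data, and assuming a normal distribution adequately describes the stable region; a log-normal fit would follow the same pattern with scipy.stats.lognorm), the percentage falling below the goal can be estimated by fitting the distribution and evaluating its cumulative distribution function at the goal:

```python
# Minimal sketch: estimate the nonconformance rate from the stable-region data.
# The data and the 2.2 goal are illustrative; a normal distribution is assumed.
import numpy as np
from scipy import stats

weekly_metric = np.array([2.6, 1.9, 2.5, 2.1, 2.7, 2.3, 2.0,
                          2.4, 2.8, 2.2, 2.1, 2.6, 2.3])
goal = 2.2  # responses below this value are considered nonconforming

# Fit the distribution implied by a roughly straight-line probability plot
mu, sigma = stats.norm.fit(weekly_metric)

# Estimated fraction of current and future values falling below the goal
p_below = stats.norm.cdf(goal, loc=mu, scale=sigma)
print(f"Estimated nonconformance rate: {p_below:.1%}")
```

For these made-up values the estimate comes out near 30 percent, in the same neighborhood as the article’s roughly one-third figure.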

There is a certain amount of technical training needed to create 30,000-foot-level metrics. See Chapters 12 and 13 of Integrated Enterprise Excellence, Volume III – Improvement Project Execution: A Management and Black Belt Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard (Citius Publishing, 2008). However, the interpretation of the chart is quite simple. In this reporting format, a box should be included below the chart that makes a statement about the process. For this chart we can say that the process is predictable with an approximate nonconformance rate of 32.8 percent. That is, using the current process, the metric response will be below the goal of 2.2 about one-third of the time.

Regarding business-management policy, red-yellow-green vs. this form of reporting can lead to very different behaviors. For this example, a red-yellow-green reporting policy would lead to fighting fires about 33 percent of the time because every time the metric turned red, management would ask the questions, “What just occurred? Why is our performance level now red?” while in actuality the process was performing within its predictable bounds. Red-yellow-green scorecards can result in counterproductive initiatives, 24/7 firefighting, the blame game, and proliferation of fanciful stories about why goals were not met. In addition, these scorecards convey nothing about the future.

With 30,000-foot-level performance metric reporting, we gain the understanding that the variation in this example is from common-cause process variability and that the only way to improve performance is to improve the process itself. With this system, someone would be assigned to work on improving the process associated with this metric. This assumes that improving this metric is where effort is most needed to improve business performance as a whole.

In organizations, the value chain functions and metrics should maintain basic continuity through acquisitions and leadership change. The value chain with its 30,000-foot-level metric reporting can become the long-lasting front end of a system and baseline assessment from which strategies can be created and improvements made.


About The Author


Forrest Breyfogle—New Paradigms

CEO and president of Smarter Solutions Inc., Forrest W. Breyfogle III is the creator of the integrated enterprise excellence (IEE) management system, which takes lean Six Sigma and the balanced scorecard to the next level. A professional engineer, he’s an ASQ fellow who serves on the board of advisors for the University of Texas Center for Performing Excellence. He received the 2004 Crosby Medal for his book, Implementing Six Sigma. E-mail him at forrest@smartersolutions.com.

Comments

Confusion over "stoplights"....

With all due respect, I believe you are missing the point on why and how "stoplight" reporting is intended to work.

From my experience, the colors on a stoplight chart indicate when a process is unstable or producing undesirable results (red), when to "keep an eye on things" (yellow), and when the process is stable and producing expected results (green).

The underlying data collection and analysis determine how well the "stoplight" guides the management of the company to focus on the critical few items that are needed to achieve a future state of "better" for the company.

In your example, ~33% of the results are "nonconforming" (i.e., they don't meet the expectation). If a ~33% defect rate is acceptable to the business and stable, I would expect to see this coded "green" until there was a shift in the control chart indicating something is moving the process to a higher defect rate, at which point it would be coded "red." "Yellow" could be used for shifts indicating lower defect rates or for early signs of a shift or trend (e.g., three or four consecutive increasing points on the I-chart, not enough for a standard Western Electric trend rule, but enough to "keep an eye on it"). Realizing that a 33% defect rate is likely unacceptable to a company, I would then expect that measurement to be "red" until changes were in effect to reduce the rate (changing perhaps to yellow until there has been sufficient time to establish "stable and acceptable" before changing to green).

In most companies I have worked in, presenting the accumulation of charts and graphs behind a "color" would glaze eyes and send managers scrambling for their electronic distractions (BlackBerrys, iPhones, etc.), while a crisp red/yellow/green view helps focus attention on the critical few items that need attention and support. The managers who require more detail can then review the underlying analysis in as much depth as they need to understand the issue.

I believe it is our challenge to provide this "roll-up" from detailed analysis to a 30,000-foot/satellite view without sacrificing the simplicity of the "stoplight" approach.

 

Stop Lights

I beg to disagree with you, Mr. Johnson. Stop Lights and "Action Limits" are nothing more than tools to increase tampering. Stop Lights give no insight into the underlying system and how it is operating. Managers who are too darn lazy to learn the basics of proper data analysis to gain the proper insight before taking action need to find some other career where their tampering will result in less devastating effects.

Stop Light Scorecard and uses

Mr. Moore and I agree about the oversimplification and use of metrics in companies, which can lead to much firefighting. Mr. Johnson, this was a real scorecard from a major company; i.e., it is one of many from a spreadsheet. What this example illustrates is how stoplight scorecards can end up treating common-cause variability as though it were special cause. This is a big deal. If we have common-cause variability and we do not like the response, we need to improve the overall process to achieve a more desirable output. The proof that such an improvement was made is that the control chart shifts to a new, improved level of performance. Everyone seems to agree that business complexity is growing; however, what is missing is a business management system with scorecards that can address this complexity.