Forrest Breyfogle—New Paradigms

Quality Insider

A Predictive Approach to Risk Management

Enhancing SPC techniques for enterprise risk management

Published: Monday, December 19, 2011 - 17:47

Organizations need a systematic approach for risk containment when quality, delivery, and design issues occur in their products and services. Such a system should also help them recover quickly from errant decisions made by executives, operations personnel, and the quality department.

This article describes how well-chosen metrics can help mitigate these risks if the measurements contain good tracking and reporting methods that lead to the most appropriate action. Many of these ideas are covered in my books, Integrated Enterprise Excellence Volume II—Business Deployment: A Leaders’ Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard, and Integrated Enterprise Excellence Volume III—Improvement Project Execution: A Management and Black Belt Guide for Going Beyond Lean Six Sigma and the Balanced Scorecard (both Citius Publishing Inc., 2008).

There are eight metric characteristics for achieving the objectives described above:
Business alignment. Metrics consume resources for both data collection and analysis. Measurements need to provide insight into business performance, its issues, and its needs. For an enterprise, organizational charts typically change over time, but basic high-level functional activities do not. Metrics need to quantify business operational performance at both the high and drill-down levels.
Honest assessment. Creating metrics that flatter the organization’s or a person’s performance can be detrimental to the organization. Metrics need to provide an honest assessment, whether good, bad, or ugly.
Consistency. Identified components in any metric must be defined at the outset and remain constant. Criteria and calculations need to be consistent with respect to time.
Repeatability and reproducibility. Measurements should have little or no subjectivity. A recorded measurement response should have little or no dependence on when and who recorded it.
Actionability. Often measures are created simply for the sake of measuring. Include only those metrics that will be acted on, i.e., either to remove a degradation problem or to maintain the gain. When a metric's response is unsatisfactory, organizations should be prepared to conduct root cause analysis and corrective or preventive actions.
Time-series tracking. Metrics should be captured in time-series format, not as a snapshot of a point-in-time activity. Time-series tracking can describe trends and separate special cause from common cause variability in predictable processes.
Predictability. A predictability statement should be made when time-series tracking indicates that a process is predictable.
Peer comparability. In addition to internal performance measurements, benefits are achieved when comparisons can be made between peer groups in another business or company. A good peer comparison provides additional analysis opportunities, which can identify improvement possibilities.
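The time-series tracking and predictability characteristics above can be illustrated with a minimal sketch. The code below builds an individuals (XmR) chart, a standard way to separate special-cause signals from common-cause variability; the monthly error counts are invented for illustration and are not from the article.

```python
# Minimal XmR (individuals and moving range) chart sketch.
# Data are hypothetical monthly error counts, invented for illustration.
monthly_errors = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 27, 14]

# Center line: mean of the individual values
mean_x = sum(monthly_errors) / len(monthly_errors)

# Average moving range between consecutive points
moving_ranges = [abs(b - a) for a, b in zip(monthly_errors, monthly_errors[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Conventional XmR limits: mean +/- 2.66 * average moving range
ucl = mean_x + 2.66 * mr_bar
lcl = mean_x - 2.66 * mr_bar

# Points outside the limits are special-cause signals; everything
# inside them is routine, common-cause variability.
special_causes = [(i, x) for i, x in enumerate(monthly_errors)
                  if x > ucl or x < lcl]
print(f"Limits: {lcl:.1f} to {ucl:.1f}; special-cause points: {special_causes}")
```

With these invented data, only the spike of 27 errors falls outside the limits; the month-to-month wiggles are common cause, so a predictability statement could be made from the remaining stable region.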

Applying all eight characteristics when selecting metrics may seem difficult, but it doesn’t have to be. If your organization uses metrics that are variances or deviations from plans, then yes, it will be difficult to meet the eight characteristics. Jean Cunningham and Orest Fiume wrote, in detail, about these issues in their book Real Numbers (Managing Times Press, 2003). Too many organizations measure themselves with variances to plans, which are metrics that people can learn to “game” so that they look better without changing performance.

To avoid this issue, simple metrics must be reported in a transparent method. These simple and direct metrics cannot be gamed or misrepresented. Measurements must describe real performance, e.g., time, expenditures, revenue, counts, and error rates. Organizations need to avoid metric reporting that offers an adjustment to a plan or goal.

One client of mine struggled with decreasing work throughput and delayed deliveries. An evaluation of its project management methods showed that every project was reported in terms of deviation from plan rather than tracking the time of each primary process function. To compound the problem, we discovered that the production plan was rebuilt after every major disruption and significant customer complaint in an effort to “re-baseline” the company’s processes. A review of the past production plans showed that the client was increasing the cycle-time estimates for every operation to match the actual time taken. Can you imagine how that was turning out for the client? Shifting to a series of metrics measuring the actual time of each functional process led them to drive improvements that kept the company in business.

I also worked with a nonprofit organization that tracked its revenue from donations only in a cumulative plot by fiscal year. Everything would look fine during the year, but as the fiscal year came to a close, the organization would realize its issues. The cumulative plots were hiding a significant drop in monthly revenue due to a policy change earlier in the year. A change to tracking actual monthly revenue clearly showed the issue, and it could be corrected before services were impacted.
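The nonprofit story can be reproduced with a small sketch. The monthly figures below are invented, but they show the mechanism: a cumulative plot keeps rising no matter what, while the monthly series exposes a drop the moment it happens.

```python
# Why a cumulative revenue plot hides a mid-year drop.
# Hypothetical monthly donation figures; revenue falls after a
# policy change starting in month 7 (all numbers invented).
monthly = [100, 105, 98, 102, 110, 104, 70, 68, 72, 69, 71, 70]

cumulative = []
total = 0
for m in monthly:
    total += m
    cumulative.append(total)

# The cumulative series still rises every month, so nothing looks wrong ...
assert all(b > a for a, b in zip(cumulative, cumulative[1:]))

# ... but tracking actual monthly revenue shows the drop immediately.
baseline = sum(monthly[:6]) / 6          # average of the pre-change months
drop = [(i + 1, m) for i, m in enumerate(monthly) if m < 0.8 * baseline]
print(f"Months below 80% of baseline: {drop}")
```

The cumulative plot satisfies none of the actionability or time-series characteristics above; the simple monthly series satisfies both.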

These stories may seem simplistic because the issues are obvious in hindsight. In a typical setting, identifying the best simple metrics and selecting the reporting method do take some effort and a systematic approach. The Integrated Enterprise Excellence (IEE) system provides a clear and easily defended method to both identify the metrics and determine how to report performance.

The recommended method to identify good performance metrics involves the use of the IEE value chain. Figure 1 illustrates how this performance map highlights key functions, which are separate from the organizational structure, and identifies a few high-level metrics that represent the function’s true performance. In the IEE value chain, key functions are represented as rectangular boxes, which can be drilled down to their basic procedures, while the high-level performance metrics of these functions are reported by clicking the oblong boxes.

Figure 1: The Integrated Enterprise Excellence (IEE) value chain (from Integrated Enterprise Excellence, Volume II)

After identifying the metrics that clearly demonstrate performance of the value chain functions, the organization needs to decide on its reporting method. The common solution is to generate a red-yellow-green scorecard using Excel or other software. This method is popular because of its simplicity and the stoplight analogy; however, it creates many problems. One is that the color decision incorporates variance to a target or goal, a reporting issue discussed earlier. With stoplight scorecards, each reported value is a one-time assessment. If the goal is to use performance metrics that aid in managing performance risks, why would someone choose a reporting method that is unable to evaluate future performance?

A high-level reporting method addresses this issue, offering a performance metric that can be assessed for both capability and performance over time. If risk assessment is an important element of a performance reporting system, then concern about future risks must also be addressed. High-level reporting provides a focus on future performance, although that is not how organizations typically assess performance.

At the business level, traditional scorecards such as tables of numbers, pie charts, traditional line charts, and stoplight scorecards lack one or more of the previously described good-metric characteristics. High-level reporting provides all these characteristics, including a predictive component when appropriate. The benefit of providing a prediction statement is that if an organization does not like what is anticipated, it can adjust the process to reduce the risk of future undesirable occurrences.

Functional areas of a company might use X-bar charts, R charts, or p-charts, along with a process capability statement, to monitor process behavior. However, these forms of control charting and process performance statements can have issues, such as those described in Donald J. Wheeler’s article, “What About p-Charts?” The reported metrics can be highly dependent on the sampling and reporting process. Also, these metrics lack a predictive component and so fall short of the eight characteristics of good metrics for managing risks.

An enhancement to SPC techniques and stoplight scorecards

Organizations can use SPC charting to identify when special cause conditions occur so that appropriate actions can be taken to identify and resolve these identified issues. However, a high-level control charting approach is different in that it is used to assess business performance behavior, not to manage a process. This is a key difference, which is more important than it may initially appear.

With a high-level performance reporting approach, individuals control charts are created so that common-cause input variability, which is used in control-limit calculations, occurs between sample periods. This differs from the traditional SPC charting method, which can produce false special-cause signals because the underlying control chart assumptions are often not valid from a practical, high-level point of view. For example, the p-chart assumes that the probability of nonconformity, p, is the same for every unit. The high-level charting technique is applicable throughout organizations, unlike SPC, which is most often applied in manufacturing.
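The difference can be made concrete with a sketch comparing the two limit calculations on the same data. The weekly nonconformance proportions and subgroup size below are invented; the point is that p-chart limits, which assume a constant p for every unit, become very tight for large subgroups, while limits derived from week-to-week moving ranges treat that between-period variation as common cause.

```python
import math

# Compare p-chart limits with individuals-chart limits on the same
# weekly nonconformance proportions (hypothetical data, n = 5000/week).
n = 5000
props = [0.30, 0.34, 0.31, 0.36, 0.29, 0.33, 0.35, 0.30, 0.32, 0.34]
p_bar = sum(props) / len(props)

# p-chart limits assume p is identical for every unit; with large n the
# limits become so tight that routine week-to-week variation looks "special".
p_sigma = math.sqrt(p_bar * (1 - p_bar) / n)
p_signals = [p for p in props if abs(p - p_bar) > 3 * p_sigma]

# Individuals-chart limits derive sigma from week-to-week moving ranges
# (divided by the d2 constant 1.128), so between-period variation is
# treated as common cause.
mrs = [abs(b - a) for a, b in zip(props, props[1:])]
x_sigma = (sum(mrs) / len(mrs)) / 1.128
x_signals = [p for p in props if abs(p - p_bar) > 3 * x_sigma]

print(f"p-chart flags {len(p_signals)} points; "
      f"individuals chart flags {len(x_signals)}")
```

With these invented numbers the p-chart flags half the weeks as special cause while the individuals chart flags none, which is the false-signal behavior described above.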

The high-level reporting system first assesses whether a process has regions of statistically determined stability. If there is a recent region of stability, the process can be declared predictable, and a prediction statement can be made using data from that stable region. Because the prediction is based on data from the current, stable business process, it is reasonable to assume it remains valid into the future, at least until the business process changes.

If an organization determines that there is a business risk based on current high-level performance metrics, then process improvement is needed. Another form of business risk occurs when the high-level individuals chart indicates an out-of-control condition, at which time it is possible to conclude that the business process is not predictable. For this situation, an SPC chart may also stimulate process changes that lead to the return of a stable or predictable state with a potentially improved level of performance. An improvement to a process is considered significant when the high-level control chart moves to a new, improved level of performance; i.e., a risk reduction.

Comparing high-level predictive scorecards to stoplight dashboards

A company was using red-yellow-green stoplight scorecards in its management system. One of its measurements is shown in figure 2. For this example, there were red indicators for five of the 13 reporting periods; the corresponding high-level report is shown in the same figure.

Figure 2: Red-yellow-green scorecard vs. high-level reporting (from Integrated Enterprise Excellence Volume II)

From an examination of the high-level report-out portion of the figure, we would need to improve the process, assuming that a 33-percent level of nonconformance is a risk to future business performance.

As noted earlier, when an organization uses a red-yellow-green scorecard system, it takes action whenever the color is red. The high-level reporting, however, indicates that in this example all the red-colored events are common-cause variability; the perceived improvements when changing from red to green were simply common-cause occurrences. From the high-level reporting, we can conclude that no process improvements were made, even though the color changed from red to green several times. The statement in the lower portion of the graph summarizes the common-cause performance of this process as an approximately 32.6 percent nonconformance rate.
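A minimal sketch shows how the two report-outs reach different conclusions from the same data. The 13 nonconformance rates below are invented, not the figure 2 data, but they reproduce the pattern: a stoplight scorecard colors each period independently against a goal, while an individuals chart on the same series finds no special-cause signals and instead yields a single predicted common-cause rate.

```python
# Stoplight scorecard vs. high-level (individuals chart) report-out.
# Hypothetical nonconformance rates for 13 periods; the goal is 30 percent.
rates = [0.35, 0.28, 0.29, 0.36, 0.29, 0.27, 0.34,
         0.30, 0.27, 0.38, 0.28, 0.29, 0.32]
goal = 0.30

# Scorecard view: each period is red or green on its own.
reds = sum(1 for r in rates if r > goal)

# High-level view: moving-range-based limits on the whole series.
mean_r = sum(rates) / len(rates)
mrs = [abs(b - a) for a, b in zip(rates, rates[1:])]
sigma = (sum(mrs) / len(mrs)) / 1.128   # d2 constant for n = 2
special = [r for r in rates if abs(r - mean_r) > 3 * sigma]

print(f"{reds} red periods, but {len(special)} special-cause signals; "
      f"predicted common-cause rate = {mean_r:.1%}")
```

Five periods show red, yet none of them falls outside the control limits: every red-to-green change is common-cause variability, and the honest summary is a single predicted nonconformance rate rather than 13 separate verdicts.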


With its predictive scorecards, IEE provides a means of self-evaluation that organizations can use to reduce their exposure to the current and future risks that will invariably occur in business over time.


About The Author

Forrest Breyfogle—New Paradigms

CEO and president of Smarter Solutions Inc., Forrest W. Breyfogle III is the creator of the integrated enterprise excellence (IEE) management system, which takes lean Six Sigma and the balanced scorecard to the next level. A professional engineer, he’s an ASQ fellow who serves on the board of advisors for the University of Texas Center for Performing Excellence. He received the 2004 Crosby Medal for his book, Implementing Six Sigma. E-mail him at forrest@smartersolutions.com