But the Limits Are Too Wide!

What to do when the XmR chart doesn’t seem to work

Donald J. Wheeler
Mon, 10/20/2025 - 12:03

When administrative and managerial data are placed on an XmR chart, the first reaction will frequently be that the limits are far too wide: “We have to react before we get to that limit.”


So what are we to do when this happens? Are the limits really too wide? There are three cases to consider:
• The data are full of noise
• The data are full of signals
• The data alternate between different processes

The data are full of noise

Administrative and managerial data tend to be report card data. The measures are accumulated across departments, across plants, across regions, and even across countries before being presented to managers for consumption. At each stage, as the data are aggregated, the noise of each data stream is also being aggregated into the total, so by the time the finished value is presented, it has a lot of noise in the background. For example, consider the quarterly sales values for one company shown in Figure 1.


Figure 1: X Chart for quarterly sales for all regions for years 1 to 5

If sales ever reached the upper limit on this X chart, the company would be the darling of Wall Street. Conversely, if sales dropped to the level of the lower limit, the company would be in serious trouble.

So are these limits too wide? Not really. These wide limits are simply warning the reader that these data are so full of noise that you’ll never be able to pinpoint an explanation for why the sales have gone up or down. While you may believe that the advertising campaign helped, these data contain far too much noise to allow you to demonstrate that the advertising campaign had an effect on sales.

In short, report card data result in report card charts, and although a report card will tell you where you are, it doesn’t tell you how you got there. Nor does it identify what needs to be done to change things.

Consider the regional sales that were combined to give the graph in Figure 1. These six time series are shown on their own X charts in Figure 2. There we see many signals of changes that were occurring at the regional level. However, when these six time series were combined into the report card measure in Figure 1, all of the signals of change were lost in the noise. Thus, one way to deal with data that have very wide limits is to disaggregate the time series into its constituent components and consider each component separately.


Figure 2: X charts for quarterly sales by region, years 1 to 5

So although report card charts are valid, they’ll often have very wide limits because the data are full of noise. The wide limits serve as a warning that it’s virtually impossible to assign an explanation to why a point goes up or why it goes down—the routine variation inherent in the data stream itself is sufficient to be the cause of any particular increase or decrease. While movement in one direction may be good, and movement in the other direction may be bad, the metric itself is too full of noise to be used to run the business.

Any failure to appreciate this point will result in the interpretation of noise as if it amounted to signals, which is the first type of mistake that can be made in interpreting data.
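The way aggregation swamps regional signals can be seen with a little arithmetic. A minimal sketch, assuming six regions with independent routine variation of equal size (an assumption made purely for illustration; the article's regions need not be equal):

```python
import math

sigma = 1.0        # routine variation in each regional series (assumed equal)
n_regions = 6
shift = 3 * sigma  # a shift big enough to show clearly on a regional X chart

# Independent noise adds in quadrature when the six regions are summed
sigma_total = sigma * math.sqrt(n_regions)

# The same regional shift, measured against the noise of the company total
relative_shift = shift / sigma_total
print(round(relative_shift, 2))  # about 1.22 -- well inside three-sigma limits
```

A change large enough to fall outside the limits on a regional chart moves the company total by barely more than one unit of its own routine variation, which is why the signals visible in Figure 2 vanish in Figure 1.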

The data are full of signals

Because of the way the limits are computed, they can become inflated whenever changes become commonplace.

Moving ranges are used to separate the exceptional variation from the routine variation. By computing limits using either the average or the median of the moving ranges, we hope to dilute the effect of any exceptional variation that’s present. Thus, in the end, the computations implicitly assume that exceptional variation will be an occasional thing. As more and more of the moving ranges are affected by some assignable cause, it will become harder to distinguish between exceptional variation and routine variation. At some point, the exceptional variation may become so commonplace that it will look like routine variation to the computations.
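The computation described above can be made concrete. A minimal sketch using the standard scaling factor of 2.66 for the average moving range (the data values in the usage line are hypothetical):

```python
def xmr_limits(x):
    """Natural process limits for an X chart:
    average(x) +/- 2.66 * (average moving range),
    where 2.66 = 3 / 1.128, the bias correction for ranges of n = 2."""
    mr = [abs(b - a) for a, b in zip(x, x[1:])]  # moving ranges
    mr_bar = sum(mr) / len(mr)                   # average moving range
    x_bar = sum(x) / len(x)                      # central line
    return x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar

lower, center, upper = xmr_limits([10, 12, 11, 13, 12])  # hypothetical data
print(round(lower, 2), center, round(upper, 2))  # 7.61 11.6 15.59
```

Using the median moving range instead (with a scaling factor of about 3.14) further dilutes the effect of occasional large moving ranges, but as the text notes, neither version can cope when the exceptional variation becomes commonplace.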

Last month’s article contained an example of this problem. The chart for Plant A appeared to be predictable, but the fact that the instrument was recalibrated after 88 of the 100 measurements resulted in inflated limits.

So, how can we tell if the data are full of signals? In some cases, the context will provide the key. In others, we’ll need to have a reference period where the process is operated without the influence of the assignable cause.

As an example of the first case, where the context will provide a clue that the data are full of signals, we use the data for neonatal autopsies for one hospital over a 10-year period. The data and X chart are shown in Figure 3.


Figure 3: X chart for neonatal autopsies

The limits of 99.4% and 23.4% provide no discrimination here. They are far too wide for any practical purpose. However, any process that changes from 92% in one year to 56% in another year, and then to 36% in a subsequent year, is clearly not the same from year to year. The limits don’t show these changes because these data are full of signals. Fortunately, limits aren’t needed here because common sense is sufficient. The first principle for understanding data is that no data have any meaning apart from their context. This means that context must always be the starting point for any analysis. Here, the context tells us that this process is changing from year to year.

So how does this differ from the situation where the data are full of noise? When we’re working with a highly aggregated metric, wide limits are most likely to be due to excessive noise. When we’re working with a simple, localized metric, wide limits may be due to changing conditions from period to period. Judgment, experience, and contextual knowledge are required.

With annual values, it’s often helpful to break the data down into shorter time periods. Given the small counts in Figure 3, quarterly summaries would be about as far as we should go (monthly values would involve very small counts indeed). Figure 4 shows what the quarterly data for neonatal autopsies might look like.


Figure 4: Neonatal autopsies by quarter

If we use the first year as our baseline, the XmR chart will have an average of 0.908 and an average moving range of 0.139. This will give a lower limit for the X chart of 0.538, and an upper limit that exceeds 1.00. With limits of 54%–100%, this chart will hardly provide the most precise analysis. But it is sufficient to begin to tell the story contained in the data, as seen in Figure 5.


Figure 5: X chart for quarterly neonatal autopsies

With three out of four points closer to the lower limit than the central line, Year 2 can be said to be different from Year 1. This impression is further confirmed when the first point of Year 3 falls below the lower limit.

But wait: The limits in Figure 5 are based on only 4 points! Yes, that’s true, but this analysis is sufficient to show that a change has occurred. The objective is to gain insight and to share it with others; to this end, the best analysis is the simplest analysis that provides the needed insight. It’s not a matter of using the right amount of data or computing the best estimates of the limits, but rather using the data in context to tell the story of what’s happening. The chart in Figure 3 failed to do this because it used annual values, and the year-to-year changes inflated the limits. The chart in Figure 5 succeeds in doing this because, in spite of the small counts involved and the small number of points used in computing the limits, the limits weren’t inflated by the year-to-year differences.
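The baseline computation behind Figure 5 is simple enough to check directly, using the values quoted above:

```python
x_bar = 0.908    # average of the four Year 1 quarterly proportions
mr_bar = 0.139   # average moving range for Year 1

lower = x_bar - 2.66 * mr_bar
upper = x_bar + 2.66 * mr_bar
print(round(lower, 3), round(upper, 3))  # 0.538 1.278
```

Since the upper limit exceeds 1.00, for a proportion it is simply reported as 100%, giving the 54%–100% limits described in the text.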

In fact, the chart in Figure 5 can be improved by using multiple sets of limits to tell the story. Since years 2 and 3 look different from Year 1, use them to compute new limits. Now years 4, 5, 6, and 7 may be seen to be detectably different from years 2 and 3. So, compute new limits using years 4–7. Now years 8 and 9 are seen to be detectably different from years 4, 5, 6, and 7. So compute new limits using years 8 and 9. Now we see that Year 10 is detectably different from years 8 and 9, and so we compute new limits using Year 10. In this way, we end up with Figure 6.


Figure 6: Revised X chart for quarterly neonatal autopsies

In Year 1, the autopsy rate was more than 90%. In years 2 and 3, it dropped to 70%. In years 4–7, it dropped to 58%. In years 8 and 9, it dropped to 36%. In Year 10 it went back up to 65%.
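The stepwise recomputation described above amounts to computing ordinary XmR limits within each phase. A minimal sketch; the helper name `phased_limits` is hypothetical, and the breakpoints are chosen by the analyst from context, not produced by the formula:

```python
def phased_limits(x, breaks):
    """Compute X chart limits separately within each phase of a series.
    `breaks` lists the indices where the analyst judges the process changed."""
    edges = [0] + list(breaks) + [len(x)]
    limits = []
    for start, end in zip(edges, edges[1:]):
        seg = x[start:end]
        mr = [abs(b - a) for a, b in zip(seg, seg[1:])]  # moving ranges in phase
        mr_bar = sum(mr) / len(mr)
        x_bar = sum(seg) / len(seg)
        limits.append((x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar))
    return limits

# Two phases of hypothetical data with a clear level shift at index 4
print(phased_limits([10, 10, 10, 10, 20, 20, 20, 20], [4]))
# [(10.0, 10.0, 10.0), (20.0, 20.0, 20.0)]
```

Each set of limits then describes one phase of the process rather than averaging the year-to-year differences into a single inflated band.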

None of the limits obtained here is very tight. This is because these data still contain a substantial amount of noise. However, this doesn’t stop us from telling the story in these data when we disaggregate the annual summaries and use the limits intelligently.

Here the interesting question is what happened at the end of years 1, 3, 7, and 9. It turns out that each of these points corresponds to a change in personnel. In this hospital, it was the job of the chaplain to obtain permission for a neonatal autopsy, and the different chaplains did this job differently.

An important point

The primary question of data analysis is one of homogeneity. If the data are reasonably homogeneous, then we’re right to assume that the underlying process is being operated predictably, and to use the central line, limits, and other statistics computed from the data to characterize that underlying process.

However, if the data show evidence that the process is changing, the focus shifts from using the computed values to characterize a single process to one of using the computed values to detect when and how the process is changing. Until we understand the story told by the data, we won’t know how to operate the process up to its full potential. Here, there’s no such thing as computing the right limits or having enough data.

In order for an XmR chart to work, it’s important that successive values be logically comparable. This means, among other things, that the conditions under which the successive values are obtained will need to have remained the same from period to period.

If the time periods have been made so large that the system is bound to have changed from one period to the next, or if the data were collected while the process was deliberately being adjusted or changed, then the resulting data are likely to be full of signals, and the limits may be inflated by these signals of change.

The data represent different processes

This is simply a special case of the problem where the data are full of signals. Here, the data occur in a natural time order, yet they represent two or more conditions, resulting in an apples-to-oranges time series.

The example for this case comes from an allergist who had his patients track their lung congestion using a peak expiratory flow rate (PEFR) gauge. The patient would exhale as hard as possible through the gauge, and it would record the flow rate in liters per minute. The protocol consisted of the patient getting one flow rate value in the morning and one flow rate value in the evening. The morning reading was to be obtained prior to taking any medication. The evening reading was to be taken 15 minutes after using the bronchodilator inhaler. Thus, this sequence of values represented two states: a.m. pre-med and p.m. post-med. To represent a time series, the physician tried to place these values on an XmR chart. The resulting X chart for one patient is shown in Figure 7.


Figure 7: X chart for PEFR scores for one patient

The physician couldn’t make sense of these wide limits. The upper limit was unreasonable for this patient, and the lower limit was nonsense. Of course, these wide limits are a result of the large daily swings. This chart violates the guideline given above. No progress was made in using these charts as an adjunct to clinical practice until the physician started charting the a.m. pre-med values alone. When he did this, the limits began to make clinical sense.
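Separating an interleaved series like this is mechanically trivial once the problem is recognized. A sketch with hypothetical PEFR values in liters per minute, alternating a.m. pre-med and p.m. post-med:

```python
# Hypothetical alternating readings: a.m., p.m., a.m., p.m., ...
readings = [310, 420, 305, 410, 320, 430, 300, 415]

am_pre_med = readings[0::2]   # chart these values on their own XmR chart
pm_post_med = readings[1::2]  # and these on a second chart, if needed

print(am_pre_med)   # [310, 305, 320, 300]
print(pm_post_med)  # [420, 410, 430, 415]
```

Each stream now represents a single condition, so the moving ranges reflect day-to-day variation rather than the medication effect.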

Summary

When the limits seem to be too wide to be practical, it’s important to determine whether this is because the data are full of noise or because they are full of signals.

If the data are full of signals, then the wide limits are incorrect, and the organization of the chart is at fault. Here, it’s up to the user to collect and organize the data in such a way that the charts can be useful.

If the data are full of noise, then the measure will be of little use in running the business. Although the chart may serve as a report card, it can’t be used to identify what caused the values to change. Here, it’s not the chart that’s at fault but rather the idea that you can make use of the data in spite of the noise they contain. The failure to understand this point will inevitably result in Mistake No. 1, where noise is interpreted as a signal.

It’s interesting to note that Mistake No. 1 is encouraged by the emphasis on looking at all of the current values together on a monthly “dashboard.” While this does encourage the use of multiple measures, rather than reacting to each value separately, it still lacks the filter that’s needed to separate the signals (where there’s something to be learned) from the noise (where there’s no change from previous months). With the monthly dashboard, any two numbers that aren’t the same are thought to be different. Unfortunately, although this is true when it comes to arithmetic, it’s not true when it comes to interpreting data. In this world, two different numbers may well represent the same thing.

All data contain noise.

Some data also contain signals.

Until you can differentiate between the noise of routine variation and the signals of exceptional variation, you’re likely to be misled by the noise.

Donald Wheeler’s complete “Understanding SPC” seminar may be streamed for free; for details, see spcpress.com.
