

By: Donald J. Wheeler

Setting the process aim is a key element in the short production runs that characterize the lean production of multiple products. Last month, in part one, we looked at how to use a target-centered XmR chart to reliably set the aim. This column will describe aim-setting plans that use the average of multiple measurements.

The necessity of process predictability

All effective aim-setting procedures are built upon the notion of a process standard deviation. Some estimate of this process dispersion parameter is used in determining the decision rules for adjusting, or not adjusting, the process aim. When a process is operated predictably, this idea of a single dispersion parameter makes sense.
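
As a rough sketch of the general idea (not the specific aim-setting plans this column develops), the Python code below checks whether the average of n setup measurements falls inside a do-not-adjust zone of target ± z·sigma/√n, with sigma estimated from a baseline moving range. The constant z, the function names, and the example values are illustrative assumptions.

```python
import statistics

def sigma_from_moving_ranges(baseline):
    """Estimate the process standard deviation from a baseline run,
    using the average moving range divided by d2 = 1.128 (for n = 2)."""
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    return statistics.mean(moving_ranges) / 1.128

def aim_adjustment(measurements, target, sigma_x, z=3.0):
    """Illustrative averages-based aim-setting check.

    The average of n setup measurements is compared with a zone of
    target +/- z * sigma_x / sqrt(n).  If the average falls outside
    this zone, an adjustment of (target - average) is suggested;
    otherwise the aim is left alone.
    """
    n = len(measurements)
    avg = statistics.mean(measurements)
    half_width = z * sigma_x / (n ** 0.5)
    if abs(avg - target) > half_width:
        return target - avg   # adjust the aim by this amount
    return 0.0                # leave the process alone

# Example: estimate sigma from a baseline run, then check a setup
baseline = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0]
sigma_x = sigma_from_moving_ranges(baseline)
print(aim_adjustment([10.6, 10.8, 10.5, 10.7], target=10.0, sigma_x=sigma_x))
```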


Figure 1: When statistics serve as estimates


By: Gleb Tsipursky

Many companies are shifting their employees to working from home to address the Covid-19 pandemic. Yet they're not considering the potential quality disasters that can occur as a result of this transition.

An example of this is what one of my coaching clients experienced more than a year before the pandemic hit. Myron is the risk and quality management executive in a medical services company with about 600 employees. He was one of the leaders tasked by his company’s senior management team with shifting the company’s employees to a work-from-home setup, due to rising rents on their office building.

Specifically, Myron led the team managing the risk and quality issues associated with moving all 600 employees to telework, because of his previous experience helping small teams of three to six people in the company transition to working from home. Assisting a much larger number of people in far more diverse roles was proving to be a challenge. So was the short time available for the project: only four weeks, the result of a failed negotiation with the landlord of the office building.


By: Jay Arthur—The KnowWare Man

Story update 5/6/2020: The charts and some data have been updated to reflect the data available on the date this article was published.

During the Covid-19 stay-at-home order in Colorado, I've become increasingly frustrated by Covid-19 charts. Most of what I see are cumulative column charts, which don't give any real insight into what's going on. Are we really flattening the curve?

So I decided to use the state's Covid-19 statistics for Colorado and Denver County and see what I could learn using control charts. Control charts have been around for almost 100 years. They use formulas to calculate control limits that encompass roughly 99.7 percent of the data points from a predictable process. This makes it easy to monitor any process and detect process shifts and "out of control" conditions.
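
The column does not reproduce those formulas, but for an individuals (XmR) chart, one common form of control chart, the limits are the average of the individual values plus or minus 2.66 times the average moving range. Here is a minimal Python sketch; the daily counts are made up for illustration and are not the Colorado data.

```python
import statistics

def xmr_limits(values):
    """Natural process limits for an individuals (XmR) chart:
    X-bar +/- 2.66 * average moving range."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = statistics.mean(values)
    mr_bar = statistics.mean(moving_ranges)
    return x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar

# Made-up daily new-case counts, for illustration only
daily_new_cases = [112, 98, 130, 125, 141, 108, 119, 153, 137, 122]
lcl, center, ucl = xmr_limits(daily_new_cases)
signals = [x for x in daily_new_cases if x > ucl or x < lcl]
print(f"LCL = {lcl:.1f}, center = {center:.1f}, UCL = {ucl:.1f}, signals = {signals}")
```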


Source: https://covid19.colorado.gov/case-data

By: Donald J. Wheeler, Al Pfadt

Each day we receive data that seek to quantify the Covid-19 pandemic. These daily values tell us how things have changed from yesterday, and give us the current totals, but they are difficult to understand simply because they are only a small piece of the puzzle. And like pieces of a puzzle, data only begin to make sense when they are placed in context. And the best way to place data in context is with an appropriate graph.

When using epidemiological models to evaluate different scenarios, it is common to see graphs that portray the number of new cases, or the demand for services, each day.1 Typically, these graphs look something like the curves in figure 1.
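
As a rough illustration of how such curves are generated, the sketch below runs a bare-bones, discrete-time SIR model under two assumed contact rates. The population size, transmission and recovery rates, and initial conditions are invented for illustration and are not taken from the models cited in this article.

```python
def sir_new_cases(population, beta, gamma, days, initial_infected=10):
    """Discrete-time SIR model; returns the number of new cases per day."""
    s, i = population - initial_infected, initial_infected
    new_cases = []
    for _ in range(days):
        infections = beta * s * i / population   # new infections today
        recoveries = gamma * i
        s -= infections
        i += infections - recoveries
        new_cases.append(infections)
    return new_cases

POP, GAMMA, DAYS = 1_000_000, 0.1, 200
unmitigated = sir_new_cases(POP, beta=0.35, gamma=GAMMA, days=DAYS)
mitigated = sir_new_cases(POP, beta=0.18, gamma=GAMMA, days=DAYS)
print("unmitigated peak:", round(max(unmitigated)), "on day", unmitigated.index(max(unmitigated)))
print("mitigated peak:  ", round(max(mitigated)), "on day", mitigated.index(max(mitigated)))
```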


Figure 1: Epidemiological models produce curves of new cases under different scenarios in order to compare peak demands over time.


By: William A. Levinson

The phrase “flatten the curve” means to slow the transmission of the coronavirus (Covid-19) in order to spread the total number of cases out over a longer period of time. This will avoid overwhelming the healthcare system.1 The model is accurate as presented throughout the internet, but it also overlooks terrible dangers and enormous opportunities.

By: Donald J. Wheeler, Al Pfadt, Kathryn J. Whyte

This article is an update to “Tracking Covid-19” that Al Pfadt, Kathryn Whyte, and I wrote last week. In that article we summarized what is known about Covid-19, what has already happened, and what is to be expected based on the analysis of the data and the epidemiological models.

Over the past week, the curve of Covid-19 infections in the United States has flattened slightly. Here are updated graphs of the actual data and new projections for what we can expect in the next few weeks.

Figure 1 shows the number of confirmed cases of Covid-19 in the United States as of 7 a.m. each day. These are the values posted by the European CDC at noon London time, and so they are slightly smaller than some other values that are reported later each day.
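
The projection method itself is not described in this excerpt. One illustrative way to extend a cumulative-case curve (not necessarily the method used for the figures here) is to fit a logistic function to the counts to date and extrapolate it forward. The Python sketch below does this with invented counts and scipy's curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t0, r):
    """Logistic growth curve: k / (1 + exp(-r * (t - t0)))."""
    return k / (1.0 + np.exp(-r * (t - t0)))

# Invented cumulative case counts for days 0..20, for illustration only
days = np.arange(21)
cases = np.array([5, 8, 13, 21, 34, 55, 88, 140, 220, 340, 510,
                  740, 1020, 1340, 1680, 2010, 2320, 2580, 2790, 2950, 3070])

# Fit the curve to the data observed so far, then project two weeks ahead
params, _ = curve_fit(logistic, days, cases, p0=[4000, 12, 0.3], maxfev=10000)
future = np.arange(21, 35)
projection = logistic(future, *params)
print("projected cumulative total on day 34:", int(round(projection[-1])))
```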



Figure 1: Number of confirmed Covid-19 cases in the United States

By: Donald J. Wheeler, Al Pfadt, Kathryn J. Whyte

Based on the professional literature available, there are some inconvenient truths about Covid-19 that are not always considered in the chorus of confusion that exists today. Here we summarize what is known, what has already happened, and what is to be expected based on the analysis of the data and the epidemiological models.

Background

An analysis of the first 425 laboratory-identified cases of a novel coronavirus-infected pneumonia (Covid-19) is presented by Qun Li, et al.1 The first cases were identified at Wuhan hospitals as a "pneumonia of unknown etiology" when the patients met the following criteria: fever in excess of 100.4°F, radiographic evidence of pneumonia, a low or normal white-cell count or a low lymphocyte count, and no symptomatic improvement after three to five days of antimicrobial treatment according to standard clinical guidelines. On Jan. 7, 2020, the outbreak was confirmed as a new coronavirus infection.2


By: Peter Dizikes

Given the complexities of healthcare, do basic statistics used to rank hospitals really work well? A study co-authored by MIT economists indicates that some fundamental metrics do, in fact, provide real insight about hospital quality.

“The results suggest a substantial improvement in health if you go to a hospital where the quality scores are higher,” says Joseph Doyle, an MIT economist and co-author of a new paper detailing the study’s results.

The study was designed to work around a difficult problem in evaluating hospital quality: Some high-performing hospitals may receive an above-average number of very sick patients. Accepting those difficult cases could, on the surface, worsen the aggregate outcomes of a given hospital’s patients and make such hospitals seem less effective than they are.

However, the scholars found a way to study equivalent pools of patients, allowing them to judge the hospitals on level terms. Overall, the study shows that when patient sickness levels are accounted for, hospitals that score well on quality measures have 30-day readmission rates that are 15 percent lower, and 30-day mortality rates that are 17 percent lower, than a set of lesser-rated hospitals.


By: Quality Digest

As usual with Quality Digest's diverse audience, this year's top stories covered a wide range of topics applicable to quality professionals: from hardware to software, from standards to risk management, and from China trade to FDA regulations. It's always fun to see what readers gravitate to, and this year was no different.

Below are five articles that garnered a lot of interest from our readers. As you can see, the topics are quite diverse.

Improve Risk Management and Quality Across the Value Chain by Increasing Visibility
by Kelly Kuchinski


By: Donald J. Wheeler

In the past two months we have looked at how three-sigma limits work with skewed data. This column finds the power functions for the probability limits of phase two charts with skewed probability models, and compares the trade-offs made by three-sigma limits with those made by the probability limits.

Phase two charts

Ever since 1935, there have been two approaches to finding limits for process behavior charts. There is Walter Shewhart’s approach using fixed-width limits, and there is Egon Pearson’s fixed-coverage approach based on probability models. (For more on these two schools of thought, see “The Normality Myth,” Quality Digest, Sept. 19, 2019.) About the year 2000, some of my fellow statisticians tried to reconcile these two approaches by talking about “phase one and phase two control charts.”
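
To see the difference in concrete terms, consider a single skewed probability model, a gamma distribution chosen purely for illustration. The sketch below compares the upper-tail area beyond a fixed-width three-sigma limit with the fixed 0.00135 tail area of a 99.865th-percentile probability limit.

```python
from scipy import stats

# A skewed probability model, chosen purely for illustration
model = stats.gamma(a=2.0)           # shape-2 gamma, skewness ~ 1.41
mean, sigma = model.mean(), model.std()

# Shewhart-style fixed-width limit: mean + 3 sigma
three_sigma_limit = mean + 3 * sigma
tail_three_sigma = model.sf(three_sigma_limit)   # upper-tail area beyond the limit

# Pearson-style fixed-coverage (probability) limit: the 99.865th percentile
prob_limit = model.ppf(0.99865)
tail_prob = model.sf(prob_limit)                 # 0.00135 by construction

print(f"three-sigma limit = {three_sigma_limit:.2f}, tail area = {tail_three_sigma:.4f}")
print(f"probability limit = {prob_limit:.2f}, tail area = {tail_prob:.5f}")
```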

Phase one charts use Shewhart’s fixed-width, three-sigma limits. These charts are used to help identify assignable causes of exceptional variation so that the process can be adjusted or fixed as needed. Then, under the assumption that once a process is fixed it will stay fixed, it is time for phase two.
