Statistics Article


By: William A. Levinson

The phrase “flatten the curve” means to slow the transmission of the coronavirus (Covid-19) in order to spread the total number of cases out over a longer period of time and avoid overwhelming the healthcare system. The model, as presented throughout the internet, is accurate, but it also overlooks terrible dangers and enormous opportunities.

By: Donald J. Wheeler, Al Pfadt, Kathryn J. Whyte

This article is an update to “Tracking Covid-19” that Al Pfadt, Kathryn Whyte, and I wrote last week. In that article we summarized what is known about Covid-19, what has already happened, and what is to be expected based on the analysis of the data and the epidemiological models.

Over the past week the curve of Covid-19 infections in the United States has slightly flattened. Here are updated graphs of the actual data and new projections for what we can expect in the next few weeks.

Figure 1 shows the number of confirmed cases of Covid-19 in the United States as of 7 a.m. each day. These are the values posted by the European CDC at noon London time, and so they are slightly smaller than some other values that are reported later each day.

Figure 1: Number of confirmed Covid-19 cases in the United States
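The “flattening” the authors describe can be seen directly in the day-over-day growth factors of the cumulative-case curve. The sketch below is purely illustrative; the counts are hypothetical placeholders, not the actual data behind Figure 1.

```python
# Hypothetical cumulative daily case counts (NOT the Figure 1 data).
cases = [1000, 1300, 1700, 2200, 2800, 3500]

# Day-over-day growth factors; a flattening curve shows these
# trending downward toward 1.0.
growth = [round(b / a, 2) for a, b in zip(cases, cases[1:])]
print(growth)  # [1.3, 1.31, 1.29, 1.27, 1.25]
```

In this illustrative series the growth factor declines steadily, which is the signature of a curve beginning to flatten even while total counts keep rising.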

By: Donald J. Wheeler, Al Pfadt, Kathryn J. Whyte

Based on the professional literature available, there are some inconvenient truths about Covid-19 that are not always considered in the chorus of confusion that exists today. Here we summarize what is known, what has already happened, and what is to be expected based on the analysis of the data and the epidemiological models.


An analysis of the first 425 laboratory-identified cases of a novel coronavirus-infected pneumonia (Covid-19) is presented by Qun Li. The first cases were identified at Wuhan hospitals as a "pneumonia of unknown etiology" when the patients met the following criteria: fever in excess of 100.4°F, radiographic evidence of pneumonia, low or normal white-cell count or low lymphocyte count, and no symptomatic improvement after antimicrobial treatment for 3 to 5 days according to standard clinical guidelines. On Jan. 7, 2020, the outbreak was confirmed as a new coronavirus infection.


By: Peter Dizikes

Given the complexities of healthcare, do basic statistics used to rank hospitals really work well? A study co-authored by MIT economists indicates that some fundamental metrics do, in fact, provide real insight about hospital quality.

“The results suggest a substantial improvement in health if you go to a hospital where the quality scores are higher,” says Joseph Doyle, an MIT economist and co-author of a new paper detailing the study’s results.

The study was designed to work around a difficult problem in evaluating hospital quality: Some high-performing hospitals may receive an above-average number of very sick patients. Accepting those difficult cases could, on the surface, worsen the aggregate outcomes of a given hospital’s patients and make such hospitals seem less effective than they are.

However, the scholars found a way to study equivalent pools of patients, thus allowing them to judge the hospitals in level terms. Overall, the study shows, when patient sickness levels are accounted for, hospitals that score well on quality measures have 30-day readmission rates that are 15 percent lower than a set of lesser-rated hospitals, and 30-day mortality rates that are 17 percent lower.


By: Quality Digest

As usual with Quality Digest’s diverse audience, this year’s top stories covered a wide range of topics applicable to quality professionals: from hardware to software, from standards to risk management, from China trade to FDA regulations. It’s always fun to see what readers gravitate to, and this year was no different.

Below are five articles that garnered a lot of interest from our readers. As you can see, the topics are quite diverse.

Improve Risk Management and Quality Across the Value Chain by Increasing Visibility
by Kelly Kuchinski


By: Donald J. Wheeler

In the past two months we have looked at how three-sigma limits work with skewed data. This column finds the power functions for the probability limits of phase two charts with skewed probability models, and compares the trade-offs made by three-sigma limits with the trade-offs made by the probability limits.

Phase two charts

Ever since 1935, there have been two approaches to finding limits for process behavior charts. There is Walter Shewhart’s approach using fixed-width limits, and there is Egon Pearson’s fixed-coverage approach based on probability models. (For more on these two schools of thought, see “The Normality Myth,” Quality Digest, Sept. 19, 2019.) About the year 2000, some of my fellow statisticians tried to reconcile these two approaches by talking about “phase one and phase two control charts.”

Phase one charts use Shewhart’s fixed-width, three-sigma limits. These charts are used to help identify assignable causes of exceptional variation so that the process can be adjusted or fixed as needed. Then, under the assumption that once a process is fixed it will stay fixed, it is time for phase two.
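The contrast between the two schools of thought can be made concrete with a skewed probability model. The sketch below, which is illustrative and not taken from Wheeler's column, compares Shewhart's fixed-width three-sigma upper limit with Pearson's fixed-coverage probability limit for an exponential(1) model, using the 0.99865 quantile (the tail coverage a three-sigma limit would give under a normal model).

```python
import math

# Exponential(1) model: mean and standard deviation are both 1.
mean, sigma = 1.0, 1.0

# Shewhart's fixed-width approach: mean plus three sigma.
three_sigma_ucl = mean + 3 * sigma

# Pearson's fixed-coverage approach: the 0.99865 quantile of the model.
# For an exponential distribution the quantile is -ln(1 - p).
p = 0.99865
probability_ucl = -math.log(1 - p)

print(round(three_sigma_ucl, 2), round(probability_ucl, 2))  # 4.0 6.61
```

For this skewed model the two limits differ substantially, which is exactly the trade-off the column examines: fixed-width limits hold their width, while fixed-coverage limits hold their tail area.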


By: Jody Muelaner

One of the key ideas in lean manufacturing is that defects should be detected as early as possible. Efforts to control manufacturing processes, so that issues can be detected before defects occur, actually predate lean. Statistical process control (SPC) is a set of methods first created by Walter A. Shewhart at Bell Laboratories during the early 1920s. W. Edwards Deming standardized SPC for U.S. industry during WWII and introduced it to Japan during the American occupation after the war. SPC became a key part of Six Sigma, the Toyota Production System (TPS), and by extension, lean manufacturing.

SPC measures the outputs of processes, looking for small but statistically significant changes, so that corrections can be made before defects occur. SPC was first used within manufacturing, where it can greatly reduce waste due to rework and scrap. It can be used for any process that has a measurable output, and SPC is now widely used in service industries and healthcare.
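The monitoring idea described above can be sketched in a few lines: compute control limits from a stable baseline, then flag new measurements that fall outside them. All numbers below are illustrative; the limits use the standard moving-range estimate of sigma with the bias-correction constant d2 = 1.128 for moving ranges of size two.

```python
# Baseline measurements from a period when the process was stable (illustrative).
baseline = [5.0, 5.1, 4.9, 5.2, 4.8, 5.0, 5.1, 4.9]

mean = sum(baseline) / len(baseline)

# Estimate sigma from the average moving range (d2 = 1.128 for n = 2).
mrs = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
sigma_hat = (sum(mrs) / len(mrs)) / 1.128

ucl = mean + 3 * sigma_hat  # upper control limit
lcl = mean - 3 * sigma_hat  # lower control limit

# Flag any new measurement outside the limits.
new_points = [5.0, 5.1, 6.2, 4.9]
signals = [i for i, x in enumerate(new_points) if x > ucl or x < lcl]
print(signals)  # [2]
```

Here the third new measurement falls above the upper limit and is flagged, prompting investigation before defects accumulate.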


By: Bill Snyder

In 1500, China’s economy was the strongest in the world. But by the 19th century, the United States, Western Europe, and Japan had leapfrogged over China by churning out goods and services in vast quantities while the former superpower stalled.

Why? Some economists argue that the shift was driven by China’s lack of free markets and by unencumbered innovation in the West. But what is the relationship between innovation and markets, productivity, and inequality?

Answers to that puzzle and others were explored during a recent forum on the relationship of innovation to economic growth at the Hoover Institution. Three Stanford professors, all Hoover fellows—Stephen Haber, Edward Lazear, and Amit Seru—spoke on a panel moderated by Jonathan Levin, dean of Stanford Graduate School of Business.


By: Christopher Shoe

According to a recent LNS Research survey, 37 percent of quality leaders cite an inability to measure quality metrics as their No. 1 barrier to achieving quality goals. Even worse, the survey showed four in five companies have poor visibility into real-time metrics.

These figures highlight a central problem in quality management: In an era of increasingly large data sets, how can manufacturers leverage these data for meaningful improvement?

It’s a question that’s especially relevant for manufacturers engaged in layered process audits (LPAs), a high-frequency verification strategy where teams conduct short audits every shift. With hundreds or even thousands of audits taking place during the course of a year, making sense of a large volume of data is a core challenge of LPA programs.

While plant managers often track metrics at a granular level, executives need to look at the data a little differently. Let’s look at four of the most important enterprise-level LPA metrics to track.

By: Yen Duong, Knowable Magazine

If you think it’s hard to tell how you’re doing at your job, imagine being a hockey goalie. Let’s say you block every shot in a game. Was that performance due to your superior skills? Or maybe just to a lack of skill in your opponents?

Evaluating ice hockey players' performance is getting easier, for goalies and their teammates. Advances in data collection—including video that can be slowed down and analyzed—and the application of more sophisticated statistics are allowing analysts to better assess how all players contribute to team performance on the ice. Among the more exciting outcomes are data-rich maps of the rink that can reveal especially successful shots or strategic passes.

“Back in the day, like decades ago, we could only really credit players for goals, and maybe assists and stuff like that,” says Namita Nandakumar, co-author of a recent review of trends in hockey analytics in the Annual Review of Statistics and Its Application. “Now research shows that there are other aspects of the game that you can be consistently better or worse at.”
