Statistics Article

By: Stanford News Service

Most leadership advice is based on anecdotal observation and basic common sense. Stanford Graduate School of Business professor Kathryn Shaw tried a different tack: data-driven analysis.

Through research done in collaboration with a very large, undisclosed technology-based company that has a penchant for collecting data, Shaw found that employees who work under good bosses were more productive. “There are bad bosses out there,” she says, “but what’s not talked about as much is that there are also good bosses.”

Shaw, along with fellow Stanford GSB professor Edward Lazear and Harvard Business School’s Christopher Stanton, published a 2015 paper titled “The Value of Bosses,” in which they gathered data from the tech company in an attempt to see whether they could show that bosses matter and, if so, how much. As part of their research, the authors asked company employees and managers, “What are the traits of a good boss?” They found that bosses matter substantially.

By: Joby George

Having difficulty managing quality and quality-related data? You’re not alone. Many manufacturers struggle with these issues because they rely on paper-based or other disparate systems to track, manage, and report on quality events. Walk around a production floor, and there’s a good chance you’ll see a few three-ring binders or folders stuffed with handwritten, quality-related worksheets and forms.

Some manufacturing companies may also use email, spreadsheets, or homegrown software systems to capture quality events. But even then, gathering and consolidating information for audits and reporting can be a nightmare. And the problem is exacerbated by the number of external partners, regulatory bodies, and diverse reporting requirements from country to country.

But beyond the inefficiencies, these manual and paper-based methods of managing quality carry serious risks for companies and consumers alike. A lack of visibility into quality events could mean a harmful product reaches the market, leaving manufacturers to deal with product recalls and putting brand reputation and stock price at risk.

By: Eston Martz

If you were among the 300 people who attended the first-ever Minitab Insights conference last month, you already know how powerful it was. Attendees learned how practitioners from a wide range of industries use data analysis to address a variety of problems, find solutions, and improve business practices. For those who weren’t there, here are five helpful, challenging, and thought-provoking ideas and suggestions that we heard during the event.

By: Scott A. Hindle

In all walks of life, being wrong can come with a penalty. It’s also true that, if you’re lucky, you sometimes get away with it without anybody being the wiser. To understand what this means in relation to the capability indexes Cp and Cpk, read on.

Introduction

In part 3 of “Process Capability: What It Is and How It Helps,” I wrote about the interpretation of the two most commonly used capability indexes:
• Predictable processes: Cp and Cpk can be considered reliable indicators of future performance.
• Unpredictable processes: Cp and Cpk may be false, or very misleading, indicators of what the process will give in the future.
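Cp compares the specification width to the process spread, and Cpk additionally penalizes an off-center process. As a minimal, hypothetical Python sketch (the specification limits and simulated data below are invented for illustration, not taken from the article), the two indexes can be computed like this, using the moving-range estimate of sigma that pairs with individuals data:

```python
import numpy as np

def cp_cpk(data, lsl, usl):
    """Estimate Cp and Cpk for individuals data.

    Sigma is estimated from the average moving range (mR-bar / 1.128),
    the within-process estimate used alongside XmR charts, rather than
    the overall standard deviation.
    """
    data = np.asarray(data, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(data)))  # average moving range
    sigma = mr_bar / 1.128                   # d2 = 1.128 for subgroups of 2
    mean = data.mean()
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical stable process with spec limits 10 to 20
rng = np.random.default_rng(1)
values = rng.normal(loc=15, scale=1.2, size=56)
cp, cpk = cp_cpk(values, lsl=10, usl=20)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

As the two bullet points above caution, these numbers are trustworthy only when the process is predictable; for an unpredictable process the same arithmetic still runs, but the result says little about future performance.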

By: Scott A. Hindle

Part two of this four-part series on process capability concluded with Alan just about to meet Sarah for a second time. He thought he was making good progress with his analysis of Product 874 data until he was asked to assess process capability, even though it can’t be assessed for an unstable process.

Making sense of the XmR chart

Alan thanked Sarah for the two articles she’d given him. He said that, guided by the second article by Donald Wheeler, he’d created his first XmR chart (figure 1 below), which he interpreted to mean that the process data represented an unstable, or not-in-control, process. Wheeler’s article noted that finding the cause of process changes is more important than computing statistics.
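The figure itself isn’t reproduced here, but the limits behind an XmR chart are simple to compute. Below is a minimal Python sketch using placeholder data (Alan’s Product 874 values aren’t shown in this excerpt) and the standard scaling constants for individuals and moving-range charts:

```python
import numpy as np

def xmr_limits(x):
    """Compute XmR (individuals and moving range) chart limits.

    Uses the standard scaling constants: 2.66 for the X chart limits
    and 3.268 for the upper limit of the mR chart.
    """
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))      # moving ranges between successive values
    x_bar, mr_bar = x.mean(), mr.mean()
    return {
        "X center": x_bar,
        "X UCL": x_bar + 2.66 * mr_bar,
        "X LCL": x_bar - 2.66 * mr_bar,
        "mR center": mr_bar,
        "mR UCL": 3.268 * mr_bar,  # the mR chart has no lower limit
    }

# Placeholder data standing in for the Product 874 measurements
data = [10.2, 9.8, 10.5, 10.1, 9.6, 10.9, 10.3, 9.9, 10.4, 10.0]
for name, value in xmr_limits(data).items():
    print(f"{name}: {value:.2f}")
```

Points falling outside these limits, as Alan saw in his chart, are the signal that the process is unstable and that hunting for the cause matters more than summary statistics.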

By: Scott A. Hindle

In part one of this four-part series, we considered the basics of process capability, as witnessed through the learning curve of Alan in his quest to determine the product characteristics of the powder, Product 874. We pick up with Alan here as he prepares for his second meeting with his colleague Sarah, to discuss his preliminary results.

The second article Sarah had given Alan was titled “Individual Charts Done Right and Wrong,” by Donald J. Wheeler. It helped him to move in a different direction with the data he received to assess process capability. He recalled having been briefly exposed to Shewhart-type control charts, the subject of the paper, during a training class some time back, but he didn’t remember much about them.

By: Scott A. Hindle

In my August 2015 article, “Process Capability: How Many Data?” I discussed whether 30 data were the “right” number in an analysis of process capability. In this four-part series, the focus is on understanding what process capability is and the pitfalls associated with it, along with how it can help manufacturers develop process knowledge, reach better decisions, and take better actions.

Product 874: What is process capability?

The story starts with Alan, a relative novice in the field of process capability, who was assigned the task of writing a report on the process capability for a key product characteristic of Product 874, a powder product. The 56 data values he received are found in figure 1. Alan’s brief was to use these data to write a report covering:
• The process capability results for the characteristic under study
• An interpretation of the results
• An appendix of all calculations in Excel for traceability purposes

By: Barbara A. Cleary

Approaching the end of the school year means focusing on graduation rates, dropout rates, and other data suggesting trends for students. Opportunities for considering statistics abound, but one must examine the way that these statistics are actually used by asking the right questions about the data.

For example, as teachers finish state testing regimens and head into final exams, it may be useful to see data related to average pay for teachers. Is it going up? The following figures, from the 1969–70 through 2012–13 school years, suggest that it is:

School year    Average salary
1969–70        $8,626
1979–80        $15,970
1989–90        $31,367
1999–2000      $41,807
2009–10        $55,202
2011–12        $55,418
2012–13        $56,383
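One simple question to ask of these figures is what average annual growth they imply. Here is a quick Python sketch of that arithmetic (the figures are nominal dollars, so whether this growth outpaces inflation is a separate question):

```python
# Average teacher salary by school year (from the table above)
salaries = {"1969-70": 8626, "2012-13": 56383}

# Compound annual growth rate over the 43-year span, in nominal dollars
years = 2012 - 1969
cagr = (salaries["2012-13"] / salaries["1969-70"]) ** (1 / years) - 1
print(f"Nominal growth, 1969-70 to 2012-13: {cagr:.1%} per year")  # ~4.5%
```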

By: Ken Levine

How do you determine the “worst case” scenario for a process? Is it by assuming the worst case for each process task or step? No. The reason is that the probability of every step having its worst case at the same time is practically zero. What we’re looking for is a value that will occur a very small percentage of the time, but still be a possibility.

In statistics, we do this with a confidence interval, typically plus or minus three standard deviations from the mean to achieve 99.7-percent confidence.

For example, let’s say that we have a three-step process, with means and standard deviations of x̄₁ = 20, s₁ = 3; x̄₂ = 30, s₂ = 5; and x̄₃ = 60, s₃ = 9, respectively. Since variation (variance) is additive, the variance of the entire process is therefore:
S²Process = 3² + 5² + 9² = 9 + 25 + 81 = 115, and the process standard deviation is:
SProcess = √115 ≈ 10.7.
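For readers who prefer code to formulas, here is the same root-sum-of-squares calculation as a short Python sketch, using the step values above; the “worst case” value at the end applies the plus-three-sigma bound mentioned earlier:

```python
import math

# Step means and standard deviations from the example
means = [20, 30, 60]
sds = [3, 5, 9]

# Variances add, so the process variance is the sum of squared sigmas
process_var = sum(s**2 for s in sds)        # 9 + 25 + 81 = 115
process_sd = math.sqrt(process_var)         # ~10.7

process_mean = sum(means)                   # 110
worst_case = process_mean + 3 * process_sd  # "worst case" at +3 sigma

print(f"Process sigma = {process_sd:.1f}")
print(f"Worst case (mean + 3 sigma) = {worst_case:.1f}")
```

Note that this is far smaller than the naive worst case of summing each step’s mean plus three of its own standard deviations, which is exactly the point: the steps almost never hit their individual extremes simultaneously.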

By: Ken Voytek

In a recent post, I examined the differences in productivity across small and large manufacturing firms, and noted that there were differences across manufacturers in terms of size. But it’s also clear from the literature that productivity differs across companies even in the same industry.
