Statistics Article

By: Ken Voytek

In a recent post, I examined productivity differences between small and large manufacturing firms and noted that productivity varies with firm size. But it’s also clear from the literature that productivity differs across companies even within the same industry.

By: Brooke Pierce

The healthcare industry is in a state of constant change, and with change comes opportunity. With the passage of the Affordable Care Act (ACA) and the Medicare Access and CHIP Reauthorization Act (MACRA), healthcare providers are, or will be, paid differently for their services. No longer can they rely on the volume of services rendered to generate sustained income.

With the transition from volume-based payments to value-based payments, many health systems are investing in data analytics platforms to help expose cost savings, as well as uncover hidden revenue. But is investing in data analytics worth the cost?

With the high cost of data analytics packages, you should expect a positive return on investment (ROI). Health systems can use data analytics tools in many ways to generate that return, both by reducing costs and by increasing revenue: lowering administrative costs, supporting clinical decisions, reducing fraud and abuse, coordinating care, and improving patient wellness, for example. Some of these benefits are easily quantifiable while others are not, and therein lies the difficulty in determining ROI.
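As a rough, hypothetical illustration of how the quantifiable piece of that ROI question might be worked out, the Python sketch below compares invented analytics costs against invented measurable savings; every dollar figure is an assumption, not data from any health system.

```python
# Hypothetical first-year ROI calculation for a data analytics investment.
# All dollar figures are invented for illustration only.
annual_license_and_support = 250_000
implementation_cost = 100_000

# Quantifiable gains, e.g., reduced claim denials plus avoided readmission penalties
quantifiable_gains = 180_000 + 220_000

total_cost = annual_license_and_support + implementation_cost
roi = (quantifiable_gains - total_cost) / total_cost
print(f"First-year ROI on the quantifiable gains alone: {roi:.1%}")
# Harder-to-quantify benefits (care coordination, wellness) fall outside this figure.
```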

By: Donald S. Holmes, A. Erhan Mergen

Regression analysis is used in a variety of manufacturing applications. One example is quantifying the effect of process variables on output quality variables, which allows process control personnel to monitor those key variables and keep the output variables at the desired level.

Regression analysis is also used in design of experiments (DOE) to identify the key process variables that have the most effect on the quality of the end product or service. In addition, if the process is autocorrelated and we want to perform statistical process control (SPC), regression models (i.e., autoregressive models) can capture the autocorrelation in the process, allowing the SPC application to be modified accordingly so that the right questions are tested on the control charts.
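As a minimal sketch of the autoregressive idea (not the authors' own example), the Python snippet below simulates an autocorrelated process, fits a lag-1 model by ordinary least squares, and places 3-sigma control limits on the model residuals rather than on the raw observations. The process mean, AR coefficient, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an autocorrelated process: x_t = 10 + 0.7*(x_{t-1} - 10) + noise
x = np.empty(200)
x[0] = 10.0
for t in range(1, len(x)):
    x[t] = 10 + 0.7 * (x[t - 1] - 10) + rng.normal(scale=0.5)

# Estimate the AR(1) model x_t = b0 + b1 * x_{t-1} by ordinary least squares
X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
b0, b1 = np.linalg.lstsq(X, x[1:], rcond=None)[0]

# The residuals should be approximately independent, so chart them
residuals = x[1:] - (b0 + b1 * x[:-1])
center = residuals.mean()
sigma = residuals.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

out_of_control = np.where((residuals > ucl) | (residuals < lcl))[0]
print(f"AR(1) fit: b0={b0:.3f}, b1={b1:.3f}")
print(f"Residual control limits: LCL={lcl:.3f}, UCL={ucl:.3f}")
print(f"Points outside limits: {len(out_of_control)}")
```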

By: Fred Schenkelberg

What if all failures occurred truly randomly? Well, for one thing the math would be easier.

The exponential distribution would be the only time to failure distribution—we wouldn’t need Weibull or other complex multi-parameter models. Knowing the failure rate for an hour would be all we would need to know, over any time frame.

Sample size and test planning would be simpler. Just run the samples at hand long enough to accumulate enough hours to provide a reasonable estimate for the failure rate.

Would the design process change?

Yes, I suppose it would. The effects of early life and wear-out would not exist. Once a product is placed into service, the chance of failing in the first hour would be the same as in any other hour of its operation. It would fail eventually, and the chance of failing within a year would depend solely on the chance of failure per hour.

A higher failure rate would simply mean a lower chance of surviving very long.
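A minimal sketch of that constant-failure-rate arithmetic, assuming a purely hypothetical hazard rate of two failures per million hours: with the exponential model, survival over any interval is R(t) = exp(−λt), so the per-hour rate is all you need.

```python
import math

# Illustrative only: with a truly constant (exponential) failure rate,
# reliability over any interval depends only on the per-hour rate.
failure_rate_per_hour = 2e-6   # assumed hazard rate (lambda), not from the article
hours_in_year = 8760

# R(t) = exp(-lambda * t): probability of surviving t hours
r_one_hour = math.exp(-failure_rate_per_hour * 1)
r_one_year = math.exp(-failure_rate_per_hour * hours_in_year)

print(f"Chance of surviving any single hour: {r_one_hour:.8f}")
print(f"Chance of surviving a full year:     {r_one_year:.4f}")
# Doubling lambda lowers the one-year survival from about 98.3% to about 96.6%
```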

By: William A. Levinson

Experiments that might require a handful of real-number measurements (variables data) could need hundreds or more attribute measurements for comparable power, i.e., the ability to determine whether an experiment improves performance over that of a control. Sample sizes required by ANSI/ASQ Z1.4 (for inspection by attributes) are similarly much larger than those for ANSI/ASQ Z1.9 (for inspection by variables).

One application of attribute data is the estimation of the nonconforming fraction (p) from a process. The binomial distribution is the standard model, in which p is the probability that each of n items has (or does not have) a certain attribute, such as meeting or not meeting specifications. The probability p is assumed to be identical for every item in the population; that is, every item has the same chance of being nonconforming. In addition, the sample of n items is assumed to come from an infinite population; that is, removal and inspection of an item does not change the probability that the next one will have the attribute in question.
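As a minimal sketch of estimating p under these binomial assumptions, the snippet below uses an invented sample (500 items inspected, 7 nonconforming) and an exact Clopper-Pearson confidence interval; the counts are hypothetical and are not taken from the article.

```python
from scipy import stats

# Hypothetical attribute sample: n items inspected, x nonconforming
n, x = 500, 7
p_hat = x / n                     # point estimate of the nonconforming fraction

# Exact (Clopper-Pearson) 95% confidence interval from the binomial model
alpha = 0.05
lower = stats.beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
upper = stats.beta.ppf(1 - alpha / 2, x + 1, n - x)

print(f"p_hat = {p_hat:.4f}, 95% CI approx ({lower:.4f}, {upper:.4f})")
```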

By: Inderjit Arora

Objective auditing has always been a challenge, and this is especially true now for ISO 9001:2015 audits. To better meet customer expectations, fundamental changes have been introduced to the standard to address current business realities and advancements in technology. Much of the responsibility of meeting the new requirements falls on leaders, and a careful, objective audit to the standard can help them.

It’s human nature that with knowledge and experience comes a touch of ego, but an auditor with an ego can be a liability. Experienced auditors must guard against a tendency to add subjective opinions to their audit reports and focus instead on providing objective inputs. In this way they can help leaders make rational, objective decisions. This challenge is further compounded for auditors experienced in auditing to ISO 9001:2008, with its emphasis on preventive action. ISO 9001:2015 no longer addresses preventive action but instead focuses on establishing risk-based thinking throughout the management system. What’s the best way to audit this?

By: NIST

NIST scientists have devised and improved a prototype instrument the size of a loaf of bread that can substantially increase the accuracy of length measurements in commerce. Perhaps surprisingly, it does so by achieving the most accurate measure of the refractive index of air reported to date.

An enormous share of U.S. industry depends on precise dimensional measurements, frequently on the scale of nanometers (nm, billionths of a meter) or even smaller. Such measurements can only be made by using light, typically laser light, employing a device called an interferometer, which measures lengths in terms of the wavelength of light. For example, the diameter of a pinhead is about 2,400 wavelengths of red light.
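As a back-of-the-envelope check of that pinhead comparison, assuming a roughly 1.5 mm pinhead and 633 nm red helium-neon laser light (values not stated in the article):

```python
# Rough check of the "about 2,400 wavelengths" comparison, under assumed values
pinhead_diameter_m = 1.5e-3     # assumed pinhead diameter, ~1.5 mm
red_wavelength_m = 633e-9       # assumed red (HeNe) laser wavelength, 633 nm

wavelengths = pinhead_diameter_m / red_wavelength_m
print(f"Pinhead diameter is roughly {wavelengths:.0f} wavelengths of red light")
# About 2,370, consistent with the article's figure of about 2,400
```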

By: TÜV SÜD America

The revised ISO 9001:2015 standard was published on Sept. 23, 2015. The release of this standard has many organizations preparing for the transition from ISO 9001:2008, and with transition comes uncertainty: How long will it take? What will it cost my organization? Where do I start? These and other questions are far from uncommon.

By: Fred Schenkelberg

The planning of environmental or reliability testing becomes a question of sample size at some point. It’s probably the most common question I hear as a reliability engineer: How many samples do we need?

Also, when evaluating supplier-run test results, we need to understand the implications of the results, again based on the number of samples in the test. If the supplier runs 22 samples without failure over a test that replicates the shipping set of stresses, then we need a way to interpret those results.

We often use success testing (no expected or actual failures during the testing) to minimize the number of samples required for a test and still demonstrate some level of confidence for a specified reliability level. The basis for success testing is the binomial distribution: Under the applied stress, the product either works or it doesn't. Binary results.

Recently I received a request to explain where the success-testing sample size formula comes from, or its derivation. First, here's the formula:

n = ln(1 - C) / ln(R)

where C is the confidence level and R is the lower limit of the reliability.
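A minimal sketch of applying that formula, which reproduces the 22-sample, zero-failure test mentioned above when both the confidence level and the reliability target are 90 percent:

```python
import math

def success_test_sample_size(confidence, reliability):
    """Zero-failure (success-run) sample size: n = ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrate 90% reliability with 90% confidence
n = success_test_sample_size(0.90, 0.90)
print(n)  # 22 samples with no failures
```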

By: Douglas C. Fair

Sponsored Content

Statistical process control (SPC) software has been around for decades, used by manufacturers across industries to help monitor process behavior and control quality on the shop floor. Like any technology, the software has evolved over the years into something much more than a tool for collecting quality data from a single manufacturing site.

In addition to its intrinsic ability to collect quality data, today’s most advanced SPC software can serve as a solution for managing global, enterprisewide quality. However, many manufacturers are hesitant to embrace this expanded functionality. They are either reluctant to move away from their familiar, yet likely antiquated, SPC system, or are simply unsure what other value the technology has to offer. So, what else can (and should) your quality software do? Consider the following checklist.
