Measurement Systems Analysis for Attributes, Part 2
The first part of this series introduced measurement systems analysis for attribute data, or attribute agreement analysis.
When we talk about measurement systems analysis (MSA), people tend to focus on attribute agreement analysis because it is usually quicker and easier to do than a gauge repeatability and reproducibility (gauge R&R) study.
Measurement systems analysis (MSA) for attributes, or attribute agreement analysis, is a lot like eating broccoli or Brussels sprouts. We must often do things we don't like because they are necessary or good for us.
To date, this series has focused on relatively simple data analyses, such as computing one summary statistic about our data at a time.
In our last article, we discussed how to estimate how many people drink pumpkin spice lattes in a given time period, such as October, without learning specifically who is drinking them and who is not.
Inspection sounds simple: screen out the bad stuff and ship the good stuff. However, measurement error will always create misclassification problems, where good stuff is rejected and bad stuff gets shipped.
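The tradeoff above can be illustrated with a quick Monte Carlo sketch. All of the numbers here (spec limit, process distribution, gauge error) are illustrative assumptions, not figures from the article: we simulate a true part value, add gauge error to get the measured value, and count how often a conforming part is rejected and a nonconforming part is accepted.

```python
# Monte Carlo sketch of misclassification caused by measurement error.
# The spec limit, process parameters, and gauge error are hypothetical.
import random

random.seed(42)

USL = 10.0                        # hypothetical upper spec limit
TRUE_MEAN, TRUE_SD = 9.0, 0.5     # assumed process distribution
GAUGE_SD = 0.2                    # assumed gauge (measurement) error
N = 100_000

false_rejects = false_accepts = goods = bads = 0
for _ in range(N):
    true_value = random.gauss(TRUE_MEAN, TRUE_SD)
    measured = true_value + random.gauss(0.0, GAUGE_SD)
    good = true_value <= USL          # actually conforming
    accepted = measured <= USL        # passed by inspection
    if good:
        goods += 1
        if not accepted:
            false_rejects += 1        # good part rejected
    else:
        bads += 1
        if accepted:
            false_accepts += 1        # bad part shipped

print(f"Good parts rejected: {false_rejects / goods:.2%}")
print(f"Bad parts shipped:   {false_accepts / bads:.2%}")
```

Even with a fairly capable gauge, both error rates are nonzero; parts whose true values sit close to the spec limit are the ones most likely to be misclassified.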
A quick Google search returns many instances of the saying, "A man with a watch knows what time it is. A man with two watches is never sure."
Traditional statistical methods for computing the process performance index (Ppk) and control limits for process-control purposes assume that measurements are available for all items or parts.
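For reference, the traditional Ppk calculation those methods rely on can be sketched in a few lines. The data and spec limits below are made up for illustration; the formula itself is the standard one, Ppk = min(USL − x̄, x̄ − LSL) / (3s), where s is the overall sample standard deviation computed from a measurement on every part.

```python
# Sketch of the textbook Ppk calculation, which assumes a measurement
# is available for every part. Data and spec limits are hypothetical.
import statistics

LSL, USL = 8.0, 12.0   # hypothetical lower and upper spec limits
measurements = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4, 10.0, 9.9]

xbar = statistics.mean(measurements)   # overall process mean
s = statistics.stdev(measurements)     # overall (long-term) std dev

# Ppk = min(USL - xbar, xbar - LSL) / (3 * s)
ppk = min(USL - xbar, xbar - LSL) / (3 * s)
print(f"Ppk = {ppk:.2f}")
```

When measurements exist only for a sample of parts, or only as pass/fail attributes, this formula cannot be applied directly, which is the gap attribute methods must work around.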