
By Geraldine S. Cheok, Alan M. Lytle, and Kamel S. Saidi, Ph.D.

3-D Imaging Terminology

One of the documents to come out of Committee E57 was E2544-08 -- "Standard terminology for three-dimensional (3-D) imaging systems." What follows is an excerpt of some of the 3-D imaging terminology from that document. To keep the excerpt short, we have included the definitions of just a few of the terms listed.

3.2 Definitions of terms specific to this standard

3-D imaging system--a noncontact measurement instrument used to produce a 3-D representation (e.g., a point cloud) of an object or a site.

 

Angular increment--the angle between samples, Δa, where Δa = a_i - a_(i-1), in either the azimuth or elevation directions (or a combination of both) with respect to the instrument's internal frame of reference.


By Brenda Boughton

Electronic records--their creation, modification, maintenance, retrieval, and archiving--can create ongoing challenges for all organizations. For industries regulated by the U.S. Food and Drug Administration (FDA), such as pharmaceutical companies, medical device manufacturers, food processing plants, and biotech companies, the FDA's Code of Federal Regulations Title 21 Part 11 applies to the specifications, use, and control of electronic records and electronic signatures.

The requirements of FDA 21 CFR Part 11 for electronic records are based on good practices, organization, and, most of all, common sense to ensure the efficient and secure handling of these records. In general, these requirements state that:

• All information is complete, and all records can be tracked to their originator and corresponding records.

• Appropriate security measures are in place to ensure that tampering that would alter the record from its original intent does not take place.

• Only the appropriate parties can access the records, and only those so identified can create, modify, or review those records.


By Thomas Hill, Ph.D.; Robert Eames; and Sachin Lahoti

Data mining methods have many origins, including insights from cognitive science into how learning naturally occurs in humans, and advances in computer science and algorithm design for detecting patterns in unstructured data. Although traditional statistical methods for analyzing data, based on statistical theories and models, are now widely accepted throughout various industries, data mining methods have only been widely embraced in business for a decade or two. However, their effectiveness for root cause analysis, and for modeling, optimizing, and improving complex processes, is making data mining increasingly popular--and even necessary--in many real-world discrete manufacturing, batch manufacturing, and continuous-process applications.

There is no single, generally agreed-upon definition of data mining. As a practical matter, whenever data describing a process are available, in manufacturing for example, then any systematic review of those data to identify useful patterns, correlations, trends, and so forth, could be called “data mining.” Put simply, data mining uncovers nuggets of information from a sometimes vast repository of data describing the process of interest.
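As a minimal illustration of such a systematic review (not drawn from the article; the library, column names, and values below are assumptions made purely for the sake of the example), the following Python sketch correlates a few hypothetical process parameters with a quality metric:

```python
import pandas as pd

# Hypothetical process data -- the column names and values are assumptions
# made purely for the sake of the example.
data = pd.DataFrame({
    "oven_temp_C":     [180, 182, 179, 185, 181, 184, 178, 183],
    "line_speed_mpm":  [12.0, 12.5, 11.8, 13.0, 12.2, 12.8, 11.5, 12.9],
    "defect_rate_pct": [1.1, 1.4, 0.9, 2.0, 1.2, 1.8, 0.8, 1.9],
})

# A simple "systematic review" of the data: correlate each process
# parameter with the quality metric of interest.
correlations = data.corr()["defect_rate_pct"].drop("defect_rate_pct")
print(correlations.sort_values(ascending=False))
```

A strongly positive or negative correlation would flag a parameter worth investigating further with more formal statistical or data mining techniques.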


By Peter Schulz

 

The idea of mixing optics and measurement goes back hundreds of years, to the realm of pure science, i.e., astronomy (telescopy) and microscopy. Manufacturing first adopted optics for routine inspection and measurement of machined and molded parts in the 1920s with James Hartness' development of instruments capable of projecting the magnified silhouette of a workpiece onto a ground glass screen. Hartness, as longtime chairman of the United States' National Screw-Thread Commission, applied his pet interest in optics to the problem of screw-thread inspection. For many years, the Hartness Screw-Thread Comparator was a profitable product for the Jones and Lamson Machine Company, of which Hartness was president.

Horizontal vs. vertical instrument configurations

 


By Jeff Bibee

Optical measurement, when clearly understood and applied, can bring huge benefits. It can also be an investment disaster. To avoid the latter, we need to start with an understanding of the basics--the capabilities and limitations of optical measurement. Then, we can consider the applications where it might provide a better solution than current methods, such as touch probes, optical comparators, hand gauges, or microscopes. Digging deeper, we can discover the challenges that those applications present to optical measurement, the limitations, and the potential for failure. In this article, we will investigate the optical tools and software strategies that have been developed to meet those challenges. With a deeper understanding, the right technology can be applied to the task, and the investment dollars will make sense.

The basics

The diagram in figure 1 below illustrates the basics of optical measurement: lighting, optics, an XY stage, and a Z axis that handles the focus.


By Mario Perez-Wilson


The process potential index, or Cp, measures a process's potential capability, which is defined as the allowable spread over the actual spread. The allowable spread is the difference between the upper specification limit and the lower specification limit. The actual spread is determined from the process data collected and is calculated as six times the standard deviation, s. The standard deviation quantifies a process's variability. As the standard deviation of a process increases, the Cp decreases in value. As the standard deviation decreases (i.e., as the process becomes less variable), the Cp increases in value.

By convention, when a process has a Cp value less than 1.0, it is considered potentially incapable of meeting specification requirements. Conversely, when a process Cp is greater than or equal to 1.0, the process has the potential of being capable.

Ideally, the Cp should be as high as possible. The higher the Cp, the lower the variability with respect to the specification limits. In a process qualified as a Six Sigma process (i.e., one that allows plus or minus six standard deviations within the specification limits), the Cp is greater than or equal to 2.0.
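As a minimal worked example (the specification limits and measurements below are hypothetical, not taken from the article), the following Python sketch computes Cp as the allowable spread divided by the actual spread:

```python
import statistics

# Hypothetical data for illustration only -- the spec limits and measurements
# below are assumed, not drawn from any real process.
usl = 10.5   # upper specification limit
lsl = 9.5    # lower specification limit
measurements = [9.8, 10.2, 10.1, 9.9, 10.15, 9.85, 10.05, 9.95, 10.1, 9.9]

s = statistics.stdev(measurements)   # sample standard deviation (actual spread = 6 * s)
cp = (usl - lsl) / (6 * s)           # allowable spread divided by actual spread

print(f"standard deviation s = {s:.3f}")
print(f"Cp = {cp:.2f}")              # >= 1.0: potentially capable; >= 2.0: Six Sigma level
```

With these numbers, Cp works out to roughly 1.2: potentially capable, but well short of the 2.0 associated with a Six Sigma process.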


By Dave K. Banerjea

If your company is involved in manufacturing, chances are that a good portion of your company's assets include measurement and test equipment (M&TE). This includes everything from simple go/no-go plug gauges to air-pressure gauges, voltmeters, micrometers and calipers on up to very sophisticated equipment such as robotic coordinate measurement machines and scanning electron microscopes.

M&TE are those assets your company uses to make critical decisions on whether to pass or fail incoming materials, in-process work and finished goods.

Of course, M&TE itself must be periodically inspected, tested and calibrated as part of the quality process. Poor or unreliable measurements result in faulty decisions and questionable product quality. Calibration management software can be crucial to maintaining equipment accuracy and keeping test equipment properly calibrated.

Calibration management software saves time, effort and money. Computerizing your calibration records makes them instantly available in the event of product quality problems or a quality system audit.


By William H. Denney, Ph.D.

“We are going to win, and the industrial West is going to lose: There’s nothing much you can do about it because the reasons for your failure are within yourselves.”

--Konosuke Matsushita  

They work tirelessly to change our world irreversibly. If they succeed at what they're doing and aren't challenged, our way of life as we know it will end. While we whine about our bosses, our organizations, and our government; while we do the minimum that our jobs require; while we flip-flop through the mall and watch Oprah, they're planning, learning, and executing. When we're tucked away in our beds, tossing and turning in restless sleep, they're even busier. They don't seem to tire; their passion is relentless. To them, weekends and holidays are inconsequential in their desire to have what we have.

We’re at war, but we seem oblivious to it. Our children’s future, our families, even our liberties are at risk, but for now, apathy is our primary defense. Secure in our ignorance of what’s happening far away, we think that we’re safe. But we’re not.


By Charles Wells

Most in the electronic manufacturing services industry are acutely aware of the growing problem of counterfeit and substandard electronic components within the supply chain, as well as the headaches that they cause.

Although industry and governments are working diligently to address counterfeiting, you may already have one of the most useful tools for combating phony parts right on your production floor.


By Chris Eckert

Manufacturers' efforts to do more with less have resulted in purchasing departments sourcing cheaper products and parts, often from overseas. Such cost-cutting certainly makes purchasing look good to management. But the effect on quality professionals may be just the opposite: product or part defects, malfunctions, or undesirable side effects, not to mention the challenge of producing high-quality end products within narrow timelines and budgets. Sleepless nights are a frequent outcome.

Because cost cutting and global sourcing are here to stay, how can quality professionals combat these monumental challenges? Root cause analysis (RCA), when fully utilized, can eliminate defects in your operations as well as defects that you inherit from suppliers, ultimately helping to maintain a satisfied and engaged customer base.