Jeffrey Heimgartner


Advances in Machine Vision Enable Automation of Quality Inspections

Experts from Amazon Web Services and Elementary Robotics share insights about automated inspections

Published: Thursday, September 9, 2021 - 12:03

During the Association for Advancing Automation (A3) Vision Week in June 2021, experts from Amazon Web Services and Elementary Robotics weighed in on the traditional challenges organizations face when using machine vision. They discussed how to incorporate the latest advances, including the cloud, to make the process easier, faster, and able to solve seemingly unsolvable quality inspections.

Difficult quality inspections across industries have traditionally been performed manually. Although it may be easy to put a person at the end of a production line, humans are inherently subjective and prone to error. Machine vision has proven itself a valuable tool for addressing those issues while also lowering the costs of inspection.

“The industry has shifted to machine learning for some challenging problems,” says Dan Pipe-Mazo, CTO at Elementary Robotics. “We might have an inconsistent product where it’s hard to quantify or qualify rules. We might have examples to train a system on, but only limited defects. When we have a challenging product, we have a challenging configuration. With machine learning, it is no longer a rules-based configuration.”

Unfortunately, machine vision systems are usually purpose-built and work well only for certain use cases. If a different defect type or production line is introduced, the entire system requires reprogramming or recalibration, increasing up-front costs and limiting scalability.

For example, an automotive customer has inspection points and processes running from stamping to welding to painting to overall end inspection. Each station often has its own quality assurance process, with different defects and approaches.

For businesses seeking to incorporate machine vision, these varied inspection points raise the question of how to build a system that can scale across the different inspection types while keeping costs in check. The answer often comes down to three key challenges: configuring, running, and maintaining models.

Challenges with machine vision

Traditionally, thousands of images are required for machine learning to find defects. This often means engineers going on site to capture images and uploading them for training. Because the images are hard to produce, requiring equipment set up at the right angles and with proper lighting, the process takes significant time.

Monitoring and maintenance bring their own complexities, often forcing reconfiguration when nonstandard or difficult variations appear. If the product looks like the initial images, the model will typically perform fine. Yet any slight variance, such as changed lighting, a color tint, or a bumped camera, can keep the model from performing as it did in its original training. Making those quick corrective actions again requires someone on site.
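
One simple way to watch for this kind of drift, separate from the model itself, is to track basic image statistics against a baseline from the training images. The sketch below is illustrative only, not Elementary Robotics’ method: the frames are synthetic and the threshold is arbitrary; a real deployment would tune both per production line.

```python
import numpy as np

def channel_stats(image: np.ndarray) -> np.ndarray:
    """Per-channel mean and std: a compact signature of lighting and tint."""
    img = image.astype(np.float32) / 255.0
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def drift_score(image: np.ndarray, baseline: np.ndarray) -> float:
    """Distance between a new frame's signature and the training baseline."""
    return float(np.linalg.norm(channel_stats(image) - baseline))

rng = np.random.default_rng(0)

# Stand-in for the images the model was trained on (HxWx3 uint8 frames).
train_frames = [rng.integers(100, 160, (64, 64, 3), dtype=np.uint8)
                for _ in range(20)]
baseline = np.mean([channel_stats(f) for f in train_frames], axis=0)

# A new frame whose lighting has shifted, e.g., a lamp dimming over time.
dim_frame = (train_frames[0] * 0.6).astype(np.uint8)

THRESHOLD = 0.1  # arbitrary here; tuned per line in practice
if drift_score(dim_frame, baseline) > THRESHOLD:
    print("Drift detected: lighting/tint differs from training conditions.")
```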

Recent advances have helped to solve the challenge of configuring, maintaining, and monitoring for optimal performance.

“We can have an IoT-connected, cloud-based machine learning platform,” says Pipe-Mazo. “We are leveraging all those technologies to mitigate these challenges. Also on configuring, by leveraging the IoT cloud, we no longer need to make a trip to the camera to configure and set up. We can do that all remotely. With a cloud-based system, you can be constantly ingesting and monitoring data to take quick action.”

Advanced solutions

Amazon Web Services (AWS) launched Amazon Lookout for Vision in February 2021 with the goal of making scalability easier.

“We know that, historically, machine learning takes hundreds if not thousands of images to identify defects at the right scale,” says Anant Patel, senior product manager-technical at AWS.

“Our lower bar is only 30 images. It’s a great way to get started and just see how the models are working and whether they need more images from there. Running—if using a third party, you have to buy purpose-built cameras up front that you have to calibrate. Maintaining—environmental conditions are different and change. Being able to maintain and improve models over time is critical to long-term success and reducing operational costs.”

Amazon Lookout for Vision is an easy-to-use, cohesive service that analyzes images by using computer vision and machine learning to detect defects and anomalies in manufactured products. With as few as 30 images, customers are able to quickly spot manufacturing and production defects, and prevent costly errors from moving down the line.

Amazon Lookout for Vision enables customers to create, run, and maintain a machine-vision inspection platform with ease and minimal up-front costs. (Image courtesy of Amazon Web Services.)
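
For a sense of what this looks like in code, here is a minimal sketch of querying a trained Lookout for Vision model through boto3, the AWS SDK for Python. The project name, model version, and image file are placeholders, and the model is assumed to already be trained in the console.

```python
import boto3

# Placeholders: substitute your own project and trained model version.
PROJECT = "widget-line-inspection"
MODEL_VERSION = "1"

client = boto3.client("lookoutvision", region_name="us-east-1")

# The model must be hosted before inference. start_model is asynchronous;
# in practice you poll describe_model until the status is HOSTED.
client.start_model(ProjectName=PROJECT, ModelVersion=MODEL_VERSION,
                   MinInferenceUnits=1)

with open("frame_0001.jpg", "rb") as f:  # placeholder image from the line
    response = client.detect_anomalies(
        ProjectName=PROJECT,
        ModelVersion=MODEL_VERSION,
        Body=f.read(),
        ContentType="image/jpeg",
    )

result = response["DetectAnomalyResult"]
if result["IsAnomalous"]:
    # Route the part for operator review, rework, or scrap.
    print(f"Defect flagged with confidence {result['Confidence']:.2f}")

# Stop the hosted model when the shift ends to avoid paying for idle capacity.
client.stop_model(ProjectName=PROJECT, ModelVersion=MODEL_VERSION)
```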

According to Patel, key benefits of Amazon Lookout for Vision include speed of deployment along with the ability to handle diverse conditions and incorporate different use cases.

“By allowing the same foundational science to be used across different use cases, you are training a custom model based on the set of images you bring in,” he says. “You can then configure product type and defect for each specific use case. It isn’t going to solve everything, but once that anomaly is flagged, you can ask: How do we improve decisions for an operator? How do we improve for kicking off that defective product so it never gets to the end user? Do I need to rework? Do I need to scrap?”

These technological advances are enabling Elementary Robotics to solve problems that were once unsolvable, such as detecting a slight color variance.

“Using a color filter, we can set the boundaries just right, so [for example] it’s not picking up any granola but picking up a slight piece of debris,” says Dat Do, head of machine learning at Elementary Robotics. “If we apply those settings to brown, it’s not able to pick up because there is not sufficient contrast. When we look at a learning-based detection method, it finds it quite effectively even though there is not much contrast. The reason is that we can key in on shape and texture in addition to color.”
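
To see why the rules-based approach hits a wall, consider a classical color filter. The sketch below uses illustrative HSV bounds and a synthetic frame, not the actual settings Do describes: it flags debris only when its color falls outside the product’s color band, so low-contrast brown-on-brown debris slips through. That gap is what learned shape and texture features close.

```python
import cv2
import numpy as np

def debris_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Classical color filter: flag pixels outside the product's color band.

    Works only when debris contrasts with the product; a brown fragment
    on brown granola falls inside the band and is missed.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Illustrative HSV band for the "good" product color (tuned per line).
    lower, upper = np.array([10, 60, 80]), np.array([30, 255, 255])
    product = cv2.inRange(hsv, lower, upper)
    return cv2.bitwise_not(product)  # everything that isn't product-colored

# Synthetic stand-in frame; in production this comes from the camera.
frame = np.full((64, 64, 3), (40, 90, 160), dtype=np.uint8)  # brown product
frame[30:34, 30:34] = (200, 60, 60)  # bluish debris: high contrast, caught

mask = debris_mask(frame)
print("debris pixels flagged:", int(np.count_nonzero(mask)))
```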

Along with requiring fewer images, the power and computational capacity of the cloud allow for training on only good images. That dataset of good images is mapped by the neural network, which learns that space. When a bad image appears, the network places it far away, making it easier to determine whether it is good or bad.
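
The idea can be sketched with raw pixels standing in for the learned embedding and nearest-neighbor distance standing in for whatever scoring a production platform actually uses; the data below is synthetic. Only good images are stored, and anything that lands far from all of them is suspect.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in embedding: flattened, normalized pixels. In practice this
    would be the feature vector a trained neural network produces."""
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

# "Train" on good images only: store their embeddings as the known-good space.
good_images = [rng.normal(128, 5, (32, 32)).clip(0, 255) for _ in range(50)]
good_bank = np.stack([embed(im) for im in good_images])

def anomaly_score(image: np.ndarray) -> float:
    """Distance to the nearest good embedding; defects land far away."""
    distances = np.linalg.norm(good_bank - embed(image), axis=1)
    return float(distances.min())

ok = rng.normal(128, 5, (32, 32)).clip(0, 255)
bad = ok.copy()
bad[10:20, 10:20] = 255  # bright blemish the training set never contained

print(f"good: {anomaly_score(ok):.4f}  defective: {anomaly_score(bad):.4f}")
```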

Use cases

While surface issues such as scratches and holes can be seen by the human eye, shape deviations, missing components, or process issues can easily go undetected.

One of Patel’s use-case scenarios included helping GE Healthcare with scalability in process control. Although the company builds CT and MRI machines on a small scale, the inspections must be of the highest quality. Different objects are placed on a machine, scans are run, and then analysis is carried out with up to 3,000 images per screen. Traditionally, an individual would sit and review all 3,000 images and verify that there were no defects. By automating the process, the operator—a subject-matter expert—can focus on specific defects that are identified. If what is identified is not a defect, the system can be retrained, boosting confidence that it will catch all future defects.

“New advancements like self-supervised training allow [us] to initialize our neural network weights to a good place,” says Do. “We have a neural network with millions of parameters, so it needs lots of data to see where to set network weights. We can create a pretext—remove color from images, feed black and white, and train to reproduce color images. We can take those weights and train on the actual task. In this case, we only used one image per class and five images in total. We were able to achieve an accuracy of 99.87 percent on an entire dataset of 700 images.”
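
The colorization pretext task Do describes can be sketched in a few lines of PyTorch. This is a toy version, not Elementary Robotics’ architecture: random tensors stand in for real unlabeled production images, and the networks are deliberately tiny. The point is the weight reuse, where the encoder trained on the pretext task seeds the downstream classifier.

```python
import torch
import torch.nn as nn

# Shared feature extractor: learns from unlabeled images via the pretext
# task, then seeds a classifier fine-tuned on very few labeled examples.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
colorizer = nn.Sequential(encoder, nn.Conv2d(32, 3, 3, padding=1))

opt = torch.optim.Adam(colorizer.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Pretext training: predict the color image from its grayscale version.
for _ in range(100):
    color = torch.rand(8, 3, 32, 32)        # stand-in for unlabeled RGB images
    gray = color.mean(dim=1, keepdim=True)  # "remove color"
    opt.zero_grad()
    loss = loss_fn(colorizer(gray), color)  # "train to reproduce color"
    loss.backward()
    opt.step()

# Downstream task: reuse the pretrained encoder, add a small head, and
# fine-tune on a handful of labeled images (five in the talk's example).
classifier = nn.Sequential(
    encoder,                                 # weights initialized by pretext
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),                        # five classes, per the talk
)
```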

Conclusion

From finding a small particle on a grain to detecting a bottleneck in a production line, machine vision has come a long way. For the innovators behind the scenes, the ultimate goal is to make incorporating these advances easy and seamless.

“Our intention is to make it simple to use, for anyone from nontechnical to machine learning expert,” Patel says. “It is a simple process, and it is a fully managed service, so everything can be done in the console. If you have existing images, you can use them. Once you bring them in, we offer the ability to label [an item] as normal or anomaly directly in the console. You can train models and get evaluation results, which allow you to determine if you have enough data.”

Incorporating the cloud puts a dashboard in anyone’s hands without the added time of going on site. It also gives time back to subject-matter experts, who can annotate anomalies and lend their expertise to further increase the flexibility and scalability of quality inspections. Ease of use has also greatly improved, making the next era of machine vision a reality for businesses of any size or scope.

First published July 22, 2021, on the engineering.com blog.


About The Author

Jeffrey Heimgartner

Jeffrey Heimgartner has more than 20 years of experience in the computer-aided drafting and design field. He manages Advanced Technical Services, a drafting and design firm based in Lincoln, Nebraska. His main responsibilities include managing the CAD team, sales, scheduling and coordinating projects, drafting and design, marketing, and all IT functions.