Rachel Gordon

Innovation

Dexterous Robotic Hands Manipulate Thousands of Objects With Ease

This deft ‘handiwork’ could be an asset in speeding up logistics and manufacturing

Published: Tuesday, November 30, 2021 - 12:02

First published Nov. 5, 2021, on MIT CSAIL News. 

At just 1 year old, a baby is more dexterous than a robot. Machines can do more than simply pick up and put down objects, but they still fall short of replicating a human’s natural pull toward exploratory or sophisticated dexterous manipulation.

OpenAI gave it a try with “Dactyl” (meaning “finger,” from the Greek daktylos), using its humanoid robot hand to solve a Rubik’s Cube with software that’s a step toward more general AI and a step away from the common single-task mentality. DeepMind created “RGB-Stacking,” a vision-based system that challenges a robot to learn how to grab items and stack them.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in the ever-present quest to get machines to replicate human abilities, created a framework that’s more scaled up: a system that can reorient more than 2,000 different objects, with the robotic hand facing both upward and downward. The ability to manipulate anything from a cup to a tuna can to a Cheez-It box could help the hand quickly pick and place objects in specific ways and locations, and even generalize to unseen objects.

This deft “handiwork,” which is usually limited to single tasks and upright positions, could be an asset in speeding up logistics and manufacturing, helping with common demands such as packing objects into slots for kitting or dexterously manipulating a wider range of tools. The team used a simulated, anthropomorphic hand with 24 degrees of freedom, and showed evidence that the system could be transferred to a real robotic system in the future.

“In industry, a parallel-jaw gripper is most commonly used, partially due to its simplicity in control, but it’s physically unable to handle many tools we see in daily life,” says MIT CSAIL Ph.D. student Tao Chen, member of the Improbable AI Lab and the lead researcher on the project. “Even using pliers is difficult because it can’t dexterously move one handle back and forth. Our system will allow a multifingered hand to dexterously manipulate such tools, which opens up a new area for robotics applications.” 

Give me a hand

This type of in-hand object reorientation has been a challenging problem in robotics because of the large number of motors to be controlled and the frequent changes in contact state between the fingers and the object. And with more than 2,000 objects, the model had a lot to learn.

The problem becomes even trickier when the hand is facing downward. Not only does the robot need to manipulate the object, it must also counteract gravity so the object doesn’t fall.

The team found that a simple approach could solve complex problems. They combined a model-free reinforcement learning algorithm, meaning the system must figure out value functions purely from its interactions with the environment rather than from a model of the environment’s dynamics, with deep learning and a “teacher-student” training method.
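
To make “model-free” concrete, the sketch below shows the idea in miniature, using tabular Q-learning on an invented toy task. The environment, reward, and hyperparameters are illustrative assumptions for exposition, not the paper’s setup.

```python
# Minimal model-free RL sketch (tabular Q-learning). The agent never builds
# a model of the environment's dynamics; its value estimates come only from
# transitions it actually experiences. Everything here is a toy illustration.
import random

class ToyReorientEnv:
    """Hypothetical stand-in task: step a 1-D 'orientation' toward a target."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action is -1 or +1
        self.state += action
        reward = 1.0 if self.state == 5 else -0.1
        done = self.state == 5 or abs(self.state) > 10
        return self.state, reward, done

q = {}  # (state, action) -> estimated value
alpha, gamma, eps = 0.1, 0.95, 0.2
env = ToyReorientEnv()

for episode in range(500):
    s, done = env.reset(), False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < eps:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q.get((s, act), 0.0))
        s2, r, done = env.step(a)
        # Model-free update: bootstrap from the observed reward and next state,
        # with no access to the environment's transition function.
        best_next = max(q.get((s2, act), 0.0) for act in (-1, 1))
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + alpha * (r + gamma * best_next - old)
        s = s2
```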

For this to work, the “teacher” network is trained on information about the object and robot that’s easily available in simulation but not in the real world, such as the location of fingertips or object velocity. To ensure that the robot can work outside of the simulation, the knowledge of the “teacher” is distilled into observations that can be acquired in the real world, such as depth images captured by cameras, object pose, and the robot’s joint positions. The team also used a “gravity curriculum,” in which the robot first learns the skill in a zero-gravity environment and the controller is then gradually adapted to normal gravity; taking things at this pace markedly improved overall performance.
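
As a rough illustration of the teacher-student distillation and the gravity curriculum, here is a hedged sketch in PyTorch. The network shapes, the split between privileged and real-world observations, and the ramp schedule are assumptions made for illustration, not the authors’ implementation.

```python
# Illustrative sketch of teacher-student distillation plus a gravity
# curriculum. Shapes, names, and the schedule are hypothetical.
import torch
import torch.nn as nn

# Teacher consumes privileged simulator state (e.g., fingertip locations,
# object velocity); student consumes only signals obtainable in the real
# world (depth-image features, object pose, joint positions).
teacher = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 24))
student = nn.Sequential(nn.Linear(48, 128), nn.ReLU(), nn.Linear(128, 24))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(privileged_obs, realworld_obs):
    """One distillation step: the student imitates the trained teacher."""
    with torch.no_grad():
        target = teacher(privileged_obs)  # teacher assumed already trained
    loss = nn.functional.mse_loss(student(realworld_obs), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def gravity_for_step(step, total_steps, g_earth=-9.81):
    """Gravity curriculum: ramp from zero gravity toward Earth gravity."""
    return min(1.0, step / total_steps) * g_earth
```

The design point worth noting is that the student’s inputs are restricted to signals a real robot could plausibly observe, which is what gives the distilled controller a chance of transferring out of simulation.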

Although it seems counterintuitive, a single controller (known as the “brain” of the robot) could reorient a large number of objects it had never seen before, with no knowledge of their shapes.

“We initially thought that visual perception algorithms for inferring shape while the robot manipulates the object was going to be the primary challenge,” says MIT professor Pulkit Agrawal, an author on the paper about the research. “To the contrary, our results show that one can learn robust control strategies that are shape agnostic. This suggests that visual perception may be far less important for manipulation than what we are used to thinking, and simpler perceptual processing strategies might suffice.”

Many small, round objects (apples, tennis balls, marbles) had close to 100-percent success rates when reoriented with the hand facing up and down. Unsurprisingly, the lowest success rates, closer to 30 percent, were for more complex objects such as a spoon, a screwdriver, or scissors.

Because success rates varied with object shape, the team notes that training the model on object shapes could improve performance in the future.

Chen wrote the paper alongside MIT CSAIL Ph.D. student Jie Xu and MIT professor Pulkit Agrawal. The research was funded by the Toyota Research Institute, an Amazon Research Award, and the DARPA Machine Common Sense program, and was presented at the Conference on Robot Learning (CoRL), held Nov. 8–11, 2021.

Reprinted with permission of MIT CSAIL News.

About The Author

Rachel Gordon

Rachel Gordon is the Communications and Media Officer at CSAIL, MIT’s Computer Science and Artificial Intelligence Laboratory. MIT CSAIL pioneers research in computing that improves the way people work, play, and learn.