UW News

May 9, 2016

This five-fingered robot hand learns to get a grip on its own

This five-fingered robot hand can learn how to perform dexterous manipulation — like spinning a tube full of coffee beans — on its own, rather than having humans program its actions. (University of Washington)

Robots today can perform space missions, solve a Rubik’s cube, sort hospital medication and even make pancakes. But most can’t manage the simple act of grasping a pencil and spinning it around to get a solid grip.

Intricate tasks that require dexterous in-hand manipulation — rolling, pivoting, bending, sensing friction and other things humans do effortlessly with our hands — have proved notoriously difficult for robots.

Now, a University of Washington team of computer scientists and engineers has built a robot hand that can not only perform dexterous manipulation but also learn from its own experience without needing humans to direct it. Their latest results are detailed in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.

“Hand manipulation is one of the hardest problems that roboticists have to solve,” said lead author Vikash Kumar, a UW doctoral student in computer science and engineering. “A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper.”

By contrast, the UW research team spent years custom building one of the most highly capable five-fingered robot hands in the world. Then they developed an accurate simulation model that enables a computer to analyze movements in real time. In their latest demonstration, they applied the model to the hardware and to real-world tasks like rotating an elongated object.

With each attempt, the robot hand gets progressively more adept at spinning the tube, thanks to machine learning algorithms that help it model both the basic physics involved and plan which actions it should take to achieve the desired result. (This demonstration begins at 1:47 in the video below.)
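
In outline, that learning loop alternates between acting and modeling: attempt the task, record the resulting state and action transitions, fit a dynamics model to them, then plan the next attempt against the fitted model. The Python sketch below shows the pattern; the linear toy system, the random-shooting planner and every dimension in it are illustrative assumptions, not the team's actual models or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 4, 2, 10, 256

# A hidden linear system stands in for the real hand/object physics,
# which the learner never sees directly.
A_true = 0.95 * np.eye(STATE_DIM)
B_true = 0.1 * rng.standard_normal((STATE_DIM, ACTION_DIM))

def step(x, u):
    """One step of the 'real' (unknown-to-the-learner) dynamics."""
    return A_true @ x + B_true @ u + 0.01 * rng.standard_normal(STATE_DIM)

def fit_model(xs, us, xs_next):
    """Least-squares fit of x' ≈ A x + B u from recorded transitions."""
    inputs = np.hstack([xs, us])                    # shape (N, STATE+ACTION)
    theta, *_ = np.linalg.lstsq(inputs, xs_next, rcond=None)
    return theta.T[:, :STATE_DIM], theta.T[:, STATE_DIM:]   # A_hat, B_hat

def plan(x0, A_hat, B_hat, goal):
    """Random-shooting planner: keep the action sequence whose rollout
    under the fitted model ends closest to the goal."""
    best_cost, best_seq = np.inf, None
    for _ in range(N_CANDIDATES):
        seq = 0.5 * rng.standard_normal((HORIZON, ACTION_DIM))
        x = x0
        for u in seq:
            x = A_hat @ x + B_hat @ u
        cost = float(np.sum((x - goal) ** 2))
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

# Alternate between attempting the task (gathering data) and refitting
# the model, so each attempt plans against a better physics estimate.
goal = np.ones(STATE_DIM)
xs, us, xs_next = [], [], []
x = np.zeros(STATE_DIM)
A_hat = np.zeros((STATE_DIM, STATE_DIM))
B_hat = np.zeros((STATE_DIM, ACTION_DIM))

for trial in range(5):
    for u in plan(x, A_hat, B_hat, goal):
        x_new = step(x, u)
        xs.append(x); us.append(u); xs_next.append(x_new)
        x = x_new
    A_hat, B_hat = fit_model(np.array(xs), np.array(us), np.array(xs_next))
    print(f"trial {trial}: distance to goal = {np.linalg.norm(x - goal):.3f}")
```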

This autonomous learning approach developed by the UW Movement Control Laboratory contrasts with robotics demonstrations that require people to program each individual movement of the robot’s hand in order to complete a single task.

“Usually people look at a motion and try to determine what exactly needs to happen — the pinky needs to move that way, so we’ll put some rules in and try it and if something doesn’t work, oh the middle finger moved too much and the pen tilted, so we’ll try another rule,” said senior author and lab director Emo Todorov, UW associate professor of computer science and engineering and of applied mathematics.

“It’s almost like making an animated film — it looks real but there was an army of animators tweaking it,” Todorov said. “What we are using is a universal approach that enables the robot to learn from its own movements and requires no tweaking from us.”

UW computer science and engineering doctoral student Vikash Kumar custom built this robot hand, which has 40 tendons, 24 joints and more than 130 sensors. (University of Washington)

Building a dexterous, five-fingered robot hand poses challenges in both design and control. The first involved building a mechanical hand with enough speed, strength, responsiveness and flexibility to mimic basic behaviors of a human hand.

The UW’s dexterous robot hand — which the team built at a cost of roughly $300,000 — uses a Shadow Hand skeleton actuated with a custom pneumatic system and can move faster than a human hand. It is too expensive for routine commercial or industrial use, but it allows the researchers to push core technologies and test innovative control strategies.

“There are a lot of chaotic things going on and collisions happening when you touch an object with different fingers, which is difficult for control algorithms to deal with,” said co-author Sergey Levine, UW assistant professor of computer science and engineering, who worked on the project as a postdoctoral fellow at the University of California, Berkeley. “The approach we took was quite different from a traditional controls approach.”

The team first developed algorithms that allowed a computer to model highly complex five-fingered behaviors and plan movements to achieve different outcomes — like typing on a keyboard or dropping and catching a stick — in simulation.
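
One generic way to plan movements against a simulator is sampling-based trajectory optimization. The sketch below uses the cross-entropy method on a toy pendulum swing-up; both the method choice and the toy dynamics are assumptions made for illustration, not the team's actual hand simulation or optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)
HORIZON, N_SAMPLES, N_ELITE, N_ITERS, DT = 30, 200, 20, 10, 0.05

def simulate(theta, omega, torques):
    """Toy pendulum rollout (theta measured from upright); returns final state."""
    for u in torques:
        omega = omega + DT * (9.8 * np.sin(theta) + u)
        theta = theta + DT * omega
    return theta, omega

def cost(torques):
    """How far the simulated end state is from balanced upright."""
    theta, omega = simulate(np.pi, 0.0, torques)    # start hanging down
    err = np.arctan2(np.sin(theta), np.cos(theta))  # wrapped angle error
    return err ** 2 + 0.1 * omega ** 2

# Cross-entropy method: sample torque sequences, keep the best ("elite")
# ones, refit the sampling distribution around them, and repeat.
mean, std = np.zeros(HORIZON), 2.0 * np.ones(HORIZON)
for _ in range(N_ITERS):
    samples = mean + std * rng.standard_normal((N_SAMPLES, HORIZON))
    costs = np.array([cost(s) for s in samples])
    elite = samples[np.argsort(costs)[:N_ELITE]]
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

print(f"planned swing-up cost: {cost(mean):.4f}")
```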

The research team from the UW Movement Control Laboratory includes (left to right) Emo Todorov, associate professor of computer science and engineering and of applied mathematics; Vikash Kumar, doctoral student in computer science and engineering; and Sergey Levine, assistant professor of computer science and engineering. (University of Washington)

Most recently, the research team has transferred the models to work on the actual five-fingered hand hardware, which never behaves exactly as the simulation predicts. As the robot hand performs different tasks, the system collects data from various sensors and motion-capture cameras and employs machine learning algorithms to continually refine and develop more realistic models.
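
A minimal sketch of that continual refinement, assuming a linear dynamics model updated by recursive least squares (RLS): each new observed transition is folded into the estimate without refitting from scratch. The real system learns far richer models from its sensor and motion-capture streams; RLS and the hidden linear system here are illustrative stand-ins.

```python
import numpy as np

STATE_DIM, ACTION_DIM = 4, 2
FEAT = STATE_DIM + ACTION_DIM
rng = np.random.default_rng(2)

W = np.zeros((STATE_DIM, FEAT))   # current model estimate, x' ≈ W @ [x; u]
P = 100.0 * np.eye(FEAT)          # parameter covariance (large = uncertain)

def rls_update(W, P, x, u, x_next):
    """One RLS step: correct W in the direction of the prediction error."""
    z = np.concatenate([x, u])                # regression features
    gain = P @ z / (1.0 + z @ P @ z)          # Kalman-style gain vector
    error = x_next - W @ z                    # prediction error on this sample
    W = W + np.outer(error, gain)
    P = P - np.outer(gain, z @ P)
    return W, P

# A stream of transitions from a hidden linear system stands in for
# sensor and motion-capture data arriving during hardware trials.
A_true = 0.9 * np.eye(STATE_DIM) + 0.05 * rng.standard_normal((STATE_DIM, STATE_DIM))
B_true = 0.3 * rng.standard_normal((STATE_DIM, ACTION_DIM))
W_true = np.hstack([A_true, B_true])

x = rng.standard_normal(STATE_DIM)
for t in range(200):
    u = rng.standard_normal(ACTION_DIM)
    x_next = W_true @ np.concatenate([x, u]) + 0.01 * rng.standard_normal(STATE_DIM)
    W, P = rls_update(W, P, x, u, x_next)
    x = x_next

print(f"model error after 200 samples: {np.linalg.norm(W - W_true):.4f}")
```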

“It’s like sitting through a lesson, going home and doing your homework to understand things better and then coming back to school a little more intelligent the next day,” said Kumar.

So far, the team has demonstrated local learning with the hardware system — which means the hand can continue to improve at a discrete task that involves manipulating the same object in roughly the same way. Next steps include beginning to demonstrate global learning — which means the hand could figure out how to manipulate an unfamiliar object or a new scenario it hasn’t encountered before.

The research was funded by the National Science Foundation and the National Institutes of Health.

For more information, contact Kumar at vikash@cs.washington.edu.
