
Robotic hand learns how to juggle

Updated: 2018-08-08 09:33
Dactyl, a system for manipulating objects, uses a robotic hand to hold a 3D-printed block at OpenAI, a nonprofit artificial intelligence lab. [Photo provided to China Daily]

Milestone research that trained a robot in a virtual environment may one day have real-world applications


Picture 3: Elon Musk, one of the investors behind OpenAI.

SAN FRANCISCO-How long does it take a robotic hand to learn to juggle a cube?

About 100 years, give or take.

That's how much virtual computing time it took researchers at OpenAI, the nonprofit artificial intelligence lab funded by Elon Musk and others, to train its disembodied hand. The team paid Google $3,500 to run its software on thousands of computers simultaneously, compressing the actual training time to 48 hours. After training the robot in a virtual environment, the team put it to the test in the real world.

The hand, called Dactyl, learned to move itself, the team of two dozen researchers disclosed this week. Its job was simply to adjust the cube so that one of its letters, "O", "P", "E", "N", "A" or "I", faces upward to match a random selection.


Ken Goldberg, a University of California, Berkeley robotics professor who is not affiliated with the project, said OpenAI's achievement is a big deal because it demonstrates how robots trained in a virtual environment could operate in the real world. His lab is trying something similar with a robot called Dex-Net, although its hand is simpler and the objects it manipulates are more complex.

"The key is the idea that you can make so much progress in simulation," he said. "This is a plausible path forward, when doing physical experiments is very hard."

Dactyl's real-world fingers are tracked by infrared dots and cameras. In training, every simulated movement that brought the cube closer to the goal earned Dactyl a small reward. Dropping the cube incurred a penalty 20 times as large.

The process is called reinforcement learning. The robot software repeats the attempts millions of times in a simulated environment, trying over and over to get the highest reward. OpenAI used roughly the same algorithm it used to beat human players in the video game Dota 2.
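The reward-and-penalty scheme described above can be sketched in a few lines. This is an illustrative assumption, not OpenAI's actual code: the function name, the use of angle-to-goal as the progress measure, and the per-step reward value are all hypothetical; only the 20-times-larger drop penalty comes from the article.

```python
# Hypothetical sketch of a reinforcement-learning reward function for the
# cube-rotation task. Only the 20x drop penalty is taken from the article;
# everything else (names, values, progress measure) is an assumption.

def reward(prev_angle_to_goal, new_angle_to_goal, dropped,
           step_reward=1.0, drop_factor=20.0):
    """Return a small reward when the cube rotates closer to the target
    orientation, and a penalty 20 times as large when the cube is dropped."""
    if dropped:
        # Dropping the cube is penalized 20 times as heavily as the step reward.
        return -drop_factor * step_reward
    # Progress toward the goal orientation earns the small step reward.
    return step_reward if new_angle_to_goal < prev_angle_to_goal else 0.0
```

Over millions of simulated attempts, the learning algorithm adjusts the hand's policy to maximize the cumulative value of a signal like this one.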

In real life, a team of researchers worked for around a year to get the mechanical hand to this point. So why was it so difficult?

For one, the simulated environment doesn't model friction. So even though its real fingers are rubbery, Dactyl lacks the human ability to form an appropriate grip.
