First pancakes, then people.

From Vimeo:


The video shows a Barrett WAM 7-DOF manipulator learning to flip pancakes via reinforcement learning.
The motion is encoded as a mixture of basis force fields through an extension of Dynamic Movement Primitives (DMP) that represents the synergies across the different variables with stiffness matrices. An inverse dynamics controller with variable stiffness is used for reproduction.
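
For readers curious what such an encoding might look like in code, here is a rough, illustrative sketch (not the authors' implementation): Gaussian basis functions over a decaying phase variable weight a set of spring-damper attractors, each with its own stiffness matrix. All names, gains, and dimensions below are made up for the example.

```python
import numpy as np

def basis_activations(s, centers, width):
    """Gaussian basis activations over the phase variable s, normalized to sum to 1."""
    h = np.exp(-width * (s - centers) ** 2)
    return h / h.sum()

def dmp_acceleration(x, x_dot, s, centers, width, attractors, K_p, K_v):
    """Acceleration command: sum_i h_i(s) * K_p[i] @ (mu_i - x) - K_v @ x_dot."""
    h = basis_activations(s, centers, width)
    acc = -K_v @ x_dot
    for h_i, mu_i, K_i in zip(h, attractors, K_p):
        acc += h_i * (K_i @ (mu_i - x))
    return acc

# Illustrative 2-D example with 5 basis functions (values are arbitrary).
dim, n_basis = 2, 5
centers = np.linspace(1.0, 0.0, n_basis)                 # basis centers along the phase
attractors = [np.random.randn(dim) for _ in range(n_basis)]
K_p = [np.diag([100.0, 100.0]) for _ in range(n_basis)]  # per-basis stiffness matrices
K_v = np.diag([20.0, 20.0])                              # damping matrix

x, x_dot, s, dt = np.zeros(dim), np.zeros(dim), 1.0, 0.01
for _ in range(100):
    acc = dmp_acceleration(x, x_dot, s, centers, 50.0, attractors, K_p, K_v)
    x_dot += acc * dt
    x += x_dot * dt
    s *= np.exp(-2.0 * dt)   # canonical system: exponentially decaying phase
```

Because the stiffness matrices are part of the representation, the same structure can produce stiff behavior in one phase of the motion and compliant behavior in another, which is exactly what the pancake task requires.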

The skill is first demonstrated via kinesthetic teaching and then refined with the Policy learning by Weighting Exploration with the Returns (PoWER) algorithm. Unlike policy-gradient approaches, PoWER treats the reward as a pseudo-probability, which allows the reinforcement learning update to be computed with probabilistic estimation methods such as Expectation-Maximization (EM).
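
To make the EM-style update concrete, here is a toy, hypothetical sketch of a PoWER-like step: the return of each exploratory rollout is normalized into a weight, and the new policy parameters are the reward-weighted average of the explored ones. The actual algorithm adds importance sampling over the best rollouts and per-basis weighting, so treat this purely as an illustration.

```python
import numpy as np

def power_update(theta, exploration_noise, returns):
    """
    theta:              current policy parameters, shape (n_params,)
    exploration_noise:  per-rollout perturbations eps_k, shape (n_rollouts, n_params)
    returns:            non-negative return R_k of each rollout, shape (n_rollouts,)
    The return acts as a pseudo-probability: the new parameters are the
    reward-weighted mean of the explored parameters (an EM-like M-step).
    """
    w = np.asarray(returns, dtype=float)
    w = w / (w.sum() + 1e-12)                 # normalize returns into weights
    return theta + w @ exploration_noise      # reward-weighted average of the noise

# Toy usage: climb a quadratic "return" around an unknown optimum.
rng = np.random.default_rng(0)
optimum = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for _ in range(50):
    eps = rng.normal(scale=0.3, size=(10, 3))                         # exploration around theta
    rewards = np.exp(-np.sum((theta + eps - optimum) ** 2, axis=1))   # pseudo-probabilities
    theta = power_update(theta, eps, rewards)
```

Because the update is a weighted average rather than a gradient step, no learning rate has to be tuned, which is part of what makes this family of methods practical for learning directly on the robot in a few dozen trials.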

After 50 trials, the robot learns that the first part of the task requires stiff behavior to toss the pancake into the air, while the second part requires the hand to be compliant in order to catch the pancake without it bouncing off the pan.

Video credits:
Dr Petar Kormushev
Dr Sylvain Calinon
(Italian Institute of Technology)
