Autonomous assembly planning of demonstrated skills with reinforcement learning in simulation

Publisher
Springer Journals
Copyright
Copyright © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021
ISSN
0929-5593
eISSN
1573-7527
DOI
10.1007/s10514-021-10020-x

Abstract

Industrial robots used to assemble customized products in small batches require frequent reprogramming. With this work, we aim to reduce programming complexity by autonomously finding the fastest assembly plans that avoid collisions with the environment. First, a digital twin of the robot uses a gym in simulation to learn which assembly skills (programmed by demonstration) are physically possible, i.e., free of collisions with the environment. Only within this reduced solution space does the physical twin then search for the fastest assembly plans. Experiments show that the system indeed converges to the fastest assembly plans. Moreover, pre-training in simulation drastically reduces the number of interactions needed before convergence compared to learning directly on the physical robot. This two-step procedure allows the robot to autonomously find correct and fast assembly sequences without additional human input or mismanufactured products.
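The two-step procedure described in the abstract — a simulated digital twin pruning infeasible skills, then the physical twin learning the fastest sequence within that reduced space — can be illustrated with a toy tabular sketch. Everything below is hypothetical and not the paper's implementation: the part names, the `collides` oracle (standing in for the simulated collision checks), the `duration` function (standing in for timings measured on the physical robot), and the use of tabular Q-learning are all illustrative assumptions.

```python
import random

# Hypothetical toy product with three parts to assemble.
PARTS = ["base", "bracket", "cover"]

def collides(assembled, part):
    # Hypothetical ground truth available to the *simulated* twin:
    # placing the cover collides with the gripper unless the bracket
    # is already in place.
    return part == "cover" and "bracket" not in assembled

def duration(assembled, part):
    # Hypothetical skill durations (seconds), observed only on the
    # *physical* twin: approaching around an unplaced base is slower.
    d = {"base": 4.0, "bracket": 2.0, "cover": 3.0}[part]
    if part != "base" and "base" not in assembled:
        d += 3.0
    return d

def learn_feasible_actions():
    # Step 1: the digital twin explores in simulation and records which
    # (state, skill) pairs are collision-free, pruning the solution space.
    feasible = {}
    def explore(assembled):
        if assembled in feasible:
            return
        feasible[assembled] = [p for p in PARTS
                               if p not in assembled
                               and not collides(assembled, p)]
        for p in feasible[assembled]:
            explore(assembled | {p})
    explore(frozenset())
    return feasible

def learn_fastest_plan(feasible, episodes=500, alpha=0.5, gamma=1.0, eps=0.2):
    # Step 2: the physical twin runs Q-learning restricted to feasible
    # skills, with negative duration as reward, so the greedy policy
    # converges to the fastest collision-free plan.
    q = {}
    for _ in range(episodes):
        state = frozenset()
        while len(state) < len(PARTS):
            actions = feasible[state]
            if random.random() < eps:
                a = random.choice(actions)          # explore
            else:
                a = max(actions, key=lambda p: q.get((state, p), 0.0))
            nxt = state | {a}
            reward = -duration(state, a)            # faster skills cost less
            future = 0.0
            if len(nxt) < len(PARTS):
                future = max(q.get((nxt, p), 0.0) for p in feasible[nxt])
            q[(state, a)] = (1 - alpha) * q.get((state, a), 0.0) \
                + alpha * (reward + gamma * future)
            state = nxt
    # Greedy rollout of the learned policy.
    plan, state = [], frozenset()
    while len(state) < len(PARTS):
        a = max(feasible[state], key=lambda p: q.get((state, p), 0.0))
        plan.append(a)
        state = state | {a}
    return plan
```

In this toy setting the simulation phase rules out any plan that places the cover before the bracket, and the physical phase then learns that starting with the base is fastest, yielding the plan base → bracket → cover. The key design point mirrored from the abstract is that the expensive physical learning only ever searches inside the collision-free space found in simulation.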

Journal

Autonomous Robots (Springer Journals)

Published: Oct 16, 2021

Keywords: Reinforcement learning; Digital twin; Assembly planning; Programming by demonstration