Visually Guided Cooperative Robot Actions Based on Information Quality

Autonomous Robots, Volume 19 (1) – Jan 1, 2005

Publisher: Springer Journals
Copyright: © 2005 Springer Science + Business Media, Inc.
Subject: Engineering; Robotics and Automation; Artificial Intelligence (incl. Robotics); Computer Imaging, Vision, Pattern Recognition and Graphics; Control, Robotics, Mechatronics
ISSN: 0929-5593
eISSN: 1573-7527
DOI: 10.1007/s10514-005-6013-2

Abstract

In field environments it is usually not possible to provide robots in advance with valid geometric models of their environment and the locations of task elements. A robot or robot team must instead create and use these models to locate critical task elements by performing appropriate sensor-based actions. This paper presents a multi-agent algorithm for a manipulator guidance task based on cooperative visual feedback in an unknown environment. First, an information-based iterative algorithm plans the robot’s visual exploration strategy, enabling it to efficiently build 3D models of its environment and task elements. The algorithm uses the measured scene information to select the next camera position based on the expected new information content of each candidate pose, employing a metric derived from Shannon’s information theory to determine optimal sensing poses for the agent(s) mapping a highly unstructured environment. Second, after an appropriate environment model has been built, the quality of the information content in the model is used to determine the constraint-based optimum view for task execution. The algorithm is applicable to both an individual agent and multiple cooperating agents. Simulation and experimental demonstrations on a cooperative robot platform performing a two-component insertion/mating task in the field show the effectiveness of this algorithm.
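The abstract specifies the exploration metric only as "derived from Shannon’s information theory." As an illustration of how such a next-best-view score is commonly computed, the sketch below assumes a probabilistic occupancy-grid map and a hypothetical `visible_cells` visibility function; neither detail comes from the paper, and the paper’s actual metric and sensor model may differ.

```python
import math

def voxel_entropy(p):
    """Shannon entropy (bits) of one map cell with occupancy probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # a fully decided cell holds no remaining uncertainty
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def expected_new_information(pose, grid, visible_cells):
    """Score a candidate camera pose by the total entropy of the map cells
    it would observe; visible_cells(pose) stands in for the sensor model."""
    return sum(voxel_entropy(grid[c]) for c in visible_cells(pose))

def next_best_view(poses, grid, visible_cells):
    """Greedy next-best-view: the pose promising the most new information."""
    return max(poses, key=lambda p: expected_new_information(p, grid, visible_cells))

# Toy example: a 1-D "grid" of occupancy probabilities and two candidate
# poses, each seeing a different slice of cells.
grid = [0.5, 0.5, 0.9, 0.1, 0.5]
views = {"A": [0, 1], "B": [2, 3]}
print(next_best_view(["A", "B"], grid, lambda p: views[p]))
# -> "A": cells 0 and 1 are maximally uncertain (1 bit of entropy each)
```

In the iterative loop the abstract describes, the selected pose would be visited, the model updated with the new measurements, and the selection repeated until the model supports the second, task-execution stage.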

Journal

Autonomous Robots, Springer Journals

Published: Jan 1, 2005
