Cooperative multi-robot patrol with Bayesian learning

Autonomous Robots, Volume 40 (5) – Oct 13, 2015

References (52)

Publisher
Springer Journals
Copyright
Copyright © 2015 by Springer Science+Business Media New York
Subject
Engineering; Robotics and Automation; Artificial Intelligence (incl. Robotics); Computer Imaging, Vision, Pattern Recognition and Graphics; Control, Robotics, Mechatronics
ISSN
0929-5593
eISSN
1573-7527
DOI
10.1007/s10514-015-9503-7
Abstract

Patrolling indoor infrastructures with a team of cooperative mobile robots is a challenging task that requires effective multi-agent coordination. Deterministic patrol circuits for multiple mobile robots have become popular due to their high performance. However, their predefined nature does not allow the system to react to changes in its conditions or adapt to unexpected situations such as robot failures, thus requiring recovery behaviors in such cases. In this article, a probabilistic multi-robot patrolling strategy is proposed. A team of concurrent learning agents adapt their moves to the current state of the system, using Bayesian decision rules and distributed intelligence. When patrolling a given site, each agent evaluates the context and adopts a reward-based learning technique that influences future moves. Extensive results obtained in simulation and real-world experiments in a large indoor environment show the potential of the approach, presenting superior results to several state-of-the-art strategies.
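To illustrate the kind of probabilistic decision rule the abstract describes, the sketch below shows a patrolling agent picking its next vertex by sampling from a distribution weighted by a reward-like gain. This is a minimal illustration, not the paper's actual formulation: the function name `choose_next_vertex`, the idleness-over-cost gain, and the softmax weighting with temperature `beta` are all assumptions made for the example.

```python
import math
import random

def choose_next_vertex(neighbors, idleness, edge_cost, beta=2.0):
    """Probabilistically pick the next vertex to patrol.

    Each neighboring vertex gets a gain proportional to its idleness
    (time since a robot last visited it) discounted by travel cost.
    The move is then sampled from a softmax over normalized gains, so
    higher-gain vertices are more likely to be chosen, but the policy
    remains non-deterministic (harder for an intruder to predict).
    """
    # Gain for each candidate move: idleness reduced per unit travel cost.
    gains = {v: idleness[v] / edge_cost[v] for v in neighbors}
    top = max(gains.values())
    # Softmax weights over gains normalized to [0, 1]; beta controls
    # how strongly the agent favors the highest-gain vertex.
    weights = {v: math.exp(beta * g / top) for v, g in gains.items()}
    total = sum(weights.values())
    # Roulette-wheel sampling over the weights.
    r = random.uniform(0.0, total)
    acc = 0.0
    for v, w in weights.items():
        acc += w
        if r <= acc:
            return v
    return v  # fallback for floating-point edge cases
```

In a reward-based learning variant, the gains would also be updated from observed outcomes (e.g., how much idleness each visit actually reduced), so that past experience influences future moves as the abstract describes.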

Journal

Autonomous Robots, Springer Journals

Published: Oct 13, 2015
