Prediction-Based Multi-Agent Reinforcement Learning in Inherently Non-Stationary Environments

References (189)

Publisher
Association for Computing Machinery
Copyright
Copyright © 2017 ACM
ISSN
1556-4665
eISSN
1556-4703
DOI
10.1145/3070861

Abstract

Multi-agent reinforcement learning (MARL) is a widely researched technique for decentralised control in complex large-scale autonomous systems. Such systems often operate in environments that are continuously evolving and where agents' actions are non-deterministic, so-called inherently non-stationary environments. When agents' actions on such an environment yield inconsistent results, learning and adapting become challenging. In this article, we propose P-MARL, an approach that integrates prediction and pattern change detection abilities into MARL and thus minimises the effect of non-stationarity in the environment. The environment is modelled as a time series, with future estimates provided using prediction techniques. Learning is based on the predicted environment behaviour, with agents employing this knowledge to improve their performance in real time. We illustrate P-MARL's performance in a real-world smart grid scenario, where the environment is heavily influenced by non-stationary power demand patterns from residential consumers. We evaluate P-MARL in three different situations, where agents' action decisions are independent, simultaneous, and sequential. Results show that all methods outperform traditional MARL, with sequential P-MARL achieving the best results.
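
To make the approach concrete, below is a minimal, illustrative Python sketch of the loop the abstract describes: model the environment signal as a time series, predict it, let agents learn offline against the prediction, then act in real time with a pattern-change check. The seasonal-mean predictor, the toy quadratic-cost demand model, the charging scenario, and every parameter name (SLOTS, N_AGENTS, NEED, the 2.0 drift threshold) are assumptions made for illustration, not details taken from the paper.

# A minimal, illustrative sketch of the P-MARL loop described above:
# (1) model the environment signal (here, residential power demand) as a
# time series and predict the next day, (2) let agents learn offline
# against the predicted environment, (3) act in real time with a simple
# pattern-change check. All names and parameters are assumptions for
# illustration, not details taken from the paper.

import random

SLOTS = 24        # one decision slot per hour
N_AGENTS = 3      # e.g., three electric vehicles scheduling charging
NEED = 4          # slots of charging each agent must complete per day
ACTIONS = [0, 1]  # 0 = idle, 1 = charge (adds load in that slot)

def predict_demand(history):
    """Naive seasonal predictor: per-slot mean over past days.
    Stands in for the paper's time-series prediction component."""
    return [sum(day[t] for day in history) / len(history)
            for t in range(SLOTS)]

def cost(total_load):
    """Toy cost, quadratic in total load, so agents should avoid peaks."""
    return total_load * total_load

def learn_offline(predicted, episodes=2000, eps=0.1, alpha=0.2):
    """Independent Q-learning for each agent against the *predicted* day."""
    q = [[[0.0, 0.0] for _ in range(SLOTS)] for _ in range(N_AGENTS)]
    for _ in range(episodes):
        charged = [0] * N_AGENTS
        for t in range(SLOTS):
            acts = []
            for i in range(N_AGENTS):
                if charged[i] >= NEED:
                    acts.append(0)                       # already done
                elif random.random() < eps:
                    acts.append(random.choice(ACTIONS))  # explore
                else:
                    acts.append(0 if q[i][t][0] >= q[i][t][1] else 1)
            load = sum(acts)
            for i in range(N_AGENTS):
                r = -cost(predicted[t] + load) if acts[i] else 0.0
                if t == SLOTS - 1 and charged[i] + acts[i] < NEED:
                    r -= 1000.0               # missed the charging target
                nxt = max(q[i][t + 1]) if t + 1 < SLOTS else 0.0
                q[i][t][acts[i]] += alpha * (r + nxt - q[i][t][acts[i]])
                charged[i] += acts[i]
    return q

# Offline phase: predict tomorrow's demand, learn against the prediction.
history = [[5 + 3 * (8 <= t <= 20) + random.random() for t in range(SLOTS)]
           for _ in range(7)]
predicted = predict_demand(history)
q = learn_offline(predicted)

# Online phase: follow the learned policy in real time; a crude drift
# check flags divergence between observed and predicted demand.
actual = [predicted[t] + (2.5 if t == 18 else random.random())
          for t in range(SLOTS)]
charged = [0] * N_AGENTS
for t in range(SLOTS):
    if abs(actual[t] - predicted[t]) > 2.0:
        print(f"slot {t}: demand pattern change detected, re-learn")
    for i in range(N_AGENTS):
        if charged[i] < NEED and q[i][t][1] > q[i][t][0]:
            charged[i] += 1

In the paper's terms, the offline phase corresponds to learning on the predicted environment behaviour, and the online drift check is a crude stand-in for the pattern change detection that would trigger re-prediction and re-learning.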

Journal

ACM Transactions on Autonomous and Adaptive Systems (TAAS), Association for Computing Machinery

Published: May 25, 2017

Keywords: Multi-agent systems
