Efficient and Robust Emergence of Norms through Heuristic Collective Learning



References (40)

Publisher
Association for Computing Machinery
Copyright
Copyright © 2017 ACM
ISSN
1556-4665
eISSN
1556-4703
DOI
10.1145/3127498
Publisher site
See Article on Publisher Site

Abstract

In multiagent systems, social norms serve as an important technique for regulating agents’ behaviors to ensure effective coordination among agents without a centralized controlling mechanism. In such a distributed environment, it is important to investigate how a desirable social norm can be synthesized in a bottom-up manner among agents through repeated local interactions and learning techniques. In this article, we propose two novel learning strategies under the collective learning framework, collective learning EV-l and collective learning EV-g, to efficiently facilitate the emergence of social norms. Extensive simulation results show that both learning strategies can support the emergence of desirable social norms more efficiently and are applicable in a wider range of multiagent interaction scenarios compared with previous work. The influence of different topologies is investigated, which shows that the performance of all strategies is robust across different network topologies. The influences of a number of key factors (neighborhood size, action space, population size, fixed agents, and isolated subpopulations) on norm emergence performance are investigated as well.
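The bottom-up dynamic the abstract describes can be illustrated with a toy simulation. The sketch below is an assumption-laden simplification, not the paper's EV-l or EV-g strategies (whose details are not given here): agents arranged on a ring repeatedly play a pairwise coordination game with a random neighbor, and on miscoordination one party imitates the other's action. A norm has emerged when most of the population shares one action.

```python
import random

def simulate(num_agents=100, num_actions=4, rounds=20000, seed=0):
    """Toy norm-emergence simulation via repeated local interactions.

    Hypothetical illustration only: a simple imitation rule stands in
    for the paper's collective learning strategies.
    """
    rng = random.Random(seed)
    # Each agent starts with a random action (candidate norm).
    actions = [rng.randrange(num_actions) for _ in range(num_agents)]
    for _ in range(rounds):
        i = rng.randrange(num_agents)
        j = (i + rng.choice([-1, 1])) % num_agents  # random ring neighbor
        # Coordination game: matching actions succeed; on a mismatch,
        # one of the two agents adopts the other's action.
        if actions[i] != actions[j]:
            if rng.random() < 0.5:
                actions[i] = actions[j]
            else:
                actions[j] = actions[i]
    # Level of norm emergence: share of agents using the most common action.
    top = max(set(actions), key=actions.count)
    return actions.count(top) / num_agents

if __name__ == "__main__":
    print(simulate())  # values near 1.0 indicate a dominant norm
```

Factors studied in the article, such as neighborhood size or network topology, would correspond here to changing how neighbor `j` is chosen; the imitation probability is an arbitrary placeholder for a learned update rule.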

Journal

ACM Transactions on Autonomous and Adaptive Systems (TAAS), Association for Computing Machinery

Published: Oct 27, 2017

Keywords: Norm emergence
