
Norms and value based reasoning: justifying compliance and violation

Publisher: Springer Journals
Copyright: © 2017 The Author(s)
Subject: Computer Science; Artificial Intelligence (incl. Robotics); International IT and Media Law, Intellectual Property Law; Philosophy of Law; Legal Aspects of Computing; Information Storage and Retrieval
ISSN: 0924-8463
eISSN: 1572-8382
DOI: 10.1007/s10506-017-9194-9

Abstract

There is an increasing need for norms to be embedded in technology as the widespread deployment of applications such as autonomous driving, warfare and big data analysis for crime fighting and counter-terrorism becomes ever closer. Current approaches to norms in multi-agent systems tend either to simply make prohibited actions unavailable, or to provide a set of rules (principles) which the agent is obliged to follow, either as part of its design or to avoid sanctions and punishments. In this paper we argue for the position that agents should be equipped with the ability to reason about a system’s norms, by reasoning about the social and moral values that norms are designed to serve; that is, perform the sort of moral reasoning we expect of humans. In particular we highlight the need for such reasoning when circumstances are such that the rules should arguably be broken, so that the reasoning can guide agents in deciding whether to comply with the norms and, if violation is desirable, how best to violate them. One approach to enabling this is to make use of an argumentation scheme based on values and designed for practical reasoning: arguments for and against actions are generated using this scheme and agents choose between actions based on their preferences over these values. Moral reasoning then requires that agents have an acceptable set of values and an acceptable ordering on their values. We first discuss how this approach can be used to think about and justify norms in general, and then discuss how this reasoning can be used to think about when norms should be violated, and the form this violation should take. We illustrate how value based reasoning can be used to decide when and how to violate a norm using a road traffic example. We also briefly consider what makes an ordering on values acceptable, and how such an ordering might be determined.
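The approach described above compares arguments for competing actions by the values those actions promote, with an audience's ordering over the values deciding which arguments succeed. The sketch below is a minimal, illustrative rendering of that idea and not the paper's formalism: the argument names, the values, and the ambulance-at-a-red-light scenario are assumptions introduced here, meant only to show how a value ordering can make violating a traffic norm the justified choice.

```python
# Illustrative sketch of value-based practical reasoning (not the paper's
# formalism). Arguments for incompatible actions attack one another; an
# attack succeeds only if the attacker's value is not ranked below the
# attacked argument's value in the audience's ordering.

from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    name: str    # short description of the argument
    action: str  # the action the argument justifies
    value: str   # the social/moral value the action promotes

def attacks(a: Argument, b: Argument) -> bool:
    # Arguments for different (incompatible) actions attack each other.
    return a.action != b.action

def defeats(a: Argument, b: Argument, ordering: list[str]) -> bool:
    # The attack succeeds unless the attacked argument's value is
    # strictly preferred (earlier in the ordering) to the attacker's.
    return attacks(a, b) and ordering.index(a.value) <= ordering.index(b.value)

def acceptable(args: list[Argument], ordering: list[str]) -> list[Argument]:
    # Simplification: keep every argument not defeated by any other.
    return [a for a in args
            if not any(defeats(b, a, ordering) for b in args if b is not a)]

# Hypothetical road-traffic scenario: an ambulance deciding whether to
# cross a red light to reach a patient quickly.
arguments = [
    Argument("stop at the red light", "stop", "safety_of_other_road_users"),
    Argument("comply with traffic law", "stop", "respect_for_law"),
    Argument("cross on red to reach the patient", "cross", "life_of_patient"),
]

# An audience that ranks the patient's life above road safety and legal
# compliance in this emergency (most preferred value first).
audience = ["life_of_patient", "safety_of_other_road_users", "respect_for_law"]

for arg in acceptable(arguments, audience):
    print(f"Justified: {arg.name} (promotes {arg.value})")
```

With this ordering only the argument for crossing survives, so violating the norm is the justified choice; under an ordering that ranks respect for law or road safety first, the two compliance arguments would prevail instead.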

Journal: Artificial Intelligence and Law (Springer Journals)

Published: Mar 9, 2017
