Intelligent agents and liability: is it a doctrinal problem or merely a problem of explanation?



Artificial Intelligence and Law, Volume 18 (1) – Mar 5, 2010

Publisher
Springer Journals
Copyright
Copyright © 2010 by Springer Science+Business Media B.V.
Subject
Computer Science; Artificial Intelligence (incl. Robotics); International IT and Media Law, Intellectual Property Law; Philosophy of Law; Legal Aspects of Computing; Information Storage and Retrieval
ISSN
0924-8463
eISSN
1572-8382
DOI
10.1007/s10506-010-9086-8

Abstract

The question of liability when using intelligent agents is far from simple, and cannot be sufficiently answered by deeming the human user automatically responsible for all actions and mistakes of his or her agent. This paper is therefore concerned with the significant difficulties that might arise in this regard, especially as the technology behind software agents evolves or comes into wider use. It further considers whether it is possible to share responsibility with these agents, and examines the main objections to treating such agents as responsible entities. The paper is not intended to provide a final answer to all the questions and challenges in this area, but to identify their main components and offer some perspectives on how to address them.

Journal

Artificial Intelligence and Law (Springer Journals)

Published: Mar 5, 2010
