Discrimination in decision making is prohibited on many attributes (religion, gender, etc.), but is often present in historical decisions. Using such discriminatory historical decisions as training data can perpetuate discrimination, even if the protected attributes are not directly present in the data. This work focuses on discovering discrimination in instances and preventing discrimination in classification. First, we propose a discrimination discovery method based on modeling the probability distribution of a class using Bayesian networks. This measures the effect of a protected attribute (e.g., gender) in a subset of the dataset using the estimated probability distribution (via a Bayesian network). Second, we propose a classification method that corrects for the discovered discrimination without using protected attributes in the decision process. We evaluate the discrimination discovery and discrimination prevention approaches on two different datasets. The empirical results show that a substantial amount of discrimination identified in instances is prevented in future decisions.
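To make the discovery step concrete, the following is a minimal sketch of measuring the effect of a protected attribute on a decision within a subset of a dataset. It is not the authors' Bayesian-network method; it substitutes a simple empirical conditional probability for the Bayesian-network estimate, and all names (`discrimination_score`, the toy hiring records) are hypothetical illustrations.

```python
# Hedged sketch: estimate the effect of a protected attribute on the
# decision within a subset of instances. A stand-in for the paper's
# Bayesian-network probability estimate, using raw empirical rates.

def discrimination_score(records, protected, outcome, subset=lambda r: True):
    """Difference in positive-decision rate between the two protected
    groups (coded 0 and 1), restricted to records matching `subset`."""
    groups = {0: [0, 0], 1: [0, 0]}  # group -> [positives, total]
    for r in records:
        if not subset(r):
            continue
        g = r[protected]
        groups[g][0] += r[outcome]
        groups[g][1] += 1
    rates = {g: (p / t if t else 0.0) for g, (p, t) in groups.items()}
    return rates[0] - rates[1]  # positive => group 0 is favored

# Hypothetical toy data: gender as protected attribute, hire as decision.
data = [
    {"gender": 0, "degree": 1, "hire": 1},
    {"gender": 0, "degree": 1, "hire": 1},
    {"gender": 1, "degree": 1, "hire": 0},
    {"gender": 1, "degree": 1, "hire": 1},
]
# Among degree holders: group 0 hired at rate 1.0, group 1 at rate 0.5.
print(discrimination_score(data, "gender", "hire",
                           subset=lambda r: r["degree"] == 1))  # 0.5
```

In the paper, the conditional probabilities come from a learned Bayesian network rather than raw counts, which lets the effect of the protected attribute be isolated from correlated attributes.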
Artificial Intelligence and Law – Springer Journals
Published: Feb 17, 2014