Human decision making modelling for gambling task under uncertainty and risk



Publisher: Inderscience Publishers
Copyright: © Inderscience Enterprises Ltd
ISSN: 1756-7017
eISSN: 1756-7025
DOI: 10.1504/ijids.2022.122723

Abstract

This paper models the human decision-making process and compares various reinforcement learning (RL) techniques combined with utility functions. The Iowa gambling task (IGT) is used to collect real-time data to understand and model decision making (DM) under uncertainty, risk or ambiguity. Model performance is evaluated by the mean square deviation (MSD), which helps to predict the probability that the next choice selects an advantageous deck rather than a disadvantageous one. In addition, the deck selection patterns of male and female participants were analysed alongside their learning process. Comparing MSD values across RL models shows that the DM model consisting of prospect utility (PU) with decay reinforcement learning (DRI) and the trial-dependent choice (TDC) rule performs best.
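The model family named in the abstract (prospect utility, decay reinforcement learning, trial-dependent choice, MSD evaluation) can be sketched roughly as follows. This is a minimal illustration based on common IGT modelling conventions; the parameter names (`alpha`, `lam`, `A`, `c`) and the exact functional forms are assumptions, not taken from the paper.

```python
import numpy as np

def prospect_utility(x, alpha=0.5, lam=2.0):
    """Prospect utility: concave for gains, steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (abs(x) ** alpha)

def simulate_choice_probs(trials, A=0.8, c=1.0, alpha=0.5, lam=2.0):
    """Run a decay-RL update over a sequence of (deck, net payoff) trials.

    Returns the softmax choice probabilities over the four decks
    *before* each trial, using a trial-dependent choice (TDC)
    sensitivity that grows with the trial number.
    """
    n_decks = 4
    ev = np.zeros(n_decks)              # deck expectancies
    probs = []
    for t, (deck, payoff) in enumerate(trials, start=1):
        theta = (t / 10.0) ** c         # TDC: sensitivity depends on trial t
        w = np.exp(theta * ev)
        probs.append(w / w.sum())       # softmax choice rule
        u = prospect_utility(payoff, alpha, lam)
        ev *= A                         # decay all expectancies each trial
        ev[deck] += u                   # reinforce only the chosen deck
    return np.array(probs)

def msd(pred_probs, chosen):
    """Mean square deviation between predicted probabilities and one-hot choices."""
    onehot = np.eye(pred_probs.shape[1])[chosen]
    return np.mean((pred_probs - onehot) ** 2)
```

A lower MSD means the model's predicted choice probabilities track the participant's actual deck selections more closely, which is the criterion the paper uses to rank the candidate models.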

Journal

International Journal of Information and Decision Sciences, Inderscience Publishers

Published: Jan 1, 2022
