Journal of Mathematical Psychology, 54(1), 14–27
The purpose of the popular Iowa gambling task is to study decision-making deficits in clinical populations by mimicking real-life decision making in an experimental context. Busemeyer and Stout (2002) proposed an "Expectancy Valence" reinforcement learning model that estimates three latent components assumed to jointly determine choice behavior in the Iowa gambling task: the weighting of wins versus losses, memory for past payoffs, and response consistency. In this article we explore the statistical properties of the Expectancy Valence model. We first demonstrate the difficulty of applying the model at the level of a single participant. We then propose and implement a Bayesian hierarchical estimation procedure that coherently combines information across participants, and finally apply this procedure to data from an experiment designed to provide a test of specific influence.
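To make the three latent components concrete, the following is a minimal sketch of the Expectancy Valence model in the spirit of Busemeyer and Stout (2002). The exact functional forms and the parameter values (`w` for loss weighting, `a` for payoff memory, `c` for response consistency) are illustrative assumptions for this sketch, not a definitive rendering of the published model.

```python
import numpy as np

def ev_choice_probs(wins, losses, chosen_decks, n_decks=4,
                    w=0.4, a=0.3, c=0.5):
    """Sketch of an Expectancy Valence-style model (hypothetical parameter values).

    w -- relative weighting of losses versus wins
    a -- updating rate, i.e. memory for past payoffs
    c -- response-consistency parameter
    Returns softmax choice probabilities computed before each trial.
    """
    ev = np.zeros(n_decks)                # one expectancy per deck
    probs = []
    for t, (deck, win, loss) in enumerate(
            zip(chosen_decks, wins, losses), start=1):
        theta = (t / 10.0) ** c           # trial-dependent sensitivity
        p = np.exp(theta * ev)
        p /= p.sum()                      # softmax over deck expectancies
        probs.append(p.copy())
        u = (1 - w) * win - w * loss      # valence of the obtained payoff
        ev[deck] += a * (u - ev[deck])    # delta-rule update, chosen deck only
    return np.array(probs)
```

In this sketch the first trial yields uniform probabilities (all expectancies start at zero), and a large net win on a deck raises that deck's subsequent choice probability; the hierarchical extension discussed in the article would place group-level distributions over `w`, `a`, and `c` across participants.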