
IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)

Publication date: 2008-08-01
Volume: 38, Pages: 976–981
Publisher: Institute of Electrical and Electronics Engineers

Authors:

Vrancx, Peter; Verbeeck, Katja; Nowé, Ann

Keywords:

game theory, multi-agent systems, reinforcement learning, stochastic automata, stochastic games, Markov chains, algorithms, computer simulation, artificial intelligence, cybernetics, automation & control systems

Abstract:

Learning automata (LA) have recently been shown to be valuable tools for designing multiagent reinforcement learning algorithms. One of the principal contributions of LA theory is that a set of decentralized, independent LA is able to control a finite Markov chain with unknown transition probabilities and rewards. In this paper, we propose to extend this algorithm to Markov games, a straightforward extension of single-agent Markov decision problems to distributed multiagent decision problems. We show that, under the same ergodicity assumptions as the original theorem, the extended algorithm converges to a pure equilibrium point between the agents' policies.
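
Illustrative note (not code from the paper): the abstract refers to controlling Markov chains and Markov games with decentralized learning automata. The sketch below shows a linear reward-inaction (L_R-I) probability update, the standard automaton scheme in this line of work; the class name, method names, and learning-rate value are illustrative assumptions.

# Minimal sketch of a linear reward-inaction (L_R-I) learning automaton.
# Assumes a finite action set and a normalized reward signal in [0, 1].
import random

class LearningAutomaton:
    def __init__(self, n_actions, learning_rate=0.05):
        self.n_actions = n_actions
        self.learning_rate = learning_rate
        # Start from a uniform action-probability vector.
        self.probs = [1.0 / n_actions] * n_actions

    def choose_action(self):
        # Sample an action according to the current probability vector.
        r, cumulative = random.random(), 0.0
        for action, p in enumerate(self.probs):
            cumulative += p
            if r <= cumulative:
                return action
        return self.n_actions - 1

    def update(self, action, reward):
        # L_R-I update: shift probability mass toward the chosen action
        # in proportion to the reward; probabilities still sum to one.
        for a in range(self.n_actions):
            if a == action:
                self.probs[a] += self.learning_rate * reward * (1.0 - self.probs[a])
            else:
                self.probs[a] -= self.learning_rate * reward * self.probs[a]

In the decentralized control scheme the abstract builds on, one such automaton is typically associated with each agent-state pair: it selects an action whenever its state is visited and is updated with a normalized feedback reflecting the reward gathered since its previous activation.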