
12th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Zagreb, Croatia, 3-5 September 2008

Publication date: 2008
Volume: 5178, Pages: 379-390
ISBN: 3-540-85564-5, 978-3-540-85564-4
Publisher: Springer-Verlag Berlin Heidelberg, Heidelberger Platz 3, D-14197 Berlin, Germany

Knowledge-Based Intelligent Information and Engineering Systems, Pt 2, Proceedings

Authors:

Peeters, Maarten; Könönen, Ville; Verbeeck, Katja; Nowé, Ann

Keywords:

Science & Technology; Technology; Computer Science, Artificial Intelligence; Computer Science, Information Systems

Abstract:

The policy gradient method is a popular technique for implementing reinforcement learning in an agent system. One reason is that a policy gradient learner has a simple design and strong theoretical properties in single-agent domains. Previously, Williams showed that the REINFORCE algorithm is a special case of policy gradient learning; he also showed that a learning automaton can be seen as a special case of the REINFORCE algorithm. Learning automata theory guarantees that a group of automata will converge to a stable equilibrium in team games. In this paper we show a theoretical connection between learning automata and policy gradient methods in order to transfer this convergence result to multi-agent policy gradient learning. An appropriate exploration technique is crucial for the convergence of a multi-agent system; since learning automata are guaranteed to converge, they possess such an exploration. We identify an exact mapping of a learning automaton onto the Boltzmann exploration strategy with a suitable temperature setting. The novel idea is that the temperature of the Boltzmann function does not depend on time but on the action probabilities of the agents.
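
The abstract states the construction without formulas. As a rough illustration of the two ingredients being connected, the sketch below (Python with NumPy) places one common automaton scheme, the linear reward-inaction (L_R-I) update, next to a generic Boltzmann action-selection rule. The step size and the closing example are assumptions for illustration only; the probability-dependent temperature function that realizes the paper's mapping is derived in the paper itself and is not reproduced here.

```python
import numpy as np

def lri_update(p, action, reward, lam=0.05):
    """One step of the linear reward-inaction (L_R-I) learning automaton.

    p      : action-probability vector (entries sum to 1)
    action : index of the action that was just taken
    reward : feedback signal in [0, 1]
    lam    : step size (0.05 is an assumed value, not taken from the paper)
    """
    p = p.copy()
    p -= lam * reward * p       # shrink every probability by a factor (1 - lam*reward)
    p[action] += lam * reward   # hand the freed mass to the taken action
    return p

def boltzmann_policy(preferences, temperature):
    """Boltzmann (softmax) exploration over a vector of action preferences."""
    z = np.exp(preferences / temperature)
    return z / z.sum()

# Degenerate illustration of the direction of the mapping: with
# log-probabilities as preferences and the temperature fixed at 1, the
# Boltzmann policy reproduces the automaton's probabilities exactly.
# The paper's contribution is a temperature that is a function of the
# action probabilities themselves (not of time); T = 1 here is only a
# placeholder for that function.
p = np.array([0.25, 0.25, 0.5])
assert np.allclose(boltzmann_policy(np.log(p), temperature=1.0), p)
```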