ITEM METADATA RECORD
Title: Integrating experimentation and guidance in relational reinforcement learning
Authors: Driessens, Kurt ×
Dzeroski, Saso #
Issue Date: 2002
Host Document: Proceedings of the Nineteenth International Conference on Machine Learning, pages: 115-122
Conference: International Conference on Machine Learning, location: Sydney, Australia, date: July 8-12, 2002
Abstract: Reinforcement learning, and Q-learning in particular, encounter two major problems when dealing with large state spaces. First, learning the Q-function in tabular form may be infeasible because of the excessive amount of memory needed to store the table, and because the Q-function only converges after each state has been visited multiple times. Second, rewards in the state space may be so sparse that with random exploration they will only be discovered extremely slowly. The first problem is often solved by learning a generalisation of the encountered examples (e.g., using a neural net or decision tree). Relational reinforcement learning (RRL) is such an approach; it makes Q-learning feasible in structural domains by incorporating a relational learner into Q-learning. The problem of sparse rewards has not been addressed for RRL. This paper presents a solution based on the use of “reasonable policies” to provide guidance. Experimental results in several domains show the merit of the approach and indicate pitfalls to be avoided.
ISBN: 1-55860-873-7
Publication status: published
KU Leuven publication type: IC
Appears in Collections: Informatics Section
× corresponding author
# (joint) last author
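
Illustrative note: the abstract describes combining Q-learning with guidance from "reasonable policies" to cope with sparse rewards. The sketch below is a rough, hypothetical illustration of that guided-exploration idea only, not the authors' RRL system (which replaces the Q-table with a relational learner). The environment interface (env.reset, env.step, env.legal_actions) and the reasonable_policy function are assumed names for the sake of the example.

    import random
    from collections import defaultdict

    # Minimal sketch (assumption, not the paper's implementation): tabular
    # Q-learning where, with probability p_guide, the agent follows a
    # hand-coded "reasonable policy" instead of exploring randomly or greedily.
    def guided_q_learning(env, reasonable_policy, episodes=1000,
                          alpha=0.1, gamma=0.9, epsilon=0.1, p_guide=0.5):
        Q = defaultdict(float)  # maps (state, action) -> estimated value

        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                actions = env.legal_actions(state)
                if random.random() < p_guide:
                    action = reasonable_policy(state)       # guided step
                elif random.random() < epsilon:
                    action = random.choice(actions)          # random exploration
                else:
                    action = max(actions, key=lambda a: Q[(state, a)])  # greedy

                next_state, reward, done = env.step(action)

                # Standard Q-learning update; guidance only changes which
                # state-action pairs get visited, not the update rule itself.
                best_next = 0.0 if done else max(
                    Q[(next_state, a)] for a in env.legal_actions(next_state))
                Q[(state, action)] += alpha * (reward + gamma * best_next
                                               - Q[(state, action)])
                state = next_state
        return Q
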

Files in This Item:
File  Status  Size  Format
2002_icml_driessens.pdf  Published  210Kb  Adobe PDF

All items in Lirias are protected by copyright, with all rights reserved.