ITEM METADATA RECORD
Title: Graph kernels and Gaussian processes for relational reinforcement learning
Authors: Gartner, T ×
Driessens, Kurt
Ramon, Jan #
Issue Date: 2003
Publisher: Springer
Host Document: Lecture notes in computer science vol:2835 pages:146-163
Conference: Inductive Logic Programming edition:13 location:Szeged, Hungary date:September 29 - October 1, 2003
Abstract: Relational reinforcement learning is a Q-learning technique for relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. In this case, the learning algorithm used to approximate the mapping between state-action pairs and their so-called Q(uality)-value has to be not only very reliable, but it also has to be able to handle the relational representation of state-action pairs.
In this paper we investigate the use of Gaussian processes to approximate the quality of state-action pairs. In order to employ Gaussian processes in a relational setting we use graph kernels as the covariance function between state-action pairs. Experiments conducted in the blocks world show that Gaussian processes with graph kernels can compete with, and often improve on, regression trees and instance-based regression as a generalisation algorithm for relational reinforcement learning.
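The abstract's approach can be illustrated with a minimal sketch: labelled state-action graphs, a truncated walk-count kernel on their direct product graph as the covariance function, and standard Gaussian-process regression of Q-values. This is an assumption-laden toy, not the paper's exact formulation (the paper's graph kernels use infinite, exponentially weighted walk sums); all function names here (`walk_kernel`, `gp_predict`) are hypothetical.

```python
import numpy as np

def product_adjacency(A1, labels1, A2, labels2):
    # Direct product graph: nodes are pairs of equally labelled nodes,
    # edges exist where both factor graphs have an edge.
    pairs = [(i, j) for i in range(len(labels1)) for j in range(len(labels2))
             if labels1[i] == labels2[j]]
    Ax = np.zeros((len(pairs), len(pairs)))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            Ax[a, b] = A1[i, k] * A2[j, l]
    return Ax

def walk_kernel(A1, labels1, A2, labels2, lam=0.1, max_len=4):
    # Count common labelled walks up to max_len, downweighted by lam**length.
    # (A finite truncation of the walk-based graph kernel idea.)
    Ax = product_adjacency(A1, labels1, A2, labels2)
    if Ax.size == 0:
        return 0.0
    value, walk_counts = 0.0, np.eye(Ax.shape[0])
    for length in range(max_len + 1):
        value += (lam ** length) * walk_counts.sum()
        walk_counts = walk_counts @ Ax
    return value

def gp_predict(train_graphs, q_values, test_graph, noise=1e-2):
    # Gaussian-process regression of Q-values with the graph kernel
    # as covariance: predictive mean and variance for one test graph.
    K = np.array([[walk_kernel(*g, *h) for h in train_graphs]
                  for g in train_graphs])
    k_star = np.array([walk_kernel(*test_graph, *g) for g in train_graphs])
    Ky = K + noise * np.eye(len(q_values))
    alpha = np.linalg.solve(Ky, np.asarray(q_values, dtype=float))
    mean = k_star @ alpha
    var = (walk_kernel(*test_graph, *test_graph)
           - k_star @ np.linalg.solve(Ky, k_star))
    return mean, var

# Two tiny labelled "state-action" graphs: (adjacency matrix, node labels).
g1 = (np.array([[0, 1], [1, 0]]), ["on", "clear"])
g2 = (np.array([[0, 1], [1, 0]]), ["on", "on"])
mean, var = gp_predict([g1, g2], [1.0, 0.0], g1, noise=1e-6)
# With negligible noise, the prediction for a graph identical to a
# training graph reproduces its training Q-value (mean close to 1.0).
```

In the full relational RL setting, the Q-learning loop would repeatedly refit such a model on (state-action graph, Q-value) pairs and pick actions by their predicted mean (or an exploration bonus derived from the predictive variance).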
Description: Acceptance rate = 40%
ISBN: 978-3-540-20144-1
ISSN: 0302-9743
Publication status: published
KU Leuven publication type: IC
Appears in Collections:Informatics Section
× corresponding author
# (joint) last author

Files in This Item:
File: 2003_ilp_gaerter.pdf | Status: Published | Size: 219 Kb | Format: Adobe PDF

All items in Lirias are protected by copyright, with all rights reserved.
