Late Breaking Papers of the 18th International Conference on Inductive Logic Programming, pages 14-19
ILP 2008, Prague, Czech Republic, 10-12 September 2008
There has recently been much interest in learning from graphs. Most approaches to this problem so far have been pragmatic, i.e., focused on creating algorithms that achieve reasonable accuracy on large datasets while using only limited computational resources. Such algorithms tend to yield very limited theories about their domain, often in the form of frequent patterns.
While such research is certainly needed, it is also desirable to investigate approaches that can be shown to be theoretically sound and that yield comprehensible theories of higher expressive power.
ILP-like techniques are a good starting point for such an approach. To express hypotheses about graphs, a graph logic must be chosen. A good candidate is Cardelli et al.'s graph logic GL,
since it has reasonable expressive power (between first-order logic (FO) and monadic second-order logic (MSO)) while retaining acceptable computational complexity. The most interesting operator in this logic is composition, which non-deterministically splits a graph into two parts. This makes the logic very flexible, since it allows quantification over subgraphs with specified properties, but this power comes at the price of high computational complexity.
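As a rough illustration of the composition operator, the sketch below models a graph as a set of edges and checks a composition formula `phi | psi` by enumerating all ways of splitting the edge set into two parts. This is a simplification, not GL's full semantics (which also handles node sharing and labels), and the graph, predicates, and function names are our own illustrative choices:

```python
from itertools import product

def splits(edges):
    """Enumerate all ways to split an edge set into two disjoint parts
    (every edge goes to exactly one side). Nodes are implicitly shared,
    loosely modelling GL's edge-disjoint composition."""
    edges = list(edges)
    for bits in product([0, 1], repeat=len(edges)):
        left = frozenset(e for e, b in zip(edges, bits) if b == 0)
        right = frozenset(e for e, b in zip(edges, bits) if b == 1)
        yield left, right

def sat_compose(edges, phi, psi):
    """G satisfies phi | psi iff SOME split (G1, G2) of G has
    G1 satisfying phi and G2 satisfying psi (non-deterministic choice)."""
    return any(phi(left) and psi(right) for left, right in splits(edges))

# Hypothetical example: a triangle on nodes a, b, c.
triangle = {("a", "b"), ("b", "c"), ("a", "c")}
has_ab = lambda g: ("a", "b") in g            # part contains edge (a, b)
avoids_a = lambda g: all("a" not in e for e in g)  # part has no edge at a

# Satisfied by the split {(a,b), (a,c)} | {(b,c)}:
print(sat_compose(triangle, has_ab, avoids_a))  # True
```

Note that the enumeration visits all 2^|E| splits, which mirrors the source of the high complexity mentioned above: quantifying over subgraph decompositions is inherently expensive.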