30th Conference on Uncertainty in Artificial Intelligence (UAI), Quebec City, Quebec, Canada, 23 July 2014
This tutorial explains the core ideas behind lifted probabilistic inference in statistical relational learning (SRL) and extensional query evaluation in probabilistic databases (PDBs). Both fields deal with relational representations of uncertainty, and both have come to realize that efficient inference is an enormous challenge. Both fields have also achieved remarkable results in developing efficient algorithms for tasks previously thought to be intractable.
SRL and PDBs have very recently started to connect through the common language of relational logic. We now understand their commonalities and differences. Typical inference tasks are different in nature, yet can be captured in the same weighted model counting framework. Theoretical complexity bounds from one field translate to the other. This tutorial covers several of these developments.
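To make the shared weighted model counting (WMC) framework concrete, here is a minimal sketch, not part of the original abstract: a brute-force weighted model counter over propositional assignments. The formula, variable names, and weight values are hypothetical, chosen only to illustrate the definition.

```python
from itertools import product

# Toy weighted model counting (WMC): sum the weight of every truth
# assignment that satisfies a propositional formula. Weights are given
# per literal and multiply across variables (hypothetical values).
def wmc(variables, formula, weight):
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            w = 1.0
            for v in variables:
                w *= weight[v][assignment[v]]
            total += w
    return total

# Example formula A ∨ B with w(A)=0.3, w(¬A)=0.7, w(B)=0.6, w(¬B)=0.4.
weights = {"A": {True: 0.3, False: 0.7}, "B": {True: 0.6, False: 0.4}}
print(wmc(["A", "B"], lambda m: m["A"] or m["B"], weights))  # → 0.72
```

With weights that sum to one per variable, as here, the weighted model count of A ∨ B equals the probability P(A ∨ B) = 1 − 0.7 · 0.4 = 0.72, which is why both graphical-model inference and probabilistic query evaluation can be phrased as WMC.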
Within SRL, we cover weighted model counting encodings of graphical models and Markov logic, the realization that symmetries enable tractable, lifted inference, and a deeper understanding of the theoretical strengths and limitations of lifting.
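The way symmetries enable lifting can be sketched with a toy example, assuming n exchangeable ground atoms R(1), …, R(n) that all carry the same weights (a hypothetical model, not one from the tutorial): instead of enumerating 2^n ground assignments, a lifted computation groups assignments by how many atoms are true.

```python
from itertools import product
from math import comb

def grounded_wmc(n, p, q):
    # Grounded WMC of "at least one R(i) is true": enumerate all 2^n
    # assignments, each weighted p per true atom and q per false atom.
    total = 0.0
    for values in product([False, True], repeat=n):
        if any(values):
            w = 1.0
            for v in values:
                w *= p if v else q
            total += w
    return total

def lifted_wmc(n, p, q):
    # Lifted WMC: by symmetry, every assignment with exactly k true
    # atoms has the same weight p**k * q**(n-k), so group the C(n, k)
    # assignments per count — O(n) work instead of O(2^n).
    return sum(comb(n, k) * p ** k * q ** (n - k) for k in range(1, n + 1))

print(grounded_wmc(10, 0.3, 0.7))  # same value as lifted_wmc(10, 0.3, 0.7)
```

The two functions agree (both equal (p+q)^n − q^n), but only the lifted one stays feasible as n grows; exploiting exchangeability like this is the essence of lifted inference.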
Within PDBs, we study the data complexity of probabilistic inference, where the FO formula is fixed and one asks for the complexity of the problem as a function of the size of the database. In particular, we discuss a dichotomy theorem for the data complexity stating that, for a large class of queries, weighted model counting is either efficient or provably #P-hard.
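For the efficient side of the dichotomy, a minimal sketch of extensional evaluation over a tuple-independent probabilistic table (the table and its probabilities are hypothetical): the safe query Q = ∃x R(x) can be answered by a closed-form formula over tuple probabilities, with no model enumeration.

```python
# Extensional evaluation of the safe query Q = ∃x R(x) over a
# tuple-independent probabilistic relation R. Because tuples are
# independent, P(Q) = 1 - ∏ (1 - p_t) — linear in the database size.
def prob_exists(tuple_probs):
    p_none = 1.0
    for p in tuple_probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Hypothetical relation R with three tuples of probability 0.5, 0.2, 0.9.
print(prob_exists([0.5, 0.2, 0.9]))  # → 0.96
```

Not every query admits such a plan: the canonical unsafe query H0 = ∃x ∃y R(x) ∧ S(x, y) ∧ T(y) has #P-hard data complexity, which is exactly the separation the dichotomy theorem makes precise.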
The tutorial will focus on the big ideas that set probabilistic inference with relations apart from more classical inference problems. We explain why lifted algorithms work, which properties they exploit, how they can reduce complexity, and why inference remains hard in the worst case.