International Workshop on Big Uncertain Data (BUDA), 1st edition, Snowbird, Utah, USA, 22 June 2014
Statistical relational models combine aspects of first-order logic, databases, and probabilistic graphical models, enabling them to represent complex logical and probabilistic relations between large numbers of objects. This expressivity comes at a price: inference (i.e., computing the probabilities of events) becomes highly intractable. Fortunately, relational models of real-life applications often exhibit a high degree of symmetry, i.e., substructures that are modeled in a similar manner. Lifted inference is the art of exploiting such symmetry for efficient inference. The first part of this tutorial describes the basic ideas underlying lifted inference algorithms, why they work, and how they differ fundamentally from other probabilistic reasoning algorithms. The second part gives a brief overview of theory and applications: lifted inference theory studies tractable classes of models and different notions of symmetry and exchangeability, while practical applications of lifted inference lie in approximate reasoning and machine learning.
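The core idea of exploiting symmetry can be illustrated on a toy model (a sketch of my own, not taken from the tutorial): n exchangeable Boolean variables, each contributing a weight w when true, plus a symmetric pairwise factor f between every pair of true variables. Because the weight of an assignment depends only on the number k of true variables, the partition function can be computed by counting assignments per k instead of enumerating all 2^n worlds:

```python
from itertools import product
from math import comb

def grounded_Z(n, w=1.5, f=0.8):
    """Ground (propositional) inference: enumerate all 2^n assignments."""
    Z = 0.0
    for assign in product([0, 1], repeat=n):
        k = sum(assign)                 # number of true variables
        Z += w**k * f**comb(k, 2)      # weight depends only on k, by symmetry
    return Z

def lifted_Z(n, w=1.5, f=0.8):
    """Lifted inference: sum over counts k, weighting by how many of the
    2^n assignments (namely C(n, k)) share that count."""
    return sum(comb(n, k) * w**k * f**comb(k, 2) for k in range(n + 1))
```

The two functions agree, but `grounded_Z` takes exponential time while `lifted_Z` is polynomial: `lifted_Z(1000)` is instantaneous, whereas enumerating 2^1000 worlds is hopeless. This counting-over-exchangeable-variables trick is the simplest instance of the symmetry exploitation described above.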