In many applications of Inductive Logic Programming (ILP), learning occurs from a knowledge base that contains a large number of examples. Storing such a knowledge base may consume a lot of memory, and often there is substantial overlap of information between different examples. Consider, for instance, the learning from episodes setting (e.g., from games), where each example represents the state of a world. Typically, each example encodes the complete state even though the difference between consecutive states is small. Similar redundancies occur when the knowledge base stores examples representing complex objects (e.g., molecules) built from smaller components (e.g., functional groups), since the same components may occur in different objects. To reduce memory consumption, we propose a method to represent a knowledge base more compactly. We achieve this by introducing a meta-theory that builds new theories out of other (smaller) theories. In this way, the information associated with an example can be constructed from the information associated with one or more other examples, avoiding redundant storage of shared information. We also present algorithms to construct the information associated with example theories and report on a number of experiments evaluating our method in different problem domains.
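The core idea of deriving one example's theory from another can be illustrated with a minimal delta-encoding sketch. This is not the paper's implementation; the `Theory` class and its fields are hypothetical, and facts are represented as plain strings for brevity:

```python
# Hypothetical sketch: each example theory is stored as a delta (added and
# removed facts) relative to a base theory, so facts shared between
# consecutive states are stored only once.

class Theory:
    def __init__(self, base=None, added=(), removed=()):
        self.base = base                   # another Theory, or None
        self.added = frozenset(added)      # facts added relative to base
        self.removed = frozenset(removed)  # facts removed relative to base

    def facts(self):
        # Materialize the full fact set by walking the chain of base theories.
        base_facts = self.base.facts() if self.base else frozenset()
        return (base_facts - self.removed) | self.added

# Two consecutive game states that differ by only a few facts:
s0 = Theory(added={"on(a,b)", "on(b,table)", "clear(a)"})
s1 = Theory(base=s0,
            added={"on(a,table)", "clear(b)"},
            removed={"on(a,b)", "clear(a)"})

print(sorted(s1.facts()))
# → ['clear(b)', 'on(a,table)', 'on(b,table)']
```

Here `s1` stores only four delta facts instead of its full three-fact state plus all of `s0`; with long episodes, the savings grow with the amount of overlap between consecutive states.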