AAAI'08 Workshop on Transfer Learning for Complex Tasks, Chicago, Illinois, 14 July 2008
All machine learning algorithms require data to learn, and often the amount of data available is a limiting factor. For instance, classification and regression require labeled data, which may be expensive to obtain. Reinforcement learning requires samples from an environment, which must be gathered through repeated interaction with that environment. Typically, a learning system or agent treats every problem as distinct and must begin learning tabula rasa. The insight behind transfer learning is that
past experience may assist learning a novel task, even if the tasks are very different. While the idea of transfer has long been explored in the psychological literature, it has only recently been gaining popularity as a general machine learning technique.
In the transfer learning paradigm, one typically uses a set of source tasks to help learn one or more target tasks. Successful transfer allows faster or better learning than learning without previous knowledge, even if the source and target task data originate from different distributions.
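As a concrete illustration of this paradigm, the sketch below shows one simple form of transfer: parameter transfer between related supervised tasks. A linear model is first trained on a source task; its learned weights then warm-start training on a related target task. Under the same small training budget, the warm-started model gets closer to the target solution than a model started from scratch. The tasks, data, and function names here are invented for illustration; this is a minimal sketch of the general idea, not a method from the paper.

```python
import numpy as np

def train(X, y, w_init, lr=0.1, steps=200):
    """Plain gradient descent on squared error for y ~ X @ w."""
    w = w_init.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Hypothetical source and target tasks: the target weights are a small
# perturbation of the source weights, so the tasks are related but distinct.
w_true_source = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
w_true_target = w_true_source + 0.1
y_source = X @ w_true_source
y_target = X @ w_true_target

# Learn the source task from scratch, then transfer its weights as the
# starting point for the target task.
w_source = train(X, y_source, np.zeros(5))
w_transfer = train(X, y_target, w_source, steps=20)    # warm start
w_scratch = train(X, y_target, np.zeros(5), steps=20)  # cold start, same budget

err_transfer = np.linalg.norm(w_transfer - w_true_target)
err_scratch = np.linalg.norm(w_scratch - w_true_target)
print(err_transfer, err_scratch)
```

With the same 20-step budget on the target task, the warm-started weights begin much closer to the target solution, so the transfer run achieves lower error; this mirrors the "faster, or better, learning" benefit described above.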