Title: Look before you leap: Some insights into learner evaluation with cross-validation (Poster)
Authors: Vanwinckelen, Gitte
Blockeel, Hendrik
Issue Date: 1-Nov-2014
Conference: Intelligent Data Analysis, 13th edition, Leuven, Belgium, 30 October - 1 November 2014
Abstract: Machine learning is largely an experimental science, and the evaluation of predictive
models is an important aspect of it. Cross-validation is currently the most widely used method
for this task. There are, however, several important points that should be taken into
account when using this methodology. First, one should clearly state what one is trying to
estimate: a distinction must be made between evaluating a model learned
on a single dataset and evaluating a learner trained on a random sample from a given data
population. These two questions require different statistical approaches and should
not be confused with each other. While this has been noted before, the literature on this
topic is generally not very accessible. This paper aims to give an understandable overview
of the statistical aspects of these two evaluation tasks.
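The distinction the abstract draws can be made concrete with a small sketch. The following is an illustrative example (not from the paper itself): a plain k-fold cross-validation written from scratch, using a trivial "learner" that predicts the training-set mean. The point is that each fold's score reflects a model trained on roughly (k-1)/k of the data, so the averaged score is naturally read as an estimate of learner performance over samples from the population, rather than of the single model trained on the full dataset. All names here (`k_fold_indices`, `cross_validate`) are hypothetical helpers for this sketch.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(ys, k=5):
    """Estimate mean squared error of a mean predictor via k-fold CV.

    Each fold trains on the other k-1 folds and tests on the held-out
    fold; the average over folds estimates how the *learner* performs
    when trained on a random sample of size ~n*(k-1)/k, which is a
    different quantity from the error of the one model fitted to the
    full dataset.
    """
    folds = k_fold_indices(len(ys), k)
    scores = []
    for test in folds:
        test_set = set(test)
        train_y = [ys[i] for i in range(len(ys)) if i not in test_set]
        model = sum(train_y) / len(train_y)  # the "learned model": a constant
        mse = sum((ys[i] - model) ** 2 for i in test) / len(test)
        scores.append(mse)
    return sum(scores) / k

ys = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(cross_validate(ys, k=5))
```

Note that this averaged score is itself a random quantity (it depends on the particular dataset and fold assignment), which is exactly why the two estimation targets call for different statistical treatment.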
Publication status: published
KU Leuven publication type: IMa
Appears in Collections:Informatics Section

Files in This Item:
File | Description | Status | Size | Format
insights_into_crossvalidation.pdf | Poster: Some insights into learner evaluation with cross-validation | Published | 111Kb | Adobe PDF
