Title: Deriving biased classifiers for better ROC performance
Authors: Blockeel, Hendrik ×
Struyf, Jan #
Issue Date: May-2002
Publisher: Slovene Society Informatika, Ljubljana
Series Title: Informatica, vol. 26, issue 1, pp. 77-84
Abstract: Induction of classifiers is an important task in the field of data mining. Classifiers are often evaluated by their predictive accuracy, but this measure has a disadvantage: it may not be appropriate for the context in which the classifier will be deployed. ROC analysis is an alternative evaluation technique that makes it possible to assess how well classifiers will perform under given misclassification costs and class distributions. Given a set of classifiers, it also provides a method for constructing a hybrid classifier that makes optimal use of the available classifiers according to the properties of the deployment context. In some cases it is possible to derive multiple classifiers from a single one, at low cost and in such a way that they focus on different areas of the ROC diagram, so that a hybrid classifier with better overall ROC performance can be constructed. This principle is quite generally applicable; here we describe a method for applying it to decision tree classifiers. An experimental evaluation illustrates the usefulness of the technique.
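
The following is a minimal illustrative sketch (not taken from the paper) of the general idea: a single decision tree is learned once, and a family of biased classifiers is derived from it by varying the probability threshold applied to its leaf class-probability estimates; each threshold yields one point in ROC space, and the upper convex hull of these points defines the hybrid classifier's operating options. The synthetic dataset, the threshold grid, and the use of scikit-learn are assumptions made for illustration only; the paper describes its own derivation method for decision trees.

# Illustrative sketch, not the paper's exact algorithm: one decision tree,
# many derived classifiers obtained by re-thresholding its leaf probabilities.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical imbalanced dataset, chosen only to make the ROC points spread out.
X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# The tree is learned once; the derived classifiers differ only in how its
# leaf class-probability estimates are thresholded.
tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X_tr, y_tr)
p_pos = tree.predict_proba(X_te)[:, 1]          # estimated P(positive) per example

roc_points = []
for t in np.linspace(0.0, 1.0, 21):             # each threshold t yields one biased classifier
    pred = (p_pos >= t).astype(int)
    tp = np.sum((pred == 1) & (y_te == 1))
    fp = np.sum((pred == 1) & (y_te == 0))
    tpr = tp / np.sum(y_te == 1)                # true positive rate
    fpr = fp / np.sum(y_te == 0)                # false positive rate
    roc_points.append((fpr, tpr))

# The upper convex hull of these points (together with (0,0) and (1,1)) gives the
# hybrid classifier: for given misclassification costs and class distribution,
# one selects the hull vertex (or a randomized mixture of two adjacent vertices)
# that minimizes expected cost.
for fpr, tpr in sorted(set(roc_points)):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")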
ISSN: 0350-5596
Publication status: published
KU Leuven publication type: IT
Appears in Collections: Informatics Section
× corresponding author
# (joint) last author

Files in This Item:
File: 37767.pdf
Status: Published
Size: 231 Kb
Format: Adobe PDF
