Induction of classifiers is an important task in the field of data mining. Classifiers are often evaluated on their predictive accuracy, but this measure has a disadvantage: it may not be appropriate for the context in which the classifier will be deployed. ROC analysis is an alternative evaluation technique that makes it possible to assess how well classifiers will perform under given misclassification costs and class distributions. Given a set of classifiers, it also provides a method for constructing a hybrid classifier that optimally uses the available classifiers according to the specific properties of the deployment context. In some cases it is possible to derive, at little cost, multiple classifiers from a single one, such that these classifiers focus on different areas of the ROC diagram; a hybrid classifier with better overall ROC performance can then be constructed from them. This principle is quite generally applicable; here we describe a method for applying it to decision tree classifiers. An experimental evaluation illustrates the usefulness of the technique.
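The selection step described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the paper: each classifier is represented as an illustrative (FPR, TPR) point, the potentially optimal classifiers are those on the upper convex hull of these points (together with the trivial classifiers at (0, 0) and (1, 1)), and a single "skew" parameter summarising misclassification costs and class distribution picks one hull point. The point values and the helper names `roc_hull` and `best_for_skew` are assumptions made for this sketch.

```python
def roc_hull(points):
    """Upper convex hull of ROC points (FPR, TPR), including the
    trivial classifiers (0, 0) and (1, 1)."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # Pop points that would make the hull non-concave
        # (cross-product turn test, as in Andrew's monotone chain).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def best_for_skew(hull, skew):
    """Hull point maximising TPR - skew * FPR, where skew combines
    misclassification costs and class distribution."""
    return max(hull, key=lambda p: p[1] - skew * p[0])

classifiers = [(0.1, 0.6), (0.3, 0.8), (0.5, 0.7), (0.2, 0.75)]
hull = roc_hull(classifiers)          # (0.5, 0.7) lies below the hull
print(best_for_skew(hull, 1.0))       # balanced costs and classes
print(best_for_skew(hull, 4.0))       # false positives 4x as costly
```

A hybrid classifier, in this view, is simply a rule that consults the deployment context's skew and delegates to the hull classifier chosen by `best_for_skew`; classifiers strictly below the hull are never selected.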