The k-nearest neighbor classifier follows a simple yet powerful algorithm: collect the k data points closest to an unlabeled instance, according to a given distance measure, and use them to predict that instance's label. Its two components, the parameter k governing the size of the neighborhood and the distance measure, essentially determine the success or failure of the classifier. In this work, we propose to reverse the use of outlier-detection techniques based on k-neighborhoods in order to determine the value of k. To this end, we invert the workings of these techniques: instead of using a fixed k to decide whether an instance is an outlier, we stop growing the k-neighborhood as soon as the unlabeled instance would be assigned outlier status. We derive a number of criteria from different neighborhood-based outlier-detection techniques. With the exception of one technique, our approaches have low complexity and running times. In our experiments, we compare against two recently proposed techniques from the field that have more sophisticated theoretical foundations, as well as against two well-established kNN classifiers. We find that our approaches are competitive with existing work and, in particular, that the recent techniques do not constitute an improvement.
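The core idea of growing the neighborhood until the instance would be flagged as an outlier can be sketched as follows. This is a minimal illustration, not the paper's method: the stopping rule used here (stop when the next neighbor's distance exceeds a fixed multiple of the mean distance of the neighbors collected so far) is a hypothetical, crude distance-based outlier criterion chosen only to make the inversion concrete; the names `adaptive_k_predict`, `k_max`, and `factor` are likewise illustrative.

```python
import numpy as np

def adaptive_k_predict(X, y, x, k_max=20, factor=2.0):
    """Classify x by majority vote over an adaptively sized neighborhood.

    The neighborhood is grown one point at a time; growth stops as soon
    as a crude distance-based outlier test would flag x as an outlier,
    i.e. when the next neighbor's distance exceeds `factor` times the
    mean distance of the neighbors collected so far. (Hypothetical
    stopping criterion, for illustration only.)
    """
    # Euclidean distances from x to all training points, sorted ascending.
    dists = np.linalg.norm(X - x, axis=1)
    order = np.argsort(dists)

    k = 1
    while k < min(k_max, len(X)):
        mean_so_far = dists[order[:k]].mean()
        if dists[order[k]] > factor * mean_so_far:
            break  # x would become an outlier: stop growing the neighborhood
        k += 1

    # Majority vote among the k selected neighbors.
    labels, counts = np.unique(y[order[:k]], return_counts=True)
    return labels[np.argmax(counts)]
```

On a dataset with two well-separated clusters, the rule stops growing the neighborhood exactly at the cluster boundary, so the resulting k adapts to the local density around the query point.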