During visual object categorization, a match must be found between the input image and stored information about basic-level categories. Graf (2002) proposed that analog, shape-changing transformational processes align the memory representation of a category with the perceptual representation of the current stimulus. Here we compare the predictions of alignment models with those of exemplar-based models, using morphing between four exemplar outlines within each of eleven categories. Overall, with increasing transformational distance between two exemplars of the same category, reaction times to decide whether they belong to the same category in a sequential matching paradigm increased, while rated similarity between the exemplars decreased. In contrast to alignment accounts, however, exemplar-based accounts (a) correctly predict the observed dissociation between typicality and categorization time, and accommodate both (b) the observed deviations from sequential additivity and (c) the non-linear relation between transformational distance and rated similarity. By discussing how exemplar-based theories can be integrated with neglected processes such as information accumulation, response competition, response priming, and gain modulation, a view of the recognition process from input to response emerges that increases the validity and scope of modern exemplar-based categorization and recognition models.