IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 447-450
In this letter, we present a simple primal-dual support vector machine formulation of the problem of principal component analysis (PCA) in dual variables. By considering a mapping to a high-dimensional feature space and applying the kernel trick (Mercer's theorem), kernel PCA is obtained as introduced by Schölkopf et al. While least squares support vector machine (LS-SVM) classifiers have a natural link with kernel Fisher discriminant analysis (minimizing the within-class scatter around targets +1 and -1), PCA can be interpreted as a one-class modeling problem with a zero target value around which one maximizes the variance. The score variables are interpreted as error variables within the problem formulation. In this way, primal-dual constrained optimization interpretations of linear and kernel PCA are obtained in a style similar to that of LS-SVM classifiers.
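To make the construction concrete, the following is a minimal sketch of the kind of primal-dual problem described above. The notation is introduced here for illustration and need not match the paper's: gamma is a regularization constant, e_k are the score/error variables, varphi is the feature map, mu_hat its sample mean over the data, and Omega_c the centered kernel matrix.

    % Sketch (notation ours): maximize the variance of the scores e_k
    % around the zero target while regularizing w:
    \max_{w,\,e}\; \frac{\gamma}{2}\sum_{k=1}^{N} e_k^{2} \;-\; \frac{1}{2}\,w^{\top}w
    \quad \text{s.t.} \quad
    e_k = w^{\top}\bigl(\varphi(x_k) - \hat{\mu}_{\varphi}\bigr), \qquad k = 1,\dots,N.

    % Stationarity of the Lagrangian gives
    % w = \sum_k \alpha_k (\varphi(x_k) - \hat{\mu}_{\varphi}) and \alpha_k = \gamma e_k;
    % eliminating w and e yields the dual eigenvalue problem
    \Omega_c\,\alpha = \lambda\,\alpha, \qquad \lambda = 1/\gamma, \qquad
    (\Omega_c)_{kl} = \bigl(\varphi(x_k)-\hat{\mu}_{\varphi}\bigr)^{\top}
                      \bigl(\varphi(x_l)-\hat{\mu}_{\varphi}\bigr),

    % i.e. the eigendecomposition of the centered kernel matrix, which recovers
    % kernel PCA once (Omega_c)_{kl} is evaluated through a Mercer kernel K(x_k, x_l).

Note how the score variables e_k play the role of error variables: the constraints define them, and the objective maximizes their variance around the zero target instead of penalizing them, inverting the sign convention of the LS-SVM classifier objective.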