
IEEE Transactions on Neural Networks

Publication date: 2008-09-01
Volume: 19, Pages: 1583–1598
Publisher: Institute of Electrical and Electronics Engineers

Authors:

Alzate Perez, Carlos
Suykens, Johan

Keywords:

SISTA; Science & Technology; Technology; Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic; Computer Science; Engineering; epsilon-insensitive loss function; kernel principal component analysis (PCA); least squares support vector machines (LS-SVM); loss function; robustness; sparseness; Algorithms; Artificial Intelligence; Computer Simulation; Models, Statistical; Pattern Recognition, Automated; Principal Component Analysis; Artificial Intelligence & Image Processing; 4602 Artificial intelligence

Abstract:

Kernel principal component analysis (PCA) is a technique for performing feature extraction in a high-dimensional feature space that is nonlinearly related to the original input space. The kernel PCA formulation corresponds to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation of kernel PCA, we extend it to a generalized form of kernel component analysis (KCA) in which the underlying loss function is made explicit. For classical kernel PCA, the underlying loss function is L2. In this generalized form, other loss functions can also be plugged in. In the context of robust statistics, the L2 loss function is known to be non-robust because its influence function is not bounded; outliers can therefore skew the solution away from the desired one. Another issue with kernel PCA is the lack of sparseness: the principal components are dense expansions in terms of kernel functions. In this paper, we introduce robustness and sparseness into kernel component analysis by using an epsilon-insensitive robust loss function. We propose two different algorithms. The first method solves a set of nonlinear equations, using the kernel PCA solution as the starting point. The second method uses a simplified iterative weighting procedure that leads to solving a sequence of generalized eigenvalue problems. Simulations with toy and real-life data show improved robustness together with a sparse representation.
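
The following is a minimal, self-contained sketch in Python/NumPy. The first function implements classical (L2) kernel PCA, the special case the paper generalizes; the second is a hypothetical illustration of the iterative-weighting pattern described in the abstract, not the paper's exact update rules. The RBF kernel, the values of sigma and eps, and the specific weighting scheme are all assumptions made for this sketch.

    import numpy as np

    def rbf_kernel(X, sigma=1.0):
        # Gram matrix of the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kernel_pca(X, n_components=2, sigma=1.0):
        # Classical kernel PCA: eigendecomposition of the centered kernel matrix;
        # eigenvectors with the largest eigenvalues are the principal components.
        n = X.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
        Kc = H @ rbf_kernel(X, sigma) @ H             # kernel matrix centered in feature space
        eigvals, eigvecs = np.linalg.eigh(Kc)
        idx = np.argsort(eigvals)[::-1][:n_components]
        return Kc, eigvecs[:, idx], eigvals[idx]

    def reweighted_kca_sketch(X, n_components=2, sigma=1.0, eps=0.1, n_iter=10):
        # Hypothetical iterative-weighting loop (an assumption, for illustration):
        # points whose error variables fall inside the epsilon tube receive zero
        # weight, which is where sparseness would come from; each pass solves a
        # weighted eigenvalue problem, with kernel PCA as the starting point.
        Kc, alphas, lambdas = kernel_pca(X, n_components, sigma)
        w = np.ones(Kc.shape[0])
        for _ in range(n_iter):
            e = np.abs(Kc @ alphas[:, 0])             # error variables, first component
            w = np.where(e <= eps, 0.0, 1.0 / np.maximum(e, 1e-12))
            W = np.sqrt(w)
            Kw = W[:, None] * Kc * W[None, :]         # fold weights into the kernel matrix
            eigvals, eigvecs = np.linalg.eigh(Kw)
            idx = np.argsort(eigvals)[::-1][:n_components]
            alphas, lambdas = eigvecs[:, idx], eigvals[idx]
        return alphas, lambdas, w

Note that in this sketch the down-weighting of large errors illustrates robustness (outliers cannot dominate the eigendecomposition) while the zero weights inside the epsilon tube illustrate sparseness (those points drop out of the expansion); the paper's actual formulation should be consulted for the precise optimization problem and update equations.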