Linear Discriminant Analysis based on Fast Approximate SVD
Abstract
We present an approach for performing linear discriminant analysis (LDA) in the challenging contemporary context of high dimensionality. The projection matrix of LDA is usually obtained by simultaneously maximizing the between-class covariance and minimizing the within-class covariance. However, this involves a matrix eigendecomposition that is computationally expensive in both time and memory when the number of samples and the number of features are large. To deal with this complexity, we propose to use a recent dimension reduction method based on fast approximate singular value decomposition (SVD), which has deep connections with low-rank approximation of the data matrix. The proposed approach, appSVD+LDA, consists of two stages. The first stage produces a set of artificial features from the original data; the second stage is classical LDA. We present the foundation of our approach and compare its accuracy and computation time with those of several state-of-the-art techniques on different real data sets.
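The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's `randomized_svd` as the fast approximate SVD, a hypothetical rank `k = 10`, and a synthetic three-class data set in place of the real data sets evaluated in the paper.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy high-dimensional data: 60 samples, 500 features, 3 classes
rng = np.random.RandomState(0)
X = rng.randn(60, 500)
y = np.repeat([0, 1, 2], 20)
for c in range(3):
    X[y == c] += 3.0 * rng.randn(500)  # shift each class mean for separability

# Stage 1: fast approximate (randomized) SVD yields rank-k artificial features
k = 10  # illustrative rank, chosen arbitrarily here
U, S, Vt = randomized_svd(X, n_components=k, random_state=0)
Z = U * S  # equivalent to projecting X onto the top-k right singular vectors

# Stage 2: classical LDA on the reduced features
lda = LinearDiscriminantAnalysis()
lda.fit(Z, y)
print(lda.score(Z, y))
```

Because the eigendecomposition inside LDA is now performed on `k`-dimensional features rather than the original 500-dimensional ones, both the time and memory cost of the second stage are governed by `k`, which is the point of the first stage.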