The Fisher-EM algorithm has recently been proposed in [2] for the simultaneous visualization and clustering of high-dimensional data. It is based on a discriminative latent mixture model which fits the data into a latent discriminative subspace whose intrinsic dimension is lower than the dimension of the original space. The Fisher-EM algorithm includes an F-step which estimates the projection matrix whose columns span the discriminative latent space. This matrix is estimated via an optimization problem which, in the original algorithm, is solved using a Gram-Schmidt procedure. Unfortunately, this procedure suffers in some cases from numerical instabilities which may result in a deterioration of the visualization quality or the clustering accuracy. Two alternatives for estimating the latent subspace are proposed to overcome this limitation. The optimization problem of the F-step is first recast as a regression-type problem and then reformulated such that the solution can be approximated with an SVD. Experiments on simulated and real datasets show the improvement brought by the proposed alternatives for both the visualization and the clustering of data.
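To make the F-step concrete: it maximizes a Fisher-type criterion under an orthonormality constraint. Writing $S_B$ and $S_W$ for the (soft) between-class and within-class scatter matrices and $U$ for the $p \times d$ projection matrix, the optimization problem is of the classical form below; the exact scatter matrices used by Fisher-EM may differ slightly from this textbook statement.
\[
\hat{U} \;=\; \arg\max_{U \in \mathbb{R}^{p \times d}} \; \operatorname{tr}\!\left( \left( U^{t} S_{W} U \right)^{-1} U^{t} S_{B} U \right) \qquad \text{subject to } U^{t} U = I_{d}.
\]
The Gram-Schmidt procedure of the original algorithm and the regression and SVD alternatives studied in this paper are three ways of solving or approximating this problem.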
The Fisher-EM algorithm was recently proposed in [2] for the simultaneous visualization and clustering of high-dimensional data. It is based on a latent discriminative mixture model which models the data in a subspace whose intrinsic dimension is smaller than that of the observation space. The Fisher-EM algorithm includes an F-step which estimates the projection matrix whose columns span the latent subspace. This matrix is estimated via an optimization problem which, in the original algorithm, is solved by a Gram-Schmidt procedure. Unfortunately, this procedure suffers in some cases from numerical instabilities which can degrade the quality of the visualization or of the clustering. To overcome this limitation, we propose two alternatives for estimating the latent subspace. The optimization problem of the F-step is first rewritten as a regression problem and then reformulated so that its solution can be approximated by an SVD. Experiments on simulated and real data show the benefit of the proposed alternatives for both the visualization and the clustering of data.
Keywords: clustering, Fisher-EM algorithm, regression problem, Fisher criterion, discriminative latent subspace, dimension reduction, high-dimensional data
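As a rough illustration of the SVD-based idea mentioned in the abstract, the sketch below (in numpy; not the exact F-step of the paper, and all function and variable names are hypothetical) builds soft scatter matrices from cluster responsibilities, extracts a discriminative basis from the associated eigenproblem, and orthonormalizes it with an SVD rather than a Gram-Schmidt step.

import numpy as np

def fisher_subspace(X, tau, d):
    """Hypothetical sketch: orthonormal basis of a d-dimensional
    discriminative subspace, in the spirit of the Fisher criterion.

    X   : (n, p) data matrix
    tau : (n, K) soft cluster responsibilities (rows sum to 1)
    d   : target dimension (at most K - 1 in the Fisher setting)
    """
    n, p = X.shape
    K = tau.shape[1]
    xbar = X.mean(axis=0)

    nk = tau.sum(axis=0)                    # soft cluster sizes
    means = (tau.T @ X) / nk[:, None]       # soft cluster means, (K, p)

    # Soft between-class scatter
    diff = means - xbar
    S_B = (diff.T * nk) @ diff / n

    # Soft within-class scatter
    S_W = np.zeros((p, p))
    for k in range(K):
        Xc = X - means[k]
        S_W += (Xc.T * tau[:, k]) @ Xc
    S_W /= n

    # Fisher criterion: leading eigenvectors of S_W^{-1} S_B
    # (a small ridge keeps S_W invertible when p is large)
    M = np.linalg.solve(S_W + 1e-6 * np.eye(p), S_B)
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(eigvals.real)[::-1][:d]
    V = eigvecs[:, order].real

    # Orthonormalization via SVD instead of Gram-Schmidt: L @ Rt is the
    # closest matrix with orthonormal columns to V (its polar factor)
    L, _, Rt = np.linalg.svd(V, full_matrices=False)
    return L @ Rt                           # (p, d), orthonormal columns

On a toy example, X = np.random.randn(200, 30) with tau drawn as normalized random positive weights over K = 3 clusters, fisher_subspace(X, tau, 2) returns a 30 x 2 matrix with orthonormal columns onto which the data can be projected for visualization; a full Fisher-EM iteration would alternate this kind of F-step with the E- and M-steps of the underlying mixture model.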
@article{JSFS_2011__152_3_98_0,
  author    = {Bouveyron, Charles and Brunet, Camille},
  title     = {On the estimation of the latent discriminative subspace in the {Fisher-EM} algorithm},
  journal   = {Journal de la soci\'et\'e fran\c{c}aise de statistique},
  pages     = {98--115},
  publisher = {Soci\'et\'e fran\c{c}aise de statistique},
  volume    = {152},
  number    = {3},
  year      = {2011},
  mrnumber  = {2871179},
  zbl       = {1316.62082},
  language  = {en},
  url       = {http://archive.numdam.org/item/JSFS_2011__152_3_98_0/}
}
Bouveyron, Charles; Brunet, Camille. On the estimation of the latent discriminative subspace in the Fisher-EM algorithm. Journal de la société française de statistique, Volume 152 (2011) no. 3, pp. 98-115. http://archive.numdam.org/item/JSFS_2011__152_3_98_0/
[1] Simultaneous model-based clustering and visualization in the Fisher Discriminative subspace, Statistics and Computing, Volume 22 (2011) no. 1, pp. 301-324 | MR | Zbl
[2] Dynamic Programming, Princeton University Press, 1957 | MR | Zbl
[3] High-Dimensional Data Clustering, Computational Statistics and Data Analysis, Volume 52 (2007) no. 1, pp. 502-519 | MR | Zbl
[4] On using principal components before separating a mixture of two multivariate normal distributions, Journal of the Royal Statistical Society, Series C, Volume 32 (1983) no. 3, pp. 267-275 | MR | Zbl
[5] The use of multiple measurements in taxonomic problems, Annals of Eugenics, Volume 7 (1936), pp. 179-188
[6] An optimal set of discriminant vectors, IEEE Transactions on Computers, Volume 24 (1975), pp. 281-289 | Zbl
[7] Introduction to Statistical Pattern Recognition, Academic Press, San Diego, 1990 | MR | Zbl
[8] Auto-Associative Models and Generalized Principal Component Analysis, Journal of Multivariate Analysis, Volume 93 (2005) no. 1, pp. 21-39 | MR | Zbl
[9] Matrix Computations. Second ed., The Johns Hopkins University Press, Baltimore, 1991 | MR | Zbl
[10] Matrix nearness problems and its applications, Oxford University Press (1989), pp. 1-27 | MR | Zbl
[11] A note on the orthonormal discriminant vector method for feature extraction, Pattern Recognition, Volume 24 (1991) no. 7, pp. 681-684
[12] A Database for Handwritten Text Recognition Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 16 (1994) no. 5, pp. 550-554
[13] Variable selection in model-based clustering: A general variable role modeling, Computational Statistics and Data Analysis, Volume 53 (2009), pp. 3872-3882 | MR | Zbl
[14] Parsimonious Gaussian mixture models, Statistics and Computing, Volume 18 (2008) no. 3, pp. 285-296 | MR
[15] Model-based clustering of microarray expression data via latent Gaussian mixture models, Bioinformatics, Volume 26 (2010) no. 21, pp. 2705-2712
[16] Modelling high-dimensional data by mixtures of factor analyzers, Computational Statistics and Data Analysis, Volume 41 (2003), pp. 379-388 | MR | Zbl
[17] Dimensionally reduced mixtures of regression models, Electronic Proceedings of KNEMO, Knowledge Extraction and Modelling (2006) | Zbl
[18] Heteroscedastic Factor Mixture Analysis, Statistical Modeling: An International Journal (2010), to appear | MR | Zbl
[19] Sparse linear discriminant analysis with applications to high dimensional low sample size data, International Journal of Applied Mathematics, Volume 39 (2009) no. 1 | MR | Zbl
[20] Variable selection for model-based clustering, Journal of the American Statistical Association, Volume 101 (2006) no. 473, pp. 168-178 | MR | Zbl
[21] A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis, Biostatistics, Volume 10 (2009) no. 3, pp. 515-534 | Zbl
[22] A mixed factor model for dimension reduction and extraction of a group structure in gene expression data, IEEE Computational Systems Bioinformatics Conference, Volume 8 (2004), pp. 161-172