Sunday, February 04, 2007

Nonlinear Dimensionality Reduction

Traditionally, multidimensional scaling (MDS) (Hastie et al., 2001) and principal component analysis (PCA) (Hastie et al., 2001) have been used for dimensionality reduction. MDS and PCA perform well if the input data lie on or close to a linear subspace, but they are not designed to discover nonlinear structure and often fail to do so. Weinberger et al. (2005) proposed using semidefinite programming and kernel matrix factorization to maximize the variance in feature space while preserving the distances and angles between nearest neighbors.

Appearing in Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006. Copyright 2006 by the author(s)/owner(s).
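To see why PCA cannot find a low-dimensional linear subspace for nonlinear data, here is a small NumPy sketch (the circle dataset and variable names are illustrative, not from the paper): points on a unit circle have intrinsic dimension 1, yet PCA spreads the variance evenly over two components.

```python
import numpy as np

# Points on a unit circle: a 1-D nonlinear manifold embedded in 2-D.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# The variance splits ~50/50 between the two principal directions:
# no 1-D linear subspace captures the circle, even though its
# intrinsic dimension is 1.
print(explained)  # approximately [0.5, 0.5]
```

Dropping either component would discard half the variance, which is exactly the failure mode on curved manifolds that the nonlinear methods below are meant to address.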
Classical methods such as LLE, ISOMAP, and LTSA are all unsupervised learning algorithms; that is, they assume no prior information about the input data. Furthermore, these algorithms do not always yield low-dimensional coordinates that bear any physical meaning.
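As a rough illustration of how one of these methods works, here is a minimal ISOMAP-style sketch in plain NumPy (a k-nearest-neighbor graph, geodesic distances via Floyd–Warshall, then classical MDS). The function name, parameters, and toy dataset are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def isomap(X, n_neighbors=8, n_components=2):
    """Minimal Isomap sketch: kNN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    # Pairwise Euclidean distances.
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Neighborhood graph: keep only edges to the n_neighbors closest points.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        G[i, idx[i]] = D[i, idx[i]]
        G[idx[i], i] = D[i, idx[i]]  # keep the graph symmetric
    # Geodesic (shortest-path) distances via Floyd-Warshall.
    for k in range(n):
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    # Classical MDS on the squared geodesic distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# Toy example: points on a semicircular arc (intrinsic dimension 1).
t = np.linspace(0, np.pi, 60)
X = np.column_stack([np.cos(t), np.sin(t)])
Y = isomap(X, n_neighbors=4, n_components=1)
```

On this arc, the recovered 1-D coordinate tracks arc length along the curve, which PCA alone could not unroll. Note that the embedding is only determined up to sign and translation, echoing the point above that the coordinates need not carry any physical meaning.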
