A unifying view of component analysis (from a computer vision perspective)

Fernando De la Torre

Abstract

  Component Analysis (CA) methods (e.g. Kernel Principal Component Analysis, Independent Component Analysis, Tensor factorization) have been used as a feature extraction step for modeling, classification and clustering in numerous visual, graphics and signal processing tasks over the last four decades. CA techniques are especially appealing because many can be formulated as eigen-problems, offering great potential for efficient learning of linear and non-linear representations of the data without local minima.
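
To make the eigen-problem remark concrete, here is a minimal sketch (my own illustration, not material from the talk): PCA reduces to an eigendecomposition of the data covariance matrix, so the optimal subspace is recovered globally, with no local minima. The function name pca and its interface are illustrative assumptions.

    # Illustrative sketch: PCA as an eigen-problem (assumed interface, not from the talk)
    import numpy as np

    def pca(X, k):
        """X: (n_samples, n_features) data matrix; returns the top-k principal directions."""
        Xc = X - X.mean(axis=0)                # center the data
        C = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix
        eigvals, eigvecs = np.linalg.eigh(C)   # symmetric eigendecomposition (global solution)
        order = np.argsort(eigvals)[::-1][:k]  # indices of the k largest eigenvalues
        return eigvecs[:, order]               # principal components as columns

Because the objective is solved by a single eigendecomposition rather than iterative descent, the result does not depend on initialization.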

In the first part of the talk, I will review standard CA techniques (Principal Component Analysis, Canonical Correlation Analysis, Linear Discriminant Analysis, Non-negative Matrix Factorization, Independent Component Analysis) and three standard extensions (kernel methods, latent variable models and tensors). In the second part of the talk, I will describe a unified framework for energy-based learning in CA methods. I will also propose several extensions of CA methods that learn linear and non-linear representations of data and improve on the performance of standard CA features in state-of-the-art algorithms for classification (e.g. support vector machines), clustering (e.g. spectral graph methods) and modeling/visual tracking (e.g. active appearance models).
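
As one concrete instance of the techniques reviewed above, the sketch below (an assumption of this write-up, not taken from the slides) casts Linear Discriminant Analysis as a generalized eigen-problem on the between-class and within-class scatter matrices; the helper name lda_directions is hypothetical.

    # Illustrative sketch: Fisher LDA via the generalized eigen-problem S_b w = lambda S_w w
    import numpy as np
    from scipy.linalg import eigh

    def lda_directions(X, y, k):
        """X: (n_samples, n_features), y: class labels; returns k discriminant directions."""
        classes = np.unique(y)
        mean = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))                        # within-class scatter
        Sb = np.zeros((d, d))                        # between-class scatter
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mean).reshape(-1, 1)
            Sb += len(Xc) * (diff @ diff.T)
        Sw += 1e-6 * np.eye(d)                       # small regularizer keeps Sw invertible
        eigvals, eigvecs = eigh(Sb, Sw)              # generalized symmetric eigen-problem
        order = np.argsort(eigvals)[::-1][:k]        # top-k generalized eigenvalues
        return eigvecs[:, order]

The same pattern, with a different pair of matrices, underlies several of the other CA methods listed above, which is one reason a unified treatment is possible.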


Slides: A unifying view of component analysis

