Jun Zhang et al. (1997) investigate three distinct approaches to face recognition in computer vision, each a noteworthy area of statistical analysis in its own right:
1) Eigenface algorithm
2) Elastic matching
3) Autoassociation and classification nets
The eigenface method encodes the statistical variation among face images using a dimensionality reduction technique, principal component analysis (PCA). The resulting directions of characteristic variation in the feature space do not necessarily correspond to isolated facial features such as the eyes, ears, or nose; in other words, the components of the feature vector are not pre-determined.
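To make the idea concrete, here is a minimal eigenface sketch in Python, assuming a hypothetical array `faces` of flattened, equally sized grayscale face images; it is an illustration of PCA-based feature extraction, not the authors' exact implementation.

```python
import numpy as np

def fit_eigenfaces(faces: np.ndarray, n_components: int = 20):
    """Compute the mean face and the top principal axes (eigenfaces).

    `faces` is assumed to be an (n_images, n_pixels) array of flattened images.
    """
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data matrix; the rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    return mean_face, eigenfaces

def project(face: np.ndarray, mean_face: np.ndarray, eigenfaces: np.ndarray):
    # Feature vector: coordinates of the face in the eigenface subspace.
    return eigenfaces @ (face - mean_face)
```

Note that each eigenface is a direction of maximal variance across the whole image, which is why it rarely isolates a single facial feature.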
Elastic matching generates nodal graphs (i.e., wireframe models) whose nodes correspond to specific fiducial points on a face, such as the eyes, chin, and tip of the nose, and recognition is based on comparing the image graph of a probe against graphs stored in a known database. Because image graphs can be rotated and locally deformed during the matching process, this approach tends to be more robust to large variation in the images.
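The sketch below illustrates only the geometric comparison step of this idea: it aligns two sets of hypothetical (x, y) landmark coordinates under a search over rotations and sums the node-to-node distances. The paper's elastic matching additionally labels each node with local image features, which this simplified example omits.

```python
import numpy as np

def graph_distance(probe: np.ndarray, gallery: np.ndarray,
                   angles=np.linspace(-30, 30, 61)):
    """Distance between two landmark graphs, minimized over in-plane rotation.

    `probe` and `gallery` are assumed to be (n_nodes, 2) arrays of landmark
    coordinates (eyes, nose tip, chin, ...) in the same node order.
    """
    probe_c = probe - probe.mean(axis=0)      # center both graphs
    gallery_c = gallery - gallery.mean(axis=0)
    best = np.inf
    for deg in angles:
        t = np.deg2rad(deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        # Sum of node-to-node distances after rotating the probe graph.
        cost = np.linalg.norm(probe_c @ rot.T - gallery_c, axis=1).sum()
        best = min(best, cost)
    return best

def recognize(probe: np.ndarray, database: dict):
    # `database` maps identity -> stored landmark graph (hypothetical).
    return min(database, key=lambda name: graph_distance(probe, database[name]))
```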
Recognition with autoassociation and classification nets differs fundamentally from the other two approaches in being a supervised machine learning technique: an autoassociation network (a neural network trained to reproduce its own input) compresses each face image into a low-dimensional feature vector, and a classification network is then trained to map those feature vectors to known identities.
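A rough sketch of that pipeline follows, with scikit-learn standing in for the paper's hand-built networks and hypothetical arrays `faces`, `labels`, and `new_faces`; it is meant only to show how the two nets fit together.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

# 1) Autoassociation net: an MLP trained to reproduce its own input; the narrow
#    hidden layer learns a compressed representation of each face.
#    `faces` is assumed to be an (n_images, n_pixels) array.
auto = MLPRegressor(hidden_layer_sizes=(32,), activation="relu", max_iter=2000)
auto.fit(faces, faces)

def encode(x: np.ndarray) -> np.ndarray:
    # Hidden-layer activations = learned low-dimensional feature vector.
    return np.maximum(0, x @ auto.coefs_[0] + auto.intercepts_[0])

# 2) Classification net: supervised MLP mapping feature vectors to identities.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
clf.fit(encode(faces), labels)

# Predict identities for previously unseen face images.
predicted = clf.predict(encode(new_faces))
```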
Although eigenface recognition can underperform the other methods when variation in lighting or facial alignment is large, it has the benefit of being easy to implement, computationally efficient, and able to recognize faces in an unsupervised manner, and it therefore tends to serve as a de facto standard. Many state-of-the-art recognition techniques also rely on some form of dimensionality reduction prior to recognition, even if feature-vector extraction is handled differently.
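To illustrate the unsupervised recognition step, the snippet below reuses `fit_eigenfaces` and `project` from the earlier eigenface sketch and assumes hypothetical arrays `gallery_faces`, `gallery_names`, and `probe_face`: recognition reduces to a nearest-neighbour search in the low-dimensional eigenface space, with no labeled training required.

```python
import numpy as np

# Project the gallery and the probe into the eigenface subspace.
mean_face, eigenfaces = fit_eigenfaces(gallery_faces, n_components=20)
gallery_features = np.array([project(f, mean_face, eigenfaces)
                             for f in gallery_faces])
probe_feature = project(probe_face, mean_face, eigenfaces)

# Nearest neighbour in feature space is the recognized identity.
distances = np.linalg.norm(gallery_features - probe_feature, axis=1)
print("Best match:", gallery_names[int(np.argmin(distances))])
```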