
Ho Chi Minh City University of Technology
Faculty of Electrical and Electronics Engineering
Department of Telecommunications

Principal Component Analysis (PCA)

Lectured by Ha Hoang Kha, Ph.D.
Ho Chi Minh City University of Technology
Email: hahoangkha@gmail.com

Face detection and recognition
[Figure: detection locates faces in an image; recognition identifies them (e.g., "Sally")]
Applications of Face Recognition
Digital photography
Surveillance
Consumer application: iPhoto 2009

http://www.apple.com/ilife/iphoto/
Starting idea of eigenfaces
1. Treat each image's pixels as a vector $x$.

2. Recognize a face by nearest neighbor against the training faces $y_1, \dots, y_n$:
$$k^* = \arg\min_k \|y_k - x\|$$
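A minimal sketch of this nearest-neighbor baseline in Python/NumPy; the function and variable names are illustrative assumptions, not part of the lecture:

```python
import numpy as np

def nearest_neighbor_face(x, training_faces):
    """Return the index of the training face closest to x.

    x: query image flattened to a 1-D pixel vector.
    training_faces: array of shape (n, d), one flattened face per row.
    """
    # k* = argmin_k ||y_k - x||, computed over all training faces
    distances = np.linalg.norm(training_faces - x, axis=1)
    return int(np.argmin(distances))
```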
The space of all face images
When viewed as vectors of pixel values, face images are extremely high-dimensional:
a 100 x 100 image = 10,000 dimensions
slow to process, and lots of storage
But very few 10,000-dimensional vectors are valid face images.
We want to effectively model the subspace of face images.
Efficient Image Storage

[Figure-only slides]
Geometrical Interpretation

[Figure-only slides]
Dimensionality Reduction
The set of faces is a subspace of the set of images.
Suppose it is K-dimensional.
We can find the best subspace using PCA.
This is like fitting a hyper-plane to the set of faces, spanned by vectors $u_1, u_2, \dots, u_K$.

Any face:
$$x \approx \bar{x} + w_1 u_1 + w_2 u_2 + \cdots + w_K u_K$$
Principal Component Analysis
An N x N pixel image of a face, represented as a vector, occupies a single point in $N^2$-dimensional image space.
Because images of faces are similar in overall configuration, they will not be randomly distributed in this huge image space. Therefore, they can be described by a low-dimensional subspace.

Main idea of PCA for faces:
Find the vectors that best account for the variation of face images within the entire image space.
These vectors are called eigenvectors.
Construct a face space and project the images into this face space (eigenfaces).

Image Representation
A training set of M images of size N x N is represented by vectors of size $N^2$: $x_1, x_2, x_3, \dots, x_M$.

Example: a 3 x 3 image is flattened into a 9 x 1 vector:
$$\begin{pmatrix} 1 & 5 & 4 \\ 2 & 1 & 3 \\ 3 & 2 & 1 \end{pmatrix}_{3 \times 3} \longrightarrow \begin{pmatrix} 1 & 5 & 4 & 2 & 1 & 3 & 3 & 2 & 1 \end{pmatrix}^T_{9 \times 1}$$
Principal Component Analysis (PCA)


Given: N data points $x_1, \dots, x_N$ in $\mathbb{R}^d$.

We want to find a new set of features that are linear combinations of the original ones:
$$u(x_i) = u^T(x_i - \mu)$$
where $\mu$ is the mean of the data points.

Choose the unit vector $u$ in $\mathbb{R}^d$ that captures the most data variance.
Principal Component Analysis
Direction that maximizes the variance of the projected data:
$$\operatorname{var}(u) = \frac{1}{N} \sum_{i=1}^{N} \big( u^T (x_i - \mu) \big)^2 = u^T \Sigma u$$
where $u^T(x_i - \mu)$ is the projection of data point $x_i$, and $\Sigma = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T$ is the covariance matrix of the data.

Maximize $u^T \Sigma u$ subject to $\|u\| = 1$.
The direction that maximizes the variance is the eigenvector associated with the largest eigenvalue of $\Sigma$.
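A sketch of finding this direction with NumPy, assuming the data is laid out one point per row (the function name is an illustrative assumption):

```python
import numpy as np

def principal_direction(X):
    """Return the unit vector maximizing the variance of projected data.

    X: array of shape (N, d), one data point per row.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # Covariance matrix Sigma = (1/N) * sum_i (x_i - mu)(x_i - mu)^T
    Sigma = (Xc.T @ Xc) / X.shape[0]
    # eigh returns eigenvalues in ascending order for symmetric matrices
    eigvals, eigvecs = np.linalg.eigh(Sigma)
    return eigvecs[:, -1]  # eigenvector of the largest eigenvalue
```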
Eigenfaces (PCA on face images)

1. Compute covariance matrix of face images

2. Compute the principal components (eigenfaces)
K eigenvectors with largest eigenvalues

3. Represent all face images in the dataset as linear
combinations of eigenfaces
Perform nearest neighbor on these coefficients
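The three steps might look as follows in NumPy. Note this is a sketch under the assumption that the d x d covariance fits in memory; for real face images, where d = N^2 is large, practical implementations typically use an SVD of the centered data instead of forming the covariance explicitly:

```python
import numpy as np

def eigenfaces(X, K):
    """Compute the top-K eigenfaces from flattened face images.

    X: array of shape (M, d), one flattened face image per row.
    Returns the mean face, the eigenfaces (K, d), and coefficients (M, K).
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    # Step 1: covariance matrix of the face images
    Sigma = (Xc.T @ Xc) / X.shape[0]
    # Step 2: K eigenvectors with the largest eigenvalues
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # ascending eigenvalues
    U = eigvecs[:, -K:][:, ::-1].T            # shape (K, d), largest first
    # Step 3: represent each face by its coefficients w_i = U (x_i - mu)
    W = Xc @ U.T                              # shape (M, K)
    return mu, U, W
```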
Eigenfaces example
Training images: $x_1, \dots, x_N$
Eigenfaces example

Top eigenvectors: $u_1, \dots, u_k$
Mean face: $\mu$
Representation and reconstruction
Face x in face space coordinates:
$$x \longrightarrow \big( u_1^T (x - \mu), \dots, u_k^T (x - \mu) \big) = (w_1, \dots, w_k)$$

Reconstruction:
$$\hat{x} = \mu + w_1 u_1 + w_2 u_2 + w_3 u_3 + w_4 u_4 + \cdots$$
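A sketch of projection and reconstruction, assuming `U` holds the eigenfaces one per row and `mu` is the mean face, as in the earlier sketch:

```python
import numpy as np

def project(x, mu, U):
    """Face-space coordinates: w_i = u_i^T (x - mu)."""
    return U @ (x - mu)

def reconstruct(w, mu, U):
    """Reconstruction: x_hat = mu + sum_i w_i * u_i."""
    return mu + U.T @ w
```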
Recognition with eigenfaces
Process labeled training images:
Find the mean $\mu$ and covariance matrix $\Sigma$.
Find the k principal components (eigenvectors of $\Sigma$): $u_1, \dots, u_k$.
Project each training image $x_i$ onto the subspace spanned by the principal components:
$$p_i = (w_{i1}, \dots, w_{ik}) = \big( u_1^T (x_i - \mu), \dots, u_k^T (x_i - \mu) \big)$$

Given a novel image x:
Project onto the subspace:
$$p = (w_1, \dots, w_k) = \big( u_1^T (x - \mu), \dots, u_k^T (x - \mu) \big)$$
Optional: check the reconstruction error $\|x - \hat{x}\|$ to determine whether the image is really a face.
Classify as the closest training face in the k-dimensional subspace.
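Putting this together for a novel image, building on the hypothetical helpers above (the `max_error` parameter is an assumption standing in for the face/non-face check):

```python
import numpy as np

def recognize(x, mu, U, train_coeffs, labels, max_error=None):
    """Classify a novel image by nearest neighbor in face space.

    train_coeffs: (M, K) projections p_i of the training images.
    labels: identity of each training image.
    max_error: optional reconstruction-error threshold (face vs. non-face).
    """
    w = U @ (x - mu)                       # project onto the subspace
    if max_error is not None:
        x_hat = mu + U.T @ w               # reconstruction
        if np.linalg.norm(x - x_hat) > max_error:
            return None                    # probably not a face
    distances = np.linalg.norm(train_coeffs - w, axis=1)
    return labels[int(np.argmin(distances))]
```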
Recognition
The distance of p to each face class is defined by
$$\epsilon_k^2 = \|p - p_k\|^2, \quad k = 1, \dots, N$$

A distance threshold $\theta_c$ is half the largest distance between any two face images:
$$\theta_c = \frac{1}{2} \max_{j,k} \{ \|p_j - p_k\| \}, \quad j, k = 1, \dots, N$$
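A sketch of computing the class distances $\epsilon_k$ and the threshold $\theta_c$ from the training projections; variable names are assumptions:

```python
import numpy as np

def class_distances(p, class_means):
    """eps_k = ||p - p_k|| for each face class k."""
    return np.linalg.norm(class_means - p, axis=1)

def distance_threshold(projections):
    """theta_c = 0.5 * max_{j,k} ||p_j - p_k|| over all training faces."""
    diffs = projections[:, None, :] - projections[None, :, :]
    return 0.5 * np.linalg.norm(diffs, axis=2).max()
```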
Recognition

Find the distance $\epsilon$ between the original image x and its reconstruction from the eigenface space, $x_f$:
$$\epsilon^2 = \|x - x_f\|^2$$

Recognition process:
IF $\epsilon \geq \theta_c$, then the input image is not a face image;
IF $\epsilon < \theta_c$ AND $\epsilon_k \geq \theta_c$ for all k, then the input image contains an unknown face;
IF $\epsilon < \theta_c$ AND $\epsilon_{k^*} = \min_k \{\epsilon_k\} < \theta_c$, then the input image contains the face of individual $k^*$.
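The three-way rule might be coded like this, reusing the hypothetical `eps` (reconstruction distance), `eps_k`, and `theta_c` from the sketches above:

```python
def classify_face(eps, eps_k, theta_c):
    """Apply the eigenface decision rules.

    eps: reconstruction distance ||x - x_f||.
    eps_k: NumPy array of distances to each face class.
    theta_c: the distance threshold.
    """
    if eps >= theta_c:
        return "not a face"
    k_star = int(eps_k.argmin())
    if eps_k[k_star] >= theta_c:
        return "unknown face"
    return f"individual {k_star}"
```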
PCA

General dimensionality reduction technique

Preserves most of variance with a much more compact
representation
Lower storage requirements (eigenvectors + a few
numbers per face)
Faster matching

What are the problems for face recognition?

Limitations
Global appearance method: not robust to misalignment or background variation.
