
Steps in a face recognition algorithm

1. Detection
2. Extraction
3. Recognition

HAAR WAVELET TRANSFORMS

In the standard decomposition of an image, the one-dimensional wavelet transform is first applied to each row of pixel values. This gives an average value along with detail coefficients for each row. The transformed rows are treated as the image, and the 1D transform is applied to each column. The resulting values are all detail coefficients plus a single overall average coefficient.

In the non-standard decomposition, one step of horizontal pairwise averaging and differencing is first performed on the pixel values in each row of the image. Next, vertical pairwise averaging and differencing is applied to each column of the result. This process is repeated recursively on the quadrant containing the averages in both directions.

The low-pass filter coefficients are [1/2 1/2]; the high-pass filter coefficients are [1/2 -1/2].
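The non-standard decomposition described above can be sketched in a few lines of NumPy. This is a minimal illustration, not an optimized implementation; the function names are my own, and the assignment of the row/column low/high combinations to the H/V/D subbands follows one common convention (labeling conventions differ between texts).

```python
import numpy as np

def haar_step(signal):
    """One step of pairwise averaging and differencing.

    Low-pass coefficients [1/2, 1/2] give the averages;
    high-pass coefficients [1/2, -1/2] give the details.
    """
    pairs = signal.reshape(-1, 2)
    averages = (pairs[:, 0] + pairs[:, 1]) / 2.0
    details = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return averages, details

def haar_2d_nonstandard(image):
    """One level of the non-standard 2D decomposition:
    horizontal averaging/differencing on each row, then
    vertical averaging/differencing on each column of the result.
    Returns the approximation A and the detail subbands H, V, D."""
    rows, cols = image.shape
    low = np.empty((rows, cols // 2))
    high = np.empty((rows, cols // 2))
    for i in range(rows):                       # horizontal pass
        low[i], high[i] = haar_step(image[i])
    A = np.empty((rows // 2, cols // 2))
    H = np.empty_like(A)
    V = np.empty_like(A)
    D = np.empty_like(A)
    for j in range(cols // 2):                  # vertical pass
        A[:, j], V[:, j] = haar_step(low[:, j])
        H[:, j], D[:, j] = haar_step(high[:, j])
    return A, H, V, D
```

Applied recursively to the returned `A` quadrant, this yields the multi-level decomposition; a constant image produces all-zero detail subbands, which is a quick sanity check.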

[Figure: filter-bank tree representation of one level of 2D wavelet decomposition. The low-pass (H) and high-pass (G) filters applied along rows and columns of the image X produce the approximation A1 and the horizontal (H1), vertical (V1), and diagonal (D1) detail subbands.]

Three approaches:

1. Decompose the image using the two-dimensional Discrete Wavelet Transform (2D-DWT) to obtain the scaling and wavelet components. The scaling component is decomposed further until the last level, where each component contains a single element. The approximation coefficient (scaling component) at the last level (a single element) is concatenated with those at the previous level (4 elements) to form the feature vector.

2. The image is decomposed one level using the 2D DWT. Histogram equalization is performed on the scaling component obtained at this level. The horizontal, vertical and diagonal details obtained at this level are multiplied by a scaling factor of 1.25. The inverse DWT is performed on the array obtained by combining the above components. The new image obtained from this step is given to the eigenface algorithm. The accuracy improved to 98%, as against 94-96% obtained when the eigenface algorithm is applied to an image directly.

3. The image is divided into 16 non-overlapping 32×32 blocks. Each block is converted to a row vector of 1024 pixel values. This process is repeated for the 4 images of the training set for each person, giving four 1024-element row vectors. The pixel-wise mean of these vectors yields a single 1024-element vector for each block, which is then mean-adjusted to obtain the training vector. The same process is repeated for the test image to obtain a 1024-element test vector. This is done for all blocks of the training and test images. The similarity measure between the test vector and the training vector of each corresponding block is obtained using the L1 norm.
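The block-matching step of the third approach can be sketched as follows. This is an illustrative reconstruction under stated assumptions: 128×128 images (so 16 blocks of 32×32), four training images per person, and the similarity score taken as the negated L1 distance between mean-adjusted vectors (so higher means more similar); the function name is my own.

```python
import numpy as np

def block_similarity(train_imgs, test_img, block=32):
    """Per-block similarity between a person's training images and a
    test image (approach 3).

    train_imgs: array of shape (4, 128, 128), the 4 training images.
    test_img:   array of shape (128, 128).
    Returns 16 scores, one per 32x32 block (higher = more similar).
    """
    n = test_img.shape[0] // block          # blocks per side (4)
    scores = []
    for bi in range(n):
        for bj in range(n):
            sl = (slice(bi * block, (bi + 1) * block),
                  slice(bj * block, (bj + 1) * block))
            # 4 row vectors of 1024 pixels -> pixel-wise mean
            rows = train_imgs[(slice(None),) + sl].reshape(4, -1)
            mean_vec = rows.mean(axis=0)
            train_vec = mean_vec - mean_vec.mean()   # mean adjustment
            tvec = test_img[sl].reshape(-1)
            test_vec = tvec - tvec.mean()            # mean adjustment
            # L1 norm as the distance; negate so larger = more similar
            scores.append(-np.abs(train_vec - test_vec).sum())
    return np.array(scores)
```

When the test image equals the (identical) training images, every block's distance is zero, the maximum possible score under this sign convention.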
BLOCK SELECTION ALGORITHM

[Figure: each training image (A through D) and the test image is split into 16 non-overlapping 32×32 blocks (blocks 1A-16A, ..., 1D-16D, 1t-16t); each block is flattened into a row vector, and the 2D DWT of the selected blocks yields the feature vector.]

The 4 blocks with the maximum similarity measure are selected. The 2D DWT is applied to these blocks to obtain 5-element vectors as in the first approach. These vectors are concatenated to obtain the final feature vector. This approach resulted in 99% accuracy, an improvement of 1% over the second approach.
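The final feature-vector construction can be sketched as below. It is a minimal illustration with assumed function names: per approach 1, each 32×32 block's scaling component is averaged down to a single element, which is concatenated with the 4 approximation coefficients of the previous level, and the vectors of the 4 highest-scoring blocks are then joined.

```python
import numpy as np

def block_feature(block):
    """5-element feature vector per approach 1: repeatedly average
    the scaling component down to a single element, then concatenate
    it with the 4 approximation coefficients of the previous level."""
    a = block.astype(float)
    prev = None
    while a.size > 1:
        prev = a
        # one pairwise-averaging step in each direction (low-pass only)
        a = (a[0::2, :] + a[1::2, :]) / 2.0
        a = (a[:, 0::2] + a[:, 1::2]) / 2.0
    # a is 1x1 (last level), prev is 2x2 (previous level): 1 + 4 = 5
    return np.concatenate([a.ravel(), prev.ravel()])

def final_feature(blocks, scores, k=4):
    """Concatenate the 5-element vectors of the k most similar blocks.

    blocks: array of shape (16, 32, 32); scores: 16 similarity values.
    """
    best = np.argsort(scores)[-k:]   # indices of maximum similarity
    return np.concatenate([block_feature(blocks[i]) for i in best])
```

For a 32×32 block the averaging runs through 16×16, 8×8, 4×4, 2×2 down to 1×1, so the feature vector has 4 × 5 = 20 elements.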

[Figure, block selection algorithm (contd.): for each block, the four training row vectors (1024 pixel values each, from images A-D) are averaged pixel-wise into a 1024-element vector and mean-adjusted to give training vectors 1-16; the test image's row vectors are mean-adjusted to give test vectors 1-16; each training/test vector pair yields a similarity measure (one per block), and the 4 blocks with the maximum value are selected.]
