
R.V. COLLEGE OF ENGINEERING, BANGALORE-560059


(Autonomous Institution Affiliated to VTU, Belgaum)

FACE DETECTION, TRACKING AND RECOGNITION

ASSIGNMENT REPORT
COURSE- DIGITAL SIGNAL PROCESSING FOR COMMUNICATION

Submitted by
AJITH A L
AKSHAYA
AMRUTA NAVADAGI

Submitted to
Smt. B. ROJAREDDY
Asst. Professor
Department of Telecommunication
R.V.COLLEGE OF ENGINEERING
(2016-17)

CONTENTS

Theory
Introduction
Software
Applications
Advantages
Disadvantages
KLT tracker and face recognition
Program
Result
Conclusions and future scope
References

THEORY
INTRODUCTION
The face is a vital part of the human body that conveys important information such as the expression, attention and identity of an individual. Faces vary from individual to individual due to physical, cultural and environmental differences.

The broader problem of face recognition is that of identifying an individual from images of the face; face detection, as the term itself suggests, is concerned with where a face is located in an image. Face detection and tracking is a computer technology that identifies human faces in digital images and videos; it is a procedure by which the face region can be extracted from an image of the human body. It can be regarded as a specific case of object-class detection, in which the task is to find the locations and sizes of all objects in an image that belong to a given class, such as pedestrians or vehicles. Face tracking is the process of following the tilting or movement of a face.

Face detection and tracking algorithms focus on the detection of frontal human faces. The approach is analogous to image matching, in which the image of a person is compared bit by bit with a known standard. Similarly, in face detection the face is matched against the data/images available in an image database. A database here is a standard test data set against which the image subjected to detection and tracking is compared to produce results. While many databases are currently in use, an appropriate database should be chosen based on the task at hand, such as ageing, expressions or lighting. Another approach is to choose a data set specific to the property being tested, for example how the algorithm behaves when given images with lighting changes or with different facial expressions. If the facial features in the image selected for detection and tracking differ from those available in the database, the algorithm will invalidate the matching process.

Facial recognition (or face recognition) is a type of biometric software application that can identify a specific individual in a digital image by analysing and comparing patterns. Facial recognition systems based on faceprints can quickly and accurately identify target individuals when the conditions are favourable. However, if the subject's face is partially obscured or in profile rather than facing forward, or if the light is insufficient, the software is less reliable. Nevertheless, the technology is evolving quickly and there are several emerging approaches, such as 3D modelling, that may overcome the current problems with these systems. In this report, face recognition is performed in MATLAB by feature extraction; the features used are Histograms of Oriented Gradients (HOG).
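As a small illustration (a minimal sketch only; the full recognition program appears later in this report, and 'face.jpg' is just a placeholder file name), HOG features can be extracted and visualised in MATLAB as follows:

% Minimal sketch: extract and visualise HOG features of a single face image.
img = imread('face.jpg');                              % placeholder: any cropped face image
[hogFeatures, hogVis] = extractHOGFeatures(img, 'CellSize', [8 8]);
figure; imshow(img); hold on;
plot(hogVis);                                          % overlay the HOG visualisation
title('HOG features of the input face');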

SOFTWARE

MATLAB (short for matrix laboratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. A proprietary programming language developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran and Python. Face detection and tracking can be implemented using different software packages. In this report, face detection and tracking using KLT is implemented in MATLAB because of its obvious advantages.

[Figure 1: Typical Block Diagram of Face Detection and Tracking System —
INPUT -> FACE DETECTOR and REGISTRATION -> FEATURE EXTRACTION -> CLASSIFIER (compares with database) -> IF FACE IDENTIFIED -> TRACKER (Kanade-Lucas-Tomasi, KLT)]

Kanade-Lucas-Tomasi (KLT) is one among many algorithms for face detection and tracking in digital images and videos. Kanade, along with his students Lucas and Tomasi, originated the KLT algorithm in 1991, and progress continued until 1994. KLT is a feature-based tracking algorithm: its authors described how to track a face from frame to frame and which features to consider during detection and tracking. The goal of a face detection and tracking system is to locate the occurrence of an individual's face in the frame; the detection system retrieves the identity of the person for authorization, and tracking helps identify the movements of an individual's face.
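For reference, the standard formulation behind the KLT tracker (stated here in general terms, not taken from this report) is the following. For a feature window $W$, the displacement $\mathbf{d}$ between frames $I$ and $J$ is chosen to minimise the dissimilarity

$\epsilon(\mathbf{d}) = \sum_{\mathbf{x} \in W} \big[\, J(\mathbf{x} + \mathbf{d}) - I(\mathbf{x}) \,\big]^2 .$

Linearising $J$ for small $\mathbf{d}$ leads to the 2x2 linear system $Z\,\mathbf{d} = \mathbf{e}$, where

$Z = \sum_{\mathbf{x} \in W} \mathbf{g}(\mathbf{x})\,\mathbf{g}(\mathbf{x})^{\top}, \qquad \mathbf{e} = \sum_{\mathbf{x} \in W} \big[\, I(\mathbf{x}) - J(\mathbf{x}) \,\big]\,\mathbf{g}(\mathbf{x}),$

with $\mathbf{g}$ the image gradient. A window is a good feature to track when both eigenvalues of $Z$ are large, which is exactly the minimum-eigenvalue criterion used by the feature detector in the program later in this report.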

Figure 1 represents a typical block diagram of a face detection and tracking system using KLT. The image/video of the individual whose face is to be detected is given as input to the face detection system. The face is then localized, forming a frame that indicates the parts of the image where a face is anticipated. The detected region is then normalized so that the various facial features are aligned in their proper locations; this process is called face registration. The registered face undergoes feature extraction, in which features of the face such as the eyes, eyebrows, nose, mouth and ears are extracted in order to verify whether the anticipated region actually contains a face, by comparison with the available database; this comparison is carried out by the classifier.

The classifier checks whether the image matches one available in the database and decides whether the face is identified. Once identified, the image is subjected to tracking, which follows the tilting or movement of the face.

The KLT algorithm is used as a basis for face tracking in several face detection and recognition pipelines; the one described above is the general case. A few of the algorithms commonly used for face recognition are:

PCA (Principal Component Analysis)

One of the most used and cited statistical methods is Principal Component Analysis (PCA). It is a mathematical procedure that performs a dimensionality reduction by extracting the principal components of the multi-dimensional data. The first principal component is the linear combination of the original dimensions that has the highest variability. The n-th principal component is the linear combination with the maximum variability that is orthogonal to the first n-1 principal components.
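A minimal sketch of PCA-based dimensionality reduction for face images in MATLAB (the so-called eigenfaces approach) might look like the following; faceMatrix is an assumed N-by-D matrix holding one vectorised grayscale face per row, and pca requires the Statistics and Machine Learning Toolbox:

% Minimal PCA (eigenfaces) sketch. 'faceMatrix' is an assumed N-by-D matrix
% with one vectorised grayscale face image per row.
[coeff, score, latent] = pca(double(faceMatrix));   % pca centres the data internally
k = 50;                                             % number of principal components kept
eigenfaces = coeff(:, 1:k);                         % columns span the reduced face space
projectedFaces = score(:, 1:k);                     % low-dimensional representation of each face
explainedVar = cumsum(latent(1:k)) / sum(latent);   % fraction of variance captured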

LDA (Linear Discriminant Analysis)

LDA is widely used to find linear combinations of features while preserving class separability. Unlike PCA, LDA tries to model the differences between classes. Classic LDA is designed to take into account only two classes; specifically, it requires data points of different classes to be far from each other, while points from the same class are close. Consequently, LDA obtains a different projection vector for each class.
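As an illustration (a sketch under the assumption that trainFeatures, trainLabels and queryFeatures have already been built from a face database), a linear discriminant classifier can be fitted in MATLAB with fitcdiscr from the Statistics and Machine Learning Toolbox:

% Minimal LDA sketch. 'trainFeatures' (N-by-D), 'trainLabels' (N-by-1) and
% 'queryFeatures' (1-by-D) are assumed to come from a face database.
ldaModel = fitcdiscr(trainFeatures, trainLabels, 'DiscrimType', 'linear');
% 'pseudoLinear' can be substituted when the features outnumber the samples.
predictedLabel = predict(ldaModel, queryFeatures);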

ICA (Independent Component Analysis)

Independent Component Analysis aims to transform the data into linear combinations of statistically independent data points. Its goal is therefore to provide an independent rather than merely uncorrelated image representation. ICA is an alternative to PCA that provides a more powerful data representation, and it can be used as a discriminant analysis criterion to enhance PCA.
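A minimal sketch of an ICA-style feature learner in MATLAB is shown below; it uses reconstruction ICA (rica, Statistics and Machine Learning Toolbox, R2017a or later), and faceMatrix is again an assumed N-by-D matrix of vectorised faces:

% Minimal ICA sketch using reconstruction ICA (rica).
% 'faceMatrix' is an assumed N-by-D matrix of vectorised face images.
q = 50;                                                  % number of independent features to learn
icaModel = rica(double(faceMatrix), q, 'IterationLimit', 100);
icaFeatures = transform(icaModel, double(faceMatrix));   % N-by-q independent representation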

APPLICATIONS

Face detection and tracking is used in number of applications, few of them are as listed,

Computer Recognition

Computer recognition, or computer vision, seeks to automate tasks that the human visual system can do. Computer vision tasks include methods for acquiring, processing, analysing and understanding digital images and, in general, deal with the extraction of high-dimensional data from the real world in order to produce numerical or symbolic information in the form of decisions. Computer recognition controls, navigates, detects, organizes, interacts, inspects and models the information obtained. Examples include industrial robots, autonomous vehicles and mobile robots, people counting, human-computer interaction, etc.

Pattern Recognition

Pattern recognition is a branch of machine learning that focuses on the recognition of patterns and regularities in data. Examples include discriminant analysis, classification, regression, sequence labelling, regular expression matching, part-of-speech tagging, text editors, etc.

Biometrics and Information Security

Biometrics refers to metrics related to human characteristics such as palm veins, iris, retina, DNA, gait, voice, etc. Biometric authentication is used in computer science as a form of identification and access control, for example with smart cards. It is also used to identify individuals in groups that are under surveillance. Well-known examples of biometrics include driver's licenses, national IDs, passports, voter registration, user authentication, etc. Information security is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information.

Surveillance

Surveillance is the monitoring of behaviour, activities or other changing information, usually of people, for the purpose of influencing, managing, directing or protecting them. This can include observation from a distance by means of electronic equipment such as CCTV cameras, or the interception of electronically transmitted information such as Internet traffic or phone calls, and it can also include simple, relatively low-technology methods such as human intelligence agents and postal interception. The word surveillance comes from a French phrase for "watching over".

Surveillance is used by governments for intelligence gathering, the prevention of crime, the protection
of a process, person, group or object, or for the investigation of crime.

ADVANTAGES

- Photos of faces are widely used in passports and driver's licenses, where the possession authentication protocol is augmented with a photo for manual inspection, and there is wide public acceptance of this biometric identifier.
- Face recognition systems are the least intrusive from a biometric sampling point of view, requiring no contact, nor even the awareness of the subject.
- The biometric works, at least in theory, with legacy photograph databases, videotape and other image sources.
- Face recognition can, at least in theory, be used for screening for unwanted individuals in a crowd, in real time.
- It is a fairly good biometric identifier for small-scale verification applications.

DISADVANTAGES

- A face needs to be well lit by controlled light sources in automated face authentication systems. This is only the first in a long list of technical challenges associated with robust face authentication.
- The face is currently a poor biometric for use in a pure identification protocol.
- An obvious circumvention method is disguise. There is also some criminal association with face identifiers, since this biometric has long been used by law enforcement agencies.

KLT BASED TRACKING

The KLT algorithm follows several steps for face detection and tracking, as listed below.

Detect a Face

The first task is to detect the location of a face in a video frame, which is achieved with the cascade object detector (the vision.CascadeObjectDetector System object). By default, the detector is configured to detect faces, but it can be configured to detect other types of objects, as the sketch below illustrates.
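For example (a small sketch, assuming the demo image 'visionteam.jpg' shipped with the Computer Vision Toolbox is available), the same System object can be created with a different pretrained classification model:

% Sketch: the cascade object detector configured for different object types.
faceDetector = vision.CascadeObjectDetector();             % default model: frontal faces
bodyDetector = vision.CascadeObjectDetector('UpperBody');  % head-and-shoulders model
bodyDetector.MinSize = [60 60];                            % ignore very small detections
img = imread('visionteam.jpg');                            % assumed toolbox demo image
bboxes = step(bodyDetector, img);                          % one row [x y w h] per detection
detectedImg = insertShape(img, 'Rectangle', bboxes);
figure; imshow(detectedImg); title('Upper body detections');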

Identify Facial Features to Track

The KLT algorithm tracks a set of feature points across the video frames. Once the detection locates
the face, the next step is to identify feature points that can be reliably tracked.
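A minimal sketch of this step (assuming videoFrame and bbox are available from the detection step above) is:

% Sketch: find corner-like points inside the detected face region and keep
% only the strongest candidates for tracking.
grayFrame = rgb2gray(videoFrame);
points = detectMinEigenFeatures(grayFrame, 'ROI', bbox(1, :));  % minimum-eigenvalue corners
points = points.selectStrongest(100);                           % keep the 100 best points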

Initialize a Tracker to Track the Points

With the feature points identified, you can now use the vision.PointTracker System object to track
them. For each point in the previous frame, the point tracker attempts to find the corresponding point
in the current frame. Then the estimateGeometricTransform function is used to estimate the
translation, rotation, and scale between the old points and the new points. This transformation is
applied to the bounding box around the face.

Initialize a Video Player to Display the Results

Create a video player object for displaying video frames.

Track the Face

Track the points from frame to frame, and use estimateGeometricTransform function to estimate the
motion of the face.

FACE RECOGNITION

Face recognition involves the following steps:

Gallery/Database

First, we need to create a face gallery that contains a set of already known faces and is stored on a computer.

Feature extraction

After the database is created, features must be extracted from each image in the database so that the image can be recognised; this feature extraction has to be modelled before classification. The input image (query image) must also undergo the same feature extraction.

Classification/Comparison

In this step, the features of the query image are compared with those of the database images, and the database image with the smallest difference from the query is the output of this block, as the sketch below illustrates.
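As a simple illustration of this comparison (a nearest-neighbour sketch, not the multi-class SVM used in the program that follows; galleryFeatures, galleryLabels and queryImage are assumed to have been built from the face database beforehand):

% Sketch: nearest-neighbour comparison of a query face against the gallery.
% 'galleryFeatures' (N-by-D HOG vectors) and 'galleryLabels' (N-by-1) are
% assumed to have been extracted from the database beforehand.
queryFeatures = extractHOGFeatures(queryImage);
d = pdist2(galleryFeatures, queryFeatures);   % distance to every gallery face
[~, bestMatch] = min(d);                      % smallest difference wins
recognisedPerson = galleryLabels(bestMatch);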

PROGRAM

FACE DETECTION AND TRACKING USING THE KLT ALGORITHM

% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();

% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader('tilted_face.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);

% Convert the first box to a polygon. This is needed to be able to
% visualize the rotation of the object.
x = bbox(1, 1); y = bbox(1, 2); w = bbox(1, 3); h = bbox(1, 4);
bboxPolygon = [x, y, x+w, y, x+w, y+h, x, y+h];

% Draw the returned bounding box around the detected face.
videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon);
figure; imshow(videoFrame); title('Detected face');

% Detect feature points in the face region.
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox);

% Display the detected points.
figure, imshow(videoFrame), hold on, title('Detected features');
plot(points);

% Create a point tracker and enable the bidirectional error constraint to
% make it more robust in the presence of noise and clutter.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);

% Initialize the tracker with the initial point locations and the initial
% video frame.
points = points.Location;
initialize(pointTracker, points, rgb2gray(videoFrame));

% Create a video player object for displaying video frames.
videoPlayer = vision.VideoPlayer('Position', ...
    [100 100 size(videoFrame, 2)+30 size(videoFrame, 1)+30]);

% Make a copy of the points to be used for computing the geometric
% transformation between the points in the previous and the current frames.
oldPoints = points;

while ~isDone(videoFileReader)
    % Get the next frame.
    videoFrame = step(videoFileReader);

    % Track the points. Note that some points may be lost.
    [points, isFound] = step(pointTracker, rgb2gray(videoFrame));
    visiblePoints = points(isFound, :);
    oldInliers = oldPoints(isFound, :);

    if size(visiblePoints, 1) >= 2 % need at least 2 points
        % Estimate the geometric transformation between the old points
        % and the new points, and eliminate outliers.
        [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
            oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);

        % Apply the transformation to the bounding box.
        [bboxPolygon(1:2:end), bboxPolygon(2:2:end)] ...
            = transformPointsForward(xform, ...
            bboxPolygon(1:2:end), bboxPolygon(2:2:end));

        % Insert a bounding box around the object being tracked.
        videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon);

        % Display tracked points.
        videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
            'Color', 'white');

        % Reset the points.
        oldPoints = visiblePoints;
        setPoints(pointTracker, oldPoints);
    end

    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);
end

% Clean up.
release(videoFileReader);
release(videoPlayer);
release(pointTracker);

SIMPLE FACE RECOGNITION

%% Simple Face Recognition Example
% Copyright 2014-2015 The MathWorks, Inc.

%% Load Image Information from ATT Face Database Directory
faceDatabase = imageSet('FaceDatabaseATT','recursive');

%% Display Montage of First Face
figure;
montage(faceDatabase(1).ImageLocation);
title('Images of Single Face');

%% Display Query Image and Database Side-by-Side
personToQuery = 1;
galleryImage = read(faceDatabase(personToQuery),1);
figure;
for i = 1:size(faceDatabase,2)
    imageList(i) = faceDatabase(i).ImageLocation(5);
end
subplot(1,2,1); imshow(galleryImage);
subplot(1,2,2); montage(imageList);
diff = zeros(1,9);

%% Split Database into Training & Test Sets
[training,test] = partition(faceDatabase,[0.8 0.2]);

%% Extract and Display Histogram of Oriented Gradient Features for a Single Face
person = 5;
[hogFeature, visualization] = ...
    extractHOGFeatures(read(training(person),1));
figure;
subplot(2,1,1); imshow(read(training(person),1)); title('Input Face');
subplot(2,1,2); plot(visualization); title('HOG Feature');

%% Extract HOG Features for the Training Set
trainingFeatures = zeros(size(training,2)*training(1).Count,4680);
featureCount = 1;
for i = 1:size(training,2)
    for j = 1:training(i).Count
        trainingFeatures(featureCount,:) = extractHOGFeatures(read(training(i),j));
        trainingLabel{featureCount} = training(i).Description;
        featureCount = featureCount + 1;
    end
    personIndex{i} = training(i).Description;
end
%% Create a 40-Class Classifier Using fitcecoc
faceClassifier = fitcecoc(trainingFeatures,trainingLabel);

%% Test Images from Test Set
person = 1;
queryImage = read(test(person),1);
queryFeatures = extractHOGFeatures(queryImage);
personLabel = predict(faceClassifier,queryFeatures);
% Map back to training set to find identity
booleanIndex = strcmp(personLabel, personIndex);
integerIndex = find(booleanIndex);
subplot(1,2,1); imshow(queryImage); title('Query Face');
subplot(1,2,2); imshow(read(training(integerIndex),1)); title('Matched Class');

%% Test First 5 People from Test Set
figure;
figureNum = 1;
for person = 1:5
    for j = 1:test(person).Count
        queryImage = read(test(person),j);
        queryFeatures = extractHOGFeatures(queryImage);
        personLabel = predict(faceClassifier,queryFeatures);
        % Map back to training set to find identity
        booleanIndex = strcmp(personLabel, personIndex);
        integerIndex = find(booleanIndex);
        subplot(2,2,figureNum); imshow(imresize(queryImage,3)); title('Query Face');
        subplot(2,2,figureNum+1); imshow(imresize(read(training(integerIndex),1),3)); title('Matched Class');
        figureNum = figureNum + 2;
    end
    figure;
    figureNum = 1;
end

RESULT
KLT

[Figure: Detection]

[Figure: Tracking]

FACE RECOGNITION

[Figure: Images of a single face — first face and the gallery]

[Figure: Comparison of the first five people]
CONCLUSION

This report gives an overview of face tracking, detection and facial feature localization algorithms. Image-based face recognition is still a very challenging topic after decades of exploration. KLT provides a simple face tracking system that automatically detects and tracks a single face; the only requirement is that the person faces the camera in the initial frame for the detection step. The system has been designed to be generally applicable to a variety of applications and as such accepts colour or black-and-white images, both still and video. Face recognition is a technology just reaching sufficient maturity for it to experience rapid growth in its practical applications. Much research effort around the world is being applied to expanding the accuracy and capabilities of this biometric domain, with a consequent broadening of its applications in the near future. Verification systems for physical and electronic access security are available today, but the future holds the promise, and the threat, of passive customization and automated surveillance systems enabled by face recognition.

The face recognition system achieves 90 percent accuracy, which can be increased further with appropriate sampling.

FUTURE SCOPE

Tracking should be extended to multiple faces and to poorly illuminated scenes.
The simple face recognition system can be extended to complex images by cropping the images to focus only on the face, recognising the person with more than 90 percent accuracy.

REFERENCES

[1] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, 2001, pp. 511-518.

[2] Jawad Nagi, Syed Khaleel Ahmed and Farrukh Nagi, "A MATLAB based Face Recognition System using Image Processing and Neural Networks," 4th International Colloquium on Signal Processing and its Applications, March 7-9, 2008, Kuala Lumpur, Malaysia.

[3] Marian Spilka and Gregor Rozinaj, "Face Recognition Methods by Using Low Resolution Devices," 58th International Symposium ELMAR-2016, 12-14 September 2016, Zadar, Croatia.

[4] S. V. Tathe, A. S. Narote and S. P. Narote, "Human Face Detection and Recognition in Videos," 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Sept. 21-24, 2016, Jaipur, India.

[5] Sunita Roy and Susanta Podder, "Face detection and its applications," IJREAT International Journal of Research in Engineering & Advanced Technology, vol. 1, issue 2, April-May 2013.

[6] E. Hjelmas and B. K. Low, "Face detection: A survey," Computer Vision and Image Understanding, vol. 83, no. 3, Sept. 2001, pp. 236-274.

[7] C. R. del Blanco, F. Jaureguizar and N. Garcia, "An efficient multiple object detection and tracking framework for automatic counting and video surveillance applications," IEEE Transactions on Consumer Electronics, vol. 58, no. 3, pp. 857-862, August 2009.

[8] M. H. Yang, D. J. Kriegman and N. Ahuja, "Detecting faces in images: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 34-58, 2002.

[9] Sunita Roy et al., "A Tutorial Review on Face Detection," International Journal of Engineering Research & Technology (IJERT), vol. 1, issue 8, October 2012, ISSN: 2278-0181.

[10] T. Sasaki, S. Akamatsu and Y. Suenaga, "Face image normalization based on colour information," Tech. Rep. I.E.I.C.E., IE91-2, pp. 9-15, 1991.

[11] Ion Marques, "Face Recognition Algorithms," Proyecto Fin de Carrera, June 16, 2010.

