
AMERICAN UNIVERSITY OF BEIRUT

FACULTY OF ENGINEERING AND ARCHITECTURE


DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

EECE695C – Adaptive Filtering and Neural Networks

Fingerprint Identification – Project 2


I. Introduction

Fingerprints are imprints formed by the friction ridges of the skin on the fingers and thumbs. They
have long been used for identification because of their immutability and individuality.
Immutability refers to the permanent and unchanging character of the pattern on each
finger. Individuality refers to the uniqueness of ridge details across individuals; the
probability that two fingerprints are alike is about 1 in 1.9×10^15.
However, manual fingerprint verification is so tedious, time consuming and expensive that
it is incapable of meeting today's increasing performance requirements. Automatic
fingerprint identification systems are therefore widely adopted in many applications such as
building or area security and ATMs [1-2].

Two approaches will be described in this project for fingerprint recognition:


• Approach 1: Based on minutiae located in a fingerprint
• Approach 2: Based on frequency content and ridge orientation of a fingerprint

II. First Approach

Most automatic systems for fingerprint comparison are based on minutiae matching.
Minutiae are local discontinuities in the fingerprint pattern. A total of 150 different
minutiae types have been identified. In practice, only ridge ending and ridge bifurcation
minutiae types are used in fingerprint recognition. Examples of minutiae are shown in
figure 1.

Figure 1. (a) Different minutiae types, (b) Ridge ending & Bifurcation
Many known algorithms have been developed for minutiae extraction based on orientation
and gradients of the orientation fields of the ridges [3]. In this project we will adopt the
method used by Leung where minutiae are extracted using feedforward artificial neural
networks [1].

The building blocks of a fingerprint recognition system are:

Physical Fingerprint → Image Acquisition → Edge Detection → Thinning → Feature Extractor → Classifier → Classification Decision

Figure 2. Fingerprint recognition system

a) Image Acquisition

A number of methods are used to acquire fingerprints. Among them, the inked
impression method remains the most popular one. Inkless fingerprint scanners are also
available, eliminating the intermediate digitization process.
In our project we will use the database freely available from the University of Bologna
(http://bias.csr.unibo.it/fvc2000/) as well as build an AUB database; each group must
gather 36 inked fingerprint images from 3 persons (12 images per finger).
Fingerprint quality is very important since it directly affects the minutiae extraction
algorithm. Two types of degradation usually affect fingerprint images: 1) the ridge lines are
not strictly continuous, since they sometimes include small breaks (gaps); 2) parallel ridge
lines are not always well separated due to the presence of cluttering noise. The scanned
fingerprints must have a resolution of 500 dpi and a size of 300×300 pixels.

b) Edge Detection

An edge is the boundary between two regions with relatively distinct gray level
properties. The idea underlying most edge-detection techniques is the computation of a
local derivative operator, such as the Roberts, Prewitt or Sobel operators.
In practice, the set of pixels obtained from the edge detection algorithm seldom
characterizes a boundary completely because of noise, breaks in the boundary and other
effects that introduce spurious intensity discontinuities. Thus, edge detection algorithms
typically are followed by linking and other boundary detection procedures designed to
assemble edge pixels into meaningful boundaries.
For a detailed explanation refer to “Digital Image Processing” by Gonzalez, chapters 3 - 4.
It is also useful to check the Image Toolbox Demos available in MATLAB.
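As an illustration of the local-derivative idea, here is a minimal Python/NumPy sketch of the Sobel gradient magnitude (illustrative only; the project itself may rely on MATLAB's Image Processing Toolbox instead):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image using the Sobel operators."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # d/dx
    ky = kx.T                                                         # d/dy
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # correlate the 3x3 kernels with every interior pixel's neighborhood
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A vertical step edge: the response peaks at the columns next to the step.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)
```

Thresholding the returned magnitude yields the edge pixels that the subsequent linking and thinning stages operate on.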

c) Thinning

An important approach to representing the structural shape of a plane region is to
reduce it to a graph. This reduction may be accomplished by obtaining the skeleton of the
region via a thinning (also called skeletonizing) algorithm.
While deleting unwanted edge points, the thinning algorithm should not:
• Remove end points.
• Break connectedness.
• Cause excessive erosion of the region.

For a detailed explanation refer to “Digital Image Processing” by Gonzalez, chapter 9. It is
also useful to check the following link:
http://www.fmrib.ox.ac.uk/~steve/susan/thinning/node2.html
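To make the three requirements above concrete, here is a sketch of one classical thinning scheme (Zhang–Suen); it is offered only as an illustration, not as the algorithm mandated by the project:

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning of a binary {0,1} image (illustrative implementation)."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            p = np.pad(img, 1)
            # 8-neighbours P2..P9, clockwise starting from the pixel above
            P = [p[:-2, 1:-1], p[:-2, 2:], p[1:-1, 2:], p[2:, 2:],
                 p[2:, 1:-1], p[2:, :-2], p[1:-1, :-2], p[:-2, :-2]]
            B = sum(P)                                   # object neighbours
            A = sum((P[k] == 0) & (P[(k + 1) % 8] == 1)  # 0 -> 1 transitions
                    for k in range(8))
            if step == 0:   # remove south/east boundary and north-west corners
                cond = (P[0] * P[2] * P[4] == 0) & (P[2] * P[4] * P[6] == 0)
            else:           # remove north/west boundary and south-east corners
                cond = (P[0] * P[2] * P[6] == 0) & (P[0] * P[4] * P[6] == 0)
            remove = (img == 1) & (B >= 2) & (B <= 6) & (A == 1) & cond
            if remove.any():
                img[remove] = 0
                changed = True
    return img

# A 3-pixel-thick bar thins down to a one-pixel-wide remnant
bar = np.zeros((7, 7), dtype=np.uint8)
bar[2:5, 1:6] = 1
skeleton = zhang_suen_thin(bar)
```

The neighbour-count test (2 ≤ B ≤ 6) and the single-transition test (A = 1) are what keep end points attached and connectedness intact while the boundary is peeled away.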

d) Feature Extraction

Extraction of appropriate features is one of the most important tasks for a
recognition system. The feature extraction method used in [1] will be explained below.
A multilayer perceptron (MLP) of three layers is trained to detect the minutiae in the
thinned fingerprint image of size 300×300. The first layer of the network has nine neurons
associated with the components of the input vector. The hidden layer has five neurons and
the output layer has one neuron. The network is trained to output a “1” when the input
window is centered on a minutia and a “0” when it is not. Figure 3 shows the initial
training patterns, which are composed of 16 samples of bifurcations in eight different
orientations and 36 samples of non-bifurcations. The network will be trained using:
• The backpropagation algorithm with momentum and a learning rate of 0.3.
• The Al-Alaoui backpropagation algorithm.
State the number of epochs needed for convergence as well as the training time for the two
methods. Once the network is trained, the next step is to input the prototype fingerprint
images to extract the minutiae. The fingerprint image is scanned using a 3×3 window that
is given as input to the trained network.

Figure 3. Training set
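The training stage described above can be sketched in NumPy as follows. This is an illustrative 9-5-1 MLP trained by backpropagation with momentum at the stated learning rate of 0.3; the toy 3×3 patterns and the momentum value of 0.5 are assumptions, not the actual figure 3 training set:

```python
import numpy as np

# Toy stand-in for the figure 3 training set: 3x3 windows flattened to 9
# inputs, label 1 for bifurcation-like patterns and 0 otherwise. (The real
# set has 16 bifurcation and 36 non-bifurcation samples.)
X = np.array([
    [0, 1, 0, 1, 1, 1, 0, 0, 0],   # T-shaped bifurcation
    [1, 0, 1, 0, 1, 0, 0, 1, 0],   # Y-shaped bifurcation
    [0, 1, 0, 0, 1, 0, 1, 0, 1],   # inverted Y
    [1, 0, 0, 1, 1, 1, 1, 0, 0],   # sideways T
    [0, 0, 0, 1, 1, 1, 0, 0, 0],   # straight ridge (no minutia)
    [0, 1, 0, 0, 1, 0, 0, 1, 0],   # vertical ridge
    [1, 0, 0, 0, 1, 0, 0, 0, 1],   # diagonal ridge
    [0, 0, 0, 0, 0, 0, 0, 0, 0],   # background
], dtype=float)
y = np.array([[1.0], [1], [1], [1], [0], [0], [0], [0]])

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 9-5-1 architecture from the project statement
W1 = rng.normal(0.0, 0.5, (9, 5)); b1 = np.zeros(5)
W2 = rng.normal(0.0, 0.5, (5, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
lr, mom = 0.3, 0.5   # lr from the handout; the momentum value is an assumption

for epoch in range(10000):
    H = sigmoid(X @ W1 + b1)            # hidden layer, 5 neurons
    O = sigmoid(H @ W2 + b2)            # single output neuron
    dO = (O - y) * O * (1 - O)          # MSE loss, sigmoid derivative
    dH = (dO @ W2.T) * H * (1 - H)
    vW2 = mom * vW2 - lr * (H.T @ dO) / len(X); W2 += vW2
    vb2 = mom * vb2 - lr * dO.mean(axis=0);     b2 += vb2
    vW1 = mom * vW1 - lr * (X.T @ dH) / len(X); W1 += vW1
    vb1 = mom * vb1 - lr * dH.mean(axis=0);     b1 += vb1

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The same forward pass, applied at every position of the scanning window, turns the thinned image into a binary map of candidate minutiae.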


Figure 4. Core points on different fingerprint patterns. (a) tented arch, (b) right loop, (c)
left loop, (d) whorl

e) Classifier

After scanning the entire fingerprint image, the resulting output is a binary image
revealing the locations of minutiae. In order to prevent falsely reported outputs and select
“significant” minutiae, two more rules are added to enhance the robustness of the
algorithm:
1) At each potential minutia detected, we re-examine it by increasing the window size
to 5×5 and scanning the output image.
2) If two or more minutiae are too close together (a few pixels apart), we ignore all of
them.

To ensure translation, rotation and scale invariance, the following operations will be
performed:
• The Euclidean distance d(i) from each detected minutia to the center is
calculated. Referencing the distance data to the center point guarantees the
property of positional invariance.
• The data will be sorted in ascending order from d(0) to d(N), where N is the number
of detected minutiae points, assuring rotational invariance.
• The data is then normalized to unity by the shortest distance d(0), i.e. dnorm(i) =
d(0)/d(i); this assures the scale invariance property.
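These three operations can be sketched as follows (an illustrative snippet, assuming the minutiae coordinates and the core point are already known):

```python
import numpy as np

def invariant_features(minutiae, core):
    """Distances from each minutia to the core, sorted ascending and
    normalized by the shortest: dnorm(i) = d(0)/d(i)."""
    pts = np.asarray(minutiae, dtype=float)
    d = np.sort(np.linalg.norm(pts - np.asarray(core, dtype=float), axis=1))
    return d[0] / d          # dnorm(0) = 1 by construction

# Three hypothetical minutiae at distances 10, 30 and 60 from the core
feats = invariant_features([(150, 160), (180, 150), (150, 90)], (150, 150))
# feats = [1, 1/3, 1/6]; scaling all points about the core leaves it unchanged
```

Because only sorted distance ratios survive, translating, rotating or uniformly scaling the minutiae set produces the same feature vector.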

In the algorithm described above, the center of the fingerprint image was used to calculate
the Euclidean distance between the center and each feature point. Usually, the center or
reference point of the fingerprint image is what is called the “core” point.
A core point, located at the approximate center, is defined as the topmost point on the
innermost upwardly curving ridgeline.
The human fingerprint comprises various types of ridge patterns, traditionally
classified according to the decades-old Henry system: left loop, right loop, arch, whorl, and
tented arch. Loops make up nearly 2/3 of all fingerprints, whorls nearly 1/3, and arches
perhaps 5-10%. Figure 4 shows some fingerprint patterns with the core point marked.
Many singular point detection algorithms have been investigated to locate core
points, among them the well-known “Poincaré” index method [4-5] and the one described in
[6]. For simplicity we will assume that the core point is located at the center of the
fingerprint image.
After extracting the location of the minutiae for the prototype fingerprint images, the
calculated distances will be stored in the database along with the ID or name of the person
to whom each fingerprint belongs.
The last phase is the verification phase, where the test fingerprint image:
1) is input to the system;
2) has its minutiae extracted;
3) undergoes minutiae matching: the distances of the extracted minutiae are compared
to those stored in the database;
4) is used to identify the person.

State the results obtained (i.e. the recognition rate).

III. Second Approach

Most methods for fingerprint identification use minutiae as the fingerprint features.
For a small-scale fingerprint recognition system, it would not be efficient to undergo all the
preprocessing steps (edge detection, smoothing, thinning, etc.); instead, Gabor filters will be
used to extract features directly from the gray-level fingerprint, as shown in figure 5. No
preprocessing stage is needed before extracting the features [7].

Physical Fingerprint → Image Acquisition → Feature Extractor → Classifier → Classification Decision

Figure 5. Building blocks for the 2nd approach

a) Image Acquisition

The procedure is the same as the one explained in the 1st approach.

b) Feature Extractor

Gabor-filter-based features have been successfully and widely applied to face
recognition, pattern recognition and fingerprint enhancement. The family of 2-D Gabor
filters was originally presented by Daugman (1980) as a framework for understanding the
orientation and spatial-frequency selectivity properties of the filter; Daugman further
elaborated this work mathematically in [8].
In a local neighborhood, the gray levels along the parallel ridges and valleys exhibit an
ideal sinusoidal-shaped plane wave associated with some noise, as shown in figure 6 [3].
Figure 6. Sinusoidal plane wave

The general formula of the Gabor filter is defined by:

 1  xθ2 yθ2 
h( x, y ) = exp −  2k + 2k  exp(i 2πfxθ k ) (1)
 
 2  σ x σ y 

Where
• xθk = x cos θk + y sin θk
• yθk = −x sin θk + y cos θk
• f is the frequency of the sinusoidal plane wave
• θk is the orientation of the Gabor filter
• σx and σy are the standard deviations of the Gaussian envelope along the x and y
axes.
The next step is to specify the values of the filter’s parameters. The frequency f is calculated
as the inverse of the distance between two successive ridges. The number of orientations is
specified by m, where θk = π(k − 1)/m, k = 1, 2, …, m. The standard deviations σx and
σy are determined empirically; in [7] σx = σy = 2 was used, but it is advisable to try other
values as well.
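Equation (1) with these parameter choices can be sketched as follows (the 9×9 kernel size and f = 0.1, i.e. a ridge period of 10 pixels, are illustrative assumptions):

```python
import numpy as np

def gabor_kernel(size, f, theta, sigma_x=2.0, sigma_y=2.0):
    """Complex Gabor kernel of equation (1); sigma_x = sigma_y = 2 as in [7]."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xt = x * np.cos(theta) + y * np.sin(theta)     # x_theta_k
    yt = -x * np.sin(theta) + y * np.cos(theta)    # y_theta_k
    envelope = np.exp(-0.5 * (xt**2 / sigma_x**2 + yt**2 / sigma_y**2))
    return envelope * np.exp(1j * 2 * np.pi * f * xt)

# Bank of m = 4 orientations, theta_k = pi(k - 1)/m for k = 1..m
m = 4
bank = [gabor_kernel(9, f=0.1, theta=np.pi * k / m) for k in range(m)]
```

The real part of each kernel is the even (cosine) component and the imaginary part the odd (sine) component, matching the complex form given below.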

Equation (1) can be written in complex form, giving:

h(x, y) = h_{even}(x, y) + i\,h_{odd}(x, y)

h_{even}(x, y) = \exp\!\left[-\frac{1}{2}\left(\frac{x_{\theta_k}^2}{\sigma_x^2} + \frac{y_{\theta_k}^2}{\sigma_y^2}\right)\right]\cos(2\pi f x_{\theta_k})    (2)

h_{odd}(x, y) = \exp\!\left[-\frac{1}{2}\left(\frac{x_{\theta_k}^2}{\sigma_x^2} + \frac{y_{\theta_k}^2}{\sigma_y^2}\right)\right]\sin(2\pi f x_{\theta_k})

Figure 7 shows the filter response in spatial and frequency domain for a zero orientation.

Figure 7. Gabor filter response

Table 1, extracted from [8], describes the filter properties in the space and spectral domains.

2D Space Domain | 2D Frequency Domain

Table 1. Filter properties


The fingerprint image will be scanned by an 8×8 window; for each block, the magnitude
of the Gabor filter response is extracted for different values of m (m = 4 and m = 8). The
extracted features (a new, reduced-size image) will be used as the input to the classifier.
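A sketch of this block-wise feature extraction (illustrative; the FFT-based circular convolution below is a simplification of proper image filtering, and the kernel bank is assumed to be built elsewhere):

```python
import numpy as np

def gabor_block_features(img, kernels, block=8):
    """Mean Gabor magnitude per non-overlapping block, one map per orientation."""
    feats = []
    h, w = img.shape
    for g in kernels:
        # FFT-based (circular) convolution with the complex kernel, then magnitude
        resp = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g, s=img.shape)))
        r = resp[:h - h % block, :w - w % block]
        r = r.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        feats.append(r)
    return np.stack(feats)    # shape: (m, h // block, w // block)
```

With m = 8 orientations and a 300×300 image scanned in 8×8 blocks, this yields an 8×37×37 feature array (300 // 8 = 37), i.e. the reduced-size image fed to the classifier.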

c) Classifier

The classifier is based on the k-nearest neighbors (KNN) algorithm. “Training”
of the KNN consists simply of collecting a number of images per individual as the training
set; the remaining images constitute the testing set.
The classifier finds the k points in the training set that are closest to the test point x (relative
to the Euclidean distance) and assigns x the label shared by the majority of these k nearest
neighbors. Note that k is a parameter of the classifier; it is typically set to an odd value in
order to prevent ties.
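A minimal sketch of the KNN decision rule just described (the toy 2-D points are illustrative, standing in for the Gabor feature vectors):

```python
import numpy as np

def knn_classify(x, train_X, train_y, k=5):
    """Assign x the majority label among its k nearest training points (Euclidean)."""
    d = np.linalg.norm(train_X - x, axis=1)        # distance to every training sample
    nearest = train_y[np.argsort(d)[:k]]           # labels of the k closest points
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Two toy classes in 2-D, in the spirit of the k = 5 example of figure 8
train_X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5], [6, 6]], float)
train_y = np.array([0, 0, 0, 1, 1, 1, 1])
label = knn_classify(np.array([5.5, 5.5]), train_X, train_y, k=5)
```

In the fingerprint system, x is the flattened Gabor feature array of the test image and each training row is the feature array of one enrolled image.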

Figure 8 shows how the KNN algorithm works for a two class problem. The KNN query
starts at the test point x and grows a spherical region until it encloses k training samples,
and it labels the test point by a majority vote of these samples. In this k = 5 case, the test
point x would be labeled in the category of the red points [9].

Figure 8. The KNN algorithm

The last phase is the verification phase, where the test fingerprint image:
1) is input to the system;
2) has its magnitude features extracted;
3) is classified with the KNN algorithm;
4) is used to identify the person.

State the recognition rate obtained.

d) Suggested enhancements

In order to enhance the performance of the 2nd approach, below is a list of proposed ideas:
• Instead of using only the magnitude of the Gabor filter features, try to use also the
phase of the filter [10].
• Try to use the Mahalanobis distance given by D = (x − m)^T C^{−1} (x − m), where m is
the mean and C is the covariance matrix. Appendix A provides an example of the
Mahalanobis distance.
• Try other classifiers such as backpropagation and ALBP; indicate the number of
layers used as well as the number of neurons.
• The Gabor filter assumes a sinusoidal plane wave, which is not always the case, as
depicted in figure 9. Try to use the modified Gabor filter described in [11].
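The Mahalanobis distance in the squared form given above, D = (x − m)^T C^{−1} (x − m), can be sketched as follows (the sample points are illustrative; in practice m and C would be estimated from the training feature vectors):

```python
import numpy as np

def mahalanobis(x, samples):
    """Squared-form Mahalanobis distance D = (x - m)^T C^-1 (x - m), with the
    mean m and covariance C estimated from `samples` (one observation per row)."""
    m = samples.mean(axis=0)
    C = np.cov(samples, rowvar=False)
    diff = x - m
    return float(diff @ np.linalg.inv(C) @ diff)

# Four points centered at the origin; both axes carry the same spread,
# so the two test points below are equally far from the mean.
S = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
d1 = mahalanobis(np.array([1.0, 0.0]), S)
d2 = mahalanobis(np.array([0.0, 1.0]), S)
```

Unlike the plain Euclidean distance, this measure rescales each feature direction by its observed variance, which can help when the Gabor features have very different dynamic ranges.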

Figure 9. A fingerprint with corresponding ridges and valleys.


References
[1] W.F. Leung, S.H. Leung, W.H. Lau and A. Luk, "Fingerprint Recognition Using
Neural Network", proc. of the IEEE workshop Neural Network for Signal Processing, pp.
226-235, 1991.

[2] A. Jain, L. Hong and R. Bolle, “On-line Fingerprint Verification”, IEEE Trans. Pattern
Anal. Machine Intell., 1997, 19 (4), pp. 302-314.

[3] L. Hong, Y. Wan, A.K. Jain, “Fingerprint image enhancement: Algorithm and
performance evaluation”, IEEE Trans. Pattern Anal. Machine Intell, 1998, 20 (8), 777–
789.

[4] Q. Zhang and K. Huang, “Fingerprint classification based on extraction and analysis of
singularities and pseudoridges”, 2002.

[5] http://www.owlnet.rice.edu/~elec301/Projects00/roshankg/elec301.htm

[6] A. Luk and S.H. Leung, “A Two Level Classifier For Fingerprint Recognition”, in Proc.
IEEE 1991 International Symposium on CAS, Singapore, 1991, pp. 2625-2628.

[7] C.J. Lee and S.D. Wang, “Fingerprint feature extraction using Gabor Filters”, IEE
Electronics Letters, vol.35, 1999, pp. 288-290

[8] J.G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and
orientation optimized by two-dimensional visual cortical filters”, J. Optical Soc. Amer. 2
(7), 1985, pp. 1160–1169.

[9] R. Duda and P. Hart, “Pattern Classification”, Wiley publisher, 2nd edition, 2001.

[10] M.T. Leung, W.E. Engeler and P. Frank, “Fingerprint Image Processing Using Neural
Network”, proc. 10th conf. on Computer and Communication Systems, pp.
582-586, Hong Kong 1990.

[11] J. Yang, L. Liu et al., “A Modified Gabor Filter Design Method for Fingerprint
Image Enhancement”, to be published in Pattern Recognition Letters.
Appendix A
