IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, VOL. 40, NO. 4, JULY 2010

A Frequency-based Approach for Features Fusion in Fingerprint and Iris Multimodal Biometric Identification Systems

Vincenzo Conti, Carmelo Militello, Filippo Sorbello, Member, IEEE, and Salvatore Vitabile, Member, IEEE

Abstract—The basic aim of a biometric identification system is to discriminate automatically between subjects in a reliable and dependable way, according to a specific target application. Multimodal biometric identification systems aim to fuse two or more physical or behavioral traits to provide optimal False Acceptance Rate (FAR) and False Rejection Rate (FRR), thus improving system accuracy and dependability. In this paper, an innovative multimodal biometric identification system based on iris and fingerprint traits is proposed. The paper is a state-of-the-art advancement of multibiometrics, offering an innovative perspective on features fusion. In greater detail, a frequency-based approach results in a homogeneous biometric vector, integrating iris and fingerprint data. Successively, a Hamming-distance-based matching algorithm deals with the unified homogeneous biometric vector. The proposed multimodal system achieves interesting results with several commonly used databases. For example, we have obtained an interesting working point with FAR = 0% and FRR = 5.71% using the entire fingerprint verification competition (FVC) 2002 DB2B database and a randomly extracted same-size subset of the BATH database. At the same time, considering the BATH database and the FVC2002 DB2A database, we have obtained a further interesting working point with FAR = 0% and FRR = 7.28% ÷ 9.7%.

Index Terms—Fusion techniques, identification systems, iris and fingerprint biometry, multimodal biometric systems.

Manuscript received May 29, 2009; revised November 20, 2009; accepted February 7, 2010. Date of publication April 22, 2010; date of current version June 16, 2010. This paper was recommended by Associate Editor E. R. Weippl.
V. Conti, C. Militello, and F. Sorbello are with the Department of Computer Engineering, University of Palermo, Palermo 90128, Italy (e-mail: conti@unipa.it; militello@unipa.it; sorbello@unipa.it).
S. Vitabile is with the Department of Biopathology, Medical and Forensic Biotechnologies, University of Palermo, Palermo 90127, Italy (e-mail: vitabile@unipa.it).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSMCC.2010.2045374

I. INTRODUCTION

In the current technological scenario, where Information and Communication Technologies (ICT) provide advanced services, large-scale and heterogeneous computer systems need strong procedures to protect data and resource access from unauthorized users. Authentication procedures based on the simple username–password approach are insufficient to provide a suitable security level for applications requiring a high level of protection for data and services.

Biometric-based authentication systems represent a valid alternative to conventional approaches. Traditionally, biometric systems operating on a single biometric feature have many limitations, which are as follows [1].
1) Trouble with data sensors: Captured sensor data are often affected by noise due to the environmental conditions (insufficient light, powder, etc.) or due to user physiological and physical conditions (cold, cut fingers, etc.).
2) Distinctiveness ability: Not all biometric features have the same distinctiveness degree (for example, hand-geometry-based biometric systems are less selective than fingerprint-based ones).
3) Lack of universality: Although biometric features are in principle universal, due to the wide variety and complexity of the human body not everyone is endowed with the same physical features, and a subject may lack some of the biometric features that a system requires.

Multimodal biometric systems are a recent approach developed to overcome these problems. These systems demonstrate significant improvements over unimodal biometric systems, in terms of higher accuracy and high resistance to spoofing.

There is a sizeable amount of literature that details different approaches for multimodal biometric systems [1]–[4]. Multibiometric data can be combined at different levels: fusion at the data-sensor level, fusion at the feature-extraction level, fusion at the matching level, and fusion at the decision level. As pointed out in [5], feature-level fusion is easier to apply when the original characteristics are homogeneous because, in this way, a single resultant feature vector can be calculated. On the other hand, feature-level fusion is difficult to achieve because: 1) the relationship between the feature spaces may not be known; 2) the feature sets of multiple modalities may be incompatible; and 3) the computational cost to process the resultant vector is too high.

In this paper, a template-level fusion algorithm resulting in a unified biometric descriptor integrating fingerprint and iris features is presented. Considering a limited number of meaningful descriptors for fingerprint and iris images, a frequency-based codifying approach results in a homogeneous vector composed of fingerprint and iris information. Successively, the Hamming Distance (HD) between two vectors is used to obtain their similarity degree. To evaluate and compare the effectiveness of the proposed approach, different tests on the official fingerprint verification competition (FVC) 2002 DB2 fingerprint database [30] and the University of Bath Iris Image Database (BATH) iris database [31] have been performed. In greater detail, the tests conducted on the FVC2002 DB2B database and a subset of the BATH database (ten users) have resulted in a False Acceptance Rate (FAR) = 0% and a False Rejection Rate (FRR) = 5.71%, while the tests conducted on the FVC2002 DB2A database and the BATH database (50 users) have resulted in an FAR = 0% and an FRR = 7.28% ÷ 9.7%.


The paper is organized as follows. Section II presents the main related works. Section III illustrates the main techniques for multimodal biometric authentication systems. Section IV describes the proposed multimodal authentication system. Section V shows the achieved experimental results. Section VI deals with the comparison of the state-of-the-art solutions. Finally, some conclusions and future works are reported in Section VII.

II. RELATED WORKS

A variety of articles can be found, which propose different approaches for unimodal and multimodal biometric systems. Traditionally, unimodal biometric systems have many limitations. Multimodal biometric systems are based on different biometric features and/or introduce different fusion algorithms for these features. Many researchers have demonstrated that the fusion process is effective, because fused scores provide much better discrimination than individual scores. Such results have been achieved using a variety of fusion techniques (see Section III for further details). In what follows, the most meaningful works of the aforementioned fields are described.

A unimodal fingerprint verification and classification system is presented in [7]. The system is based on a feedback path for the feature-extraction stage, followed by a feature-refinement stage to improve the matching performance. This improvement is illustrated in the context of a minutiae-based fingerprint verification system. The Gabor filter is applied to the input image to improve its quality.

Ratha et al. [9] proposed a unimodal distortion-tolerant fingerprint authentication technique based on graph representation. Using the fingerprint minutiae features, a weighted graph of minutiae is constructed for both the query fingerprint and the reference fingerprint. The proposed algorithm has been tested with excellent results on a large private database with the use of an optical biometric sensor.

Concerning iris recognition systems, in [10] the Gabor filter and 2-D wavelet filter are used for feature extraction. This method is invariant to translation and rotation and is tolerant to illumination. The classification rate using the Gabor filter is 98.3%, and the accuracy with wavelets is 82.51%, on the Institute of Automation of the Chinese Academy of Sciences (CASIA) database.

In the approach proposed in [11], multichannel and Gabor filters have been used to capture local texture information of the iris, which is used to construct a fixed-length feature vector. The results obtained were FAR = 0.01% and FRR = 2.17% on the CASIA database.

Generally, unimodal biometric recognition systems present different drawbacks due to their dependency on a single biometric feature. For example, feature distinctiveness, feature acquisition, processing errors, and features that are temporarily unavailable can all affect system accuracy. A multimodal biometric system should overcome the aforementioned limits by integrating two or more biometric features.

Conti et al. [12] proposed a multimodal biometric system using two different fingerprint acquisitions. The matching module integrates fuzzy-logic methods for matching-score fusion. Experimental trials using both decision-level fusion and matching-score-level fusion were performed. Experimental results have shown an improvement of 6.7% using the matching-score-level fusion rather than a monomodal authentication system.

Yang and Ma [2] used fingerprint, palm print, and hand geometry to implement personal identity verification. Unlike other multimodal biometric systems, these three biometric features can be taken from the same image. They implemented matching-score fusion at different levels to establish identity, performing a first fusion of the fingerprint and palm-print features and, successively, a matching-score fusion between the multimodal system and the palm-geometry unimodal system. The system was tested on a self-constructed database containing the features of 98 subjects.

Besbes et al. [13] proposed a multimodal biometric system using fingerprint and iris features. They use a hybrid approach based on: 1) fingerprint minutiae extraction and 2) iris template encoding through a mathematical representation of the extracted iris region. This approach is based on two recognition modalities, and every part provides its own decision. The final decision is taken by combining the unimodal decisions through an "AND" operator. No experimental results have been reported for recognition performance.

Aguilar et al. [14] proposed a multibiometric method using a combination of fast Fourier transform (FFT) and Gabor filters to enhance fingerprint imaging. Successively, a novel stage for recognition using local features and statistical parameters is used. The proposed system uses the fingerprints of both thumbs. Each fingerprint is separately processed; successively, the unimodal results are compared in order to give the final fused result. The tests have been performed on a fingerprint database composed of 50 subjects, obtaining FAR = 0.2% and FRR = 1.4%.

Subbarayudu and Prasad [15] presented experimental results of a unimodal iris system, a unimodal palmprint system, and a multibiometric system (iris and palmprint). The system fusion utilizes matching scores, in which each system provides a matching score indicating the similarity of the feature vector with the template vector. The experiment was conducted on the Hong Kong Polytechnic University Palmprint database. A total of 600 images were collected from 100 different subjects.

In contrast to the approaches found in the literature and detailed earlier, the proposed approach introduces an innovative idea to unify and homogenize the final biometric descriptor using two different strong features—the fingerprint and the iris. In contrast to [2], this paper shows the improvements introduced by adopting the fusion process at the template level, as well as the related comparisons against the unimodal elements and the classical matching-score fusion-based multimodal system. In addition, the system proposed in this paper has been tested on the official fingerprint FVC2002 DB2 and iris BATH databases [30], [31].


III. MULTIMODAL BIOMETRIC AUTHENTICATION SYSTEMS


Fusion strategies can be divided into two main categories: premapping fusion (before the matching phase) and postmapping fusion (after the matching phase). The first strategy deals with the feature-vector fusion level [17]. Usually, these techniques are not used because they result in many implementation problems [1]. The second strategy is realized through fusion at the decision level, based on algorithms that combine the single decisions of each component of the system. Furthermore, the second strategy is also based on the matching-score level, which combines the matching scores of each component system.

The biometric data can be combined at several different levels of the identification process. Input can be fused at the following levels [1], [5].
1) Data-sensor level: Data coming from different sensors can be combined, so that the resulting information is, in some sense, better than would be possible if these sources were used individually. The term better in this case can mean more accurate, more complete, or more dependable.
2) Feature-extraction level: The information extracted from sensors of different modalities is stored in vectors on the basis of their modality. These feature vectors are then combined to create a joint feature vector, which is the basis for the matching and recognition process. One of the potential problems in this strategy is that, in some cases, a very high-dimensional feature vector results from the fusion process. In addition, it is hard to generate homogeneous feature vectors from different biometrics in order to use a unified matching algorithm.
3) Matching-score level: This is based on the combination of matching scores, after separate feature extraction and comparison between stored data and test data have been completed for each subsystem. Starting from the matching scores or distance measures of each subsystem, an overall matching score is generated using linear or nonlinear weighting.
4) Decision level: With this approach, each biometric subsystem completes autonomously the processes of feature extraction, matching, and recognition. Decision strategies are usually Boolean functions, where the recognition yields the majority decision among all present subsystems.

Fusion at the template level is very difficult to obtain, since biometric features have different structures and distinctiveness. In this paper, we introduce a frequency approach based on the Log-Gabor filter [18] to generate a unified homogeneous template for fingerprint and iris features. In greater detail, the proposed approach performs fingerprint matching using the segmented regions (Regions Of Interest, ROIs) surrounding fingerprint singularity points. On the other hand, iris preprocessing aims to detect the circular region surrounding the iris. As a result, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, obtaining a unified template. Successively, the HD on the fused template has been used for the similarity index computation.

Fig. 1. General schema of the proposed multimodal system.

IV. PROPOSED MULTIMODAL BIOMETRIC SYSTEM

In this paper, a multimodal biometric system based on fingerprint and iris characteristics is proposed. As shown in Fig. 1, the proposed multimodal biometric system is composed of two main stages: the preprocessing stage and the matching stage. Iris and fingerprint images are preprocessed to extract the ROIs, based on singularity regions surrounding some meaningful points. In contrast to the classic minutiae-based approach, the fingerprint-singularity-regions-based approach requires a low execution time, since image analysis is based on a few points (core and delta) rather than 30–50 minutiae. Iris image preprocessing is performed by segmenting the iris region from the eye and deleting the eyelids and eyelashes. The extracted ROIs are used as input for the matching stage. They are normalized and then processed through a frequency-based approach, in order to generate a homogeneous template. A matching algorithm based on the HD is used to find the similarity degree. In what follows, each phase is briefly described; the overall flow is also summarized in the sketch below.
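As a reading aid, the two stages of Fig. 1 can be summarized with the following stub-level sketch. It is illustrative only: the function names, signatures, and the acceptance threshold are hypothetical placeholders and do not reproduce the authors' MATLAB implementation.

```python
"""Stub-level sketch of the two-stage system of Fig. 1 (illustrative placeholders)."""
import numpy as np

def preprocess_fingerprint(image: np.ndarray) -> list:
    """Extract ROIs around the (pseudo)singularity points (core/delta)."""
    ...

def preprocess_iris(image: np.ndarray) -> np.ndarray:
    """Segment the iris ring and erase eyelids and eyelashes."""
    ...

def normalize_and_encode(roi: np.ndarray) -> tuple:
    """Polar normalization followed by 1-D Log-Gabor coding; returns (code, noise_mask)."""
    ...

def fuse(fingerprint_codes: list, iris_code: tuple) -> np.ndarray:
    """Concatenate the coded patterns (plus a small header) into one homogeneous template."""
    ...

def match(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Mask-aware Hamming distance between two fused templates."""
    ...

def verify(fp_img: np.ndarray, iris_img: np.ndarray, stored: np.ndarray,
           threshold: float = 0.35) -> bool:
    """Accept the claimed identity if the fused templates are close enough."""
    fp_codes = [normalize_and_encode(r) for r in preprocess_fingerprint(fp_img)]
    iris_code = normalize_and_encode(preprocess_iris(iris_img))
    return match(fuse(fp_codes, iris_code), stored) < threshold
```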


A. Preprocessing Stage

An ROI is a selected part of a sample or an image used to perform particular tasks. In what follows, the fingerprint singularity-region extraction process and the iris region extraction process are described.

1) Fingerprint Singularity Region Extraction: Singularity regions are particular fingerprint zones surrounding singularity points, namely the "core" and the "delta." Several approaches for singularity-point detection have been proposed in the literature. They can be broadly categorized into techniques based on 1) the Poincaré index; 2) heuristics; 3) irregularity or curvature operators; and 4) template matching.

By far, the most popular method has been proposed by Kawagoe and Tojo [19]. The method is based on the Poincaré index, since it assumes that the core, double core, and delta generate a Poincaré index equal to 180°, 360°, and −180°, respectively. Fingerprint singularity-region extraction is composed of three main blocks: directional image extraction, Poincaré index calculation, and singularity-point detection.

Singularity points are not included in fingerprint images when either the acquired image is only a partial image, or it is an arch fingerprint. In these cases, singularity points cannot be detected, so the whole process will fail. For the aforementioned reasons, a new technique, showing good accuracy rates and low computational cost, is introduced to detect pseudosingularity points.

a) Core and delta extraction: Singularity-point detection is performed by checking the Poincaré indexes associated with the fingerprint direction matrix. As pointed out before, the singularity points with a Poincaré index equal to 180°, −180°, and 360° are associated with the core, the delta, and the double core, respectively. In greater detail, the directional image extraction phase is composed of the following four sequential tasks:
1) Gx and Gy gradient calculation using Sobel operators;
2) Dx and Dy derivative calculation;
3) θ(i, j) angle calculation for each (i, j) block;
4) Gaussian smoothing filter application on the angle matrix.
Finally, the singularity points are detected according to the Poincaré indexes.

b) Pseudosingularity-point detection and fingerprint classification: The extraction step described in the previous section may be affected by several drawbacks due to the fingerprint-acquisition process. In addition, arch fingerprints have no core and no delta points, so the previous process will give no points as output. Generally, fingerprint images do not contain the same number of singularity points. A whorl-class fingerprint has two core and two delta points, a left-loop or a right-loop fingerprint has one core and one delta, and a tented-arch fingerprint has only a core point, while an arch fingerprint has no singularity points.

A directional map is used to identify the real class of the processed fingerprint image. Let us call α the angle between a directional segment and the horizontal axis. The fingerprint topological structure shows that the core–delta path follows only high angular-variation points in a vertical direction. For this reason, each α > π/2 is set to α − π, so that the directional map is mapped into the range [−π/2, π/2]. The new mapping makes it possible to highlight the points with an angular variation close to π/2 in the directional map [see the white curve in Fig. 2(b)].

According to (1), for each kernel (i, j), the differences computed between the directional map element (i, j) and its eight neighbors are used to detect the zones with the highest vertical differences. Finally, according to (2), the point having the maximum angular difference is selected:

difference_k(i, j) = angle(i, j) − neighbor_k(i, j),   k = 1, ..., 8        (1)

max_difference_angle(i, j) = max_k(difference_k(i, j)).                     (2)

Fig. 2. Classification of a partially acquired image. (a) Original fingerprint image with the overlapping line between the core and pseudodelta point. (b) Map of highest differences, where the white line direction identifies the pseudodelta point. (c) Midpoint Md of the core and pseudodelta-point segment, with the orthogonal LR segment.

In unbroken and well-acquired images, high-value points identify a path between the core and delta points. In a partially acquired left-loop, right-loop, or tented-arch fingerprint, this value identifies a path between the extracted singularity point and the missing point. Fig. 2(a) shows an example of a partial left-loop fingerprint, in which there is only one core point (highlighted by a green circle). Fig. 2(b) represents the matrix of the highest angle differences. The white line starts from the core point and goes toward the missing delta point. The proposed algorithm follows this line and approximates a "pseudodelta point" (highlighted by a red triangle), which will be used for image classification.

Fingerprint classification is performed considering the mutual position between the core and the pseudosingularity point. As shown in Fig. 2(c), the directional map analysis refers to three points, identified by the midpoint Md of the core and pseudodelta line. The midpoint is used to set the L point and the R point on the orthogonal line, in such a way that Md is the midpoint of the LR segment.

The mutual positions between the core, the pseudodelta point, Md, L, and R identify the fingerprint class, applying the following rules.
1) Left-loop class:
abs(core_delta_angle − R_angle) < abs(core_delta_angle − L_angle)
abs(core_delta_angle − Md_angle) > tolerance_angle.
2) Right-loop class:
abs(core_delta_angle − R_angle) > abs(core_delta_angle − L_angle)
abs(core_delta_angle − Md_angle) > tolerance_angle.
3) Tented-arch class:
abs(core_delta_angle − R_angle) < abs(core_delta_angle − L_angle)
abs(core_delta_angle − Md_angle) < tolerance_angle.
Here, "core_delta_angle" is the angle between the core and pseudodelta-point segment and the horizontal axis, "R_angle" is the angle of the R point in the directional map, "L_angle" is the angle of the L point in the directional map, and "Md_angle" is the angle of the Md point in the directional map.

Whorl fingerprint topology is characterized by two close and centered core points, so the double-core detection selects the fingerprint class.

Arch fingerprints have no singularity points. In this case, the proposed approach detects the maximum curvature point, namely the pseudocore point, and the neighboring pixels, i.e., the needed ROI (see Fig. 3).

Fig. 3. For arch fingerprints, the pseudosingularity point refers to the maximum curvature points. (a) Arch fingerprint. (b) 2-D view of the curvature map.
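The directional-image and Poincaré-index computation used throughout this subsection can be sketched as follows. This is a rough, generic illustration: the block size, the smoothing parameter, and the tolerance on the accumulated angle are assumptions, not the authors' actual settings.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def block_orientation_field(img, block=16):
    """Block-wise ridge orientation theta(i, j), in radians within [-pi/2, pi/2]."""
    img = img.astype(float)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)      # Gx, Gy via Sobel operators
    rows, cols = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            bx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            by = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # double-angle averaging of the block gradients
            theta[i, j] = 0.5 * np.arctan2(2.0 * (bx * by).sum(),
                                           (bx ** 2 - by ** 2).sum())
    return gaussian_filter(theta, sigma=1.0)             # smoothing of the angle matrix

def poincare_index(theta, i, j):
    """Accumulated orientation change (degrees) around the 8-neighborhood of block (i, j).

    Values near +180, -180, and +360 indicate a core, a delta, and a double core."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [theta[i + di, j + dj] for di, dj in ring]
    total = 0.0
    for a, b in zip(angles, angles[1:] + angles[:1]):
        d = b - a
        if d > np.pi / 2:        # wrap orientation differences into (-pi/2, pi/2]
            d -= np.pi
        elif d < -np.pi / 2:
            d += np.pi
        total += d
    return np.degrees(total)
```

A block is then labeled as a singularity when its accumulated index falls within a small tolerance of one of the three reference values.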


Fig. 4. Iris ROI extraction scheme: boundary localization, pupil segmentation, iris segmentation, and eyelid and eyelash erosion.

2) Iris ROI Extraction: As shown in Fig. 4, the iris ROI segmentation process is composed of four tasks: boundary localization, pupil segmentation, iris segmentation, and eyelid and eyelash erosion.

a) Boundary localization: The boundaries in an iris image are extracted by means of edge-detection techniques to compute the parameters of the iris and pupil neighborhoods. The approach aims to detect the circumference center and radius of the iris and pupil regions, even if the circumferences are usually not concentric [20]. Finally, the eyelid and eyelash shapes are located.

b) Pupil segmentation: The pupil-identification phase is composed of two steps. The first step is an adaptive thresholding, and the second step is a morphological opening operation [21].

Fig. 5. Pupil segmentation: thresholding application and the morphological opening operation.

The first step is able to identify the pupil, but it cannot eliminate the presence of noise due to the acquisition phase. The second step is based on a morphological opening operation performed using a structural element of circular shape. As shown in Fig. 5, the step ends when the morphological opening operation reduces the pupil area to approximate the structural element.

Successively, the pupil radius and center are identified. The identification algorithm is executed in two steps: the first step detects connected circular areas and almost-connected circular areas, trying to get the best (radius, center) pair with respect to the previous phase. The second step, starting from a square around the coordinates of the obtained centroid, measures the maximum gradient variations along the circumferences centered in the identified points with different radii.

c) Iris segmentation: The iris boundary is detected in two steps. Image-intensity information is converted into a binary edge map. Successively, the set of edge points is subjected to voting to instantiate the contour parameter values.

The edge map is recovered using the Canny algorithm for edge detection [22]. This operation is based on the thresholding of the magnitude of the image-intensity gradient. In order to incorporate directional tuning, the image-intensity derivatives are weighted to amplify certain ranges of orientation. For example, in order to recognize this boundary contour, the derivatives are weighted to be selective for vertical edges. Then, a voting procedure over the points extracted by the Canny operator is applied to erase the disconnected points along the edge. In greater detail, the Hough transform [23] is defined, as in (3), for the circular boundary and a set of recovered edge points (x_j, y_j), with j = 1, ..., n:

H(x_c, y_c, r) = Σ_{j=1}^{n} h(x_j, y_j, x_c, y_c, r)                           (3)

where

h(x_j, y_j, x_c, y_c, r) = 1 if g(x_j, y_j, x_c, y_c, r) = 0, and 0 otherwise     (4)

with

g(x_j, y_j, x_c, y_c, r) = (x_j − x_c)² + (y_j − y_c)² − r².                     (5)

For each edge point (x_j, y_j), g(x_j, y_j, x_c, y_c, r) = 0 for every parameter triplet (x_c, y_c, r) that represents a circle through that point. The triplet maximizing H corresponds to the largest number of edge points and represents the contour of interest.
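A minimal voting sketch of the circular Hough transform of (3)–(5) is given below. The accumulator discretization and the number of sampled angles are illustrative choices; the paper does not specify them.

```python
import numpy as np

def circular_hough(edge_points, radii, shape, n_angles=180):
    """Vote H(xc, yc, r) over the edge points and return the maximizing triplet.

    edge_points : iterable of (x, y) pixel coordinates from an edge detector
    radii       : candidate radii (pixels) to test
    shape       : (height, width) of the image, bounding the candidate centers
    """
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    phis = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for r_idx, r in enumerate(radii):
        for x, y in edge_points:
            # every center lying on a circle of radius r around (x, y) gets one vote
            xc = np.rint(x - r * np.cos(phis)).astype(int)
            yc = np.rint(y - r * np.sin(phis)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(acc[r_idx], (yc[ok], xc[ok]), 1)
    # the (xc, yc, r) triplet collecting the most votes is the contour of interest
    r_idx, yc, xc = np.unravel_index(np.argmax(acc), acc.shape)
    return int(xc), int(yc), radii[r_idx]
```

The same voting scheme, restricted to a line parameterization, is what the linear Hough transform used below for the eyelids amounts to.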


Fig. 6. Examples of the segmentation process. (a) Original BATH database iris image. (b) Image with pupil and iris circumferences, eyelid, and eyelash points. (c) Extracted iris ROI after the segmentation.

d) Eyelid and eyelash erosion: Eyelids and eyelashes are considered to be "noise," and therefore, are seen to degrade the system performance. Eyelashes are of two types: separable eyelashes and multiple eyelashes. The eyelashes present in our dataset belong to the separable type. Initially, the eyelids are isolated by fitting a line to the upper and lower eyelid using the linear Hough transform. Successively, the Canny algorithm is used to create the edge map, and only the horizontal gradient information is considered.

As shown in Fig. 6, the real part of the Gabor filter (1-D Gabor filter) in the spatial domain has been used [24] for eyelash location. The convolution between a separable eyelash and the real part of the Gabor filter is very small. For this reason, if a resultant point is smaller than an empirically predefined threshold, the point belongs to an eyelash. Each point in an eyelash must be connected to another eyelash point or to an eyelid. If one of these two criteria is fulfilled at a point, its neighboring pixels are checked to establish whether or not they belong to an eyelash or eyelid. If none of the neighboring pixels has been classified as a point in an eyelid or in an eyelash, the point is not considered an eyelash pixel.

B. Matching Algorithm

Fusion is performed by combining the biometric templates extracted from every pair of fingerprint and iris images representing a user. First, the identifiers extracted from the original images are stored in different feature vectors. Successively, each vector is normalized in polar coordinates. Then, they are combined. Finally, the HD is used for matching-score computation. In what follows, the applied techniques for ROI normalization, template generation, and HD computation are described.

1) ROI Normalization: Since the fingerprint and iris images of different people may have different sizes, a normalization operation must be performed after ROI extraction. For a given person, biometric feature size may vary because of illumination changes during the iris-acquisition phase or pressure variation during the fingerprint-acquisition phase. By equalizing the histogram, the ROIs show a uniform level of brightness.

Fig. 7. (a) Polar coordinate system for an iris ROI and the corresponding linearized visualization. (b) Two examples showing the linearized iris and fingerprint ROI images, respectively.

The coordinate transformation process produces a 448 × 96 biometric pattern for each meaningful ROI: 448 is the number of sampled radial directions (to avoid data loss over the round angle), while 96 pixels is the highest difference between the iris and pupil radii in the iris images, or the ROI radius in the fingerprint images. In order to achieve invariance with regard to roto-translation and scaling distortion, the r polar coordinate is normalized in the [0, 1] range. Fig. 7(a) depicts the polar coordinate system for an iris ROI and the corresponding linearized visualization. In Fig. 7(b), two examples of iris and fingerprint ROI images are depicted. Each Cartesian point of the ROI image is assigned a polar coordinate pair (r, θ), with r ∈ [R1, R2] and θ ∈ [0, 2π], where R1 is the pupil radius and R2 is the iris radius. For fingerprint ROIs, R1 = 0.

Since iris eyelashes and eyelids generate some corrupted areas, a noise mask corresponding to the aforementioned corrupted areas is associated with each linearized ROI. In addition, the phase component will be meaningless in the regions where the amplitude is zero, so these regions are also added to the noise mask. Fig. 8 depicts the biometric templates and the relative noise masks. In this example, the noise mask associated with a fingerprint singularity region [see Fig. 8(c)] is completely black because the region is inside the fingerprint image and no noise is considered to be present. On the contrary, if an ROI is only partially covered by the image, the noise mask will contain white zones (no information), as shown in Fig. 8(a).

Fig. 8. (a) and (c) Noise masks used to extract useful information from the iris and fingerprint descriptors. The black area is the useful area used to perform the matching process. The white areas are related to the noisy zones and, consequently, they are discarded in the matching process. The iris and fingerprint descriptors are reported in (b) and (d), respectively.
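A minimal sketch of the polar normalization described above is shown below. The storage layout (96 radial rows by 448 angular columns, while the text reports the pattern as 448 × 96), the nearest-neighbor sampling, and the rank-based histogram equalization are illustrative simplifications, not the authors' exact procedure.

```python
import numpy as np

def unwrap_roi(img, center, r_inner, r_outer, n_theta=448, n_r=96):
    """Unwrap a circular ROI into an (n_r x n_theta) strip with r normalized to [0, 1].

    Rows follow the normalized radial coordinate, columns the 448 angular samples.
    For iris ROIs, r_inner is the pupil radius and r_outer the iris radius;
    for fingerprint ROIs, r_inner = 0 (the full disc around the singularity point)."""
    cx, cy = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)   # columns
    radii = np.linspace(0.0, 1.0, n_r)                                # rows, normalized r
    rr = r_inner + radii[:, None] * (r_outer - r_inner)               # back to pixel units
    xs = np.clip(np.rint(cx + rr * np.cos(thetas)[None, :]).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(cy + rr * np.sin(thetas)[None, :]).astype(int), 0, img.shape[0] - 1)
    strip = img[ys, xs].astype(float)

    # rank-based histogram equalization so the ROI shows a uniform brightness level
    ranks = strip.ravel().argsort().argsort()
    return (ranks / (strip.size - 1)).reshape(strip.shape)
```

A companion boolean mask of the same shape can be built in the same pass, marking the samples that fall on eyelids, eyelashes, or outside the acquired image, and later used as the noise mask during matching.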


2) Homogeneous Template Generation: The homogeneous biometric vector built from fingerprint and iris data is composed of binary sequences representing the unimodal biometric templates. The resulting vector is composed of a header and a biometric pattern. The biometric pattern is composed of two subpatterns as well: the first subpattern is related to the extracted fingerprint singularity points, reporting the codified and normalized ROIs; the second subpattern is related to the extracted iris code, reporting the codified and normalized ROIs. In greater detail, the 1-B header contains the following information.
1) Core number (2 bit): indicates the number of core points in the fingerprint image (0 if no ROI has been extracted around the core).
2) Delta number (2 bit): indicates the number of delta points in the fingerprint image (0 if no ROI has been extracted around the delta).
3) Fingerprint class (3 bit): indicates the membership to one of the five fingerprint classes.
4) Iris ROI extraction (1 bit): 0 if the segmentation step has failed.

Normalized fingerprint and iris ROIs are codified using the Log-Gabor approach [18]. Unlike magnitude-based methods [38], Gabor filters are a tool used to provide local frequency information [25], [37]. However, the standard Gabor filter has two limitations: its bandwidth and the limited information extraction on a broad spectrum. An alternative is the Log-Gabor filter. This filter can be designed with arbitrary bandwidth, and it represents a Gabor filter constructed as a Gaussian on a logarithmic scale. The frequency response of this filter is defined by the following equation:

G(f) = exp( −(log(f/f_0))² / (2 (log(σ/f_0))²) )                               (6)

where f_0 is the central frequency and σ is the filter bandwidth.

In our approach, the implementation of the Log-Gabor filter proposed by Masek [26] has been considered. As depicted in Fig. 9, each row of the normalized pattern is considered as a 1-D signal, processed by a convolution operation using the 1-D Log-Gabor filter.

Fig. 9. Codified fingerprint or iris ROIs obtained by applying the four-level quantization of the 1-D Log-Gabor filter.

The phase component, obtained from the real and imaginary parts of the 1-D Log-Gabor filter response, is then quantized into four levels, using the Daugman method [27], [28]. Therefore, each filter generates a 2-bit coding for each iris/fingerprint ROI pixel. The phase-quantization coding is performed through the Gray code, so that only 1 bit changes when moving from one quadrant to the next one. This minimizes the number of differing bits when two patterns are slightly misaligned [26].

The different coded biometric patterns are then concatenated, thus obtaining a 3-D biometric pattern, where each element is represented by a voxel (see Fig. 10).

Fig. 10. 3-D biometric pattern obtained by iris coding, fingerprint core-region coding, and fingerprint delta-region coding. The associated template header will address the meaningful voxels in the 3-D template.
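The row-wise Log-Gabor coding and the four-level, Gray-coded phase quantization described above can be sketched as follows. The filter wavelength and the bandwidth ratio are illustrative values, not the parameters used in the paper.

```python
import numpy as np

def log_gabor_encode(strip, wavelength=18.0, sigma_over_f0=0.5):
    """Encode each row of a normalized ROI with a 1-D Log-Gabor filter (eq. (6))
    and quantize the response phase into 2 Gray-coded bits per sample."""
    n_rows, n_cols = strip.shape
    f = np.fft.fftfreq(n_cols)                   # signed normalized frequencies
    f0 = 1.0 / wavelength                        # central frequency of eq. (6)
    G = np.zeros(n_cols)
    pos = f > 0                                  # one-sided filter: complex response
    G[pos] = np.exp(-(np.log(f[pos] / f0) ** 2) / (2.0 * np.log(sigma_over_f0) ** 2))

    code = np.zeros((n_rows, 2 * n_cols), dtype=np.uint8)
    for i, row in enumerate(strip):
        response = np.fft.ifft(np.fft.fft(row) * G)   # convolution in the frequency domain
        # quadrant of the phase: adjacent quadrants differ in exactly one bit (Gray code)
        code[i, 0::2] = (response.real >= 0).astype(np.uint8)
        code[i, 1::2] = (response.imag >= 0).astype(np.uint8)
    return code
```

Applied to the normalized strips of the previous step, this yields one binary pattern per ROI; concatenating the iris pattern with the fingerprint core/delta patterns gives the fused template addressed by the 1-byte header.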


3) HD-Based Matching: The matching score is calculated through the HD between two final fused templates. The template obtained in the encoding process needs a corresponding matching metric that provides a measure of the similarity degree between two templates. The result of the measure is then compared with an experimental threshold to decide whether or not the two representations belong to the same user. The metric used in this paper is also used by Daugman [27], [28] in his recognition system.

If two patterns X and Y have to be compared, the HD is defined as the sum of discordant bits in homologous positions (XOR operation between X and Y bits). In other words,

HD = (1/N) Σ_{j=1}^{N} XOR(X_j, Y_j)                                          (7)

where N is the total number of bits.

If two patterns are completely independent, the HD between them should be equal to 0.5, since independence implies that the two bit strings are completely random, so that each bit has a 0.5 probability of disagreeing with its counterpart. If two patterns of the same biometric descriptor are processed, their distance should be zero.

The algorithm used in this paper uses a mask to identify the useful area for the matching process. Therefore, the HD is calculated based only on the significant bits of the two templates. The modified formula used is (8), where X_j and Y_j are the corresponding bits to compare, Xn_j and Yn_j are the corresponding bits in the noise masks, and N is the number of bits representing each template:

HD = [ 1 / (N − Σ_{k=1}^{N} OR(Xn_k, Yn_k)) ] × Σ_{j=1}^{N} AND(XOR(X_j, Y_j), NOT(Xn_j), NOT(Yn_j)).    (8)

As suggested by Daugman [27], [28], to avoid false results caused by the rotation problem, a template is shifted to the right and to the left with respect to the corresponding template, and for each shift operation the HD is recalculated. Each shift corresponds to a rotation of 2°. Among all the obtained values, the minimum distance is considered (corresponding to the best matching between the two templates).

V. EXPERIMENTAL RESULTS

The proposed multimodal biometric authentication system achieves interesting results on standard and commonly used databases. To show the effectiveness of our approach, the FVC2002 DB2 database [30] and the BATH database [31] have been used for fingerprints and irises, respectively. The obtained experimental results, in terms of recognition rates and execution times, are outlined here. The listed FAR and FRR indexes have been calculated following the FVC guidelines [30]. Table I gives a brief description of the features of the used databases.

TABLE I. COMPOSITION AND DETAILS OF THE USED FINGERPRINT AND IRIS DATABASES (IN ITALIC THE REDUCED DATABASES)

The reduced BATH-S1 database has been generated with ten users randomly extracted from the full iris database. For each user, the first eight iris acquisitions have been selected. The BATH-S2 database has been generated considering the 50 database users. For each user, the first eight iris acquisitions have been selected. The BATH-S3 database has been generated considering the 50 database users as well. For each user, a second set of eight iris acquisitions (9–16) has been selected. The FVC2002 DB2A-S1 database has been generated considering the first 50 users, while the FVC2002 DB2A-S2 database has been generated considering the last 50 users.

A. Recognition Analysis of the Multimodal System

The multimodal recognition system performance evaluation has been performed using the well-known FRR and FAR indexes. For an authentication system, the FAR is the number of times that an unauthorized access is incorrectly accepted, while the FRR is the number of times that an authorized access is incorrectly rejected.

To evaluate and compare the performance of the proposed approach, several tests have been conducted. The first test has been conducted on the full FVC2002 DB2A database using a classical unimodal minutiae-based recognition system [7], [8]. This approach has resulted in FAR = 0.38% and FRR = 14.29%. The performance of the fingerprint unimodal recognition system using the previously described frequency-based approach on the same full FVC2002 DB2A database has also been evaluated. This approach has resulted in FAR = 1.37% and FRR = 22.45%. Table II shows the results achieved using the two methods. In Table II, the result achieved by the iris unimodal recognition system using the previously described frequency-based approach on the full BATH database [31] is also reported.

TABLE II. RECOGNITION RATES OF THE UNIMODAL BIOMETRIC SYSTEMS FOR THE ENTIRE DATABASES

Successively, several test sets considering the appropriate number of fingerprint and iris acquisitions have been generated to test the proposed multimodal approach. Table III shows the used test set composition.

TABLE III. TEST SET COMPOSITION AND FEATURES
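Before turning to the recognition results, a minimal sketch of the mask-aware matcher of (8), including the left/right template shifts described in Section IV-B, may help fix ideas. The array layout and the shift range are assumptions for illustration only.

```python
import numpy as np

def masked_hd(code_a, mask_a, code_b, mask_b):
    """Mask-aware Hamming distance of (8); mask bits set to True mark noisy positions."""
    valid = ~(mask_a | mask_b)                 # bits usable in both templates
    n_valid = int(valid.sum())
    if n_valid == 0:
        return 1.0                             # nothing comparable: worst possible score
    return float(np.logical_and(code_a ^ code_b, valid).sum()) / n_valid

def best_hd(code_a, mask_a, code_b, mask_b, max_shift=8):
    """Minimum HD over left/right column shifts of one template.

    Templates and masks are boolean arrays of shape (rows, cols); shifting the
    columns corresponds to rotating the original circular ROI (the text
    associates each shift with a 2-degree rotation)."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        hd = masked_hd(np.roll(code_a, s, axis=1), np.roll(mask_a, s, axis=1),
                       code_b, mask_b)
        best = min(best, hd)
    return best
```

The decision rule then simply compares the minimum distance with the experimental acceptance threshold.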


An initial test has been conducted on the DBtest1 dataset using a classical fusion approach at the matching-score level, utilizing a Euclidean metric applied to the HD of each subsystem. With this approach, the following results have been obtained: FAR = 0.07% and FRR = 11.78%. The score has been obtained by weighting the matching scores (0.65 for iris and 0.35 for fingerprints) of each unimodal biometric system. The aforementioned weight pair is the best tradeoff in order to meet the following constraints: 1) literature approaches show that iris-based systems achieve higher recognition accuracies than fingerprint-based ones [7], [9], [27], [28], and 2) our experimental trials confirm that the aforementioned weights optimize the recognition accuracy performance.

Successively, the proposed fusion strategy at the template level has been applied and tested on the same dataset (DBtest1), obtaining the results listed in Table IV. The corresponding multimodal authentication system uses the previously described homogeneous biometric vector, and it does not require any weights for the iris and fingerprint unimodal systems to evaluate the matching score. With this approach, the following results have been obtained: FAR = 0% and FRR = 5.71%. Fig. 11 shows the receiver-operating characteristic (ROC) curves for the systems reported in Table IV. ROC curves are obtained by plotting the FAR index versus the FRR index for different values of the matching threshold.

TABLE IV. RECOGNITION RATES OF THE PROPOSED TEMPLATE-LEVEL FUSION ALGORITHM COMPARED TO UNIMODAL SYSTEMS FOR REDUCED DATABASES (TEN USERS)

Fig. 11. ROC curves for the unimodal biometric systems and the corresponding multimodal system with the DBtest1 dataset.

Finally, following the items listed in Table III, the remaining four datasets have been considered to further evaluate the proposed template-level fusion strategy. Table V shows the achieved results in terms of FAR and FRR indexes. The results achieved by the two unimodal recognition systems on the same pertinent databases are also reported in Table V.

TABLE V. RECOGNITION RATES OF THE PROPOSED TEMPLATE-LEVEL FUSION ALGORITHM COMPARED TO UNIMODAL SYSTEMS FOR REDUCED DATABASES (50 USERS)

As shown in Table V, the conducted tests produce comparable results on the used datasets, underlining the robustness of the presented approach. Fig. 12 shows the ROC curves for the systems dealing with the DBtest4 dataset. Analogous curves have been obtained with the remaining datasets.

Fig. 12. ROC curves for the unimodal biometric systems and the corresponding multimodal system with the DBtest4 dataset.

B. Execution Time Analysis of the Multimodal Software System

The multimodal systems have been implemented using the MATLAB environment on a general-purpose Intel P4 3.00-GHz processor with 2-GB RAM. Table VI shows the average software execution times for the preprocessing and matching tasks. The fingerprint preprocessing time can change, since it depends either on singularity-point detection, pseudosingularity-point detection, or maximum-curvature-point detection.

TABLE VI. SOFTWARE EXECUTION TIMES FOR THE PREPROCESSING AND MATCHING TASKS
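The evaluation indexes used throughout this section can be reproduced with the short sketch below. The linear threshold sweep and the weighted-Euclidean reading of the 0.65/0.35 matching-score baseline are assumptions; the paper does not spell out the exact combination formula.

```python
import numpy as np

def far_frr(genuine_dists, impostor_dists, threshold):
    """FAR/FRR of a distance-based matcher at a given acceptance threshold."""
    genuine = np.asarray(genuine_dists)
    impostor = np.asarray(impostor_dists)
    far = float(np.mean(impostor < threshold))    # unauthorized accesses incorrectly accepted
    frr = float(np.mean(genuine >= threshold))    # authorized accesses incorrectly rejected
    return far, frr

def roc_points(genuine_dists, impostor_dists, n_points=101):
    """(FAR, FRR) pairs obtained by sweeping the matching threshold, as in Figs. 11-12."""
    return [far_frr(genuine_dists, impostor_dists, t)
            for t in np.linspace(0.0, 1.0, n_points)]

def fused_score(hd_iris, hd_fingerprint, w_iris=0.65, w_fp=0.35):
    """Matching-score-level baseline: weighted Euclidean combination of the unimodal HDs
    (one plausible reading of the metric described in the text)."""
    return float(np.sqrt(w_iris * hd_iris ** 2 + w_fp * hd_fingerprint ** 2))
```

The template-level system, by contrast, needs no per-modality weights: a single HD on the fused template is thresholded directly.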


VI. DISCUSSIONS AND COMPARISONS

Multimodal biometric identification systems aim to fuse two or more physical or behavioral pieces of information to provide optimal FAR and FRR indexes, improving system dependability. In contrast to the majority of work published on this topic, which has been based on matching-score-level fusion or decision-level fusion, this paper presents a template-level fusion method for a multimodal biometric system based on fingerprints and irises. In greater detail, the proposed approach performs fingerprint matching using the segmented regions (ROIs) surrounding fingerprint singularity points. On the other hand, iris preprocessing aims to detect the circular region surrounding the iris. To achieve these results, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, thus obtaining a unified template. Successively, the HD on the fused template was used for the similarity index computation. The improvements introduced by adopting the fusion process at the template level, as well as the related comparison against the unimodal biometric systems and the classical matching-score-fusion-based multimodal systems, are now described and discussed. The proposed approach for fingerprint and iris segmentation, coding, and matching has been tested as unimodal identification systems using the official FVC2002 DB2A fingerprint database and the BATH iris database. Even if the frequency-based approach, using fingerprint (pseudo)singularity-point information, introduces an error in the system recognition accuracy (see Table II), the achieved recognition results have shown an interesting performance when compared with the literature approaches on similar datasets. On the other hand, in the frequency-based approach it is very difficult to use the classical minutiae information, due to its great number. In this case, the frequency-based approach should consider a high number of ROIs, resulting in the coding of the whole fingerprint image and, consequently, in a high-dimensional feature vector.

Shi et al. [32] proposed a novel fingerprint-matching method based on the Hough transform. They tested the method using the FVC2002 DB2A database, depicting two ROC curves with FAR and FRR indexes comparable to our results. Nagar et al. [33] used minutiae descriptors to capture orientation and ridge frequency information in a minutia's neighborhood. They validated their results on the FVC2002 DB2A database, showing a working point with FAR = 0.7% at a genuine accept rate (GAR) of 95%. However, they did not use the complete database, but only two samples for each user; therefore, they considered only 200 images. In [34], Yang et al. proposed novel helper data based on the topo-structure to reduce the alignment calculation amount. They tested their approach with FVC2002 DB2A, obtaining an FAR between 0% and 0.02% with a GAR between 88% and 92%, depending on particular thresholds.

Concerning the iris identification system, the achieved performance can be considered very interesting when compared with the results of different approaches found in the literature on the same or similar datasets. A novel technique for iris recognition using texture and phase features is proposed in [35]. Texture features are extracted on the normalized iris strip using the Haar Wavelet, while phase features are obtained using the Log-Gabor Wavelet. The matching scores generated by the individual modules are combined using the sum-of-scores technique. The system is tested on the BATH database, giving an accuracy of 95.62%. The combined system with matching-score-level fusion increased the system performance to FAR = 0.36% and FRR = 8.38%.

In order to test the effectiveness of the proposed multimodal approach, several datasets have been used. First, two different multimodal systems have been tested and compared on the standard FVC2002 DB2B fingerprint image database and the BATH-S1 iris image database: the first system was based on a matching-score-level fusion technique, while the second was based on the proposed template-level fusion technique. The obtained results show that the proposed template-level fusion technique yields an enhanced system showing interesting results in terms of FAR and FRR (see Tables II and IV for further details). The aforementioned result suggests that template-level fusion gives better performance than matching-score-level fusion. This statement confirms the results presented in [36]. In that paper, Khalifa and Amara presented the results of four different fusions of modalities at different levels for two unimodal biometric verification systems, based on offline signature and handwriting. The best result was obtained using a fusion strategy at the feature-extraction level. In conclusion, we can affirm that when a fusion strategy is performed at the feature-extraction level, a homogeneous template is generated, so that a unified matching algorithm is used, and the corresponding multimodal identification system shows better results when compared to the results achieved using other fusion strategies.

Lastly, several 50-user databases have been generated, combining the available FVC2002 DB2A fingerprint database and the BATH iris database. The achieved results, reported in Table V, show uniform performance on the used datasets.

In the literature, few multimodal biometric systems based on template-level fusion have been published, rendering it very difficult to comment on and analyze the experimental results obtained in this paper. Besbes et al. [13] proposed a multimodal biometric system using fingerprint and iris features. They use a hybrid approach based on: 1) fingerprint minutiae extraction and 2) iris template encoding through a mathematical representation of the extracted iris region. However, no experimental results have been reported in their paper. As pointed out before, a mixed multimodal system based on features fusion and matching-score fusion has been proposed in [2]. That paper presents the overall result of the entire system on self-constructed, proprietary databases. It reports the ROC graph with the unimodal and the multimodal system results. The ROC curves show the improvements introduced by the adopted fusion strategy, but no FAR and FRR values are reported. Table VII summarizes the previous results.

TABLE VII. COMPARISON OF THE RECOGNITION RATES OF OUR APPROACH AND THE OTHER LITERATURE APPROACHES


VII. CONCLUSION AND FUTURE WORKS

For an ideal authentication system, the FAR and FRR indexes are equal to 0. This result may be reached by online biometric authentication systems, because they have the freedom to reject low-quality acquired items. On the contrary, official ready-to-use databases (FVC databases, CASIA, BATH, etc.) contain images of different quality, including low-, medium-, and high-quality biometric acquisitions, as well as partial and corrupted images. For this reason, these biometric authentication systems do not achieve the ideal result. To increase the related security level, system parameters are then fixed in order to achieve the FAR = 0% point and a corresponding FRR point.

In this paper, a template-level fusion algorithm working on a unified biometric descriptor was presented. The aforementioned result leads to a matching algorithm that is able to process fingerprint-codified templates, iris-codified templates, and iris-and-fingerprint-fused templates. In contrast to the classical minutiae-based approaches, the proposed system performs fingerprint matching using the segmented regions (ROIs) surrounding (pseudo)singularity points. This choice overcomes the drawbacks related to the fingerprint minutiae information: with minutiae, the frequency-based approach would have to consider a high number of ROIs, resulting in the coding of the whole fingerprint image and, consequently, in a high-dimensional feature vector.

At the same time, iris preprocessing aims to detect the circular region surrounding the feature, generating an iris ROI as well. For best results, we adopted a Log-Gabor-algorithm-based codifier to encode both fingerprint and iris features, thus obtaining a unified template. Successively, the HD on the fused template has been used for the similarity index computation.

The multimodal biometric system has been tested on different congruent datasets obtained from the official FVC2002 DB2 fingerprint database [30] and the BATH iris database [31]. The first test, conducted on ten users, has resulted in an FAR = 0% and an FRR = 5.71%, while the tests conducted on the FVC2002 DB2A and BATH databases resulted in an FAR = 0% and an FRR = 7.28% ÷ 9.7%.

Future work will aim to design and prototype an embedded recognizer integrating feature acquisition and processing in a smart device, without biometric data transmission between the different components of a biometric authentication system [6], [16].

REFERENCES

[1] A. Ross and A. Jain, "Information fusion in biometrics," Pattern Recogn. Lett., vol. 24, pp. 2115–2125, 2003. DOI: 10.1016/S0167-8655(03)00079-5.
[2] F. Yang and B. Ma, "A new mixed-mode biometrics information fusion based-on fingerprint, hand-geometry and palm-print," in Proc. 4th Int. IEEE Conf. Image Graph., 2007, pp. 689–693. DOI: 10.1109/ICIG.2007.39.
[3] J. Cui, J. P. Li, and X. J. Lu, "Study on multi-biometric feature fusion and recognition model," in Proc. Int. IEEE Conf. Apperceiving Comput. Intell. Anal. (ICACIA), 2008, pp. 66–69. DOI: 10.1109/ICACIA.2008.4769972.
[4] S. K. Dahel and Q. Xiao, "Accuracy performance analysis of multimodal biometrics," in Proc. IEEE Syst., Man Cybern. Soc., Inf. Assur. Workshop, 2003, pp. 170–173. DOI: 10.1109/SMCSIA.2003.1232417.
[5] A. Ross, K. Nandakumar, and A. K. Jain, Handbook of Multibiometrics. Berlin, Germany: Springer-Verlag. ISBN 978-0-387-22296-7.
[6] UK Biometrics Working Group (BWG), Biometrics Security Concerns, 2003. [Online]. Available: http://www.cesg.gov.uk/policy_technologies/biometrics/index.shtml (accessed Nov. 2009).
[7] S. Prabhakar, A. K. Jain, and J. Wang, "Minutiae verification and classification," presented at the Dept. Comput. Eng. Sci., Univ. Michigan State, East Lansing, MI, 1998.
[8] V. Conti, C. Militello, S. Vitabile, and F. Sorbello, "A multimodal technique for an embedded fingerprint recognizer in mobile payment systems," Int. J. Mobile Inf. Syst., vol. 5, no. 2, pp. 105–124, 2009.
[9] N. K. Ratha, R. M. Bolle, V. D. Pandit, and V. Vaish, "Robust fingerprint authentication using local structural similarity," in Proc. 5th IEEE Workshop Appl. Comput. Vis., Dec. 4–6, 2000, pp. 29–34. DOI: 10.1109/WACV.2000.895399.
[10] Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," in Proc. 15th Int. Conf. Pattern Recogn., 2000, vol. 2, pp. 805–808.
[11] L. Ma, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Trans. Image Process., vol. 13, no. 6, pp. 739–750, Jun. 2004.

[12] V. Conti, G. Milici, P. Ribino, S. Vitabile, and F. Sorbello, "Fuzzy fusion in multimodal biometric systems," in Proc. 11th LNAI Int. Conf. Knowl.-Based Intell. Inf. Eng. Syst. (KES 2007/WIRN 2007), Part I, LNAI 4692, B. Apolloni et al., Eds. Berlin, Germany: Springer-Verlag, 2007, pp. 108–115.
[13] F. Besbes, H. Trichili, and B. Solaiman, "Multimodal biometric system based on fingerprint identification and iris recognition," in Proc. 3rd Int. IEEE Conf. Inf. Commun. Technol.: From Theory to Applications (ICTTA 2008), pp. 1–5. DOI: 10.1109/ICTTA.2008.4530129.
[14] G. Aguilar, G. Sanchez, K. Toscano, M. Nakano, and H. Perez, "Multimodal biometric system using fingerprint," in Proc. Int. Conf. Intell. Adv. Syst., 2007, pp. 145–150. DOI: 10.1109/ICIAS.2007.4658364.
[15] V. C. Subbarayudu and M. V. N. K. Prasad, "Multimodal biometric system," in Proc. 1st Int. IEEE Conf. Emerging Trends Eng. Technol., 2008, pp. 635–640. DOI: 10.1109/ICETET.2008.93.
[16] P. Ambalakat. Security of biometric authentication systems. 21st Comput. Sci. Semin. (SA1-T1-1). (2009, Nov.). [Online]. Available: http://www.rh.edu/~rhb/cs_seminar_2005/SessionA1/ambalakat.pdf, 2005.
[17] A. Ross and R. Govindarajan, "Feature level fusion using hand and face biometrics," in Proc. SPIE Conf. Biometric Technol. Human Identification II, Mar. 2005, vol. 5779, pp. 196–204.
[18] D. J. Field, "Relations between the statistics of natural images and the response profiles of cortical cells," J. Opt. Soc. Amer., vol. 4, pp. 2379–2394, 1987.
[19] M. Kawagoe and A. Tojo, "Fingerprint pattern classification," Pattern Recogn., vol. 17, no. 3, pp. 295–303, 1984. DOI: 10.1016/0031-3203(84)90079-7.
[20] M. L. Pospisil, "The human iris structure and its usages," Acta Univ. Palacki Phisica, vol. 39, pp. 87–95, 2000.
[21] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 2008.
[22] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679–698, Nov. 1986.
[23] P. V. C. Hough, "Method and means for recognizing complex patterns," U.S. Patent 3 069 654, Dec. 18, 1962.
[24] A. Kumar and G. K. H. Pang, "Defect detection in textured materials using Gabor filters," IEEE Trans. Ind. Appl., vol. 38, no. 2, pp. 425–440, Mar./Apr. 2002.
[25] L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: Algorithm and performance evaluation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 8, pp. 777–789, Aug. 1998.
[26] L. Masek, "Recognition of human iris patterns for biometric identification," Master's thesis, Univ. Western Australia, Australia, 2003. (2009, Nov.). [Online]. Available: http://www.csse.uwa.edu.au/~pk/studentprojects/libor/
[27] J. Daugman, "The importance of being random: Statistical principles of iris recognition," Pattern Recogn., vol. 36, pp. 279–291, 2003.
[28] J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21–30, Jan. 2004. DOI: 10.1109/TCSVT.2003.818350.
[29] Celoxica Ltd. (2009, Nov.). [Online]. Available: http://agilityds.com/products/
[30] Fingerprint Verification Competition FVC2002. (2009, Nov.). [Online]. Available: http://bias.csr.unibo.it/fvc2002/
[31] BATH Iris Database, University of Bath Iris Image Database. (2009, Nov.). [Online]. Available: http://www.bath.ac.uk/eleceng/research/sipg/irisweb/
[32] J. Q. Z. Shi, X. Zhao, and Y. Wang, "A novel fingerprint matching method based on the Hough transform without quantization of the Hough space," in Proc. 3rd Int. Conf. Image Graph. (ICIG 2004), pp. 262–265. ISBN 0-7695-2244-0.
[33] A. Nagar, K. Nandakumar, and A. K. Jain, "Securing fingerprint template: Fuzzy vault with minutiae descriptors," in Proc. 19th Int. Conf. Pattern Recogn. (ICPR 2008), pp. 1–4. ISBN 978-1-4244-2174-9.
[34] J. Li, X. Yang, J. Tian, P. Shi, and P. Li, "Topological structure-based alignment for fingerprint fuzzy vault," presented at the 19th Int. Conf. Pattern Recogn. (ICPR), Beijing, China. ISBN 978-1-4244-2174-9.
[35] H. Mehrotra, B. Majhi, and P. Gupta, "Multi-algorithmic iris authentication system," presented at the World Acad. Sci., Eng. Technol., Buenos Aires, Argentina, vol. 34, 2008. ISSN 2070-3740.
[36] A. B. Khalifa and N. E. B. Amara, "Bimodal biometric verification with different fusion levels," in Proc. 6th Int. Multi-Conf. Syst., Signals Devices (SSD '09), 2009, pp. 1–6. DOI: 10.1109/SSD.2009.4956731.
[37] Y. Guo, G. Zhao, J. Chen, M. Pietikäinen, and Z. Xu, "A new Gabor phase difference pattern for face and ear recognition," presented at the 13th Int. Conf. Comput. Anal. Images Patterns, Münster, Germany, Sep. 2–4, 2009.
[38] A. Ross, A. K. Jain, and J. Reisman, "A hybrid fingerprint matcher," Pattern Recogn., vol. 36, no. 7, pp. 1661–1673, Jul. 2003.

Vincenzo Conti received the Laurea (summa cum laude) and the Ph.D. degrees in computer engineering from the University of Palermo, Palermo, Italy, in 2000 and 2005, respectively.
He is currently a Postdoc Fellow with the Department of Computer Engineering, University of Palermo. His research interests include biometric recognition systems, programmable architectures, user ownership in multi-agent systems, and bioinformatics. In each of these research fields, he has produced many publications in national and international journals and conferences. He has participated in several research projects funded by industries and research institutes in his research areas.

Carmelo Militello received the Laurea (summa cum laude) degree in computer engineering from the University of Palermo, Palermo, Italy, in 2006, with the thesis "An Embedded Device Based on Fingerprints and SmartCard for Users Authentication. Study and Realization on Programmable Logical Devices." From January 2007 to December 2009, he attended the Ph.D. course at the Department of Computer Engineering (DINFO), University of Palermo.
He is currently a member of the Innovative Computer Architectures (IN.C.A.) Group of the DINFO, coordinated by Prof. Filippo Sorbello. His research interests include embedded biometric systems prototyped on reconfigurable architectures.

Filippo Sorbello (M'91) received the Laurea degree in electronic engineering from the University of Palermo, Palermo, Italy, in 1970.
He is a Professor of computer engineering with the Department of Computer Engineering (DINFO), University of Palermo. He is a founding member of the DINFO and served as its Department Head for the first two terms. From 1995 to 2009, he served as the Director of the Office for Information Technology (CUC) of the University of Palermo. His research interests include neural network applications, real-time image processing, biometric authentication systems, multi-agent system security, and digital computer architectures. He has chaired and participated as a member of the program committee of several national and international conferences. He has coauthored more than 150 scientific publications.
Prof. Sorbello is a member of the IEEE Computer Society, the Association for Computing Machinery (ACM), the Italian Association for Artificial Intelligence (AIIA), the Italian Association for Computing (AICA), and the Italian Association of Electrical, Electronic, and Control and Computer Engineers (AEIT).

Salvatore Vitabile (M'07) received the Laurea degree in electronic engineering and the Dr. degree in computer science from the University of Palermo, Palermo, Italy, in 1994 and 1999, respectively.
He is currently an Assistant Professor with the Department of Biopathology, Medical, and Forensic Biotechnologies (DIBIMEF), University of Palermo. His research interests include neural network applications, biometric authentication systems, exotic architecture design and prototyping, real-time driver assistance systems, multi-agent system security, and medical image processing. He has coauthored more than 100 scientific papers in refereed journals and conferences.
Dr. Vitabile has joined the Editorial Board of the International Journal of Information Technology and Communications and Convergence. He has chaired, organized, and served as a member of the technical program committees of several international conferences, symposia, and workshops. He is currently a member of the Board of Directors of SIREN (Italian Society of Neural Networks) and of the IEEE Engineering in Medicine and Biology Society.