Biometric systems have been researched and tested for a few decades, but they have only recently
entered the public consciousness, owing to high-profile applications, appearances in entertainment
media (though often not realistic ones), and increased day-to-day use by the public.
Introduction
Biometric-based solutions are able to provide for confidential financial transactions and personal
data privacy. The need for biometrics can be found in federal, state and local governments, in the
military, and in commercial applications. Enterprise-wide network security infrastructures,
government IDs, secure electronic banking, investing and other financial transactions, retail
sales, law enforcement, and health and social services are already benefiting from these
technologies.
Physiological biometrics are based on measurements and data
derived from direct measurement of a part of the human body. Fingerprint, iris-scan, retina-scan,
hand geometry, and facial recognition are leading physiological biometrics.
Applications
Biometrics is a rapidly evolving technology which is being widely used in forensics such as
criminal identification and prison security, and has the potential to be used in a large range of
civilian application areas. Biometrics can be used to prevent unauthorized access to ATMs,
cellular phones, smart cards, desktop PCs, workstations, and computer networks. It can be used
during transactions conducted via telephone and internet (electronic commerce and electronic
banking). In automobiles, biometrics can replace keys with key-less entry devices.
Biometrics technologies
Biometric systems convert data derived from behavioral or physiological characteristics into
templates, which are used for subsequent matching. This is a multi-stage process whose stages
are described below.
Enrollment - The process whereby a user’s initial biometric sample or samples are collected,
assessed, processed, and stored for ongoing use in a biometric system. Enrollment takes place in
both 1:1 and 1:N systems. If users are experiencing problems with a biometric system, they may
need to re-enroll to gather higher quality data.
Submission - The process whereby a user provides behavioral or physiological data in the form
of biometric samples to a biometric system. A submission may require looking in the direction of
a camera or placing a finger on a platen. Depending on the biometric system, a user may have to
remove eyeglasses, remain still for a number of seconds, or recite a pass phrase in order to
provide a biometric sample.
Acquisition device – The hardware used to acquire biometric samples. The following acquisition
devices are associated with each biometric technology:
Feature extraction - The automated process of locating and encoding distinctive characteristics
from a biometric sample in order to generate a template. The feature extraction process may
include various degrees of image or sample processing in order to locate a sufficient amount of
accurate data. For example, voice recognition technologies can filter out certain frequencies and
patterns, and fingerprint technologies can thin the ridges present in a fingerprint image to the
width of a single pixel. Furthermore, if the sample provided is inadequate to perform feature
extraction, the biometric system will generally instruct the user to provide another sample, often
with some type of advice or feedback.
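To make the enrollment, submission, and extraction flow concrete, here is a deliberately simplified Python sketch. All names are invented, and a real biometric system matches templates approximately; the hash used here as a stand-in for proprietary feature extraction only allows exact matching, which keeps the control flow visible without claiming to model any vendor's algorithm.

```python
import hashlib

# Hypothetical sketch of a biometric template pipeline.
# A real system extracts distinctive features (minutiae, spectral
# coefficients, etc.) and matches them approximately; here a stable
# hash stands in so the enroll -> submit -> match flow is visible.

def extract_features(sample: bytes) -> bytes:
    """Stand-in for proprietary feature extraction: sample -> template."""
    return hashlib.sha256(sample).digest()

def enroll(db: dict, user_id: str, sample: bytes) -> None:
    """Store the template generated from a user's enrollment sample."""
    db[user_id] = extract_features(sample)

def verify(db: dict, user_id: str, sample: bytes) -> bool:
    """1:1 match: compare a fresh sample against one stored template."""
    return db.get(user_id) == extract_features(sample)

db = {}
enroll(db, "alice", b"alice-fingerprint-sample")
assert verify(db, "alice", b"alice-fingerprint-sample")
assert not verify(db, "alice", b"someone-else")
```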
The manner in which biometric systems extract features is a closely guarded secret, and varies
from vendor to vendor. Common physiological and behavioral characteristics used in feature
extraction include the following:
There is a greater variety of fingerprint devices available than for any other biometric. Fingerprint
recognition is the front-runner for mass-market biometric-ID systems.
Fingerprint scanning has a high accuracy rate when users are sufficiently educated. Fingerprint
authentication is a good choice for in-house systems where enough training can be provided to
users and where the device is operated in a controlled environment. The small size of the
fingerprint scanner, its ease of integration (it can be easily adapted to keyboards), and, most
significantly, its relatively low cost make it an affordable, simple choice for workplace access
security.
Plans to integrate fingerprint scanning technology into laptops include a single chip using more
than 16,000 location elements to map a fingerprint from the living cells that lie below the top
layers of dead skin. The reading therefore remains detectable even when the finger is callused,
damaged, worn, soiled, moist, or dry--otherwise hard-to-read surfaces that are a common
obstacle. This subsurface capability greatly reduces acquisition and detection failures.
Accuracy and Integrity
With any security system, users will wonder: can a fingerprint recognition system be beaten? In
most cases, false negatives (a failure to recognize a legitimate user) are more likely than false
positives. Overcoming a fingerprint system by presenting it with a "false or fake" fingerprint is
likely to be a difficult deed. However, such scenarios will be tried, and the sensors on the market
use a variety of means to circumvent them. For instance, someone may attempt to use latent print
residue on the sensor just after a legitimate user accesses the system. At the other end of the
scale, there is the gruesome possibility of presenting a finger to the system that is no longer
connected to its owner. Therefore, sensors attempt to determine whether a finger is live, and not
made of latex (or worse). Detectors for temperature, blood-oxygen level, pulse, blood flow,
humidity, or skin conductivity can be integrated for this purpose.
Unfortunately, no technology is perfect--false positives and spoiled readings do occur from time
to time. But for those craving to break free from the albatross that the password has become, as
both a security and time-management issue, fingerprint scanners are worth looking into. It is
estimated that 40 percent of helpdesk calls are password related. Whether incorporated into the
keyboard or mouse, or used as a standalone device, scanners are more affordable than ever, allow
encryption of files keyed to a fingerprint, and can, perhaps most importantly, help minimize
stress over that stolen laptop.
Fingerprint Matching:
Among all the biometric techniques, fingerprint-based identification is the oldest method which
has been successfully used in numerous applications. Everyone is known to have unique,
immutable fingerprints. A fingerprint is made of a series of ridges and furrows on the surface of
the finger. The uniqueness of a fingerprint can be determined by the pattern of ridges and
furrows as well as the minutiae points. Minutiae points are local ridge characteristics that occur
at either a ridge bifurcation or a ridge ending.
Fingerprint matching techniques can be placed into two categories: minutiae-based and
correlation-based. Minutiae-based techniques first find minutiae points and then map their
relative placement on the finger. However, there are some difficulties when using this approach.
It is difficult to extract the minutiae points accurately when the fingerprint is of low quality. Also,
this method does not take into account the global pattern of ridges and furrows. The correlation-based
method is able to overcome some of the difficulties of the minutiae-based approach.
However, it has some shortcomings of its own: correlation-based techniques require the precise
location of a registration point and are affected by image translation and rotation.
Fingerprint matching based on minutiae has problems in matching different-sized (unregistered)
minutiae patterns, and local ridge structures cannot be completely characterized by minutiae
alone. We are trying an alternate representation of fingerprints which will capture more local
information and yield a fixed-length code for the fingerprint. Matching should then become a
relatively simple task of calculating the Euclidean distance between the two codes.
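The fixed-length-code idea above can be illustrated in a few lines of Python; the four-element vectors and the threshold of 1.0 are made-up values, not parameters of any actual system:

```python
import math

# Sketch: if each fingerprint is reduced to a fixed-length feature
# vector, matching becomes a Euclidean distance against a threshold.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(code_a, code_b, threshold=1.0):
    """Declare a match when the codes are close enough."""
    return euclidean(code_a, code_b) < threshold

enrolled   = [0.12, 0.80, 0.33, 0.54]
probe_same = [0.15, 0.78, 0.30, 0.55]   # same finger, small sensor noise
probe_diff = [0.90, 0.10, 0.95, 0.05]   # a different finger

assert match(enrolled, probe_same)
assert not match(enrolled, probe_diff)
```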
We are developing algorithms which are more robust to noise in fingerprint images and deliver
increased accuracy in real-time. A commercial fingerprint-based authentication system requires a
very low False Reject Rate (FRR) for a given False Accept Rate (FAR). This is very difficult to
achieve with any one technique. We are investigating methods to pool evidence from various
matching techniques to increase the overall accuracy of the system. In a real application, the
sensor, the acquisition system, and the variation in performance of the system over time are all
critical. We are also field testing our system on a limited number of users to evaluate the system
performance over a period of time.
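The FRR/FAR trade-off described above can be illustrated with a small sketch; the score lists below are invented examples, with higher scores meaning greater similarity:

```python
# Sketch: estimating FAR and FRR from match scores at a threshold.
# Scores are made-up examples; higher score = more similar.

genuine_scores = [0.91, 0.85, 0.88, 0.60, 0.95]   # same-user comparisons
impostor_scores = [0.20, 0.35, 0.75, 0.10, 0.30]  # different-user comparisons

def rates(threshold):
    """FAR: impostors accepted; FRR: genuine users rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Raising the threshold trades false accepts for false rejects.
assert rates(0.8) == (0.0, 0.2)
assert rates(0.5) == (0.2, 0.0)
```

No single threshold removes both error types at once, which is why pooling evidence from several matchers can help.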
Applications
Fingerprint technology is used by hundreds of thousands of people daily to access networks and
PCs, enter restricted areas, and to authorize transactions. The technology is used broadly in a
range of vertical markets and within a range of horizontal applications, primarily PC/Network
Access, Physical Security/Time and Attendance, and Civil ID. Most deployments are 1:1, though
there are a number of "one-to-few" deployments in which individuals are matched against
modest databases, typically of 10-100 users. Large-scale 1:N applications, in which a user is
identified from a large fingerprint database, are classified as AFIS.
The human fingerprint comprises various types of ridge patterns, traditionally classified
according to the decades-old Henry system: left loop, right loop, arch, whorl, and tented arch.
Loops make up nearly 2/3 of all fingerprints, whorls nearly 1/3, and perhaps 5-10% are
arches. These classifications are relevant in many large-scale forensic applications, but are rarely
used in biometric authentication.
Minutiae (Figure 1), the discontinuities that interrupt the otherwise smooth flow of ridges, are the
basis for most fingerprint authentication. Codified in the late 1800s as Galton features, minutiae
are, at their most rudimentary, ridge endings (the points at which a ridge stops) and bifurcations
(the points at which one ridge divides into two). Many types of minutiae exist, including dots (very
small ridges), islands (ridges slightly longer than dots, occupying a middle space between two
temporarily divergent ridges), ponds or lakes (empty spaces between two temporarily divergent
ridges), spurs (a notch protruding from a ridge), bridges (small ridges joining two longer adjacent
ridges), and crossovers (two ridges which cross each other).
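A minimal way to represent minutiae in code is as position, ridge direction, and type; the coordinates and angles below are invented for the sketch:

```python
from collections import namedtuple

# Sketch: a minutia is commonly encoded as position, ridge direction,
# and type (ending or bifurcation); a template is a list of them.
Minutia = namedtuple("Minutia", ["x", "y", "angle_deg", "kind"])

template = [
    Minutia(112, 40, 30.0, "ending"),
    Minutia(87, 95, 145.0, "bifurcation"),
    Minutia(60, 130, 270.0, "ending"),
]

# Templates can then be filtered or compared by minutia type.
endings = [m for m in template if m.kind == "ending"]
assert len(endings) == 2
```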
Other features are essential to fingerprint authentication. The core is the inner point, normally in
the middle of the print, around which swirls, loops, or arches center. It is frequently
characterized by a ridge ending and several acutely curved ridges. Deltas are the points, normally
at the lower left and right hand of the fingerprint, around which a triangular series of ridges
center.
The ridges are also marked by pores, which appear at steady intervals. Some initial attempts have
been made to use the location and distribution of the pores as a means of authentication, but the
resolution required to capture pores consistently is very high.
Once a high-quality image is captured, there are several steps required to convert its distinctive
features into a compact template. This process, known as feature extraction, is at the core of
fingerprint technology. Each of the 50 primary fingerprint vendors has a proprietary feature
extraction mechanism; the vendors guard these unique algorithms very closely. What follows is a
series of steps used, in some fashion, by many vendors - the basic principles apply even to those
vendors who use alternative mechanisms.
The image must then be converted to a usable format. If the image is grayscale, areas lighter than
a particular threshold are discarded, and those darker are made black. The ridges are then thinned
from 5-8 pixels in width down to one pixel, for precise location of endings and bifurcations.
Fingerprint Form Factors
Form factor is a term used to describe the manner in which a biometric sensor is embedded into
an acquisition device. Biometric sensors, in particular fingerprint sensors, can be embedded on
top of a device, on its side, recessed, or protruding. Some biometric devices require users to
sweep their fingers across them, while others require that users place their fingers on the sensors
and hold them still until they are authenticated.
Though the placement of the biometric sensor is important from an ergonomic standpoint,
several other considerations are equally important form factors. One of them is the type of device
that the user interacts with. Several broad categories of device types are listed below.
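The grayscale thresholding (binarization) step described in the feature-extraction discussion above can be sketched as follows; the tiny 4x4 "image" and the threshold of 128 are illustrative stand-ins, and the subsequent thinning to one-pixel ridges (e.g., via a skeletonization algorithm such as Zhang-Suen) is not shown:

```python
# Sketch of the binarization step: grayscale pixels lighter than a
# threshold are discarded (background), darker pixels become ridge.
# The 4x4 "image" and the threshold are illustrative values only.

GRAY = [
    [200, 180,  60,  50],
    [190,  70,  55, 170],
    [ 80,  65, 160, 210],
    [ 60, 150, 220, 230],
]

def binarize(img, threshold=128):
    """1 = ridge (dark pixel), 0 = background (light pixel)."""
    return [[1 if px < threshold else 0 for px in row] for row in img]

binary = binarize(GRAY)
assert binary[0] == [0, 0, 1, 1]
```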
Fingerprint Image Enhancement:
A critical step in automatic fingerprint matching is to automatically and reliably extract minutiae
from the input fingerprint images. However, the performance of a minutiae extraction algorithm
relies heavily on the quality of the input fingerprint images. In order to ensure that the
performance of an automatic fingerprint identification/verification system will be robust with
respect to the quality of the fingerprint images, it is essential to incorporate a fingerprint
enhancement algorithm in the minutiae extraction module. We have developed a fast fingerprint
enhancement algorithm, which can adaptively improve the clarity of ridge and furrow structures
of input fingerprint images based on the estimated local ridge orientation and frequency. We
have evaluated the performance of the image enhancement algorithm using the goodness index
of the extracted minutiae and the accuracy of an online fingerprint verification system.
Experimental results show that incorporating the enhancement algorithms improves both the
goodness index and the verification accuracy.
Facial Recognition:
Introduction
Humans often use faces to recognize individuals, and advancements in computing capability over
the past few decades now enable similar recognition to be performed automatically. Early face recognition
algorithms used simple geometric models, but the recognition process has now matured into a
science of sophisticated mathematical representations and matching processes. Major
advancements and initiatives in the past ten to fifteen years have propelled face recognition
technology into the spotlight. Face recognition can be used for both verification and
identification.
Although the concept of recognizing someone from facial features is intuitive, facial recognition,
as a biometric, makes human recognition a more automated, computerized process. What sets
apart facial recognition from other biometrics is that it can be used for surveillance purposes. For
example, public safety authorities want to locate certain individuals such as wanted criminals,
suspected terrorists, and missing children. Facial recognition may have the potential to help the
authorities with this mission.
1. First, an image of the face is acquired. This acquisition can be accomplished by digitally
scanning an existing photograph or by using an electro-optical camera to acquire a live picture of
a subject. As video is a rapid sequence of individual still images, it can also be used as a source
of facial images.
2. Second, software is employed to detect the location of any faces in the acquired image. This
task is difficult, and often generalized patterns of what a face “looks like” (two eyes and a mouth
set in an oval shape) are employed to pick out the faces.
3. Once the facial detection software has targeted a face, it can be analyzed. As noted in slide
three, facial recognition analyzes the spatial geometry of distinguishing features of the face.
Different vendors use different methods to extract the identifying features of a face. Thus,
specific details on the methods are proprietary. The most popular method is called Principal
Component Analysis (PCA), which is commonly referred to as the eigenface method. PCA has
also been combined with neural networks and local feature analysis in efforts to enhance its
performance. Template generation is the result of the feature extraction process. A template is a
reduced set of data that represents the unique features of an enrollee’s face. It is important to note
that because the systems use spatial geometry of distinguishing facial features, they do not use
hairstyle, facial hair, or other similar factors.
4. The fourth step is to compare the template generated in step three with those in a database of
known faces. In an identification application, this process yields scores that indicate how closely
the generated template matches each of those in the database. In a verification application, the
generated template is only compared with one template in the database – that of the claimed
identity.
5. The final step is determining whether any scores produced in step four are high enough to
declare a match. The rules governing the declaration of a match are often configurable by the end
user, so that he or she can determine how the facial recognition system should behave based on
security and operational considerations.
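Steps four and five above can be sketched as follows; the toy templates, the cosine-similarity scoring, and the 0.95 threshold are all illustrative choices for the sketch, not any vendor's actual method:

```python
# Sketch of identification: compare a probe template against a gallery
# of enrolled templates and declare a match only if the best score
# clears a configurable threshold.  Templates are toy feature vectors.

def similarity(a, b):
    """Cosine similarity, used here as an illustrative scoring function."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def identify(probe, threshold=0.95):
    """Return the best-matching identity, or None if below threshold."""
    name, score = max(((n, similarity(probe, t)) for n, t in gallery.items()),
                      key=lambda pair: pair[1])
    return name if score >= threshold else None

assert identify([0.88, 0.12, 0.31]) == "alice"   # close to alice's template
assert identify([0.0, 0.0, 1.0]) is None         # no template close enough
```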
Predominant Approaches
PCA, commonly referred to as the use of eigenfaces, is the technique pioneered by Kirby and
Sirovich in 1988. With PCA, the probe and gallery images must be the same size and must first
be normalized to line up the eyes and mouth of the subjects within the images. The PCA approach
is then used to reduce the dimension of the data by means of data compression basics and reveals
the most effective low dimensional structure of facial patterns. This reduction in dimensions
removes information that is not useful and precisely decomposes the face structure into
orthogonal (uncorrelated) components known as eigenfaces. Each face image may be represented
as a weighted sum (feature vector) of the eigenfaces, which are stored in a 1D array. A probe
image is compared against a gallery image by measuring the distance between their respective
feature vectors. The PCA approach typically requires the full frontal face to be presented each
time; otherwise, recognition performance degrades. The primary advantage of this technique is
that it can reduce the data needed to identify the individual to 1/1000th of the data presented.
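A minimal eigenface-style computation can be sketched with NumPy; random vectors stand in for normalized face images, and keeping k=4 components is an arbitrary choice for the sketch:

```python
import numpy as np

# Minimal eigenface-style sketch: treat each "face" as a flattened
# vector, subtract the mean face, and use SVD to get the principal
# components (eigenfaces).  Random data stands in for real images.

rng = np.random.default_rng(0)
faces = rng.random((10, 64))          # 10 gallery faces, 64 "pixels" each

mean_face = faces.mean(axis=0)
centered = faces - mean_face
# Rows of Vt are the eigenfaces (orthonormal directions of max variance).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 4
eigenfaces = Vt[:k]

def project(img):
    """Reduce a face to a k-dim feature vector of eigenface weights."""
    return eigenfaces @ (img - mean_face)

gallery = np.array([project(f) for f in faces])
probe = faces[3] + rng.normal(0, 0.01, 64)   # noisy re-capture of face 3
dists = np.linalg.norm(gallery - project(probe), axis=1)
assert dists.argmin() == 3                   # nearest template is face 3
```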
LDA is a statistical approach for classifying samples of unknown classes based on training
samples with known classes. This technique aims to maximize between-class (i.e., across users)
variance and minimize within-class (i.e., within user) variance. In Figure 2 where each block
represents a class, there are large variances between classes, but little variance within classes.
When dealing with high-dimensional face data, this technique faces the small sample size
problem, which arises when the number of available training samples is small compared to the
dimensionality of the sample space.
Figure: Example of Six Classes Using LDA
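The between-class versus within-class idea can be illustrated numerically; the two synthetic one-dimensional classes below are invented for the sketch, and real LDA finds the projection maximizing this ratio in higher dimensions:

```python
import numpy as np

# Sketch of the LDA idea: compare within-class scatter to
# between-class scatter.  Well-separated, compact classes give a
# large Fisher criterion.  The toy 1-D data is synthetic.

rng = np.random.default_rng(1)
class_a = rng.normal(0.0, 0.1, (20, 1))   # tight cluster around 0
class_b = rng.normal(5.0, 0.1, (20, 1))   # tight cluster around 5

within = class_a.var() + class_b.var()
overall_mean = np.concatenate([class_a, class_b]).mean()
between = ((class_a.mean() - overall_mean) ** 2
           + (class_b.mean() - overall_mean) ** 2)

# Fisher criterion: large when classes are far apart and compact.
fisher = between / within
assert fisher > 100
```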
EBGM relies on the concept that real face images have many non-linear characteristics that are
not addressed by the linear analysis methods discussed earlier, such as variations in illumination
(outdoor lighting vs. indoor fluorescents), pose (standing straight vs. leaning over) and
expression (smile vs. frown). A Gabor wavelet transform creates a dynamic link architecture that
projects the face onto an elastic grid. The Gabor jet is a node on the elastic grid, notated by
circles on the image below, which describes the image behavior around a given pixel. It is the
result of a convolution of the image with a Gabor filter, which is used to detect shapes and to
extract features using image processing. [A convolution expresses the amount of overlap from
functions, blending the functions together.] Recognition is based on the similarity of the Gabor
filter response at each Gabor node. This biologically-based method using Gabor filters is a
process executed in the visual cortex of higher mammals. The difficulty with this method is the
requirement of accurate landmark localization, which can sometimes be achieved by combining
PCA and LDA methods.
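A Gabor filter kernel of the kind used to compute these responses can be sketched as follows; the kernel size, wavelength, and sigma values are illustrative choices:

```python
import numpy as np

# Sketch: a 2-D Gabor filter kernel is a Gaussian-windowed sinusoid.
# Convolving an image with a bank of such kernels at several
# orientations and frequencies yields the "Gabor jet" responses
# described above.  Parameter values are illustrative only.

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the filter orientation theta.
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_theta / wavelength)
    return envelope * carrier

k = gabor_kernel()
assert k.shape == (9, 9)
assert abs(k[4, 4] - 1.0) < 1e-9   # center: envelope = 1, cos(0) = 1
```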
Speaker Recognition
Speaker, or voice, recognition is a biometric modality that uses an individual’s voice for
recognition purposes. The speaker recognition process relies on features influenced by both the
physical structure of an individual’s vocal tract and the behavioral characteristics of the
individual. A popular choice for remote authentication due to the availability of devices for
collecting speech samples and its ease of integration, speaker recognition is different from some
other biometric methods in that speech samples are captured dynamically or over a period of
time, such as a few seconds. Analysis occurs on a model in which changes over time are
monitored, which is similar to other behavioral biometrics such as dynamic signature, gait, and
keystroke recognition.
Voice recognition technology utilizes the distinctive aspects of the voice to verify the identity of
individuals. Voice recognition is occasionally confused with speech recognition, a technology
which translates what a user is saying (a process unrelated to authentication). Voice recognition
technology, by contrast, verifies the identity of the individual who is speaking. The two
technologies are often bundled – speech recognition is used to translate the spoken word into an
account number, and voice recognition verifies the vocal characteristics against those associated
with this account.
Voice recognition can utilize any audio capture device, including mobile and land telephones and
PC microphones. The performance of voice recognition systems can vary according to the
quality of the audio signal as well as variation between enrollment and verification devices, so
acquisition normally takes place on a device likely to be used for future verification.
One of the challenges facing large-scale implementations of biometrics is the need to deploy new
hardware to employees, customers and users. One strength of telephony-based voice recognition
implementations is that they are able to circumvent this problem, especially when they are
implemented in call center and account access applications. Without additional hardware at the
user end, voice recognition systems can be installed as a subroutine through which calls are
routed before access to sensitive information is granted. The ability to use existing telephones
means that voice recognition vendors have hundreds of millions of authentication devices
available for transactional usage today.
Similarly, voice recognition is able to leverage existing account access and authentication
processes, eliminating the need to introduce unwieldy or confusing authentication scenarios.
Automated telephone systems utilizing speech recognition are currently ubiquitous due to the
savings possible by reducing the number of employees necessary to operate call centers. Voice
recognition and speech recognition can function simultaneously using the same utterance,
allowing the technologies to blend seamlessly. Voice recognition can function as a reliable
authentication mechanism for automated telephone systems, adding security to automated
telephone-based transactions in areas such as financial services and health care.
Though inconsistent with many users’ perceptions, certain voice recognition technologies are
highly resistant to imposter attacks, even more so than some fingerprint systems. While false
non-matching can be a common problem, this resistance to false matching means that voice
recognition can be used to protect reasonably high-value transactions.
Since the technology has not been traditionally used in law enforcement or tracking applications
where it could be viewed as a Big Brother technology, there is less public fear that voice
recognition data can be tracked across databases or used to monitor individual behavior. Thus,
voice recognition largely avoids one of the largest hurdles facing other biometric technologies,
that of perceived invasiveness.
Approach
Figure: Voice Sample: The voice input signal (top of image) shows the input loudness with
respect to the time domain. The lower image (blue) depicts the spectral information of the voice
signal. This information is plotted as time versus frequency variation.
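The spectral view described in the caption can be reproduced in miniature with a short-time Fourier transform; the synthetic 200 Hz tone and the frame/hop sizes below are illustrative stand-ins for a real voice sample:

```python
import numpy as np

# Sketch of the spectral view: a short-time Fourier transform turns a
# 1-D signal into time-vs-frequency energy.  A synthetic 200 Hz tone
# stands in for voice; frame and hop sizes are illustrative.

fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 200 * t)          # 1 s of a 200 Hz tone

frame, hop = 256, 128
frames = [signal[i:i + frame] * np.hanning(frame)
          for i in range(0, len(signal) - frame, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))     # time x frequency magnitudes

peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * fs / frame
assert abs(peak_hz - 200) < fs / frame         # peak within one bin of 200 Hz
```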
Signature recognition
Signature verification is the process used to recognize an individual’s hand-written signature.
Dynamic signature verification technology uses the behavioral biometrics of a hand written
signature to confirm the identity of a computer user. This is done by analyzing the shape, speed,
stroke, pen pressure and timing information during the act of signing. Natural and intuitive, the
technology is easy to explain and trust.
There is an important distinction between simple signature comparisons and dynamic signature
verification. Both can be computerized, but a simple comparison only takes into account what
the signature looks like. Dynamic signature verification takes into account how the signature was
made. With dynamic signature verification it is not the shape or look of the signature that is
meaningful, it is the changes in speed, pressure and timing that occur during the act of signing.
Only the original signer can recreate the changes in timing and X, Y, and Z (pressure).
A pasted bitmap, a copy machine or an expert forger may be able to duplicate what a signature
looks like, but it is virtually impossible to duplicate the timing changes in X, Y and Z (pressure).
The practiced and natural motion of the original signer would be required to reproduce the
patterns shown.
There will always be slight variations in a person’s handwritten signature, but the consistency
created by natural motion and practice over time creates a recognizable pattern that makes the
handwritten signature a natural for biometric identification.
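One hedged sketch of the dynamic comparison: treat each signing as a time series (here, pen pressure) and score alignment with dynamic time warping (DTW). The series values are invented, and real products use proprietary scoring over speed, pressure, and timing together:

```python
# Sketch: dynamic signature verification compares *how* a signature
# was made.  Here each signing is a time series of pen pressure, and
# dynamic time warping (DTW) scores how well two series align.

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip in a
                                 cost[i][j - 1],      # skip in b
                                 cost[i - 1][j - 1])  # advance both
    return cost[-1][-1]

enrolled = [0.1, 0.5, 0.9, 0.7, 0.2]   # genuine pressure profile
genuine  = [0.1, 0.5, 0.8, 0.7, 0.2]   # same motion, slight noise
forgery  = [0.4, 0.4, 0.4, 0.4, 0.4]   # traced shape, flat dynamics

assert dtw(enrolled, genuine) < dtw(enrolled, forgery)
```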
Signature verification is natural and intuitive. The technology is easy to explain and trust. The
primary advantage that signature verification systems have over other types of biometric
technologies is that signatures are already accepted as the common method of identity
verification. This history of trust means that people are very willing to accept a signature based
verification system.
Dynamic signature verification technology uses the behavioral biometrics of a hand written
signature to confirm the identity of a computer user. Unlike the older technologies of passwords
and keycards - which are often shared or easily forgotten, lost, and stolen - dynamic signature
verification provides a simple and natural method for increased computer security and trusted
document authorization.
Signature-scan technology utilizes the distinctive aspects of the signature to verify the identity of
individuals. The technology examines the behavioral components of the signature, such as stroke
order, speed and pressure, as opposed to comparing visual images of signatures. Unlike
traditional signature comparison technologies, signature-scan measures the physical activity of
signing. While a system may also leverage a comparison of the visual appearance of a signature,
or “static signature,” the primary components of signature-scan are behavioral.
The signature, along with the variables present during the signing process, is transmitted to a
local PC for template generation. Verification can take place against a local PC or a central PC,
depending on the application. In employee-facing signature-scan applications such as purchase
order authentication, local processing may be preferred; there may be just a single PC used for
such authorization. For customer-facing applications, such as retail or banking authentication,
centralized authentication is likely necessary because the user may sign at one of many locations.
The results of signature-scan comparisons must be tied into existing authentication schemes or
used as the basis of new authentication procedures. For example, in a transactional authentication
scenario, the “authorize transaction” message might be sent after a signature is acquired by a
central PC. When signature-scan is integrated into this process, an additional routine requires
that the signature characteristics be successfully matched against those on file in order for the
“authorize transaction” message to go forward. In other applications, the results of a signature-
scan match may simply be noted and appended to a transaction. For example, in document
authentication, an unsuccessful comparison may be flagged for future resolution while not
halting a transaction. The simplest example would be a signature used for handheld device login:
the successful authentication message merely needs to be integrated into the login module,
similarly to a PIN or password.
Signature-Scan has several strengths. Because of the large amount of data present in a signature-
scan template, as well as the difficulty in mimicking the behavior of signing, signature-scan
technology is highly resistant to imposter attempts. As a result of the low False Acceptance Rates
(FAR), a measure of the likelihood that a user claiming a false identity will be accepted,
deployers can have a high confidence level that successfully matched users are who they claim to
be. Signature-scan also benefits from its ability to leverage existing processes and hardware,
such as signature capture tablets and systems based on public key infrastructure (PKI), a popular
method for data encryption. Since most people are accustomed to providing their signatures
during customer interactions, the technology is considered less invasive than some other
biometrics.
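The False Acceptance Rate mentioned above, together with its counterpart the False Rejection Rate, is computed empirically from labeled match scores. A minimal sketch, assuming higher scores mean a closer match and a fixed decision threshold:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor attempts accepted (score >= threshold).
    FRR: fraction of genuine attempts rejected (score < threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Raising the threshold lowers FAR at the cost of a higher FRR, which is the basic tuning trade-off for any biometric deployment.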
Working:
A video image of the iris of the eye is used to produce a digitized 512-byte IrisCode® record.
The image can be taken from 3 to 21 inches away (depending upon the camera used in the
application), so no physical contact is required. Once an individual is enrolled in the
database, recognition is affirmed in just seconds.
Iris recognition technology examines more than 240 degrees of freedom in the human iris to
create the patented IrisCode® record, a 512-byte data template used to identify individuals
and/or authenticate user privileges.
The IrisCode® creation process starts with video-based image acquisition, a purely
passive process achieved using CCD video cameras. The image is then processed and encoded
into an IrisCode® record, which is stored in an IrisCode® database. The stored record is then
used for identification in any live transaction when an iris is presented for comparison.
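The acquire-encode-store-compare flow described above can be sketched as follows. This is an illustrative toy, not Daugman's algorithm: the "encoder" here is a stand-in that merely derives a deterministic 512-byte record from the image bytes, and all class and function names are assumptions.

```python
import hashlib

def encode_iris(image_bytes: bytes) -> bytes:
    """Stand-in encoder: derive a fixed 512-byte record from an image.
    (A real IrisCode is a phase-based template, not a hash.)"""
    code = b""
    counter = 0
    while len(code) < 512:
        code += hashlib.sha256(image_bytes + counter.to_bytes(4, "big")).digest()
        counter += 1
    return code[:512]

class IrisDatabase:
    def __init__(self):
        self._records = {}  # user id -> 512-byte record

    def enroll(self, user_id: str, image_bytes: bytes) -> None:
        """Enrollment: acquire, encode, and store the template."""
        self._records[user_id] = encode_iris(image_bytes)

    def identify(self, image_bytes: bytes):
        """Identification: encode the live sample and search the
        database. Exact match only in this toy version; a real system
        tolerates bit differences between acquisitions."""
        live = encode_iris(image_bytes)
        for user_id, record in self._records.items():
            if record == live:
                return user_id
        return None
```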
Iris recognition
Iris scan biometrics employs the unique characteristics and features of the human iris in order to
verify the identity of an individual. The iris is the area of the eye where the pigmented or colored
circle, usually brown or blue, rings the dark pupil of the eye.
The iris-scan process begins with a photograph. A specialized camera, typically no more than
three feet from the subject, uses an infrared imager to illuminate the eye and capture a very
high-resolution photograph. This process takes only one to two seconds and provides the details
of the iris that are mapped, recorded and stored for future matching and verification.
Eyeglasses and contact lenses present no problems to the quality of the image and the iris-scan
systems test for a live eye by checking for the normal continuous fluctuation in pupil size.
The inner edge of the iris is located by an iris-scan algorithm which maps the iris’ distinct
patterns and characteristics. An algorithm is a series of directives that tells a biometric system
how to interpret a specific problem. Algorithms have a number of steps and are used by the
biometric system to determine whether a biometric sample and a stored record are a match.
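For iris templates, the comparison step in Daugman-style systems is commonly a fractional Hamming distance: the fraction of bits that differ between two codes. A minimal sketch (the ~0.32 decision threshold is the commonly cited value, used here for illustration):

```python
def hamming_distance(code_a: bytes, code_b: bytes) -> float:
    """Fractional Hamming distance between two equal-length templates:
    0.0 means identical, ~0.5 means statistically unrelated."""
    assert len(code_a) == len(code_b)
    differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return differing / (len(code_a) * 8)

def is_match(code_a: bytes, code_b: bytes, threshold: float = 0.32) -> bool:
    # Two codes from the same iris differ in few bits; codes from
    # different irises differ in roughly half their bits.
    return hamming_distance(code_a, code_b) < threshold
```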
Irises are formed before birth and, except in the event of an injury to the eyeball,
remain unchanged throughout an individual’s lifetime. Iris patterns are extremely
complex, carry an astonishing amount of information and have over 200 unique
spots. The fact that an individual’s right and left eyes are different and that patterns
are easy to capture establishes iris-scan technology as one of the biometrics that is
most resistant to false matching and fraud.
The false acceptance rate for iris recognition systems is 1 in 1.2 million, statistically better than
the average fingerprint recognition system. The real benefit is in the false-rejection rate, a
measure of authenticated users who are wrongly rejected. Fingerprint scanners have a 3 percent
false-rejection rate, whereas iris-scanning systems boast rates at the 0 percent level.
Iris-scan technology has been piloted in ATM environments in England, the US, Japan and
Germany since as early as 1997. In these pilots the customer’s iris data became the verification
tool for access to the bank account, thereby eliminating the need for the customer to enter a PIN
or password. When the customer presented their eye to the ATM and the identity verification
was positive, access was allowed to the bank account. These applications were very successful,
eliminated the concern over forgotten or stolen passwords and received tremendously high
customer approval ratings.
Airports have begun to use iris-scanning for such diverse functions as employee
identification/verification for movement through secure areas and allowing registered frequent
airline passengers a system that enables fast and easy identity verification in order to expedite
their path through passport control.
Other applications include monitoring prison transfers and releases, as well as projects designed
to authenticate on-line purchasing, on-line banking, on-line voting and on-line stock trading to
name just a few. Iris-scan offers a high level of user security, privacy and general peace of mind
for the consumer.
A highly accurate technology such as iris-scan has vast appeal because the inherent argument for
any biometric is, of course, increased security
Benefits of Using Iris Technology
• The iris is a thin, protected membrane inside the eye. Iris patterns are extremely
complex.
• Patterns are individual (even in fraternal or identical twins).
• Patterns are formed by six months after birth, stable after a year. They remain the same
for life.
• Imitation is almost impossible.
• Patterns are easy to capture and encode.
Method               Coded Pattern                       Misidentification rate   Security   Applications
Iris Recognition     Iris pattern                        1/1,200,000              High       High-security facilities
Fingerprinting       Fingerprints                        1/1,000                  Medium     Universal
Facial Recognition   Outline, shape and distribution     1/100                    Low        Low-security facilities
                     of eyes and nose
Signature            Shape of letters, writing order,    1/100                    Low        Low-security facilities
                     pen pressure
Voice printing       Voice characteristics               1/30                     Low        Telephone service
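The misidentification rates in the table translate directly into expected false matches at scale. A minimal sketch, assuming independent trials at the quoted per-attempt rate:

```python
def expected_false_matches(misid_rate: float, attempts: int) -> float:
    """Expected number of false identifications over a number of
    attempts, assuming independent trials at the quoted rate."""
    return misid_rate * attempts

# At the rates in the table above, over one million attempts:
# iris (1/1,200,000) yields under one expected false match, while
# fingerprinting (1/1,000) yields about a thousand.
```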
The Iris
Iris recognition is based on visible (via regular and/or infrared light) qualities of the iris. A
primary visible characteristic is the trabecular meshwork (permanently formed by the 8th month
of gestation), a tissue which gives the appearance of dividing the iris in a radial fashion. Other
visible characteristics include rings, furrows, freckles, and the corona, to cite only the more
familiar.
IrisCode™
Expressed simply, iris recognition technology converts these visible characteristics as a phase
sequence into a 512-byte IrisCode™, a template stored for future identification attempts. From
the iris' 11 mm diameter, Dr. Daugman's algorithms provide 3.4 bits of data per square mm. This
density of information is such that each iris can be said to have 266 'degrees of freedom', as
opposed to 13-60 for traditional biometric technologies. This '266' measurement is cited in most
iris recognition literature; after allowing for the algorithm's correlative functions and for
characteristics inherent to most human eyes, Dr. Daugman concludes that 173 "independent
binary degrees-of-freedom" can be extracted from his algorithm - an exceptionally large number
for a biometric. A key differentiator of iris-scan technology is the fact that a 512-byte template is
generated for every iris, which facilitates match speed (over 500,000 templates can be matched
per second).
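The small fixed template size is what makes high-speed 1:N search feasible: comparing two 512-byte codes reduces to one XOR-and-popcount pass. A minimal pure-Python sketch (real systems use vectorized or hardware-assisted popcounts to approach the quoted match rates; the function names are illustrative):

```python
def popcount_xor(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length templates."""
    return bin(int.from_bytes(a, "big") ^ int.from_bytes(b, "big")).count("1")

def best_match(live: bytes, gallery: dict) -> tuple:
    """Scan a {user_id: template} gallery and return (user_id,
    differing_bits) for the closest stored template."""
    return min(((uid, popcount_xor(live, tpl)) for uid, tpl in gallery.items()),
               key=lambda pair: pair[1])
```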
Iris Acquisition
The first step is location of the iris by a dedicated camera no more than 3 feet from the eye. After
the camera situates the eye, the algorithm narrows in from the right and left of the iris to locate
its outer edge. This horizontal approach accounts for obstruction caused by the eyelids. It
simultaneously locates the inner edge of the iris (at the pupil), excluding the lower 90° because
of inherent moisture and lighting issues.
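Excluding the lower arc during localization can be sketched as choosing which angular positions around the pupil to sample. This is purely illustrative: the exact excluded arc (here 225° to 315°, with 270° pointing down) is an assumption, not the published algorithm.

```python
def iris_sample_angles(n: int, excluded_arc=(225, 315)):
    """Angles (degrees) around the pupil at which to sample the iris,
    skipping the lower 90-degree arc where eyelid moisture and lighting
    artifacts are worst. The arc bounds are assumed for illustration."""
    lo, hi = excluded_arc
    step = 360 / n
    return [a for a in (i * step for i in range(n)) if not (lo <= a < hi)]
```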
Iris-Scan Issues
Iris-scan technology requires reasonably controlled and cooperative user interaction - the
enrollee must hold still in a certain spot, even if only momentarily. Many users struggle to
interact with the system until they become accustomed to its operations. In applications whose
user interaction is frequent (e.g. employee physical access), the technology grows easier to use;
however, applications in which user interaction is infrequent (e.g. national ID) may encounter
ease-of-use issues. Over time, with improved acquisition devices, this issue should grow less
problematic.
The accuracy claims associated with iris-scan technology may overstate the real-world efficacy
of the technology. Because the claimed equal error rates are derived from assessment and
matching of ideal iris images (unlike those acquired in the field), actual results may not live up to
the astronomical projections provided by leading suppliers of the technology.
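The equal error rate (EER) mentioned above is the operating point where FAR and FRR coincide. A minimal sketch that approximates it by sweeping thresholds over the observed scores (higher score = closer match is assumed):

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER: try each observed score as a threshold and
    return the point where FAR and FRR are closest to each other."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]
```

The caveat in the text applies directly here: an EER measured on ideal, clean images will understate the error rates seen on field-quality acquisitions.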
Iris-Scan Applications
Iris-scan technology has traditionally been deployed in high-security employee-facing physical
access implementations, although 2002 saw a number of novel, high-profile iris-scan
deployments in new applications. Iridian - the technology’s primary developer - is dedicated to
moving the technology to the desktop, and has had some success in small-scale logical access
deployments. The most prominent recent deployments of iris-scan technology have been
passenger authentication programs at airports in the U.S., U.K., Amsterdam, and Iceland; the
technology is also used in corrections applications in the U.S. to identify inmates. A number of
developing countries are considering iris-scan technology for national ID and other large-scale
1:N applications, although to date it is still believed that the largest deployed Iridian database
spans under 100,000 enrollees.
Iris recognition technology is presented by its vendors as the ideal solution in any environment,
whether there is one door to protect or one hundred. The unique processing capabilities of the
KnoWho server mean that iris recognition can operate efficiently in situations where thousands
or even millions of persons must be enrolled.
Additionally, Common Criteria and ASIO T4 accreditation mean that iris recognition is the only
biometric technology approved for use in government departments.
Benefits:
Drawbacks: