Abstract
In this work, case studies of incidents where individuals were found to have altered
their fingerprints for circumventing AFIS are compiled and the impact of fingerprint
alteration on the accuracy of a commercial fingerprint matcher is investigated. The alterations
are classified into three major categories and possible countermeasures are suggested. A
technique is developed to automatically detect altered fingerprints based on analyzing
orientation field and minutiae distribution. The proposed technique and the NFIQ algorithm
are evaluated on a large database of altered fingerprints provided by a law enforcement
agency. Experimental results show the feasibility of the proposed approach in detecting
altered fingerprints and highlight the need to further pursue this problem.
Eminent Technology Solutions
Who we are
Eminent Technology Solutions meets global challenges with its pool of highly
qualified professionals. The company has competencies in customized software
development, outsourcing of manpower, and consultancy in the areas of information
systems analysis, design, development, and implementation.
Company's Mission Statement
Company's Values
Trust each other with utmost respect. Continuous skill improvement in a professional
work environment.
CHAPTER I
INTRODUCTION
Figure 1: A typical fingerprint with its features. Figure 2: Typical ridge termination and
bifurcation.
The use of altered fingerprints to mask one’s identity constitutes a serious “attack”
against a border control biometric system since it defeats the very purpose for which the
system was deployed in the first place, i.e., to identify individuals in a watch list. It should be
noted that altered fingerprints are different from fake fingerprints. The use of fake fingers
made of glue, latex, or silicone is a well-publicized method to circumvent fingerprint
systems. Altered fingerprints, however, are real fingers that are used to conceal one’s identity
in order to evade identification by a biometric system.
While fake fingers are typically used by individuals to adopt another person’s
identity, altered fingers are used to mask one’s own identity. In order to detect attacks based
on fake fingers, many software and hardware solutions have been proposed. However, the
problem of altered fingerprints has hitherto not been studied in the literature and there are no
reported techniques to identify them. Furthermore, the lack of public databases comprised of
altered fingerprint images has stymied research in this area. One of the goals of this paper is
to highlight the importance of the problem, analyze altered fingerprints, and propose an
automatic detection algorithm for them.
We classify the altered fingerprints into three categories based on the changes in the ridge
pattern due to alteration. This categorization assists us in the following ways: 1) gaining a
better understanding of the nature of alterations that can be encountered, 2) detecting altered
fingerprints by modeling well-defined subcategories, and 3) developing methods for altered
fingerprint restoration.
Obliteration
This may be because obliteration, which completely destroys ridge structures, is much
simpler to perform than distortion/imitation, which requires a surgical procedure.
Furthermore, detecting distorted or imitated fingerprints is much more difficult for human
examiners than obliterated fingerprints. Obliterated fingerprints can evade fingerprint quality
control software, depending on the area of the damage. If the affected finger area is small, the
existing fingerprint quality assessment software may fail to detect it as an altered fingerprint,
but AFIS is likely to successfully match the damaged fingerprint to the original mated
fingerprint. But, if the altered area is sufficiently large, fingerprint quality control software
can easily detect the damage. To identify individuals with severely obliterated fingerprints, it
may be necessary to treat these fingerprints as latent images, perform AFIS search using
manually marked features, and adopt an appropriate fusion scheme for tenprint search. In rare
cases, even if the finger surface is completely damaged, the dermal papillary surface, which
contains the same pattern as the epidermal pattern, may be used for identification.
Distortion
Friction ridge patterns on fingertips can be turned into unnatural ridge patterns by
removing portions of skin from a fingertip and either grafting them back in different positions
or replacing them with friction ridge skin from the palm or sole. Distorted fingerprints have
unusual ridge patterns which are not found in natural fingerprints. These abnormalities
include abnormal spatial distribution of singular points or abrupt changes in orientation field
along the scars. Distorted fingerprints can also successfully pass the fingerprint quality test
since their local ridge structure remains similar to natural fingerprints while their global ridge
pattern is abnormal. Fingerprints altered by “Z” cut are of special interest since they retain
their original ridge structure, enabling reconstruction of the original fingerprint before
alteration. Therefore, it is imperative to upgrade current fingerprint quality control software
to detect the distorted fingerprints. Once detected, the following operations may be performed
to assist AFIS: 1) identify unaltered regions of the fingerprint and manually mark the features
in these regions and 2) reconstruct the original fingerprint as in the “Z” cut case.
Imitation
Friction ridge patterns on fingertips can still preserve a fingerprint-like pattern after an
elaborate procedure of fingerprint alteration: 1) a portion of skin is removed and the
remaining skin is pulled and stitched together, 2) friction ridge skin from other parts of the
body is used to fill the removed part of the fingertip to reconcile with the remaining ridge
structure, or 3) the entire fingertip is transplanted. Imitated fingerprints can not only
successfully pass the fingerprint quality assessment software, they can also confound human
examiners. To match altered fingerprints, matching algorithms that are robust to distortion
and inconsistency need to be developed. In the case where fingerprints from different fingers
are swapped, matching must be performed without relying on finger position information.
CHAPTER II
Existing System
Fingerprint alteration has even been performed at a much larger scale involving a
group of individuals. It has been reported that hundreds of asylum seekers had cut, abraded,
and burned their fingertips to prevent identification by EURODAC, a European Union-wide
fingerprint system for identifying asylum seekers. Although the number of publicly disclosed
cases of altered fingerprints is not very large, it is extremely difficult to estimate the actual
number of individuals who have successfully evaded identification by fingerprint systems as
a result of fingerprint alteration. Almost all the people identified as having altered their
fingerprints were not detected by AFIS, but by some other means.
Proposed System
1) Compiling case studies of incidents where individuals were found to have altered
their fingerprints to circumvent AFIS.
2) Investigating the impact of fingerprint alteration on the accuracy of a commercial
fingerprint matcher.
3) Classifying the alterations into three major categories and suggesting possible
countermeasures.
4) Developing a technique to automatically detect altered fingerprints based on
analyzing the orientation field and minutiae distribution.
5) Evaluating the proposed technique and the NFIQ algorithm on a large database of
altered fingerprints provided by a law enforcement agency.
The proposed algorithm, based on features extracted from the orientation field and
minutiae, satisfies the three essential requirements for an alteration detection algorithm.
Modules
a. Normalization
d. Feature extraction
A. NORMALIZATION
The orientation field of the fingerprint is computed using the gradient-based method.
The initial orientation field is smoothed with an averaging filter, followed by averaging the
orientations in pixel blocks. A foreground mask is obtained by measuring the dynamic range
of gray values of the fingerprint image in local blocks; a morphological process for filling
holes and removing isolated blocks is then performed.
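The gradient-based computation described above can be sketched as follows. This is a minimal illustration assuming an 8-bit grayscale fingerprint image held as a NumPy array; the block size (16) and the foreground dynamic-range threshold (20) are illustrative assumptions, and the pre-smoothing averaging filter is omitted for brevity:

```python
import numpy as np

def orientation_field(img, block=16):
    """Estimate block-wise ridge orientation of a fingerprint image
    using the classical gradient-based (least-squares) method, and
    build a foreground mask from the local gray-level dynamic range."""
    gy, gx = np.gradient(img.astype(float))
    # Per-pixel gradient products used by the least-squares estimate.
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    h, w = img.shape
    bh, bw = h // block, w // block
    theta = np.zeros((bh, bw))
    mask = np.zeros((bh, bw), dtype=bool)
    for i in range(bh):
        for j in range(bw):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # Dominant orientation of the block (doubled-angle average).
            theta[i, j] = 0.5 * np.arctan2(2 * gxy[sl].sum(),
                                           (gxx[sl] - gyy[sl]).sum())
            # Foreground if the block has enough gray-level dynamic range.
            patch = img[sl]
            mask[i, j] = (patch.max() - patch.min()) > 20
    return theta, mask
```

The hole-filling and isolated-block removal mentioned in the text would be applied to `mask` afterwards with standard morphological operations.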
D. FEATURE EXTRACTION
The error map is computed as the absolute difference between the measured orientation field
and its smooth model approximation, and is used to construct the feature vector.
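One plausible reading of the error map (the report elides its exact operands) is the absolute angular difference between the measured orientation field and a smooth polynomial approximation of it. The sketch below assumes that interpretation; the polynomial degree is illustrative, and fitting the cosine and sine of the doubled angle sidesteps the pi-periodicity of orientations:

```python
import numpy as np

def orientation_error_map(theta, degree=4):
    """Fit a low-order polynomial model to an orientation field and
    return the absolute angular difference (the error map)."""
    h, w = theta.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix of monomials x^a * y^b with a + b <= degree.
    terms = [(xs ** a * ys ** b).ravel()
             for a in range(degree + 1) for b in range(degree + 1 - a)]
    A = np.stack(terms, axis=1).astype(float)
    cos2 = np.cos(2 * theta).ravel()
    sin2 = np.sin(2 * theta).ravel()
    c_coef, *_ = np.linalg.lstsq(A, cos2, rcond=None)
    s_coef, *_ = np.linalg.lstsq(A, sin2, rcond=None)
    model = 0.5 * np.arctan2(A @ s_coef, A @ c_coef).reshape(h, w)
    # Angular difference folded into [0, pi/2].
    diff = np.abs(theta - model) % np.pi
    return np.minimum(diff, np.pi - diff)
```

A natural fingerprint yields a small error almost everywhere, while scars and grafts in altered fingerprints produce localized high-error regions.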
In this module, a minutia in the fingerprint indicates ridge characteristics such as ridge
ending or ridge bifurcation. Almost all fingerprint recognition systems use minutiae for
matching.
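The report does not specify how minutiae are extracted; a common approach is the crossing-number method applied to a binarized, one-pixel-wide ridge skeleton, sketched here as an assumption rather than the system's actual implementation:

```python
def crossing_number_minutiae(skel):
    """Classify skeleton pixels with the crossing-number (CN) method:
    CN == 1 marks a ridge ending, CN == 3 marks a bifurcation.
    `skel` is a 2D list/array of 0s and 1s (a thinned ridge map)."""
    endings, bifurcations = [], []
    h, w = len(skel), len(skel[0])
    # The 8 neighbours in circular order, so consecutive pairs are adjacent.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not skel[y][x]:
                continue
            vals = [skel[y + dy][x + dx] for dy, dx in ring]
            # Crossing number: half the 0/1 transitions around the ring.
            cn = sum(abs(vals[i] - vals[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((x, y))
            elif cn == 3:
                bifurcations.append((x, y))
    return endings, bifurcations
```

On a small T-shaped skeleton this reports the two line ends as terminations and the junction as a bifurcation, matching the feature types shown in Figure 2.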
ER Diagram
SYSTEM REQUIREMENT
HARDWARE REQUIREMENTS
Mouse : Logitech.
SOFTWARE REQUIREMENTS
Feasibility Study
Technical Feasibility
The considerations normally associated with technical feasibility include where the
project is to be developed and implemented. The proposed software should secure its
data and should process data quickly and efficiently. A basic knowledge of how to
operate a computer is sufficient to handle the system, since the system is designed to provide
user-friendly access.
Economic Feasibility
Economic justification is generally the “bottom line” consideration for most systems. It
covers a broad range of concerns, including cost-benefit analysis. The cost-benefit
analysis delineates the costs of project development and weighs them against the tangible and
intangible benefits of the system. Hence, there are both tangible and intangible benefits from
the project's development.
Operational Feasibility
The new system must be accepted by the user. In this system, the administrator is one of the
users. Since users themselves are responsible for initiating the development of the new
system, concerns about acceptance are ruled out.
Cost-benefit analysis (CBA) is an analytical tool for assessing the pros and cons of
moving forward with a business proposal.
A formal CBA tallies all of the planned project costs, quantifies each of the tangible benefits
and calculates key financial performance metrics such as return on investment (ROI), net
present value (NPV), internal rate of return (IRR) and payback period. The costs associated
with taking action are then subtracted from the benefits that would be gained. As a general
rule, the costs should be less than 50 percent of the benefits and the payback period shouldn't
exceed 12 months.
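The metrics named above reduce to a few lines of arithmetic. The sketch below computes ROI, NPV, and the payback period; the cost and benefit figures and the 10% discount rate are illustrative, not values from this project:

```python
def cba_metrics(cost, yearly_benefits, discount_rate=0.10):
    """Simple cost-benefit figures: ROI over the horizon, NPV at the
    given discount rate, and payback period in whole years
    (None if the cost is never recovered)."""
    total_benefit = sum(yearly_benefits)
    roi = (total_benefit - cost) / cost
    # NPV: discount each year's benefit back to the present.
    npv = -cost + sum(b / (1 + discount_rate) ** (t + 1)
                      for t, b in enumerate(yearly_benefits))
    cumulative, payback = 0.0, None
    for t, b in enumerate(yearly_benefits, start=1):
        cumulative += b
        if cumulative >= cost:
            payback = t
            break
    return roi, npv, payback
```

For example, a project costing 1000 with benefits of 600 in each of two years has an ROI of 20% and pays back in the second year; the rule of thumb above would flag its payback period as too long.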
System design concentrates on moving from problem domain to solution domain. This
important phase is composed of several steps. It provides the understanding and procedural
details necessary for implementing the system recommended in the feasibility study.
Emphasis is on translating the performance requirements into design specification.
The design of any software involves mapping of the software requirements into Functional
modules. Developing a real time application or any system utilities involves two processes.
The first process is to design the system to implement it. The second is to construct the
executable code.
Software design has evolved from an intuitive art dependent on experience to a science,
which provides systematic techniques for the software definition. Software design is a first
step in the development phase of the software life cycle.
Before design the system user requirements have been identified, information has been
gathered to verify the problem and evaluate the existing system. A feasibility study has been
conducted to review alternative solution and provide cost and benefit justification. To
overcome this proposed system is recommended. At this point the design phase begins.
The process of design involves conceiving and planning out in the mind and making a
drawing. In software design, there are three distinct activities: External design, Architectural
design and detailed design. Architectural design and detailed design are collectively referred
to as internal design. External design of software involves conceiving and planning out and
specifying the externally observable characteristics of a software product.
Input Design:
Systems design is the process of defining the architecture, components, modules, interfaces,
and data for a system to satisfy specified requirements. Systems design could be seen as the
application of systems theory to product development. There is some overlap with the
disciplines of systems analysis, systems architecture and systems engineering.
Input Design is the process of converting a user oriented description of the inputs to a
computer-based business system into a programmer-oriented specification.
• Input data were found to be available for establishing and maintaining master and
transaction files and for creating output records
• The most suitable types of input media, for either off-line or on-line devices, were
selected after a study of alternative data capture techniques.
• The sequence of fields should match the sequence of the fields on the source
document.
Design input requirements must be comprehensive. Product complexity and the risk
associated with its use dictate the amount of detail.
• Functional requirements specify what the product does, focusing on its operational
capabilities and the processing of inputs and resultant outputs.
• Performance requirements specify how much or how well the product must perform,
addressing such issues as speed, strength, response times, accuracy, limits of operation, etc.
Output Design:
A quality output is one, which meets the requirements of the end user and presents the
information clearly. In any system results of processing are communicated to the users and to
other system through outputs.
In output design it is determined how the information is to be displayed for immediate need
and also as hard-copy output. It is the most important and direct source of information to the
user. Efficient and intelligent output design improves the system's relationship with the user
and aids decision-making.
The output form of an information system should accomplish one or more of the following
objectives:
• Create a document, report, or other format that contains information produced by the
system.
• Convey information about past activities, current status, or projections of the future.
• Trigger an action.
• Confirm an action.
CHAPTER VI
Literature Survey
1. Fast fingerprint identification for large databases (2014)
Methodology: A distributed framework for fingerprint matching to tackle large databases
in a reasonable time is proposed.
Disadvantage: Due to the higher number of minutiae of the rolled fingerprints, the
matching process is more computationally complex.

5. A new RBFN with modified optimal clustering algorithm for clear and occluded
fingerprint identification (2016)
Methodology: A Radial Basis Function Network (RBFN) based on a Modified Optimal
Clustering Algorithm (MOCA) is developed for clear and occluded fingerprint
identification.
Disadvantage: Low accuracy and high learning time.
Software Description
A programming infrastructure created by Microsoft for building, deploying, and
running applications and services that use .NET technologies, such as desktop applications
and Web services.
Microsoft started development of the .NET Framework in the late 1990s, originally under the
name of Next Generation Windows Services (NGWS). By late 2000 the first beta versions of
.NET 1.0 were released. The .NET Framework (pronounced dot net) is a software
framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a
large library and provides language interoperability (each language can use code written in
other languages) across several programming languages. Programs written for the .NET
Framework execute in a software environment (as contrasted to hardware environment),
known as the Common Language Runtime (CLR), an application virtual machine that
provides services such as security, memory management, and exception handling. The class
library and the CLR together constitute the .NET Framework.
An application software platform from Microsoft introduced in 2002 and commonly called
.NET ("dot net"). The .NET platform is similar in purpose to the Java EE platform, and like
Java's JVM runtime engine, .NET's runtime engine must be installed in the computer in order
to run .NET applications.
.NET is similar to Java because it uses an intermediate bytecode language that can be
executed on any hardware platform that has a runtime engine. It is also unlike Java, as it
provides support for multiple programming languages. Microsoft languages are C# (C
Sharp), J# (J Sharp), Managed C++, JScript.NET and Visual Basic.NET. Other languages
have been reengineered in the European version of .NET, called the Common Language
Infrastructure .
.NET Versions
.NET Framework 1.0 introduced the Common Language Runtime (CLR) and .NET
Framework 2.0 added enhancements. .NET Framework 3.0 included the Windows
programming interface (API) originally known as "WinFX," which is backward compatible
with the Win32 API. .NET Framework 3.0 added the following four subsystems and was
installed with Windows, starting with Vista. .NET Framework 3.5 added enhancements and
introduced a client-only version (see .NET Framework Client Profile). .NET Framework 4.0
added parallel processing and language enhancements.
The User Interface (WPF)
Windows Presentation Foundation (WPF) provides the user interface. It takes advantage of
advanced 3D graphics found in many computers to display a transparent, glass-like
appearance.
Workflow (WWF)
Windows Workflow Foundation (WWF) is used to integrate applications and automate tasks.
Workflow structures can be defined in the XML Application Markup Language.
User Identity (WCS)
Windows CardSpace (WCS) provides an authentication system for logging into a Web site
and transferring personal information.
DESIGN FEATURES
Interoperability
Because computer systems commonly require interaction between newer and older
applications, the .NET Framework provides means to access functionality implemented in
newer and older programs that execute outside the .NET environment. Access
to COM components is provided in the System.Runtime.InteropServices and
System.EnterpriseServices namespaces of the framework; access to other functionality is
achieved using the P/Invoke feature.
The Common Language Runtime (CLR) serves as the execution engine of the .NET
Framework. All .NET programs execute under the supervision of the CLR, guaranteeing
certain properties and behaviors in the areas of memory management, security, and exception
handling.
Language independence
The Base Class Library (BCL), part of the Framework Class Library (FCL), is a
library of functionality available to all languages using the .NET Framework. The BCL
provides classes that encapsulate a number of common functions, including file reading and
writing, graphic rendering, database interaction, XML document manipulation, and so on. It
consists of classes and interfaces of reusable types that integrate with the CLR (Common
Language Runtime).
Simplified deployment
The .NET Framework includes design features and tools which help manage
the installation of computer software to ensure it does not interfere with previously installed
software, and it conforms to security requirements.
Security
The design addresses some of the vulnerabilities, such as buffer overflows, which
have been exploited by malicious software. Additionally, .NET provides a common security
model for all applications.
Portability
While Microsoft has never implemented the full framework on any system except
Microsoft Windows, it has engineered the framework to be platform-agnostic and cross-
platform implementations are available for other operating systems (see Silverlight and
the Alternative implementations section below). Microsoft submitted the specifications for
the Common Language Infrastructure (which includes the core class libraries, Common Type
System, and the Common Intermediate Language), the C# language and the C++/CLI
language [8] to both ECMA and the ISO, making them available as official standards. This
makes it possible for third parties to create compatible implementations of the framework and
its languages on other platforms.
ARCHITECTURE:
.NET has its own security mechanism with two general features: Code Access
Security (CAS), and validation and verification. Code Access Security is based on evidence
that is associated with a specific assembly. Typically the evidence is the source of the
assembly (whether it is installed on the local machine or has been downloaded from the
intranet or Internet). Code Access Security uses evidence to determine the permissions
granted to the code. Other code can demand that calling code is granted a specified
permission. The demand causes the CLR to perform a call stack walk: every assembly of
each method in the call stack is checked for the required permission; if any assembly is not
granted the permission a security exception is thrown.
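The stack walk described above can be modeled in a few lines. This is a conceptual sketch only, not the actual CLR API; the `SecurityException` class and the representation of the call stack as (assembly, granted-permissions) pairs are assumptions for illustration:

```python
class SecurityException(Exception):
    """Raised when a demanded permission is missing, mirroring the
    security exception thrown by the CLR during a stack walk."""
    pass

def demand(call_stack, permission):
    """Model of a Code Access Security stack walk: every assembly on
    the call stack must hold the demanded permission; if any assembly
    lacks it, a security exception is thrown."""
    for assembly, granted in call_stack:
        if permission not in granted:
            raise SecurityException(
                f"{assembly} lacks permission '{permission}'")
```

A stack in which every assembly holds "FileIO" passes silently, while inserting one downloaded assembly without that grant causes the whole demand to fail, which is exactly the property that stops untrusted callers from laundering privileges through trusted code.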
Class library
System
System.Diagnostics
System.Globalization
System.Resources
System.Text
System.Runtime.Serialization
System.Data
The .NET Framework includes a set of standard class libraries. The class library is
organized in a hierarchy of namespaces. Most of the built-in APIs are part of
either System.* or Microsoft.* namespaces. These class libraries implement a large number
of common functions, such as file reading and writing, graphic rendering, database
interaction, and XML document manipulation, among others. The .NET class libraries are
available to all CLI compliant languages.
Memory management
The .NET Framework CLR frees the developer from the burden of managing memory
(allocating and freeing up when done); it handles memory management itself by detecting
when memory can be safely freed. Instantiations of .NET types (objects) are allocated from
the managed heap, a pool of memory managed by the CLR. When there is no reference to an
object, and it cannot be reached or used, it becomes garbage, eligible for collection. The .NET
Framework includes a garbage collector which runs periodically, on a separate thread from
the application's thread, enumerating all the unusable objects and reclaiming the memory
allocated to them.
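The reachability rule the collector applies can be illustrated with a toy model. This is a conceptual sketch of reachability-based collection in general, not the CLR's actual generational collector; the object-id and reference-map representation is an assumption:

```python
def collect_garbage(objects, roots, refs):
    """Toy reachability-based garbage collection: any object not
    reachable from the roots via references is garbage and is
    reclaimed. `refs` maps an object id to the ids it references."""
    reachable, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj in reachable:
            continue
        reachable.add(obj)
        # Follow outgoing references from this object.
        stack.extend(refs.get(obj, ()))
    # Return the surviving heap; everything else is reclaimed.
    return {o for o in objects if o in reachable}
```

For a heap {a, b, c, d} rooted only at a, with a referencing b and c referencing d, the pair c and d is unreachable and is reclaimed even though c and d reference each other's chain, which is why reference cycles pose no problem for a tracing collector.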
VB.NET
VB.NET uses statements to specify actions. The most common statement is an expression
statement, consisting of an expression to be evaluated, on a single line. As part of that
evaluation, functions or subroutines may be called and variables may be assigned new values.
To modify the normal sequential execution of statements, VB.NET provides several control-
flow statements identified by reserved keywords. Structured programming is supported by
several constructs including two conditional execution constructs
( If … Then … Else … End If and Select Case ... Case ... End Select ) and three iterative
constructs ( For … Next, Do … Loop, and While … End While ).
The For … To statement has separate initialization and testing sections, both of which must
be present. The For Each statement steps through each value in a list.
There is no unified way of defining blocks of statements. Instead, certain keywords, such
as "If … Then" or "Sub" are interpreted as starters of sub-blocks of code and have
matching termination keywords such as "End If" or "End Sub".
Statements are terminated either with a colon (":") or with the end of line. Multiple line
statements in Visual Basic .NET are enabled with " _" at the end of each such line. The
need for the underscore continuation character was largely removed in version 10 and
later versions.[2]
The equals sign ("=") is used both in assigning values to variables and in comparison.
Round brackets (parentheses) are used with arrays, both to declare them and to get a
value at a given index in one of them. Visual Basic .NET uses round brackets to define
the parameters of subroutines or functions.
A single quotation mark ('), placed at the beginning of a line or after any number
of space or tab characters at the beginning of a line, or after other code on a line, indicates
that the (remainder of the) line is a comment.
CHAPTER VII
System Testing
Software Testing
Software testing is the process of evaluating a software item to detect differences between
given input and expected output. The features of the software item are also assessed. Testing
assesses the quality of the product. Software testing is a process that should be carried out
during the development process; in other words, software testing is a verification and
validation process.
Types of testing
There are different levels during the process of testing. Levels of testing include the
different methodologies that can be used while conducting software testing. Following are
the main levels of software testing:
Functional Testing.
Non-Functional Testing.
Functional Testing
Functional Testing of the software is conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. There are five steps that are
involved when testing an application for functionality.
Steps Description
An effective testing practice will see the above steps applied to the testing policies of
every organization and hence it will make sure that the organization maintains the strictest of
standards when it comes to software quality.
Unit Testing
This type of testing is performed by the developers before the setup is handed over to
the testing team to formally execute the test cases. Unit testing is performed by the respective
developers on the individual units of source code in their assigned areas. The developers use test data
that is separate from the test data of the quality assurance team. The goal of unit testing is to
isolate each part of the program and show that individual parts are correct in terms of
requirements and functionality.
Limitations of Unit Testing
Testing cannot catch each and every bug in an application. It is impossible to evaluate
every execution path in every software application. The same is the case with unit testing.
There is a limit to the number of scenarios and test data that the developer can use to
verify the source code. After all options have been exhausted, there is no choice but to stop
unit testing and merge the code segment with other units.
Integration Testing
This is the next level in the testing and tests the system as a whole. Once all the
components are integrated, the application as a whole is tested rigorously to see that it meets
Quality Standards. This type of testing is performed by a specialized testing team. System
testing is so important because of the following reasons:
System testing is the first level of testing in which the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical
specifications.
System Testing enables us to test, verify and validate both the business requirements
as well as the Applications Architecture.
Regression Testing
• Minimize the gaps in testing when an application with changes made has to be tested.
• Test the new changes to verify that the changes made did not affect any other area of
the application.
Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the Quality
Assurance Team, who will gauge whether the application meets the intended specifications
and satisfies the client's requirements. The QA team will have a set of pre-written scenarios
and Test Cases that will be used to test the application. More ideas will be shared about the
application and more tests can be performed on it to gauge its accuracy and the reasons why
the project was initiated. Acceptance tests are not only intended to point out simple spelling
mistakes, cosmetic errors, or interface gaps, but also to point out any bugs in the application
that will result in system crashes or major errors in the application. By performing
acceptance tests on an application the testing team will deduce how the application will
perform in production. There are also legal and contractual requirements for acceptance of
the system.
Alpha Testing
This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing when
combined are known as alpha testing. During this phase, the following will be tested in the
application:
Spelling Mistakes
Broken Links
Cloudy Directions
The Application will be tested on machines with the lowest specification to test
loading times and any latency problems.
Beta Testing
This test is performed after Alpha testing has been successfully performed. In beta
testing a sample of the intended audience tests the application. Beta testing is also known as
pre-release testing. Beta test versions of software are ideally distributed to a wide audience on
the Web, partly to give the program a "real-world" test and partly to provide a preview of the
next release. In this phase the audience will be testing the following:
Users will install, run the application and send their feedback to the project team.
Getting the feedback, the project team can fix the problems before releasing the
software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of your
application will be.
Having a higher-quality application when you release to the general public will
increase customer satisfaction.
Non-Functional Testing
This section is based upon the testing of the application from its non-functional
attributes. Non-functional testing involves testing the software against requirements which
are non-functional in nature but equally important, such as performance, security, and user
interface. Some of the important and commonly used non-functional testing types are
mentioned as follows:
Performance Testing
It is mostly used to identify bottlenecks or performance issues rather than to find bugs in
the software. There are different causes which contribute to lowering the performance of
software:
Network delay.
Performance testing is considered as one of the important and mandatory testing type in terms
of following aspects:
Capacity
Stability
Scalability
It can be either qualitative or quantitative testing activity and can be divided into different sub
types such as Load testing and Stress testing.
Stress Testing
This testing type includes the testing of Software behavior under abnormal conditions.
Taking away the resources, applying load beyond the actual load limit is Stress testing.
The main intent is to test the Software by applying the load to the system and taking over the
resources used by the Software to identify the breaking point. This testing can be performed
by testing different scenarios such as:
Running different processes that consume resources such as CPU, Memory, server
etc.
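Applying load programmatically and measuring how response times degrade is the usual way to find the breaking point. The following is a minimal harness sketch; the request function and the request count are placeholders, not parts of this project:

```python
import time

def load_test(fn, requests=1000):
    """Minimal load-test harness: call `fn` repeatedly and report the
    average and worst-case latency in milliseconds."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        fn()  # the operation under load, e.g. one simulated request
        latencies.append((time.perf_counter() - start) * 1000.0)
    return sum(latencies) / len(latencies), max(latencies)

# Exercise a cheap stand-in workload 200 times.
avg_ms, worst_ms = load_test(lambda: sum(range(1000)), requests=200)
```

In a real stress test the harness would ramp `requests` (or run several such loops concurrently) until latency or error rates show the system's breaking point.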
Usability Testing
This section includes different concepts and definitions of Usability testing from
Software point of view. It is a black box technique and is used to identify any error(s) and
improvements in the Software by observing the users through their usage and operation.
According to Nielsen, usability can be defined in terms of five factors: efficiency of use,
learnability, memorability, errors/safety, and satisfaction. According to him, the usability of
the product will be good and the system will be usable if it possesses these factors.
Nigel Bevan and Macleod considered that Usability is the quality requirement which
can be measured as the outcome of interactions with a computer system. This requirement
can be fulfilled and the end user will be satisfied if the intended goals are achieved effectively
with the use of proper resources. Molich in 2000 stated that user friendly system should fulfill
the following five goals i.e. Easy to Learn, Easy to Remember, Efficient to Use, Satisfactory
to Use and Easy to Understand.
In addition to different definitions of usability, there are some standards and quality
models and methods which define the usability in the form of attributes and sub attributes
such as ISO-9126, ISO-9241-11, ISO-13407 and IEEE std.610.12 etc.
UI vs Usability Testing
UI testing involves testing the Graphical User Interface of the Software. It ensures that
the GUI conforms to the requirements in terms of color, alignment, size and other
properties.
Usability testing, on the other hand, ensures that the GUI is well designed, user friendly
and easy for the end user to operate. UI testing can therefore be considered a sub-part of
Usability testing.
Security Testing
Security testing involves testing the Software in order to identify any flaws and gaps
from a security and vulnerability point of view. The following are the main aspects that
Security testing should ensure:
Confidentiality.
Integrity.
Authentication.
Availability.
Authorization.
Non-repudiation.
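The authentication and authorization aspects above can be expressed as concrete test assertions. The toy `login` function, user table and roles below are hypothetical, meant only to show the shape of such checks:

```python
# Hypothetical user store; a real system would hash passwords.
USERS = {"alice": {"password": "s3cret", "role": "admin"},
         "bob":   {"password": "hunter2", "role": "viewer"}}

def login(username, password):
    """Return the user's role on success, None otherwise (authentication)."""
    user = USERS.get(username)
    if user is None or user["password"] != password:
        return None
    return user["role"]

def can_delete_records(role):
    """Only admins may delete records (authorization)."""
    return role == "admin"

# Authentication: wrong or missing credentials must be rejected.
assert login("alice", "wrong") is None
assert login("mallory", "s3cret") is None
# Authorization: a valid but low-privilege user must still be blocked.
assert can_delete_records(login("bob", "hunter2")) is False
assert can_delete_records(login("alice", "s3cret")) is True
```

Confidentiality, integrity and non-repudiation are typically verified at the protocol and logging level rather than with unit-style assertions like these.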
Portability Testing
Portability testing checks that the Software is re-usable and can be moved to another
environment or platform as well. The following strategy can be used for Portability
testing:
Portability testing can be considered one of the sub-parts of System testing, as this
testing type includes the overall testing of the Software with respect to its usage over
different environments.
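One concrete portability concern is path handling that works unchanged across operating systems. The sketch below is illustrative (the config layout is invented), showing the kind of check a portability test might make:

```python
import os

def config_path(home, app):
    """Build a per-user config path without hard-coding a path
    separator, so the same code runs on Windows, Linux and macOS."""
    return os.path.join(home, "." + app, "settings.ini")

# The assembled path uses whatever separator the host platform expects.
p = config_path("home", "myapp")
assert p.split(os.sep) == ["home", ".myapp", "settings.ini"]
```

A fuller portability test would execute the whole test suite on each target environment, which is why this testing is often folded into System testing as noted above.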
CHAPTER VIII
Conclusion
The success of AFIS and their extensive deployment all over the world have
prompted some individuals to take extreme measures to evade identification by altering their
fingerprints. The problem of fingerprint alteration or obfuscation is very different from that of
fingerprint spoofing, where an individual uses a fake fingerprint in order to adopt the identity
of another individual. While the problem of spoofing has received substantial attention in the
literature, the problem of obfuscation has not been addressed in the biometric literature, in
spite of numerous documented cases of fingerprint alteration for the purpose of evading
identification. While obfuscation may be encountered with other biometric modalities (such
as face and iris), this problem is especially significant in the case of fingerprints due to the
widespread deployment of AFIS in both government and civilian applications and the ease
with which fingerprints can be obfuscated.