
FINGERPRINT IDENTIFICATION TECHNIQUES FROM ABRADING, CUTTING,

AND BURNING FINGERS

Abstract

The widespread deployment of Automated Fingerprint Identification Systems (AFIS)


in law enforcement and border control applications has heightened the need for ensuring that
these systems are not compromised. While several issues related to fingerprint system
security have been investigated, including the use of fake fingerprints for masquerading
identity, the problem of fingerprint alteration or obfuscation has received very little attention.
Fingerprint obfuscation refers to the deliberate alteration of the fingerprint pattern by an
individual for the purpose of masking his identity. Several cases of fingerprint obfuscation
have been reported in the press. Fingerprint image quality assessment software (e.g., NFIQ)
cannot always detect altered fingerprints since the implicit image quality due to alteration
may not change significantly.

In this work, case studies of incidents where individuals were found to have altered
their fingerprints for circumventing AFIS are compiled and the impact of fingerprint
alteration on the accuracy of a commercial fingerprint matcher is investigated. The alterations
are classified into three major categories and possible countermeasures are suggested. A
technique is developed to automatically detect altered fingerprints based on analyzing
orientation field and minutiae distribution. The proposed technique and the NFIQ algorithm
are evaluated on a large database of altered fingerprints provided by a law enforcement
agency. Experimental results show the feasibility of the proposed approach in detecting
altered fingerprints and highlight the need to further pursue this problem.
Eminent Technology Solutions

EMINENT TECHNOLOGY SOLUTIONS is one of the leading information technology
companies. Through its Global Network Delivery Model, Innovation Network and Solution
Accelerators, ETS focuses on helping global organizations address their business challenges
effectively. ETS is an enterprise software company headquartered in Bangalore, with a
regional office at Madurai and branch offices across Tamil Nadu. It possesses not only the
latest technology but also the most knowledgeable and experienced hands to offer
user-friendly, customized solutions. ETS offers a unique, customer-centric model for
delivering software products and services, and has the vision, the ability to execute, and the
financial resources to be a strong business partner for your enterprise. Our offerings for
application delivery, application management, and IT governance help customers maximize
the business value of IT by optimizing application quality, performance, and availability, as
well as managing IT costs, risks, and compliance. ETS offers the following solutions to the
IT industry:
 Software Development
 Web Development
 High-End Training for Students & Professionals

Who we are

Eminent Technology Solutions is a company providing software development, IT
solutions, academic projects, website development, and professional services. The company
is located in Bangalore, the Silicon City of India. It was founded by vastly experienced
global IT professionals to meet the challenges of today's growing global technological
environment in the information technology sector.

Eminent Technology Solutions meets these global challenges with its pool of highly
qualified professionals. The company has competencies in customized software
development, outsourcing of manpower, and consultancy in the areas of information systems
analysis, design, development, and implementation.
Company's Mission Statement

To develop cost-effective, better-performing, user-friendly, computer-based business
solutions with quality and reliability within the time frame, and in the process to create
satisfied customers and proud employees.

Company's Values

Trust and treat each other with the utmost respect; pursue continuous skill improvement
within a professional work environment.
CHAPTER I

INTRODUCTION

Biometrics is a method of recognizing a person based on physiological and behavioral
characteristics such as the face, fingerprints, hand geometry, handwriting, iris, palm print,
and voice. Among all biometric modalities, the fingerprint is the most popular for
identification. Fingerprints are unique; even identical twins do not have the same fingerprint
patterns. A fingerprint contains many features, such as terminations, bifurcations (Fig. 1),
loops, islands, whorls, cores, and deltas, but the most widely used features of a fingerprint
are its ridges and valleys. Ridges are the dark areas and valleys are the white areas of the
fingerprint.
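The ridge/valley distinction described above can be illustrated with a simple global-threshold binarization. This is only a sketch of my own: the global-mean threshold, the `ridge_valley_mask` name, and the toy stripe image are all assumptions, since real fingerprint systems use local adaptive binarization.

```python
import numpy as np

def ridge_valley_mask(img):
    """Label ridge pixels (dark) as 1 and valley pixels (light) as 0.

    Uses the global mean gray value as the threshold; illustrative
    only -- production systems binarize with local adaptive methods.
    """
    img = np.asarray(img, dtype=float)
    return (img < img.mean()).astype(np.uint8)

# Toy stripe pattern: two dark "ridge" columns followed by two light
# "valley" columns, repeated across a 4 x 8 image.
toy = np.tile([40, 40, 220, 220], (4, 2))
mask = ridge_valley_mask(toy)
```

Applied to the toy image, the dark columns come out as 1 (ridge) and the light columns as 0 (valley).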

Figure 1. Typical fingerprint with its features. Figure 2. Typical termination and
bifurcation.

Fingerprint recognition has been successfully used by law enforcement agencies to


identify suspects and victims for almost 100 years. Recent advances in automated fingerprint
identification technology, coupled with the growing need for reliable person identification,
have resulted in an increased use of fingerprints in both government and civilian applications
such as border control, employment background checks, and secure facility access. The
success of fingerprint recognition systems in accurately identifying individuals has prompted
some individuals to take extreme measures to circumvent these systems. The primary
purpose of fingerprint alteration is to evade identification, using techniques ranging from
abrading, cutting, and burning fingers to undergoing plastic surgery.

The use of altered fingerprints to mask one’s identity constitutes a serious “attack”
against a border control biometric system since it defeats the very purpose for which the
system was deployed in the first place, i.e., to identify individuals in a watch list. It should be
noted that altered fingerprints are different from fake fingerprints. The use of fake fingers
made of glue, latex, or silicone is a well-publicized method to circumvent fingerprint
systems. Altered fingerprints, however, are real fingers that are used to conceal one’s identity
in order to evade identification by a biometric system.

While fake fingers are typically used by individuals to adopt another person’s
identity, altered fingers are used to mask one’s own identity. In order to detect attacks based
on fake fingers, many software and hardware solutions have been proposed. However, the
problem of altered fingerprints has hitherto not been studied in the literature and there are no
reported techniques to identify them. Furthermore, the lack of public databases comprised of
altered fingerprint images has stymied research in this area. One of the goals of this paper is
to highlight the importance of the problem, analyze altered fingerprints, and propose an
automatic detection algorithm for them.

TYPES OF ALTERED FINGERPRINTS

We classify altered fingerprints into three categories based on the changes in the ridge
pattern due to alteration. This categorization assists us in the following ways: 1) gaining a
better understanding of the nature of alterations that can be encountered, 2) detecting altered
fingerprints by modeling well-defined subcategories, and 3) developing methods for altered
fingerprint restoration.

Obliteration

Friction ridge patterns on fingertips can be obliterated by abrading, cutting, burning,


applying strong chemicals, and transplanting smooth skin. Further factors such as skin
disease (such as leprosy) and side effects of a cancer drug can also obliterate fingerprints.
Friction ridge structure is barely visible within the obliterated region. Obliteration appears to
be the most popular form of alteration.

This may be because obliteration, which completely destroys ridge structures, is much
simpler to perform than distortion/imitation, which requires a surgical procedure.
Furthermore, detecting distorted or imitated fingerprints is much more difficult for human
examiners than obliterated fingerprints. Obliterated fingerprints can evade fingerprint quality
control software, depending on the area of the damage. If the affected finger area is small, the
existing fingerprint quality assessment software may fail to detect it as an altered fingerprint,
but AFIS is likely to successfully match the damaged fingerprint to the original mated
fingerprint. But, if the altered area is sufficiently large, fingerprint quality control software
can easily detect the damage. To identify individuals with severely obliterated fingerprints, it
may be necessary to treat these fingerprints as latent images, perform AFIS search using
manually marked features, and adopt an appropriate fusion scheme for tenprint search. In rare
cases, even if the finger surface is completely damaged, the dermal papillary surface, which
contains the same pattern as the epidermal pattern, may be used for identification.

Distortion

Friction ridge patterns on fingertips can be turned into unnatural ridge patterns by
removing portions of skin from a fingertip and either grafting them back in different positions
or replacing them with friction ridge skin from the palm or sole. Distorted fingerprints have
unusual ridge patterns which are not found in natural fingerprints. These abnormalities
include abnormal spatial distribution of singular points or abrupt changes in orientation field
along the scars. Distorted fingerprints can also successfully pass the fingerprint quality test
since their local ridge structure remains similar to natural fingerprints while their global ridge
pattern is abnormal. Fingerprints altered by “Z” cut are of special interest since they retain
their original ridge structure, enabling reconstruction of the original fingerprint before
alteration. Therefore, it is imperative to upgrade current fingerprint quality control software
to detect the distorted fingerprints. Once detected, the following operations may be performed
to assist AFIS: 1) identify unaltered regions of the fingerprint and manually mark the features
in these regions and 2) reconstruct the original fingerprint as in the “Z” cut case.

Imitation

Friction ridge patterns on fingertips can still preserve a fingerprint-like pattern after an
elaborate alteration procedure: 1) a portion of skin is removed and the remaining skin is
pulled and stitched together, 2) friction ridge skin from other parts of the body is used to fill
the removed part of the fingertip so that it reconciles with the remaining ridge structure, or
3) the entire fingertip is transplanted. Imitated fingerprints can not only successfully pass
fingerprint quality assessment software, they can also confound human examiners. To match
altered fingerprints, matching algorithms that are robust to distortion and inconsistency need
to be developed. In the case where fingerprints from different fingers are swapped,
fingerprint matching should be performed without using finger position information.
CHAPTER II

Existing System

Fingerprint alteration has even been performed at a much larger scale involving a
group of individuals. It has been reported that hundreds of asylum seekers had cut, abraded,
and burned their fingertips to prevent identification by EURODAC, a European Union-wide
fingerprint system for identifying asylum seekers. Although the number of publicly disclosed
cases of altered fingerprints is not very large, it is extremely difficult to estimate the actual
number of individuals who have successfully evaded identification by fingerprint systems as
a result of fingerprint alteration. Almost all the people identified as having altered their
fingerprints were not detected by AFIS, but by some other means.

Since existing fingerprint quality assessment algorithms are designed to examine if an


image contains sufficient information (say, minutiae) for matching, they have limited
capability in determining if an image is a natural fingerprint or an altered fingerprint.
Obliterated fingerprints can evade fingerprint quality control software, depending on the area
of the damage. If the affected finger area is small, the existing fingerprint quality assessment
software may fail to detect it as an altered fingerprint. In other words, the quality assessment
software of the existing system cannot flag a fingerprint as altered when the altered area is
small.
CHAPTER III

Proposed System

Developing an automatic solution to detect altered fingerprints is the first step in


defeating fingerprint alteration. Fingerprint quality assessment routines used in most
fingerprint identification systems, such as the open-source NIST Fingerprint Image Quality
(NFIQ) software, may be useful in detecting altered fingerprints if the corresponding images
are indeed of poor quality. But, not all altered fingerprint images have poor quality. Since
existing fingerprint quality assessment algorithms are designed to examine if an image
contains sufficient information (say, minutiae) for matching, they have limited capability in
determining if an image is a natural fingerprint or an altered fingerprint. The proposed system
was evaluated at two levels: finger level and subject level. At the finger level, we evaluate the
performance of distinguishing between natural and altered fingerprints. At the subject level,
we evaluate the performance of distinguishing between subjects with natural fingerprints and
those with altered fingerprints.
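The two evaluation levels can be connected by a simple fusion step. As a hedged sketch (the exact fusion rule is not specified in this section, so the max rule, the `is_subject_altered` name, the [0, 1] score range, and the 0.5 threshold are all illustrative assumptions), a subject can be flagged when any one finger receives a high alteration score:

```python
def subject_altered_score(finger_scores):
    """Fuse per-finger alteration scores (assumed to lie in [0, 1])
    into one subject-level score with the max rule: a subject is
    suspicious if ANY of his or her fingers looks altered."""
    return max(finger_scores)

def is_subject_altered(finger_scores, threshold=0.5):
    """Subject-level decision; the 0.5 threshold is illustrative."""
    return subject_altered_score(finger_scores) >= threshold
```

The max rule reflects the finger-level/subject-level distinction: a single convincingly altered finger is enough to flag the whole subject, even if the remaining fingers look natural.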

The main contributions of this work are:

1) Compiling case studies of incidents where individuals were found to have altered
their fingerprints to circumvent AFIS.

2) Investigating the impact of fingerprint alteration on the accuracy of a commercial


fingerprint matcher.

3) Classifying the alterations into three major categories and suggesting possible
countermeasures.

4) Developing a technique to automatically detect altered fingerprints based on


analyzing orientation field and minutiae distribution, and

5) Evaluating the proposed technique and the NFIQ algorithm on a large database of
altered fingerprints provided by a law enforcement agency.

Experimental results show the feasibility of the proposed approach in detecting


altered fingerprints and highlight the need to further pursue this problem.
System Architecture

The proposed algorithm, based on features extracted from the orientation field and
minutiae, satisfies the three essential requirements for an alteration detection algorithm:

1) Fast operational time,

2) High true positive rate at low false positive rate, and

3) Ease of integration into AFIS.


Data Flow Diagram

Modules

1. Detection of Altered Fingerprints

a. Normalization

b. Orientation field estimation

c. Orientation field approximation

d. Feature extraction

2. Analysis of Minutiae Distribution


Modules Description

1. DETECTION OF ALTERED FINGERPRINTS

A. NORMALIZATION

An input fingerprint image is normalized by cropping a rectangular region of the


fingerprint, which is located at the center of the fingerprint and aligned along the longitudinal
direction of the finger, using the NIST Biometric Image Software (NBIS). This step ensures
that the features extracted in the subsequent steps are invariant to translation and rotation of
the finger.
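The cropping part of this normalization can be sketched as follows. This is a stand-in of my own for the NBIS-based step: the real normalization also aligns the crop with the finger's longitudinal axis, which is omitted here, and the `crop_center` helper is an assumed name.

```python
import numpy as np

def crop_center(img, out_h, out_w):
    """Crop an out_h x out_w window centred on the image.

    Simplified stand-in for the NBIS-based normalization step; the
    rotation that aligns the crop with the finger's longitudinal
    axis is omitted in this sketch.
    """
    h, w = img.shape
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]
```

Because the window is taken relative to the image centre, the features computed on it afterwards do not depend on where the finger was placed on the sensor.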

B. ORIENTATION FIELD ESTIMATION

The orientation field of the fingerprint is computed using the gradient-based method.
The initial orientation field is smoothed with an averaging filter, followed by averaging the
orientations in pixel blocks. A foreground mask is obtained by measuring the dynamic range
of gray values of the fingerprint image in local blocks, and a morphological process is
performed to fill holes and remove isolated blocks.
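The gradient-based block orientation estimate can be sketched with the standard doubled-angle averaging of the gradient tensor. The 16-pixel block size and the `block_orientation` name are assumptions; smoothing and the foreground mask are omitted for brevity.

```python
import numpy as np

def block_orientation(img, blk=16):
    """Gradient-based ridge orientation per blk x blk block.

    Standard approach: average the doubled-angle quantities
    (2*Gx*Gy, Gx^2 - Gy^2) over each block, halve the angle, and
    add pi/2 because ridge flow is perpendicular to the dominant
    gray-level gradient.
    """
    gy, gx = np.gradient(img.astype(float))   # d/drow, d/dcol
    h, w = img.shape
    theta = np.zeros((h // blk, w // blk))
    for i in range(h // blk):
        for j in range(w // blk):
            sx = gx[i * blk:(i + 1) * blk, j * blk:(j + 1) * blk]
            sy = gy[i * blk:(i + 1) * blk, j * blk:(j + 1) * blk]
            num = 2.0 * (sx * sy).sum()
            den = (sx ** 2 - sy ** 2).sum()
            theta[i, j] = 0.5 * np.arctan2(num, den) + np.pi / 2
    return theta
```

On a synthetic image of vertical stripes (gray value varying only across columns), every block correctly reports a vertical ridge orientation of pi/2.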

C. ORIENTATION FIELD APPROXIMATION

The orientation field is approximated by a polynomial model to obtain a smooth model
orientation field.

D. FEATURE EXTRACTION

The error map is computed as the absolute difference between the estimated orientation field
and its polynomial approximation, and it is used to construct the feature vector.
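The approximation and error-map steps can be sketched together. Since orientation is a doubled angle, a common trick is to fit cos(2θ) and sin(2θ) separately with a 2-D polynomial and recombine; the polynomial degree, the ordinary least-squares fit, and the function names here are my assumptions, not the paper's exact model.

```python
import numpy as np

def poly_fit_orientation(theta, degree=2):
    """Least-squares polynomial approximation of an orientation field.

    Fits cos(2*theta) and sin(2*theta) with all monomials x^i * y^j,
    i + j <= degree, then recombines with atan2 and halves the angle.
    """
    h, w = theta.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs = ys.ravel() / h, xs.ravel() / w      # normalized coords
    cols = [xs ** i * ys ** j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    c, *_ = np.linalg.lstsq(A, np.cos(2 * theta).ravel(), rcond=None)
    s, *_ = np.linalg.lstsq(A, np.sin(2 * theta).ravel(), rcond=None)
    return 0.5 * np.arctan2(A @ s, A @ c).reshape(h, w)

def error_map(theta, approx):
    """Absolute orientation difference, wrapped to [0, pi/2]."""
    d = np.abs(theta - approx) % np.pi
    return np.minimum(d, np.pi - d)
```

For a natural fingerprint the smooth polynomial tracks the field closely and the error map stays small; scars and grafts from alteration produce localized spikes, which is what the feature vector picks up.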

2. ANALYSIS OF MINUTIAE DISTRIBUTION

In this module, the minutiae are analyzed. A minutia in a fingerprint indicates a ridge
characteristic such as a ridge ending or a ridge bifurcation. Almost all fingerprint
recognition systems use minutiae for matching.

In addition to the abnormality observed in orientation field, we also noted that


minutiae distribution of altered fingerprints often differs from that of natural fingerprints.
Based on the minutiae extracted from a fingerprint by the open source minutiae extractor in
NBIS, a minutiae density map is constructed by using the Parzen window method with
uniform kernel function.
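The Parzen-window density estimate with a uniform kernel can be sketched directly: each pixel of the map counts the minutiae that fall within a fixed window around it, normalized by the window area. The circular window, the 20-pixel radius, and the `minutiae_density_map` name are illustrative assumptions.

```python
import numpy as np

def minutiae_density_map(minutiae, shape, radius=20):
    """Minutiae density via a Parzen window with a uniform kernel.

    minutiae : list of (x, y) pixel coordinates.
    shape    : (height, width) of the output map.
    Each pixel accumulates 1/area for every minutia lying within
    `radius` pixels of it (uniform circular kernel).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros(shape, dtype=float)
    area = np.pi * radius ** 2
    for mx, my in minutiae:
        density += ((xs - mx) ** 2 + (ys - my) ** 2 <= radius ** 2) / area
    return density
```

Natural fingerprints spread minutiae fairly evenly, while scarred or obliterated regions produce dense clusters of spurious minutiae; unusually high peaks in this map therefore serve as an alteration cue.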

ER Diagram
SYSTEM REQUIREMENT

HARDWARE REQUIREMENTS

 System : Pentium IV 2.4 GHz.

 Hard Disk : 80 GB.

 Monitor : 15 VGA Color.

 Mouse : Logitech.

 Ram : 512 MB.

SOFTWARE REQUIREMENTS

 Operating system : Windows 7 Ultimate

 Front End : Visual Studio 2010

 Coding Language : C#.NET

 Database : SQL Server 2008


CHAPTER IV

Feasibility Study

The feasibility study is an evaluation of the proposed system regarding its workability,
its ability to meet user needs within the organization, and its effective use of resources.
When a new application is proposed, it should go through a feasibility study before it is
approved for development.

There are three aspects of feasibility study.


1. Technical Feasibility
2. Economic Feasibility
3. Operational Feasibility

Technical Feasibility

The considerations normally associated with technical feasibility include where the project
is to be developed and implemented. The proposed software should keep the data secure and
process it quickly and efficiently. Basic knowledge of operating a computer is sufficient to
handle the system, since the system is designed to provide user-friendly access.

Economic Feasibility

Economic justification is generally the "bottom line" consideration for most systems. It
includes a broad range of concerns centered on the cost-benefit analysis. The cost-benefit
analysis delineates the costs of project development and weighs them against the tangible
and intangible benefits of the system.

Operational Feasibility

The new system must be accepted by the users. In this system, the administrator is one of the
users, and since the users themselves are responsible for initiating the development of the
new system, the risk of rejection is ruled out.
Cost-benefit analysis (CBA) is an analytical tool for assessing the pros and cons of moving
forward with a business proposal.
A formal CBA tallies all of the planned project costs, quantifies each of the tangible benefits
and calculates key financial performance metrics such as return on investment (ROI), net
present value (NPV), internal rate of return (IRR) and payback period. The costs associated
with taking action are then subtracted from the benefits that would be gained. As a general
rule, the costs should be less than 50 percent of the benefits and the payback period shouldn't
exceed 12 months.
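The financial metrics named above can be computed directly. As a sketch with made-up numbers (the cash flows, discount rate, and function names below are illustrative, not figures from this project):

```python
def npv(rate, cashflows):
    """Net present value: discount each period's cash flow back to
    today. cashflows[0] is the upfront cost (negative), later entries
    are the benefits received in periods 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_months(cost, monthly_benefit):
    """Months needed for cumulative (undiscounted) benefit to cover
    the upfront cost -- the simple payback period."""
    months, cumulative = 0, 0.0
    while cumulative < cost:
        cumulative += monthly_benefit
        months += 1
    return months
```

For example, a project costing 100 that returns 110 one period later has an NPV of zero at a 10 percent discount rate (it just breaks even), and a project costing 1000 with a monthly benefit of 100 pays back in 10 months, within the 12-month rule of thumb stated above.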

A CBA is considered a subjective (as opposed to objective) assessment tool because
cost and benefit calculations can be influenced by the choice of supporting data and
estimation methodologies. Sometimes its most valuable use when assessing the value of a
business proposal is to serve as a vehicle for discussion. Cost-benefit analysis is sometimes
called benefit-cost analysis (BCA). The Cost Benefit Analysis Method (CBAM) consists of
the following steps:

1. Choosing scenarios and architectural strategies


2. Assessing quality attribute benefits
3. Quantifying the benefits of architectural strategies
4. Quantifying the costs and schedule implications of architectural strategies
5. Calculating desirability and making decisions
CHAPTER V
System Design

System design concentrates on moving from problem domain to solution domain. This
important phase is composed of several steps. It provides the understanding and procedural
details necessary for implementing the system recommended in the feasibility study.
Emphasis is on translating the performance requirements into design specification.

The design of any software involves mapping the software requirements into functional
modules. Developing a real-time application or any system utility involves two processes:
the first is to design the system, and the second is to construct the executable code.

Software design has evolved from an intuitive art dependent on experience to a science,
which provides systematic techniques for the software definition. Software design is a first
step in the development phase of the software life cycle.

Before designing the system, user requirements were identified and information was
gathered to verify the problem and evaluate the existing system. A feasibility study was
conducted to review alternative solutions and provide cost and benefit justification, and the
proposed system was recommended. At this point the design phase begins.

The process of design involves conceiving and planning out a solution and capturing it in a
drawing. In software design, there are three distinct activities: external design, architectural
design and detailed design. Architectural design and detailed design are collectively referred
to as internal design. External design of software involves conceiving and planning out and
specifying the externally observable characteristics of a software product.
Input Design:

Systems design is the process of defining the architecture, components, modules, interfaces,
and data for a system to satisfy specified requirements. Systems design could be seen as the
application of systems theory to product development. There is some overlap with the
disciplines of systems analysis, systems architecture and systems engineering.

Input Design is the process of converting a user oriented description of the inputs to a
computer-based business system into a programmer-oriented specification.

• Input data were found to be available for establishing and maintaining the master and
transaction files and for creating output records.

• The most suitable types of input media, for either off-line or on-line devices, were
selected after a study of alternative data capture techniques.

Input Design Consideration

• The field length must be documented.

• The sequence of fields should match the sequence of the fields on the source
document.

• The data format must be identified to the data entry operator.

Design input requirements must be comprehensive. Product complexity and the risk
associated with its use dictate the amount of detail:

• Functional requirements specify what the product does, focusing on its operational
capabilities and the processing of inputs and resultant outputs.

• Performance requirements specify how much or how well the product must perform,
addressing such issues as speed, strength, response times, accuracy, limits of operation, etc.
Output Design:

A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs.

In output design it is determined how the information is to be displayed for immediate need,
and also the hard-copy output. Output is the most important and direct source of information
for the user. Efficient and intelligent output design improves the system's relationship with
the user and aids decision-making.

1. Designing computer output should proceed in an organized, well thought out
manner; the right output must be developed while ensuring that each output element is
designed so that people will find the system easy and effective to use. When analysts
design computer output, they should identify the specific output that is needed to meet
the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by the
system.

The output form of an information system should accomplish one or more of the following
objectives.

• Convey information about past activities, current status, or projections of the future.

• Signal important events, opportunities, problems, or warnings.

• Trigger an action.

• Confirm an action.
CHAPTER VI

Literature Survey

1. Fast fingerprint identification for large databases (2014)
Methodology: A distributed framework for fingerprint matching that tackles large databases
in a reasonable time is proposed.
Disadvantage: Due to the higher number of minutiae in rolled fingerprints, the matching
process is more computationally complex.

2. A knowledge-based decision support system for adaptive fingerprint identification that
uses relevance feedback (2015)
Methodology: A user-centric and adaptive framework that allows tacit knowledge of
fingerprint examiners to be captured and re-used to enhance their future decisions is
presented.
Disadvantage: High computational complexity.

3. Towards contactless, low-cost and accurate 3D fingerprint identification (2013)
Methodology: A new representation of 3D finger surface features using Finger Surface
Codes is developed.
Disadvantage: The matching accuracy is low.

4. Hierarchical Minutiae Matching for Fingerprint and Palmprint Identification (2013)
Methodology: A novel hierarchical minutiae matching algorithm for fingerprint and
palmprint identification systems is proposed.
Disadvantage: The time required for searching is high.

5. A new RBFN with modified optimal clustering algorithm for clear and occluded
fingerprint identification (2016)
Methodology: A Radial Basis Function Network (RBFN) based on a Modified Optimal
Clustering Algorithm (MOCA) is developed for clear and occluded fingerprint
identification.
Disadvantage: Low accuracy and high learning time.
Software Description
The .NET Framework is a programming infrastructure created by Microsoft for building,
deploying, and running applications and services that use .NET technologies, such as
desktop applications and Web services.

The .NET Framework contains three major parts:

 the Common Language Runtime

 the Framework Class Library

 ASP.NET.

Microsoft started development of the .NET Framework in the late 1990s, originally under the
name of Next Generation Windows Services (NGWS). By late 2000 the first beta versions of
.NET 1.0 were released. The .NET Framework (pronounced dot net) is a software
framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a
large library and provides language interoperability (each language can use code written in
other languages) across several programming languages. Programs written for the .NET
Framework execute in a software environment (as contrasted to hardware environment),
known as the Common Language Runtime (CLR), an application virtual machine that
provides services such as security, memory management, and exception handling. The class
library and the CLR together constitute the .NET Framework.

An application software platform from Microsoft introduced in 2002 and commonly called
.NET ("dot net"). The .NET platform is similar in purpose to the Java EE platform, and like
Java's JVM runtime engine, .NET's runtime engine must be installed in the computer in order
to run .NET applications.

.NET Programming Languages

.NET is similar to Java because it uses an intermediate bytecode language that can be
executed on any hardware platform that has a runtime engine. It is also unlike Java, as it
provides support for multiple programming languages. Microsoft languages are C# (C
Sharp), J# (J Sharp), Managed C++, JScript.NET and Visual Basic.NET. Other languages
have been reengineered in the European version of .NET, called the Common Language
Infrastructure.
.NET Versions
.NET Framework 1.0 introduced the Common Language Runtime (CLR) and .NET
Framework 2.0 added enhancements. .NET Framework 3.0 included the Windows
programming interface (API) originally known as "WinFX," which is backward compatible
with the Win32 API. .NET Framework 3.0 added the following four subsystems and was
installed with Windows, starting with Vista. .NET Framework 3.5 added enhancements and
introduced a client-only version (see .NET Framework Client Profile). .NET Framework 4.0
added parallel processing and language enhancements.

The User Interface (WPF)

Windows Presentation Foundation (WPF) provides the user interface. It takes advantage of
advanced 3D graphics found in many computers to display a transparent, glass-like
appearance.

Messaging (WCF)

Windows Communication Foundation (WCF) enables applications to communicate with


each other locally and remotely, integrating local messaging with Web services.
Workflow (WWF)

Windows Workflow Foundation (WWF) is used to integrate applications and automate tasks.
Workflow structures can be defined in the XML Application Markup Language.
User Identity (WCS)

Windows CardSpace (WCS) provides an authentication system for logging into a Web site
and transferring personal information.
DESIGN FEATURES

Interoperability

Because computer systems commonly require interaction between newer and older
applications, the .NET Framework provides means to access functionality implemented in
newer and older programs that execute outside the .NET environment. Access
to COM components is provided in the System.Runtime.InteropServices and
System.EnterpriseServices namespaces of the framework; access to other functionality is
achieved using the P/Invoke feature.

Common Language Runtime engine

The Common Language Runtime (CLR) serves as the execution engine of the .NET
Framework. All .NET programs execute under the supervision of the CLR, guaranteeing
certain properties and behaviors in the areas of memory management, security, and exception
handling.

Language independence

The .NET Framework introduces a Common Type System, or CTS. The


CTS specification defines all possible data types and programming constructs supported by
the CLR and how they may or may not interact with each other conforming to the Common
Language Infrastructure (CLI) specification. Because of this feature, the .NET Framework
supports the exchange of types and object instances between libraries and applications written
using any conforming .NET language.

Base Class Library

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a
library of functionality available to all languages using the .NET Framework. The BCL
provides classes that encapsulate a number of common functions, including file reading and
writing, graphic rendering, database interaction, XML document manipulation, and so on. It
consists of classes and interfaces of reusable types that integrate with the CLR (Common
Language Runtime).
Simplified deployment

The .NET Framework includes design features and tools which help manage
the installation of computer software to ensure it does not interfere with previously installed
software, and it conforms to security requirements.

Security

The design addresses some of the vulnerabilities, such as buffer overflows, which
have been exploited by malicious software. Additionally, .NET provides a common security
model for all applications.

Portability

While Microsoft has never implemented the full framework on any system except
Microsoft Windows, it has engineered the framework to be platform-agnostic and cross-
platform implementations are available for other operating systems (see Silverlight and
the Alternative implementations section below). Microsoft submitted the specifications for
the Common Language Infrastructure (which includes the core class libraries, Common Type
System, and the Common Intermediate Language), the C# language and the C++/CLI
language to both ECMA and the ISO, making them available as official standards. This
makes it possible for third parties to create compatible implementations of the framework and
its languages on other platforms.
ARCHITECTURE:

Overview of the Common Language Infrastructure

Common Language Infrastructure (CLI)

The purpose of the Common Language Infrastructure (CLI) is to provide a language-


neutral platform for application development and execution, including functions
for exception handling, garbage collection, security, and interoperability. By implementing
the core aspects of the .NET Framework within the scope of the CLI, this functionality will
not be tied to a single language but will be available across the many languages supported by
the framework. Microsoft's implementation of the CLI is called the Common Language
Runtime, or CLR.
Security

.NET has its own security mechanism with two general features: Code Access
Security (CAS), and validation and verification. Code Access Security is based on evidence
that is associated with a specific assembly. Typically the evidence is the source of the
assembly (whether it is installed on the local machine or has been downloaded from the
intranet or Internet). Code Access Security uses evidence to determine the permissions
granted to the code. Other code can demand that calling code is granted a specified
permission. The demand causes the CLR to perform a call stack walk: every assembly of
each method in the call stack is checked for the required permission; if any assembly is not
granted the permission a security exception is thrown.
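The stack-walk behavior described above can be illustrated with a small simulation. This is a hedged Python sketch (the project itself is written in VB.NET): the `Assembly`, `demand`, and permission names are hypothetical stand-ins, and the real mechanism is implemented inside the CLR, not exposed as an API like this.

```python
# Conceptual simulation of a Code Access Security stack walk.
# Illustrative only; the actual check is performed by the .NET CLR.

class SecurityException(Exception):
    pass

class Assembly:
    def __init__(self, name, granted_permissions):
        self.name = name
        self.granted_permissions = set(granted_permissions)

def demand(call_stack, permission):
    """Walk every assembly on the call stack; raise if any one of them
    has not been granted the demanded permission."""
    for assembly in call_stack:
        if permission not in assembly.granted_permissions:
            raise SecurityException(
                f"{assembly.name} lacks permission: {permission}")
    return True

# Evidence-based grants: a local assembly gets more than a downloaded one.
trusted = Assembly("LocalApp", {"FileIO", "UI"})
downloaded = Assembly("InternetPlugin", {"UI"})

demand([trusted], "FileIO")                  # succeeds: every caller holds FileIO
try:
    demand([trusted, downloaded], "FileIO")  # fails: one caller lacks FileIO
except SecurityException as e:
    print("denied:", e)
```

The key point the sketch captures is that the demand fails if *any* assembly in the chain of callers lacks the permission, which prevents less-trusted code from laundering a privileged call through trusted code.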

Class library

Namespaces in the BCL[9]

System

System.Diagnostics

System.Globalization

System.Resources

System.Text

System.Runtime.Serialization

System.Data

The .NET Framework includes a set of standard class libraries. The class library is
organized in a hierarchy of namespaces. Most of the built-in APIs are part of
either System.* or Microsoft.* namespaces. These class libraries implement a large number
of common functions, such as file reading and writing, graphic rendering, database
interaction, and XML document manipulation, among others. The .NET class libraries are
available to all CLI compliant languages.

Memory management

The .NET Framework CLR frees the developer from the burden of managing memory
(allocating memory and freeing it when done); it handles memory management itself by
detecting when memory can be safely freed. Instantiations of .NET types (objects) are
allocated from the managed heap, a pool of memory managed by the CLR. When there is no
reference to an object and it cannot be reached or used, it becomes garbage, eligible for
collection. The .NET Framework includes a garbage collector which runs periodically, on a
separate thread from the application's thread, enumerating all the unusable objects and
reclaiming the memory allocated to them.
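The idea of an unreachable object becoming garbage can be demonstrated with Python's own garbage collector, which is analogous to (though implemented differently from) the generational CLR collector described above. This sketch builds a reference cycle, drops all external references, and confirms the collector reclaims it:

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.ref = None

a, b = Node(), Node()
a.ref, b.ref = b, a      # two objects referencing each other: a cycle
probe = weakref.ref(a)   # a weak reference does not keep the object alive

a = b = None             # drop all strong references: the cycle is now
                         # unreachable, i.e. garbage eligible for collection

gc.collect()             # force a collection (the CLR runs this periodically)

print(probe() is None)   # True: the unreachable object has been reclaimed
```

The weak reference acts as a probe: once the collector has run, dereferencing it returns `None`, showing the object was reclaimed even though the two objects still referenced each other.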

VB.NET

VB.NET uses statements to specify actions. The most common statement is an expression
statement, consisting of an expression to be evaluated, on a single line. As part of that
evaluation, functions or subroutines may be called and variables may be assigned new values.
To modify the normal sequential execution of statements, VB.NET provides several control-
flow statements identified by reserved keywords. Structured programming is supported by
several constructs including two conditional execution constructs
(If … Then … Else … End If and Select Case … Case … End Select) and three iterative
execution (loop) constructs (Do … Loop, For … To, and For Each).

The For … To statement has separate initialization and testing sections, both of which must
be present. (See examples below.) The For Each statement steps through each value in a list.

In addition, in Visual Basic .NET:

 There is no unified way of defining blocks of statements. Instead, certain keywords, such
as "If … Then" or "Sub" are interpreted as starters of sub-blocks of code and have
matching termination keywords such as "End If" or "End Sub".
 Statements are terminated either with a colon (":") or with the end of line. Multiple line
statements in Visual Basic .NET are enabled with " _" at the end of each such line. The
need for the underscore continuation character was largely removed in version 10 and
later versions.[2]
 The equals sign ("=") is used both for assigning values to variables and for comparison.
 Round brackets (parentheses) are used with arrays, both to declare them and to get a
value at a given index in one of them. Visual Basic .NET uses round brackets to define
the parameters of subroutines or functions.
 A single quotation mark ('), placed at the beginning of a line or after any number
of space or tab characters at the beginning of a line, or after other code on a line, indicates
that the (remainder of the) line is a comment.
CHAPTER VII

System Testing

Software Testing

Software testing is an investigation conducted to provide stakeholders with
information about the quality of the product or service under test. Software testing can also
provide an objective, independent view of the software to allow the business to appreciate
and understand the risks of software implementation. Test techniques include, but are not
limited to the process of executing a program or application with the intent of finding
software bugs (errors or other defects). The purpose of testing is to discover errors. Testing is
the process of trying to discover every conceivable fault or weakness in a work product. It
provides a way to check the functionality of components, sub-assemblies, assemblies and/or a
finished product. It is the process of exercising the software with the intent of ensuring that
the software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of tests. Each test type addresses a specific
testing requirement.

Software testing is the process of evaluating a software item to detect differences between
actual and expected output. The features of the software item are also assessed. Testing
assesses the quality of the product. Software testing is a process that should be carried out
during the development process. In other words, software testing is a verification and
validation process.

Types of testing

There are different levels during the process of testing. Levels of testing include the
different methodologies that can be used while conducting software testing. Following are
the main levels of Software Testing:
 Functional Testing.

 Non-Functional Testing.

Functional Testing
Functional Testing of the software is conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. There are five steps that are
involved when testing an application for functionality.

Steps Description

I The determination of the functionality that the intended application is meant to perform.

II The creation of test data based on the specifications of the application.

III The determination of the output based on the test data and the specifications of the
application.

IV The writing of test scenarios and the execution of test cases.

V The comparison of actual and expected results based on the executed test cases.

An effective testing practice will see the above steps applied to the testing policies of
every organization and hence it will make sure that the organization maintains the strictest of
standards when it comes to software quality.
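Steps II through V above can be sketched as a small table-driven check. This is an illustrative Python sketch (the project itself is in VB.NET), and `apply_discount` is a hypothetical function standing in for the application under test:

```python
def apply_discount(price, percent):
    """Hypothetical function under test, standing in for the application."""
    return round(price * (1 - percent / 100.0), 2)

# Steps II/III: test data and expected output derived from the specification.
test_cases = [
    {"input": (100.0, 10), "expected": 90.0},
    {"input": (59.99, 0),  "expected": 59.99},
    {"input": (20.0, 50),  "expected": 10.0},
]

# Steps IV/V: execute each case and compare actual vs. expected results.
results = []
for case in test_cases:
    actual = apply_discount(*case["input"])
    results.append(actual == case["expected"])

print(all(results))  # True when every case passes
```

Keeping the test data and expected outputs in one table makes the comparison step (V) mechanical and makes it easy to add cases as the specification grows.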

Unit Testing

This type of testing is performed by the developers before the setup is handed over to
the testing team to formally execute the test cases. Unit testing is performed by the respective
developers on the individual units of source code in their assigned areas. The developers use
test data that is separate from the test data of the quality assurance team. The goal of unit
testing is to isolate each part of the program and show that the individual parts are correct in
terms of requirements and functionality.
Limitations of Unit Testing

Testing cannot catch each and every bug in an application. It is impossible to evaluate
every execution path in every software application. The same is the case with unit testing.
There is a limit to the number of scenarios and test data that the developer can use to
verify the source code. After all options have been exhausted, there is no choice but to stop
unit testing and merge the code segment with other units.

Integration Testing

The testing of combined parts of an application to determine whether they function correctly
together is Integration testing. There are two methods of Integration Testing:
 Bottom-up Integration testing

 Top- Down Integration testing

S.No. Integration Testing Method

1 Bottom-up integration: This testing begins with unit testing, followed by tests of
progressively higher-level combinations of units called modules or builds.

2 Top-down integration: In this testing, the highest-level modules are tested first and
progressively lower-level modules are tested after that.

In a comprehensive software development environment, bottom-up testing is usually
done first, followed by top-down testing. The process concludes with multiple tests of the
complete application, preferably in scenarios designed to mimic those it will encounter on
customers' computers, systems and networks.
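The bottom-up method can be sketched as follows (Python illustration with hypothetical units `parse_amount` and `apply_tax`): the individual units are verified first, then the higher-level module that combines them is tested as a build.

```python
# Bottom-up integration sketch: verify the low-level units first,
# then the higher-level module ("build") that combines them.

def parse_amount(text):           # low-level unit 1
    return float(text.strip())

def apply_tax(amount, rate):      # low-level unit 2
    return round(amount * (1 + rate), 2)

def invoice_total(raw_amounts, rate):
    """Higher-level module under integration test: composes both units."""
    return round(sum(apply_tax(parse_amount(t), rate) for t in raw_amounts), 2)

# Unit level: each part in isolation.
assert parse_amount(" 10.0 ") == 10.0
assert apply_tax(10.0, 0.2) == 12.0

# Integration level: the combined behavior of the units working together.
assert invoice_total(["10.0", " 5.0"], 0.2) == 18.0
print("bottom-up integration checks passed")
```

Top-down integration would invert the order: `invoice_total` would be tested first against stubbed versions of the two units, which are then replaced by real implementations one at a time.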
System Testing

This is the next level in the testing and tests the system as a whole. Once all the
components are integrated, the application as a whole is tested rigorously to see that it meets
Quality Standards. This type of testing is performed by a specialized testing team. System
testing is so important because of the following reasons:

 System Testing is the first step in the testing life cycle where the application is tested as
a whole.

 The application is tested thoroughly to verify that it meets the functional and technical
specifications.

 The application is tested in an environment which is very close to the production
environment where the application will be deployed.

 System Testing enables us to test, verify and validate both the business requirements
as well as the Applications Architecture.

Regression Testing

Whenever a change is made in a software application, it is quite possible that other
areas within the application have been affected by this change. Regression testing verifies
that a fixed bug has not resulted in another functionality or business rule violation. The
intent of Regression testing is to ensure that a change, such as a bug fix, did not result in
another fault being uncovered in the application. Regression testing is so important because
of the following reasons:

 Minimize the gaps in testing when an application with changes made has to be tested.

 Testing the new changes to verify that the change made did not affect any other area
of the application.

 Mitigates Risks when regression testing is performed on the application.

 Test coverage is increased without compromising timelines.

 Increase speed to market the product.

Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the Quality
Assurance Team, which will gauge whether the application meets the intended specifications
and satisfies the client's requirements. The QA team will have a set of pre-written scenarios
and test cases that will be used to test the application. More ideas will be shared about the
application and more tests can be performed on it to gauge its accuracy and the reasons why
the project was initiated. Acceptance tests are intended not only to point out simple spelling
mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application
that will result in system crashes or major errors in the application. By performing
acceptance tests on an application, the testing team will deduce how the application will
perform in production. There are also legal and contractual requirements for acceptance of
the system.

Alpha Testing

This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing when
combined are known as alpha testing. During this phase, the following will be tested in the
application:

 Spelling Mistakes

 Broken Links

 Unclear directions

 The Application will be tested on machines with the lowest specification to test
loading times and any latency problems.
Beta Testing

This test is performed after Alpha testing has been successfully performed. In beta
testing a sample of the intended audience tests the application. Beta testing is also known as
pre-release testing. Beta test versions of software are ideally distributed to a wide audience on
the Web, partly to give the program a "real-world" test and partly to provide a preview of the
next release. In this phase the audience will be testing the following:

 Users will install, run the application and send their feedback to the project team.

 Typographical errors, confusing application flow, and even crashes.

 Getting the feedback, the project team can fix the problems before releasing the
software to the actual users.

 The more issues you fix that solve real user problems, the higher the quality of your
application will be.

 Having a higher-quality application when you release to the general public will
increase customer satisfaction.

Non-Functional Testing

This section is based upon testing the application from its non-functional
attributes. Non-functional testing involves testing the software against requirements which
are non-functional in nature but important as well, such as performance, security, and user
interface. Some of the important and commonly used non-functional testing types are as
follows:

Performance Testing

It is mostly used to identify any bottlenecks or performance issues rather than finding
bugs in the software. There are different causes which contribute to lowering the performance
of software:

 Network delay.

 Client side processing.

 Database transaction processing.

 Load balancing between servers.


 Data rendering.

Performance testing is considered one of the important and mandatory testing types in terms
of the following aspects:

 Speed (i.e. Response Time, data rendering and accessing)

 Capacity

 Stability

 Scalability

It can be either qualitative or quantitative testing activity and can be divided into different sub
types such as Load testing and Stress testing.

Load Testing

Load testing is a process of testing the behavior of the software by applying
maximum load in terms of the software accessing and manipulating large input data. It can be
done at both normal and peak load conditions. This type of testing identifies the maximum
capacity of Software and its behavior at peak time. Most of the time, Load testing is
performed with the help of automated tools such as Load Runner, AppLoader, IBM Rational
Performance Tester, Apache JMeter, Silk Performer, Visual Studio Load Test etc. Virtual
users (VUsers) are defined in the automated testing tool and the script is executed to verify
the Load testing for the Software. The quantity of users can be increased or decreased
concurrently or incrementally based upon the requirements.
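The idea of virtual users applying concurrent load can be sketched with a thread pool. This is a toy Python illustration only — a real load test would drive the deployed application with the tools named above (JMeter, LoadRunner, etc.), and `handle_request` is a hypothetical stand-in for the operation under load:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(payload):
    """Hypothetical operation under load (stands in for the application)."""
    time.sleep(0.01)          # simulated processing time
    return len(payload)

VUSERS, REQUESTS = 8, 80      # concurrent virtual users, total requests

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=VUSERS) as pool:
    # Each worker thread plays the role of one virtual user (VUser).
    results = list(pool.map(handle_request, ["x" * 100] * REQUESTS))
elapsed = time.perf_counter() - start

completed = sum(1 for r in results if r == 100)
print(f"{completed}/{REQUESTS} requests in {elapsed:.2f}s "
      f"({completed / elapsed:.0f} req/s)")
```

Raising `VUSERS` toward and beyond the expected peak, while watching throughput and error counts, is what identifies the maximum capacity and peak-time behavior described above.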
Stress Testing

This testing type includes testing the behavior of the software under abnormal conditions.
Stress testing involves taking away resources or applying load beyond the actual load limit.

The main intent is to test the Software by applying the load to the system and taking over the
resources used by the Software to identify the breaking point. This testing can be performed
by testing different scenarios such as:

 Shutdown or restart of Network ports randomly.

 Turning the database on or off.

 Running different processes that consume resources such as CPU, Memory, server
etc.

Usability Testing

This section includes different concepts and definitions of Usability testing from
Software point of view. It is a black box technique and is used to identify any error(s) and
improvements in the Software by observing the users through their usage and operation.
According to Nielsen, usability can be defined in terms of five factors: efficiency of use,
learnability, memorability, errors/safety, and satisfaction. According to him, the usability of
the product will be good and the system will be usable if it possesses these factors.

Nigel Bevan and Macleod considered that Usability is the quality requirement which
can be measured as the outcome of interactions with a computer system. This requirement
can be fulfilled and the end user will be satisfied if the intended goals are achieved effectively
with the use of proper resources. Molich in 2000 stated that a user-friendly system should
fulfil the following five goals: Easy to Learn, Easy to Remember, Efficient to Use,
Satisfactory to Use and Easy to Understand.

In addition to different definitions of usability, there are some standards and quality
models and methods which define the usability in the form of attributes and sub attributes
such as ISO-9126, ISO-9241-11, ISO-13407 and IEEE std.610.12 etc.

UI vs Usability Testing
UI testing involves testing the Graphical User Interface of the software. This
testing ensures that the GUI conforms to the requirements in terms of color, alignment,
size and other properties.

On the other hand Usability testing ensures that a good and user friendly GUI is
designed and is easy to use for the end user. UI testing can be considered as a sub part of
Usability testing.

Security Testing

Security testing involves testing the software in order to identify any flaws and gaps
from a security and vulnerability point of view. The following are the main aspects which
security testing should ensure:

 Confidentiality.

 Integrity.

 Authentication.

 Availability.

 Authorization.

 Non-repudiation.
Portability Testing

Portability testing includes testing the software with the intent that it should be reusable and
can be moved to another environment as well. The following are strategies that can be used
for portability testing:

 Transferring the installed software from one computer to another.

 Building an executable (.exe) to run the software on different platforms.

 Portability testing can be considered as one of the sub parts of System testing, as this
testing type includes the overall testing of Software with respect to its usage over
different environments.
CHAPTER VIII

Conclusion

The success of AFIS and their extensive deployment all over the world have
prompted some individuals to take extreme measures to evade identification by altering their
fingerprints. The problem of fingerprint alteration or obfuscation is very different from that of
fingerprint spoofing, where an individual uses a fake fingerprint in order to adopt the identity
of another individual. While the problem of spoofing has received substantial attention in the
literature, the problem of obfuscation has not been addressed in the biometric literature, in
spite of numerous documented cases of fingerprint alteration for the purpose of evading
identification. While obfuscation may be encountered with other biometric modalities (such
as face and iris), this problem is especially significant in the case of fingerprints due to the
widespread deployment of AFIS in both government and civilian applications and the ease
with which fingerprints can be obfuscated.

An algorithm to automatically detect altered fingerprints based on the characteristics
of the fingerprint orientation field and minutiae distribution is developed. The proposed
algorithm, based on the features extracted from the orientation field and minutiae, satisfies
the three essential requirements for an alteration detection algorithm: 1) fast operational
time, 2) high true positive rate at low false positive rate, and 3) ease of integration into
AFIS. The proposed algorithm and the NFIQ criterion were tested on a large public-domain
fingerprint database (NIST SD14), serving as natural fingerprints, and an altered fingerprint
database provided by a law enforcement agency. At a false positive rate of 0.3 percent, the
proposed algorithm can
correctly detect 66.4 percent of the subjects with altered fingerprints, while 26.5 percent of
such subjects are detected by the NFIQ algorithm.
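The orientation field underlying the detection algorithm can be estimated blockwise with the averaged squared-gradient method of Bazen and Gerez [38]. The sketch below (Python, assuming NumPy is available) is a simplified illustration of that classical estimator, not the paper's implementation:

```python
import numpy as np

def orientation_field(img, block=16):
    """Blockwise ridge orientation via averaged squared gradients
    (classical method of Bazen and Gerez [38]); simplified sketch."""
    gy, gx = np.gradient(img.astype(float))  # gradients along rows, cols
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            rows = slice(i * block, (i + 1) * block)
            cols = slice(j * block, (j + 1) * block)
            bx, by = gx[rows, cols], gy[rows, cols]
            gxy = 2.0 * np.sum(bx * by)          # averaged doubled angle terms
            gxx_yy = np.sum(bx * bx - by * by)
            # Half the doubled gradient angle, rotated 90 degrees so the
            # estimate points along the ridges rather than across them.
            theta[i, j] = 0.5 * np.arctan2(gxy, gxx_yy) + np.pi / 2
    return theta

# Synthetic vertical ridge pattern: intensity varies only along x,
# so the ridges run vertically (orientation ~ 90 degrees everywhere).
x = np.arange(64)
img = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
field = orientation_field(img)
print(np.allclose(np.degrees(field), 90.0, atol=1.0))  # True
```

In the alteration-detection setting, it is the smoothness (or lack thereof) of such a blockwise field, together with the minutiae distribution, that provides the discriminative features: natural fingerprints yield a smoothly varying field, while scarred or mutilated regions produce abrupt, inconsistent orientations.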
References

[1] J. Feng, A.K. Jain, and A. Ross, “Detecting Altered Fingerprints,” Proc. 20th Int’l Conf.
Pattern Recognition, pp. 1622-1625, Aug. 2010.

[2] D. Maltoni, D. Maio, A.K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition,
second ed. Springer-Verlag, 2009.

[3] The U.S. Department of Homeland Security, US-VISIT, http://www.dhs.gov/usvisit, 2011.

[4] The Fed. Bureau of Investigation (FBI), Integrated Automated Fingerprint Identification
System (IAFIS), http://www.fbi.gov/hq/cjisd/iafis.htm, 2011.

[5] H. Cummins, “Attempts to Alter and Obliterate Finger-prints,” J. Am. Inst. Criminal Law
and Criminology, vol. 25, pp. 982-991, 1935.

[6] Surgically Altered Fingerprints, http://www.clpex.com/images/FeetMutilation/L4.JPG, 2011.

[7] K. Singh, Altered Fingerprints,
http://www.interpol.int/Public/Forensic/fingerprints/research/alteredfingerprints.pdf, 2008.

[8] M. Hall, “Criminals Go to Extremes to Hide Identities,” USA Today,
http://www.usatoday.com/news/nation/2007-11-06-criminal-extreme_N.htm, Nov. 2007.

[9] Criminals Cutting off Fingertips to Hide IDs,
http://www.thebostonchannel.com/news/15478914/detail.html, 2008.

[10] A. Antonelli, R. Cappelli, D. Maio, and D. Maltoni, “Fake Finger Detection by Skin
Distortion Analysis,” IEEE Trans. Information Forensics and Security, vol. 1, no. 3, pp. 360-
373, Sept. 2006.

[11] K.A. Nixon and R.K. Rowe, “Multispectral Fingerprint Imaging for Spoof Detection,”
Proc. SPIE, Biometric Technology for Human Identification II, A.K. Jain and N.K. Ratha,
eds., pp. 214-225, 2005.

[12] E. Tabassi, C. Wilson, and C. Watson, “Fingerprint Image Quality,” NISTIR 7151,
http://fingerprint.nist.gov/NFIS/ir_7151.pdf, Aug. 2004.
[13] F. Alonso-Fernandez, J. Fierrez, J. Ortega-Garcia, J. Gonzalez-Rodriguez, H. Fronthaler,
K. Kollreider, and J. Bigun, “A Comparative Study of Fingerprint Image-Quality Estimation
Methods,” IEEE Trans. Information Forensics and Security, vol. 2, no. 4, pp. 734-743, Dec.
2007.

[14] R. Cappelli, D. Maio, and D. Maltoni, “Synthetic Fingerprint Database Generation,”
Proc. 16th Int’l Conf. Pattern Recognition, pp. 744-747, Aug. 2002.

[15] K. Wertheim, “An Extreme Case of Fingerprint Mutilation,” J. Forensic Identification,
vol. 48, no. 4, pp. 466-477, 1998.

[16] History of Fingerprint Removal, http://jimfisher.edinboro.edu/forensics/fire/print.html,
2011.

[17] J. Patten, Savvy Criminals Obliterating Fingerprints to Avoid Identification,
http://www.eagletribune.com/punews/local_story_062071408.html, 2008.

[18] Woman Alters Fingerprints to Deceive Taiwan Immigration Fingerprint Identification
System, http://www.zaobao.com/special/newspapers/2008/10/hongkong081002r.shtml (in
Chinese), Oct. 2008.

[19] Sweden Refugees Mutilate Fingers, http://news.bbc.co.uk/2/hi/europe/3593895.stm,
2004.

[20] Asylum Seekers Torch Skin off Their Fingertips So They Can’t Be ID’d by Police,
http://www.mirror.co.uk/sunday-mirror/2008/06/29/asylum-seekers-torch-skin-off-their-
fingertips-so-they-cant-be-id-d-by-police-98487-20624559/, 2008.

[21] Surgically Altered Fingerprints Help Woman Evade Immigration,
http://abcnews.go.com/Technology/GadgetGuide/surgicallyaltered-fingerprints-woman-
evade-immigration/story?id=9302505, 2011.

[22] Three Charged with Conspiring to Mutilate Fingerprints of Illegal Aliens,
http://www.eagletribune.com/local/x739950408/Threecharged-with-conspiring-to-mutilate-
fingerprints-of-illegal-aliens, July 2010.

[23] EURODAC: A European Union-Wide Electronic System for the Identification of
Asylum-Seekers,
http://ec.europa.eu/justice_home/fsj/asylum/identification/fsj_asylum_identification_en.htm,
2011.

[24] Neurotechnology Inc., VeriFinger, http://www.neurotechnology.com/vf_sdk.html, 2011.

[25] NIST Special Database 4, NIST 8-Bit Gray Scale Images of Fingerprint Image Groups
(FIGS), http://www.nist.gov/srd/nistsd4.htm, 2011.

[26] J.W. Burks, “The Effect of Dermabrasion on Fingerprints: A Preliminary Report,”
Archives of Dermatology, vol. 77, no. 1, pp. 8-11, 1958.

[27] Men in Black, http://www.imdb.com/title/tt0119654/, 1997.

[28] M.V. de Water, “Can Fingerprints Be Forged?” The Science NewsLetter, vol. 29, no.
774, pp. 90-92, 1936.

[29] M. Wong, S.-P. Choo, and E.-H. Tan, “Travel Warning with Capecitabine,” Annals of
Oncology, vol. 20, p. 1281, 2009.

[30] K. Nandakumar, A.K. Jain, and A. Ross, “Fusion in Multibiometric Identification
Systems: What about the Missing Data?,” Proc. Second Int’l Conf. Biometrics, pp. 743-752,
June 2009.

[31] H. Plotnick and H. Pinkus, “The Epidermal versus the Dermal Fingerprint: An
Experimental and Anatomical Study,” Archives of Dermatology, vol. 77, no. 1, pp. 12-17,
1958.

[32] Altered Fingerprints Detected in Illegal Immigration Attempts,
http://www.japantoday.com/category/crime/view/alteredfingerprints-detected-in-illegal-
immigration-attempts, 2011.

[33] NIST Special Database 14, NIST Mated Fingerprint Card Pairs 2 (MFCP2),
http://www.nist.gov/srd/nistsd14.htm, 2011.

[34] J. Zhou and J. Gu, “A Model-Based Method for the Computation of Fingerprints’
Orientation Field,” IEEE Trans. Image Processing, vol. 13, no. 6, pp. 821-835, 2004.
[35] S. Huckemann, T. Hotz, and A. Munk, “Global Models for the Orientation Field of
Fingerprints: An Approach Based on Quadratic Differentials,” IEEE Trans. Pattern Analysis
and Machine Intelligence, vol. 30, no. 9, pp. 1507-1519, Sept. 2008.

[36] Y. Wang and J. Hu, “Global Ridge Orientation Modeling for Partial Fingerprint
Identification,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp.
72-87, Jan. 2010.

[37] C. Watson, M. Garris, E. Tabassi, C. Wilson, R.M. McCabe, S. Janet, and K. Ko, “NIST
Biometric Image Software,” http://www.nist.gov/itl/iad/ig/nbis.cfm, 2011.

[38] A.M. Bazen and S.H. Gerez, “Systematic Methods for the Computation of the
Directional Fields and Singular Points of Fingerprints,” IEEE Trans. Pattern Analysis and
Machine Intelligence, vol. 24, no. 7, pp. 905-919, July 2002.

[39] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” Proc.
IEEE Computer Vision and Pattern Recognition Conf., vol. 1, pp. 886-893, June 2005.

[40] C.-C. Chang and C.-J. Lin, LIBSVM: A Library for Support Vector Machines, software
http://www.csie.ntu.edu.tw/cjlin/libsvm, 2001.

[41] L.M. Wein and M. Baveja, “Using Fingerprint Image Quality to Improve the
Identification Performance of the U.S. Visitor and Immigrant Status Indicator Technology
Program,” Proc. Nat’l Academy of Sciences USA, vol. 102, no. 21, pp. 7772-7775, 2005.

[42] A. Ross, K. Nandakumar, and A.K. Jain, Handbook of Multibiometrics. Springer Verlag,
2006.

[43] The FBI’s Next Generation Identification (NGI), http://www.fbi.gov/hq/cjisd/ngi.htm,
2011.

[44] DoD Biometrics Task Force, http://www.biometrics.dod.mil/, 2011.

[45] R. Singh, M. Vatsa, H.S. Bhatt, S. Bharadwaj, A. Noore, and S.S. Nooreyezdan, “Plastic
Surgery: A New Dimension to Face Recognition,” IEEE Trans. Information Forensics and
Security, vol. 5, no. 3, pp. 441-448, Sept. 2010.
