Copy-Move Forgery Detection in Digital Images: Progress and Challenges
Sunil Kumar, P. K. Das, Shally
Faculty of Engineering & Technology
Mody Institute of Technology & Science
Lakshmangarh, Sikar, India

S. Mukherjee
Department of Electrical Engineering
Indian Institute of Technology
Roorkee, Uttarakhand, India

Abstract— With the advancement of technology and the easy availability of imaging tools, it is nowadays not difficult to manipulate digital images in order to hide information or create misleading images. Image forgery detection is currently one of the active research fields of image processing, and many research papers have been published in recent years. Image forgery has already been categorized, and Copy-Move forgery is one of the most frequently used techniques. This paper reviews the existing detection techniques and attempts to list and highlight almost all of the proposed methods along with their key features.

Keywords- Blind techniques, duplicate region detection, forgery detection, image forensics, image
manipulation.

I. INTRODUCTION
Digital images play a very important role in the modern world, in areas like forensic investigation, insurance processing, surveillance systems, intelligence services, medical imaging and journalism. But the basic requirement for believing what we see is that the images are authentic. With the advancement of technology and the availability of fast computing resources, it is not very difficult to manipulate or forge digital images, and the availability of certain software tools makes the problem even more menacing. Despite this, there is no method available to detect all types of tampering with accuracy. Before discussing forgery detection techniques, it is necessary to know about the different types of tampering applied to digital images.
There are many ways to categorize image tampering, based on different points of view (for a categorization see, for example, [1]). Generally, we can say that the operations most often performed in image tampering are:
 Deleting or hiding a region in the image.
 Adding a new object into the image.
 Misrepresenting the image information.
Copy-move tampering is one of the most frequently used techniques to hide or manipulate the content of an image: some part of the same image, or of some other image, is pasted onto another part of the image. When the pasted region comes from a different image, statistical methods may work, but if the pasted region belongs to the same image, the forgery is quite difficult to detect. Many methods have been suggested to detect this type of forgery. Some Copy-Move forgery methods are highlighted in [2], but that survey does not cover all of them. In the next section, the published methods are listed with their key features.
II. METHODS
J. Fridrich, David Soukal and Jan Lukas [3] suggested one of the earliest methods to detect copy-move forgery. The first method suggested in the paper is the exact match method: a square block of B×B pixels is slid by one pixel along the image of size M×N, from the upper left corner down to the lower right corner, and the block pixels are stored in an array with B² columns and (M−B+1)(N−B+1) rows. Two identical rows in this array correspond to identical blocks. To identify identical rows, the rows of the matrix are
sorted lexicographically, which can be done in O(MN log₂(MN)) steps. This method detects only exact duplication of a region; in a sophisticated manipulation an exact match is hard to find, since the pasted region is usually degraded afterwards, e.g. by blurring or the addition of random noise. So another method, called robust match, is suggested, in which quantized DCT coefficients are matched instead of pixel values. The detection process is the same as for the exact match: for each block, the DCT transform is calculated, and the DCT coefficients, quantized by a quantization factor Q, are stored in the array instead of the pixel values. The degree of robustness can be controlled by varying Q, though a higher value of Q may lead to false matches. The robust match algorithm also looks at the mutual positions of the matching block pairs and outputs a specific block pair only if there are many other matching pairs in the same mutual position. Towards this goal, for each matching block pair with positions (i1, i2) and (j1, j2), a shift vector s = (s1, s2) = (i1 − j1, i2 − j2) is stored in a separate list, and whenever the same shift occurs again, a counter C(s) associated with that shift vector is incremented by one. Finally, the algorithm finds all shift vectors s(1), s(2), …, s(K) whose occurrence exceeds a user-defined threshold T: C(s(r)) > T for all r = 1, 2, …, K. The value of T is related to the size of the smallest segment that can be identified by the algorithm. Matching blocks that contribute to a specific shift vector are colored with the same color and identified as segments that might have been copied and pasted. The method works for grayscale images; color images are first converted to grayscale by the formula I = 0.299R + 0.587G + 0.114B. The method is not very robust to post-copy-paste operations, but it is one of the landmark methods for copy-move forgery detection.
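For illustration, a minimal Python sketch of the robust-match pipeline described above is given below (block DCT, quantization, lexicographic sorting, shift-vector voting). The block size B, quantization step Q and threshold T are illustrative values, not the settings used in [3].

```python
# Sketch of the robust-match detector of Fridrich et al. [3].
# B, Q and T are illustrative parameters, not values from the paper.
import numpy as np
from scipy.fft import dctn
from collections import Counter

def robust_match(gray, B=16, Q=8, T=50):
    M, N = gray.shape
    feats, pos = [], []
    for i in range(M - B + 1):
        for j in range(N - B + 1):
            block = gray[i:i + B, j:j + B].astype(float)
            feats.append(np.round(dctn(block, norm='ortho') / Q).ravel())
            pos.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])          # lexicographic sort of the rows
    shifts, pairs = Counter(), []
    for a, b in zip(order[:-1], order[1:]):    # identical rows end up adjacent
        if np.array_equal(feats[a], feats[b]):
            s = (pos[a][0] - pos[b][0], pos[a][1] - pos[b][1])
            if s[0] < 0 or (s[0] == 0 and s[1] < 0):
                s = (-s[0], -s[1])             # normalize the shift direction
            shifts[s] += 1
            pairs.append((pos[a], pos[b], s))
    # keep only pairs whose shift vector occurs more than T times
    return [p for p in pairs if shifts[p[2]] > T]
```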

Alin C. Popescu and H. Farid [4] suggested a similar method, based upon a PCA [5] representation of each block instead of the DCT representation of [3]. The method is based upon the fact that the PCA representation is more
immune to random noise and JPEG compression. In the suggested method, an image with N pixels is tiled with overlapping blocks of b pixels ($\sqrt{b} \times \sqrt{b}$ in dimension), each of which is assumed to be considerably smaller than the size of the duplicated regions to be detected. Let $\vec{x}_i$, $i = 1, \ldots, N_b$, denote these blocks in vectorized form, where $N_b = (\sqrt{N} - \sqrt{b} + 1)^2$. These image blocks are represented by principal component analysis (PCA) [5]. If $C = \sum_{i=1}^{N_b} \vec{x}_i \vec{x}_i^T$ is the covariance matrix (the blocks $\vec{x}_i$ being zero-mean), the eigenvectors $\vec{e}_j$ of the matrix C, with corresponding eigenvalues $\lambda_j$ satisfying $C \vec{e}_j = \lambda_j \vec{e}_j$, define the principal components, where $j = 1, \ldots, b$ and $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_b$. The eigenvectors $\vec{e}_j$ form a new linear basis for each image block: $\vec{x}_i = \sum_{j=1}^{b} a_j \vec{e}_j$, where $a_j = \vec{x}_i^T \vec{e}_j$, and $\vec{a} = (a_1, a_2, \ldots, a_b)$ is the new representation of each image block. The dimensionality of this representation can be reduced by simply truncating the sum in the above equation to the first $N_t$ terms. This reduced-dimension representation therefore provides a convenient space in which to identify similar blocks in the presence of corrupting noise, since truncation of the basis removes minor intensity variations. The accuracy of the method is good except for small block sizes and low JPEG qualities.
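A short sketch of this PCA representation is given below, assuming the blocks are already supplied as rows of a matrix; the truncation value Nt is an illustrative choice. The reduced rows can then be sorted lexicographically and compared, exactly as in the DCT-based method.

```python
# Sketch of the truncated PCA block representation of [4].
import numpy as np

def pca_block_features(blocks, Nt=8):
    """blocks: (Nb, b) array, one vectorized image block per row."""
    X = blocks - blocks.mean(axis=0)          # enforce the zero-mean assumption
    C = X.T @ X                               # covariance matrix (up to a constant)
    _, eigvecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :Nt]            # the Nt principal components
    return X @ top                            # reduced representation (a_1..a_Nt)
```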

W. Luo, J. Huang and Guoping Qiu [6] suggested a more robust method; the authors claimed lower computational complexity and more robustness than [3] and [4] against stronger attacks and various kinds of post-copying manipulations, such as lossy compression, noise contamination, blurring and combinations of these operations. In this method the input image is split into overlapping blocks of b × b pixels. Assuming the image is an M × N color image, there are S = (M−b+1) × (N−b+1) blocks. For each block Bi (i = 1, 2, …, S), seven characteristic features cj (j = 1, 2, …, 7) are computed:
i) c1, c2, c3 are the averages of the red, green and blue components, respectively.
ii) In the Y channel (Y = 0.299R + 0.587G + 0.114B), the block is divided into two equal parts along four directions, as shown in Figure 1, and c4, c5, c6, c7 are computed as ci = sum(part(1)) / (sum(part(1)) + sum(part(2))), i = 4, 5, 6, 7.
These characteristic features do not change significantly after common processing operations. For each block Bi, a block characteristics vector V(i) = (c1(i), c2(i), c3(i), c4(i), c5(i), c6(i), c7(i)) is computed and saved in an array A.
Figure 1: A block divided into two equal parts along four directions.
The array A is lexicographically sorted. For every pair Bi and Bj, their similarity is computed using their respective characteristic feature vectors V(i) and V(j) in A as follows:
Let Diff(k) = |ck(i) − ck(j)|. If the following conditions are satisfied (where the P(k)'s, t1 and t2 are preset thresholds):
(i) Diff(k) < P(k),
(ii) Diff(1) + Diff(2) + Diff(3) < t1, and
(iii) Diff(4) + Diff(5) + Diff(6) + Diff(7) < t2,
and if the shift vector between Bi and Bj is greater than a preset threshold L, then the pair is recorded as similar blocks. To reduce false positives, a count is also maintained for each shift vector, and only shift vectors whose count exceeds a threshold indicate a copied region. For block sizes larger than 64×64, high accuracy is claimed.
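The sketch below computes the seven block features following the description above; [6] does not fully specify how the two diagonal splits handle the diagonal pixels, so their assignment here is an assumption.

```python
# Sketch of the seven characteristic features of Luo et al. [6].
import numpy as np

def luo_features(block_rgb):
    """block_rgb: (b, b, 3) array holding the R, G, B planes."""
    r, g, b = [block_rgb[..., k].astype(float) for k in range(3)]
    c1, c2, c3 = r.mean(), g.mean(), b.mean()
    y = 0.299 * r + 0.587 * g + 0.114 * b
    n = y.shape[0]
    splits = [
        (y[:n // 2, :], y[n // 2:, :]),                  # horizontal split
        (y[:, :n // 2], y[:, n // 2:]),                  # vertical split
        (np.triu(y), np.tril(y, -1)),                    # main diagonal (diag -> part 1, assumed)
        (np.triu(y[:, ::-1]), np.tril(y[:, ::-1], -1)),  # anti-diagonal (assumed)
    ]
    feats = [c1, c2, c3]
    for p1, p2 in splits:
        feats.append(p1.sum() / (p1.sum() + p2.sum() + 1e-12))
    return np.array(feats)
```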

Aaron Langille and Minglun Gong [7] suggested a method for duplicated-region detection using zero-mean normalized cross-correlation (ZNCC) [8]; to reduce the similar-block search time, kd-tree [9], [10] sorting is used. The segmentation process is illustrated in Figure 2. For a given input image of resolution M×N, the image is divided into NB = (M − B + 1) × (N − B + 1) blocks of size B×B. To save memory, the pixel intensity data (Figure 2b) is not stored in the block data structure; instead, only an array containing the top-left pixel of each block (Figure 2c) is stored, and the intensities of the pixels in a block are referenced from the original image as needed. To reduce the computational cost, a kd-tree based sorting approach groups blocks with similar intensity patterns together, so that for a given block only a search within a local neighborhood, referred to as the Neighborhood Search Size (Nss), is needed.

Figure 2: A 5×4 pixel image (a) is segmented into 2×2 pixel blocks. The blocks are logically organized as vectors in array (b). In order to conserve memory space, only the coordinates of the top left corner of each block are actually stored in array (c). Blocks can be referenced when needed using the coordinates and the original image.

The resulting algorithm therefore has a complexity of O(NB × Nss), where Nss << NB; the average-case complexity of the kd-tree sorting is O(NB × log(NB/Nss)). The zero-mean normalized cross-correlation function is used as the similarity measure, as it is quite robust for matching similar patterns in the presence of minor noise and the intensity variations caused by lossy JPEG compression. For each block Bi in the array, the cross-correlation between Bi and Bj is computed, where j = i+1, …, i+Nss. If the result is less than the specified ZNCC threshold, or less than the previously found maximum ZNCC value, the pair is ignored; this keeps only the best matches within the Nss neighborhood and discards the rest. The x and y coordinates of both blocks that produce the highest ZNCC meeting or exceeding the ZNCC threshold are stored. The coordinate difference between the two blocks is then used to create two 32-bit colors, one for each block. Coloring based on the offset between the blocks exploits the fact that neighboring pixels in a duplicated region have the same offset when correctly matched. As a result, if a duplicated region is present in an image, the copy source and paste destination appear as two monochromatic clusters of pixels, whereas colored noise appears as randomly colored pixels with few or no like-colored neighbors. Noise can occur for a number of reasons:
(i) When the ZNCC threshold is too low, random pairs with similar color that are not actually part of a duplicated region may incorrectly be considered matches; increasing the ZNCC threshold helps to decrease this noise.
(ii) Images with large patches of solid color or naturally occurring patterns can cause noise and even small clusters of like-colored pixels; increasing the block size and/or the ZNCC threshold helps to minimize or eliminate these problems.
When all blocks have been processed and all matches have been colored, an output image is produced, and a visual inspection for large groups of like-colored pixels can be performed to detect the presence of a duplicated region. Too many noisy pixels make the inspection process difficult, so morphological refinements like dilation and erosion are performed. The algorithm has the limitation of not working in the presence of rotation or scaling of the duplicated region.
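The similarity measure itself is compact; a minimal sketch:

```python
# Zero-mean normalized cross-correlation between two equal-sized blocks,
# the similarity measure used in [7]; values close to 1 indicate a match.
import numpy as np

def zncc(b1, b2, eps=1e-12):
    a = b1.astype(float) - b1.mean()
    b = b2.astype(float) - b2.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))
```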
Guohui Li, Qiong Wu, Dan Tu and Shaojie Sun [11] suggested a method for detecting duplicated regions in which the block-feature dimension is reduced using the DWT (Discrete Wavelet Transform) [12] and SVD (Singular Value Decomposition) [13], further improving the time complexity of the detection algorithm. The problem in the
methods suggested in [3, 4, 6, 7] is that the number of blocks is large, as they are extracted from the original image. The DWT is a multilevel decomposition technique, localized in both space and frequency; this localization results in a number of useful applications, such as data compression, detecting features in images, and removing noise [12]. In this method the image is first decomposed through the DWT into a series of wavelet coefficients corresponding to the image's spatio-frequency sub-bands, as shown in Figure 3. Let $I_\Phi^j$ denote the sub-band at resolution level j with orientation Φ ∈ {LL, LH, HL, HH}. As is well known, most of the image energy is concentrated in the low-frequency sub-band $I_{LL}^j$, whose size is only $1/4^j$ of the original image size. The sliding-window operation is therefore applied only to the $I_{LL}^j$ sub-band, and SVD is used to extract the features of all blocks.

Figure 3

SVD definition: any real m × n matrix A (in general, m ≥ n) can be decomposed uniquely as $A = U\Lambda V^T$, where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, and Λ is an m × n matrix whose off-diagonal entries are all zero and whose diagonal elements satisfy $\sigma_1 \ge \sigma_2 \ge \sigma_3 \ge \cdots \ge \sigma_n \ge 0$. It can be shown that r = rank(A) equals the number of nonzero singular values, and $\sigma_i$, i = 1, …, r, are the singular values (SVs) in descending order [13]. There are two purposes for using SVD: (1) the SV vector is a unique, stable representation of a block, optimal for a given image in the sense that the energy packed into a given number of transform coefficients is maximized; (2) it further reduces the feature dimension from m × n to r. Applying SVD to every block yields an Nb × r matrix (Nb is the number of overlapping blocks), which is lexicographically sorted to find matching blocks, just like [3]. Although the authors claim high accuracy in the presence of JPEG compression, robustness against scaling and rotation of the duplicated region is not mentioned.
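A minimal sketch of the two ingredients, using the PyWavelets package; the wavelet ('haar') and the number of decomposition levels are assumptions, as [11] does not fix them here.

```python
# Sketch of the feature extraction of [11]: DWT low-frequency sub-band
# followed by singular values of each sliding block.
import numpy as np
import pywt

def ll_subband(gray, levels=1):
    ll = gray.astype(float)
    for _ in range(levels):
        ll, _ = pywt.dwt2(ll, 'haar')     # keep LL, discard the detail sub-bands
    return ll

def sv_feature(block):
    # the singular values form a compact, stable descriptor of the block
    return np.linalg.svd(block, compute_uv=False)
```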

A.N. Myna, M.G. Venkateshmurthy and C.G. Patil [14] suggested another method based on the discrete wavelet transform, in which rectangular coordinates are first converted to log-polar coordinates to counter the effects of rotation and scaling. In this approach, detection of copy-move forgery is done in two phases. In the first phase, the exhaustive search for identical blocks is performed only on a reduced-dimension representation of the image, obtained by applying the Discrete Wavelet Transform (DWT) up to a specified level L. A square of b × b pixels is slid by one pixel along the image, from the upper left corner down to the lower right corner. For each position of the b × b block, the block is mapped onto log-polar coordinates, and the resulting pixel values are extracted row-wise into a row of a two-dimensional array A with b² columns and (M−b+1) × (N−b+1) rows; each row corresponds to one position of the sliding block. To identify the matching blocks, the rows of matrix A are lexicographically sorted, which brings similar rows close together. Each row of the sorted matrix is then compared with a certain number of rows above and below it, using phase correlation [15] as the similarity criterion. The maximum phase-correlation value for each block is considered, and the top-left positions of the reference block and the matching block are saved as a row in a matrix only if this value exceeds a preset threshold t. In the second phase, the saved blocks are iteratively compared at each level of the wavelet transform, and the final match is performed on the original image itself. This saves a considerable amount of time and also improves accuracy as we move up to the higher-resolution images. Since the size of the image doubles at each iteration, the value of b is also doubled in each iteration. The x and y coordinates of the blocks at level L are mapped to the previous level L−1 by the formulas X(L−1) = X(L) × 2 − 1 and Y(L−1) = Y(L) × 2 − 1. The approach works even
if the pasted region has undergone scaling and rotation. Another SVD-based method, similar to [11], is suggested in [16], but there the SVD [17] features are extracted from overlapping blocks of the original image, and the features of each block are organized in a kd-tree. After regions are represented as r-dimensional SV feature vectors u and v, the Euclidean distance D(u, v) is used as the similarity measure between these vectors. Due to the kd-tree data structure, the time complexity is better than [3, 4, 6] and [11]. Jing Zhang, Zhanlei Feng and Yuting Su [18] suggested another method based on the DWT to reduce the dimension; however, block matching is not used. Instead, phase correlation is computed to estimate the spatial offset between the copied and pasted regions: the reduced-dimension image is divided into four non-overlapping sub-images, the phase correlation between every two sub-images is calculated, and the peaks are extracted as shown in Figure 4. The copy-move regions are then located by pixel matching. The method works even for compressed images, but for highly compressed images the comparison between copied and pasted regions
has to be done on the lower-frequency components of the image, obtained by computing four levels of the DWT. The immunity of the method against scaling and rotation is not mentioned.

Figure 4: Phase-correlation peaks (a) in the ideal case, (b) in the real case, (c) in the no-matching case.
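Phase correlation, the offset estimator used by [14] and [18], can be sketched in a few lines: a sharp peak in the inverse FFT of the normalized cross-power spectrum reveals the spatial offset between two equal-sized regions.

```python
# Sketch of phase correlation between two equal-sized grayscale regions.
import numpy as np

def phase_correlation(img1, img2, eps=1e-12):
    F1 = np.fft.fft2(img1.astype(float))
    F2 = np.fft.fft2(img2.astype(float))
    cross = F1 * np.conj(F2)
    surface = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return peak, surface.max()            # estimated offset and peak strength
```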

B. Mahdian and S. Saic [19] suggested a method based on blur moment invariants [20]. The method focuses on detecting duplicated regions in the presence of post-copy-paste operations such as blurring and noise addition; the key property is that slight blurring barely affects the central blur moment invariants. The method is based upon the following steps:
i) Tiling the image with overlapping blocks,
ii) Blur moment invariants representation of the overlapping blocks,
iii) Principal component transformation,
iv) kd-tree representation,
v) Blocks similarity analyses,
vi) Duplicated regions map creation.
This method begins with the image being tiled by blocks of R × R pixels. Blocks are assumed to be smaller than the size of the duplicated regions to be detected. Blocks are slid by one pixel, starting at the upper left corner and ending at the bottom right corner, so the total number of overlapping blocks for an image of M × N pixels is (M−R+1) × (N−R+1). Let $\mu_{pq}$ be the two-dimensional (p+q)-th order central moment of the image function f(x, y), defined as $\mu_{pq} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - x_t)^p (y - y_t)^q f(x, y)\, dx\, dy$, where $x_t = m_{10}/m_{00}$ and $y_t = m_{01}/m_{00}$, and let $m_{pq}$ be the two-dimensional (p+q)-th order geometric moment, $m_{pq} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^p y^q f(x, y)\, dx\, dy$. By applying the algorithm derived and described in [20], blur invariants based on central moments of any order can be constructed using the recursive relation
$$B(p, q) = \mu_{pq} - \alpha\,\mu_{qp} - \frac{1}{\mu_{00}} \sum_{n=0}^{K} \sum_{i=m_1}^{m_2} \binom{p}{t-2i}\binom{q}{2i}\, B(p-t+2i,\, q-2i)\, \mu_{t-2i,\,2i},$$
where $K = \lfloor (p+q-4)/2 \rfloor$, $t = 2(K - n + 1)$, $m_1 = \max\!\left(0, \left\lceil \tfrac{t-p+1}{2} \right\rceil\right)$, $m_2 = \min\!\left(\lfloor t/2 \rfloor, \lfloor q/2 \rfloor\right)$, and $\alpha = 1$ if p+q is even, $\alpha = 0$ if p+q is odd. The proposed algorithm uses 24 blur invariants up to
seventh order to create the feature vector B = (B1, B2, …, B24) of each block. In the case of an RGB image, the dimension of the feature vector is 72 (24 invariants per channel). Using the principal component transformation (PCT), this dimension is reduced; typically the new orthogonal space has dimension 9 (the fraction of variance ignored along the principal axes is set to 0.01). In PCT, the orthogonal basis is given by the eigenvectors of the covariance matrix of the original vectors, so it can easily be computed on very large data sets; note that PCT preserves the Euclidean distances between blocks. All blocks are then put into a kd-tree representation to search for similar blocks. The similarity measure employed is $s(B_i, B_j) = \frac{1}{1 + \rho(B_i, B_j)}$, where ρ is the Euclidean distance $\rho(B_i, B_j) = \left(\sum_{k=1}^{\dim} (B_i[k] - B_j[k])^2\right)^{1/2}$. For each analyzed block represented by the feature vector B, all blocks
with a similarity equal to or larger than the threshold T are looked up. The method finds all similar blocks for each one (similar to a nearest-neighbors search) and analyses their neighborhoods; this is done efficiently using the kd-tree structure created in the previous step. If s(Bi, Bj) ≥ T, where T is the minimum required similarity, the neighborhoods of Bi and Bj are also analyzed. Note that the threshold T plays a very important role: it expresses the degree of reliability with which blocks Bi and Bj correspond to each other, so the choice of T directly affects the precision of the results. Due to the possible presence of additive noise, boundary effects, or JPEG compression, this threshold should not be set to 1. After two blocks with the required similarity have been found, a verification step begins, in which similar blocks with different neighborhoods are eliminated: if s(B(i,j), B(k,l)) ≥ T but $\sqrt{(i-k)^2 + (j-l)^2} \le D$, these blocks are not analyzed further and are not marked as duplicated. The threshold D is a user-defined parameter determining the minimum image distance between duplicated regions. Compared with [4] and [6], the method is more robust to post-processing such as JPEG compression, but its time complexity is on the higher side.
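A sketch of the similarity stage is given below, using SciPy's kd-tree; note that s(Bi, Bj) ≥ T is equivalent to a Euclidean radius of 1/T − 1, and the values of T and D here are illustrative.

```python
# Sketch of the kd-tree block-similarity search of Mahdian and Saic [19].
import numpy as np
from scipy.spatial import cKDTree

def similar_pairs(features, positions, T=0.97, D=30):
    radius = 1.0 / T - 1.0                  # s >= T  <=>  rho <= 1/T - 1
    tree = cKDTree(features)
    pairs = []
    for i, neighbours in enumerate(tree.query_ball_point(features, r=radius)):
        for j in neighbours:
            if j <= i:
                continue                    # consider each pair only once
            dy = positions[i][0] - positions[j][0]
            dx = positions[i][1] - positions[j][1]
            if dx * dx + dy * dy >= D * D:  # verification: discard blocks too close
                pairs.append((i, j))
    return pairs
```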

B. Dybala, B. Jennings and D. Letscher [21] developed an algorithm that detects duplication of a region by detecting the filters used to smooth the pasted region. The steps, taken in sequence, are as follows:
i). Apply the appropriate filter to the image, e.g. the Laplacian.
ii). Find each B x B block in the image.
iii). Insert each block into a kd-tree. For each block find its closest match discarding those with root mean
square error greater than EMAX.
iv). Cluster block pairs using hierarchical clustering of their position vectors and a distance cutoff of DMAX.
v). Remove clusters with fewer than CMIN blocks.
The input consists of an image and four parameters: B (block size), EMAX (similarity threshold), DMAX (limiting distance for clustering of similar blocks) and CMIN (minimum number of blocks required for a cluster). After the correct filter is applied to the image, a two-stage procedure is used: first, rough matches are found between blocks; these pairings are then processed to remove false positives. The choice of these parameters affects the quality of the paired regions found by the algorithm. To detect Poisson cloning, the Laplacian needs to be evaluated, and for the healing brush the bi-Laplacian, i.e. the Laplacian applied twice. There are multiple ways to estimate these derivatives: simple filters follow directly from the definition of the derivative, but they are numerically sensitive and deal poorly with noise. An alternative is to use a 7-tap or higher-order derivative filter [22], which gives more robust estimates; this filter, applied twice to obtain the bi-Laplacian, is the one used in this method. The proposed algorithm detects the use of filters on sufficiently large regions to establish copy-move forgery, and also shows some robustness to high-quality compression.
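For illustration, the filtering step can be sketched as below; [21] prefers the robust higher-order derivative filters of [22], whereas the simple 3×3 kernel here is only the textbook estimate.

```python
# Sketch of step (i) of [21]: estimate the Laplacian (for Poisson cloning)
# and the bi-Laplacian (for the healing brush) before block matching.
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian(gray):
    return convolve(gray.astype(float), LAPLACIAN, mode='nearest')

def bilaplacian(gray):
    return laplacian(laplacian(gray))      # the Laplacian applied twice
```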

Weihai Li, Yuan Yuan and Nenghai Yu [23] proposed a method for detecting copy-move forgery in JPEG images; it applies when the manipulation is performed on a JPEG image and the target image is also JPEG. As JPEG is one of the formats most frequently used by cameras, the method is quite useful. In [24] a method for detecting doctored JPEG images was suggested, but it cannot detect an image that has been compressed three times. In [25] a method based on the BA (Block Artifact) [26, 27] is suggested, but the BA itself varies across an image. The method of [23] uses BAG (Block Artifact Grid) mismatches and also works for multiply compressed images. A DCT grid is the set of horizontal and vertical lines that partition an image into blocks, and a BAG is the grid embedded in an image where the block artifact appears; in undoctored images the DCT grid and the BAG coincide. When an image slice is moved, the BAG within it also moves, and since the slice must be placed at a particular position to keep the forgery visually imperceptible, the BAG alignment usually cannot be preserved. Thus, the method first locates the BAG and then checks whether it mismatches; once a BAG mismatch is confirmed, the image can be declared doctored. To locate the BAG, a measure called the Local Effect (LE) is defined.
Suppose the luminance of the pixels in an 8×8 window is $[S_{ij}]$ ($0 \le i, j \le 7$), and $[S_{uv}]$ ($0 \le u, v \le 7$) are the corresponding DCT coefficients, given by
$$S_{uv} = \frac{\lambda_u \lambda_v}{8} \sum_{i=0}^{7} \sum_{j=0}^{7} S_{ij} \cos\frac{u(2i+1)\pi}{16} \cos\frac{v(2j+1)\pi}{16}, \qquad \lambda_u = \begin{cases} 1 & u = 0,\\ \sqrt{2} & u \neq 0, \end{cases}$$
and similarly for $\lambda_v$. The local effect is then defined from the right-column and bottom-row AC coefficients:
$$LE = \frac{\sum_{u=7 \text{ and/or } v=7} S_{uv}^2}{S_{00}^2}.$$
By sliding the window over the whole image, a local effect map of LE can be obtained to confirm forgery. To
extract the BAG more clearly, the local minimum points can be marked and the cross-points of the BAG obtained. The method works even when the copied area does not belong to the same image.

Hailing Huang, W. Guo and Yu Zhang [28] suggested a new approach using the SIFT (Scale Invariant Feature Transform) algorithm. The method works by first extracting the SIFT descriptors of an image. The key of the algorithm is that SIFT descriptors [29] are invariant to changes in illumination, rotation, scaling, etc., so the SIFT descriptors of the copied and pasted regions can be matched to detect the tampering. The SIFT algorithm extracts distinctive features of local image patches which are invariant to image scale and rotation and robust to changes in noise, illumination, distortion and viewpoint. As described in [29], it consists of four major steps: (1) scale-space extrema detection; (2) keypoint localization; (3) orientation assignment; (4) keypoint descriptor. In order to efficiently detect potential interest points that are invariant to scale and orientation (called keypoints in the SIFT framework), the method uses the scale-space extrema of the Difference-of-Gaussians (DoG) function convolved with the image, $D(x, y, \sigma)$, which can be computed from the difference of two nearby scales separated by a constant multiplicative factor k: $D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(x, y)$ for input image I(x, y) and Gaussian function $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2+y^2)/2\sigma^2}$, where σ is the scale-space factor. The
convolved images are grouped by octave, an octave corresponding to a doubling of σ, and the value of k is selected so that a fixed number of blurred images is obtained per octave; this also ensures that the same number of DoG images is generated per octave. Once the DoG images have been obtained, keypoints are identified as local minima or maxima of the DoG images across scales. This is done by comparing each pixel in the DoG images to its eight neighbors at the same scale and the nine corresponding neighbors in each of the two neighboring scales; if the pixel value is the maximum or minimum among all compared pixels, it is selected as a candidate keypoint. Scale-space extrema detection produces too many keypoint candidates, some of which are unstable, so the keypoints are filtered and only the stable ones retained. To make the representation invariant to orientation, the keypoint orientation is calculated from an orientation histogram of local gradients in the closest smoothed image $L(x, y, \sigma)$. For each image sample at the keypoint's scale σ, the gradient magnitude m(x, y) and orientation θ(x, y) are computed using pixel differences: $m(x, y) = \sqrt{L_1^2 + L_2^2}$ and $\theta(x, y) = \arctan(L_2 / L_1)$, where $L_1 = L(x+1, y, \sigma) - L(x-1, y, \sigma)$, $L_2 = L(x, y+1, \sigma) - L(x, y-1, \sigma)$, and $L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)$ is the scale-space image of I(x, y). After extracting the SIFT keypoints from an unknown input image, these distinctive keypoints are matched against each other to detect copy-move forgeries: if any matches are found, the input image contains a copy-move forgery. The method works reasonably well down to JPEG quality 40 and an SNR of 20 dB; however, for very small duplicated regions false positives become a problem.
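Using OpenCV, a matching scheme in this spirit can be sketched by matching an image's SIFT descriptors against themselves; the ratio value and minimum keypoint distance below are assumptions, not parameters from [28].

```python
# Sketch of SIFT-based copy-move detection in the spirit of [28].
import cv2
import numpy as np

def sift_self_matches(gray, ratio=0.5, min_dist=10):
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    found = []
    for m in matcher.knnMatch(desc, desc, k=3):
        # m[0] is each keypoint matched with itself (distance 0); apply the
        # ratio test to the next two candidates instead
        if m[1].distance < ratio * m[2].distance:
            p1 = np.array(kps[m[1].queryIdx].pt)
            p2 = np.array(kps[m[1].trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:   # skip near-coincident points
                found.append((tuple(p1), tuple(p2)))
    return found
```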

S. Bayram, H. T. Sencar and Nasir Memon [30] suggested another method, based upon the Fourier-Mellin Transform (FMT) [31]. The features of the image blocks are extracted using the FMT, and lexicographic sorting is used to compare similar blocks, as in [2, 3]. The features extracted using the FMT are robust to JPEG compression, blurring, noise addition, rotation and scaling. An attempt has also been made to reduce the time complexity by using counting bloom filters instead of lexicographic sorting. First the image is divided into b×b overlapping blocks. A block i(x, y) and its rotated, scaled and translated version i′(x, y) are considered for comparison, where $i'(x, y) = i(\sigma(x\cos\alpha + y\sin\alpha) - x_0,\; \sigma(-x\sin\alpha + y\cos\alpha) - y_0)$, and $(x_0, y_0)$, σ and α denote the translation, scaling and rotation parameters, respectively. Feature vectors are prepared for all blocks using the FMT. After obtaining the feature vector of each block, lexicographic sorting is performed to find similar blocks. To improve the efficiency of the detection step, counting bloom filters are used, which essentially compare hashes of the features rather than the features themselves. This is realized as follows:

(i) Form an array K with k elements which are all zero initially.
(ii) Hash the feature vector fi of each block such that each hash value will indicate an index number in the array
K.
(iii) If the feature vectors of two blocks are identical, they give the same hash value and hence the same index; the corresponding element of K is incremented, i.e. h = hash(fi) and K(h) = K(h) + 1. Any element of the array K that is higher than two is assumed to indicate duplicated block pairs. One can see that this scheme requires the duplicated blocks to be exactly the same and the resulting image to be saved without any compression, so it is not as robust as lexicographic sorting, which only requires the duplicated blocks to have similar feature vectors. On the other hand, this approach reduces the computational time significantly, since hashing and forming the array K are executed in the same step as feature extraction. Finding duplicated blocks is not enough for deciding on forgery, since most natural images contain many similar blocks; there should be more than a certain number of connected
blocks at the same offset to make such a decision. The distance between two blocks detected as a duplicated pair, ai and aj, whose starting positions are (xi, yi) and (xj, yj) respectively, is calculated as $d_x(i, j) = |x_i - x_j|$ and $d_y(i, j) = |y_i - y_j|$. Note that in the lexicographic sorting scheme, ai and aj correspond to blocks that appear successively in the sorted order, while in the bloom-filter scheme, ai and aj are blocks whose feature vectors yielded the same hash value. To measure how many blocks are detected as duplicates at the same offset, a distance matrix D is constructed and initialized to zero; when the distance between two blocks is calculated, the corresponding entry of D is incremented by one: D(dx, dy) = D(dx, dy) + 1. Any value of D(dx, dy) greater than the threshold TH indicates blocks that were copied and moved by the same offset; if these blocks are connected to each other, a decision of forgery can be made. Experimental results show better performance than [3] and [4].
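A sketch of the hashing variant follows; for brevity a single hash function is used, whereas a counting Bloom filter proper would combine several.

```python
# Sketch of the counting-filter duplicate detection of [30]: identical
# feature vectors collide in the counter array K and are found in one pass.
import numpy as np

def count_duplicates(features, k=1 << 20):
    K = np.zeros(k, dtype=np.int32)
    buckets = {}
    for idx, f in enumerate(features):
        h = hash(np.asarray(f).tobytes()) % k
        K[h] += 1
        buckets.setdefault(h, []).append(idx)
    # any counter above 1 marks a set of (exactly) duplicated feature vectors
    return [blocks for blocks in buckets.values() if len(blocks) > 1]
```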

S. Bravo-Solorio and Asoke K. Nandi [32] suggested another method using log-polar coordinates. Overlapping blocks of pixels are resampled into log-polar coordinates and summed along the angle axis to obtain a one-dimensional descriptor invariant to reflection and rotation. This approach allows a better search for similar blocks by means of the correlation coefficient of their Fourier magnitudes. A point (x, y) can be written in log-polar coordinates as $x = e^{\rho}\cos\theta$, $y = e^{\rho}\sin\theta$, where $\rho \in \mathbb{R}$ and $0 \le \theta < 2\pi$. Let (x', y') denote the coordinates of a reflected, rotated and scaled point, i.e. $x' = \mu(x\cos\phi + y\sin\phi)$ and $y' = \mu(x\sin\phi - y\cos\phi)$, where φ and μ are the rotation and scaling parameters, respectively. Rewriting these equations in log-polar form gives $x' = e^{\rho + \log\mu}\cos(\phi - \theta)$ and $y' = e^{\rho + \log\mu}\sin(\phi - \theta)$; observe that scaling in rectangular coordinates results in a simple translation of the log-polar map. Consider a block of pixels $B_i(x, y)$ and its log-polar representation $B_i(\rho, \theta)$. A 1-D descriptor is defined as $v_i(\rho) = \sum_{\theta} B_i(\rho, \theta)$. Compared with using the log-polar representation directly, the 1-D descriptors reduce the memory requirements and computational costs and make the block-matching process independent of reflection.
The blocks are sorted to reduce the computational cost of the search stage. The centre of each block $B_i$ is taken as the centre of a disc of diameter q. Let $f_1^i$, $f_2^i$ and $f_3^i$ be the averages of the red, green and blue color components, respectively, of the pixels within the disc. Additionally, the luminance of the pixels within the disc is computed as Y = 0.2126r + 0.7152g + 0.0722b, where r, g and b are the red, green and blue components, respectively, and the entropy is calculated as $f_4^i = -\sum_k p_k \log_2 p_k$, where $p_k$ is the probability of each luminance value in the disc. To reduce the occurrence of false matches, blocks whose entropy is lower than a predefined threshold $e_{min}$ are discarded. A list L is then formed with the feature tuples $(f_1^i, f_2^i, f_3^i, f_4^i)$ of the remaining blocks. Let L' be the result of lexicographically sorting the list L, let $B_i$ be the i-th block of pixels in L', and let $(x_i, y_i)$ be the coordinates (the upper-left corner) of $B_i$ in the image X. A descriptor $v_i$ is computed for every $B_i$, and its Fourier magnitude $V_i$ is calculated. Then the correlation coefficient $c_{ij} = c(V_i, V_j)$ is computed for every j > i that satisfies the three following conditions:
a) $d_{ij} > \tau_d$,
b) $(f_k^i - \tau_a) \le f_k^j \le (f_k^i + \tau_a)$, for k = 1, 2, 3,
c) $(f_4^i - \tau_e) \le f_4^j \le (f_4^i + \tau_e)$,
where $d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$, and $\tau_d$, $\tau_a$, $\tau_e$ are predefined thresholds. Note that, since L' is sorted, the comparisons for $V_i$ can stop once a descriptor $V_u$ is reached such that $f_1^u > (f_1^i + \tau_a)$. Assume that $c_{ir}$ is the highest correlation coefficient computed for $V_i$. If $c_{ir} > \tau_{sim}$, a tuple $(x^{dir}, y^{dir}, x_i, y_i, x_r, y_r)$ is appended to a list Q, where $x^{dir} = |x_i - x_r|$ and $y^{dir} = |y_i - y_r|$ are the offsets of the two pairs of coordinates. The list Q is sorted according to the offsets to form a new list Q', which is then scanned to identify clusters with similar (not necessarily equal) offsets. Finally, a bitmap is encoded with the clusters that contain more blocks than a predefined threshold $\tau_{num}$. Although the method is efficiently robust to scaling and rotation, some false positives are reported under each operation (move, reflection, rotation and scaling).
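A sketch of the 1-D descriptor is given below using OpenCV's log-polar warp; the sampling resolution and interpolation are assumptions.

```python
# Sketch of the rotation/reflection-invariant 1-D descriptor of [32].
import numpy as np
import cv2

def logpolar_descriptor(block, n_rho=32, n_theta=64):
    b = block.astype(np.float32)
    center = (b.shape[1] / 2.0, b.shape[0] / 2.0)
    lp = cv2.warpPolar(b, (n_rho, n_theta), center, min(b.shape) / 2.0,
                       cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR)
    v = lp.sum(axis=0)                  # sum over the angle axis -> v(rho)
    return np.abs(np.fft.rfft(v))       # blocks are compared via these magnitudes
```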

H. J. Lin, C. W. Wang and Y. T. Kao [33] suggested a method using radix sort to increase the efficiency.
The image is divided into overlapping blocks of equal size, and for each block features [34] having resistance against modifications such as compression and Gaussian noise are extracted and represented as a vector. All the extracted feature vectors are then sorted using radix sort. The difference (shift vector) between the positions of every pair of adjacent feature vectors in the sorted list is computed, and the number of occurrences of each shift vector is accumulated; a large accumulated number is an indication of a duplicated region. The blocks corresponding to such a shift vector are marked as a tentative detection result, and median filtering and connected-component analysis are performed to obtain the final result.

Figure 5: Block B divided into four equal-sized sub-blocks S1, S2, S3, S4.

To resist various modifications and to improve the efficiency of sorting the feature vectors, each block B of size b × b (= 16×16) is represented by a 9-dimensional feature vector $v_B = (x_1, x_2, \ldots, x_9)$. First, the block B is divided into four equal-sized sub-blocks S1, S2, S3 and S4, as shown in Fig. 5, and let Ave(·) denote the average-intensity function. Then f1 denotes the average intensity of the block B, the entries f2, f3, f4 and f5 denote the ratios of the average intensities of S1, S2, S3 and S4 to the sum of the four sub-block averages, and f6, f7, f8 and f9 stand for the differences of the average intensities of S1, S2, S3 and S4 from f1:
$$f_i = \begin{cases} \mathrm{Ave}(B) & i = 1,\\ \mathrm{Ave}(S_{i-1}) / (4\,\mathrm{Ave}(B)) & 2 \le i \le 5,\\ \mathrm{Ave}(S_{i-5}) - \mathrm{Ave}(B) & 6 \le i \le 9. \end{cases}$$
Finally, the entries $f_i$ are normalized to integers $x_i$ ranging from 0 to 255:
$$x_i = \begin{cases} \lfloor f_i \rfloor & i = 1,\\ \lfloor 255 f_i \rfloor & 2 \le i \le 5,\\ \left\lfloor 255 \cdot \frac{f_i - m_2}{m_1 - m_2} \right\rfloor & 6 \le i \le 9, \end{cases}$$
where $m_1 = \max_{6 \le i \le 9} f_i$ and $m_2 = \min_{6 \le i \le 9} f_i$. Although these nine entries contain redundant information, together they
possess a higher capability of resistance against modifications such as JPEG compression and Gaussian noise; however, small duplicated regions are not detected with high efficiency. As the feature vectors are stored as integers, radix sort is applied to cut down the matching cost. Let v1, v2, …, vk denote the sorted list of the feature vectors of blocks B1, B2, …, Bk, respectively. The position of the top-left corner point of each block Bi is recorded in P(Bi), and a shift vector is defined as the difference between the positions of two adjacent feature vectors in the sorted list. Two regions duplicated by copy-move forgery produce a number of pairs of identical feature vectors, and each such pair yields the same shift vector; thus the accumulated count of each shift vector u(i) = P(Bi+1) − P(Bi) is used to detect the duplicated regions.
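The feature extraction can be sketched as follows; the normalization constants follow the reconstruction above and should be checked against [33].

```python
# Sketch of the 9-dimensional block feature of Lin et al. [33].
import numpy as np

def lin_features(block):
    b = block.astype(float)
    h, w = b.shape
    subs = [b[:h // 2, :w // 2], b[:h // 2, w // 2:],
            b[h // 2:, :w // 2], b[h // 2:, w // 2:]]     # S1..S4
    avg = b.mean()
    f = [avg]
    f += [s.mean() / (4.0 * avg + 1e-12) for s in subs]   # ratio features f2..f5
    f += [s.mean() - avg for s in subs]                   # difference features f6..f9
    x = np.empty(9)
    x[0] = np.floor(f[0])
    x[1:5] = np.floor(255.0 * np.array(f[1:5]))
    d = np.array(f[5:9])
    m1, m2 = d.max(), d.min()
    x[5:9] = np.floor(255.0 * (d - m2) / (m1 - m2 + 1e-12))
    return x.astype(int)                                  # integers in 0..255
```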

J. Wang, G. Liu, H. Li and Z. Wang [35] suggested a method based upon the Gaussian pyramid. The methods in [3, 4, 6] work by extracting special features to match two blocks; although they cope with post-processing operations, they fail if the pasted region has undergone an arbitrary rotation, due to the rectangular structure of the blocks. In this method, a Gaussian pyramid decomposition is first computed to reduce the image to 1/4 of its original scale; moreover, features extracted from the low-frequency components of the Gaussian pyramid decomposition make the detection method more robust than features extracted directly in the spatial domain. Let the original image be G0, taken as the zero level. The l-th level image of the Gaussian pyramid decomposition is obtained by convolving the (l−1)-th level image with a window function w(m, n) with low-pass characteristics and down-sampling after the convolution. The process can be described as
$$G_l(i, j) = \sum_{m=-2}^{2} \sum_{n=-2}^{2} w(m, n)\, G_{l-1}(2i + m,\, 2j + n).$$
The window function w is called the weight function or generation kernel, and its size is usually chosen as 5×5. The image is divided into overlapping circular blocks (Fig. 6a), and each circular block is divided into four concentric rings (Fig. 6b), denoted $\Theta_1, \Theta_2, \Theta_3, \Theta_4$, with radii equal to 1, 2, 3 and 4, respectively.

Figure 6: (a) Overlapping circular blocks; (b) concentric circles.

The mean intensities $\mu_k = \frac{1}{N_k} \sum_{x_{i,j} \in \Theta_k} x_{i,j}$ of these concentric rings, with $N_k$ the number of pixels in $\Theta_k$, are taken as the feature vector, as they are robust against rotation, blurring, noise addition and JPEG compression. To compare similarity, the Euclidean distance between two feature vectors is computed as
$$SIM(V_1, V_2) = \sqrt{\sum_{i=1}^{4} (\mu_i^1 - \mu_i^2)^2},$$
where $V_1 = (\mu_1^1, \mu_2^1, \mu_3^1, \mu_4^1)$ and $V_2 = (\mu_1^2, \mu_2^2, \mu_3^2, \mu_4^2)$. To minimize false positives, three thresholds are used: a similarity threshold Ts, a distance threshold Td and an area threshold Ta.

Zhao Junhong [36] presented a new technique based upon LLE (Locally Linear Embedding), a method for dimensionality reduction of high-dimensional data sets suggested by Roweis and Saul [37]. LLE can find the topological relationships within nonlinear data sets and map high-dimensional data to low-dimensional data without changing the relative locations, and a digital image is essentially a nonlinear digital signal. The method works in a similar way to [4], but LLE is used for dimension reduction instead of PCA, since PCA loses some local information; the LLE-based method is better at detecting fused edges. The author compared the results of the LLE-based method with [4] and found the copy-moved area to be more pronounced. Recently, M.K. Bashar et al. [38] presented a method for detecting forgery in the presence of flips and rotation. The method has been applied with PCA, KPCA and wavelet-transformed images, and different tables have been prepared for an exhaustive comparison of the PCA-, KPCA- and wavelet-based techniques with respect to robustness against rotation, horizontal flips, vertical flips, translation and SNR. A method has also been suggested for automating the threshold parameters used to detect duplicated regions. The authors claimed the highest accuracy for the KPCA-based algorithm; however, the time costs are high.
III. CONCLUSION
Copy-move forgery is one of the most frequently applied forgery techniques. Although many papers have been published suggesting different detection techniques, the challenges have not been fully overcome yet. A few common stages can be identified in copy-move forgery detection. First, the given image is divided into overlapping rectangular blocks, except in [35], where overlapping circular blocks are created. Second, to reduce the search space and to make the search unit as robust as possible to post-processing such as compression, Gaussian noise, scaling and rotation, a transformation technique such as DCT, PCA, DWT, SVD or LLE is used. Third, the feature vectors obtained after the transformation are sorted lexicographically or organized in a kd-tree, and neighboring vectors are compared against similarity parameters to locate duplicated regions in the image.
IV. FUTURE SCOPES
All the methods that have been suggested draw their strength from different transforms, both to gain robustness against post-processing and to reduce the number of logical blocks to compare. However, no method has yet achieved complete robustness against post-processing operations. The selection of the block size also poses a problem: if it is taken too small, false positives appear, and if it is taken too large, some forged areas go undetected. Furthermore, the threshold parameters are set manually and have to be chosen wisely to avoid false positives. Finally, there is scope for improvement in the time costs.


REFERENCES
[1] H. Farid, Creating and detecting doctored and virtual images: implications to the child pornography prevention act, Technical Report,
TR2004-518,Dartmouth College, Hanover, New Hampshire, 2004.
[2] S. Bayram, H. T. Sencar, and N. Memon, A Survey of Copy-Move Forgery Detection Techniques, IEEE Western New York Image
Processing Workshop, September 2008, NY
[3] J. Fridrich, D. Soukal and J. Lukas, Detection of copy–move forgery in digital images, Proceedings of Digital Forensic Research
Workshop, IEEE Computer Society, Cleveland, OH, USA (August 2003), pp. 55–61.
[4] A. Popescu, H. Farid, Exposing digital forgeries by detecting duplicated image regions, Technical Report TR2004-515, Department of
Computer Science, Dartmouth College, 2004.
[5] R. Duda and P. Hart, Pattern Classification and Scene Analysis, John Wiley and Sons, 1973.
[6] W. Luo, J. Huang and G. Qiu, Robust detection of region-duplication forgery in digital image, ICPR ’06: Proceedings of the 18th
International Conference on Pattern Recognition, IEEE Computer Society, Washington, DC, USA (2006), pp. 746–749.
[7] A. Langille and M. Gong, An efficient match-based duplication detection algorithm, CRV ’06: Proceedings of the 3rd Canadian
Conference on Computer and Robot Vision (CRV’06), IEEE Computer Society, Washington, DC, USA (2006), p. 64.
[8] Jung, I-K. and Lacroix, S., “A Robust Interest Point Matching Algorithm”, in International Conference on Computer Vision. 2001.
[9] A.W. Moore ”An introductory tutorial on Kd-trees”, PhD Thesis: Efficient Memory-based Learning for Robot Control, technical
Report no. 209. University of Cambridge, 1991.
[10] D. A. Talbert and D. Fisher, "An empirical analysis of techniques for constructing and searching k-dimensional trees", in: ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 26-33, 2000.
[11] G. Li, Q. Wu, D. Tu, and S. Sun, "A Sorted Neighborhood Approach for Detecting Duplicated Regions in Image Forgeries based on
DWT and SVD," in Proceedings of IEEE International Conference on Multimedia and Expo, Beijing China, July 2-5, 2007, pp. 1750-
1753.
[12] Amara Graps, "An Introduction to Wavelets", IEEE Computational Science and Engineering, 1995, 2(2):50-61.
[13] E. J. Ientilucci, "Using the Singular Value Decomposition", Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, 29 May 2003.
[14] A.N. Myna, M.G. Venkateshmurthy, C.G. Patil, Detection of region duplication forgery in digital images using wavelets and log-polar mapping, in: ICCIMA '07: Proceedings of the International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 2007), IEEE Computer Society, Washington, DC, USA, 2007, pp. 371-377.
[15] Hongjie Xie, Nigel Hicks, G. Randy Keller, Haitao Huang, Vladik Kreinovich, "An IDL/ENVI Implementation of the FFT-based Algorithm for Automatic Image Registration", Intl. Journal of Computers and Geosciences, vol. 29, pp. 1045-1055, 2003.
[16] Zhang Ting, Wang Rang-ding, "Copy-Move Forgery Detection based on SVD in Digital Images", in: 2nd International Congress on Image and Signal Processing (CISP '09), Tianjin, 2009, pp. 1-5.
[17] V.C. Klema, “The Singular Value Decomposition: Its Computation and Some Applications”, IEEE Trans. Automatic Control, 1980,
Vol.25, pp.164-176.
[18] J. Zhang, Z. Feng, Y. Su, A new approach for detecting copy–move forgery in digital images, in: IEEE Singapore International
Conference on Communication Systems, 2008, pp. 362–366.
[19] B. Mahdian and S. Saic, Detection of copy–move forgery using a method based on blur moment invariants, Forensic Science
International 171 (2–3) (2007), pp. 180–189.
[20] J. Flusser, T. Suk, S. Saic, Image features invariant with respect to blur, Pattern Recogn. 28 (1995) 1723–1732.
[21] B. Dybala, B. Jennings, D. Letscher, Detecting filtered cloning in digital images, MM&Sec ’07: Proceedings of the 9th Workshop on
Multimedia & Security, ACM, New York, NY, USA, 2007, pp. 43–50.
[23] W. Li, Y. Yuan, N. Yu, Detecting copy-paste forgery of JPEG image via block artifact grid extraction, in: International Workshop on Local and Non-Local Approximation in Image Processing, 2008.
[24] J.F. He, Z.C. Lin, L.F. Wang, and X.O. Tang, "Detecting doctored JPEG images via DCT coefficient analysis", Lecture Notes in Computer Science, Springer Berlin, vol. 3953, pp. 423-435, 2006.
[25] S.M. Ye, Q.B. Sun, and E.C. Chang, "Detecting digital image forgeries by measuring inconsistencies of blocking artifact", in Proc. IEEE International Conference on Multimedia and Expo 2007, Beijing, China, July 2007, pp. 12-15.
[26] A.C. Bovik and S. Liu, "DCT-domain blind measurement of blocking artifacts in DCT-coded images", in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Salt Lake City, UT, USA, 2001, pp. 1725-1728.
[27] F. Pan, X. Lin, S. Rahardja, et al., "A locally adaptive algorithm for measuring blocking artifacts in images and videos", in Proc. 2004 International Symposium on Circuits and Systems, Singapore, May 2004, pp. 23-26.
[28] H. Huang, W. Guo, Y. Zhang, Detection of copy-move forgery in digital images using SIFT algorithm, in: PACIIA '08: Proceedings of the 2008 IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application, IEEE Computer Society, Washington, DC, USA, 2008, pp. 272-276.
[29] D.G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[30] S. Bayram, H.T. Sencar, N. Memon, An efficient and robust method for detecting copy-move forgery, in: ICASSP '09: Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE Computer Society, Washington, DC, USA, 2009, pp. 1053-1056.
[31] Y. Sheng and H.H. Arsenault, "Experiments on pattern recognition using invariant Fourier-Mellin descriptors", J. Opt. Soc. Am. A, vol. 3, no. 6, pp. 771, 1986.
[32] S. Bravo-Solorio, A.K. Nandi, Passive forensic method for detecting duplicated regions affected by reflection, rotation and scaling, in: European Signal Processing Conference, 2009, pp. 824-828.
[33] H.J. Lin, C.-W. Wang, Y.-T. Kao, Fast copy-move forgery detection, WSEAS Transactions on Signal Processing 5 (5) (2009) 188-197.
[34] C.T. Hsieh and Y.K. Wu, "Geometric invariant semi-fragile image watermarking using real symmetric matrix", WSEAS Transactions on Signal Processing, vol. 2, no. 5, May 2006, pp. 612-618.
[35] J. Wang, G. Liu, H. Li, Y. Dai, Z. Wang, "Detection of image region duplication forgery using model with circle block", in: 2009 International Conference on Multimedia Information Networking and Security (MINES 2009), vol. 1, pp. 25-29, 2009.
[36] Z. Junhong, "Detection of copy-move forgery based on one improved LLE method", in: 2nd International Conference on Advanced Computer Control (ICACC), vol. 4, pp. 547-550, 2010.
[37] S.T. Roweis and L.K. Saul, "Nonlinear dimensionality reduction by locally linear embedding", Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[38] M.K. Bashar, K. Noda, N. Ohnishi, K. Mori, "Exploring duplicated regions in natural images", IEEE Transactions on Image Processing, 2010.


AUTHORS PROFILE

Sunil Kumar is working as an Assistant Professor in the Faculty of Engineering & Technology, MITS. He has more than eight years of UG and PG teaching experience after completing his M.Tech. in Computer Science & Engineering in 2002 from Kurukshetra University, Kurukshetra. He has supervised many B.Tech. and M.Tech. projects.

Prof. Pradip Kumar Das, Professor & Dean, Faculty of Engineering & Technology, MITS is a former
professor of Jadavpur University, where he taught for 35 long years - first in the Department of Electronics &
Tele-Communication Engineering and subsequently in the Department of Computer Science & Engineering. A
Master of Electronics Engineering with specialization in Computer Engineering in 1970, Prof. Das joined the
Computer Group of the Tata Institute of Fundamental Research, visited and worked in many countries in
Europe, Japan and USA and was awarded a Commonwealth fellowship and a fellowship of the Japan
International Cooperation Agency. He did post-doctoral research in the Computer Science department of the
Queen's University of Belfast in UK. He has guided nine Ph.D. theses and numerous Master's dissertations. He
has published nearly 80 research papers, mostly in International journals and conferences. He is a senior
member of the Institute of Electrical & Electronics Engineers, USA and a Fellow of the Institution of Engineers,
India and has been the founder director of the School of Mobile Computing & Communication in Jadavpur
University.

Prof. S. Mukherjee graduated in Electrical Engineering from Patna University in 1968. After gaining industrial experience for a couple of years, he completed his Masters in Electrical Engineering from UOR in 1977 and has since been engaged in teaching at UOR, now IIT Roorkee. At present he is a Professor in the Department of Electrical Engineering at IIT Roorkee and was Vice Chancellor of Mody Institute of Technology and Science, Rajasthan, between 2008 and 2010. He has supervised 7 Ph.D. and around 35 M.Tech. dissertations during his teaching career so far. His areas of interest are system modeling, applications of ANN in process control and other areas, and order reduction of linear systems, along with computer applications in Electrical Engineering.

Shally is working as an Assistant Professor in the Faculty of Engineering & Technology, MITS. She has more than three years of UG and PG teaching experience after completing her M.Tech. in Computer Science & Engineering in 2002 from Banasthali University, Banasthali. She has supervised many B.Tech. and M.Tech. projects.
