image [4].

1) Skin Color Segmentation:
Our objective is to develop a classifier for skin and non-skin regions in an image. We can classify the given set of data into 2 classes: skin and non-skin regions. Using Bayes' rule we can develop one such classifier. From Bayes' rule we can write,

$\hat{p}(\omega_i \mid x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\bar{x})^2}{2\sigma^2}}$ …Eq. (1)

Where $\omega_i$ is the class, $x$ is the test sample, and $\sigma$ is the variance. Let us define each of these terms in brief.

Mean Vector:

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ …Eq. (2)

Covariance Vector:
Covariance is the measure of how much two variables vary from each other. It is defined using the following equation,

$\mathrm{cov}(x) = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{n-1}$ …Eq. (3)

Multivariate Density Function:
The general multivariate normal density in $d$ dimensions is written as under,

$p(x) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x-\mu)^t \Sigma^{-1} (x-\mu)\right)$ …Eq. (4)

Where $x$ is a d-component column vector, $d$ is the dimension (i.e., the number of features; in our case it is 2), $\Sigma$ is the d-by-d covariance matrix, $\mu$ is the d-component mean vector, and $|\Sigma|$ is the determinant of the covariance matrix.

The segmentation process follows this flowchart:
1. Input image in RGB format
2. Convert the image from RGB format to YES format
3. Calculate mean vector, covariance matrix and Mahalanobis distance
4. Calculate true-positive and false-positive values for different values of the threshold
5. Plot the receiver operating characteristics (ROC) curve
6. Is the pixel value within the threshold? Yes: label as skin region; No: label as non-skin region
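A minimal numpy sketch of Eqs. (2)–(4) as they would be used here: the mean vector and covariance matrix are fitted to training skin pixels, and new pixels are then scored by their (squared) Mahalanobis distance to the model. The function names and the choice of two chroma components as features are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fit_skin_model(skin_pixels):
    """Fit the skin model: mean vector (Eq. 2) and covariance
    matrix (Eq. 3) of an (n, 2) array of training skin features."""
    mu = skin_pixels.mean(axis=0)            # Eq. (2): (1/n) * sum(x_i)
    cov = np.cov(skin_pixels, rowvar=False)  # Eq. (3): (n - 1) denominator
    return mu, cov

def mahalanobis_sq(x, mu, cov):
    """Squared Mahalanobis distance of feature vector(s) x to the model."""
    diff = np.atleast_2d(x) - mu
    return np.einsum('ni,ij,nj->n', diff, np.linalg.inv(cov), diff)

def skin_density(x, mu, cov):
    """Multivariate normal density of Eq. (4), here with d = 2 features."""
    d = mu.size
    norm = (2.0 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * mahalanobis_sq(x, mu, cov)) / norm
```

A pixel would then be labelled skin when its distance falls below a threshold; the threshold itself comes from the ROC analysis described next.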
Universal Threshold:
The classification is performed over various thresholds, and based on the resulting true-positive and false-positive rates (ROC curve analysis) we define the value of the universal threshold. The process can be summarized by the following set of equations,

$\lambda_g^i < t_a^i(h^i, t_u^i)$ …Eq. (6)

$\lambda_g^i > t_a^i(h^i, t_u^i)$ …Eq. (7)

Classification:
Choose $W_k$ when $\Pr[W_k \mid x_g] = \max_i \Pr[W_i \mid x_g]$, where

$\Pr[W_i \mid x_g] = \frac{p(x_g \mid W_i)\,\Pr(W_i)}{p(x_g)}$

$z_g^{I_{ij}} = \begin{cases} 1 & \text{pixel at } g \subset W_i \\ 0 & \text{pixel at } g \not\subset W_i \end{cases}$

$FP_{\lambda_t}^{I_i} = \frac{1}{T_2} \sum_{j=1}^{L_i} \sum_{g \in I_{ij}} fp_g^{\lambda_t, I_{ij}}$

$TP_{\lambda_t}^{I_i} = \frac{1}{T_1} \sum_{j=1}^{L_i} \sum_{g \in I_{ij}} tp_g^{\lambda_t, I_{ij}}$

Fig. 4: R.O.C. Curve [11]

The above figure illustrates the receiver operating characteristic (R.O.C.) curve, which is a plot of the true-positive value against the false-positive value. The point where the ROC curve intersects the true-positive value is considered to be the ideal threshold value for the skin region. Thus, by thresholding the image we can extract the skin region, and we can then find the edges of the skin region by using an edge operator. However, this approach depends solely on the values in the training set, which further complicates the face recognition process.

2) Detecting Boundaries in a Vector Field:
Boundary detection in a two-dimensional field with one attribute, i.e., edge detection in a monochromatic image, is an extensively studied field. Many types of approaches have been employed. In one type, the change in a single attribute over the two-dimensional field is computed as the gradient of a scalar field. In this paper a vector-gradient approach is proposed to detect the boundaries of a face. Earlier studies on finding the edges of a color image were based on finding the edges of each of the R, G and B planes separately, followed by clubbing them into one. The direction of an edge pixel is chosen
FACE RECOGNITION USING LINE EDGE MAPS 9
to be either the direction corresponding to the maximum component or the weighted average of the three gradient directions.
However, instead of finding the edges of the three color planes separately, a single technique should be employed to identify the edges in all the planes. Images may have multiple dimensions; hence, in general, we may define an image as a vector field that maps m-dimensional (spatial) space to n-dimensional (color) space.

For an RGB color image the matrix $D$ is given as under,

$D = \begin{bmatrix} \partial u/\partial x & \partial u/\partial y \\ \partial v/\partial x & \partial v/\partial y \\ \partial w/\partial x & \partial w/\partial y \end{bmatrix}$

where $u$, $v$ and $w$ denote the three color planes. With

$p = \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial w}{\partial x}\right)^2, \qquad q = \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 + \left(\frac{\partial w}{\partial y}\right)^2$

$t = \frac{\partial u}{\partial x}\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\frac{\partial v}{\partial y} + \frac{\partial w}{\partial x}\frac{\partial w}{\partial y}$

the largest eigenvalue is given by,

$\lambda = \frac{1}{2}\left(p + q + \sqrt{(p+q)^2 - 4(pq - t^2)}\right)$

[Flowchart: input image → draw a line (critical line) through 2 reference points → define 2 boundary points parallel to the critical line and at a distance d from it (d is also called the error tolerance) → the curve is traversed point by point]

3) Dynamic Two-Strip Algorithm:
In this approach we develop an algorithm which first finds the best-fitted left-hand and then the right-hand side strips at each point on the curve.
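Before detailing the two-strip procedure, the vector-gradient computation above (the matrix $D$, the quantities $p$, $q$, $t$, and the eigenvalue $\lambda$) can be sketched compactly. This is a minimal illustration that assumes simple finite differences for the per-channel derivatives; `np.gradient` stands in for whatever derivative operator is actually used.

```python
import numpy as np

def vector_gradient(img):
    """Largest eigenvalue lambda of the color gradient for an (H, W, 3)
    image: lambda = 0.5 * (p + q + sqrt((p + q)^2 - 4*(p*q - t^2)))."""
    # Per-channel spatial derivatives; axis 0 is y (rows), axis 1 is x (cols).
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    p = np.sum(gx ** 2, axis=-1)   # (du/dx)^2 + (dv/dx)^2 + (dw/dx)^2
    q = np.sum(gy ** 2, axis=-1)   # (du/dy)^2 + (dv/dy)^2 + (dw/dy)^2
    t = np.sum(gx * gy, axis=-1)   # du/dx*du/dy + dv/dx*dv/dy + dw/dx*dw/dy
    disc = (p + q) ** 2 - 4.0 * (p * q - t ** 2)   # equals (p-q)^2 + 4t^2 >= 0
    return 0.5 * (p + q + np.sqrt(np.maximum(disc, 0.0)))
```

Thresholding the resulting $\lambda$ map marks edge pixels in all three planes at once, rather than per plane.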
The orientation and width of the strip can be adjusted dynamically, and the corresponding line segment can be derived. The general approach to this algorithm is as follows.
In this approach, we first draw a line between two reference points, which are points on the edges in the image. We name this line the critical line. Then two boundary lines, parallel to the critical line and at a distance d from it, are defined. These two boundary lines restrict the curve between the two lines; thus, the smaller the distance, the greater the accuracy. The curve is then traversed through these two points within the boundary region, point by point, eventually covering the entire image.

Fig 7: Definition of a strip (critical line with a boundary line on either side)

The Line Edge Map (LEM) is proposed here to integrate the structural information with the spatial information of a face image by grouping the pixels of the face edge map into line segments.
The Line Segment Hausdorff Distance (LHD) measure is used to match the LEMs of faces. LHD has better distinctive power as it makes use of the additional structural attributes of line orientation, line-point association and number disparity in the LEM.
Consider two LEMs $M^l = \{m_1^l, m_2^l, \ldots, m_p^l\}$, representing a model LEM in the database, and $T^l = \{t_1^l, t_2^l, \ldots, t_n^l\}$, representing an input LEM. The distance between two line segments $m_i^l$ and $t_j^l$ is defined as,

$d(m_i^l, t_j^l) = \sqrt{d_\theta^2(m_i^l, t_j^l) + d_\parallel^2(m_i^l, t_j^l) + d_\perp^2(m_i^l, t_j^l)}$

The above equation yields the distance between line segments of the two images; this distance is small when the two images are similar. Thus it is a helpful technique for face recognition.
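A simplified sketch of the line-segment distance above. The cited LHD work defines the three components more carefully; here, as illustrative assumptions, $d_\theta$ is taken as the smallest angle between the two (undirected) segments, and $d_\parallel$, $d_\perp$ as the components of the midpoint displacement along and across the model segment.

```python
import numpy as np

def segment_distance(m, t):
    """Distance between two line segments, each given as ((x1, y1), (x2, y2)),
    combining orientation, parallel and perpendicular terms as in the text."""
    m, t = np.asarray(m, float), np.asarray(t, float)
    dm, dt = m[1] - m[0], t[1] - t[0]
    # Orientation term: smallest angle between the undirected segments.
    diff = (np.arctan2(dm[1], dm[0]) - np.arctan2(dt[1], dt[0])) % np.pi
    d_theta = min(diff, np.pi - diff)
    # Midpoint displacement, split along / across the model segment.
    delta = (t[0] + t[1]) / 2 - (m[0] + m[1]) / 2
    u = dm / np.linalg.norm(dm)                    # unit vector along m
    d_par = abs(delta @ u)                         # displacement along m
    d_perp = abs(delta[0] * u[1] - delta[1] * u[0])  # displacement across m
    return float(np.sqrt(d_theta ** 2 + d_par ** 2 + d_perp ** 2))
```

Matching two LEMs would then aggregate these pairwise segment distances over the model and input sets.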
Fig. 10: (a) Original image, (b) edges of the image using the Sobel filter, (c) thinned edges of the image, (d) result of applying the dynamic two-strip algorithm
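The pipeline of Fig. 10 begins with a Sobel edge map. A minimal numpy sketch of that first stage is given below, on a single plane and with a hypothetical threshold value; the thinning and two-strip stages are omitted.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """Tiny 'valid'-mode 2-D correlation helper (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(gray, threshold=1.0):
    """Sobel gradient magnitude thresholded into a binary edge map."""
    gx = filter2d(gray, SOBEL_X)
    gy = filter2d(gray, SOBEL_Y)
    return np.hypot(gx, gy) > threshold
```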
VIII. REFERENCES