
IET Image Processing

Research Article

License number plate recognition system using entropy-based features selection approach with SVM

ISSN 1751-9659
Received on 10th April 2017
Revised 25th May 2017
Accepted on 13th June 2017
E-First on 8th November 2017
doi: 10.1049/iet-ipr.2017.0368
www.ietdl.org

Muhammad Attique Khan1 , Muhammad Sharif1, Muhammad Younus Javed2, Tallha Akram3, Mussarat
Yasmin1, Tanzila Saba4
1Department of Computer Science, COMSATS Institute of Information Technology, Wah Cantt, Pakistan
2Department of Computer Science & Engineering, HITEC University, Museum Road, Taxila, Pakistan
3Department of EE, COMSATS Institute of Information Technology, Wah Cantt, Pakistan
4College of Computer and Information Science, Prince Sultan University, Riyadh, Saudi Arabia

E-mail: attique@ciitwah.edu.pk

Abstract: License plate recognition (LPR) systems play a vital role in security applications, which include road traffic monitoring, street activity monitoring, identification of potential threats, and so on. Numerous methods have been adopted for LPR, but there is still room for a single standard approach able to deal with all sorts of problems, such as light variations, occlusion, and multiple views. The proposed approach is an effort to deal with such conditions by incorporating multiple feature extraction and fusion. The proposed architecture comprises four primary steps: (i) selection of the luminance channel from the CIE-Lab colour space, (ii) binary segmentation of the selected channel followed by image refinement, (iii) fusion of histogram of oriented gradients (HOG) and geometric features followed by selection of appropriate features using a novel entropy-based method, and (iv) feature classification with a support vector machine (SVM). To authenticate the results of the proposed approach, different performance measures are considered: false positive rate (FPR), false negative rate (FNR), and accuracy, which reaches a maximum of 99.5%. Simulation results reveal that the proposed method performs exceptionally well compared with existing works.

Nomenclature

IR         red channel
IG         green channel
IB         blue channel
IL         extracted luminance channel
σw²(t1)    weighted sum of two classes
ψ(I)       Otsu segmentation
φI(cv)     two-dimensional convolution operation
φI(Bg)     removal of extra regions
φI(er)     erosion operation
φI(FL)     filled image
φ(Final1)  final segmented image
φI(Flr)    floor operation
φ(Δ)       area
φ(maj)     major axis length
φ(min)     minor axis length
φ(Pr)      perimeter
φ(Et)      extent
φ(St)      solidity
ΔB         area of bounding box
φ(ξFV)     fused feature vector

1 Introduction

Development of an efficient license plate recognition (LPR) system is one of the most active research areas, with numerous applications such as road traffic monitoring, traffic law enforcement, street activity monitoring, and identification of potential security threats [1, 2]. Recently, several articles have been published dealing with automatic LPR (ALPR) [2-4]. In [3], the authors discussed a novel ALPR approach in which the key factors are image size, processing time, and success rate. In [2], various important approaches for the development of an ALPR system are discussed, where the primary stages include: (i) localisation of the number plate, (ii) character recognition, and (iii) digit classification [4]. A few existing algorithms [5] perform well under fixed conditions, such as a stationary background, constant illumination, and a single view. Most existing LPR systems follow techniques such as artificial neural networks (ANNs) [6], optical character recognition (OCR) [7], salient features [8], support vector machines (SVMs) [9], colour segmentation [10], the Scale-Invariant Feature Transform (SIFT) [11], fuzzy-based algorithms, and colour-distance characteristics [12]. The general flow of an LPR system mostly comprises four primary steps: (i) preprocessing, (ii) segmentation and region of interest (ROI) detection, (iii) feature extraction, and (iv) recognition.

In this paper, a new technique is implemented for an LPR system based on two types of extracted features and their fusion: first entropy-based feature selection is performed, and then serial-based feature fusion. The proposed architecture comprises four primary steps: (i) selection of the luminance channel from the CIE-Lab colour space, (ii) binary segmentation of the selected channel and image refinement, (iii) fusion of HOG and geometric features using a novel method based on entropy and vector dimension, and (iv) feature classification with SVM. This paper is organised as follows. Related work is described in Section 2. Section 3 presents the motivation and contribution. Section 4 presents the proposed work, which includes preprocessing, segmentation, feature extraction, feature selection and fusion, and classification. Sections 5 and 6 describe the experimental results and the conclusion of this paper.

2 Related work

Several techniques have recently been proposed for ALPR. Chang et al. [13] introduced an ALPR system based on two modules: (i) license plate coordinate identification and (ii) digit identification. For feature extraction of the licence plate, a fuzzy-based approach is

IET Image Process., 2018, Vol. 12 Iss. 2, pp. 200-209


© The Institution of Engineering and Technology 2017
Fig. 1  System architecture of proposed LPR system

utilised, and ANN is later implemented for the classification. Wang et al. [4] introduced a Chinese number plate recognition system based on SIFT features which incorporates position and orientation information. The introduced system consists of two major steps: (i) extraction of SIFT features from template images, which are stored in the database, and (ii) matching of the extracted SIFT features of each image against the database. Jobin et al. [14] introduced an ALPR system based on the stroke width transform which primarily consists of three steps: (i) removal of motion blurring in an image utilising interleaved plane pixel layers, (ii) edge detection through plane and vertical masks, and (iii) utilisation of OCR for the recognition. The introduced method worked effectively under light variations and even under shadow. Cinsdikici et al. [15] suggested a new information retrieval system for number plate recognition. The suggested method was a combination of two major steps, namely character segmentation and number recognition. The segmentation step comprised ROI extraction using Kaiser resizing, morphological operations, artificial shifting, and bi-directional vertical thresholding. For recognition, eigen-space features are first extracted and then utilised through principal component analysis (PCA) and a back-propagation neural network. Rasheed et al. [16] introduced a robust method for LPR based on Hough lines, utilising the Hough transformation. The introduced method comprised two major steps: (i) number plate detection using the Canny edge detector and Hough transformation and (ii) recognition with template matching. The experiments were done on 102 images with different illumination conditions. Ghazal et al. [17] introduced a new approach based on level sets and ANNs. The introduced method began with the segmentation of moving regions through a background subtraction method. A colour-based particle filtering technique was utilised for the tracking of segmented regions. To detect the license plate, a Lab colour space transformation was performed and the level set algorithm was utilised to locate its contour. Geometric features were extracted from the detected plates, which were later fed into the artificial neural architecture for the recognition. Azad and Shayegh [18] designed a new system for ALPR based on edge detection and connected components. An adaptive thresholding method was implemented to convert the image into binary form, and then edge detection and some morphological operations were utilised for the localisation of the number plate. Finally, a few statistical features of the localised number plate were extracted for its final classification.

3 Motivation and contribution

Broadly speaking, an automatic license number plate recognition system has a few major steps, which include: (i) preprocessing, (ii) segmentation of the ROI, (iii) feature extraction, and (iv) number recognition/classification. ROI detection and feature extraction play a vital role in this field, and a large number of algorithms have been proposed for number plate recognition. A few existing techniques include: OCR systems [7], salient-feature based [8], colour-segmentation based [10], SIFT-feature based [11], fuzzy-based algorithms, Haar-like-feature based [19], HOG features [20], and classes of entropy measures based on rough set theory [21]. All the systems discussed above perform well, but only under fixed conditions. To handle this problem, we propose a novel method which facilitates the fusion of multiple sets of features. Our major contributions in this article are: (i) selection of the luminance channel because of its minimum signal-to-noise ratio; (ii) extraction of eight geometric features and HOG features in order to compute a fused feature vector, with the introduction of a novel entropy-based feature selection method to select the most relevant features; and (iii) classification of the fused features through SVM for LPR. The selection of the Caltech car dataset, which consists of 1155 images, allows us to perform comprehensive simulations. The addition of our own local database, which consists of 500 template images taken from different views and different angles and tested on four classifiers, further improves the accuracy. Finally, a comparison is made with existing LPR techniques.

4 Proposed work

In this section, we are limited to our proposed work, which comprises four major steps: (i) preprocessing, (ii) segmentation of the ROI, (iii) feature extraction and fusion, and (iv) number recognition. In the preprocessing step, we use the CIE-Lab colour space transformation to get the luminance channel, which is later utilised in the segmentation phase. In the segmentation step, we initially perform Otsu thresholding and convert the image into binary. In the second step, the morphological operations of erosion and dilation are utilised to make the segmented images more accurate. The introduction of image subtraction (eroded from dilated image) and two-dimensional (2D) convolution further refines the results. In the final step, colour mapping is done on the processed image before subjecting it to morphological operations. In the feature extraction step, we extract HOG features and geometric features of the cropped region. Finally, the extracted features are fused and fed to an SVM classifier for number plate recognition. A detailed diagram of the proposed algorithm is shown in Fig. 1. Each step is described below in detail.

4.1 Preprocessing

In the preprocessing step, we utilise the CIE-Lab transformation, which is a combination of three channels, where the L channel represents luminance and the a and b channels describe the colour components. After transformation, the luminance channel is extracted and later utilised in the segmentation step. The LAB colour space transformation is computed as follows. Let the original input license number plate image be I(x, y). The R, G, and B channels are extracted as

  IR = IR / (Σ_{u=1}^{3} Iu)    (1)
Fig. 2  Image preprocessing results
(a) Original image, (b) LAB colour space transformation, (c) L-channel selection, (d) Contour image
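As an illustration of the channel normalisation in (1) (the green and blue channels in (2) and (3) are normalised in the same way) and of an L-style luminance value, a minimal per-pixel pure-Python sketch follows. The function names and the sRGB luminance weights are our own assumptions, not part of the authors' Matlab pipeline, and the companding function is simplified to a plain cube root:

```python
def normalise_channels(r, g, b):
    """Per-pixel channel normalisation in the spirit of (1)-(3):
    each channel is divided by the sum of all three, so the result
    is invariant to overall brightness scaling."""
    s = r + g + b
    if s == 0:  # avoid division by zero on black pixels
        return 0.0, 0.0, 0.0
    return r / s, g / s, b / s

def luminance_L(r, g, b, rho=116.0):
    """Illustrative L-channel value in the style of L = rho * F(Y) - 16,
    using standard CIE weights for Y (an assumption here) and a
    simplified cube-root F (the linear segment is omitted)."""
    y = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255.0  # relative luminance Y
    f = y ** (1.0 / 3.0)                                # simplified F(Y)
    return rho * f - 16.0
```

For a pure white pixel this yields L = 116 × 1 − 16 = 100, the top of the L range, which is consistent with ρ = 116 as selected in (5).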

  IG = IG / (Σ_{u=1}^{3} Iu)    (2)

  IB = IB / (Σ_{u=1}^{3} Iu)    (3)

where u = [1, 2, 3] is an index over the three extracted channels red, green, and blue. After extraction of the red, green, and blue channels, the LAB colour space is computed as follows:

  Iζ ∈ υ × Iw    (4)

where ζ ∈ (I^X, I^Y, I^Z) and w ∈ (I^R, I^G, I^B). Hence, the L channel is extracted as

  IL = ρ × F(I^Y) − 16    (5)

where the value of ρ is selected as 116.

4.2 Image segmentation

In the segmentation step, the processed luminance channel, as extracted in the previous step, is converted into a binary image. First, we utilise Otsu's segmentation, which is a clustering-based algorithm. Consider a luminance image I^L with I^h grey levels (0, 1, 2, 3, …, I^h − 1) and a total number of pixels ℕ = Σ_{i=0}^{I^h − 1} ni, where ni is the number of pixels at grey level i. Otsu segmentation is based on the intra-class variance, defined as the weighted sum of the variances of the two classes (as shown in Fig. 2):

  σw²(t1) = p1(t1)(μ1(t1) − μ)² + p2(t1)(μ2(t1) − μ)²    (6)

where the class probabilities are p1(t1) = Σ_{i=1}^{t1} pi and p2(t1) = 1 − p1(t1), with k ∈ [1, 2]. The class means are μ1(t1) = (1/p1(t1)) Σ_{i=1}^{t1} i pi and μ2(t1) = (1/p2(t1)) Σ_{i=t1+1}^{I^h} i pi, and the class variances are σ1²(t1) = Σ_{i=1}^{t1} (i − μ1(t1))² (pi/p1(t1)) and σ2²(t1) = Σ_{i=t1+1}^{I^h − 1} (i − μ2(t1))² (pi/p2(t1)). Hence, the cost function of threshold selection for Otsu segmentation is

  ψ(I) = arg max_{0 ≤ t1 < I^h} σk²(t1)    (7)

where k ∈ (1, 2). After performing Otsu's segmentation, the morphological operations of erosion and dilation are implemented. The purpose of erosion and dilation is to make the digits in the license plate more clearly visible. The implementation of erosion and dilation is as follows.

Consider ψ(I), the binary image computed after Otsu segmentation, and IE, the structuring element. The dilation and erosion are expressed, respectively, as

  (ψ(I) ⊕ IE)(x, y) = max_{u,v} ψ(I)(x − u, y − v)    (8)

  (ψ(I) ⊖ IE)(x, y) = min_{u,v} ψ(I)(x + u, y + v)    (9)

After implementing erosion and dilation, we subtract the eroded image from the dilated image:

  φI(D) = (ψ(I) ⊖ IE) − (ψ(I) ⊕ IE)    (10)

A 2D convolution operation is performed on the subtracted image computed by (10), to make the characters in the licence plate more visible. The 2D convolution is computed as

  φI(cv) = Σ_{a=−∞}^{∞} Σ_{b=−∞}^{∞} α(a, b) β(p1 − a, p2 − b)    (11)

After performing the 2D convolution, the intensity values of the segmented image, which are low after the convolution operation, are enhanced. Some morphological operations are also implemented to remove the extra regions and obtain the final segmented regions:

  φI(Bg) = logical(φI(cv))    (12)

  φI(er) = φ(⊕)(φI(Bg))    (13)

  φI(FOB) = φI(Bg) − φI(er)    (14)

  φI(FL) = (φ(I) ∙ IE)(x, y) = (ψ(I) ⊕ IE) ⊖ IE    (15)

The final segmented region is computed by (16) and the results are shown in Fig. 3:

  φI(Final) = φI(er)(φI(FL))    (16)
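The exhaustive threshold search of (6) and (7) can be sketched in pure Python as follows. This is an illustrative histogram-based implementation over a flat list of grey values (the function names and toy interface are ours, not the authors' Matlab code); it maximises the equivalent between-class variance rather than minimising the within-class one:

```python
def otsu_threshold(pixels, levels=256):
    """Pick the threshold t maximising the between-class variance
    p1*(mu1 - mu)^2 + p2*(mu2 - mu)^2, cf. (6)-(7)."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    prob = [h / n for h in hist]
    mu_total = sum(i * prob[i] for i in range(levels))
    best_t, best_var = 0, -1.0
    p1 = mu1_sum = 0.0
    for t in range(levels - 1):
        p1 += prob[t]            # class-1 probability up to t
        mu1_sum += t * prob[t]   # unnormalised class-1 mean
        p2 = 1.0 - p1
        if p1 == 0.0 or p2 == 0.0:
            continue
        mu1 = mu1_sum / p1
        mu2 = (mu_total - mu1_sum) / p2
        var_between = p1 * (mu1 - mu_total) ** 2 + p2 * (mu2 - mu_total) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarise(pixels, t):
    """Convert grey values to the binary image psi(I)."""
    return [1 if p > t else 0 for p in pixels]
```

On a bimodal input (e.g. half the pixels at grey level 10, half at 200) the returned threshold falls between the two modes, cleanly separating plate characters from background.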
The final segmented image φI(Final) contains too many sets of points, but our requirement is the area containing characters. To resolve this problem, we implement the floor function. The computation of the new segmented image by performing the floor function is given as

  φI(Flr) = AreaRem(φI(Final) + φF(a1/20) × (b1/20))    (17)

  ≡ φI(Flr)(2 ← φF(0.9 × a1)) = 1    (18)

where φF = ⌊x⌋ = max{m ∈ ℤ | m ≤ x}. After performing the floor function, the regions in the image are extracted by using region props on φI(Flr). The region props are used to detect the ROI in the segmented image. For region props, we require four quantities x, y, t, w denoting position, length, and breadth. Finally, the region which contains the character area is selected. Hence, the calculation of these character regions is as follows:

  φI(xlow) = φF(min(φcn))    (19)

  φI(xhigh) = φC(max(φcn))    (20)

  φI(xadd) = φC(φcn(size(φcn)))    (21)

  φI(ylow) = φF(min(φcn))    (22)

  φI(yadd) = φC(max(φcn))    (23)

Hence, the final segmented image is computed as shown in the full-width equation below. The ROI detection results with respect to the selected character regions are shown in Fig. 4.

4.3 Features extraction

In the feature extraction phase, we extract two types of features: HOG features and eight geometric features. The geometric features include solidity, perimeter, orientation, area, and a few more. The final character-region image φ(Final1) is used for feature extraction. First, HOG features are extracted from the segmented image. HOG features are also called shape-based features, and they represent a global feature because this type of feature is predicted through the shape of the object. The extraction of HOG features is based on edge sharpness for the orientation histogram. In the extraction of HOG features: (i) the edge gradient and orientation are computed in the horizontal and vertical directions using the masks [−1 0 1] and [−1 0 1]ᵀ; (ii) the local regions of the image are divided into small groups called cells. The size of a cell is fixed in this case at 16 × 16 pixels, and a histogram of edge gradients with eight orientations is calculated for each cell. (iii) A Gaussian function is implemented to assign a weight to each pixel, in order to avoid small or sudden changes as well as to give less emphasis to gradients that are far from the centre of the descriptor. The length of the HOG features for one image is 1 × 3780, where the size of the segmented frame is 64 × 128. The features of each frame are saved into a new feature vector, which is later used for feature selection. The selected features are later fused with the geometric features. The geometric features are extracted as

  φ(Δ) = ∫_x^y f(x) dx = Σ_{i=0}^{n−1} f(x_{i+1}) Δx    (24)

where φ(Δ) represents the area and f(x) = lim_{n→∞} Σ_{i=0}^{n−1} f(xi) Δx. Second, the filled area is calculated using the number of foreground pixels in the image. Third, the major axis length is calculated to find the distance between two points c1 and c2. Hence, the major and minor axis lengths are calculated as

  φ(maj) = c1 + c2    (25)

  φ(min) = c1 + c2 − d    (26)

where d = (c2 − c1) + (m2 − m1) is the distance between the points. The orientation is the angle between the x-axis and the major axis of the ellipse that has the same second moment as the region. The final three features, perimeter, extent, and solidity, are computed as

  φ(Pr) = 2l + 2w    (27)

  φ(Et) = φ(Δ) / ΔB    (28)

  φ(St) = φ(Δ) / Convex area    (29)

After extraction of both the HOG and geometric features, these feature sets are fused to generate a new feature vector, which is later fed to SVM for number recognition.

4.4 Features selection and integration

In this section, entropy-based feature selection and serial-based feature fusion are performed. Many authors have utilised the well-known PCA algorithm [22] for feature reduction. Here, however, we consider the problem of automatic feature selection, since if we say that '100 features are selected by PCA', the major question arises of how to select those features. To handle this problem, we implement an entropy-based feature selection technique in which we first calculate the entropy of all extracted features and then select the most relevant features based on their highest scores. The mathematical description of the proposed feature selection technique is defined as follows: let ξ(HOG) represent the HOG feature vector, having dimension 1 × 3780, and ξ(GM) represent the geometric feature vector, having length 1 × 8. Then the entropy of both feature vectors is calculated. The entropy is defined as

  Entropy = −Σ_k Σ_l p(k, l) log p(k, l)    (30)

Then the entropies of ξ(HOG) and ξ(GM) are defined as

  ξ(HOG)ent = Entropy(ξ(HOG))    (31)

  ξ(GM)ent = Entropy(ξ(GM))    (32)

After performing the entropy computation, the obtained vectors are sorted in descending order, and 500 features are selected from the ξ(HOG) vector and eight from the ξ(GM) vector. Finally, serial-based feature fusion is performed and a final vector is obtained as

  Dimension of ξ(FV) = [Dimension of ξ(HOG)ent; Dimension of ξ(GM)ent]
                     = [1 × 500; 1 × 8]
                     = [1 × 508]

Hence, the final feature vector has dimension 1 × 508, and it is later fed to SVM for LPR (see Fig. 5).

4.5 Classification

The fused feature vector is finally utilised through SVM for license number recognition. The SVM is a supervised learning classifier utilised for the prediction of class labels. It transforms features into a higher-dimensional space, where it finds the optimal hyperplane separating the classes. The hyperplane is based on the maximum margin between itself and the points nearest to it; the nearest set of points are called support vectors. Suppose we have ℕ training samples in the feature set φ(ξRC), where

  φ(ξRC) = {(ρi, τi) | ρi ∈ ℝ^m, τi ∈ (−1, 1)}    (33)
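The entropy scoring of (30)-(32), followed by top-k selection and serial fusion, can be sketched as below. The per-feature histogram interface and the base-2 logarithm are our simplifying assumptions; in the paper the probabilities come from the feature responses themselves:

```python
import math

def entropy_score(p):
    """Shannon entropy -sum p*log p of one feature's probability
    histogram, cf. (30); empty bins contribute nothing."""
    return -sum(q * math.log(q, 2) for q in p if q > 0)

def select_by_entropy(features, k):
    """Rank features by entropy in descending order and keep the
    indices of the top k, cf. (31)-(32). `features` maps a feature
    index to its probability histogram."""
    ranked = sorted(features, key=lambda i: entropy_score(features[i]),
                    reverse=True)
    return ranked[:k]

def serial_fuse(v1, v2):
    """Serial-based fusion is plain concatenation, so a 1x500 HOG
    selection and a 1x8 geometric vector give a 1x508 vector."""
    return list(v1) + list(v2)
```

The design choice here mirrors the paper's argument against PCA: ranking by entropy gives an explicit, interpretable criterion for *which* features survive, whereas PCA projections mix all original dimensions.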
Fig. 3  Licence plate segmentation results
(a) Subtracted image, (b) 2D-convolution image,
(c) Enhanced intensity value of segmented image, (d) Final segmented region
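A minimal sketch of the dilation, erosion, and subtraction steps of (8)-(10), assuming binary images stored as nested lists and a plus-shaped structuring element (both our choices); the difference is taken here as dilation minus erosion so the morphological gradient is non-negative:

```python
PLUS_SE = ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))  # plus-shaped SE

def dilate(img, se=PLUS_SE):
    """Binary dilation, cf. (8): max over the SE neighbourhood."""
    h, w = len(img), len(img[0])
    return [[max(img[y + dy][x + dx]
                 for dy, dx in se
                 if 0 <= y + dy < h and 0 <= x + dx < w)
             for x in range(w)] for y in range(h)]

def erode(img, se=PLUS_SE):
    """Binary erosion, cf. (9): min over the SE neighbourhood."""
    h, w = len(img), len(img[0])
    return [[min(img[y + dy][x + dx]
                 for dy, dx in se
                 if 0 <= y + dy < h and 0 <= x + dx < w)
             for x in range(w)] for y in range(h)]

def edge_map(img):
    """Difference of dilated and eroded images, cf. (10):
    a morphological gradient that keeps character outlines."""
    d, e = dilate(img), erode(img)
    return [[dv - ev for dv, ev in zip(dr, er)] for dr, er in zip(d, e)]
```

On a small blob, erosion removes every foreground pixel that touches the background while dilation grows the blob by one ring, so their difference is exactly the outline band around the characters.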

Fig. 4  ROI detection


(a) Original image, (b) ROI detection,
(c) 3D-contour plot, (d) 2D contour,
(e) Selected region
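Several of the geometric descriptors of Section 4.3 can be sketched on a binary character mask as follows: area as the foreground pixel count, a bounding-box perimeter in the 2l + 2w style of (27), and extent as in (28). Solidity (29) would additionally require the convex-hull area and is omitted here; the function name and mask interface are illustrative, not the Matlab regionprops call used by the authors:

```python
def geometric_features(mask):
    """Region descriptors on a binary mask (nested lists of 0/1):
    area, bounding-box perimeter (cf. (27)), extent (cf. (28))."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    area = len(pts)                        # phi(Delta): foreground count
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    l = max(xs) - min(xs) + 1              # bounding-box length
    w = max(ys) - min(ys) + 1              # bounding-box width
    bbox_area = l * w                      # Delta_B in the text
    return {
        "area": area,
        "perimeter": 2 * l + 2 * w,        # cf. (27)
        "extent": area / bbox_area,        # cf. (28)
    }
```

An extent close to 1 indicates a region that fills its bounding box, as a solid character stroke block does, while thin diagonal noise regions score much lower; this is why extent helps discard non-character blobs.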

where m = 508 is the dimension of the feature vector, i indexes the training samples, and τi ∈ (−1, 1) is the class label of ρi. Hence, the algorithm needs to solve the following optimisation problem:

  min_w (1/2) wᵀw + δ Σ_{i=1}^{κ} max{1 − τi wᵀρi, 0}    (34)

where δ > 0 is called the penalty parameter. The SVM is supposed to solve the following dual problem:

  min_γ ζ(γ) = (1/2) γᵀQγ − eᵀγ
  s.t. 0 ≤ γi ≤ δ, i = 1, 2, 3, …, κ    (35)

where Q_{i,j} = τi τj ρiᵀρj and e ∈ ℝ^κ is a vector of ones. The final graphical results of the proposed licence plate recognition system are shown in Figs. 6 and 7.
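The primal objective in (34) can be evaluated directly. The sketch below is an illustrative pure-Python evaluation of that objective for a given weight vector (it is not a solver; in practice the dual problem (35) is optimised by a library such as LIBSVM):

```python
def hinge_objective(w, delta, samples):
    """Primal SVM objective of (34):
    (1/2)||w||^2 + delta * sum_i max(1 - tau_i * <w, rho_i>, 0),
    with samples as (rho_i, tau_i) pairs and tau_i in {-1, +1}."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    reg = 0.5 * sum(wj * wj for wj in w)              # margin term
    loss = sum(max(1.0 - tau * dot(w, rho), 0.0)      # hinge losses
               for rho, tau in samples)
    return reg + delta * loss
```

For two well-separated samples the hinge term vanishes and only the regulariser remains, which illustrates why the minimiser trades a small ||w|| (a wide margin) against violations weighted by the penalty parameter δ.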

Fig. 5  Algorithm 1: proposed LPR system


The final character-region image referred to in Section 4.2 is computed by the full-width equation

  φ(Final1) = φI(Final)(φI(ylow) + φI(yadd) + (φF(max(φcn) − 4) + xlow ← (xhigh + xadd)))
            ≡ AreaRem(φ(Final1) + φF(a2/20) × (a2/20))

5 Experimental results and discussion

In this section, we discuss the performance of our proposed algorithm. For training and testing, we utilised Caltech and our
Fig. 6  Proposed number plate recognition results
(a) Original image, (b) Recognised image

Fig. 7  Proposed number plate recognition results
(a) Original image, (b) Recognition result

Table 1 Performance measures using complete set of HOG features


Performance measures
Classifier Sensitivity FPR FNR FDR AUC Precision, % Accuracy, %
KNN 0.985 0.507 1.45 1.60 0.985 98.40 98.50
EBT 0.968 0.528 3.20 3.25 0.994 96.75 96.80
LDA 0.979 0.020 2.00 2.10 0.993 97.90 98.00
DT 0.968 0.032 3.00 5.70 0.974 97.15 97.00
proposed 0.989 0.012 1.25 1.00 0.999 99.20 98.90

Table 2 Performance measures results on eight geometric features


Performance measures
Classifier Sensitivity FPR FNR FDR AUC Precision, % Accuracy, %
KNN 0.94 0.05 5.2 4.55 0.99 95.45 94.8
EBT 0.94 0.05 5.2 5.20 0.98 94.80 94.8
LDA 0.96 0.03 3.7 3.55 0.99 96.45 96.3
DT 0.96 0.02 3.0 3 0.97 97.00 97.0
proposed 0.97 0.02 2.2 2.05 0.98 97.95 97.8

own dataset, developed in our lab. These images were captured in a university car park using our own cameras. The resolution of each image is 1536 × 2048. Sample images are shown in Fig. 6, where every image is captured with consideration of different views and different angles. Additionally, the proposed algorithm is tested on two publicly available datasets, the Caltech car dataset and the Medialab LPR dataset. A comparison of five state-of-the-art classifiers is made: K-nearest neighbour (KNN), ensemble boosted tree (EBT), linear discriminant analysis (LDA), decision tree (DT), and SVM. To validate the performance of the implemented system, seven measures are computed: accuracy [23], sensitivity, FNR, FPR, false discovery rate (FDR) [24], area under the curve (AUC), and precision. The main purpose of using these measures is to compare the results of the algorithm with existing methods. The simulation is done in Matlab 2015b on a personal computer with a Core i7 and 8 GB of RAM.

5.1 Experiment I

In the proposed work, we extracted two sets of descriptors, HOG and geometric features, for license number plate recognition. In experimental setup 1, we selected 250 images for each of the training and testing phases. The proposed algorithm is tested by selecting 8 geometric and 500 HOG features for each training and testing image sample, prior to fusion, as discussed in Section 4.4. The fused feature vector is later fed to SVM for recognition. Table 1 presents the complete HOG feature results, with a maximum accuracy of 98.90% on the proposed framework and a lowest accuracy of 96.80% on the EBT classifier. Table 2 presents the results on the eight geometric features, with a maximum accuracy of 97.8% on the proposed framework. Table 3 presents the proposed selected-fusion feature results, with a maximum accuracy of 99.50%. The results show that the proposed feature fusion algorithm performs significantly better (Table 4).

5.2 Experiment II

In experiment II, 200 images are selected for testing the proposed system and the remaining 300 images are selected for training the classifier. For each training and testing phase, 2000 HOG features and eight geometric features, fused based on vector dimension as discussed in Section 4.4, are utilised. Table 5 presents the fusion results of the 2000 HOG features and eight geometric features, with a maximum accuracy of 96.4% on the DT algorithm. The comparison of Table 5 with Table 3 clearly shows that the results decrease when the number of features is increased. The comparison of both tables is also shown in Fig. 8.

5.3 Experiment III

In experiment III, 180 sample images are selected for testing and 320 images for training the classifier. In this phase of experiments, the selected fused feature set, which contains 3000 HOG features and eight geometric features, is utilised. These fusion results are shown in Table 6, which has a maximum accuracy of 97.2% on the
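The scalar measures reported in the tables can all be derived from raw confusion-matrix counts; a compact sketch follows (AUC is omitted, since it requires ranked classifier scores rather than counts, and the function name is our own):

```python
def performance_measures(tp, fp, tn, fn):
    """Standard measures from confusion-matrix counts:
    true/false positives (tp, fp) and true/false negatives (tn, fn)."""
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "fpr": fp / (fp + tn),                    # false positive rate
        "fnr": fn / (fn + tp),                    # false negative rate
        "fdr": fp / (fp + tp),                    # false discovery rate
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, 90 true positives, 10 false positives, 80 true negatives, and 20 false negatives give a precision of 0.9 and an accuracy of 0.85, the same scale of values as in Tables 1-3 (where FNR and FDR are reported as percentages).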
Table 3 Fusion results of 500 HOG features and eight geometric features
Performance measures results: features fusion
Classifier Sensitivity FPR FNR, % FDR AUC Precision, % Accuracy, %
KNN 0.994 0.004 0.50 0.55 0.994 99.70 99.5
EBT 0.990 0.009 0.90 0.90 0.999 99.10 99.1
LDA 0.967 0.032 3.20 3.30 0.995 96.70 96.8
DT 0.969 0.029 2.80 2.70 0.978 97.30 97.2
proposed 0.995 0.003 0.40 0.45 0.999 100.0 99.5

Table 4 Confusion matrix of proposed features fusion algorithm results (500 HOG, eight geometric)
Class Positive Negative
Proposed
positive 100% 0.5%
negative 0.4% 100%
KNN
positive 99.3% 0.7%
negative 0.4% 100%
EBT
positive 98.8% 1.2%
negative 0.8% 99.2%
DT
positive 95.6% 4.4%
negative 1.5% 98.5%
LDA
positive 97.0% 3.0%
negative 3.4% 96.6%

Fig. 8  Graphical comparison of experiments 1 and 2 with respect to accuracy and FNR

Table 5 Fusion of 2000 HOG features and eight geometric features


Performance measures
Classifier Sensitivity FPR FNR FDR AUC Precision, % Accuracy, %
KNN 0.95 0.03 3.90 4.30 0.95 95.70 96.1
EBT 0.94 0.04 4.40 4.50 0.97 95.50 95.6
LDA 0.93 0.06 6.70 7.45 0.90 92.55 93.3
DT 0.95 0.03 3.60 3.95 0.97 96.05 96.4
proposed 0.94 0.04 4.70 5.05 0.95 94.95 95.3

proposed framework. The comparison of Tables 5 and 6 shows that, with the proposed framework, accuracy increased by up to 1.9%, but the overall results decrease. The comparison of experiments 1, 2, and 3 is shown in Fig. 9, from which it is clear that increasing the number of features decreases the accuracy.

5.4 Caltech car dataset

The Caltech car dataset [25] consists of a total of 1155 images. For training the classifier, we adopted a 60:40 strategy for training and testing. In this round of experiments, the proposed fusion technique is performed: 500 HOG features and eight geometric features are selected through entropy and fed to the classifier. The results of the proposed model are shown in Table 7, and the best accuracy obtained is 99.8% with the proposed algorithm. In Table 7, the proposed algorithm is also compared with other classification models, which produce accuracies of 99.4, 99.2, and 99.5% for DT, LDA, and EBT, showing that the proposed approach performs well with other classification methods too. Finally, a graphical comparison is
Fig. 9  Graphical comparison of experiments 1, 2, and 3 with respect to accuracy and FNR

Table 6 Performance measures for integration of 3000 HOG features and eight geometric features
Fusion of 3000 HOG features and eight geometric features
Classifier Sensitivity FPR FNR FDR AUC Precision, % Accuracy, %
KNN 0.86 0.13 11.90 10.50 0.96 89.50 88.1
EBT 0.85 0.13 12.20 10.25 0.96 89.74 87.8
LDA 0.94 0.04 4.80 4.80 0.98 95.20 95.2
DT 0.95 0.03 3.90 3.95 0.96 96.05 96.1
proposed 0.97 0.02 2.80 2.90 0.96 97.10 97.2

Fig. 10  Graphical comparison of our local database with Caltech car dataset in terms of accuracy and FNR

been made between our local database and the Caltech database in Fig. 10.

Table 7  Fusion of 500 HOG features and eight geometric features for the Caltech dataset
Classification model    FPR     FNR, %    Accuracy, %
DT                      0.01    0.6       99.4
LDA                     0.02    0.8       99.2
fine-KNN                0.08    3.0       97.0
cubic-KNN               0.34    11.3      88.7
weighted-KNN            0.26    9.0       91.0
EBT                     0.01    0.5       99.5
subspace KNN            0.17    5.9       94.1
proposed                0.00    0.2       99.8

5.5 Medialab LPR database

In this section, the proposed algorithm is tested on 324 still images collected from the Medialab LPR database [26]. The images are collected from the close view, day colour view, and day colour close view. For validation of the proposed algorithm, we adopted a 50:50 strategy for training and testing. The proposed entropy-based feature selection technique is performed and obtains a maximum recognition rate of 99.30%, as shown in Table 8. The proposed results are also compared with other state-of-the-art classification algorithms, the best of which obtains a recognition rate of 99%. A comparison of the proposed algorithm with existing LPR techniques tested on still images is also made, as depicted in Table 10. The proposed method performs significantly better compared with existing methods.

5.6 Discussion

In this section, we summarise our proposed system in terms of qualitative measures. The proposed system, in general, is a conjunction of two major steps: (i) preprocessing and segmentation and (ii) feature extraction and recognition, where each step is an amalgamation of a series of sub-steps, as shown in Fig. 1. The LPR results are also shown in Fig. 6. To validate the performance of the proposed system, a comparison is also made with four classification algorithms: KNN, EBT, LDA, and DT. For the LPR system, HOG and geometric features are extracted and integrated, as discussed in Section 4.4. The results of the complete HOG features and eight geometric features are shown in Tables 1
Table 8  Proposed algorithm results on the Medialab LPR database. The best values are presented in bold
Classification model    FPR     FNR, %    Accuracy, %    Time, s
DT                      0.03    2.4       97.6           45
LDA                     0.02    1.0       99.0           12
fine-KNN                0.10    4.7       95.3           18
cubic-KNN               0.08    3.7       96.3           10
weighted-KNN            0.05    2.7       97.3           23
EBT                     0.08    4.1       95.9           7
subspace KNN            0.12    8.1       91.9           4
proposed                0.01    0.7       99.3           6

Table 10  Comparison on the Medialab LPR database with existing methods
Method                          Year    Recognition rate, %
Anagnostopoulos et al. [33]     2006    96.5
Giannoukos et al. [34]          2010    90.9
Psyllos et al. [26]             2012    94.6
Hsu et al. [35]                 2013    92.1
Shahraki et al. [36]            2013    84
Smara and Khalefah [37]         2014    97.61
Davis and Arunvinodh [38]       2015    92
Soora and Deshpande [39]        2016    97.56
proposed                        —       99.30

database and obtained maximum results of 99.8 and 99.30%, as depicted in Tables 7 and 8. From the above three experiments on our local database, it is clear that experiment I performs best, with an accuracy of 99.5%, which is confirmed by the confusion matrix in Table 4. Sample results are shown in Figs. 6, 7, and 11. A comparison of the proposed method with existing methods has also been made, and the results are shown in Tables 9 and 10, from which it is clear that the proposed system outperforms existing methods.

6 Conclusion
In this paper, a novel system is proposed for LPR based on features selection. The proposed system comprises three major steps: (i) preprocessing and segmentation of the character area, (ii) features extraction of the ROI, and (iii) features fusion using a novel technique. With the proposed method, we have tried to deal with different problems such as light variation, occlusion, and so on. The simulation results confirm that the proposed approach manages to tackle the above-mentioned problems. Moreover, it is also concluded that a cascaded design can comfortably manage the mentioned problems at its early stages, and that the selection of good features results in improved classification accuracy. As future work, we will add a few more sets of features and implement an improved features selection technique to reduce the error rate and increase the recognition rate.

Fig. 11  Sample recognition results
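The fusion step (iii) and the entropy-based selection can be sketched as follows. The exact entropy criterion, feature dimensions, and data are not reproduced here: synthetic feature matrices and plain Shannon entropy of per-feature histograms stand in for the paper's method, and keeping the highest-entropy columns is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two feature sets per sample: a HOG vector and
# eight geometric features (values are synthetic).
hog_feats = rng.random((100, 500))
geo_feats = rng.random((100, 8))

# Serial fusion by concatenation, one row per sample.
fused = np.hstack([hog_feats, geo_feats])           # shape (100, 508)

def shannon_entropy(col, bins=16):
    """Shannon entropy (bits) of one feature's value histogram."""
    counts, _ = np.histogram(col, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Score each fused feature and keep the k highest-scoring columns.
scores = np.array([shannon_entropy(fused[:, j]) for j in range(fused.shape[1])])
k = 100
selected = fused[:, np.argsort(scores)[::-1][:k]]   # shape (100, 100)
```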
Table 9  Comparison with existing techniques using local database

Method                        Year   Recognition rate, %
Chen et al. [8]               2009   93.10
Wen et al. [5]                2011   97.88
Kasaei et al. [27]            2011   98.20
Zheng et al. [19]             2012   98.00
Rasheed et al. [16]           2012   90.62
Dehshibi et al. [28]          2012   94.50
Cinsdikici et al. [15]        2013   92.00
Azad and Shayegh [18]         2013   98.66
Gou et al. [20]               2014   97.90
Rabee and Barhumi [29]        2014   97.89
Rajput et al. [30]            2015   96.40
Xing et al. [31]              2016   95.00
Panahi and Gholampour [32]    2016   97.60
proposed                      —      99.50

and 2. Later on, we carried out three experiments, I, II, and III, to validate the proposed algorithm. In experiment I, 500 HOG features and eight geometric features are integrated, and their results are depicted in Table 3 in the form of seven measures. In experiment II, 2000 HOG features and eight geometric features are integrated, and their results are depicted in Table 5, having an accuracy of 96.4%. In experiment III, 3000 HOG features and eight geometric features are integrated, and their results are described in Table 6, having an accuracy of 97.2%. The comparison of these three experiments is also shown in Figs. 8 and 9. Additionally, the proposed algorithm is tested on the Caltech dataset and the Medialab LPR database.

7 References
[1] Liu, G., Ma, Z., Du, Z., et al.: 'The calculation method of road travel time based on license plate recognition technology', in 'Advances in information technology and education' (Springer Berlin Heidelberg, 2011), pp. 385–389
[2] Du, S., Ibrahim, M., Shehata, M., et al.: 'Automatic license plate recognition (ALPR): a state-of-the-art review', IEEE Trans. Circuits Syst. Video Technol., 2013, 23, (2), pp. 311–325
[3] Patel, C., Shah, D., Patel, A.: 'Automatic number plate recognition system (ANPR): a survey', Int. J. Comput. Appl., 2013, 69, (9), pp. 21–33
[4] Wang, Y., Ban, X., Chen, J., et al.: 'License plate recognition based on SIFT feature', Opt.-Int. J. Light Electron Opt., 2015, 126, (21), pp. 2895–2901
[5] Wen, Y., Lu, Y., Yan, J., et al.: 'An algorithm for license plate recognition applied to intelligent transportation system', IEEE Trans. Intell. Transp. Syst., 2011, 12, (3), pp. 830–845
[6] Deb, K., Khan, I., Saha, A., et al.: 'An efficient method of vehicle license plate recognition based on sliding concentric windows and artificial neural network', Procedia Technol., 2012, 4, pp. 812–819
[7] Naz, S., Hayat, K., Razzak, M.I., et al.: 'The optical character recognition of Urdu-like cursive scripts', Pattern Recognit., 2014, 47, (3), pp. 1229–1248
[8] Chen, Z.-X., Liu, C.-Y., Chang, F.-L., et al.: 'Automatic license-plate location and recognition based on feature salience', IEEE Trans. Veh. Technol., 2009, 58, (7), pp. 3781–3785
[9] Zhai, X., Bensaali, F., Sotudeh, R.: 'Real-time optical character recognition on field programmable gate array for automatic number plate recognition system', IET Circuits Devices Syst., 2013, 7, (6), pp. 337–344
[10] Yang, Y., Gao, X., Yang, G.: 'Study the method of vehicle license locating based on color segmentation', Procedia Eng., 2011, 15, pp. 1324–1329
[11] Yousef, K.M.A., Al-Tabanjah, M., Hudaib, E., et al.: 'SIFT based automatic number plate recognition'. 6th Int. Conf. on Information and Communication Systems (ICICS), 2015, pp. 124–129
[12] Yang, X., Hao, X.-L., Zhao, G.: 'License plate location based on trichromatic imaging and color-discrete characteristic', Opt.-Int. J. Light Electron Opt., 2012, 123, (16), pp. 1486–1491
[13] Chang, S.-L., Chen, L.-S., Chung, Y.-C., et al.: 'Automatic license plate recognition', IEEE Trans. Intell. Transp. Syst., 2004, 5, (1), pp. 42–53
[14] Jobin, K.V., Jiji, C.V., Anurenjan, P.R.: 'Automatic number plate recognition system using modified stroke width transform'. Fourth National Conf. on
Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2013, pp. 1–4
[15] Cinsdikici, M., Ugur, A., Tunalı, T.: 'Automatic number plate information extraction and recognition for intelligent transportation system', Imaging Sci. J., 2013, 55, (2), pp. 102–113
[16] Rasheed, S., Naeem, A., Ishaq, O.: 'Automated number plate recognition using Hough lines and template matching'. Proc. of the World Congress on Engineering and Computer Science, 2012, vol. 1, pp. 24–26
[17] Ghazal, M., Hajjdiab, H.: 'License plate automatic detection and recognition using level sets and neural networks'. 1st Int. Conf. on Communications, Signal Processing, and their Applications (ICCSPA), 2013, pp. 1–5
[18] Azad, R., Shayegh, H.R.: 'New method for optimization of license plate recognition system with use of edge detection and connected component'. 3rd Int. Conf. on Computer and Knowledge Engineering (ICCKE), 2013, pp. 21–25
[19] Zheng, K., Zhao, Y., Gu, J., et al.: 'License plate detection using haar-like features and histogram of oriented gradients'. IEEE Int. Symp. on Industrial Electronics (ISIE), 2012, pp. 1502–1505
[20] Gou, C., Wang, K., Yu, Z., et al.: 'License plate recognition using MSER and HOG based on ELM'. IEEE Int. Conf. on Service Operations and Logistics, and Informatics (SOLI), 2014, pp. 217–221
[21] Sen, D., Pal, S.K.: 'Generalized rough sets, entropy, and image ambiguity measures', IEEE Trans. Syst. Man Cybern. B, Cybern., 2009, 39, (1), pp. 117–128
[22] Holland, S.M.: 'Principal components analysis (PCA)', University of Georgia, 2008
[23] Yoon, Y., Ban, K.-D., Yoon, H., et al.: 'Blob extraction based character segmentation method for automatic license plate recognition system'. IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC), 2011, pp. 2192–2196
[24] Paunwala, C.N., Patnaik, S.: 'Automatic license plate localization using intrinsic rules saliency', Int. J. Adv. Comput. Sci. Appl., 2011, 10, pp. 105–111
[25] Griffin, G., Holub, A., Perona, P.: 'Caltech-256 object category dataset', 2007
[26] Psyllos, A., Anagnostopoulos, C.-N., Kayafas, E.: 'M-SIFT: a new method for vehicle logo recognition'. IEEE Int. Conf. on Vehicular Electronics and Safety (ICVES), 2012, pp. 261–266
[27] Kasaei, S.H.M., Kasaei, S.M.M.: 'Extraction and recognition of the vehicle license plate for passing under outside environment'. European Intelligence and Security Informatics Conf. (EISIC), 2011, pp. 234–237
[28] Dehshibi, M.M., Allahverdi, R.: 'Persian vehicle license plate recognition using multiclass AdaBoost', Int. J. Comput. Electr. Eng., 2012, 4, (3), p. 355
[29] Rabee, A., Barhumi, I.: 'License plate detection and recognition in complex scenes using mathematical morphology and support vector machines'. IWSSIP 2014 Proc., 2014, pp. 59–62
[30] Rajput, H., Som, T., Kar, S.: 'An automated vehicle license plate recognition system', Computer, 2015, 8, pp. 56–61
[31] Xing, J., Li, J., Xie, Z., et al.: 'Research and implementation of an improved radon transform for license plate recognition'. 8th Int. Conf. on Intelligent Human–Machine Systems and Cybernetics (IHMSC), 2016, vol. 1, pp. 42–45
[32] Panahi, R., Gholampour, I.: 'Accurate detection and recognition of dirty vehicle plate numbers for high-speed applications', IEEE Trans. Intell. Transp. Syst., 2016, 18, (4), pp. 767–779
[33] Anagnostopoulos, C.N.E., Anagnostopoulos, I.E., Loumos, V., et al.: 'A license plate-recognition algorithm for intelligent transportation system applications', IEEE Trans. Intell. Transp. Syst., 2006, 7, (3), pp. 377–392
[34] Giannoukos, I., Anagnostopoulos, C.-N., Loumos, V., et al.: 'Operator context scanning to support high segmentation rates for real time license plate recognition', Pattern Recognit., 2010, 43, (11), pp. 3866–3878
[35] Hsu, G.-S., Chen, J.-C., Chung, Y.-Z.: 'Application-oriented license plate recognition', IEEE Trans. Veh. Technol., 2013, 62, (2), pp. 552–561
[36] Shahraki, A.A., Ghahnavieh, A.E., Mirmahdavi, S.A.: 'License plate extraction from still images'. 4th Int. Conf. on Intelligent Systems Modelling & Simulation (ISMS), 2013, pp. 45–48
[37] Smara, G.A., Khalefah, F.: 'Localization of license plate number using dynamic image processing techniques and genetic algorithms', IEEE Trans. Evol. Comput., 2014, 18, (2), pp. 244–257
[38] Davis, A.M., Arunvinodh, C.: 'Automatic license plate detection using vertical edge detection method'. Int. Conf. on Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015, pp. 1–6
[39] Soora, N.R., Deshpande, P.S.: 'Color, scale, and rotation independent multiple license plates detection in videos and still images', Math. Probl. Eng., 2016, 2016

