Research Article
Muhammad Attique Khan1, Muhammad Sharif1, Muhammad Younus Javed2, Tallha Akram3, Mussarat Yasmin1, Tanzila Saba4
1Department of Computer Science, COMSATS Institute of Information Technology, Wah Cantt, Pakistan
2Department of Computer Science & Engineering, HITEC University, Museum Road, Taxila, Pakistan
3Department of EE, COMSATS Institute of Information Technology, Wah Cantt, Pakistan
4College of Computer and Information Science, Prince Sultan University, Riyadh, Saudi Arabia
E-mail: attique@ciitwah.edu.pk
Abstract: License plate recognition (LPR) systems play a vital role in security applications, which include road traffic monitoring, street activity monitoring, identification of potential threats, and so on. Numerous methods have been adopted for LPR, but there is still room for a single standard approach able to deal with all sorts of problems, such as light variations, occlusion, and multiple views. The proposed approach is an effort to deal with such conditions by incorporating multiple-feature extraction and fusion. The proposed architecture comprises four primary steps: (i) selection of the luminance channel from the CIE-Lab colour space, (ii) binary segmentation of the selected channel followed by image refinement, (iii) fusion of histogram of oriented gradients (HOG) and geometric features followed by selection of appropriate features using a novel entropy-based method, and (iv) feature classification with a support vector machine (SVM). To authenticate the results of the proposed approach, different performance measures are considered: false positive rate (FPR), false negative rate (FNR), and accuracy, for which a maximum of 99.5% is achieved. Simulation results reveal that the proposed method performs exceptionally well compared with existing works.
Nomenclature

IR  red channel
IG  green channel
IB  blue channel
IL  extraction of luminance channel
σw²(t1)  weighted sum of two classes
ψ(I)  Otsu segmentation
φI(cv)  two-dimensional convolution operation
φI(Bg)  remove extra regions
φI(er)  erosion operation
φI(FL)  filled image
φ(Final1)  final segmented image
φI(Flr)  floor operation
φ(Δ)  area
φ(maj)  major axis length
φ(min)  minor axis length
φ(Pr)  perimeter
φ(Et)  extent
φ(St)  solidity
ΔB  area of bounding box
φ(ξFV)  fused feature vector

1 Introduction

Development of an efficient license plate recognition (LPR) system is one of the most active research areas, with numerous applications such as road traffic monitoring, traffic law enforcement, street activity monitoring, identification of potential security threats, and so on [1, 2]. Recently, several articles have been published dealing with automatic LPR (ALPR) [2–4]. In [3], the authors discussed a novel approach to ALPR in which the key factors are image size, processing time, and success rate. In [2], the authors discussed various important approaches for the development of an ALPR system, where the primary levels include: (i) localisation of the number plate, (ii) character recognition, and (iii) digit classification [4]. A few existing algorithms [5] performed well under fixed conditions, such as a stationary background, constant illumination, and a single view. Most existing LPR systems follow techniques such as artificial neural networks (ANNs) [6], optical character recognition (OCR) [7], salient features [8], support vector machines (SVMs) [9], colour segmentation [10], scale-invariant feature transform (SIFT) [11], fuzzy-based algorithms, and colour-distance characteristics [12]. The general flow of an LPR system mostly comprises four primary steps: (i) preprocessing, (ii) segmentation and region of interest (ROI) detection, (iii) feature extraction, and (iv) recognition.

In this paper, a new technique is implemented for an LPR system based on two types of extracted features and their fusion. First, entropy-based feature selection and then serial-based feature fusion is performed. The proposed architecture comprises four primary steps: (i) selection of the luminance channel from the CIE-Lab colour space, (ii) binary segmentation of the selected channel and image refinement, (iii) fusion of HOG and geometric features using a novel method of entropy and vector dimension, and (iv) feature classification with SVM. This paper is organised as follows. Related work is described in Section 2. Section 3 presents the motivation and contribution. Section 4 presents the proposed work, which includes preprocessing, segmentation, feature extraction, feature selection and fusion, and classification. Sections 5 and 6 describe the experimental results and the conclusion of this paper.

2 Related work

Several techniques have recently been proposed for ALPR. Chang et al. [13] introduced an ALPR system based on two modules: (i) license plate coordinate identification and (ii) digit identification. For feature extraction of the licence plate, a fuzzy-based approach is
utilised, and later an ANN is implemented for the classification. Wang et al. [4] introduced a Chinese number plate recognition system based on SIFT features which incorporates position and orientation information. The introduced system consists of two major steps: (i) SIFT feature extraction from template images to store in the database and (ii) matching of the extracted SIFT features of each image with the database. Jobin et al. [14] introduced an ALPR system based on the stroke width transform which primarily consists of three steps: (i) removal of motion blurring in an image utilising interleaved plane pixel layers, (ii) edge detection through plane and vertical masks, and (iii) utilisation of OCR for the recognition. The introduced method worked effectively under light variations and even under shadow. Cinsdikici et al. [15] suggested a new information retrieval system for number plate recognition. The suggested method was a combination of two major steps, namely character segmentation and number recognition. The segmentation step comprised ROI extraction using Kaiser resizing, morphological operations, artificial shifting, and bi-directional vertical thresholding. For recognition, eigen-space features are first extracted, which were then utilised through principal component analysis (PCA) and a back-propagation neural network. Rasheed et al. [16] introduced a robust method for LPR based on Hough lines, utilising the Hough transform. The introduced method comprised two major steps: (i) number plate detection using a Canny edge detector and the Hough transform and (ii) recognition with template matching. The experiments were done on 102 images having different illumination conditions. Ghazal and Hajjdiab [17] introduced a new approach based on level sets and ANNs. The introduced method began with the segmentation of moving regions through a background subtraction method. A colour-based particle filtering technique was utilised for the tracking of segmented regions. To detect the license plate, a Lab colour space transformation was performed and the level set algorithm was utilised to locate its contour. The geometric features were extracted from the detected plates, which were later fed into the artificial neural architecture for the recognition. Azad and Shayegh [18] designed a new system for ALPR based on edge detection and connected components. An adaptive thresholding method was implemented to convert the image into binary form, and then edge detection and some morphological operations were utilised for the localisation of the number plate. Finally, a few statistical features of the localised number plate were extracted for its final classification.

3 Motivation and contribution

Broadly speaking, an automatic license number plate recognition system has a few major steps, which include: (i) preprocessing, (ii) segmentation of the ROI, (iii) feature extraction, and (iv) number recognition/classification. ROI detection and feature extraction play a vital role in this field, and a large number of algorithms have been proposed for number plate recognition systems. A few existing techniques include: OCR systems [7], salient-feature based [8], colour-segmentation based [10], SIFT-feature based [11], fuzzy-based algorithms, Haar-like-feature based [19], HOG features [20], and classes of entropy measures based on rough set theory [21]. All the systems discussed above perform well, but under fixed conditions. To handle this problem, we propose a novel method which facilitates the fusion of multiple sets of features. Our major contributions in this article are: (i) selection of the luminance channel because of its minimum signal-to-noise ratio, (ii) extraction of eight geometric features and HOG features in order to compute a fused feature vector, with the introduction of a novel entropy-based feature selection method to select the most relevant features, and (iii) classification of the fused features through SVM for LPR. A selection of the Caltech car dataset, which consists of 1155 images, allows us to perform comprehensive simulations. The addition of our own local database, which consists of 500 template images of different views and different angles, further improves the accuracy, and the method is tested on four classifiers. Finally, a comparison is made with existing LPR techniques.

4 Proposed work

In this section, we are limited to our proposed work, which comprises four major steps: (i) preprocessing, (ii) segmentation of the ROI, (iii) feature extraction and fusion, and (iv) number recognition. In the preprocessing step, we use a CIE-Lab colour space transformation to get the luminance channel, which is later utilised in the segmentation phase. In the segmentation step, we initially perform Otsu thresholding and convert the image into binary. In the second step, the morphological operations of erosion and dilation are utilised to make the segmented images more accurate. The introduction of image subtraction (eroded from dilated image) and two-dimensional (2D) convolution further refines the results. In the final step, colour mapping is done on the processed image before subjecting it to morphological operations. In the feature extraction step, we extract HOG features and eight geometric features of the cropped region. Finally, the extracted features are fused and fed to an SVM classifier for number plate recognition. The detailed diagram of the proposed algorithm is shown in Fig. 1. Each section is described below in detail.

4.1 Preprocessing

In the preprocessing step, we utilise the CIE-Lab transformation, which is a combination of three channels, where the L channel represents luminance and the a and b channels describe the colour components. After the transformation, the luminance channel is extracted, which is later utilised in the segmentation step. The extraction via the Lab colour space transformation is as follows. Let the original input license number plate image be I(x, y). The R, G, and B channels are extracted (here shown for the red channel; the sum runs over the three channels) as

IR = IR / ∑_{u=1}^{3} Iu    (1)

IET Image Process., 2018, Vol. 12 Iss. 2, pp. 200-209
© The Institution of Engineering and Technology 2017
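The per-pixel channel normalisation of (1) can be sketched in a few lines; the function name and the zero-sum guard are our additions for illustration, not the paper's notation:

```python
def normalized_channels(pixel):
    """Normalise one RGB pixel: each channel divided by the channel sum.

    A sketch of (1); returns (I^R, I^G, I^B). The small epsilon guards
    against division by zero on black pixels (our assumption, not stated
    in the paper).
    """
    r, g, b = pixel
    s = r + g + b or 1e-12  # avoid dividing by zero
    return r / s, g / s, b / s

# The normalised channels of any non-black pixel sum to 1.
print(normalized_channels((120, 60, 20)))  # -> (0.6, 0.3, 0.1)
```

This removes the dependence on overall brightness before the luminance channel is extracted.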
Fig. 2 Image preprocessing results
(a) Original image, (b) LAB colour space transformation, (c) L-channel selection, (d) Contour image
Iζ ∈ υ × Iw    (4)

where ζ ∈ (IX, IY, IZ) and w ∈ (IR, IG, IB). Hence, the L channel is extracted as

IL = ρ × F(IY) − 16    (5)

where the value ρ = 116 is selected.

4.2 Image segmentation

In the segmentation step, the processed luminance channel, as extracted in the previous step, is converted into a binary image. First, we utilise Otsu's segmentation, which is a clustering-based algorithm. Let us consider a luminance image IL with Ih grey levels (0, 1, 2, 3, …, Ih − 1) and total number of pixels ℕ = ∑_{i=0}^{Ih−1} ni, where ni is the number of pixels at grey level i. Otsu segmentation is based on the intra-class variance, defined as a weighted sum over the two classes, as shown in Fig. 2:

σw²(t1) = p1(t1)(μ1(t1) − μ)² + p2(t1)(μ2(t1) − μ)²    (6)

where the class probabilities are p1(t1) = ∑_{i=1}^{t1} pi and p2(t1) = 1 − p1(t1). The class means are μ1(t1) = (1/p1(t1)) ∑_{i=1}^{t1} i·pi and μ2(t1) = (1/p2(t1)) ∑_{i=t1+1}^{Ih} i·pi, and the class variances are σ1²(t1) = ∑_{i=1}^{t1} (i − μ1(t1))²·pi/p1(t1) and σ2²(t1) = ∑_{i=t1+1}^{Ih−1} (i − μ2(t1))²·pi/p2(t1). Hence, the cost function of threshold selection for Otsu segmentation is described below.

The resulting binary image ψ(I) is then refined morphologically with a structuring element IE. The erosion operation is

(ψ(I) ⊖ IE)(x, y) = min_{u,v} (ψ(I)(x + u, y + v))    (9)

After implementing erosion and dilation, we subtract the eroded image from the dilated image as

φI(D) = (ψ(I) ⊕ IE) − (ψ(I) ⊖ IE)    (10)

A 2D convolution operation is performed on the subtracted image, computed by (10), to make the characters in the licence plate more visible. The 2D convolution operation is computed as

φI(cv) = ∑_{a=−∞}^{∞} ∑_{b=−∞}^{∞} α(a, b) β(p1 − a, p2 − b)    (11)

After performing the 2D convolution operation, the intensity values of the segmented image, which are low at the time of the convolution operation, are enhanced. Some morphological operations are also implemented to remove the extra regions and obtain the final segmented regions as

φI(Bg) = logical(φI(cv))    (12)

φI(er) = φI(Bg) ⊖ IE    (13)

φI(FOB) = φI(Bg) − φI(er)    (14)

φI(FL) = (φ(I) ∙ IE)(x, y) = (ψ(I) ⊕ IE) ⊖ IE    (15)

The final segmented region is computed by (16) and the results are shown in Fig. 3.
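The threshold search behind (6) and the erosion/dilation refinement of (9), (10), and (14) can be sketched as follows; this is our illustrative reading in plain Python with a 3×3 structuring element, with all names our own, not the paper's implementation:

```python
def otsu_threshold(hist):
    """Pick the threshold maximising the class-separation measure of (6).
    hist[i] is the pixel count at grey level i."""
    total = sum(hist)
    probs = [h / total for h in hist]
    mu = sum(i * p for i, p in enumerate(probs))  # global mean grey level
    best_t, best_score = 1, -1.0
    for t1 in range(1, len(hist)):
        p1 = sum(probs[:t1])
        p2 = 1.0 - p1
        if p1 == 0.0 or p2 == 0.0:
            continue
        mu1 = sum(i * probs[i] for i in range(t1)) / p1
        mu2 = sum(i * probs[i] for i in range(t1, len(hist))) / p2
        score = p1 * (mu1 - mu) ** 2 + p2 * (mu2 - mu) ** 2  # cf. (6)
        if score > best_score:
            best_score, best_t = score, t1
    return best_t

def erode(img):
    """3x3 binary erosion: minimum over the neighbourhood, cf. (9)."""
    h, w = len(img), len(img[0])
    return [[min(img[j][i]
                 for j in range(max(0, y - 1), min(h, y + 2))
                 for i in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def dilate(img):
    """3x3 binary dilation: maximum over the neighbourhood."""
    h, w = len(img), len(img[0])
    return [[max(img[j][i]
                 for j in range(max(0, y - 1), min(h, y + 2))
                 for i in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def refine(img):
    """Dilated minus eroded image, cf. the subtractions in (10) and (14):
    keeps a thin band around region boundaries."""
    d, e = dilate(img), erode(img)
    return [[d[y][x] - e[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```

On a strongly bimodal histogram the returned threshold falls in the empty valley between the two modes, and `refine` applied to a filled square leaves only its outline.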
where φF(x) = max{m ∈ ℤ | m ≤ x}. After performing the floor function, the regions in the image are extracted by applying regionprops to φI(Flr). The regionprops are used to detect the ROI in the segmented image. For regionprops, we require four parameters x, y, t, w, i.e. the position, length, and breadth. Finally, the region which contains the character area is selected. Hence, the calculation of these character regions is as follows:

φI(xlow) = φF(min(φcn))    (19)

φI(xhigh) = φC(max(φcn))    (20)

φI(xadd) = φC(φcn(size(φcn)))    (21)

φI(ylow) = φF(min(φcn))    (22)

φI(yadd) = φC(max(φcn))    (23)

Hence, the final segmented image is computed as shown in the equation below. The ROI detection results with respect to the selected character regions are shown in Fig. 4.

4.3 Features extraction

Among the extracted geometric features, the extent and solidity of a region are computed as

φ(Et) = φ(Δ) / ΔB    (28)

φ(St) = φ(Δ) / Convex area    (29)

After extraction of both the HOG and geometric features, these feature sets are fused to generate a new feature vector, which is later fed to the SVM for number recognition.

4.4 Features selection and integration

In this section, entropy-based feature selection and serial-based feature fusion are performed. Many authors have utilised the well-known PCA algorithm [22] for feature reduction. Here, however, we consider the problem of automatic feature selection: if we say that '100 features are selected by PCA', the major question arises of how to select those features. To handle this problem, we implemented an entropy-based feature selection technique, where we first calculate the entropy of all extracted features and then select the most relevant features based on their highest scores. The mathematical description of the proposed feature selection technique is defined as follows: let ξ(HOG) represent the HOG feature vector, having dimension 1 × 3780, and ξ(GM) represent the geometric feature vector, having length 1 × 8. Then the entropy of both feature vectors is calculated. The entropy is defined as

minγ ζ(γ) = (1/2) γᵀυγ − eᵀγ    (35)
s.t. 0 ≤ γ ≤ δ and i = 1, 2, 3, …, κ
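The selection-and-fusion idea of this section can be sketched as follows, under our reading: score each feature column by a Shannon entropy estimate, keep the highest-scoring columns, and concatenate (serially fuse) the surviving vectors. The histogram-based entropy estimator and all names are our assumptions, not the paper's formulas:

```python
import math

def feature_entropy(values, bins=10):
    """Shannon entropy of one feature's value distribution (our estimator)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0  # a constant feature carries no information
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def select_by_entropy(feature_matrix, keep):
    """Keep the `keep` feature columns with the highest entropy scores."""
    n_features = len(feature_matrix[0])
    scores = [feature_entropy([row[j] for row in feature_matrix])
              for j in range(n_features)]
    ranked = sorted(range(n_features), key=lambda j: scores[j], reverse=True)
    return sorted(ranked[:keep])

def serial_fusion(hog_vec, geo_vec):
    """Serial (concatenation) fusion of two feature vectors."""
    return list(hog_vec) + list(geo_vec)
```

A constant feature column scores zero entropy and is the first to be dropped, which matches the intuition of keeping only informative features before fusion.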
own dataset developed by our lab. These images were captured in a university car park using our own cameras. The resolution of each image is 1536 × 2048. Sample images are shown in Fig. 6, where every image is captured with consideration of different views and different angles. Additionally, the proposed algorithm is tested on two publicly available datasets, the Caltech car dataset and the Medialab LPR dataset. A comparison of five state-of-the-art classifiers is made, which include: K-nearest neighbour (KNN), ensemble boosted tree (EBT), linear discriminant analysis (LDA), decision tree (DT), and SVM. To validate the performance of the implemented system, seven measures are computed, which include accuracy [23], sensitivity, FNR, FPR, false discovery rate (FDR) [24], area under the curve (AUC), and precision. The main purpose of using these measures is to compare the results of the algorithm with existing methods. The simulation is done in Matlab 2015b on a personal computer with a Core i7 and 8 GB of RAM.

5.1 Experiment I

In the proposed work, we extracted two sets of descriptors, HOG and geometric features, for license number plate recognition. In experimental setup I, we selected 250 images each for the training and testing phases. The proposed algorithm is tested by selecting 8 geometric and 500 HOG features for each training and testing image sample, prior to fusion, as discussed in Section 4.4. The fused feature vector is later fed to the SVM for recognition. Table 1 presents the complete HOG feature results, with a maximum accuracy of 98.90% on the proposed framework and a lowest accuracy of 96.80% on the EBT classifier. Table 2 describes the eight geometric feature results, with a maximum accuracy of 97.8% on the proposed framework. Table 3 describes the proposed selected fusion feature results, with a maximum accuracy of 99.50%. The results show that the proposed feature fusion algorithm performs significantly better (Table 4).

5.2 Experiment II

In experiment II, 200 images are selected for testing the proposed system and the remaining 300 images are selected for training the classifier. For each training and testing phase, 2000 fused HOG features and eight geometric features, based on the vector dimension as discussed in Section 4.4, are utilised. Table 5 describes the fusion results of 2000 HOG features and eight geometric features, with a maximum accuracy of 96.4% on the DT algorithm. The comparison of Table 5 with Table 3 clearly shows that the results decrease when the number of features increases. The comparison of both tables is also shown in Fig. 8.

5.3 Experiment III

In experiment III, 180 sample images are selected for testing and 320 images for training the classifier. In this phase of the experiments, the selected fused feature set, which contains 3000 HOG features and eight geometric features, is utilised. These fusion results are shown in Table 6, which has a maximum accuracy of 97.2% on the
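Apart from AUC, which needs ranked classifier scores rather than counts, the listed measures follow directly from the binary confusion counts. A minimal sketch with the standard definitions (assumed by us; the counts in the example are hypothetical):

```python
def binary_measures(tp, fp, fn, tn):
    """Standard confusion-matrix measures used in Section 5 (AUC omitted:
    it requires ranked classifier scores, not just counts)."""
    sensitivity = tp / (tp + fn)          # true positive rate
    fnr = fn / (tp + fn)                  # false negative rate
    fpr = fp / (fp + tn)                  # false positive rate
    fdr = fp / (tp + fp)                  # false discovery rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "FNR": fnr, "FPR": fpr,
            "FDR": fdr, "precision": precision, "accuracy": accuracy}

m = binary_measures(tp=199, fp=1, fn=1, tn=199)  # hypothetical counts
# sensitivity = 0.995, FPR = 0.005, accuracy = 0.995
```

Note that FNR = 1 − sensitivity and FDR = 1 − precision, which is a quick consistency check on tables of reported measures.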
Table 3 Fusion results of 500 HOG features and eight geometric features
Performance measures results: features fusion
Classifier  Sensitivity  FPR    FNR, %  FDR   AUC    Precision, %  Accuracy, %
KNN         0.994        0.004  0.50    0.55  0.994  99.70         99.5
EBT         0.990        0.009  0.90    0.90  0.999  99.10         99.1
LDA         0.967        0.032  3.20    3.30  0.995  96.70         96.8
DT          0.969        0.029  2.80    2.70  0.978  97.30         97.2
proposed    0.995        0.003  0.40    0.45  0.999  100.0         99.5
Table 4 Confusion matrix of proposed features fusion algorithm results (500 HOG, eight geometric)
Class         Positive  Negative
proposed
  positive    100%      0.5%
  negative    0.4%      100%
KNN
  positive    99.3%     0.7%
  negative    0.4%      100%
EBT
  positive    98.8%     1.2%
  negative    0.8%      99.2%
DT
  positive    95.6%     4.4%
  negative    1.5%      98.5%
LDA
  positive    97.0%     3.0%
  negative    3.4%      96.6%
Fig. 8 Graphical comparison of experiments 1 and 2 with respect to accuracy and FNR
proposed framework. The comparison of Tables 5 and 6 shows that, with the proposed framework, accuracy increases by up to 1.9%, but the overall results decrease. The comparison of experiments I, II, and III is shown in Fig. 9, from which it is clear that increasing the number of features decreases the accuracy.

5.4 Caltech car dataset

The Caltech car dataset [25] consists of a total of 1155 images. For training the classifier, we adopted a 60:40 strategy for training and testing. In this round of experiments, the proposed fusion technique is performed, and 500 HOG features and eight geometric features are selected through entropy and fed to the classifier. The results of the proposed model are shown in Table 7, and the best accuracy obtained is 99.8% with the proposed algorithm. In Table 7, the proposed algorithm is compared with other classification models, producing accuracies of 99.4, 99.2, and 99.5% on DT, LDA, and EBT, which shows that the proposed algorithm also performs well with other classification methods. Finally, a graphical comparison is
Table 6 Performance measures for integration of 3000 HOG features and eight geometric features
Fusion of 3000 HOG features and eight geometric features
Classifier  Sensitivity  FPR   FNR    FDR    AUC   Precision, %  Accuracy, %
KNN         0.86         0.13  11.90  10.50  0.96  89.50         88.1
EBT         0.85         0.13  12.20  10.25  0.96  89.74         87.8
LDA         0.94         0.04  4.80   4.80   0.98  95.20         95.2
DT          0.95         0.03  3.90   3.95   0.96  96.05         96.1
proposed    0.97         0.02  2.80   2.90   0.96  97.10         97.2
Fig. 10 Graphical comparison of our local database with Caltech car dataset in terms of accuracy and FNR
Table 7 Fusion of 500 HOG features and eight geometric features for Caltech dataset
Classification model  Performance measures
                      FPR   FNR, %  Accuracy, %
DT                    0.01  0.6     99.4
LDA                   0.02  0.8     99.2
fine-KNN              0.08  3.0     97.0
cubic-KNN             0.34  11.3    88.7
weighted-KNN          0.26  9.0     91.0
EBT                   0.01  0.5     99.5
subspace KNN          0.17  5.9     94.1
proposed              0.00  0.2     99.8

been made between our local database and the Caltech database in Fig. 10.

5.5 Medialab LPR database

In this section, the proposed algorithm is tested on 324 still images, which are collected from the Medialab LPR database [26]. The images are collected from the close view, the day colour view, and the day colour close view. For validation of the proposed algorithm, we adopted a 50:50 strategy for training and testing. The entropy-based proposed feature selection technique is performed, and a maximum recognition result of 99.30% is obtained, as shown in Table 8. The proposed results are also compared with other state-of-the-art classification algorithms, with a highest recognition rate of 99%. A comparison of the proposed algorithm with existing LPR techniques tested on still images is also made, as depicted in Table 10. The proposed method performs significantly better compared with the existing methods.

5.6 Discussion

In this section, we epitomise our proposed system in terms of qualitative measures. The proposed system, in general, is a conjunction of two major steps: (i) preprocessing and segmentation and (ii) feature extraction and recognition, where each step is an amalgamation of a series of sub-steps, as shown in Fig. 1. The LPR results are also shown in Fig. 6. To validate the performance of the proposed system, a comparison is also made with four classification algorithms: KNN, EBT, LDA, and DT. For the LPR system, HOG and geometric features are extracted, which are integrated as discussed in Section 4.4. The results of the complete HOG features and eight geometric features are shown in Tables 1
and 2. Later on, we carried out three experiments, I, II, and III, to validate the proposed algorithm. In experiment I, 500 HOG features and eight geometric features are integrated, and their results are depicted in Table 3 in the form of seven measures. In experiment II, 2000 HOG features and eight geometric features are integrated, and their results are depicted in Table 5, with an accuracy of 96.4%. In experiment III, 3000 HOG features and eight geometric features are integrated, and their results are described in Table 6, with an accuracy of 97.2%. The comparison of these three experiments is also shown in Figs. 8 and 9. Additionally, the proposed algorithm is tested on the Caltech dataset and the Medialab LPR dataset.

6 Conclusion

In this paper, a novel system is proposed for LPR based on feature selection. The proposed system comprises three major steps: (i) preprocessing and segmentation of the character area, (ii) feature extraction from the ROI, and (iii) feature fusion using a novel technique. With the proposed method, we have tried our best to deal with different problems of light variation, occlusion, and so on. The simulation results confirm that, with our novel idea, we have managed to tackle the above-mentioned problems. Moreover, it is also concluded that a cascaded design can comfortably manage the mentioned problems at its early stages, and that the selection of good features results in improved classification accuracy. As future work, we will add a few more sets of features and will implement an improved feature selection technique to reduce the error rate and increase the recognition rate.

Fig. 11 Sample recognition results

Table 9 Comparison with existing techniques using local database
Method                      Year  Recognition rate, %
Chen et al. [8]             2009  93.10
Wen et al. [5]              2011  97.88
Kasaei et al. [27]          2011  98.20
Zheng et al. [19]           2012  98.00
Rasheed et al. [16]         2012  90.62
Dehshibi et al. [28]        2012  94.50
Cinsdikici et al. [15]      2013  92.00
Azad and Shayegh [18]       2013  98.66
Gou et al. [20]             2014  97.90
Rabee and Barhumi [29]      2014  97.89
Rajput et al. [30]          2015  96.40
Xing et al. [31]            2016  95.00
Panahi and Gholampour [32]  2016  97.60
proposed                    —     99.50

7 References

[1] Liu, G., Ma, Z., Du, Z., et al.: 'The calculation method of road travel time based on license plate recognition technology', in 'Advances in information technology and education' (Springer Berlin Heidelberg, 2011), pp. 385–389
[2] Du, S., Ibrahim, M., Shehata, M., et al.: 'Automatic license plate recognition (ALPR): a state-of-the-art review', IEEE Trans. Circuits Syst. Video Technol., 2013, 23, (2), pp. 311–325
[3] Patel, C., Shah, D., Patel, A.: 'Automatic number plate recognition system (ANPR): a survey', Int. J. Comput. Appl., 2013, 69, (9), pp. 21–33
[4] Wang, Y., Ban, X., Chen, J., et al.: 'License plate recognition based on SIFT feature', Opt.-Int. J. Light Electron Opt., 2015, 126, (21), pp. 2895–2901
[5] Wen, Y., Lu, Y., Yan, J., et al.: 'An algorithm for license plate recognition applied to intelligent transportation system', IEEE Trans. Intell. Transp. Syst., 2011, 12, (3), pp. 830–845
[6] Deb, K., Khan, I., Saha, A., et al.: 'An efficient method of vehicle license plate recognition based on sliding concentric windows and artificial neural network', Procedia Technol., 2012, 4, pp. 812–819
[7] Naz, S., Hayat, K., Razzak, M.I., et al.: 'The optical character recognition of Urdu-like cursive scripts', Pattern Recognit., 2014, 47, (3), pp. 1229–1248
[8] Chen, Z.-X., Liu, C.-Y., Chang, F.-L., et al.: 'Automatic license-plate location and recognition based on feature salience', IEEE Trans. Veh. Technol., 2009, 58, (7), pp. 3781–3785
[9] Zhai, X., Bensaali, F., Sotudeh, R.: 'Real-time optical character recognition on field programmable gate array for automatic number plate recognition system', IET Circuits Devices Syst., 2013, 7, (6), pp. 337–344
[10] Yang, Y., Gao, X., Yang, G.: 'Study the method of vehicle license locating based on color segmentation', Procedia Eng., 2011, 15, pp. 1324–1329
[11] Yousef, K.M.A., Al-Tabanjah, M., Hudaib, E., et al.: 'SIFT based automatic number plate recognition'. 6th Int. Conf. on Information and Communication Systems (ICICS), 2015, pp. 124–129
[12] Yang, X., Hao, X.-L., Zhao, G.: 'License plate location based on trichromatic imaging and color-discrete characteristic', Opt.-Int. J. Light Electron Opt., 2012, 123, (16), pp. 1486–1491
[13] Chang, S.-L., Chen, L.-S., Chung, Y.-C., et al.: 'Automatic license plate recognition', IEEE Trans. Intell. Transp. Syst., 2004, 5, (1), pp. 42–53
[14] Jobin, K.V., Jiji, C.V., Anurenjan, P.R.: 'Automatic number plate recognition system using modified stroke width transform'. Fourth National Conf. on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2013, pp. 1–4
[15] Cinsdikici, M., Ugur, A., Tunalı, T.: 'Automatic number plate information extraction and recognition for intelligent transportation system', Imaging Sci. J., 2013, 55, (2), pp. 102–113
[16] Rasheed, S., Naeem, A., Ishaq, O.: 'Automated number plate recognition using Hough lines and template matching'. Proc. of the World Congress on Engineering and Computer Science, 2012, vol. 1, pp. 24–26
[17] Ghazal, M., Hajjdiab, H.: 'License plate automatic detection and recognition using level sets and neural networks'. 1st Int. Conf. on Communications, Signal Processing, and their Applications (ICCSPA), 2013, pp. 1–5
[18] Azad, R., Shayegh, H.R.: 'New method for optimization of license plate recognition system with use of edge detection and connected component'. 3rd Int. Conf. on Computer and Knowledge Engineering (ICCKE), 2013, pp. 21–25
[19] Zheng, K., Zhao, Y., Gu, J., et al.: 'License plate detection using haar-like features and histogram of oriented gradients'. IEEE Int. Symp. on Industrial Electronics (ISIE), 2012, pp. 1502–1505
[20] Gou, C., Wang, K., Yu, Z., et al.: 'License plate recognition using MSER and HOG based on ELM'. IEEE Int. Conf. on Service Operations and Logistics, and Informatics (SOLI), 2014, pp. 217–221
[21] Sen, D., Pal, S.K.: 'Generalized rough sets, entropy, and image ambiguity measures', IEEE Trans. Syst. Man Cybern. B, Cybern., 2009, 39, (1), pp. 117–128
[22] Holland, S.M.: 'Principal components analysis (PCA)', University of Georgia, 2008
[23] Yoon, Y., Ban, K.-D., Yoon, H., et al.: 'Blob extraction based character segmentation method for automatic license plate recognition system'. IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC), 2011, pp. 2192–2196
[24] Paunwala, C.N., Patnaik, S.: 'Automatic license plate localization using intrinsic rules saliency', Int. J. Adv. Comput. Sci. Appl., 2011, 10, pp. 105–111
[25] Griffin, G., Holub, A., Perona, P.: 'Caltech-256 object category dataset', 2007
[26] Psyllos, A., Anagnostopoulos, C.-N., Kayafas, E.: 'M-SIFT: a new method for vehicle logo recognition'. IEEE Int. Conf. on Vehicular Electronics and Safety (ICVES), 2012, pp. 261–266
[27] Kasaei, S.H.M., Kasaei, S.M.M.: 'Extraction and recognition of the vehicle license plate for passing under outside environment'. European Intelligence and Security Informatics Conf. (EISIC), 2011, pp. 234–237
[28] Dehshibi, M.M., Allahverdi, R.: 'Persian vehicle license plate recognition using multiclass Adaboost', Int. J. Comput. Electr. Eng., 2012, 4, (3), p. 355
[29] Rabee, A., Barhumi, I.: 'License plate detection and recognition in complex scenes using mathematical morphology and support vector machines'. IWSSIP 2014 Proc., 2014, pp. 59–62
[30] Rajput, H., Som, T., Kar, S.: 'An automated vehicle license plate recognition system', Computer, 2015, 8, pp. 56–61
[31] Xing, J., Li, J., Xie, Z., et al.: 'Research and implementation of an improved radon transform for license plate recognition'. 8th Int. Conf. on Intelligent Human–Machine Systems and Cybernetics (IHMSC), 2016, vol. 1, pp. 42–45
[32] Panahi, R., Gholampour, I.: 'Accurate detection and recognition of dirty vehicle plate numbers for high-speed applications', IEEE Trans. Intell. Transp. Syst., 2016, 18, (4), pp. 767–779
[33] Anagnostopoulos, C.N.E., Anagnostopoulos, I.E., Loumos, V., et al.: 'A license plate-recognition algorithm for intelligent transportation system applications', IEEE Trans. Intell. Transp. Syst., 2006, 7, (3), pp. 377–392
[34] Giannoukos, I., Anagnostopoulos, C.-N., Loumos, V., et al.: 'Operator context scanning to support high segmentation rates for real time license plate recognition', Pattern Recognit., 2010, 43, (11), pp. 3866–3878
[35] Hsu, G.-S., Chen, J.-C., Chung, Y.-Z.: 'Application-oriented license plate recognition', IEEE Trans. Veh. Technol., 2013, 62, (2), pp. 552–561
[36] Shahraki, A.A., Ghahnavieh, A.E., Mirmahdavi, S.A.: 'License plate extraction from still images'. 4th Int. Conf. on Intelligent Systems Modelling & Simulation (ISMS), 2013, pp. 45–48
[37] Smara, G.A., Khalefah, F.: 'Localization of license plate number using dynamic image processing techniques and genetic algorithms', IEEE Trans. Evol. Comput., 2014, 18, (2), pp. 244–257
[38] Davis, A.M., Arunvinodh, C.: 'Automatic license plate detection using vertical edge detection method'. Int. Conf. on Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015, pp. 1–6
[39] Soora, N.R., Deshpande, P.S.: 'Color, scale, and rotation independent multiple license plates detection in videos and still images', Math. Probl. Eng., 2016, 2016