
A project report on

NUMBER PLATE EXTRACTION AND RECOGNITION

Submitted in partial fulfillment of the requirement for the award of the


degree of
BACHELOR OF TECHNOLOGY
In
ELECTRONICS AND COMMUNICATION ENGINEERING

By
CH.MABUNNI (R111870)
K.MADHURI RANI (R111151)
A.PAVANESWARI (R111867)
Under the guidance of
M.DINESH REDDY, LECTURER
Department of Electrical and Electronics Engineering
RGUKT RK-Valley

Rajiv Gandhi University of Knowledge Technologies
RGUKT RK-Valley Campus
Kadapa(dist), Andhra Pradesh, 516330

CERTIFICATE OF PROJECT COMPLETION

This is to certify that the project entitled "Number Plate
Extraction and Recognition", submitted by CH. Mabunni (R111870),
K. Madhuri Rani (R111151), and A. Pavaneswari (R111867), has been carried
out under our guidance and supervision in partial fulfillment of the
requirements for the degree of Bachelor of Technology in Electronics &
Communication Engineering at RGUKT RK-Valley.
To the best of our knowledge, the results embodied in this
dissertation have not been submitted to any other university or institute
for the award of any degree.

Project Guide                          Head of the Dept.

Mr. M. Dinesh Reddy                    Mr. Y. Arun Kumar Reddy
Lecturer                               Head & Asst. Professor
ECE Department                         ECE Department
RGUKT, RK-Valley                       RGUKT, RK-Valley

ACKNOWLEDGEMENT

We wish to express our sincere thanks to G. Bhagavannarayana, Director


of RGUKT, RK-Valley, for providing us with this opportunity and for his
constant support.
We take this opportunity to express our deep sense of gratitude and
indebtedness to our guide, Mr. M. Dinesh Reddy, Department of ECE for his
valuable support, suggestions and constructive criticisms throughout this
project.
We would like to thank Mr. Y. Arun Kumar Reddy, Head of the
Department, Department of ECE, for providing us with this opportunity and for
his constant support.
We would like to place on record the continuous support, encouragement,
advice and help that our parents and friends have given us throughout the
duration of the project, and we shall always be grateful to them for this.
Finally, we thank everyone who has contributed in any way to the
successful completion of the project work.

CH.Mabunni(R111870)
K.Madhuri Rani(R111151)
A.Pavaneswari(R111867)

ABSTRACT
Image segmentation is often defined as a partition of pixels or image
blocks into homogeneous groups. The goal of segmentation is to simplify and
change the representation of an image into something that is more meaningful
and easier to analyze.
We have used histogram-based methods for image segmentation
because they typically require only one pass through the pixels. Using Otsu's
thresholding method, we separated a row of words from a text image.
A histogram can also be used to enhance an image: histogram equalization is a
contrast-adjustment method in image processing that uses the image's
histogram, so very dark images can be brightened and vice versa.
Hence we have used this method for image enhancement.
Software used: MATLAB

INDEX
1. Introduction
   1.1. Objective
   1.2. Problem statement
   1.3. Brief applications
2. Image sampling and quantization
   2.1. Sampling and quantization
   2.2. Representing digital images
   2.3. Spatial and intensity resolution
   2.4. Image interpolation
3. Spatial filtering
   3.1. Spatial correlation and convolution
4. Histogram
   4.1. Histogram equalization
5. Image segmentation
   5.1. Otsu's method
   5.2. Main code: extraction of first row and first letter
6. Conclusion
   6.1. Conclusion
   6.2. Applications
7. References

CHAPTER-1

INTRODUCTION

1.1 Objective:
To learn different concepts like image sampling and quantization,
histogram processing and segmentation of image.

1.2 Problem Statement :


To extract letters from text image using segmentation and image
enhancement.

1.3 Brief Application Areas :


The project's foundations are most relevant to research, academic, and
institutional applications such as:

1) Edge detection

2) Texture detection

3) Extraction of important information from an image.

CHAPTER-2

IMAGE SAMPLING AND QUANTIZATION

An image is a visual representation of something that is two-dimensional and
continuous. Formally, an image is defined as a two-dimensional function f(x, y),
where x and y are the spatial coordinates and the amplitude of f at any pair of
coordinates (x, y) is the intensity or gray level of the image at that point.

2.1) Sampling and quantization:

To convert an image to digital form, we have to sample the function in both
coordinates and in amplitude. Digitizing the coordinate values is called
sampling. Digitizing the amplitude values is called quantization.

The sampling rate determines the spatial resolution of the digitized image,
and the number of quantization levels determines the number of gray levels in
the digital image. In addition to the number of discrete levels used, the
accuracy achieved in quantization is highly dependent on the noise content of
the sampled signal. The quality of the digital image is determined to a large
degree by the number of samples and discrete intensity levels used in
sampling and quantization.
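To make the two steps concrete, here is a minimal sketch of uniform quantization of sampled values to L = 2^k discrete levels (in Python/NumPy for illustration only; the report itself uses MATLAB, and the signal and values below are made-up examples):

```python
import numpy as np

def quantize(samples, k):
    """Uniformly quantize samples in [0.0, 1.0] to L = 2**k gray levels (0..L-1)."""
    L = 2 ** k
    # Scale to [0, L) and truncate; clip so that 1.0 maps to L-1, not L.
    return np.minimum((samples * L).astype(int), L - 1)

# A smooth 1-D "signal" sampled at 8 points, quantized to 2 bits (4 levels).
samples = np.linspace(0.0, 1.0, 8)
levels = quantize(samples, 2)
print(levels)  # each sample reduced to one of 4 discrete levels
```

Coarser quantization (smaller k) merges nearby amplitudes into the same level, which is exactly the loss of gray-level detail discussed above.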

2.2) Representing Digital Images:

The result of sampling and quantization is a matrix of real numbers. Assume


that an image f(x, y) is sampled so that the resulting digital image has M rows
and N columns. The values of the coordinates (x, y) now become discrete
quantities. For notational clarity and convenience, we shall use integer values
for these discrete coordinates. Thus, the values of the coordinates at the origin
are (x, y)=(0, 0). The next coordinate values along the first row of the image are
represented as (x, y)=(0, 1). It is important to keep in mind that the notation
(0,1) is used to signify the second sample along the first row. It does not mean
that these are the actual values of physical coordinates when the image was
sampled.

We write the representation of an M x N numerical array as

f(x, y) = [ f(0, 0)     f(0, 1)     ...  f(0, N-1)
            f(1, 0)     f(1, 1)     ...  f(1, N-1)
            ...         ...              ...
            f(M-1, 0)   f(M-1, 1)   ...  f(M-1, N-1) ]

Both sides of this equation are equivalent ways of expressing the digital image
quantitatively. The right side of the equation is the matrix of real numbers; each
element of this matrix is called an image element, picture element, pixel, or pel.
The origin of the digital image is at the top left, with the positive x-axis
extending downward and the positive y-axis extending to the right. The first
element of the matrix is, by convention, at the top left of the array, coinciding
with the origin of f(x, y). This is the standard coordinate convention for
digital images.

This digitization process requires decisions about values for M, N, and for the
number, L, of discrete gray levels allowed for each pixel. There are no
requirements on M and N, other than that they have to be positive integers.
However, due to processing, storage, and sampling hardware considerations, the
number of gray levels typically is an integer power of 2:

L = 2^k

We assume that the discrete levels are equally spaced and are integers in the
interval [0, L-1].

The number, b, of bits required to store a digitized image is

b = M × N × k

When an image can have 2^k intensity levels, we refer to it as a k-bit
image.
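As a quick worked example of the storage formula (a sketch; the image dimensions are illustrative, not from the report):

```python
# Storage required for a digitized image: b = M * N * k bits.
def storage_bits(M, N, k):
    return M * N * k

# Example: a 1024 x 1024 image with k = 8 bits per pixel (L = 256 gray levels).
b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")  # 8388608 bits = 1048576 bytes (1 MB)
```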

2.3) Spatial and Intensity Resolution:

Spatial resolution is a measure of the smallest detectable detail in an image.


Line pairs per unit distance and dots (pixels) per unit distance are measures of
spatial resolution. Suppose that we construct a chart with alternating black and
white vertical lines, each of width W units. The width of a line pair is 2W, and
there are 1/(2W) line pairs per unit distance.

Intensity resolution refers to the smallest detectable change in intensity levels.

Program for conversion of an 8-bit image into 4-bit and 2-bit images:

t=imread('cameraman.tif');
% 4-bit image: 2^4 = 16 gray levels, so divide the 256 levels by 16
b4bit=uint8(t/16);
figure;
imagesc(b4bit);
title('4 bit image');
colormap gray;
% 2-bit image: 2^2 = 4 gray levels, so divide the 256 levels by 64
b2bit=uint8(t/64);
figure;
imagesc(b2bit);
colormap gray;
title('2 bit image');

Output:

a)Original image

b) 4 bit image c) 2 bit image

From the above output, we observe that changing the number of bits in the
image changes the number of intensity levels and affects the quality of the
image: more intensity levels can be seen in the 4-bit image than in the
2-bit image.

2.4) Image Interpolation

Interpolation is a basic tool used extensively in tasks such as zooming,


shrinking, rotating, and geometric corrections. Fundamentally, interpolation is
the process of using known data to estimate values at unknown locations.

Nearest neighbor interpolation is a method that assigns to each new location the
intensity of its nearest neighbor in the original image. A more suitable approach
is bilinear interpolation, in which we use the four nearest neighbors to estimate

the intensity at a given location. For bilinear interpolation, the assigned value is
obtained using the equation-

v(x, y) = ax + by + cxy + d

Bilinear interpolation gives much better results than nearest-neighbor


interpolation, with a modest increase in computational burden. Bicubic
interpolation involves the sixteen nearest neighbors of a point. The intensity
value assigned to the point is obtained using the equation

v(x, y) = Σ (i=0 to 3) Σ (j=0 to 3) a_ij x^i y^j

where the sixteen coefficients a_ij are determined from the sixteen nearest
neighbors.
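A minimal sketch of bilinear interpolation (Python/NumPy for illustration; the 2x2 image and query point are made-up examples), averaging the four nearest neighbors exactly as the equation v(x, y) = ax + by + cxy + d implies:

```python
import numpy as np

def bilinear(img, x, y):
    """Estimate intensity at non-integer (x, y) from the four nearest neighbors.

    Equivalent to fitting v(x, y) = a*x + b*y + c*x*y + d through the four
    surrounding pixels (x is the row coordinate, matching the convention above)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    # Weighted average of the four corners.
    return ((1 - dx) * (1 - dy) * img[x0, y0]
            + dx * (1 - dy) * img[x0 + 1, y0]
            + (1 - dx) * dy * img[x0, y0 + 1]
            + dx * dy * img[x0 + 1, y0 + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))  # 15.0, the average of the four corners
```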

CHAPTER-3

SPATIAL FILTERING

Spatial-domain techniques are computationally efficient and require fewer
processing resources to implement. These techniques operate directly on the
pixels of the image, as opposed to operating in the frequency domain.

The spatial-domain processes can be denoted by the expression

g(x, y)=T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the output image, and T is an operator
on f defined over a neighborhood of the point (x, y). The operator can be applied
to a single image or to a set of images, such as performing the pixel-by-pixel sum
of a sequence of images for noise reduction. The point (x, y) shown is an arbitrary
location in the image, and the small region shown containing the point is a
neighborhood of (x, y). The neighborhood is rectangular, centered on (x, y), and
much smaller in size than the image.

Fig.3.1.Representation of a pixel and its neighbors

The figure shows a 3x3 neighborhood about a point (x, y) in an image in the
spatial domain. The neighborhood is moved from pixel to pixel in the image to
generate an output image.

3.1) Spatial correlation and convolution:

There are two closely related concepts that must be understood clearly when
performing linear spatial filtering. One is correlation; the other is convolution.
Correlation is the process of moving a filter mask over the image and
computing the sum of products at each location. The mechanics of convolution
are the same, except that the filter is first rotated by 180 degrees. The first
thing we note is that there are parts of the functions that do not overlap. The
solution to this problem is to pad f with enough 0s on either side to allow each
pixel in w to visit every pixel in f. If the filter is of size m, we need m-1 0s
on either side of f.

Summarizing correlation and convolution in equation form, the correlation of a
filter w(x, y) of size m x n with an image f(x, y), denoted w(x, y) ☆ f(x, y),
is given by

w(x, y) ☆ f(x, y) = Σ (s=-a to a) Σ (t=-b to b) w(s, t) f(x+s, y+t)

where a = (m-1)/2 and b = (n-1)/2. This equation is evaluated for all values of
the displacement variables x and y so that all elements of w visit every pixel
in f.

In a similar manner, the convolution of w(x, y) and f(x, y), denoted by
w(x, y)*f(x, y), is given by the expression

w(x, y) * f(x, y) = Σ (s=-a to a) Σ (t=-b to b) w(s, t) f(x-s, y-t)

where the minus signs on the right flip f (i.e., rotate it by 180 degrees).

When we convolve an image with a smoothing mask, the result is an image that


has been blurred by the smoothing. The transition from one intensity level to
the next becomes smoother.
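The mechanics described above — correlation as a sliding sum of products, convolution as the same operation with a 180-degree-rotated mask — can be sketched as follows (Python/NumPy for illustration; the impulse image and mask are made-up examples):

```python
import numpy as np

def correlate(f, w):
    """Slide mask w over zero-padded image f, summing products at each location."""
    m, n = w.shape
    fp = np.pad(f, ((m // 2, m // 2), (n // 2, n // 2)))  # zero padding
    out = np.zeros_like(f, dtype=float)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = np.sum(w * fp[i:i + m, j:j + n])
    return out

def convolve(f, w):
    """Convolution is correlation with the mask rotated by 180 degrees."""
    return correlate(f, np.flip(w))

f = np.zeros((3, 3)); f[1, 1] = 1.0   # a single impulse
w = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
# Convolving with an impulse reproduces the mask; correlating gives it rotated.
print(convolve(f, w))
print(correlate(f, w))
```

The impulse test makes the difference visible: convolution copies the mask unchanged, while correlation copies its 180-degree rotation.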

CHAPTER-4

HISTOGRAM

The histogram of a digital image with gray levels in the range [0,
L-1] is a discrete function h(rk) = nk, where rk is the k-th gray level and nk is
the number of pixels in the image having gray level rk. It is common practice to
normalize a histogram by dividing each of its values by the total number of
pixels in the image, denoted by n. Thus, a normalized histogram is given by
p(rk) = nk/n, for k = 0, 1, ..., L-1. Loosely speaking, p(rk) gives an estimate of
the probability of occurrence of gray level rk. Note that the sum of all components
of a normalized histogram is equal to 1.
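A small sketch of the normalized histogram p(rk) = nk/n (Python/NumPy for illustration; the 3-bit toy image is a made-up example):

```python
import numpy as np

# Normalized histogram p(rk) = nk / n for an 8-level (k = 3 bit) toy image.
a = np.array([[0, 1, 1, 2],
              [2, 2, 7, 7]])
L = 8
n = a.size
nk = np.bincount(a.ravel(), minlength=L)   # pixel count per gray level
p = nk / n                                  # normalized histogram
print(p)
print(p.sum())  # components of a normalized histogram sum to 1
```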
Histograms are the basis for numerous spatial domain processing
techniques. Histogram manipulation can be used effectively for image
enhancement. In addition to providing useful image statistics, we shall see in
subsequent chapters that the information inherent in histograms also is quite
useful in other image processing applications, such as image compression and
segmentation. Histograms are simple to calculate in software and also lend
themselves to economic hardware implementations, thus making them a
popular tool for real-time image processing.
We note in the dark image that the components of the histogram
are concentrated on the low (dark) side of the gray scale. Similarly, the
components of the histogram of the bright image are biased toward the high
side of the gray scale. An image with low contrast has a histogram that will be
narrow and will be centered toward the middle of the gray scale.
Finally, we see that the components of the histogram in the high-
contrast image cover a broad range of the gray scale and, further, that the
distribution of pixels is not too far from uniform, with very few vertical lines
being much higher than the others. Intuitively, it is reasonable to conclude that
an image whose pixels tend to occupy the entire range of possible gray levels
and, in addition, tend to be distributed uniformly, will have an appearance of
high contrast and will exhibit a large variety of gray tones. The net effect will
be an image that shows a great deal of gray-level detail and has high dynamic
range. It will be shown shortly that it is possible to develop a transformation
function that can automatically achieve this effect, based only on information
available in the histogram of the input image.

Program: histogram

a=imread('bag.png');
subplot(1,2,1);
imagesc(a);
colormap gray
title('original image');
[m n]=size(a);
% histogram: fraction of pixels at each gray level
for k=0:255
    ind=find(a==k);
    c(k+1)=length(ind)/(m*n);
end
s=sum(c(:));   % sanity check: a normalized histogram sums to 1
subplot(1,2,2);
stem(0:255,c);
title('histogram');

Output:

We have obtained the histogram as shown in the above output.

4.1) Histogram Equalization:

Consider for a moment continuous functions, and let the variable r represent the
gray levels of the image to be enhanced. In the initial part of our discussion we
assume that r has been normalized to the interval [0, 1], with r=0 representing
black and r=1 representing white. Later, we consider a discrete formulation and
allow pixel values to be in the interval [0, L-1]. For any r satisfying the
aforementioned conditions, we focus attention on transformations of the form

s = T(r),  0 ≤ r ≤ 1

which produce an output gray level s for every input level r. In the discrete
case, the histogram-equalization transformation maps each level rk through the
scaled cumulative histogram:

s_k = T(r_k) = (L-1) Σ (j=0 to k) n_j/n,  k = 0, 1, ..., L-1
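Histogram equalization maps each gray level through the scaled cumulative histogram; a minimal sketch (Python/NumPy for illustration; the dark toy image is a made-up example):

```python
import numpy as np

def equalize(a, L=256):
    """Map each gray level through the scaled CDF: s_k = (L-1) * sum_{j<=k} n_j/n."""
    nk = np.bincount(a.ravel(), minlength=L)   # histogram
    cdf = np.cumsum(nk) / a.size               # cumulative distribution
    T = np.round((L - 1) * cdf).astype(np.uint8)
    return T[a]                                # apply the mapping per pixel

# A dark image occupying only four gray levels, stretched over the 8-bit range.
dark = np.array([[10, 10, 11, 12],
                 [12, 12, 13, 13]], dtype=np.uint8)
eq = equalize(dark)
print(dark.min(), dark.max())  # 10 13
print(eq.min(), eq.max())      # the equalized image now reaches 255
```

This is the same CDF-based mapping the MATLAB program below implements by hand.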

Program: Equalization of black and white image

%after plotting the histogram


op=c;
csum=cumsum(op);            % CDF of the original image
subplot(3,2,3);
stem(csum);
% map every pixel through the scaled CDF
for i=1:m
    for j=1:n
        h(i,j)=round(csum(a(i,j)+1)*255);
    end
end

J=histeq(a);                % equalization with the built-in function, for comparison
subplot(3,2,4);
stem(imhist(J));            % equalized histogram
subplot(3,2,5);
imagesc(h);                 % modified (equalized) image from our own mapping
colormap gray;

his=hist(h(:),0:255);
subplot(3,2,6);
plot(his);

Output:

The first figure shows the original image and, beside it, its histogram. The
next row contains the CDF of the original image and, next to it, the histogram
equalized using the built-in function histeq. The last row contains the modified
image and, next to it, its histogram.
Hence, it is clear that image enhancement has been done successfully using
histogram equalization.

CHAPTER-5

IMAGE SEGMENTATION

Segmentation refers to the process of partitioning a digital image into


multiple segments (sets of pixels, also known as superpixels). The goal of
segmentation is to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyze. Developing an
accurate image segmentation algorithm can be the most demanding part of a
computer vision system: there is no panacean method that works with
several different types of images, and a segmentation approach is usually
designed for a specific problem. In this work, histogram thresholding is
proposed in order to make the segmentation step robust regardless of the
segmentation approach used; a semi-automatic algorithm for histogram
thresholding is discussed.

5.1) Otsu's method:

Otsu's method is used to find the threshold point in the histogram. In computer
vision and image processing, Otsu's method, named after Nobuyuki Otsu, is
used to automatically perform clustering-based image thresholding, i.e., the
reduction of a gray-level image to a binary image. The algorithm assumes that
the image contains two classes of pixels following a bimodal histogram
(foreground pixels and background pixels); it then calculates the optimum
threshold separating the two classes so that their combined spread (intra-class
variance) is minimal, or equivalently (because the sum of pairwise squared
distances is constant), so that their inter-class variance is maximal. In Otsu's
method we exhaustively search for the threshold t that minimizes the intra-class
variance (the variance within the classes), defined as a weighted sum of the
variances of the two classes:

σ_w²(t) = w0(t) σ0²(t) + w1(t) σ1²(t)

where w0 and w1 are the probabilities of the two classes separated by the
threshold t, and σ0², σ1² are their variances. Otsu shows that minimizing the
intra-class variance is the same as maximizing the inter-class variance:

σ_b²(t) = w0(t) w1(t) [u0(t) - u1(t)]²

where u0 and u1 are the class means.
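The exhaustive search over thresholds can be sketched as follows (Python/NumPy for illustration; the bimodal toy image is a made-up example):

```python
import numpy as np

def otsu_threshold(p):
    """Given a normalized histogram p (length L), return the threshold t that
    maximizes the inter-class variance w0*w1*(u0 - u1)**2."""
    L = len(p)
    best_t, best_var = 0, -1.0
    for t in range(1, L):
        w0 = p[:t].sum()                    # class probabilities
        w1 = p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue                        # skip empty classes
        u0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        u1 = (np.arange(t, L) * p[t:]).sum() / w1
        var_b = w0 * w1 * (u0 - u1) ** 2    # inter-class variance
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t

# Clearly bimodal toy image: dark background around 2, bright object around 12.
a = np.array([1, 2, 2, 3, 11, 12, 12, 13])
p = np.bincount(a, minlength=16) / a.size
t = otsu_threshold(p)
print(t)            # the threshold falls between the two modes
print(a >= t)       # binarized: background False, object True
```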

Code: Otsu's method

% c is the normalized histogram computed earlier
for t=1:256
    w0=max(0.0001,sum(c(1:t)));        % class probabilities, guarded
    w1=max(0.0001,sum(c(t+1:end)));    % against empty classes
    u0=sum((1:t).*c(1:t))/w0;          % class means
    u1=sum((t+1:256).*c(t+1:end))/w1;
    vb(t)=w0*w1*(u0-u1)^2;             % inter-class variance
end
subplot(5,2,4);
stem(vb);
title('plot of variance');
[val ind]=max(vb);                     % threshold = argmax of the variance
y=ind;
a(a<y)=0;                              % binarize about the threshold
a(a>=y)=1;
subplot(5,2,5);
imagesc(a);
colormap gray;
title('segmented image');

Otsu's method, implemented above, is the technique we use in the main code
for image segmentation, i.e., extracting the first row of words and the first
letter from the text image.

Justification for using Otsu's method:
Otsu's method finds the threshold point in the histogram and
reduces the gray-level image to a binary image. Because it operates directly on
the gray-level histogram, it is very fast once the histogram has been
computed.
This method converts the gray image into a pure binary image (0 or
1). The built-in binarization function can also convert to binary, but it uses a
fixed threshold of 0.5, which causes errors on images whose histogram does not
straddle that value. With Otsu's method the threshold adapts to the image, the
pixel values take only 0 and 1, and the two classes are easy to distinguish
after conversion.
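The main code's segmentation strategy — summing binary pixels along rows to find the first text row, then along columns within that row to find the first letter — can be sketched as follows (Python/NumPy for illustration, with ink marked 1 and background 0, the opposite of the white-background convention used in the MATLAB code; the toy image is a made-up example):

```python
import numpy as np

def first_run(profile):
    """Return (start, stop) of the first run of True values in a 1-D profile."""
    idx = np.flatnonzero(profile)
    start = idx[0]
    stop = start
    while stop + 1 < len(profile) and profile[stop + 1]:
        stop += 1
    return start, stop + 1

# Synthetic binary text image: 1 = ink, 0 = background.
img = np.zeros((6, 8), dtype=int)
img[1:3, 1:3] = 1   # a "letter" in the first text row
img[1:3, 5:7] = 1   # a second letter in the same row
img[4:5, 1:6] = 1   # a second text row

rows = img.sum(axis=1) > 0          # horizontal projection: rows containing ink
r0, r1 = first_run(rows)
line = img[r0:r1, :]                # first row of text
cols = line.sum(axis=0) > 0         # vertical projection within that row
c0, c1 = first_run(cols)
letter = line[:, c0:c1]             # first letter of the first row
print(letter)
```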

Main code: Extracting the first letter from the text

Program:

a=imread('cameraman.tif');
%reading the image
if size(a,3)==3
    a=rgb2gray(a);   % convert to gray only if the image is RGB
end
subplot(5,2,1);
b=im2bw(a);
imshow(b);
title('binary image by using command');
subplot(5,2,2)
imagesc(a);
colormap gray
title('original image');
%in=imageinfo(a);
[m n]=size(a);
% histogram: fraction of pixels at each gray level
for k=0:255
    ind=find(a==k);
    c(k+1)=length(ind)/(m*n);
end
s=sum(c(:));   % sanity check: should equal 1
subplot(5,2,3);
stem(0:255,c);
title('histogram');
% finding the inter-class variance for every candidate threshold
for t=1:256
    w0=max(0.0001,sum(c(1:t)));        % class probabilities, guarded
    w1=max(0.0001,sum(c(t+1:end)));    % against empty classes
    u0=sum((1:t).*c(1:t))/w0;          % class means
    u1=sum((t+1:256).*c(t+1:end))/w1;
    vb(t)=w0*w1*(u0-u1)^2;             % inter-class variance
end
subplot(5,2,4);
stem(vb);
title('plot of variance');
[val ind]=max(vb);                     % Otsu threshold = argmax of the variance
y=ind;
a(a<y)=0;                              % binarize about the threshold
a(a>=y)=1;
subplot(5,2,5);
imagesc(a);
colormap gray;
title('segmented image');

[m n]=size(a);
% extracting the 1st row: sum the binary image along each row
for i=1:m
    v(i)=sum(b(i,:));
end
subplot(5,2,6);
stem(v);
title('through rows the detail');
% a row sum equal to n means an all-white (background) row
for i=1:m
    if v(i)==n
        s(i)=0;
    else
        s(i)=1;    % row contains text
    end
end
[C I]=max(s);      % I = index of the first text row
h=0;
for i=I:m          % count the consecutive text rows
    if s(i)==1
        h=h+1;
    else
        break;
    end
end
p=I+h;
q=a(I:p,:);        % first row of text
subplot(5,2,7);
imagesc(q);
colormap gray

[x y]=size(q);
% vertical projection: sum the extracted row along each column
for i=1:y
    o(i)=sum(q(:,i));
end
subplot(5,2,8)
stem(o);
% a column sum equal to x means an all-white (background) column;
% use a fresh flag array (reusing s would keep stale entries beyond y)
sc=zeros(1,y);
for i=1:y
    if o(i)==x
        sc(i)=0;
    else
        sc(i)=1;   % column contains ink
    end
end
[C I]=max(sc);     % I = first ink column
h=0;
for i=I:y          % count the consecutive ink columns
    if sc(i)==1
        h=h+1;
    else
        break;
    end
end
p=I+h;
q=q(:,I:p);        % first letter of the first row
subplot(5,2,9);
imagesc(q);
colormap gray

Output:

CHAPTER-6

CONCLUSION

6.1. Conclusion:
Image segmentation is the partitioning of an image into multiple sets of pixels
that share some common characteristics such as color intensity. It is typically
used to locate objects and boundaries in images.
We successfully extracted the first row and the first letter from a text image
using thresholding and histograms. We also used histogram equalization for
image enhancement. The histogram is an easy and efficient tool for image
processing.

6.2. Applications:
1) Content based image retrieval.
2) Medical imaging- Locate tumors and other pathologies, surgery planning.
3) Object detection- Face detection, locate objects in satellite images.
4) Recognition tasks- Face recognition, fingerprint recognition.
5) Video surveillance.

CHAPTER- 7

REFERENCES

[1] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 3rd
Edition.

[2] Gonzalez, R.C., Woods, R.E., and Eddins, S.L., Digital Image Processing
Using MATLAB, Prentice Hall, Upper Saddle River, NJ.

[3] Duda, R.O., Hart, P.E., and Stork, D.G. (2001), Pattern Classification,
2nd Ed., John Wiley & Sons, New York.

[4] Pratt, W.K., Digital Image Processing, 3rd Ed., John Wiley & Sons, New
York.

[5] P. Daniel Ratna Raju and G. Neelima, "Image Segmentation by using
Histogram Thresholding", Don Bosco Institute of Technology & Science, Tenali /
Acharya Nagarjuna University, India.

