
2010 Sixth International Conference on Natural Computation (ICNC 2010)

An Automatic Method for Identifying Different Varieties of Rice Seeds Using Machine Vision Technology
Ai-Guo OuYang, Rong-jie Gao, Yan-de Liu*, Xu-dong Sun, Yuan-yuan Pan
Institute of Optics-Mechanics-Electronics Technology and Application (OMETA),
East China Jiaotong University, Nanchang, China, 330013

Xiao-ling Dong
Foreign Languages College, Jiangxi Normal University, Nanchang, China, 330022

Abstract: An automatic method for identifying different varieties of rice seeds using machine vision technology was investigated, and a detection system consisting of an automatic inspection machine and an image-processing unit was developed. The system continually presents matrix-positioned rice seeds to a CCD camera and isolates each rice seed image from the background. The inspection machine comprises scattering and positioning devices, a photographing station, a parallel discharging device, and a continuous conveyer belt with carrying holes for the rice seeds. Rice seed images were acquired continuously under single-chip control: the conveyer line was paused once per second, and the images of the seeds were collected by the camera during these intervals. Image analysis was carried out in Visual C++ 6.0. Color features in the RGB (red, green, blue) and HSV color spaces were computed, and a feed-forward neural network trained with back-propagation was used to identify the rice seeds. On average, 93.66% of the rice seeds were correctly identified. The correct classification rates for the five rice varieties were: No. 5 Xiannong, 99.99%; Jinyougui, 99.93%; No. 3 Xiannong, 98.89%; You166, 82.82%; and Medium you 463, 86.65%. Based on these results, it was concluded that the system is suitable for identifying different varieties of rice seeds from their appearance characteristics.
Keywords: automatic methods; machine vision; image processing; rice seeds; neural network

I. INTRODUCTION

The quality of rice has become more important than quantity in these days of plenty, and consumers in China increasingly demand high-quality rice. Given this change in rice consumption preferences, it is necessary to explore nondestructive, accurate, and rapid methods for evaluating rice quality. Rice is an extremely important crop in China, and it is important for the rice industry to assess rice quality and to perform rice grading [1]. However, up to the present time, there has been no measurement technology that can quantify the important quality attributes. Grading of rice is based on inspection of physical quality attributes such as color, shape, and size. Using these physical attributes, a trained person determines the variety of rice, the amount of damaged seeds, seeds of other colors, and foreign materials after critically inspecting a sample. The current method of rice quality evaluation is time consuming, tedious, and inherently inconsistent. An objective and cost-effective instrumentation system is needed to segregate rice kernels. Such a system would not only facilitate rice grading but also serve as a quality control tool for identifying different cultivars in the rice industry.
There have been successful applications of machine
vision in agricultural product inspection, but most efforts are
still in the research and development stage [2]. Research
endeavors have grown rapidly in the past 10 years. Extracting
various morphological, tonal, textural, and color features for
classification of grains by variety, grades, and damage has
been the focus of the reported research, and a substantial body
of literature exists [3-17] (Liu and Paulsen, 1997; Liu et al.,
1997; Shatadal et al., 1995b; Liao et al., 1994, 1991; Egelberg
et al., 1994; Romaniuk et al., 1993; Casady et al., 1992; Krutz
et al. 1991; Ding et al., 1990; Shyy and Misra, 1989;
Sapirstein, 1995; Sapirstein and Bushuk, 1989; Neuman et al.,
1989a, 1989b; Zayas et al., 1989). Most of the work has been
on wheat and corn.
Substantial work dealing with the classification of cereal grains using different morphological features (size and shape) has been reported in the literature [18-21].
In general, machine vision systems for grain
identification have been used under the controlled conditions
of a laboratory. In most of the aforementioned applications,
well-separated seeds were manually placed on a plate or tray
for gathering images. Manual placement is tedious and labor intensive when a large number of seeds in a representative grain sample are to be analyzed. Some researchers [22] (Casady and Paulsen, 1989) have developed an automatic seed-positioning system for placing individual grain kernels under a camera for image acquisition. Combining such a seed presentation device with the machine vision system, however, makes the overall system more expensive and less portable.
Rice seed size determination from the images of bulk
samples is highly desirable; however, it is a challenge to



measure the size of each seed in an image when many seeds are touching or overlapping. Shatadal et al. (1995) used
mathematical morphology to separate connected grain kernels
in an image [23]. Shashidhar et al. (1997) used boundary
sampling and ellipse fitting to identify touching kernels.
However, the processing speed and accuracy were two major
problems [24].
The objective of this study was to develop a machine vision system to identify rice seed varieties from the color features and appearance size of rice seed images. The research entailed the development of an algorithm to segment images of varieties with different shades and colors. Seed boundaries were separated using morphological processing techniques, and a feed-forward neural network was trained to identify the rice seeds.
II. MATERIALS AND METHODS

A. Experiment Samples
Five varieties of rice seed samples were obtained from the grain inspection field office of Jiangxi Agricultural University in Nanchang. The samples included fine seeds and seeds of three damage categories: heat-damaged, green-frost-damaged, and stink-bug-damaged. Trained grain inspectors manually classified the seeds. Rice appearance quality is usually described subjectively using multi-dimensional attributes, so quantitative manual inspection of rice quality is not an easy task. Rice quality categories such as fine, cracked, and off-type kernels are based on appearance resulting from differences in variety, growing environment, blight, harvesting maturity, storage conditions, and post-harvest processing. Examples of three varieties of seeds are shown in Fig. 1.

Figure 1. Original images of three categories of rice seed samples: (a) No. 5 Xiannong, (b) Jinyougui, and (c) You166.

Uniform diffuse lighting was used in all experiments. Two fluorescent tubes (32 W lamps) were placed below the surface level of the sample placement platform. The sample placement platform was made of a white, semi-transparent acrylic plate to prevent direct light from reaching the camera.
2) Sample Supplying Device
The rice seeds were dropped from the feed hopper and scattered over a predetermined matrix of positions on a conveyer belt in the machine feeding section. The conveyer belt moved the positioned seeds to be photographed by the CCD (charge-coupled device) camera connected to the computer of the image-processing unit. The surface color of the conveyer belt was dark green, and the background color of the carrying holes was black.
3) Operation of the Automatic System
The system was composed of two main parts: an inspection machine and an image-processing unit, as shown in Fig. 2. The inspection machine had a trapezoidal profile with a photographing station on top. In brief, the rice seed handling procedure involved the following steps. First, rice seeds were dropped from the feed hopper and scattered over a predetermined matrix of positions on a conveyer belt in the machine feeding section. The conveyer belt moved the positioned seeds to be photographed by a CCD camera connected to the computer of the image-processing unit. The computer segregated the seed images from the background, performed the recognition process, and transferred the final sorting results to the machine controller. In the discharging section, the controller signaled each pneumatic valve to eject the seeds from the carrying holes into collection containers. An interface protocol was developed between the inspection machine and the image-processing unit to coordinate their concurrent activity.

The spatial resolution for rows and columns in the image was 0.09 mm/pixel. A Pentium IV 480 MHz personal computer was used for image processing, quality recognition, and machine operation. The surface color of the conveyer belt was dark green, and the background color of the carrying holes was black. The carrying holes of the conveyer belt were elliptical, about 9 mm long and 6 mm wide.

B. Automatic Inspection System
1) Machine Vision Hardware
A CCD color camera (Panasonic WV-CP410/G, Japan) with a 6 mm focal-length zoom lens (SE0612) was used to acquire images. The camera was mounted on a stand that allowed easy vertical movement of the camera. Manual iris control was used, while the automatic gain control and automatic white balance of the camera were enabled.

A color frame-grabbing board (Matrox Meteor II Standard, Canada) was installed in a personal computer (Pentium IV, 1.2 GHz, IBM). The frame grabber could convert the R, G, and B color signals to H, S, and V signals in real time. The program for controlling the frame grabber was written in the C++ programming language.
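For reference, the RGB-to-HSV conversion that the frame grabber performs in hardware corresponds to the standard color-space transformation; a minimal software sketch is given below (the function name is illustrative and is not part of the Matrox API).

#include <algorithm>
#include <cstdio>

// Illustrative software equivalent of the RGB-to-HSV conversion done in
// hardware by the frame grabber. Inputs are 8-bit R, G, B values; outputs
// are H in degrees [0, 360) and S, V in [0, 1].
void rgbToHsv(unsigned char r8, unsigned char g8, unsigned char b8,
              double& h, double& s, double& v)
{
    double r = r8 / 255.0, g = g8 / 255.0, b = b8 / 255.0;
    double maxc = std::max(r, std::max(g, b));
    double minc = std::min(r, std::min(g, b));
    double delta = maxc - minc;

    v = maxc;                                // value (brightness)
    s = (maxc > 0.0) ? delta / maxc : 0.0;   // saturation

    if (delta == 0.0) { h = 0.0; return; }   // gray pixel: hue undefined
    if (maxc == r)      h = 60.0 * ((g - b) / delta);
    else if (maxc == g) h = 60.0 * ((b - r) / delta + 2.0);
    else                h = 60.0 * ((r - g) / delta + 4.0);
    if (h < 0.0) h += 360.0;
}

int main()
{
    double h, s, v;
    rgbToHsv(200, 180, 90, h, s, v);         // a made-up husk-yellow pixel
    std::printf("H=%.1f S=%.2f V=%.2f\n", h, s, v);
    return 0;
}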

Figure 2. Automatic paddy rice seed varieties inspection system.

C. Image Processing and Analysis
1) Image Segmentation

Image segmentation is the first step of image analysis, in which an image is subdivided into its constituent parts or objects. The level to which this subdivision is carried is determined by the problem being solved; the image is usually subdivided until the objects of interest are isolated. There are generally two approaches to segmentation algorithms. One is based on the discontinuity of gray-level values, and the other is based on the similarity of gray-level values. The first approach partitions an image based on abrupt changes in gray level. The second approach is implemented using thresholding, region growing, or region splitting and merging.
Thresholding is an important part of image segmentation.
A digital image can be expressed by a two-dimensional
function f(x, y), where (x, y) are the coordinates of each pixel
in the image and f(x, y) is the gray-level value. A well-designed image acquisition system acquires images in which
object and background pixels often have gray levels grouped
into two dominant modes. The thresholding approach is to
select a threshold (T) that separates the two modes.
A thresholded image g(x, y) was defined as:

\[
g(x, y) =
\begin{cases}
1 & \text{if } f(x, y) > T \\
0 & \text{if } f(x, y) \le T
\end{cases}
\tag{1}
\]

Thus, pixels labeled 1 (or any other convenient intensity


level) correspond to objects, whereas pixels labeled 0
correspond to the background. When T depends only on f(x,
y), the threshold is called global. If T depends on both f(x, y)
and on some local property, called p(x, y), the threshold is
called local. In addition, if T depends on the spatial
coordinates x and y, the threshold is called dynamic [25].
Fig.3 shows the image segmentation of one seed variety
(Threshold = 160).
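For illustration, the following minimal C++ sketch applies the global thresholding of Eq. (1) to an 8-bit gray-level buffer stored row by row; the toy image data are made up, and the example of Fig. 3 used T = 160.

#include <vector>
#include <cstdio>

// Minimal sketch of the global thresholding in Eq. (1): pixels brighter
// than T are labeled 1 (object), all others 0 (background).
std::vector<unsigned char> thresholdImage(const std::vector<unsigned char>& gray,
                                          unsigned char T)
{
    std::vector<unsigned char> binary(gray.size());
    for (std::size_t i = 0; i < gray.size(); ++i)
        binary[i] = (gray[i] > T) ? 1 : 0;
    return binary;
}

int main()
{
    // Toy 3x3 "image"; T = 160 as in the example of Fig. 3.
    std::vector<unsigned char> gray;
    unsigned char raw[9] = { 30, 200, 35, 40, 210, 45, 25, 190, 20 };
    gray.assign(raw, raw + 9);
    std::vector<unsigned char> bin = thresholdImage(gray, 160);
    for (std::size_t i = 0; i < bin.size(); ++i)
        std::printf("%d%s", bin[i], ((i + 1) % 3 == 0) ? "\n" : " ");
    return 0;
}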

Figure 3. Image segmentation of one rice kernel

2) Image Representation
The result of segmentation is usually a binary image. A
binary image contains only two types of pixels: the pixels
having a gray level value of either 0 or 1. Objects of interest
are isolated in pixel aggregations at a gray level of 1, which
are still raw data needing a suitable description for further
computer processing. The process for finding representative
characteristics of an object and expressing them with numbers,
which are able to be further processed by a computer, is called
representation [26]. The numbers expressing the
characteristics of an object are called the features of the object.
The techniques used to find such representations of the objects of interest are referred to as feature extraction.
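For illustration, the sketch below isolates the pixel aggregations of a binary image with a simple 4-connected flood fill and reports the pixel area of each object; it is a generic stand-in for this step, not the vendor routine used in the actual system.

#include <vector>
#include <stack>
#include <utility>
#include <cstdio>

// Isolate the pixel aggregations (gray level 1) of a binary image and
// measure their areas in pixels, a simple example of feature extraction.
std::vector<int> regionAreas(std::vector<unsigned char> bin, int w, int h)
{
    std::vector<int> areas;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (bin[y * w + x] != 1) continue;      // background or visited
            int area = 0;
            std::stack<std::pair<int, int> > st;
            st.push(std::make_pair(x, y));
            bin[y * w + x] = 2;                     // mark visited
            while (!st.empty()) {
                std::pair<int, int> p = st.top(); st.pop();
                ++area;
                const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
                for (int k = 0; k < 4; ++k) {
                    int nx = p.first + dx[k], ny = p.second + dy[k];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (bin[ny * w + nx] != 1) continue;
                    bin[ny * w + nx] = 2;
                    st.push(std::make_pair(nx, ny));
                }
            }
            areas.push_back(area);                  // one feature per object
        }
    return areas;
}

int main()
{
    // Two separate "seeds" in a 6x3 toy binary image.
    unsigned char raw[18] = { 1,1,0,0,1,1,
                              1,1,0,0,1,1,
                              0,0,0,0,1,1 };
    std::vector<unsigned char> bin(raw, raw + 18);
    std::vector<int> areas = regionAreas(bin, 6, 3);
    for (std::size_t i = 0; i < areas.size(); ++i)
        std::printf("object %u area = %d pixels\n", (unsigned)i, areas[i]);
    return 0;
}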
3) Image Analysis
Algorithms to extract the appearance features of bulk rice were developed as described above. Each image contained a maximum of 100 kernels, some of which touched each other. Image analysis was carried out on a personal computer in real time.

Most machine vision systems can analyze the contents of a rice sample with reasonable accuracy if the sample does not contain any touching seeds. When touching seeds exist in an image, the analysis becomes non-trivial because touching seeds appear as single regions in the binary image, and a group of touching seeds may be incorrectly identified as one object. Therefore, segmentation of touching seeds was carried out first in the image analysis. To test the accuracy and speed of the system in identifying the appearance features of rice, 100 sets of images were acquired.
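The separation algorithm itself is not detailed here; as a hedged illustration of the morphological approach cited earlier (Shatadal et al. [23]), the sketch below implements binary erosion with a 3x3 structuring element, which shrinks regions and tends to break the thin bridges between touching kernels.

#include <vector>
#include <cstdio>

// Hedged sketch of one building block used by morphological separation
// methods: binary erosion with a 3x3 structuring element. A pixel survives
// only if all of its 3x3 neighborhood belongs to the object.
std::vector<unsigned char> erode3x3(const std::vector<unsigned char>& bin,
                                    int w, int h)
{
    std::vector<unsigned char> out(bin.size(), 0);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            bool keep = true;
            for (int dy = -1; dy <= 1 && keep; ++dy)
                for (int dx = -1; dx <= 1 && keep; ++dx)
                    if (bin[(y + dy) * w + (x + dx)] == 0)
                        keep = false;        // any background neighbor erodes the pixel
            out[y * w + x] = keep ? 1 : 0;
        }
    return out;
}

int main()
{
    // Two 3x3 blobs joined by a one-pixel bridge; erosion removes the bridge.
    const int w = 9, h = 5;
    unsigned char raw[45] = {
        0,0,0,0,0,0,0,0,0,
        0,1,1,1,0,1,1,1,0,
        0,1,1,1,1,1,1,1,0,
        0,1,1,1,0,1,1,1,0,
        0,0,0,0,0,0,0,0,0 };
    std::vector<unsigned char> bin(raw, raw + 45);
    std::vector<unsigned char> er = erode3x3(bin, w, h);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) std::printf("%d", er[y * w + x]);
        std::printf("\n");
    }
    return 0;
}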
D. Feature Extraction
The whole section was segmented, and the embryo area and the fine area were approximated. Each segmented object was analyzed to obtain its area. The pixel area was computed by calling the Matrox on-board statistical routine to count the number of pixels in each object.

The first feature (x) was the ratio of the sound area to the area of the whole section. The second feature (y) was the ratio of the sound area to the area of the approximated embryo part. The second feature was the more important one when the approximation of the embryo part was accurate. The two features were used together for classification.
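A minimal sketch of the two ratio features follows; the pixel counts are made-up stand-ins for the values returned by the statistical routine, and the struct and function names are illustrative.

#include <cstdio>

// Minimal sketch of the two features described above. The pixel areas would
// come from the frame-grabber statistics routine in the real system; here
// they are plain integers.
struct SeedFeatures {
    double x;   // sound area / whole-section area
    double y;   // sound area / approximated embryo area
};

SeedFeatures computeRatios(int soundArea, int wholeArea, int embryoArea)
{
    SeedFeatures f;
    f.x = (wholeArea  > 0) ? (double)soundArea / wholeArea  : 0.0;
    f.y = (embryoArea > 0) ? (double)soundArea / embryoArea : 0.0;
    return f;
}

int main()
{
    SeedFeatures f = computeRatios(820, 1000, 150);   // made-up pixel counts
    std::printf("x = %.3f, y = %.3f\n", f.x, f.y);
    return 0;
}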
E. Neural Network Classifier
A feed-forward neural network was trained to classify the samples into the categories of the five rice seed cultivars. The inputs to the network were the computed image features, and the four outputs formed a four-bit binary number representing the classification category (0001 corresponded to category 1, 0010 to category 2, and so on). Network training was performed with the back-propagation algorithm as described in Rumelhart et al. (1986) [27]. A dataset of 500 samples was used to train the neural network, and a second dataset of 250 samples was used for validation. Different numbers of layers and nodes were tested for the network structure. The root mean squared (RMS) error of prediction for the validation dataset was used to select an appropriate network structure without over-fitting. Finally, a third dataset of 250 samples was used to test the trained neural network as a classifier. Fig. 4 shows the BP-ANN structure.
Figure 4. A generalized BP-ANN network structure with an input layer, one hidden layer, and an output layer.
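For illustration, the sketch below shows the forward pass of such a one-hidden-layer network with ten inputs and four outputs decoded as the four-bit category code. The weights are placeholders (in the real system they are learned by back-propagation [27]), and a sigmoid hidden activation is assumed, since only the linear input and output transfer functions are specified in Table I.

#include <vector>
#include <cmath>
#include <cstdio>

// Hedged sketch of the classifier's forward pass: 10 input features, one
// hidden layer, 4 output nodes read as a 4-bit category code (0001 = class 1,
// 0010 = class 2, ...). Weights below are placeholders only.
static double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

std::vector<double> layer(const std::vector<double>& in,
                          const std::vector<std::vector<double> >& w,
                          const std::vector<double>& bias, bool useSigmoid)
{
    std::vector<double> out(w.size());
    for (std::size_t j = 0; j < w.size(); ++j) {
        double z = bias[j];
        for (std::size_t i = 0; i < in.size(); ++i) z += w[j][i] * in[i];
        out[j] = useSigmoid ? sigmoid(z) : z;       // linear output layer
    }
    return out;
}

int main()
{
    const int nIn = 10, nHidden = 4, nOut = 4;
    // Placeholder weights (all 0.1) purely to make the sketch runnable.
    std::vector<std::vector<double> > w1(nHidden, std::vector<double>(nIn, 0.1));
    std::vector<std::vector<double> > w2(nOut, std::vector<double>(nHidden, 0.1));
    std::vector<double> b1(nHidden, 0.0), b2(nOut, 0.0);

    std::vector<double> features(nIn, 0.5);          // ten normalized features
    std::vector<double> hidden = layer(features, w1, b1, true);
    std::vector<double> output = layer(hidden, w2, b2, false);

    int code = 0;                                    // decode the 4-bit category
    for (int j = 0; j < nOut; ++j)
        code = (code << 1) | (output[j] > 0.5 ? 1 : 0);
    std::printf("category code = %d\n", code);
    return 0;
}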

III. RESULTS AND DISCUSSION

A. Inspection Software and Classification Algorithm


The rice quality inspection and variety identification
software was developed in the Windows environment.
Fundamental rice image processing and parameter calculation


functions were developed in the VC++ computer language and compiled into a dynamic link library (DLL). The online inspection control, sorting parameter preparation, inspection machine operation, photographing calibration, and input/output functions were all developed in VC++ to facilitate the design of a graphical and user-friendly interface.

Illumination-checking functions were developed in the inspection software to keep the lighting and RGB colors constant between tests. The level of illumination varied with the direction of the lighting sources, the camera diaphragm setting, voltage fluctuation, and lamp decay. A white ceramic tile was placed in the photographic section to calibrate the light distribution and ensure consistent illumination throughout the tests. From the tile, the average gray levels of the RGB components in each carrying hole were measured. The distribution statistics of each color component over the 24 carrying holes, such as the average, maximum, minimum, and standard deviation, were measured to verify lighting uniformity.
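A minimal sketch of this uniformity check follows, computing the average, maximum, minimum, and standard deviation of one color component over the 24 carrying holes; the measured values are made up.

#include <vector>
#include <cmath>
#include <cstdio>

// Sketch of the illumination check described above: given the average gray
// level of one color component measured in each of the 24 carrying holes,
// report the distribution statistics used to verify lighting uniformity.
void holeStatistics(const std::vector<double>& holeMeans)
{
    double sum = 0.0, minV = holeMeans[0], maxV = holeMeans[0];
    for (std::size_t i = 0; i < holeMeans.size(); ++i) {
        sum += holeMeans[i];
        if (holeMeans[i] < minV) minV = holeMeans[i];
        if (holeMeans[i] > maxV) maxV = holeMeans[i];
    }
    double mean = sum / holeMeans.size();
    double var = 0.0;
    for (std::size_t i = 0; i < holeMeans.size(); ++i)
        var += (holeMeans[i] - mean) * (holeMeans[i] - mean);
    double stdDev = std::sqrt(var / holeMeans.size());
    std::printf("avg=%.1f max=%.1f min=%.1f std=%.2f\n", mean, maxV, minV, stdDev);
}

int main()
{
    std::vector<double> red(24, 200.0);   // e.g. the R component of the white tile
    red[5] = 192.0; red[17] = 207.0;      // small made-up non-uniformities
    holeStatistics(red);
    return 0;
}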
The sorting algorithm was a range-selection method
implemented as a series of tables. Each table was related to
one quality category with all characteristic parameters listed as
a logical "and" assembly. The implemented parameters
appeared as a text list with lower and upper values, and either
one or both could be enabled or disabled in the window. A rice
seed was categorized when its image parameters fell within
the ranges of a table. Fig. 5 shows the inspection software interface and the image processing procedure.
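The sketch below illustrates the range-selection idea: each category table holds lower and upper limits for every parameter, either limit can be individually disabled, and a seed matches a category only if all enabled checks pass (the logical "and" assembly); all names and values are illustrative.

#include <vector>
#include <string>
#include <cstdio>

// Hedged sketch of the range-selection sorting tables described above.
struct Range {
    double lower, upper;
    bool useLower, useUpper;                 // either bound can be disabled
    bool contains(double v) const {
        if (useLower && v < lower) return false;
        if (useUpper && v > upper) return false;
        return true;
    }
};

struct CategoryTable {
    std::string name;
    std::vector<Range> ranges;               // one range per image parameter
    bool matches(const std::vector<double>& params) const {
        for (std::size_t i = 0; i < ranges.size(); ++i)
            if (!ranges[i].contains(params[i])) return false;   // logical "and"
        return true;
    }
};

int main()
{
    // One toy category with limits on two parameters (e.g. mean R and area).
    Range r1 = { 150.0, 220.0, true, true };
    Range r2 = { 800.0,   0.0, true, false };    // upper limit disabled
    CategoryTable cat;
    cat.name = "category 1";
    cat.ranges.push_back(r1);
    cat.ranges.push_back(r2);

    std::vector<double> seed;
    seed.push_back(185.0);
    seed.push_back(950.0);
    std::printf("%s: %s\n", cat.name.c_str(),
                cat.matches(seed) ? "accepted" : "rejected");
    return 0;
}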

Figure 6. Comparison of different neural network structures: root mean square error on the validation data set versus learning cycles.

TABLE I. BP-ANN SPECIFICATIONS AND TRAINING PARAMETERS

Parameter                        Value
Input layer nodes                10
Input layer transfer function    Linear
Hidden layer nodes               4
Output layer nodes               4
Output layer transfer function   Linear
Learning rate                    0.15
Momentum                         0.2
Gain                             1.0
Training presentation order      Random


Figure 5. Inspection software interface for rice variety inspection.

B. Optimization of Network Structure Parameters


The mean squared errors for the validation data set were compared among different neural network structures (Fig. 6). Structures with one hidden layer of 1 to 30 hidden nodes and a few structures with two hidden layers were tested. As shown in Fig. 6, a network with four hidden nodes in one hidden layer resulted in the lowest validation error. A network with only one hidden node in the hidden layer did not seem to represent all the variations properly, but one extra hidden node made the performance comparable to the best case of four hidden nodes. Additional hidden nodes and hidden layers appeared to cause over-fitting and did not improve the network performance. The network structure with one hidden layer and four hidden nodes was therefore selected for classification of the test samples. Ten parameters (R, G, B, H, S, V, Area, Length, Alength, and Ashort) were used as the input layer. Table I shows the optimized BP-ANN network structure and training parameters.
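For illustration, the sketch below computes the RMS prediction error on a validation set and compares two candidate structures; the predicted outputs are placeholders standing in for trained-network responses.

#include <vector>
#include <cmath>
#include <cstdio>

// Sketch of the structure-selection criterion described above: compute the
// root mean squared error of the network outputs over the validation set and
// keep the structure with the smaller error.
double rmsError(const std::vector<double>& predicted,
                const std::vector<double>& target)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < predicted.size(); ++i) {
        double d = predicted[i] - target[i];
        sum += d * d;
    }
    return std::sqrt(sum / predicted.size());
}

int main()
{
    // Toy validation targets and the outputs of two candidate networks
    // (made-up numbers standing in for trained-network predictions).
    double target[4] = { 0.0, 0.0, 0.0, 1.0 };
    double netA[4]   = { 0.2, 0.1, 0.3, 0.7 };   // e.g. 1 hidden node
    double netB[4]   = { 0.1, 0.0, 0.1, 0.9 };   // e.g. 4 hidden nodes
    std::vector<double> t(target, target + 4);
    double errA = rmsError(std::vector<double>(netA, netA + 4), t);
    double errB = rmsError(std::vector<double>(netB, netB + 4), t);
    std::printf("RMS: 1 hidden node = %.3f, 4 hidden nodes = %.3f -> keep %s\n",
                errA, errB, (errB < errA) ? "4" : "1");
    return 0;
}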

TABLE II. PERCENTAGE ACCURACY FOR CLASSIFYING RICE SEED VARIETIES

Varieties               Accuracy (%)
No. 5 Xiannong rice     99.99
Jinyougui rice          99.93
No. 3 Xiannong rice     98.89
You166 rice             82.82
Medium you 463 rice     86.65
Average                 93.66

C. Classification of Rice Seed Variety


The algorithm developed in this study was able to correctly classify rice seeds, whether the seeds were isolated or grouped. The correct classification rates for the five rice varieties were: No. 5 Xiannong, 99.99%; Jinyougui, 99.93%; No. 3 Xiannong, 98.89%; You166, 82.82%; and Medium you 463, 86.65%. The rates for You166 and Medium you 463 were lower than those of the other three varieties; their morphological differences, which influence the accuracy of classification, may be the reason. Over all five varieties, an average accuracy of 93.66% was achieved. It was considered that too many kinds of rice were mixed in the sample. Detailed results are shown in Table II. It took 4 seconds (including rice supplying, image acquisition, image analysis, and rice discharging) to classify 100 seeds at a time using the developed machine vision system. From this result, it was considered that the developed system is adequate for evaluating the appearance quality of rice.
IV. CONCLUSIONS

In this research, the performance of an automated machine vision rice variety inspection system was studied. During the automatic inspection process, rice seeds were positioned, carried through the photographing section for image inspection, and then sorted into collection containers according to their color features and appearance quality. Special rice variety inspection software was developed to prepare grading parameters and to tune the sorting precision and machine operation. Ten parameters relating to rice appearance characteristics were used to categorize the rice seeds into five inspection categories. The correct classification rates for the five rice varieties were: No. 5 Xiannong, 99.99%; Jinyougui, 99.93%; No. 3 Xiannong, 98.89%; You166, 82.82%; and Medium you 463, 86.65%. Based on these results, it was concluded that the system is suitable for identifying different rice seed varieties from the appearance characteristics of their seeds.
ACKNOWLEDGMENT
The authors wish to thank the Natural Science Foundation of Jiangxi Province (2008GQN0029, 2007GZN0266, 2007-130), the National Natural Science Foundation of China (60844007), the National Science and Technology Support Plan (2008BAD96B04), the New Century Excellent Talents in University program (NCET-06-0575), the Special Science and Technology Support Program for the Foreign Science and Technology Cooperation Plan (2009BHB15200), and the Technological Expertise and Academic Leaders Training Plan of Jiangxi Province (2009JX02661).
REFERENCES
[1] P. Shatadal, Identifying damaged soybeans by color image analysis, Applied Engineering in Agriculture, vol. 19, pp. 65-69, 2003.
[2] G. Zhang, S. J. Digvir, C. Karunkaran, Separation of touching grain kernels in an image by ellipse fitting algorithm, ASAE Paper No. 023129, 2002.
[3] J. Liu, M. R. Paulsen, Corn whiteness measurement and classification using machine vision, ASAE Paper No. 973045, St. Joseph, Mich.: ASAE, 1997.
[4] W. Liu, Y. Tao, T. J. Siebenmorgen, H. Chen, Digital image analysis for rapid measurement of degree of milling of rice, ASAE Paper No. 973028, St. Joseph, Mich.: ASAE, 1997.
[5] I. S. Ahmed, J. F. Reid, M. R. Paulsen, Neuro-fuzzy inference of soybean seed, ASAE Paper No. 973041, St. Joseph, Mich.: ASAE, 1997.
[6] P. Shatadal, D. S. Jayas, N. R. Bulley, Digital image analysis for software separation and classification of touching grains: II. Classification, Transactions of the ASAE, vol. 38, pp. 645-649, 1995.
[7] K. Liao, M. R. Paulsen, J. F. Reid, Real-time detection of color and surface defects of maize kernels using machine vision, J. Agric. Engng Res., vol. 59, pp. 263-271, 1994.
[8] P. Egelberg, O. Mson, C. Peterson, Assessing cereal grain quality with a fully automated instrument using artificial neural network processing of digitized color video images, Proc. SPIE, vol. 2345, Boston, Mass., 1994.
[9] M. Romaniuk, S. Sokhansanj, H. C. Wood, Barley seed recognition using a multi-layer neural network, ASAE Paper No. 936569, St. Joseph, Mich.: ASAE, 1993.
[10] W. W. Casady, M. R. Paulsen, J. F. Reid, J. B. Sinclair, A trainable algorithm for inspection of soybean seed quality, Transactions of the ASAE, vol. 35, pp. 2027-2034, 1992.
[11] G. W. Krutz, P. E. Petersen, H. M. Rudemo, C. J. Prtti, Identification of weed seeds by color machine vision, ASAE Paper No. 917558, St. Joseph, Mich.: ASAE, 1991.
[12] K. Ding, R. V. Morey, W. F. Wilcke, D. J. Hansen, Corn quality evaluation with computer vision, ASAE Paper No. 903532, St. Joseph, Mich.: ASAE, 1990.
[13] Y. Y. Shyy, M. K. Misra, Color image analysis for soybean quality determination, ASAE Paper No. 893572, St. Joseph, Mich.: ASAE, 1989.
[14] H. D. Sapirstein, Variety identification by digital image analysis, in Identification of Food-Grain Varieties, C. W. Wrigley, Ed., pp. 91-130, St. Paul, Minn.: American Association of Cereal Chemists, 1995.
[15] S. Panigrahi, M. K. Misra, Y. Y. Shyy, Color image acquisition for a machine vision system of corn germplasm, ASAE Paper No. MCR-89-124, St. Joseph, Mich.: ASAE, 1989.
[16] M. Neuman, H. D. Sapirstein, E. Shwedyk, W. Bushuk, Wheat grain color analysis by digital image processing: I. Methodology, J. Cereal Sci., vol. 10, pp. 183-188, 1989.
[17] I. Zayas, Y. Pomeranz, F. S. Lai, Discrimination of wheat and non-wheat components in grain samples by image analysis, Cereal Chem., vol. 66, pp. 233-237, 1989.
[18] P. Shatadal, D. S. Jayas, N. R. Bulley, Digital image analysis for software separation and classification of touching grains: I. Disconnect algorithm, Transactions of the ASAE, vol. 38, pp. 635-643, 1995.
[19] P. Shatadal, D. S. Jayas, N. R. Bulley, Digital image analysis for software separation and classification of touching grains: II. Classification, Transactions of the ASAE, vol. 38, pp. 645-649, 1995.
[20] D. A. Barker, T. A. Bouri, M. R. Hegedus, D. G. Myers, The use of ray parameters for the discrimination of Australian wheat varieties, Plant Varieties and Seeds, vol. 5, pp. 35-45, 1992.
[21] S. A. Draper, A. J. Travis, Preliminary observations with a computer based system for analysis of the shape of seeds and vegetative structures, J. Natl. Inst. Agric. Botany, vol. 16, pp. 387-395, 1984.
[22] W. W. Casady, M. R. Paulsen, An automated kernel-positioning device for computer vision analysis of grains, Transactions of the ASAE, vol. 32, pp. 1821-1826, 1989.
[23] P. Shatadal, D. S. Jayas, J. L. Hehn, N. R. Bulley, Seed classification using machine vision, Canadian Agric. Eng., vol. 37, pp. 163-167, 1995.
[24] N. S. Shashidhar, D. S. Jayas, T. G. Crown, N. R. Bulley, Processing of digital images of touching kernels by ellipse fitting, Canadian Agric. Eng., vol. 39, pp. 139-142, 1997.
[25] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Singapore: Addison-Wesley, 1992.
[26] Y. N. Wan, Kernel handling performance of an automatic grain quality inspection system, Transactions of the ASAE, vol. 45, pp. 369-377, 2002.
[27] D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back-propagating errors, Nature, vol. 323, pp. 533-536, 1986.
