ICGST-GVIP Journal, ISSN: 1687-398X, Volume 9, Issue 4, August 2009
(Dong-chen He and Li Wang, 1990) have described a set of textural measures derived from the texture spectrum. The proposed features are used to extract textural information of an image using the gray-level co-occurrence matrix. (Cheng-Jin Du and Da-Wen Sun, 2005) have developed an automated classification system for pizza sauce spread using color vision and support vector machines (SVM). The image is transformed from red, green, and blue (RGB) color space to hue, saturation, and value (HSV) color space. A vector quantizer is designed to quantize the HS (hue and saturation) space to 256 dimensions, and the quantized color features of pizza sauce spread are represented by a color histogram. (Zhao-yan Liu et al., 2005) have described an image analysis algorithm based on color and morphological features for identifying different varieties of rice seeds. Seven color features and fourteen morphological features are used for discriminant analysis. A two-layer neural network model is developed for classification.

Most researchers have focused on classification of fruits, meat, and processed food like pizza from their digital images. But south Indian cooked, processed, or ready-to-eat food objects have not been considered so far. The present work pertains to classification of bulk sugary food objects based on textural features. The work finds application in automatic serving of food by robots in restaurants and also in food quality evaluation in the food industry.

The paper is organized into six sections. Section 2 contains the methodology. Section 3 contains a detailed description of texture feature extraction using the gray level co-occurrence matrix method. Section 4 presents the development of the neural network model that is used for recognition and classification of bulk sugary food objects. The overall system performance is analyzed in Section 5. The conclusion and future avenues are given in Section 6.

2. Methodology
Images of different bulk sugary food objects (sweets) … occurrence matrix. The extracted features are used to train the developed neural network model. The developed neural network model is tested for recognition and classification of different bulk sugary object samples. The block diagram representing the different phases of the work is given in Figure 2.

Figure 1: Image Samples of Bulk Sugary Food Objects: (a) Applecake (b) Bundeladu (c) Burfi (d) Doodhpeda (e) Jamun (f) Jilebi (g) Kalakand (h) Ladakiladu (i) Mysorepak (j) Suraliholige

Figure 2: Block diagram of the different phases of the work (Knowledge base, feature extraction, Model)
in the image and deriving a set of statistics from the distributions of the local features. Geometrical methods try to describe the primitives and the rules governing their spatial organization, considering texture to be composed of texture primitives. The structure and organization of the primitives can be represented using Voronoi tessellations. Model based texture analysis methods are based on the construction of an image model that can be used not only to describe texture, but also to synthesize it. The model parameters capture the essential perceived qualities of texture. Signal processing methods analyze the frequency content of the image. For food processing, the most widely used approaches are statistical, including the pixel-value run length method and the co-occurrence matrix method.

3.1 Gray level Co-occurrence Matrix (GLCM) Method
GLCM is a two dimensional matrix of the frequencies at which two pixels, separated by a certain vector, occur in the image. That is, the GLCM is a tabulation of how often different combinations of pixel brightness values (gray levels) occur in an image. The distribution in the matrix depends on the angular and distance relationship between pixels, and varying the vector used allows the capturing of different texture characteristics. Once the GLCM has been created, various features can be computed from it. After creating the GLCM, it is required to normalize the matrix before texture features are calculated: the measures require that each GLCM cell contain not a count, but rather a probability.

To accomplish the texture analysis task, the first step is to extract texture features that most completely embody information about the spatial distribution of intensity variations in the textured image. Texture features are derived from the GLCM using the formulae given in Table 1.

Table 1. Texture Features used in the work

  Property               Formula
  Contrast               \sum_{i,j=0}^{N-1} P_{i,j} (i-j)^2
  Correlation            \sum_{i,j=0}^{N-1} P_{i,j} \frac{(i-\mu_i)(j-\mu_j)}{\sqrt{\sigma_i^2 \sigma_j^2}}
  Angular Second Moment  \sum_{i,j=0}^{N-1} P_{i,j}^2
  Energy                 \sqrt{ASM}
  Dissimilarity          \sum_{i,j=0}^{N-1} P_{i,j} |i-j|
  Entropy                \sum_{i,j=0}^{N-1} P_{i,j} (-\ln P_{i,j})
  Homogeneity            \sum_{i,j=0}^{N-1} \frac{P_{i,j}}{1+(i-j)^2}
  Cluster Shade          \sum_{i,j=0}^{N-1} ((i-\mu_i)+(j-\mu_j))^3 P_{i,j}
  Cluster Prominence     \sum_{i,j=0}^{N-1} ((i-\mu_i)+(j-\mu_j))^4 P_{i,j}
  Mean                   \mu_i = \sum_{i,j=0}^{N-1} i \, P_{i,j}, \quad \mu_j = \sum_{i,j=0}^{N-1} j \, P_{i,j}
  Variance               \sigma^2 = \sum_{i,j=0}^{N-1} P_{i,j} (i-\mu)^2
  Standard Deviation     \sigma = \sqrt{\sigma^2}
  Smoothness             1 - \frac{1}{1+\sigma^2}
  Third Moment           \sum_{i,j=0}^{N-1} P_{i,j} (i-\mu_i)^3
  Maximum Probability    \max_{i,j} P_{i,j}

Contrast returns a measure of the intensity contrast between a pixel and its neighbourhood; contrast is 0 for a constant image. Energy means uniformity, or angular second moment (ASM); the more homogeneous the image, the larger the value, and when energy equals 1, the image is believed to be a constant image. Entropy is a measure of the randomness of the intensity image; an image with more occurrences of particular color configurations yields a higher value of entropy. Local homogeneity measures the similarity of pixels; a diagonal gray level co-occurrence matrix gives a homogeneity of 1. Cluster shade and cluster prominence are measures of the skewness of the matrix, in other words, the lack of symmetry: when cluster shade (CS) and cluster prominence (CP) are high, the image is not symmetric. Maximum probability gives the maximum occurrence of gray levels, which satisfies the relation given in the entropy equation. The steps involved in texture feature extraction are given in Algorithm 1. The algorithm is rotation and scaling invariant. Noise free images, obtained through suitable processing, are used; hence, it is also noise invariant.

Algorithm 1: Texture feature extraction by the GLCM method.
Input: RGB components of the original image.
Output: 14 texture features.
Start
Step 1: For all the sampled RGB components, derive the Gray Level Co-occurrence Matrices (GLCM) P_{\phi,d}(x, y) for the four directions \phi (0°, 45°, 90° and 135°) and d = 1, which are dependent on the direction \phi.
Step 2: Compute the co-occurrence matrix, which is independent of direction, using equation (3):
    C = (1/4)(P_{0°} + P_{45°} + P_{90°} + P_{135°})    (3)
Step 3: Calculate the GLCM features, namely mean, contrast, etc., using the equations given in Table 1.
Stop
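As a concrete illustration, Algorithm 1 and a few of the Table 1 features can be sketched in Python with NumPy. This is a minimal sketch: the number of gray levels and the test image are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def glcm_direction_invariant(img, levels=8):
    """Build a normalized, direction-averaged GLCM as in Algorithm 1:
    P_{phi,d} for phi in {0, 45, 90, 135} degrees at distance d = 1,
    averaged as C = (P0 + P45 + P90 + P135) / 4 (equation 3)."""
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]  # 0, 45, 90, 135 degrees
    glcm = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for dr, dc in offsets:
        p = np.zeros((levels, levels), dtype=np.float64)
        for r in range(rows):
            for c in range(cols):
                r2, c2 = r + dr, c + dc
                if 0 <= r2 < rows and 0 <= c2 < cols:
                    p[img[r, c], img[r2, c2]] += 1
        glcm += p / p.sum()          # normalize counts to probabilities
    return glcm / 4.0                # equation (3)

def texture_features(P):
    """A subset of the Table 1 features computed from a normalized GLCM."""
    n = P.shape[0]
    i, j = np.indices((n, n))
    mu_i = (i * P).sum()
    mu_j = (j * P).sum()
    feats = {
        "contrast": (P * (i - j) ** 2).sum(),
        "dissimilarity": (P * np.abs(i - j)).sum(),
        "asm": (P ** 2).sum(),
        "entropy": -(P[P > 0] * np.log(P[P > 0])).sum(),
        "homogeneity": (P / (1 + (i - j) ** 2)).sum(),
        "cluster_shade": (((i - mu_i) + (j - mu_j)) ** 3 * P).sum(),
        "max_probability": P.max(),
    }
    feats["energy"] = np.sqrt(feats["asm"])
    return feats

# Toy usage: a constant image should give contrast 0, energy 1, entropy 0.
img = np.full((16, 16), 3, dtype=int)
features = texture_features(glcm_direction_invariant(img, levels=8))
```

Averaging the four directional matrices before computing features is what makes the extracted measures independent of direction, as Step 2 states.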
The number of neurons in the input layer is equal to the number of texture features. The ten output classes (10 nodes) are Applecake, Bundeladu, Doodhpeda, Jamun, Jilebi, Ladakiladu, Mysorepak, Burfi, Kalakand, and Suraliholige, with 2000 samples representing 200 examples per class.

Figure 3: Neural Network Classifier

Figure 5: Plot of the texture measures (smoothness, third moment, cluster prominence, cluster shade, maximum probability, contrast, entropy, energy, mean) for the data set of sweet sample applecake used for testing

Figure 6: Plot of the texture measures for the data set of sweet sample bundeladu used for testing
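The classifier just described can be sketched as a small feed-forward network mapping texture features to class probabilities. This is only an illustrative sketch: the hidden-layer size, weight initialization, and activation functions below are assumptions, since the excerpt does not state the model's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 14 GLCM texture features in, 10 sweet classes out;
# the hidden width of 20 is a guess for illustration only.
n_in, n_hidden, n_out = 14, 20, 10
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(x):
    """Forward pass: sigmoid hidden layer, softmax output layer
    giving one probability per bulk sugary food class."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))   # hidden activations
    z = h @ W2 + b2                             # output pre-activations
    e = np.exp(z - z.max())                     # numerically stable softmax
    return e / e.sum()

# A 14-element feature vector (e.g. from Algorithm 1) yields 10 class
# probabilities; the predicted sweet is the most probable class.
probs = forward(rng.normal(size=n_in))
predicted_class = int(np.argmax(probs))
```

In practice such a network would be trained with backpropagation on the labeled feature vectors before being used for classification.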
In the testing phase, bulk sugary food objects from an untrained set of samples are used to test the trained neural network model for classification. The sample feature values for applecake are shown in Figure 4.

Figure 4: Feature values (logarithmic scale) for the applecake sample

5. Results and discussions
This section gives the results of experiments carried out on the developed neural network model. The algorithm developed for texture feature extraction using the GLCM method performed well in the task of extracting texture features from images of different bulk sugary food objects. The results obtained in this work indicate that ANNs can in general classify bulk sugary food objects with a success rate of 86% to 90%. An initial model was developed using only six texture features: contrast, correlation, energy, entropy, homogeneity, and dissimilarity. The performance of this neural network model was found to be low (74%).
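Overall and per-class success rates like those quoted in this section can be read off a confusion matrix. The counts below are toy values for illustration, not the paper's data.

```python
import numpy as np

# Rows: true class, columns: predicted class (toy counts, 3 classes).
confusion = np.array([
    [48,  1,  1],
    [ 2, 45,  3],
    [ 0,  0, 50],
])

# Recognition rate per class: correct predictions / samples of that class.
per_class = np.diag(confusion) / confusion.sum(axis=1)

# Overall success rate: all correct predictions / all samples.
overall = np.trace(confusion) / confusion.sum()
```

A class whose entire row of counts lies on the diagonal, like the third one here, is recognized 100% of the time, which is the situation reported for several of the sweets.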
The accuracy of the neural network model on untrained bulk sugary objects is found to be 90%.

Figure 7: Classification Accuracy of the training data set (accuracy (%) per class for Applecake, Bundeladu, Doodhpeda, Jamun, Jilebi, Ladakiladu, Mysorepak, Burfi, Kalakand, and Suraliholige)

6. Conclusion
The developed neural network model classifies ten different varieties of bulk sugary food objects. The bulk sugary food objects are classified with 90% accuracy. For the food objects Applecake, Bundeladu, Jilebi, Burfi, Kalakand, and Suraliholige, it is found that the recognition is 100%. For the food objects Jamun, Doodhpeda, and Mysorepak, the recognition … evaluating the quality of the sugary food objects, in order to meet the consumer's expectations.

7. References
[1] B. S. Anami and D. G. Savakar, (2009). Improved method for Identification of Foreign Bodies Mixed Food Grain Image Samples, International Journal of Artificial Intelligence and Machine Learning (AIML), Vol. 9, Issue 1, pp. 1-9.
[2] B. S. Anami, Vishwanath Burkpalli, S. A. Angadi, Nagama Patil, (2003). Neural network approach for grain classification and gradation, Proceedings of the second national conference …