
ICGST-GVIP Journal, ISSN: 1687-398X, Volume 9, Issue 4, August 2009

Texture based Identification and Classification of Bulk Sugary Food Objects

Basavaraj S. Anami¹ and Vishwanath C. Burkpalli²
1. Principal, K.L.E. Institute of Technology, Hubli-580030, India
2. Research Scholar, Basaveshwar Engineering College, Bagalkot-587102, India
Abstract
This paper presents a methodology for the identification and classification of bulk sugary food objects, comprising typical south Indian sweets such as Applecake, Bundeladu, Burfi, Doodhpeda, Jamun, Jilebi, Kalakand, Ladakiladu, Mysorepak and Suraliholige. When these sweets are arranged for display in shops they exhibit different patterns, and hence texture is used as the basis for recognition. The texture features are extracted using the gray level co-occurrence matrix method, and a multilayer feed forward neural network is developed to classify the bulk sugary food objects. The overall efficiency of the methodology is found to be 90%. The work finds application in the automatic monitoring and serving of food in restaurants, hotels and malls by service robots.

Keywords: Neural Network, Sugary Food Objects, Texture Features.

1. Introduction
Food is one of the basic requirements of every living being on earth. Human beings require their food to be fresh, pure and of standard quality. The standards imposed and the automation carried out in the food processing industry take care of food quality. Efforts are being geared towards the replacement of human operators with automated systems. In the food industry, some quality evaluation is still performed manually by trained inspectors, which is tedious, laborious, costly and inherently unreliable due to its subjective nature. Increased demands for objectivity, consistency and efficiency have necessitated the introduction of computer based image processing techniques. With the cost of labour increasing day by day, it is also important to consider automating the task of food serving in restaurants by smart robots. Hence, in order to develop an automated system for food quality evaluation and food serving in restaurants, the food is considered as an object, and the work presented here gives a neural network based classification of Indian sweet objects using textural features.

A method for classification and gradation of different boiled food grains like Red Gram, Green Gram, etc., with their level of boiling, is described in (Anami B.S. et al., 2005), where different color and texture features are extracted to develop a knowledge based classifier. Color and texture features are also used to develop a neural network model for classification of different food objects like Idli, Wada, Bonda, etc., proposed in (Anami B.S. et al., 2005). (Anami B.S. et al., 2003) developed a method for classification and gradation of different grains such as Ground Nut, Bengal Gram, Wheat, etc., in which geometrical and color features such as size, shape and RGB values are extracted and a neural network model is developed for classification. (Anami B.S. et al., 2009) have developed a knowledge based nearest mean classifier for classification of bulk food objects, using L*a*b* color features. (R.M. Haralick et al., 1973) have described some easily computable textural features based on gray tone spatial dependencies and have illustrated their application to category identification tasks on three different kinds of image data. (Giaime Ginesu et al., 2004) have worked on the detection of foreign bodies in food by thermal image processing. (D. Patel et al., 1994) have described a monitoring system which detects contaminants such as pieces of stone or fragments of glass in foodstuffs. (M. Barni et al., 1995) have described a vision based intelligent perception system for making automated inspection of chicken meat feasible; the defects are classified by comparing their features against defect descriptions contained in a reference database. (Bin Zhu et al., 2007) have developed automated inspection of apple quality based on geometric and statistical features; their paper introduces a Gabor feature-based kernel principal component analysis (PCA) method that combines the Gabor wavelet representation of apple images with kernel PCA for apple quality inspection using near-infrared (NIR) imaging.


(Dong-Chen He and Li Wang, 1990) have described a set of textural measures derived from the texture spectrum; the proposed features are used to extract the textural information of an image using the gray-level co-occurrence matrix. (Cheng-Jin Du and Da-Wen Sun, 2005) have developed an automated classification system for pizza sauce spread using color vision and support vector machines (SVM). The image is transformed from the red, green, blue (RGB) color space to the hue, saturation, value (HSV) color space; a vector quantizer is designed to quantize the HS (hue and saturation) space into 256 dimensions, and the quantized color features of the pizza sauce spread are represented by a color histogram. (Zhao-yan Liu et al., 2005) have described an image analysis algorithm based on color and morphological features for identifying different varieties of rice seeds; seven color features and fourteen morphological features are used for discriminant analysis, and a two-layer neural network model is developed for classification.

Most researchers have focused on the classification of fruits, meat and processed foods like pizza from their digital images, but south Indian cooked, processed or ready-to-eat food objects have not been considered so far. The present work pertains to the classification of bulk sugary food objects based on textural features. The work finds application in the automatic serving of food by robots in restaurants and also in food quality evaluation in the food industry.

The paper is organized into six sections. Section 2 presents the proposed methodology. Section 3 contains a detailed description of texture feature extraction using the gray level co-occurrence matrix method. Section 4 presents the development of the neural network model used for recognition and classification of bulk sugary food objects. The overall system performance is analyzed in Section 5. The conclusion and future avenues are given in Section 6.

2. Proposed Methodology
Images of different bulk sugary food objects (sweets), namely Applecake, Bundeladu, Burfi, Doodhpeda, Jamun, Jilebi, Kalakand, Ladakiladu, Mysorepak and Suraliholige, are captured using a 96 dpi colour camera (Samsung Digimax U-CA 401/Kenox ME4, CD4). When these sweets are arranged for display at the shops, they exhibit different patterns; hence images of each sample are captured from different angles, i.e. the top, front, left, right and back views are collected. Sample digital images of the different bulk sugary food objects are shown in Figure 1.

The texture features, namely Contrast, Correlation, Entropy, Energy, Homogeneity, Dissimilarity, Smoothness, Cluster Shade, Cluster Prominence, Angular Second Moment, Third Moment, Mean, Variance, Standard Deviation and Maximum Probability, are extracted using the gray level co-occurrence matrix. The extracted features are used to train the developed neural network model, which is then tested for recognition and classification of different bulk sugary food object samples. The block diagram representing the different phases of the work is given in Figure 2.

[Figure 1: Image samples of bulk sugary food objects: (a) Applecake, (b) Bundeladu, (c) Burfi, (d) Doodhpeda, (e) Jamun, (f) Jilebi, (g) Kalakand, (h) Ladakiladu, (i) Mysorepak, (j) Suraliholige.]

[Figure 2: Block diagram of the proposed methodology: bulk sugary food images, texture feature extraction, knowledge base, neural network model, results.]
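As an illustration of the acquisition and preprocessing stage, the following Python sketch (illustrative only; the file name and the choice of eight gray levels are assumptions, not details taken from the paper) reads a captured colour image, converts it to gray scale and quantizes it to a small number of gray levels in preparation for the co-occurrence analysis of Section 3.

    import numpy as np
    from PIL import Image

    def load_and_quantize(path, levels=8):
        """Read a colour image, convert it to gray scale and quantize it to `levels` gray levels."""
        gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)   # values in 0..255
        return np.floor(gray / 256.0 * levels).astype(np.int32)              # values in 0..levels-1

    # Hypothetical usage for one captured sweet image:
    # img = load_and_quantize("applecake_top_view.jpg", levels=8)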
3. Texture Features Extraction
Texture is one of the most important defining characteristics of an image. It is characterized by the spatial distribution of gray levels in a neighbourhood. Different types of texture feature extraction methods, namely statistical, geometrical, model-based and signal processing methods, are reported in the literature. Statistical methods analyze the spatial distribution of gray values by computing local features at each point in the image and deriving a set of statistics from the distributions of the local features.


Geometrical methods try to describe the texture primitives and the rules governing their spatial organization, by considering texture to be composed of texture primitives; the structure and organization of the primitives can be represented using Voronoi tessellations. Model based texture analysis methods are based on the construction of an image model that can be used not only to describe texture but also to synthesize it; the model parameters capture the essential perceived qualities of texture. Signal processing methods analyze the frequency content of the image. For food processing, the most widely used approaches are the statistical ones, including the pixel-value run length method and the co-occurrence matrix method.

3.1 Gray Level Co-occurrence Matrix (GLCM) Method
The GLCM is a two dimensional matrix of the frequencies at which two pixels, separated by a certain vector, occur in the image; i.e., the GLCM is a tabulation of how often different combinations of pixel brightness values (gray levels) occur in an image. The distribution in the matrix depends on the angular and distance relationship between pixels, and varying the vector used allows different texture characteristics to be captured. Once the GLCM has been created, various features can be computed from it. After creating the GLCM, it is required to normalize the matrix before the texture features are calculated, since the measures require that each GLCM cell contain not a count but a probability.

To accomplish the texture analysis task, the first step is to extract texture features that most completely embody information about the spatial distribution of intensity variations in the textured image. The texture features are derived from the GLCM using the formulae given in Table 1.

Contrast returns a measure of the intensity contrast between a pixel and its neighbourhood; contrast is 0 for a constant image. Energy means uniformity, or angular second moment (ASM); the more homogeneous the image, the larger the value, and when energy equals 1 the image is believed to be a constant image. Entropy is a measure of the randomness of the intensity image; an image with a larger number of occurrences of particular gray-level configurations results in a higher value of entropy. Local homogeneity measures the similarity of pixels; a diagonal gray level co-occurrence matrix gives a homogeneity of 1. Cluster shade and cluster prominence are measures of the skewness of the matrix, in other words of its lack of symmetry; when cluster shade (CS) and cluster prominence (CP) are high, the image is not symmetric. Maximum probability gives the maximum occurrence of gray levels satisfying the relation given in the entropy equation.

Table 1: Texture features used in the work (P(i,j) is the normalized GLCM entry; each sum runs over i, j = 0 … N−1)

Contrast:               Σ P(i,j) |i − j|²
Correlation:            Σ P(i,j) (i − μ_i)(j − μ_j) / √(σ_i² σ_j²)
Angular Second Moment:  Σ P(i,j)²
Energy:                 √ASM
Dissimilarity:          Σ P(i,j) |i − j|
Entropy:                Σ P(i,j) (−ln P(i,j))
Homogeneity:            Σ P(i,j) / (1 + (i − j)²)
Cluster Shade:          Σ ((i − μ_i) + (j − μ_j))³ P(i,j)
Cluster Prominence:     Σ ((i − μ_i) + (j − μ_j))⁴ P(i,j)
Mean:                   μ_i = Σ i·P(i,j),  μ_j = Σ j·P(i,j)
Variance:               σ² = Σ P(i,j) (i − μ)²
Standard Deviation:     σ = √(σ²)
Smoothness:             1 − 1 / (1 + σ²)
Third Moment:           Σ P(i,j) (i − μ_i)³
Maximum Probability:    max over i, j of P(i,j)
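As a worked illustration of the formulae in Table 1, the following Python sketch (a NumPy reimplementation written for this discussion, not the authors' code) computes a representative subset of the features from a normalized GLCM P.

    import numpy as np

    def glcm_features(P):
        """A subset of the Table 1 texture features from a normalized GLCM P (P.sum() == 1)."""
        N = P.shape[0]
        i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        mu_i, mu_j = (i * P).sum(), (j * P).sum()
        var_i, var_j = (((i - mu_i) ** 2) * P).sum(), (((j - mu_j) ** 2) * P).sum()
        asm = (P ** 2).sum()
        nz = P[P > 0]                                   # avoid log(0) in the entropy term
        return {
            "contrast":        (P * (i - j) ** 2).sum(),
            "dissimilarity":   (P * np.abs(i - j)).sum(),
            "homogeneity":     (P / (1.0 + (i - j) ** 2)).sum(),
            "asm":             asm,
            "energy":          np.sqrt(asm),
            "entropy":         -(nz * np.log(nz)).sum(),
            "correlation":     (P * (i - mu_i) * (j - mu_j)).sum() / np.sqrt(var_i * var_j),
            "cluster_shade":   (P * ((i - mu_i) + (j - mu_j)) ** 3).sum(),
            "max_probability": P.max(),
        }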
The steps involved in texture feature extraction are given in Algorithm 1. The algorithm is rotation and scaling invariant. Moreover, noise-free images are obtained through suitable processing; hence it is also noise invariant.

Algorithm 1: Texture feature extraction using the GLCM method
Input: RGB components of the original image.
Output: 15 texture features.
Start
Step 1: For all the sampled RGB components, derive the gray level co-occurrence matrices (GLCM) Pφ,d(x, y) for the four directions φ (0°, 45°, 90° and 135°) and d = 1, which are dependent on the direction φ.
Step 2: Compute the co-occurrence matrix that is independent of direction using equation (3):
C = 1/4 (P0° + P45° + P90° + P135°)        (3)
Step 3: Calculate the GLCM features, namely mean, contrast, etc., using the equations given in Table 1.
Stop
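A minimal Python sketch of Algorithm 1 is given below (an illustrative reconstruction that assumes the quantized gray-level image and the feature routine sketched earlier, rather than the authors' implementation). It accumulates co-occurrence counts for d = 1 in the four directions 0°, 45°, 90° and 135°, averages them as in equation (3), normalizes the result to probabilities and then evaluates the Table 1 features.

    import numpy as np

    # Pixel offsets (dy, dx) for d = 1 in the directions 0, 45, 90 and 135 degrees.
    OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]

    def glcm_single(img, levels, dy, dx):
        """Co-occurrence counts of gray-level pairs separated by the offset (dy, dx)."""
        P = np.zeros((levels, levels), dtype=np.float64)
        rows, cols = img.shape
        for y in range(rows):
            for x in range(cols):
                yy, xx = y + dy, x + dx
                if 0 <= yy < rows and 0 <= xx < cols:
                    P[img[y, x], img[yy, xx]] += 1.0
        return P

    def direction_averaged_glcm(img, levels=8):
        """Equation (3): C = 1/4 (P0 + P45 + P90 + P135), normalized to probabilities."""
        C = sum(glcm_single(img, levels, dy, dx) for dy, dx in OFFSETS) / 4.0
        return C / C.sum()

    # Hypothetical end-to-end use with the helpers sketched earlier:
    # img = load_and_quantize("applecake_top_view.jpg", levels=8)
    # features = glcm_features(direction_averaged_glcm(img, levels=8))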


4. Neural Network Model
The multilayer feed forward neural network model, trained with the back propagation algorithm, is employed for the classification task, as shown in Figure 3. The number of neurons in the input layer is equal to the number of input features (15), and the number of neurons in the output layer is equal to the number of categories of bulk sugary food objects (10).

[Figure 3: Neural network classifier with an input layer of 15 nodes (texture features such as contrast, entropy, energy and mean), a hidden layer of 12 nodes and an output layer of 10 nodes (one per sweet class, e.g. Applecake, Bundeladu, Kalakand, Suraliholige).]

4.1 Training and Testing
The developed neural network is trained with ten different types of sugary food objects, namely Applecake, Bundeladu, Doodhpeda, Jamun, Jilebi, Ladakiladu, Mysorepak, Burfi, Kalakand and Suraliholige, with 2000 samples representing 200 examples of each of the ten types of sugary food objects. The texture features of an image are used as the inputs. In the testing phase, bulk sugary food objects from an untrained set of samples are used to test the trained neural network model for classification. The sample feature values for applecake are shown in Figure 4.

The developed neural network is trained to a termination error (TE) of 0.01 in 62 epochs, with a learning constant (learning rate, LR) of 0.1. The plots of the data sets of each class used for testing are shown in Figure 5 and Figure 6.

[Figure 4: Plot of the feature values (log scale) against the texture features for the applecake samples used for training.]

[Figure 5: Plot of the fifteen texture measures against the number of samples for the applecake data set used for testing.]

[Figure 6: Plot of the fifteen texture measures against the number of samples for the bundeladu data set used for testing.]
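A network with this 15-12-10 topology can be sketched, for example, with scikit-learn's multilayer perceptron; this is an illustrative substitute for the authors' own back-propagation implementation, and the use of scikit-learn, the variable names and the mapping of the termination error onto the tol parameter are assumptions. The learning rate of 0.1 mirrors the learning constant reported above.

    from sklearn.neural_network import MLPClassifier

    # X_train: (n_samples, 15) texture feature vectors; y_train: labels of the ten sweet classes.
    clf = MLPClassifier(hidden_layer_sizes=(12,),   # one hidden layer of 12 neurons
                        activation="logistic",
                        solver="sgd",               # gradient descent with back-propagated errors
                        learning_rate_init=0.1,     # learning constant used in the paper
                        tol=0.01,                   # rough analogue of the termination error
                        max_iter=1000)
    # clf.fit(X_train, y_train)
    # predictions = clf.predict(X_test)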
5. Results and Discussions
This section gives the results of experiments carried out on the developed neural network model. The algorithm developed for texture feature extraction using the GLCM method performed well in the task of extracting texture features from images of different bulk sugary food objects. The results obtained in this work indicate that ANNs can in general classify bulk sugary food objects with a success rate of 86% to 90%. An initial model was developed using only six texture features, namely Contrast, Correlation, Energy, Entropy, Homogeneity and Dissimilarity; its performance was found to be low (74%). Since these features are not sufficient for better classification of bulk sugary food objects, more features were extracted, and the fifteen features shown in Table 1 were considered to improve the performance. The accuracies obtained for classification of bulk sugary food objects when the developed neural network is trained with different learning constants and different termination errors are given in Table 2. From the results tabulated in Table 2, one can conclude that better classification is possible with a smaller learning constant. The classification accuracies for trained and untrained bulk sugary food samples are shown in Figure 7 and Figure 8 respectively. The graph in Figure 7 represents the accuracy of the network for classification of the different types of trained bulk sugary objects; this accuracy is found to be 97.5%. The graph in Figure 8 represents the accuracy of the network for classification of the different types of untrained bulk sugary objects; the accuracy of the neural network model in this case is found to be 90%.
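The trained and untrained accuracies reported above, and the per-class recognition rates summarised in the conclusion, can be computed by comparing the network's predictions against the true labels; the following sketch (illustrative; it assumes the clf object and the feature arrays from the previous sketch) shows one way to do this.

    import numpy as np

    def per_class_accuracy(y_true, y_pred, class_names):
        """Classification accuracy for each sweet class, plus the overall accuracy."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        report = {c: float((y_pred[y_true == c] == c).mean()) for c in class_names}
        report["overall"] = float((y_pred == y_true).mean())
        return report

    # Hypothetical usage with the classifier sketched in Section 4:
    # report = per_class_accuracy(y_test, clf.predict(X_test), class_names=sorted(set(y_test)))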


[Figure 7: Classification accuracy (%) for each bulk sugary food class in the training data set.]

[Figure 8: Classification accuracy (%) for each bulk sugary food class in the testing data set.]

Table 2: Classification rate of the neural network model for different learning constants and termination errors

Network id    Learning Constant    Termination Error    Classification Accuracy (%)
Net1          0.5                  0.001                86
Net2          0.2                  0.005                88
Net3          0.2                  0.003                88
Net4          0.1                  0.005                90

6. Conclusion and Future Work
This work gives an efficient model for the identification and classification of bulk sugary food objects. The gray level co-occurrence method is found to be an efficient method for texture feature extraction. The developed neural network model classifies ten different varieties of bulk sugary food objects with 90% accuracy. For the food objects Applecake, Bundeladu, Jilebi, Burfi, Kalakand and Suraliholige the recognition is 100%; for the food objects Jamun, Doodhpeda and Mysorepak the recognition accuracy is 80%; and the food object Ladakiladu is recognized with 60% accuracy. In this work gray scale images are considered; in future, colour features can be considered for better classification. The developed neural network can also be enhanced for evaluating the quality of sugary food objects, in order to meet consumers' expectations.

7. References
[1] Anami B.S. and Savakar D.G. (2009). Improved method for identification of foreign bodies mixed food grain image samples, International Journal of Artificial Intelligence and Machine Learning (AIML), Vol. 9, Issue 1, pp. 1-9.
[2] Anami B.S., Burkpalli V., Angadi S.A. and Patil N. (2003). Neural network approach for grain classification and gradation, Proceedings of the Second National Conference on Document Analysis and Recognition, Mandya, India, 11-12 July, pp. 394-405.
[3] Anami B.S., Savakar D.G., Makandar A. and Unki P.H. (2005). A neural network model for classification of bulk grain samples based on colour and texture, Proceedings of the International Conference on Cognition and Recognition, Mandya, India, 22-23 December, pp. 359-368.
[4] Anami B.S., Burkpalli V. and Maddival S. (2005). A texture based approach for classification of bulk boiled food grain images, Proceedings of the International Conference on Cognition and Recognition, Mandya, India, 11-12 July, pp. 419-426.
[5] Anami B.S., Burkpalli V. and Sangamesh S.H. (2005). A neural network based model for recognition of singleton food objects, Proceedings of the International Conference on Cognition and Recognition, Document Analysis and Recognition, Mandya, India, 11-12 July, pp. 419-426.
[6] Bin Zhu, Lu Jiang, Yaguang Luo and Yang Tao (2007). Gabor feature-based apple quality inspection using kernel principal component analysis, Journal of Food Engineering, Vol. 81, Issue 4, pp. 741-749.
[7] Cheng-Jin Du and Da-Wen Sun (2004). Recent developments in the applications of image processing techniques for food quality evaluation, Trends in Food Science & Technology, Vol. 15, Issue 5, pp. 230-249.
[8] Cheng-Jin Du and Da-Wen Sun (2005). Pizza sauce spread classification using color vision and support vector machines, Journal of Food Engineering, Vol. 66, Issue 2, pp. 137-145.

[9] D. Patel, I. Hannah and E.R. Davies (1994). Foreign object detection via texture analysis, Proceedings of the 12th IAPR International Conference, Vol. 1, pp. 586-588.
[10] D.C. He, L. Wang and J. Guibert (1987). Texture feature extraction, Pattern Recognition Letters, Vol. 6, pp. 269-273.
[11] Dong-Chen He and Li Wang (1990). Texture unit, texture spectrum, and texture analysis, IEEE Transactions on Geoscience and Remote Sensing, Vol. 28, No. 4.
[12] Giaime Ginesu, Daniele D. Giusto, Volker Margner and Peter Meinlschmidt (2004). Detection of foreign bodies in food by thermal image processing, IEEE Transactions on Industrial Electronics, Vol. 51, No. 2.
[13] Koichi Fujiwara, Fumiaki Takeda, Hisaya Uchida and Sakoobunthu Lalita (2002). Dishes extraction with neural network for food intake measuring system, SICE, pp. 1627-1630.
[14] M. Barni, A.W. Mussa, A. Mecocci, V. Cappellini and T.S. Durrani (1995). An intelligent perception system for food quality inspection using colour analysis, IEEE International Conference on Image Processing, Vol. 1, pp. 450-453.
[15] R.M. Haralick, K. Shanmugam and I. Dinstein (1973). Textural features for image classification, IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-3, pp. 610-621.
[16] S.K. Shah and V. Gandhi (2004). Image classification based on textural features using artificial neural network, IE(I) Journal – ET, Vol. 84, January.
[17] Yud-Ren Chen (1994). Applying knowledge-based system to meat grading, US Govt Work, pp. 120-123.
[18] Zhao-yan Liu, Fang Cheng, Yi-bin Ying and Xiu-qin Rao (2005). Identification of rice seed varieties using neural network, Journal of Zhejiang University Science B, Vol. 6(11), pp. 1095-1100.

Biographies

Dr. Basavaraj S. Anami is the Principal of K.L.E. Institute of Technology, Hubli, Karnataka, India. He obtained his B.E. degree in Electrical Engineering, his Master's degree in Computer Science and his Ph.D. in Computer Science in 1981, 1986 and 2003 respectively. He worked as a faculty member of the Computer Science and Engineering Department, Basaveshwar Engineering College, Bagalkot, in various designations from 1983 to 2008. His research areas of interest are Image Processing and Pattern Recognition, Character Recognition, Fuzzy Systems and Neural Networks. He has published 50 research papers in peer-reviewed international journals and conferences.
Tel (off): +91 0836-2232681
Fax: +91 0836-2330688
E-mail: anami_basu@hotmail.com

Mr. Vishwanath Burkpalli is a Research Scholar in the Department of Computer Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102. He obtained his B.E. degree in Computer Science & Engineering in 1996 and his Master's degree in Computer Science and Engineering in 1999. He is working towards his doctoral degree in Computer Science. His research area of interest is Image Processing and Pattern Recognition.
Tel (off): +91 08472-255685
Fax (off): +91 08472-255685
E-mail: vishwa_bc@rediffmail.com

