
INTERNATIONAL JOURNAL OF IMAGE

PROCESSING (IJIP)






VOLUME 6, ISSUE 3, 2012

EDITED BY
DR. NABEEL TAHIR








ISSN (Online): 1985-2304
The International Journal of Image Processing (IJIP) is published both in traditional paper form and on the Internet. The journal is published at the website http://www.cscjournals.org, maintained by Computer Science Journals (CSC Journals), Malaysia.


IJIP Journal is a part of CSC Publishers
Computer Science Journals
http://www.cscjournals.org



INTERNATIONAL JOURNAL OF IMAGE PROCESSING (IJIP)

Book: Volume 6, Issue 3, June 2012
Publishing Date: 20-06-2012
ISSN (Online): 1985-2304

This work is subject to copyright. All rights are reserved, whether the whole or
part of the material is concerned, specifically the rights of translation, reprinting,
re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any
other way, and storage in data banks. Duplication of this publication or parts
thereof is permitted only under the provisions of the copyright law 1965, in its
current version, and permission for use must always be obtained from CSC
Publishers.



IJIP Journal is a part of CSC Publishers
http://www.cscjournals.org

IJIP Journal
Published in Malaysia

Typesetting: Camera-ready by author; data conversion by CSC Publishing Services, CSC Journals,
Malaysia



CSC Publishers, 2012


EDITORIAL PREFACE

The International Journal of Image Processing (IJIP) is an effective medium for the interchange of
high-quality theoretical and applied research in the image processing domain, from theoretical
research to application development. This is the third issue of volume six of IJIP. The Journal
is published bi-monthly, with papers being peer reviewed to high international standards. IJIP
emphasizes efficient and effective image technologies, and provides a central forum for a deeper
understanding of the discipline by encouraging the quantitative comparison and performance
evaluation of the emerging components of image processing. IJIP comprehensively covers the
system, processing, and application aspects of image processing. Some of the important topics
are architecture of imaging and vision systems, chemical and spectral sensitization, coding and
transmission, generation and display, image processing (coding, analysis, and recognition),
photopolymers, and visual inspection.

The initial efforts helped to shape the editorial policy and to sharpen the focus of the journal.
Starting with volume 6, 2012, IJIP appears in more focused issues. Besides normal publications,
IJIP intends to organize special issues on more focused topics. Each special issue will have a
designated editor (or editors), either a member of the editorial board or another recognized
specialist in the respective field.

IJIP gives scientists, researchers, engineers, and vendors from different disciplines of image
processing an opportunity to share ideas, identify problems, investigate relevant issues,
share common interests, explore new approaches, and initiate possible collaborative research
and system development. This journal is helpful to researchers, R&D engineers, and
scientists: all those involved in image processing in any form.

Highly professional scholars give their efforts, valuable time, expertise and motivation to IJIP as
Editorial board members. All submissions are evaluated by the International Editorial Board. The
International Editorial Board ensures that significant developments in image processing from
around the world are reflected in the IJIP publications.

IJIP editors understand how important it is for authors and researchers to have their
work published with a minimum delay after submission of their papers. They also strongly believe
that direct communication between the editors and authors is important for the welfare,
quality, and wellbeing of the Journal and its readers. Therefore, all activities from paper
submission to paper publication are handled through electronic systems, including electronic
submission, an editorial panel, and a review system that ensures rapid decisions with minimal
delays in the publication process.

To build its international reputation, we are disseminating the publication information through
Google Books, Google Scholar, Directory of Open Access Journals (DOAJ), Open J Gate,
ScientificCommons, Docstoc and many more. Our International Editors are working on
establishing ISI listing and a good impact factor for IJIP. We would like to remind you that the
success of our journal depends directly on the number of quality articles submitted for review.
Accordingly, we would like to request your participation by submitting quality manuscripts for
review and encouraging your colleagues to submit quality manuscripts for review. One of the
great benefits we can provide to our prospective authors is the mentoring nature of our review
process. IJIP provides authors with high quality, helpful reviews that are shaped to assist authors
in improving their manuscripts.


Editorial Board Members
International Journal of Image Processing (IJIP)

EDITORIAL BOARD

EDITOR-in-CHIEF (EiC)

Professor Hu, Yu-Chen
Providence University (Taiwan)


ASSOCIATE EDITORS (AEiCs)


Professor. Khan M. Iftekharuddin
University of Memphis
United States of America

Assistant Professor M. Emre Celebi
Louisiana State University in Shreveport
United States of America

Assistant Professor Yufang Tracy Bao
Fayetteville State University
United States of America

Professor. Ryszard S. Choras
University of Technology & Life Sciences
Poland


Professor Yen-Wei Chen
Ritsumeikan University
Japan


Associate Professor Tao Gao
Tianjin University
China


EDITORIAL BOARD MEMBERS (EBMs)



Dr. C. Saravanan
National Institute of Technology, Durgapur, West Bengal
India

Dr. Ghassan Adnan Hamid Al-Kindi
Sohar University
Oman

Dr. Cho Siu Yeung David
Nanyang Technological University
Singapore

Dr. E. Sreenivasa Reddy
Vasireddy Venkatadri Institute of Technology
India

Dr. Khalid Mohamed Hosny
Zagazig University
Egypt

Dr. Chin-Feng Lee
Chaoyang University of Technology
Taiwan

Professor Santhosh.P.Mathew
Mahatma Gandhi University
India

Dr Hong (Vicky) Zhao
Univ. of Alberta
Canada

Professor Yongping Zhang
Ningbo University of Technology
China

Assistant Professor Humaira Nisar
Universiti Tunku Abdul Rahman
Malaysia

Dr M.Munir Ahamed Rabbani
Qassim University
India

Dr Yanhui Guo
University of Michigan
United States of America

Associate Professor András Hajdu
University of Debrecen
Hungary

Assistant Professor Ahmed Ayoub
Shaqra University
Egypt

Dr Irwan Prasetya Gunawan
Bakrie University
Indonesia

Assistant Professor Concetto Spampinato
University of Catania
Italy

Associate Professor João M.F. Rodrigues
University of the Algarve
Portugal

Dr Anthony Amankwah
University of the Witwatersrand
South Africa
TABLE OF CONTENTS




Volume 6, Issue 3, June 2012


Pages

167 - 181 A New Method for Indoor-outdoor Image Classification Using Color Correlated Temperature
A. Nadian Ghomsheh & A. Talebpour

182 - 197 Effect of Similarity Measures for CBIR Using Bins Approach
Dr. H. B. Kekre & Kavita Sonawane








A New Method for Indoor-outdoor Image Classification Using
Color Correlated Temperature


A. Nadian Ghomsheh a_nadian@sbu.ac.ir
Electrical and Computer Engineering Department
Shahid Beheshti University
Tehran, 1983963113, Iran

A. Talebpour talebpour@sbu.ac.ir
Electrical and Computer Engineering Department
Shahid Beheshti University
Tehran, 1983963113, Iran

Abstract

In this paper a new method for indoor-outdoor image classification is presented, in which the
concept of Color Correlated Temperature is used to extract distinguishing features between
the two classes. In this process, using the Hue color component, each image is segmented into
different color channels, and the color correlated temperature is calculated for each channel.
These values are then used to build the image feature vector. Besides color temperature
values, the feature vector also holds information about the color formation of the image. In the
classification phase, a KNN classifier is used to classify images as indoor or outdoor. Two
different datasets are used for testing: a collection of images gathered from the internet, and a
second dataset built by frame extraction from different video sequences from one video
capturing device. A high classification rate, compared to other state-of-the-art methods, shows
the ability of the proposed method for indoor-outdoor image classification.

Keywords: Indoor-outdoor Image Classification, Color Segmentation, Color Correlated
Temperature


1. INTRODUCTION
The scene classification problem is a major challenge in the field of computer vision [1]. With rapid
technological advances in digital photography and expanding online storage space available
to users, demands for better organization and retrieval of image databases are increasing.
Precise indoor-outdoor image classification improves scene classification and allows
processing systems to improve performance by applying different methods based on the
scene class [2]. To classify images as indoor or outdoor, it is common to divide an image into
sub-blocks and process each block separately, benefiting from computational ease. Within
each image sub-block, different low-level features such as color, texture, and edge
information are extracted and used for classification.

Color provides strong information for characterizing landscape scenes. Different color spaces
have been tested for scene classification. In a pioneering work [3], an image was divided into
4x4 blocks, and the Ohta color space [4] was used to construct the histograms required for
extracting color information. Using only color information, they achieved a classification
accuracy of 74.2%. The LST color space used by [5] achieved a classification accuracy of 67.6%.
[6] used first-order statistical features from color histograms, computed in RGB color space,
and achieved a classification rate with 65.7% recall and 93.8% precision. [7] compared
different features extracted from color histograms, including the opponent color
chromaticity histogram, the color correlogram [8], the MPEG-7 color descriptor, the colored pattern
appearance histogram [9], and layered color indexing [10]. The results of this comparison showed
that no winner could be selected for all types of images, but that a significant amount of redundancy
in histograms can be removed. In [11], mixed channels of the RGB, HSV, and Luv color spaces
were used to calculate color histograms. First- and second-order statistical moments of
each histogram served as features for indoor-outdoor classification. To improve
classification accuracy, camera metadata related to capture conditions was used in [12]. In
[13], each image was divided into 24x32 sub-blocks, and a 2D color histogram in Luv color
space, along with a composite feature correlating region color with spatial location derived
from the H component of HSV color space, were used for indoor-outdoor image classification.
[14, 15] proposed Color Oriented Histograms (COH) obtained from images divided into 5
blocks. They also used Edge Oriented Histograms (EOH) and combined the two to
obtain CEOH as the feature for classifying images into indoor and outdoor classes.

Texture and edge features have also been used for indoor-outdoor image classification. [5]
extracted texture features from a two-level wavelet decomposition on the L-channel
of the LST color space [16]. In [17], each image was segmented via fuzzy C-means clustering,
and the mean and variance extracted from each segment represented texture features. [18]
used variance, coefficient of variation, energy, and entropy to extract texture features for
indoor-outdoor classification. Texture orientation was considered in [19]. From the analysis of
indoor and outdoor images, and of images of synthetic and organic objects, [20] observed that
organic objects have a larger number of small erratic edges due to their fractal nature; synthetic
objects, in comparison, have edges that are straighter and less erratic.

Although color has been shown to be a strong feature for indoor-outdoor image classification,
dividing images into blocks regardless of the information present in each individual
image degrades the classification results, as the mixture of colors in each block is
unpredictable. Another aspect that has not received much attention is the illumination of the
scene in which the image was captured. This is important, since indoor and outdoor
illuminations are quite different, and incorporating such information can effectively enhance
the classification results.

To overcome such limitations when color information is considered for indoor-outdoor image
classification, a new method based on the Color Correlated Temperature (CCT) feature is
proposed in this paper. As will be shown, the apparent color of an object changes when the
scene illumination changes. This is very useful for the aims of this paper, since the
illuminants of indoor scenes differ greatly from those of outdoor scenes. CCT is a
way to describe the apparent color of an image under different lighting conditions. In this
process, each image is divided into different color segments and the CCT is found for each
segment. These values form the image feature vector, in which, besides the CCT information,
the color formation of the image is inherently present. In the classification
phase, a KNN classifier is used [21]. The focus of this paper is on the ability of
color information alone for indoor-outdoor image classification; edge and texture features could
also be added to enhance the classification rate, but this is beyond the scope of this
paper.

The remainder of the paper is organized as follows: Section 2 reviews the theory of color image
formation in order to provide insight into why the concept of CCT can be effective for the aims
of this research. In Section 3, the proposed method for calculating the temperature vector of
an arbitrary image is explained. Section 4 presents the experimental results, and Section 5
concludes the paper.

2. REVIEW OF COLOR IMAGE THEORY
A color image is the result of the light illuminating the scene, the way objects reflect the light hitting
their surfaces, and the characteristics of the image-capturing device. In the following subsections,
the Dichromatic Reflection Model (DRM) is first described, and then various light sources are
briefly examined to show the differences among them. This explanation shows why the concept of
CCT can be utilized for indoor-outdoor image classification.

2.1 Dichromatic Reflection Model
A scene captured by a color camera can be modeled by spectral integration, often
described by the DRM. Light striking a surface of non-homogeneous material passes through the air
and contacts the surface of the material. Due to the difference in the media's indices of refraction,
some of the light is reflected from the surface of the material; this is called the surface
reflectance (Ls). The light that penetrates into the body of the material is absorbed by the
colorant, transmitted through the material, or re-emitted from the entering surface. This
component is called the body reflectance (Lb) [22]. The total light L reflected from an object
is the sum of surface and body reflectance, and can be formulated as:
L(λ, θ) = L_s(λ, θ) + L_b(λ, θ)    (1)

where λ is the wavelength of the incident light and θ is the photometric angle, which includes
the viewing angle e, the phase angle g, and the illumination direction angle i (Fig. 1). L_s and L_b
both depend on a relative spectral power distribution (SPD), defined by C_s and C_b,
and on geometric scaling factors m_s and m_b:

L_s(λ, θ) = m_s(θ) C_s(λ)    (2)

L_b(λ, θ) = m_b(θ) C_b(λ)    (3)

C_s and C_b are in turn products of the incident light spectrum E(λ) and the material's spectral
surface reflectance S(λ) and body reflectance B(λ):

C_s(λ) = E(λ) S(λ)    (4)

C_b(λ) = E(λ) B(λ)    (5)


FIGURE 1: The DRM model, showing surface reflectance and body reflectance from the surface of an object (incident light, surface reflected light, body reflected light, the surface normal, and the angles i, e, and g).

By inspecting equations (2) to (5), it can be observed that the light entering the camera
depends on the power spectrum of the light source; therefore, different apparent colors are
perceived for the same object under different illuminations. It follows that different groups of
illuminants give an object different perceived colors. The smaller the within-class variance
among a group of light sources, and the larger the between-class variance among different
groups of light sources, the better the distinguishing feature for indoor-outdoor image
classification when scene illumination is considered. In the next subsection, different light
sources are reviewed to show how different classes of light sources differ from each other.
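
To make the role of the illuminant concrete, the following sketch numerically integrates the DRM of equations (1)-(5) against camera sensitivities. All spectra here (the linear illuminants, the Gaussian body reflectance, and the sensor curves) are illustrative assumptions, not measured data:

```python
import numpy as np

# Wavelength grid (nm). All spectra below are illustrative stand-ins,
# not measured data.
lam = np.arange(400.0, 701.0, 5.0)

def gaussian(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

E_tungsten = lam / 700.0                 # low-CCT illuminant: power rises with wavelength
E_daylight = 1.2 - 0.5 * (lam / 700.0)   # high-CCT illuminant: power falls with wavelength
S = np.full_like(lam, 0.8)               # near-neutral surface reflectance S(lambda)
B = gaussian(550.0, 40.0)                # body reflectance B(lambda) of a greenish material

# Hypothetical camera sensitivities q_C(lambda) for C in {R, G, B}.
q = {"R": gaussian(600.0, 30.0), "G": gaussian(540.0, 30.0), "B": gaussian(460.0, 30.0)}

def camera_rgb(E, m_s=0.1, m_b=0.9):
    """Spectral integration of eqs. (1)-(5): L = m_s*E*S + m_b*E*B."""
    L = m_s * E * S + m_b * E * B
    dlam = lam[1] - lam[0]
    return {C: float(np.sum(L * qC) * dlam) for C, qC in q.items()}

# The same object yields different sensor responses under the two illuminants,
# which is exactly the effect the CCT feature exploits.
print(camera_rgb(E_tungsten))
print(camera_rgb(E_daylight))
```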

2.2 Light Sources
The color of a light source is defined by its spectral composition, which may be described by the
CCT. A spectrum with a low CCT has a maximum of the radiant power distribution at long
wavelengths, which gives a material a reddish appearance, e.g. sunlight during sunset. A light
source with a high CCT has a maximum of the radiant power distribution at short wavelengths
and gives a material a bluish appearance, e.g. special fluorescent lamps [23]. Fig. 2-a shows the
SPDs of three fluorescent lamps compared with a halogen bulb, and Fig. 2-b shows the SPD of
daylight at various hours of the day. The comparison shows how the spectra of the fluorescent
lamps follow the same pattern and how they differ from the halogen lamp.

Fig. 2-c shows three diverse light spectra, a blackbody radiator, a fluorescent lamp, and
daylight, all having a CCT of 6200 K. It can be seen that the daylight spectrum is close to the
blackbody spectrum, whereas the fluorescent lamp spectrum deviates significantly from it. The
172 light sources measured in [24] showed that illuminant chromaticity falls on a long thin band
in the chromaticity plane, very close to the Planckian locus of blackbody radiators.

Reviewing the SPDs of various light sources suggests that light sources within the same group
show the same pattern in their relative radiant power when compared to other groups. This
distinguishing feature makes it possible to classify a scene based on the scene illuminant,
which in this paper is used for indoor-outdoor image classification.

FIGURE 2: Different light sources and their SPDs (relative radiant power vs. wavelength, 300-800 nm). (a) SPDs of three fluorescent lamps (Macbeth, Philips, Sylvania) compared to the SPD of a Sylvania halogen bulb [23]. (b) Daylight SPDs at different hours of the day, normalized at 560 nm. (c) Spectra of different light sources, all with a CCT of 6200 K and normalized at λ = 560 nm [24].

3. PROPOSED METHOD
The review of color image formation showed that the color of an image captured by the camera is
the result of the light reflected from the surfaces of objects, as the sum of body and surface
reflectance (eq. 1). Surface reflectance has the same spectral properties as the incident light and is a
significant feature for detecting the light source illuminating the scene. Body reflectance, on
the other hand, most closely resembles the color of the material, taking into account the
spectrum of the incident light; in this paper it is used as the basis for classifying images as
indoor or outdoor. The steps of the proposed algorithm are shown in Fig. 3.
A. Nadian Ghomsheh & A. Talebpour
International Journal of Image Processing (IJIP), Volume (6) : Issue (3) : 2012 171
FIGURE 3: Block diagram of the proposed method for indoor-outdoor classification: the input image is color-segmented into channels 1..n; each channel is averaged and its CCT calculated; the CCTs form the test feature vector (TFV), which a KNN classifier compares against indoor training vectors FV_i (i = 1, 2, …, n) and outdoor training vectors FV_i (i = 1, 2, …, m) to decide indoor or outdoor.

In the first step, color segmentation is performed and the image is partitioned into N color
channels (each color segment is treated as an independent color channel). Next, the CCT is
calculated for each channel, and the resulting feature vector is used for indoor-outdoor image
classification. Finally, a KNN classifier partitions the images into the two classes of indoor
and outdoor. Each step of the algorithm is explained in detail below.

3.1 Color Segmentation
Many color spaces have been used for color segmentation. HSV is a nonlinear color space
and a cylindrical-coordinate representation of points in the RGB color model. The nonlinear
property of this color space makes it possible to segment colors in one color component
independently of the other components, which is an advantage over linear color spaces [25]. This
color space is used here to segment images into the desired number of segments, where each
segment is called a color channel. HSV is obtained from RGB by:

h = 0, if max = min
h = (60 x (g - b) / (max - min)) mod 360, if max = r
h = 60 x (b - r) / (max - min) + 120, if max = g
h = 60 x (r - g) / (max - min) + 240, if max = b    (6)

s = 0, if max = 0
s = 1 - min/max, otherwise    (7)

v = max    (8)

where max and min are the respective maximum and minimum values of r, g, and b, the RGB
color components of each pixel in the image.

The following steps show how each image is converted to its corresponding color channels
(Ch). For an input image with color components Red, Green, and Blue:
1) Convert the image from RGB color space to HSV.
2) Quantize the H component into N levels, n = 1, 2, …, N.
3) Find the mask that determines each color channel: mask_n(x, y) = 1 if H(x, y) = n, and
mask_n(x, y) = 0 otherwise.
4) Each Ch_n is then obtained by: Ch_n = [(mask_n · Red), (mask_n · Green), (mask_n · Blue)]

where · represents point-by-point multiplication of matrices. After dividing each image into
its color channels, the CCT of each channel has to be calculated.
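
A minimal sketch of the segmentation step above, implementing the hue of eq. (6) and steps 2-4 with numpy (8-bit RGB input assumed):

```python
import numpy as np

def hue(rgb):
    """H component of eq. (6), in degrees, vectorized over an H x W x 3 array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    d = np.where(mx == mn, 1.0, mx - mn)          # avoid division by zero
    h = np.zeros_like(mx)
    h = np.where(mx == b, 60.0 * (r - g) / d + 240.0, h)
    h = np.where(mx == g, 60.0 * (b - r) / d + 120.0, h)
    h = np.where(mx == r, (60.0 * (g - b) / d) % 360.0, h)
    return np.where(mx == mn, 0.0, h)

def color_channels(rgb, N):
    """Steps 2-4: quantize hue into N levels and mask the image per level."""
    h = hue(rgb.astype(float))
    n = np.minimum((h / 360.0 * N).astype(int), N - 1)   # channel index 0..N-1
    return [rgb * (n == k)[..., None] for k in range(N)]

# Example: N = 16 channels, the best-performing setting in Section 4.
img = np.random.randint(0, 256, (120, 160, 3))
channels = color_channels(img, N=16)
```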

3.2 Calculating CCT
A few algorithms have been introduced for estimating the temperature of a digital image for scene
classification or for finding the scene illumination [26, 27]. The different purposes of these
algorithms make them impractical for the aims of this paper.

The Planckian, or blackbody, locus can be calculated by colorimetrically integrating the Planck
function at many different temperatures, with each temperature specifying a unique pair of
chromaticity coordinates on the locus [28]. Conversely, given the Planckian locus, the CCT
can be calculated for any chromaticity coordinate. To find the CCT of a color channel, it
is necessary to find a chromaticity value representing that channel. The proposed algorithm
for calculating the CCT of each color channel has the following steps:

1) Using the V component of the HSV color space, discard dark pixels whose value is smaller
than 10% of the maximum range of V for each Ch.
2) Average the remaining pixels in RGB color space, discarding surface reflectance
through an iterative process.
3) Find the CCT of the color channel using the average chromaticity value.

From the DRM model, it is straightforward that dark pixels in the image hold no
information about the scene illuminant, since they are either not illuminated or the light
reflected from their surface does not enter the camera. In step 2, pixels whose values mainly
carry surface-reflectance information are discarded through an iterative process: instead of
simply discarding bright points, the algorithm discards pixels with respect to the luminance
of the other pixels. Using this average value, the CCT of each color channel is then
calculated.

3.2.1 Calculating the Average Chromaticity of Lb
The aim of averaging each channel is to discard pixels that have been exposed to direct light,
or where the reflection from the object's surface is in line with the camera lens. The value of such
pixels may vary with the surface reflectance and lighting conditions. The flow chart
implementing the proposed algorithm is shown in Fig. 4.

The flow chart iterates, for each color channel Ch_n, as follows: compute the mean μ_{i+1} and the variance σ_{i+1} of the remaining pixels of Ch_n; while the mean still changes by more than threshold T_1 from one iteration to the next, build a new mask that keeps only pixels whose value A = Ch_n(x, y) lies inside the band around the current mean defined by T_2 and T_3 (Mask_n(x, y) = 1 inside the band, 0 otherwise), update the channel as ch_n(x, y) = ch_n(x, y) · Mask_n(x, y), and increment i.

FIGURE 4: Flow chart of the iterative process for averaging a color channel

In this algorithm, Ch_n(x, y) is the pixel value at position (x, y) in Ch_n, Mask_n is the mask that
discards the unwanted information, · is the point-wise product, σ is the variance, and μ is the
mean value of the remaining pixels at iteration i. T_1, T_2, and T_3 are thresholds that have to
be determined. The convergence of this algorithm is guaranteed, since in the worst case only
one pixel of the image is left, and the μ calculated for that pixel stays constant. Thresholds
T_1, T_2, and T_3 could therefore be chosen arbitrarily; however, to find a good set of
thresholds, they are trained on 40 training images. The goal is to find the triple
T = (T_1, T_2, T_3) that yields, for each color channel, the best average point, defined as the
average value that occurs most frequently. Let Ch_{i,n,C} denote the set of all training color
channels:


Ch = { ch_{i,n,C} },  i = 1, …, I,  n = 1, …, N,  C = 1, 2, 3    (9)

where i (i = 1, 2, …, I) indexes the image samples, n (n = 1, 2, …, N) the color channels in
each image, and C (C = 1, 2, 3) the Red, Green, and Blue components. Also let T_1, T_2, and
T_3 be the vectors:
be vectors:
T_1 = [1.05, 1.1, …, 1.3]
T_2 = [1.1, 1.2, …, 1.5]
T_3 = [1, 2, …, 5]

which can be combined into the set of triples

T(Δ) = { (T_1^1, T_2^1, T_3^1), (T_1^2, T_2^1, T_3^1), …, (T_1^6, T_2^6, T_3^5) }    (10)

where Δ = 1, 2, …, 180. The set T(Δ) consists of all the threshold combinations that have to be
checked in order to find the best choice of thresholds, T_opt. For each sample Ch, the average
value of the channel in the respective color component is calculated for every T(Δ) and stored
in the average vector Avg:
Avg = [ Avg(T(1)), Avg(T(2)), …, Avg(T(180)) ]    (11)
Next, the histogram of Avg with a ten-bin resolution is calculated (Fig. 5). The most frequent bin
is taken as the acceptable average value of the Ch, denoted Avg_max. The set T_opt holds
the thresholds that yield Avg_max for a given Ch, and is defined as:

T_opt = arg max_{T(Δ)} Avg(T(Δ))    (12)
By calculating T_opt for all color channels, a set T_opt(t), t = 1, 2, …, 960, is obtained, which
can be written as:

T_opt(t) = { T_opt^red for t = 1:320,  T_opt^green for t = 321:640,  T_opt^blue for t = 641:960 }    (13)

FIGURE 5: Finding the different T that yield Avg_max (histogram of the average values, frequency vs. average value).

From this set it is possible to find the optimum thresholds for each color component; for
example, for the red channel:

T_opt,red = arg max_t T_opt^red(t)    (14)

T_opt,green and T_opt,blue are calculated in the same way as (14), and from them all the trained
thresholds are obtained as:
T_opt = [ T_opt,red^1  T_opt,green^1  T_opt,blue^1
          T_opt,red^2  T_opt,green^2  T_opt,blue^2
          T_opt,red^3  T_opt,green^3  T_opt,blue^3 ]    (15)
After calculating the average chromaticity value, it is possible to find the CCT of each Ch_n.
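
Only the skeleton of the iteration in Fig. 4 survives extraction, so the following sketch fixes one plausible reading of the thresholds: T_1 bounds the relative change of the mean between iterations, and T_2/T_3 bound the band of pixel values kept around the current mean. The band definition is an assumption, not the authors' exact condition:

```python
import numpy as np

def iterative_average(vals, T1=1.1, T2=1.2, T3=2.0, max_iter=100):
    """Iterative channel average discarding surface-reflectance pixels.

    A sketch of Fig. 4 under assumed threshold semantics: keep pixels inside
    a band around the current mean (T2 scales the lower bound, T3 weights the
    standard deviation for the upper bound) until the mean stabilizes (T1).
    """
    mu = vals.mean()
    for _ in range(max_iter):
        sigma = vals.std()
        keep = (vals > mu / T2) & (vals < mu + T3 * sigma)   # assumed mask (Fig. 4)
        if not keep.any():
            break            # worst case: the mean of the last pixel stays constant
        vals = vals[keep]
        new_mu = vals.mean()
        if abs(new_mu - mu) / mu <= (T1 - 1.0):              # convergence test via T1
            return new_mu
        mu = new_mu
    return mu

# Applied per RGB component of one color channel, after step 1 has removed
# pixels darker than 10% of the V range; the three averages give the
# chromaticity that is fed to the CCT computation of Section 3.2.2.
```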

3.2.2 Calculating CCT For The Average Chromaticity
To calculate the CCT of a chromaticity value (x,y), it is transformed into Luv color space [25],
where pixel is represented by coordinates (u,v). Fig 6 shows the Luv chromaticity plane with
the plankian locus showing various Color temperatures, the perpendicular lines on the locus
are iso-temperature lines.



FIGURE 6: Luv chromaticity plane with the Planckian locus.

By utilizing iso-temperature lines, the CCT can be found by interpolation from look-up tables
and charts. The best-known such method is Robertson's [29] (Fig. 7). In this method, the CCT
T_c of a chromaticity value can be found by calculating:

1/T_c = 1/T_i + (θ_1 / (θ_1 + θ_2)) (1/T_{i+1} - 1/T_i)    (16)

where θ_1 and θ_2 are the angles between the test point and the two adjacent isotherms. T_i
and T_{i+1} are the color temperatures of the look-up isotherms, and i is chosen such that
T_i < T_c < T_{i+1} (Fig. 7). If the isotherms are closely spaced, it can be assumed that
θ_1/θ_2 ≈ sin θ_1 / sin θ_2, leading to:

1/T_c = 1/T_i + (d_i / (d_i - d_{i+1})) (1/T_{i+1} - 1/T_i)    (17)

where d_i is the distance of the test point to the i-th isotherm, given by:

d_i = ((v_c - v_i) - m_i (u_c - u_i)) / (1 + m_i^2)^{1/2}    (18)
where (u_i, v_i) are the chromaticity coordinates of the i-th isotherm on the Planckian locus and
m_i is the isotherm's slope. Since the isotherm is perpendicular to the locus, m_i = -1/l_i, where
l_i is the slope of the locus at (u_i, v_i). Upon calculating the CCT of all color channels, the
feature vector fv containing the CCT of each color channel is obtained:

fv = [CCT_1, CCT_2, …, CCT_N]    (19)

where N is the number of color channels in each image. This vector can now be used for
classification. In the classification phase, a KNN classifier is used: K is a user-defined
constant, and each test feature vector is classified by assigning the label that is most frequent
among the K training samples nearest to that test point.
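
The classification step itself is standard K-nearest neighbours on the fv vectors of eq. (19); a minimal sketch follows (the Euclidean metric inside KNN is an assumption, since the paper does not name it; train_labels is a numpy array of class labels):

```python
import numpy as np

def knn_classify(fv, train_fvs, train_labels, K=20):
    """Label a test CCT feature vector by majority vote of its K nearest
    training vectors (Euclidean metric assumed)."""
    dists = np.linalg.norm(train_fvs - fv, axis=1)       # distance to each sample
    nearest = train_labels[np.argsort(dists)[:K]]        # labels of the K nearest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                     # 'indoor' or 'outdoor'
```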
FIGURE 7: Robertson's method for finding the CCT of a chromaticity value (isotherms T_i and T_{i+1}, distances d_i and d_{i+1}, angles θ_1 and θ_2).
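
As a concrete companion to equations (17) and (18), the sketch below walks an isotherm table looking for the sign change of d_i. The three table rows are made-up placeholders; Robertson's published table has 31 entries of temperature, locus coordinates (u_i, v_i), and isotherm slope m_i:

```python
import numpy as np

# Placeholder fragment of an isotherm look-up table: (T_i in kelvin, u_i, v_i, m_i).
# The numbers are illustrative only; a real implementation uses Robertson's
# full 31-row table.
ISOTHERMS = [
    (5000.0, 0.2110, 0.3230, -1.20),
    (6000.0, 0.2022, 0.3135, -1.40),
    (7000.0, 0.1956, 0.3060, -1.55),
]

def robertson_cct(uc, vc):
    """CCT of chromaticity (uc, vc) via eqs. (17)-(18)."""
    def signed_distance(row):
        T, u, v, m = row
        return ((vc - v) - m * (uc - u)) / np.sqrt(1.0 + m * m)   # eq. (18)
    for row_i, row_next in zip(ISOTHERMS, ISOTHERMS[1:]):
        d_i, d_next = signed_distance(row_i), signed_distance(row_next)
        if d_i * d_next <= 0.0:            # test point lies between the two isotherms
            T_i, T_next = row_i[0], row_next[0]
            inv_T = 1.0/T_i + d_i / (d_i - d_next) * (1.0/T_next - 1.0/T_i)  # eq. (17)
            return 1.0 / inv_T
    return None                            # chromaticity outside the table's range
```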
4. EXPERIMENTAL RESULTS
To assess the performance of the proposed method, tests were conducted on two sets of
indoor and outdoor images: a collection of 800 images gathered from the internet, named DB1,
and a second dataset of 800 frames extracted from several video clips captured with a Sony HD
handycam (DB2).

With DB2, camera effects are guaranteed to be the same for all images; hence the
classification ability of the proposed method is evaluated independently of camera calibration. A
total of 40 clips captured outdoors and 25 different clips captured indoors were used to
make DB2. The clips were captured at 29.97 frames per second, and the data were stored in RGB
color space with 8-bit resolution per color channel. Fig. 8 shows some sample indoor and
outdoor frames. Fig. 9 illustrates the difference between normal averaging and the averaging
process introduced in this paper, in which the effect of surface reflectance is eliminated.




FIGURE 8: Top and middle rows: four indoor and outdoor frames. Bottom row: four consecutive outdoor frames.


FIGURE 9: An example of the proposed algorithm for averaging color channels. (a) Green channels of
pictures taken outdoors (top row) and indoors (bottom row). (b) Normal averaging of the
pixels. (c) Averaging of the pixels by the proposed algorithm.

Fig. 9-a shows ten green color channels from five outdoor and five indoor scenes. For each
color channel, the corresponding average chromaticity value in the uv plane is shown; outdoor
images are marked with circles and indoor images with black squares. As can be seen, when
the effect of surface reflectance is removed, the indoor and outdoor images become clearly
separable. To calculate the average points, T_opt was trained. The histogram of the average
values obtained for channel Ch_{1,1,1} is shown, as an example, in Fig. 10. Based on the
histograms calculated for all the different channels, the matrix T_opt was obtained as:
T_opt = [ 1.1  1.2  1.1
          1.4  1.4  1.3
          2    1    1  ]

(rows: T_1, T_2, T_3; columns: red, green, blue)



FIGURE 10: Average values of the Ch_{1,1,1} component over the database

               N=4    N=8    N=12   N=16   N=18
K=15  indoor   73.5   90.5   88     85.5   83.2
      outdoor  51     69.2   80.5   89.5   82.4
K=20  indoor   78     86.5   92     87.5   88.1
      outdoor  63.5   74     80.5   89     82.2
K=30  indoor   70     85.5   90     87     87.1
      outdoor  52     72     80     87.5   77.2

TABLE 1: Results of classification on DB1

After finding fv for an image, the KNN classifier is used for classification. Table 1 shows the
results of image classification for different N (number of color channels) and K (number of
neighbors in the KNN classifier). From this table it can be seen that N = 16 yields the best
result when both indoor and outdoor detection rates are considered. Furthermore, when
K = 20 is chosen, 87.5% of indoor and 89% of outdoor images are correctly classified. Table 2
shows the classification results on DB2. In this experiment, the results again show that
choosing 16 channels achieves the higher classification rates. The overall comparison of DB1
and DB2 shows that the results differ only by a small percentage, indicating that the method
is robust for classifying images taken under unknown camera calibration and image
compression.









               N=4    N=8    N=12   N=16   N=18
K=15  indoor   77     92.5   91.5   87.5   90.5
      outdoor  54     72     82.5   90     78.5
K=20  indoor   81.5   88.5   90.5   90     85.2
      outdoor  63.5   86     84.5   89.5   86.5
K=30  indoor   74     87.5   88.5   88.5   84.5
      outdoor  57     74.5   82.5   89.5   83.7

TABLE 2: Results of classification on DB2
To examine the results in terms of classification accuracy and to find the K that yields the
best results, Fig. 11 shows the accuracy of each experiment. Based on this figure, the best
classification rates, 88.25% on DB1 and 89.75% on DB2, are obtained with N = 16 and K = 20.

In most cases, indoor images are detected with higher accuracy. This is because outdoor
images are exposed to more diverse lighting conditions: the spectrum of sunlight changes
during the day, and reflections from the sea, clouds, and the ground make the spectrum of the
reflected light more complex.

FIGURE 11: Accuracy (%) of all test cases as a function of K (15, 20, 30), for N = 4, 8, 12, 16, and 18 on DB1 and DB2.
To compare the classification accuracy of the proposed method, two color features, the Color
Oriented Histograms (COH) of [15] and the Ohta color space features of [3], were extracted and
tested on the images of DB1. Table 3 shows the results of this comparison. These features
were extracted as explained in the original papers.










                CCT    COH    Ohta
K=15  Indoor    85.5   94     85.5
      Outdoor   89.5   68.5   70.25
      Accuracy  87.5   81.25  78.75
K=20  Indoor    87.5   95     84
      Outdoor   89     69.5   70
      Accuracy  88.25  82.25  77
K=25  Indoor    87     96.5   86
      Outdoor   87.5   68     68.5
      Accuracy  87.25  82.25  77.25

TABLE 3: Comparison of the different methods
From the results in this table, it is clear that the proposed method outperforms the state-of-the-art
methods by at least 5%. The indoor classification rate using COH is in most cases higher
than that using CCT, but outdoor classification using COH shows quite low detection rates. The
Ohta color space proves not to be a preferable color space for indoor-outdoor image
classification. To further investigate the robustness of the proposed method, in another
experiment indoor-outdoor classification was tested while the JPEG Compression Ratio (CR)
of the images in DB1 was varied [30].

                CR=2   CR=3   CR=5   CR=10
CCT   Indoor    86     87.5   77     74.5
      Outdoor   80     76     73     67
      Accuracy  83     81.75  75     70.75
COH   Indoor    88     79.5   76     72.5
      Outdoor   62.5   60     53     54.5
      Accuracy  75.25  69.75  64.5   63.5
Ohta  Indoor    71     73.5   69.5   63
      Outdoor   69.3   67     61     54.5
      Accuracy  70.15  70.25  65.25  58.75

TABLE 4: Effect of JPEG compression on classification results

Table 4 summarizes the results of indoor-outdoor image classification for different compression
ratios on DB1. From this table it can be seen that the CCT of the image is little affected at
CR = 2 and 3, but as CR increases the results of all methods start to degrade. At CR = 10, the
classification accuracy based on the CCT feature is still higher than 70%, while for the two
other tested approaches it drops below 65%. This result shows the robustness of the proposed
method against changes in the compression applied to digital images.

5. CONCLUSIONS
In this paper, a new method based on image CCT was proposed for indoor-outdoor image
classification. Images were first segmented into different color channels, and the CCT of each
color channel was calculated using the proposed algorithm. These CCTs formed the feature
vector, which was fed to a KNN classifier to classify images as indoor or outdoor. Tests were
conducted on two datasets: images collected from the internet, and video frames extracted from
40 different video clips. The classification results showed that incorporating CCT information
yields high classification accuracies of 88.25% on DB1 and 89.75% on DB2. The results on
DB1 showed a 5% improvement over other state-of-the-art methods. In addition, the method
was tested against changes in the JPEG compression ratio applied to the images, where it
proved more robust than the other methods. The high classification rate and robustness of the
presented method make it well suited to indoor-outdoor image classification.

6. REFERENCES
[1] Angadi, S.A. and M.M. Kodabagi, A Texture Based Methodology for Text Region
Extraction from Low Resolution Natural Scene Images. International Journal of Image
Processing, 2009. 3(5): p. 229-245.
[2] Bianco, S., et al., Improving Color Constancy Using Indoor-Outdoor Image
Classification. IEEE Transactions on Image Processing, 2008. 17(12): p. 2381-2392.
[3] Szummer, M. and R.W. Picard, Indoor-outdoor image classification, in IEEE Workshop
on Content-Based Access of Image and Video Database. 1998: Bombay, India. p. 42-
51.
[4] Ohta, Y.I., T. Kanade, and T. Sakai, Color information for region segmentation.
Computer Graphics and Image Processing, 1980. 13(3): p. 222-241.
[5] Serrano, N., A. Savakis, and J. Luo, A computationally efficient approach to
indoor/outdoor scene classification, in International Conference on Pattern Recognition.
2002: QC, Canada, . p. 146-149.
[6] Miene, A., et al., Automatic shot boundary detection and classification of indoor and
outdoor scenes, in Information Technology, 11th Text Retrieval Conference. 2003,
Citeseer. p. 615-620.
[7] Qiu, G., X. Feng, and J. Fang, Compressing histogram representations for automatic
colour photo categorization. Pattern Recognition, 2004. 37(11): p. 2177-2193.
[8] Huang, J., et al., Image indexing using color correlograms. International Journal of
Computer Vision, 2001: p. 245-268.
[9] Qiu, G., Indexing chromatic and achromatic patterns for content-based colour image
retrieval. Pattern Recognition, 2002. 35(8): p. 1675-1686.
[10] Qiu, G. and K.M. Lam, Spectrally layered color indexing. Lecture Notes in Computer
Science, 2002. 2384: p. 100-107.
[11] Collier, J. and A. Ramirez-Serrano, Environment Classification for Indoor/Outdoor
Robotic Mapping, in Canadian Conference on Computer and Robot Vision. 2009, IEEE.
p. 276-283.
[12] Boutell, M. and J. Luo, Bayesian Fusion of Camera Metadata Cues in Semantic Scene
Classification, in Computer Vision and Pattern Recognition (CVPR). 2004: Washington,
D.C. p. 623-630.
[13] Tao, L., Y.H. Kim, and Y.T. Kim, An efficient neural network based indoor-outdoor
scene classification algorithm, in International Conference on Consumer Electronics.
2010: Las Vegas, NV p. 317-318.
[14] Vailaya, A., et al., Image classification for content-based indexing. IEEE Transactions
on Image Processing, 2001. 10(1): p. 117-130.
[15] Kim, W., J. Park, and C. Kim, A Novel Method for Efficient IndoorOutdoor Image
Classification. Signal Processing Systems, 2010. 61(3): p. 1-8.
[16] Daubechies, I., Ten Lectures on Wavelets. 1992, Philadelphia: SIAM Publications.
[17] Gupta, L., et al., Indoor versus outdoor scene classification using probabilistic neural
network Eurasip Journal on Advances in Signal Processing, 2007. 1: p. 123-133.
[18] Tolambiya, A., S. Venkatraman, and P.K. Kalra, Content-based image classification
with wavelet relevance vector machines. Soft Computing, 2010. 14(2): p. 129-136.
[19] Hu, G.H., J.J. Bu, and C. Chen, A novel Bayesian framework for indoor-outdoor image
classification, in International Conference on Machine Learning and Cybernetics 2003:
Xian. p. 3028-3032
[20] Payne, A. and S. Singh, Indoor vs. outdoor scene classification in digital photographs.
Pattern Recognition, 2005. 38(10): p. 1533-1545.
[21] Songbo, T., An effective refinement strategy for KNN text classifier. Expert Systems
with Application, 2006. 30(2): p. 290-298.
[22] Shafer, S.A., Using color to separate reflection components. Color Research and
Application, 1985. 10(4): p. 210.
[23] Ebner, M., Color Constancy. 2007, West Sussex: Wiley.
[24] Finlayson, G., G. Schaefer, and S. Surface, Single surface colour constancy, in 7th
Color Imaging Conference: Color Science, Systems, and Applications. 1999:
Scottsdale, USA. p. 106-113.
[25] Cheng, H.D., et al., Color image segmentation: advances and prospects. Pattern
Recognition, 2001. 34(12): p. 2259-2281.
[26] Wnukowicz, K. and W. Skarbek, Colour temperature estimation algorithm for
digital images - properties and convergence. Opto-Electronics Review, 2003.
11(3): p. 193-196.
[27] Lee, H., et al., One-dimensional conversion of color temperature in perceived
illumination. Consumer Electronics, IEEE Transactions on, 2001. 47(3): p. 340-346.
[28] Javier, H., J. Romero, and L.R. Lee, Calculating correlated color temperatures across
the entire gamut of daylight and skylight chromaticities. Applied Optics, 1999. 38(27): p.
5703-5709.
[29] Robertson, A.R., Computation of Correlated Color Temperature and
Distribution Temperature. Journal of the Optical Society of America, 1968. 58(11): p. 1528-1535.
[30] Yang, E. and L. Wang, Joint optimization of run-length coding, Huffman coding, and
quantization table with complete baseline JPEG decoder compatibility. Image
Processing, IEEE Transactions on, 2009. 18(1): p. 63-74.


Effect of Similarity Measures for CBIR Using Bins Approach


Dr. H. B Kekre hbkekre@yahoo.com
Professor, Department of Computer Engineering
NMIMS University,
Mumbai, Vileparle 056, India

Kavita Sonawane kavitavinaysonawane@gmail.com
Ph.D. Research Scholar
NMIMS University,
Mumbai, Vileparle 056, India

Abstract

This paper elaborates on the selection of a suitable similarity measure for content-based image
retrieval. It contains the analysis performed after applying the Minkowski distance from order
one to order five, and it also explains the effective use of the correlation distance in the form of
the cosine of the angle between two vectors. The feature vector database prepared for this
experimentation is based on the extraction of the first four moments into 27 bins, formed by
partitioning the equalized histograms of the R, G, and B planes of an image into three parts;
this generates feature vectors of dimension 27. The image database used in this work includes
2000 BMP images from 20 different classes. Three feature vector databases for each of the four
moments, namely Mean, Standard Deviation, Skewness, and Kurtosis, are prepared for the three
color intensities (R, G, and B) separately. The system then enters the second phase of
comparing the query image with the database images, using the set of similarity measures
mentioned above. The results obtained with all distance measures are evaluated using three
parameters: PRCP, LSRR, and Longest String. The results are then refined and narrowed by
combining the three separate results for the three colors R, G, and B using criterion 3 (an image
is selected if any one of the three colors retrieves it). Analysis of these results with respect to
the similarity measures demonstrates the effectiveness of the lower orders of the Minkowski
distance compared to the higher orders; the correlation distance also proved highly effective
for these CBIR results.

Keywords: Equalized Histogram, Minkowski Distance, Cosine Correlation Distance, Moments,
LSRR, Longest String, PRCP.


1. INTRODUCTION
Research in the field of CBIR systems is growing in various directions, covering the different
stages of CBIR: types of feature vectors, feature extraction techniques, representation of
feature vectors, application of similarity measures, performance evaluation parameters, etc.
[1][2][3][4][5][6]. Many approaches have been designed in the frequency domain, such as the
application of various transforms over the entire image, over blocks of the image, or over row
and column vectors of the image; Fourier descriptors; and various other transform-based ways
of extracting and representing image features [7][8][9][10][11][12]. Similarly, many methods
have been designed and implemented in the spatial domain, including image histograms, color
coherence vectors, vector quantization based techniques, and many other spatial feature
extraction methods for CBIR [13][14][15][16][17]. In our work, we have prepared the feature
vector databases using spatial properties of the image in the form of statistical parameters, i.e.
the moments Mean, Standard Deviation, Skewness, and Kurtosis. These moments are
extracted into 27 bins formed by partitioning the equalized histograms of the R, G, and B planes
of the image into 3 parts [18][19][20]. The core part of every CBIR system is calculating the
distance between the query image and the database images, which has a great impact on the
behavior of the system, as it decides the set of images in the final retrieval set. Various
CBIR system as it actually decides the set of images to be retrieved in final retrieval set. Various
similarity measures are available and can be used for CBIR [21][22][23][24]. The similarity
measure most commonly seen in the CBIR literature is the Euclidean distance. Here we have
used the Minkowski distance from order one to order five, and found that the performance of
the system keeps improving as the order decreases (from 5 to 1). One more similarity measure
used in this work is the cosine correlation distance [25][26][27][28], which performed best after
the first-order Minkowski distance. The performance of CBIR methods, in both the frequency
and spatial domains, can be evaluated using parameters such as precision, recall, LSRR
(Length of String to Retrieve all Relevant), and various others [29][30][31][32][33]. In this paper
we use three parameters, PRCP, LSRR, and Longest String, to evaluate the performance of
our system for all the similarity measures used and for all types of feature vectors for the three
colors R, G, and B. We found scope to combine and refine the results obtained separately for
the three color-based feature vector databases. This refinement is achieved using a criterion,
designed to combine the results of the three colors, that selects an image for the final retrieval
set even if it is retrieved in the result set of only one of the three colors [11][12].

2. ALGORITHMIC VIEW WITH IMPLEMENTATION DETAILS

2.1 Bins Formation by Partitioning the Equalized Histograms of the R, G, B Planes
i. First, we separate the image into its R, G, and B planes and calculate the equalized
histogram of each plane, as shown below.
ii. These histograms are then partitioned into three parts with ids 0, 1, and 2. This
partitioning generates two thresholds for the intensities distributed along the x-axis of
the histogram of each plane. We call these thresholds, or partition boundaries,
GL1 and GL2, as shown in Figure 2.


FIGURE 1: Query Image: Kingfisher



FIGURE 2: Equalized histograms of the R, G, and B planes with the three partitions 0, 1, and 2.
iii. Determination of the bin address: to determine the destination bin of the pixel being
processed, we check into which partition (0, 1, or 2) of the respective equalized
histogram its R, G, and B intensities fall; the resulting 3-digit flag assigned to the pixel
is its destination bin address. In this way we obtain 27 bin addresses, 000 to 222, by
dividing each histogram into 3 parts. A sketch of this computation is given below.
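
A sketch of the bin-address computation of steps i-iii. How GL1 and GL2 are chosen is assumed here (tertiles of each equalized histogram, so that each partition holds roughly a third of the pixels); the base-3 encoding of the three partition ids into an address 0..26 mirrors the 000-222 flags above:

```python
import numpy as np

def tertile_thresholds(plane):
    """Assumed choice of GL1 < GL2: tertiles of the (equalized) plane."""
    gl1, gl2 = np.percentile(plane, [100.0 / 3.0, 200.0 / 3.0])
    return gl1, gl2

def partition_ids(plane, gl1, gl2):
    """Partition id 0, 1, or 2 of every pixel of one color plane."""
    return (plane > gl1).astype(int) + (plane > gl2).astype(int)

def bin_addresses(r, g, b):
    """3-digit flag per pixel, encoded base-3 into a bin address 0..26."""
    ids = [partition_ids(p, *tertile_thresholds(p)) for p in (r, g, b)]
    return 9 * ids[0] + 3 * ids[1] + ids[2]

# Example on a random image; histogram equalization of each plane is assumed
# to have been applied beforehand.
img = np.random.randint(0, 256, (100, 100, 3))
addr = bin_addresses(img[..., 0], img[..., 1], img[..., 2])
```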
2.2 Statistical Information Stored in the 27 Bins: Mean, Standard Deviation, Skewness, and
Kurtosis
Basically, the bins obtained hold the count of the pixels falling in a particular range. The bins are
further used to hold statistical information in the form of the first four moments for each color
separately. These moments are calculated over the pixel intensities falling into each bin using
Equations 1 to 4, respectively.

Mean:
R̄ = (1/N) Σ_{i=1}^{N} R_i    (1)

Standard deviation:
SD_R = [ (1/N) Σ_{i=1}^{N} (R_i - R̄)² ]^{1/2}    (2)

Skewness:
Skew_R = [ (1/N) Σ_{i=1}^{N} (R_i - R̄)³ ]^{1/3}    (3)

Kurtosis:
Kurt_R = [ (1/N) Σ_{i=1}^{N} (R_i - R̄)⁴ ]^{1/4}    (4)

where R̄ is Bin_Mean_R in eqs. 1 to 4.
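
Continuing the earlier sketch, the following fills the 27 bins of one color plane with the four moments; eqs. (3) and (4) are implemented as roots of the absolute third and fourth central moments, matching the "absolute values of central moments" stored in the bins as described below:

```python
import numpy as np

def bin_moments(plane, addr, n_bins=27):
    """Per-bin Mean, SD, Skewness, Kurtosis of one color plane (eqs. 1-4)."""
    fv = np.zeros((4, n_bins))
    for n in range(n_bins):
        vals = plane[addr == n].astype(float)
        if vals.size == 0:
            continue                                  # empty bin (cf. Figure 3)
        mu = vals.mean()                              # eq. (1)
        c = vals - mu                                 # central deviations
        fv[0, n] = mu
        fv[1, n] = np.sqrt(np.mean(c ** 2))           # eq. (2)
        fv[2, n] = np.abs(np.mean(c ** 3)) ** (1.0 / 3.0)   # eq. (3), absolute value
        fv[3, n] = np.mean(c ** 4) ** (1.0 / 4.0)     # eq. (4)
    return fv   # four 27-component feature vectors, one per moment
```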

The bins hold the absolute values of these central moments, and in this way we obtain
4 moments x 3 colors = 12 feature vector databases, where each feature vector consists of
27 components. Figure 3 shows the 27 bins of the R, G, and B colors for the Mean parameter,
for the kingfisher image of Figure 1.
FIGURE 3: The 27 bins of the R, G, and B colors for the MEAN parameter (mean of the pixels counted in each bin, plotted over bins 1-27 for Mean R, Mean G, and Mean B).

In Figure 3 we can observe that bins 3, 7, 8, 9, 12, 18, 20, 21, and 24 are empty, because the
count of pixels falling into those bins is zero for this image.

2.3 Application of Similarity Measures
Once the feature vector databases are ready, the desired query can be fired to retrieve similar
images from the database. To facilitate this, the retrieval system has to perform the important
task of applying a similarity measure, so that the distance between the query image and each
database image is calculated and the images with the smallest distances are retrieved in the
final set. In this work we use six similarity measures, named L1 to L6: the Minkowski distance
from order 1 to order 5 (L1 to L5), and the correlation distance (L6). We have analyzed their
performance using different evaluation parameters. These similarity measures are given in
equations 5 and 6.
Minkowski distance:

Dist_DQ = [ Σ_{I=1}^{n} |D_I - Q_I|^r ]^{1/r}    (5)

where r is a parameter, n is the dimension, and D_I and Q_I are the I-th components of the
database and query image feature vectors D and Q, respectively.

Cosine correlation distance:

cos θ = Σ_n D(n) Q(n) / [ (Σ_n D(n)²)^{1/2} (Σ_n Q(n)²)^{1/2} ]    (6)

where D(n) and Q(n) are the database and query feature vectors, respectively.

Minkowski distance: the parameter r can be taken from 1 to infinity. We have used this
distance with r in the range from 1 to 5. When r = 2, it is the special case called the Euclidean
distance (L2).
Cosine correlation distance: this can be expressed in terms of cos θ.

FIGURE 4: Comparison of the Euclidean and cosine correlation distances. Observation: ed2 > ed1, but ed1' > ed2'.
Correlation measures in general are invariant to scale transformations and give a similarity
measure for feature vectors whose values are linearly related. In Figure 4, the cosine
correlation distance is compared with the Euclidean distance. We can clearly see that the
Euclidean distances satisfy ed2 > ed1 between the query image QI and the two database image
features DI1 and DI2, respectively. At the same time, θ1 > θ2, i.e. the distance L6 for DI1 and
DI2, respectively.

If we scale the query feature vector by a simple constant factor k, it becomes k·QI; if we now
calculate the Euclidean distances of DI1 and DI2 from the query k·QI, we get ed1' and ed2'
with ed1' > ed2', which is exactly opposite to what we had for QI. The cosine correlation
distance, however, does not change even though the query feature vector has been scaled to
k·QI. This clearly shows that the Euclidean distance varies with the scale of the feature vector,
while the cosine correlation distance is invariant to this scale transformation. This property of
the correlation distance prompted us to use it for our CBIR. It has rarely been used in CBIR
systems, and here it gave very good results compared to the Euclidean distance and the higher
orders of the Minkowski distance, as illustrated in the sketch below.
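
The six measures reduce to a few lines of code. The sketch below implements eq. (5) for r = 1..5 (L1-L5) and eq. (6); turning the cosine into a distance as 1 - cos θ is an assumption of this sketch (any monotone decreasing function of cos θ produces the same ranking):

```python
import numpy as np

def minkowski(d, q, r):
    """Eq. (5): Minkowski distance of order r; r = 2 is the Euclidean case L2."""
    return float(np.sum(np.abs(d - q) ** r) ** (1.0 / r))

def cosine_correlation_distance(d, q):
    """Eq. (6) as a distance: 1 - cos(theta) between feature vectors (assumed)."""
    c = np.dot(d, q) / (np.linalg.norm(d) * np.linalg.norm(q))
    return 1.0 - c

# Scale invariance noted above: scaling the query by k changes the Minkowski
# distances but leaves the cosine measure unchanged.
d, q = np.random.rand(27), np.random.rand(27)
assert not np.isclose(minkowski(d, q, 2), minkowski(d, 5 * q, 2))
assert np.isclose(cosine_correlation_distance(d, q),
                  cosine_correlation_distance(d, 5 * q))
```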

2.4 Performance Evaluation
The results obtained here are interpreted in terms of the PRCP: the Precision Recall Crossover
Point. This parameter is designed using the conventional precision and recall parameters
defined in equations 7 and 8.
Once the distance between the query image and every database image has been calculated,
the distances are sorted in ascending order. Following the PRCP logic, we select the first 100
images from the sorted distances and count how many of them are relevant to the query; this
count is the PRCP value for that query, because the database contains exactly 100 images of
each class.

Precision: the fraction of the retrieved images that are relevant:

Precision = (number of relevant images retrieved) / (total number of images retrieved)    (7)

Recall: the fraction of the relevant images that are retrieved:

Recall = (number of relevant images retrieved) / (total number of relevant images)    (8)

The performance of the system is further evaluated using two more parameters of natural
interest to all CBIR users: LSRR (Length of String to Retrieve all Relevant) and Longest String
(the longest continuous string of relevant images). All three parameters are sketched in code
below.
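
The three parameters are easy to state precisely in code; `relevant` below is a boolean array over the whole database, ordered by ascending distance to the query, and the class size of 100 matches this database:

```python
import numpy as np

def prcp(relevant, class_size=100):
    """Relevant hits among the first `class_size` retrievals; with 100 images
    per class this is the precision-recall crossover point."""
    return int(np.sum(relevant[:class_size]))

def lsrr(relevant):
    """Length of String to Retrieve all Relevant: position of the last
    relevant image, as a fraction of the database size."""
    last = int(np.flatnonzero(relevant)[-1]) + 1
    return last / len(relevant)

def longest_string(relevant):
    """Longest continuous run of relevant images in the ranked list."""
    best = run = 0
    for hit in relevant:
        run = run + 1 if hit else 0
        best = max(best, run)
    return best

# Example: a ranked list where 3 of the first 5 retrievals are relevant.
ranked = np.array([True, True, False, True, False, True])
print(prcp(ranked, class_size=5), lsrr(ranked), longest_string(ranked))
```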

3. EXPERIMENTAL RESULTS AND DISCUSSIONS
In this work, the analysis checks the performance of the similarity measures for CBIR using the
bins approach. The results presented therefore highlight the comparative behavior of the
different similarity measures, named L1 to L6 as discussed above.

3.1 Image Database and Query Images
The database used for the experiments contains 2000 BMP images: 100 images from each of
20 different classes. Sample images from the database are shown in Figure 5. We randomly
selected 10 images from each class to be given as queries to the system. In total, 200 queries
are executed for each feature vector database and for each similarity measure. One sample
query image, the kingfisher image, was already shown in Figure 1; its bin formation (feature
extraction) process was explained in Sections 2.1 and 2.2.

3.2 Discussion With Respect to PRCP
As discussed above, the feature vector databases, containing 27-bin feature vectors of the four
absolute moments (Mean, Standard Deviation, Skewness, and Kurtosis) for the Red, Green,
and Blue colors separately, are tested with 200 query images for the six similarity measures;
the results are given in the tables below. Tables 1 to 12 show the results for the PRCP
(Precision Recall Crossover Point) parameter for the 10 queries of each class, named in the
leftmost column of each table. Each table entry represents the total retrieval (out of 1000
outputs) of relevant images, in terms of PRCP, for the 10 queries of that class. The last row of
each table gives the total PRCP retrieval, out of 20,000, for the 200 queries.

FIGURE 5: 20 sample images, one per class, from the database of 2000 BMP images with 20 classes
TABLE 2: PRCP FOR GREEN MEAN FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 258 214 182 173 165 239
Sunset 714 664 633 614 610 674
Mountain 147 127 121 124 126 124
Building 189 158 149 134 133 158
Bus 421 308 247 236 223 307
Dinosaur 223 189 168 163 160 200
Elephant 176 127 107 102 103 127
Barbie 537 503 486 478 468 463
Mickey 243 225 212 205 203 237
Horses 331 303 290 279 272 310
Kingfisher 350 314 286 282 286 321
Dove 199 188 179 170 166 190
Crow 147 136 120 117 115 110
Rainbowrose 652 613 590 563 555 647
Pyramids 172 138 114 110 106 132
Plates 240 215 198 169 156 210
Car 242 247 250 252 263 272
Trees 263 221 205 185 167 227
Ship 302 289 285 270 266 294
Waterfall 226 182 175 162 157 191
Total 6032 5361 4997 4788 4700 5433

When we observe the individual table entries, i.e. the totals over 10 queries, for many classes with respect to the distances L1 and L6 we find very good PRCP values, with averages over 10 queries in the range 0.5 to 0.8, which is a considerable achievement: precision and recall both reach levels that are difficult to attain in CBIR over large databases. We then set out to improve these results further, not limited to the average over 10 queries but towards the average over all 200 queries. To obtain this refinement we combined the results obtained for the three colors separately into a single result set by applying the criterion explained below.
Criterion: An image is retrieved in the final set if it is retrieved in the result for any one of the three color planes R, G and B.
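A minimal sketch of this union criterion (our illustration, using hypothetical per-color PRCP result sets):

def combine_rgb(retrieved_r, retrieved_g, retrieved_b, relevant):
    # An image enters the final set if it appears in the PRCP result of
    # any one of the three color planes, i.e. the union of the three sets.
    final_set = set(retrieved_r) | set(retrieved_g) | set(retrieved_b)
    return len(final_set & set(relevant))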

By applying this criterion to all the results obtained for the three colors and four moments given in Tables 1 to 12, we improved the system performance to a very good extent for the average over 200 queries, for the moments Mean and Standard Deviation, with the similarity measures L1, L6, L2 and L3 in that order. The results are shown in Chart 1, where we can see that the best average PRCP value over 200 queries we could obtain is 0.5.

CHART 1: Results using the criterion to combine the R, G, B color results for L1 to L6
TABLE 1: PRCP FOR RED MEAN FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 388 321 264 225 198 357
Sunset 764 707 603 522 461 727
Mountain 144 116 117 112 110 114
Building 177 165 161 163 161 162
Bus 512 474 439 414 407 472
Dinosaur 251 202 171 152 145 192
Elephant 157 128 124 119 120 133
Barbie 517 483 474 438 432 504
Mickey 305 308 301 302 300 314
Horses 285 230 194 177 173 214
Kingfisher 300 258 235 223 215 268
Dove 207 194 196 185 178 187
Crow 177 169 183 183 185 106
Rainbowrose 643 618 596 585 575 638
Pyramids 186 141 114 121 121 135
Plates 238 199 176 163 142 197
Car 134 111 104 93 91 105
Trees 283 239 231 213 206 242
Ship 327 276 256 252 244 249
Waterfall 281 214 195 190 191 205
Total 6276 5553 5134 4832 4655 5521


TABLE 4: PRCP FOR RED STANDARD DEVIATION FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 312 296 279 257 243 298
Sunset 719 681 648 619 600 726
Mountain 206 208 190 172 167 199
Building 278 262 249 235 228 257
Bus 508 481 455 430 417 484
Dinosaur 409 430 416 416 406 366
Elephant 286 311 320 336 342 304
Barbie 485 433 386 337 320 426
Mickey 254 244 241 230 223 242
Horses 513 509 479 454 437 518
Kingfisher 417 429 420 404 388 441
Dove 330 309 275 251 237 306
Crow 201 194 188 184 184 127
Rainbowrose 501 507 498 469 448 588
Pyramids 285 281 266 258 248 222
Plates 323 300 280 267 255 329
Car 211 204 180 176 173 244
Trees 310 300 294 290 285 268
Ship 389 354 332 312 306 394
Waterfall 422 430 434 425 425 442
Total 7359 7163 6830 6522 6332 7181

TABLE 3: PRCP FOR BLUE MEAN FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 313 340 315 286 268 374
Sunset 542 479 474 463 455 445
Mountain 173 156 147 141 142 160
Building 170 136 114 109 100 139
Bus 433 355 346 334 327 357
Dinosaur 233 188 167 144 152 180
Elephant 193 176 162 145 142 183
Barbie 476 395 411 380 375 416
Mickey 217 189 173 162 161 196
Horses 297 230 192 185 183 236
Kingfisher 337 332 340 344 351 340
Dove 201 178 140 117 114 195
Crow 127 96 84 72 67 96
Rainbowrose 642 635 627 621 611 662
Pyramids 165 113 93 90 88 106
Plates 234 204 180 169 161 189
Car 162 146 138 131 132 131
Trees 251 195 165 154 153 200
Ship 307 245 203 191 180 246
Waterfall 252 176 147 135 138 187
Total 5725 4964 4618 4373 4300 5038

4. PERFORMANCE EVALUATION USING LONGEST STRING AND LSRR PARAMETERS
Along with the conventional precision and recall parameters used for CBIR, we have evaluated the system performance using two additional parameters, namely Longest String and LSRR. As discussed in Section 2.4, CBIR users are always curious about the maximum continuous string of relevant images in the retrieval set, which is captured by the Longest String parameter. LSRR expresses the performance of the system as the length of the sorted distance list of all database images that must be traversed to collect all relevant images of the query class.

4.1 Longest String
This parameter is plotted in several charts. We have 12 different feature vector databases, prepared for the 4 moments of each of the three colors separately, and the Longest String was calculated for all 12 sets of results. The plots, however, show the maximum Longest String obtained for each class for the distances L1 to L6 irrespective of the three colors, which gives 4 sets of results, plotted in Charts 2, 3, 4 and 5 for the four moments respectively. A few classes such as Sunset, Rainbow rose, Barbie, Horses and Pyramids give very good results, with maximum Longest Strings of more than 60 relevant images. In the bars of all the graphs we can notice that L1 and L6 reach the best levels of similarity retrieval. One possible computation is sketched below.
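A minimal sketch (ours; the boolean relevance sequence is an assumed input format):

def longest_string(ranked_relevance):
    # ranked_relevance: booleans, True where the i-th image of the ranked
    # result list (distances sorted ascending) belongs to the query class.
    best = run = 0
    for is_relevant in ranked_relevance:
        run = run + 1 if is_relevant else 0
        best = max(best, run)
    return best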
TABLE 5: PRCP FOR GREEN STANDARD DEVIATION FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 320 352 332 319 296 376
Sunset 802 794 771 746 729 789
Mountain 243 249 236 225 223 238
Building 310 312 306 303 297 283
Bus 463 430 392 367 346 465
Dinosaur 359 358 347 338 328 304
Elephant 321 335 333 334 334 328
Barbie 461 416 401 395 385 430
Mickey 239 238 217 210 210 241
Horses 523 470 412 374 352 473
Kingfisher 368 389 363 353 348 383
Dove 355 307 270 243 238 315
Crow 238 211 192 192 187 120
Rainbowrose 647 652 624 590 577 708
Pyramids 351 350 334 323 319 174
Plates 345 345 330 317 311 370
Car 323 355 354 343 339 389
Trees 295 274 269 265 258 270
Ship 378 342 316 306 304 377
Waterfall 421 423 410 403 407 412
Total 7762 7602 7209 6946 6788 7445

TABLE 6: PRCP FOR BLUE STANDARD DEVIATION FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 315 324 319 318 315 325
Sunset 696 593 529 483 462 630
Mountain 210 204 217 212 212 209
Building 224 214 194 191 183 196
Bus 480 484 474 439 422 531
Dinosaur 318 298 278 273 271 261
Elephant 228 252 257 256 259 245
Barbie 454 363 319 284 264 381
Mickey 222 213 199 196 190 229
Horses 453 446 425 404 403 445
Kingfisher 322 336 333 321 318 333
Dove 352 334 300 280 262 338
Crow 208 165 160 158 152 109
Rainbowrose 615 619 599 587 558 687
Pyramids 242 238 232 228 226 196
Plates 263 261 255 251 246 290
Car 227 218 211 195 187 250
Trees 253 228 215 200 191 227
Ship 414 402 387 375 367 435
Waterfall 273 258 247 246 239 260
Total 6769 6450 6150 5897 5727 6577

4.2 LSRR
Similar to the Longest String, the parameter LSRR is used to evaluate the performance of the 12 feature vector databases. As said earlier, it gives the length we need to traverse in the list of distances sorted in ascending order to collect all database images that are relevant to the query image, i.e. that belong to the query class. By this logic the value of LSRR should be as low as possible, so that all relevant images can be recalled with a minimum traversal length and in less time. The results for this parameter are the minimum LSRR values, expressed as a percentage of the database size, calculated for all 12 feature vector databases over the 200 query images with respect to all six similarity measures. Chart 6 shows the best, i.e. minimum, LSRR for each image class for the distance measures L1 to L6, irrespective of the three colors and four moments. A minimal sketch of the LSRR computation follows.
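(Our own sketch; same assumed boolean relevance input as for the Longest String.)

def lsrr_percent(ranked_relevance):
    # LSRR: position of the last relevant image in the ranked list
    # (distances sorted ascending), as a percentage of the database size.
    last = max(i for i, rel in enumerate(ranked_relevance) if rel)
    return 100.0 * (last + 1) / len(ranked_relevance)

For example, an LSRR of 14% on this 2000-image database means that all 100 relevant images of the query class are found within the first 0.14 x 2000 = 280 sorted distances.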
CHART 2: Maximum Longest String obtained for the Mean parameter (27 bins)
TABLE 7: PRCP FOR RED SKEWNESS FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 268 221 197 193 183 232
Sunset 646 578 524 495 482 635
Mountain 209 200 185 177 169 200
Building 223 211 199 182 176 214
Bus 422 411 391 380 369 429
Dinosaur 347 334 317 304 293 283
Elephant 246 271 280 281 277 237
Barbie 482 406 350 312 290 393
Mickey 245 249 241 237 226 229
Horses 399 389 350 313 303 391
Kingfisher 365 376 348 321 304 390
Dove 335 350 354 349 343 384
Crow 167 142 139 139 141 123
Rainbowrose 359 394 391 382 374 489
Pyramids 225 190 174 168 162 198
Plates 267 232 196 178 163 247
Car 155 161 157 152 148 225
Trees 296 279 260 248 247 225
Ship 342 297 268 256 249 311
Waterfall 362 352 332 319 309 263
Total 6360 6043 5653 5386 5208 6098

TABLE 8: PRCP FOR GREEN SKEWNESS FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 375 361 319 291 275 379
Sunset 674 617 563 530 506 679
Mountain 216 203 191 184 186 205
Building 252 224 214 207 212 203
Bus 441 418 378 349 342 451
Dinosaur 293 257 230 220 210 200
Elephant 222 227 219 210 206 204
Barbie 459 450 451 450 446 436
Mickey 234 237 226 213 208 233
Horses 383 335 294 271 248 380
Kingfisher 327 356 354 354 343 355
Dove 349 336 316 305 300 370
Crow 181 161 146 143 137 134
Rainbowrose 508 540 519 500 481 577
Pyramids 282 298 284 273 268 153
Plates 237 236 228 218 211 246
Car 276 363 374 377 367 404
Trees 216 180 173 174 170 192
Ship 316 281 267 257 249 292
Waterfall 321 292 267 250 248 279
Total 6562 6372 6013 5776 5613 6372

CHART 3: Maximum Longest String obtained for the Standard Deviation parameter (27 bins)
CHART 4: Maximum Longest String obtained for the Skewness parameter (27 bins)
TABLE 10: PRCP FOR RED KURTOSIS FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 337 302 273 254 243 326
Sunset 340 695 655 624 610 734
Mountain 727 210 196 193 188 202
Building 217 240 226 220 218 257
Bus 274 493 485 468 459 500
Dinosaur 524 354 342 325 318 283
Elephant 349 343 355 361 367 333
Barbie 311 447 400 366 342 438
Mickey 488 255 240 236 227 250
Horses 260 486 461 432 416 511
Kingfisher 496 444 430 410 393 440
Dove 439 362 354 351 345 402
Crow 355 164 161 155 147 124
Rainbowrose 167 534 522 504 488 599
Pyramids 516 269 256 250 240 222
Plates 280 300 276 267 259 320
Car 315 190 179 176 174 242
Trees 206 287 282 269 269 260
Ship 309 363 334 322 316 389
Waterfall 405 434 436 430 422 420
Total 7315 7172 6863 6613 6441 7252
TABLE 9: PRCP FOR BLUE SKEWNESS FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 335 331 322 314 302 342
Sunset 666 607 540 513 481 576
Mountain 205 208 201 195 191 209
Building 179 174 164 153 145 168
Bus 416 433 416 386 370 497
Dinosaur 290 247 231 226 222 244
Elephant 168 169 161 162 162 173
Barbie 458 419 387 372 341 413
Mickey 219 215 211 208 204 218
Horses 434 438 417 404 394 461
Kingfisher 247 262 258 255 250 253
Dove 385 346 333 317 314 399
Crow 177 162 147 153 149 118
Rainbowrose 490 514 519 517 497 575
Pyramids 204 195 184 174 169 194
Plates 249 241 230 218 210 262
Car 169 192 187 185 181 225
Trees 252 218 199 188 184 200
Ship 331 313 284 272 264 317
Waterfall 236 219 208 208 200 204
Total 6110 5903 5599 5420 5230 6048
CHART 5: Maximum Longest String obtained for the Kurtosis parameter (27 bins)
TABLE 12: PRCP FOR BLUE KURTOSIS FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 346 347 345 338 330 352
Sunset 760 688 604 566 541 674
Mountain 200 205 205 214 214 209
Building 214 208 196 189 177 205
Bus 487 493 459 436 420 530
Dinosaur 303 276 270 257 254 252
Elephant 211 224 231 230 230 234
Barbie 460 414 374 354 346 407
Mickey 231 222 218 213 212 231
Horses 469 454 449 434 422 459
Kingfisher 327 354 348 334 339 337
Dove 400 367 341 325 323 409
Crow 160 145 132 128 128 105
Rainbowrose 630 635 621 608 584 691
Pyramids 240 244 250 251 241 218
Plates 267 262 259 255 253 284
Car 214 211 197 187 183 235
Trees 246 216 196 185 179 204
Ship 407 393 380 370 360 408
Waterfall 276 249 243 244 245 253
Total 6848 6607 6318 6118 5981 6697

TABLE 11: PRCP FOR GREEN KURTOSIS FOR L1 TO L6
CLASS L1 L2 L3 L4 L5 L6
Flower 393 412 386 369 350 423
Sunset 801 788 761 735 717 803
Mountain 263 256 239 240 232 240
Building 316 295 289 281 267 274
Bus 533 478 428 411 384 503
Dinosaur 308 297 287 275 271 245
Elephant 321 323 329 328 329 313
Barbie 452 446 440 440 444 440
Mickey 254 246 241 220 210 238
Horses 512 441 377 343 326 454
Kingfisher 388 415 407 398 390 417
Dove 374 350 323 319 309 380
Crow 197 185 177 162 155 125
Rainbowrose 677 679 655 631 606 713
Pyramids 335 340 317 309 303 168
Plates 338 335 315 313 313 353
Car 327 363 357 358 356 398
Trees 279 249 245 240 231 251
Ship 395 344 320 306 302 368
Waterfall 413 406 390 385 382 397
Total 7876 7648 7283 7063 6877 7503


CHART 6: Minimum LSRR for L1 to L6, irrespective of color and moment

In the above chart we can observe that many classes perform well, in the sense that a small traversal already gives 100% recall for them. The classes giving the best results are Sunset, Bus, Horses, Kingfisher and Pyramids; among these the best is the Sunset class, where traversing only 14% of the 2000 images gives 100% recall for a sunset query with the L6 distance measure, and 20% with L1.

We show the first few images from the PRCP result obtained for the Kingfisher query image in Figure 6. This result was obtained for the Green Kurtosis feature vector with the L1 distance measure; in total 65 relevant images were retrieved as the PRCP value (from the first 100) for this query.

CONCLUSION
The bins approach explained in this paper is new and simple in terms of the computational complexity of feature extraction. It is based on histogram partitioning of the three color planes: as each histogram is partitioned into 3 parts, 27 bins can be formed. These bins are used to extract image features in the form of four statistical moments, namely Mean, Standard Deviation, Skewness and Kurtosis.
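As a rough illustration of this pipeline, the sketch below is our own reading of it: the paper partitions the equalized histogram of each plane, whereas fixed thresholds are used here purely for simplicity, and the cube/fourth-root form of the skewness and kurtosis is one plausible interpretation of the absolute moments:

import numpy as np

def bins_feature_vectors(img):
    # img: H x W x 3 RGB array (uint8). Each color plane is split into 3
    # intensity ranges, so every pixel falls into one of 3 x 3 x 3 = 27 bins.
    parts = np.digitize(img, bins=[85, 170])            # 0, 1 or 2 per plane
    bin_idx = parts[..., 0] * 9 + parts[..., 1] * 3 + parts[..., 2]

    def moments(vals):
        # Mean, standard deviation, and cube/fourth roots of the third and
        # fourth central moments as skewness and kurtosis.
        if vals.size == 0:
            return (0.0, 0.0, 0.0, 0.0)
        m = vals.mean()
        d = vals - m
        return (m, np.sqrt(np.mean(d ** 2)),
                np.cbrt(np.mean(d ** 3)), np.mean(d ** 4) ** 0.25)

    feats = {}
    for c, color in enumerate(("red", "green", "blue")):
        plane = img[..., c].astype(np.float64)
        per_bin = [moments(plane[bin_idx == b]) for b in range(27)]
        for k, name in enumerate(("mean", "std", "skewness", "kurtosis")):
            feats[(color, name)] = np.array([row[k] for row in per_bin])
    return feats  # twelve 27-component vectors: 4 moments x 3 colors

Splitting each histogram into 2 or 4 parts instead of 3 gives the 8-bin and 64-bin variants mentioned later in this conclusion.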
To facilitate the comparison of database and query images we have used two similarity measures: the Minkowski distance and the cosine correlation distance. We have used the Minkowski distance in orders 1 to 5, with the nomenclature L1


Query Image

Retrieved Images

FIGURE 6: Query image and the first 46 of the 65 retrieved images
to L5, while L6 is used for the cosine correlation distance. Among these six distances, L1 and L6 give the best performance compared to the higher orders of the Minkowski distance; indeed, we have seen that performance keeps decreasing as the Minkowski order parameter r of equation 5 increases.
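For reference, the general forms of the two distance families are standard; the following sketch uses the usual textbook definitions, since equations 5 and 6 appear earlier in the paper and are not reproduced here (the paper's exact cosine correlation formula may differ):

L_r(Q, D) = \left( \sum_{i=1}^{n} \lvert q_i - d_i \rvert^{r} \right)^{1/r}, \quad r = 1, \dots, 5

L_6(Q, D) = 1 - \frac{\sum_{i=1}^{n} q_i d_i}{\sqrt{\sum_{i=1}^{n} q_i^{2}} \, \sqrt{\sum_{i=1}^{n} d_i^{2}}}

With r = 1 this reduces to the Absolute distance (AD) and with r = 2 to the Euclidean distance (ED); L6 is unaffected by replacing Q with kQ because the scale factor cancels in the ratio.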

Conventional CBIR systems are mostly designed with the Euclidean distance. We have shown the effective use of two other similarity measures, the Absolute distance (AD) and the cosine correlation distance (CD). The work presented in this paper shows that AD and CD give far better performance than the commonly adopted Euclidean distance (ED). In all the tables of PRCP results we highlighted the two best results per row, and on counting and comparing them we found that AD and CD beat ED in the majority of cases.

Comparing the feature vector types based on moments, the even moments perform better than the odd moments, i.e. Standard Deviation and Kurtosis are better than Mean and Skewness.
Observation of all the performance evaluation parameters shows that the best value obtained for PRCP is 0.8, averaged over 10 queries, for many of the 20 classes. Combining the R, G, B color results using the special criterion, the best PRCP value works out to 0.5 averaged over all 200 queries, which is highly desirable performance for any CBIR system. The maximum Longest String of relevant images is obtained for the Rainbow rose and Sunset classes, with values around 70 (out of 100) for the L1 and L6 distance measures, as shown in Charts 3 and 5 for the even moments. The minimum length traversed to retrieve all relevant images from the database, i.e. the best LSRR value, is 14% for L6 and 20% for L1, for the Sunset class.

We have also worked with 8 bins and 64 bins, obtained by dividing the equalized histogram into 2 and 4 parts respectively; however, the best results, presented here, were obtained with 27 bins.

REFERENCES

[1] Yong Rui and Thomas S. Huang, "Image Retrieval: Current Techniques, Promising Directions, and Open Issues", Journal of Visual Communication and Image Representation, 10, pp. 39-62, 1999.

[2] H. B. Kekre, Dhirendra Mishra, Anirudh Kariwala, "A Survey of CBIR Techniques and Semantics", International Journal of Engineering Science and Technology (IJEST), Vol. 3, No. 5, May 2011.

[3] Raimondo Schettini, G. Ciocca, S. Zuffi, "A Survey of Methods for Color Image Indexing and Retrieval in Image Databases", www.intelligence.tuc.gr/~petrakis/courses/.../papers/color-survey.pdf

[4] Sameer Antani, Rangachar Kasturi, Ramesh Jain, "A survey on the use of pattern recognition methods for abstraction, indexing and retrieval of images and video", Pattern Recognition, 35, pp. 945-965, 2002.

[5] Hualu Wang, Ajay Divakaran, Anthony Vetro, Shih-Fu Chang, and Huifang Sun, "Survey of compressed-domain features used in audio-visual indexing and analysis", Elsevier Science (USA), 2003. doi:10.1016/S1047-3203(03)00019-1

[6] S. Nandagopalan, B. S. Adiga, and N. Deepak, "A Universal Model for Content-Based Image Retrieval", World Academy of Science, Engineering and Technology, 46, 2008.

[7] C. W. Ngo, T. C. Pong, R. T. Chin, "Exploiting image indexing techniques in DCT domain", IAPR International Workshop on Multimedia Information Analysis and Retrieval.

[8] Elif Albuz, Erturk Kocalar, and Ashfaq A. Khokhar, "Scalable Color Image Indexing and Retrieval Using Vector Wavelets", IEEE Transactions on Knowledge and Data Engineering, Vol. 13, Issue 5, September 2001.

[9] Mann-Jung Hsiao, Yo-Ping Huang, Te-Wei Chiang, "A Region-Based Image Retrieval Approach Using Block DCT", © 2007 IEEE.

[10] Wu Xi, Zhu Tong, "Image Retrieval based on Multi-wavelet Transform", 2008 Congress on Image and Signal Processing, © 2008 IEEE.

[11] H. B. Kekre, Kavita Sonawane, "Query Based Image Retrieval Using Kekre's, DCT and Hybrid Wavelet Transform Over 1st and 2nd Moment", International Journal of Computer Applications (0975-8887), Vol. 32, No. 4, October 2011.

[12] H. B. Kekre, Kavita Sonawane, "Retrieval of Images Using DCT and DCT Wavelet Over Image Blocks", (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 10, 2011.

[13] Arnold W. M. Smeulders, Marcel Worring, Simone Santini, Amarnath Gupta, and Ramesh Jain, "Content-Based Image Retrieval at the End of the Early Years".

[14] "Feature Histograms for Content-Based Image Retrieval", doctoral dissertation (zur Erlangung des Doktorgrades der Fakultät für Angewandte Wissenschaften), 2002.

[15] Young Deok Chun, Sang Yong Seo, and Nam Chul Kim, "Image Retrieval Using BDIP and BVLC Moments", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 9, September 2003.

[16] H. B. Kekre, Sudeep D. Thepade, Shrikant P. Sanas, Sowmya Iyer, "Shape Content Based Image Retrieval using LBG Vector Quantization", (IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 12, December 2011.

[17] H. B. Kekre, Sudeep D. Thepade, Tanuja K. Sarode, Shrikant P. Sanas, "Image Retrieval Using Texture Features Extracted Using LBG, KPE, KFCG, KMCG, KEVR With Assorted Color Spaces", International Journal of Advances in Engineering & Technology (IJAET), ISSN: 2231-1963, Vol. 2, Issue 1, pp. 520-531, January 2012.

[18] H. B. Kekre, Kavita Sonawane, "Bins Approach To Image Retrieval Using Statistical Parameters Based On Histogram Partitioning Of R, G, B Planes", International Journal of Advances in Engineering & Technology (IJAET), ISSN: 2231-1963, January 2012.

[19] Young Deok Chun, Sang Yong Seo, and Nam Chul Kim, "Image Retrieval Using BDIP and BVLC Moments", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 13, No. 9, September 2003.

[20] H. B. Kekre, Kavita Sonawane, "Feature Extraction in Bins Using Global and Local Thresholding of Images for CBIR", International Journal of Computer Applications in Engineering, Technology and Sciences, ISSN: 0974-3596, October 09-March 10, Vol. 2, Issue 2.

[21] Guang Yang, Yingyuan Xiao, "A Robust Similarity Measure Method in CBIR System", 2008 Congress on Image and Signal Processing, © 2008 IEEE.

[22] Zhi-Hua Zhou, Hong-Bin Dai, "Query-Sensitive Similarity Measure for Content-Based Image Retrieval", ICDM '06, Proceedings of the Sixth International Conference on Data Mining, IEEE Computer Society, Washington, DC, USA, 2006.

[23] Ellen Spertus, Mehran Sahami, Orkut Buyukkokten, "Evaluating Similarity Measures: A Large-Scale Study in the Orkut Social Network", KDD '05, August 21-24, 2005, © 2005 ACM. http://doi.acm.org/10.1145/1081870.1081956

[24] Simone Santini and Ramesh Jain, "Similarity Measures", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 9, September 1999.

[25] John P. Van De Geer, "Some Aspects of Minkowski Distance", Department of Data Theory, Leiden University, RR-95-03.

[26] Dengsheng Zhang and Guojun Lu, "Evaluation of Similarity Measurement for Image Retrieval", www.gscit.monash.edu.au/~dengs/resource/papers/icnnsp03.pdf

[27] Gang Qian, Shamik Sural, Yuelong Gu, Sakti Pramanik, "Similarity between Euclidean and cosine angle distance for nearest neighbor queries", SAC '04, March 14-17, 2004, Nicosia, Cyprus, © 2004 ACM 1-58113-812-1/03/04.

[28] Sang-Hyun Park, Hyung Jin Sung, "Correlation Based Image Registration for Pressure Sensitive Paint", flow.kaist.ac.kr/upload/paper/2004/SY2004.pdf

[29] Julia Vogel, Bernt Schiele, "Performance evaluation and optimization for content-based image retrieval", Pattern Recognition Society, Elsevier, 2005. doi:10.1016/j.patcog.2005.10.024

[30] Stéphane Marchand-Maillet, "Performance Evaluation in Content-based Image Retrieval: The Benchathlon Network".

[31] Thomas Deselaers, Daniel Keysers, and Hermann Ney, "Classification Error Rate for Quantitative Evaluation of Content-based Image Retrieval Systems", http://www.cs.washington.edu/research/imagedatabase/groundtruth/ and http://www-i6.informatik.rwth-aachen.de/deselaers/uwdb

[32] H. B. Kekre, Dhirendra Mishra, "Image Retrieval using DST and DST Wavelet Sectorization", (IJACSA) International Journal of Advanced Computer Science and Applications, Vol. 2, No. 6, 2011.

[33] H. B. Kekre, Kavita Sonawane, "Image Retrieval Using Histogram Based Bins of Pixel Counts and Average of Intensities", (IJCSIS) International Journal of Computer Science and Information Security, Vol. 10, No. 1, 2012.


INSTRUCTIONS TO CONTRIBUTORS

The International Journal of Image Processing (IJIP) aims to be an effective forum for interchange
of high quality theoretical and applied research in the Image Processing domain from basic
research to application development. It emphasizes efficient and effective image technologies,
and provides a central forum for a deeper understanding in the discipline by encouraging the
quantitative comparison and performance evaluation of the emerging components of image
processing.

We welcome scientists, researchers, engineers and vendors from different disciplines to
exchange ideas, identify problems, investigate relevant issues, share common interests, explore
new approaches, and initiate possible collaborative research and system development.

To build its International reputation, we are disseminating the publication information through
Google Books, Google Scholar, Directory of Open Access Journals (DOAJ), Open J Gate,
ScientificCommons, Docstoc and many more. Our International Editors are working on
establishing ISI listing and a good impact factor for IJIP.

The initial efforts helped to shape the editorial policy and to sharpen the focus of the journal.
Starting with volume 6, 2012, IJIP appears in more focused issues. Besides normal publications,
IJIP intends to organize special issues on more focused topics. Each special issue will have a
designated editor (editors) either member of the editorial board or another recognized specialist
in the respective field.

We are open to contributions, proposals for any topic as well as for editors and reviewers. We
understand that it is through the effort of volunteers that CSC Journals continues to grow and
flourish.

LIST OF TOPICS
The scope of the International Journal of Image Processing (IJIP) extends to, but is not limited to, the
following:

Architecture of imaging and vision systems
Autonomous vehicles
Character and handwritten text recognition
Chemical and spectral sensitization
Chemistry of photosensitive materials
Coating technologies
Coding and transmission
Cognitive aspects of image understanding
Color imaging
Communication of visual data
Data fusion from multiple sensor inputs
Display and printing
Document image understanding
Generation and display
Holography
Image analysis and interpretation
Image capturing, databases
Image generation, manipulation, permanence
Image processing applications
Image processing: coding analysis and recognition
Image representation, sensing
Imaging systems and image scanning
Implementation and architectures
Latent image
Materials for electro-photography
Network architecture for real-time video transport
New visual services over ATM/packet network
Non-impact printing technologies
Object modeling and knowledge acquisition
Photoconductors
Photographic emulsions
Photopolymers
Prepress and printing technologies
Protocols for packet video
Remote image sensing
Retrieval and multimedia
Storage and transmission
Video coding algorithms and technologies for ATM/packet

CALL FOR PAPERS

Volume: 6 - Issue: 5 - October 2012

i. Paper Submission: July 31, 2012
ii. Author Notification: September 15, 2012
iii. Issue Publication: October 2012


CONTACT INFORMATION

Computer Science Journals Sdn BhD
B-5-8 Plaza Mont Kiara, Mont Kiara
50480, Kuala Lumpur, MALAYSIA
Phone: 006 03 6207 1607
006 03 2782 6991

Fax: 006 03 6207 1697

Email: cscpress@cscjournals.org
