
A comparison of differential evolution, particle swarm optimization, artificial bee colony for entropy based multilevel thresholding

Sushil Kumar*, Millie Pant, A. K. Ray
Department of Applied Science and Engineering, Indian Institute of Technology Roorkee, Roorkee, India
*Corresponding author: Sushil Kumar
Abstract
Multilevel thresholding is one of the most popular techniques for image segmentation. Image segmentation can be broadly classified as bi-level or multilevel. In bi-level image segmentation the whole image is divided into partitions based on a single threshold value, whereas in multilevel segmentation multiple threshold values are required. For a successful segmentation, proper threshold values should be assigned so as to optimize a criterion such as entropy or between-class variance. Several algorithms are available in the literature for image segmentation. In this paper we consider some popular nature inspired metaheuristics (NIM) for multilevel maximum entropy thresholding. The algorithms used are Differential Evolution (DE), Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC). These are well known global heuristic algorithms and in the present study they have been applied to find the optimal multilevel thresholds. Otsu's between-class variance and Kapur's maximum entropy techniques are used as fitness functions. Experiments are performed on various images and the numerical results are compared. It is observed that in some cases the Otsu method gives the same performance as DE, PSO and ABC, but as the number of classes increases DE shows better results than the others.
2013 Elsevier Science. All rights reserved
Keywords: Particle Swarm Optimization, Artificial Bee Colony, Differential Evolution, maximum entropy, Otsu.
1. Introduction
Segmentation plays a vital role in image processing. For example, in medical imaging it helps in locating tumors, measuring tissue volumes, in computer guided surgery, in treatment planning, etc. Segmentation is also used in satellite imaging, e.g. for locating roads, fields and buildings. Some other examples include face recognition, iris recognition and fingerprint recognition. Considering its importance, researchers have paid a lot of attention to developing suitable methods for image segmentation. Images can be segmented manually, but in real-life applications manual segmentation is not preferred.
Segmentation is a challenging task and an elementary step in image processing. It divides the image into non-overlapping partitions based on regions of interest and homogeneous regions. In bi-level segmentation the image is divided into two classes: one is the background and the other is the region of interest or object. Bi-level thresholding can be used to create binary images, while multilevel thresholding determines multiple thresholds which divide the pixels into multiple groups.
Thresholding of an image can be divided into two different types: local thresholding and global thresholding. A local threshold value is assigned on the basis of each part of the image, so it may vary within one image. In global thresholding only one value is assigned to the whole image as the threshold value.
Two approaches are used for handling the local or global probability density function of the grey level histogram: (1) parametric and (2) nonparametric. In the parametric approach some statistical parameters are estimated for finding the suitable
threshold value. Computationally these methods are very expensive and the results depend on the initial conditions. In the nonparametric approach threshold values are obtained by maximizing some criterion such as the between-class variance [1] or entropy measures [2].
In this paper PSO, ABC and Differential Evolution are used for multilevel thresholding. DE shows a significant advantage, even for small class sizes, in comparison with ABC and PSO. B. Akay [7] compared the entropy-based and between-class-variance criteria using PSO and ABC and found ABC to be more efficient as the class size increases.
The paper is organized as follows: in Section 2 a literature review for image segmentation is given. In Section 3 the multilevel thresholding problem is defined. In Section 4 Kapur's entropy is explained. In Section 5 the three algorithms PSO, DE and ABC are briefly explained. Section 6 discusses the experiments and results, and finally Section 7 concludes the paper.
2. Literature Review
PSO, ABC and DE have been successfully applied in many image-related optimization fields, as shown below:
Table 1. Literature survey for PSO, DE and ABC in image segmentation
Researcher (Year) Applied to:
Particle Swarm Optimization
Omran et al. (2006) [15] Image segmentation
Maitra et al. (2008) [16] Image segmentation using multilevel thresholding
Chander et al. (2011) [17] Image segmentation
Zhang et al. (2011) [18] Image segmentation with Mahalanobis distance
Lee et al. (2012) [19] Saliency-directed color image segmentation
Zahara et al. (2005) [20] A hybrid Nelder-Mead particle swarm optimization (NM-PSO) method
Differential Evolution
Rahnamayan et al. (2006) [21] Image thresholding
Aslantas and Tunckanat (2007) [22] Segmentation of wound images
Rahnamayan et al. (2008) [23] Image thresholding using micro-opposition based DE
Kumar et al. (2011) [24] DE embedded with the Otsu method to find image thresholds
Pavan et al. (2012) [25] Automatic tissue segmentation in medical images
Artificial Bee Colony (ABC)
Horng and Jiang (2010) [26] Multilevel maximum entropy thresholding
Zhang and Wu (2011) [27] Global multi-level thresholding method for image segmentation
Ma et al. (2011) [28] Fast synthetic aperture radar image segmentation
Akay and Karaboga (2011) [29] Determination of the thresholds for image compression
Horng (2011) [30] Multiple thresholds which are very close to the optimum
Sushil et al. (2012) [33] Segmentation of CT lung images
Akay (2012) [7] Multilevel thresholding
3. Problem Definition
A threshold value $t$ partitions the image into two disjoint subsets, say $C_0$ and $C_1$. In bi-level thresholding $C_0$ contains all the pixels having grey level value below the threshold value, while the set $C_1$ contains all the pixels having grey level value above the threshold value. Assume that an image can be represented by $L$ grey levels; bi-level thresholding can then be defined as in the equation:

$C_0 = \{\, g(x,y) \in I \mid 0 \le g(x,y) \le t-1 \,\}, \quad C_1 = \{\, g(x,y) \in I \mid t \le g(x,y) \le L-1 \,\}$   (1)
Multilevel thresholding uses more than one threshold value and partitions the image into more classes, as in the equation:

$C_0 = \{\, g(x,y) \in I \mid 0 \le g(x,y) \le t_1-1 \,\}$
$C_1 = \{\, g(x,y) \in I \mid t_1 \le g(x,y) \le t_2-1 \,\}$
$\;\;\vdots$
$C_i = \{\, g(x,y) \in I \mid t_i \le g(x,y) \le t_{i+1}-1 \,\}$
$\;\;\vdots$
$C_n = \{\, g(x,y) \in I \mid t_n \le g(x,y) \le L-1 \,\}$   (2)
where $t_i$ is the $i$th threshold value and $n$ is the number of thresholds.
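As an illustration of Eq. (2) (not part of the original paper), the short NumPy sketch below labels each pixel of a grey-level image with the class it falls into for a given, sorted set of thresholds; the function name apply_thresholds is an assumption introduced here.

import numpy as np

def apply_thresholds(image, thresholds):
    """Return a label image: each pixel is mapped to the index of its class C_0..C_n."""
    t = np.sort(np.asarray(thresholds))
    # np.digitize assigns 0 for g < t_1, 1 for t_1 <= g < t_2, ..., n for g >= t_n
    return np.digitize(image, t)

# Example: two thresholds split a 0..255 image into three classes
img = np.random.randint(0, 256, size=(4, 4))
labels = apply_thresholds(img, [85, 170])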
In bi-level thresholding it is easy to choose a threshold value, whereas multilevel thresholding requires high computational effort. The Otsu method [3] and Kapur's entropy [2] provide objective functions for the nonparametric approach; the thresholded image satisfies the desired criterion based on the Otsu or Kapur entropy method.
4. Kapur's Entropy
The entropy-based thresholding segmentation technique is based on the probability distribution of the gray level histogram. The purpose is to find the optimal thresholds yielding the maximum entropy, because when the entropy is maximum the optimal thresholds separating the classes are assigned properly. The entropy of a discrete source is obtained from the probability distribution $p_i$, where $p_i$ is the probability of the system being in possible state $i$ [4]. The probability of each gray level is the relative occurrence frequency of the gray level $i$, normalized by the total count over all gray levels, as described in the equation:
$p_i = \dfrac{h(i)}{\sum_{i=0}^{L-1} h(i)}, \qquad i = 0, 1, 2, \ldots, L-1$   (3)
For bi-level thresholding, Kapur's entropy may be described by the equation:

$H_0 = -\sum_{i=0}^{t-1} \dfrac{p_i}{\omega_0}\,\ln\dfrac{p_i}{\omega_0}, \quad \omega_0 = \sum_{i=0}^{t-1} p_i; \qquad H_1 = -\sum_{i=t}^{L-1} \dfrac{p_i}{\omega_1}\,\ln\dfrac{p_i}{\omega_1}, \quad \omega_1 = \sum_{i=t}^{L-1} p_i$   (4)
The threshold is optimum when the sum of the class entropies is maximum, as described in the following equation, which serves as the objective function:

$t^{*} = \arg\max\,(H_0 + H_1)$   (5)
For multilevel thresholding, Kapur's entropy can be extended as described in the equation:

$H_0 = -\sum_{i=0}^{t_1-1} \dfrac{p_i}{\omega_0}\,\ln\dfrac{p_i}{\omega_0}, \quad \omega_0 = \sum_{i=0}^{t_1-1} p_i$
$H_1 = -\sum_{i=t_1}^{t_2-1} \dfrac{p_i}{\omega_1}\,\ln\dfrac{p_i}{\omega_1}, \quad \omega_1 = \sum_{i=t_1}^{t_2-1} p_i$
$H_2 = -\sum_{i=t_2}^{t_3-1} \dfrac{p_i}{\omega_2}\,\ln\dfrac{p_i}{\omega_2}, \quad \omega_2 = \sum_{i=t_2}^{t_3-1} p_i$
$\;\;\vdots$
$H_n = -\sum_{i=t_n}^{L-1} \dfrac{p_i}{\omega_n}\,\ln\dfrac{p_i}{\omega_n}, \quad \omega_n = \sum_{i=t_n}^{L-1} p_i$   (6)
The multilevel thresholding problem thus consists of finding the $n$-dimensional vector $[t_1, t_2, \ldots, t_n]$ which maximizes the objective function:

$t^{*} = \arg\max\,(H_0 + H_1 + H_2 + \cdots + H_n)$   (7)
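As an illustrative sketch (not the authors' code), Kapur's entropy objective of Eqs. (3)-(7) can be evaluated for an arbitrary number of thresholds from the 256-bin grey-level histogram of the image; the function name kapur_entropy and the histogram argument are assumptions introduced here.

import numpy as np

def kapur_entropy(hist, thresholds, L=256):
    p = hist / hist.sum()                      # Eq. (3): normalised histogram
    edges = [0] + sorted(int(t) for t in thresholds) + [L]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):  # one term H_k per class
        w = p[lo:hi].sum()                     # class probability omega_k
        if w <= 0:
            continue                           # empty class contributes nothing
        q = p[lo:hi] / w
        q = q[q > 0]                           # avoid log(0)
        total += -(q * np.log(q)).sum()        # H_k = -sum (p_i/w) ln(p_i/w)
    return total                               # to be maximised, Eq. (7)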
4.1. Between Class Variance
A nonparametric segmentation method based on the between-class variance divides the whole image into classes such that the variance between the different classes is maximum. In the bi-level thresholding method, the pixels of the image are divided into two classes, $C_1$ with gray levels $0, \ldots, t-1$ and $C_2$ with gray levels $t, \ldots, L-1$, by the threshold $t$. The gray level probability distributions for the two classes are given as:

$w_1 = \Pr(C_1) = \sum_{i=0}^{t-1} p_i, \qquad w_2 = \Pr(C_2) = \sum_{i=t}^{L-1} p_i$   (8)
The means of the classes $C_1$ and $C_2$ are
$u_1 = \sum_{i=0}^{t-1} \dfrac{i\,p_i}{w_1}, \qquad u_2 = \sum_{i=t}^{L-1} \dfrac{i\,p_i}{w_2}$   (9)
The total mean of the gray levels is denoted by $u_T$:

$u_T = w_1 u_1 + w_2 u_2$   (10)
The class variances are

$\sigma_1^2 = \sum_{i=0}^{t-1} (i - u_1)^2\,\dfrac{p_i}{w_1}, \qquad \sigma_2^2 = \sum_{i=t}^{L-1} (i - u_2)^2\,\dfrac{p_i}{w_2}$   (11)
The between-class variance is

$\sigma_B^2 = w_1 (u_1 - u_T)^2 + w_2 (u_2 - u_T)^2$   (12)
The Otsu method chooses the optimal threshold $t^{*}$ by maximizing the between-class variance, which is equivalent to minimizing the within-class variance, since the total variance (the sum of the within-class variance and the between-class variance) is constant for different partitions. The objective function is:

$t^{*} = \arg\max_{1 \le t \le L} \sigma_B^2(t)$   (13)
For multilevel thresholding the between-class variance can be extended to:

$\sigma_B^2(t) = \sum_{j=0}^{m} w_j (u_j - u_T)^2 = w_0 (u_0 - u_T)^2 + w_1 (u_1 - u_T)^2 + w_2 (u_2 - u_T)^2 + \cdots + w_m (u_m - u_T)^2$   (14)
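As a hedged illustration (not the authors' code), the multilevel between-class variance of Eq. (14) can be computed from a grey-level histogram as sketched below; the function name between_class_variance is an assumption introduced here.

import numpy as np

def between_class_variance(hist, thresholds, L=256):
    p = hist / hist.sum()
    levels = np.arange(L)
    u_T = (levels * p).sum()                       # total mean, Eq. (10)
    edges = [0] + sorted(int(t) for t in thresholds) + [L]
    sigma_B = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                         # class probability w_j, Eq. (8)
        if w <= 0:
            continue
        u = (levels[lo:hi] * p[lo:hi]).sum() / w   # class mean u_j, Eq. (9)
        sigma_B += w * (u - u_T) ** 2              # w_j (u_j - u_T)^2, Eq. (14)
    return sigma_B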
5. Brief Explanation of the Algorithms
PSO, ABC and DE are all optimization algorithms that have been applied in many real-time applications. PSO, ABC and DE have few control parameters to be set. All these algorithms have high performance and low complexity. A brief introduction to these algorithms follows:
5.1. Particle Swarm Optimization
Particle Swarm Optimization [5] is inspired by the social foraging behaviour of some animals, such as the flocking behaviour of birds and the schooling behaviour of fish. PSO was proposed by Eberhart and Kennedy in 1995. The goal of the algorithm is to have all the particles locate the optima in a multi-dimensional hyper-volume. This is achieved by assigning initially random positions to all particles in the space along with small initial random velocities. The algorithm is executed like a simulation, advancing the position of each particle in turn based on its velocity, the best known global position in the problem space and the best position known to the particle. The objective function is sampled after each position update. Over time, through a combination of exploration and exploitation of known good positions in the search space, the particles cluster or converge together around one or several optima [6].
All of the particles are initialised at random positions by (15), and they start to move in the search space by changing their velocities and then their positions:

$x_{ij} = x_j^{\min} + \mathrm{rand}(0,1)\,(x_j^{\max} - x_j^{\min})$   (15)
where $x_{ij}$ is the position of the $i$th particle in dimension $j$, and $d$ is the dimension of the problem. Once a population is generated, the algorithm iterates as shown in Algorithm 1 in Fig. 1.
Algorithm 1 (Main steps of the PSO algorithm)
1: Initialize the population
2: repeat
3: Calculate the fitness values of the particles
4: Update the best experience of each particle
5: Choose the best particle
6: Calculate the velocities of the particles
7: Update the positions of the particles
8: until requirements are met
Fig. 1. Pseudo-code for Particle Swarm Optimization
The PSO algorithm comprises a collection of particles that move around the search space influenced by their own best past location and the best past location of the whole swarm or a close neighbour. In each iteration a particle's velocity is updated using:

$v_i(t+1) = v_i(t) + c_1\,\mathrm{rand}()\,(p_i^{best} - p_i(t)) + c_2\,\mathrm{rand}()\,(p_g^{best} - p_i(t))$   (16)
where $v_i(t+1)$ is the new velocity for the $i$th particle, $c_1$ and $c_2$ are the weighting coefficients for the personal best and global best positions respectively, $p_i(t)$ is the $i$th particle's position at time $t$, $p_i^{best}$ is the $i$th particle's best known position and $p_g^{best}$ is the best position known to the swarm. The $\mathrm{rand}()$ function generates a uniformly distributed random variable in $[0, 1]$. Variants on this update equation consider best positions within a particle's local neighbourhood at time $t$.
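A minimal PSO sketch following Eqs. (15)-(16) is given below for illustration; the parameter values and function names are assumptions, not the paper's settings, and the fitness is assumed to be maximised (e.g. Kapur's entropy or the between-class variance).

import numpy as np

def pso(fitness, dim, lo, hi, n_particles=30, iters=100, c1=2.0, c2=2.0):
    rng = np.random.default_rng()
    x = rng.uniform(lo, hi, (n_particles, dim))     # Eq. (15): random positions
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()          # best particle of the swarm
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (16)
        x = np.clip(x + v, lo, hi)                  # keep particles in the search space
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()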
5.2. Differential Evolution
DE is a parallel direct search method using a population of N parameter vectors for each generation. At generation G, the population $P_G$ is composed of the vectors $x_1^G, x_2^G, \ldots, x_N^G$. The initial population $P_0$ can be chosen uniformly at random within the search bounds, as in (15).
Here $x^{\min}$ and $x^{\max}$ are the lower and upper boundaries of the $d$-dimensional vector $x$. If some a priori knowledge is available about the problem, a preliminary solution can be included in the initial population by adding normally distributed random deviations to the nominal solution.
For each parent parameter vector, DE generates a candidate child vector based on the distance between two other parameter vectors. For each dimension $j \in [1, d]$, this process, referred to as scheme DE/rand/1 by Storn and Price, is:

$x' = x_{r_3}^{G} + F \cdot (x_{r_1}^{G} - x_{r_2}^{G})$   (17)
where the random integers $r_1$, $r_2$, $r_3$ are used as indices into the current parent population. As a result, the population size N must be greater than 3. F is a real constant positive scaling factor, normally $F \in (0, 1)$, and it controls the scale of the differential variation $(x_{r_1}^{G} - x_{r_2}^{G})$ [8], [9].
Selection of the newly generated vector is based on comparison controlled by another DE parameter, the crossover constant CR $\in [0, 1]$, which ensures search diversity. Some of the newly generated vectors will be used as child vectors for the next generation; the others will remain unchanged. The process of creating new candidates is described in the pseudo code shown in Fig. 2 [8], [9], [10-12].
Algorithm 2 (Main steps of the DE algorithm)
1: Initialization
2: Evaluation
3: repeat
4: Mutation
5: Crossover
6: Selection
7: Memorize the best solution achieved so far
8: until a termination criterion is satisfied
Fig. 2. Pseudo-code for Differential Evolution
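A minimal DE/rand/1/bin sketch following Eq. (17) and Fig. 2 is shown below for illustration; the values of F and CR and the function names are assumptions, not the paper's settings, and the fitness is assumed to be maximised.

import numpy as np

def de(fitness, dim, lo, hi, pop_size=30, iters=100, F=0.5, CR=0.9):
    rng = np.random.default_rng()
    pop = rng.uniform(lo, hi, (pop_size, dim))          # random initial population
    fit = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i],
                                    size=3, replace=False)
            mutant = pop[r3] + F * (pop[r1] - pop[r2])  # Eq. (17): DE/rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # ensure at least one gene crosses
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = fitness(trial)
            if f_trial > fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = fit.argmax()
    return pop[best], fit[best]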
5.3. Artificial Bee Colony
The Artificial Bee Colony (ABC) [34] algorithm is a swarm-based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. It was inspired by the intelligent foraging behavior of honey bees. The algorithm is specifically based on the model proposed by Tereshko and Loengarov (2005) for the foraging behavior of honey bee colonies. The model consists of three essential components: employed and unemployed foraging bees, and food sources. The first two components, employed and unemployed foraging bees, search for rich food sources (the third component) close to their hive. The model also defines two leading modes of behavior which are necessary for self-organization and collective intelligence: recruitment of foragers to rich food sources, resulting in positive feedback, and abandonment of poor sources by foragers, causing negative feedback.
The main phases of the algorithm are given step-by-step in Algorithm 3.
Algorithm 3 (Main steps of the ABC algorithm)
1: Initialization
2: Evaluation
3: repeat
4: Employed Bee Phase
5: Onlooker Bee Phase
6: Scout Bee Phase
7: Memorize the best solution achieved so far
8: until a termination criterion is satisfied
Fig. 3. Pseudo-code for Artificial Bee Colony
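A minimal ABC sketch covering the employed, onlooker and scout bee phases of Fig. 3 is given below for illustration; the parameter values and function names are assumptions, not the paper's settings, and the fitness is assumed to be non-negative and maximised (as the entropy and variance criteria are).

import numpy as np

def abc(fitness, dim, lo, hi, n_sources=20, iters=100, limit=20):
    rng = np.random.default_rng()
    foods = rng.uniform(lo, hi, (n_sources, dim))       # random initial food sources
    fit = np.array([fitness(f) for f in foods])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbour(i):
        # perturb one dimension of source i towards/away from a random other source k
        k = rng.choice([j for j in range(n_sources) if j != i])
        d = rng.integers(dim)
        cand = foods[i].copy()
        cand[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[k, d])
        cand = np.clip(cand, lo, hi)
        f = fitness(cand)
        if f > fit[i]:                                   # greedy replacement
            foods[i], fit[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                       # employed bee phase
            try_neighbour(i)
        prob = fit / fit.sum()                           # onlooker bee phase: fitness-proportional choice
        for i in rng.choice(n_sources, size=n_sources, p=prob):
            try_neighbour(i)
        worn = trials.argmax()                           # scout bee phase: abandon exhausted source
        if trials[worn] > limit:
            foods[worn] = rng.uniform(lo, hi, dim)
            fit[worn] = fitness(foods[worn])
            trials[worn] = 0
    best = fit.argmax()
    return foods[best], fit[best]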
6. Experimental Results
Multilevel thresholding deals with finding the optimal thresholds within the grey-level range $[0, L-1]$ that maximise a fitness criterion. The search space of the problem is therefore $[0, L-1]^n$, where $n$ is the number of thresholds.
In the PSO algorithm, particle positions represent the thresholds, and the aim is to find the optimal thresholds by changing the velocities and the positions of the particles in the search space [7].
In the DE algorithm, vector positions represent the thresholds; after mutation and crossover, the selection process finds the optimal threshold values.
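As an illustration of how these pieces could fit together (not the authors' pipeline), the sketch below reuses the hypothetical kapur_entropy, de and apply_thresholds functions from the earlier sketches and assumes a grey-level image already loaded as a NumPy array img; the between_class_variance sketch could be substituted to use the Otsu criterion instead.

import numpy as np

hist, _ = np.histogram(img, bins=256, range=(0, 256))   # grey-level histogram of the image
m = 3                                                    # number of thresholds (assumed)
best, score = de(lambda t: kapur_entropy(hist, t), dim=m, lo=1, hi=254)
thresholds = sorted(int(round(t)) for t in best)         # optimal thresholds found by DE
labels = apply_thresholds(img, thresholds)               # segmented (label) image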
Experiments are carried out on the images shown in Fig. 4. All the images are taken from the book Digital Image Processing by R. C. Gonzalez and R. E. Woods [13].
Fig. 4. Comparison of segmented images by different methods:
Original images: (a) thumb-print, (b) Strawberry, (c) Sticks.
Segmented images by the DE method: (a) thumb-print, (b) Strawberry, (c) Sticks.
Segmented images by PSO: (a) thumb-print, (b) Strawberry, (c) Sticks.
Table 2. Comparison of the best thresholds obtained from the methods based on the between-class variance criterion for real images
Image m Between-class variance Thresholds
Otsu PSO ABC DE Otsu PSO ABC DE
Thumb-Print 2 .931832 .931832 .931832 .931832 88, 165 88, 165 88, 165 87,169
3 .946720 .940123 .947213 .948165 73, 152, 186 72, 151, 198 71, 147, 198 71,153,197
4 .953215 .951871 .955712 .957213 69, 97, 157, 201 74, 109, 153, 189 76, 112, 181, 206 75,109,197,206
5 .965172 .963192 .968162 .969812 55, 97, 131, 158, 204 48, 79, 103, 161, 199 58, 86, 123, 168, 201 61,85,120,169,208
Strawberry 2 .917618 .917618 .917618 .917618 72, 153 72, 153 72, 153 72, 153
3 .935761 .935012 .935891 .935912 67, 122, 159 66, 113, 171 63, 105, 173 68,125,171
4 .944213 .944081 .944981 .945671 57, 102, 134, 162 21, 66, 126, 173 45, 95, 129, 172 42,93,182,194
5 .963621 .962871 .964672 .968924 39, 98, 141, 172, 197 19, 81, 119, 142, 159 39, 74, 108, 166, 159 48,82,118,172,201
Sticks 2 .927631 .927631 .927631 .927631 82, 175 82, 175 82, 175 82, 175
3 .938291 .936219 .939821 .939983 69, 149, 197 75, 146, 193 79, 136, 197 80,146,200
4 .947217 .946538 .948629 .949824 47, 101, 142, 189 47, 83, 141, 193 55, 90, 128, 178 57,83,141,187
5 .958721 .957382 .958729 .959828 41, 81, 121, 159, 210 51, 63, 121, 165, 201 28, 76, 111, 148, 198 54,82,122,157,201
7. Conclusion
It is observed that, out of the algorithms considered in the present study, DE is better than ABC and PSO for multilevel thresholding. For a number of classes equal to 2, all algorithms gave good results, the same as those of Otsu's method. However, DE gives good results for small as well as large numbers of classes. In future, DE can be used for color image segmentation and video segmentation.
References
[1]. Liao, P.-S., Chung, P.-C., Chen, T.-S.: A fast algorithm for multilevel thresholding, Journal of Information Science and Engineering 17, 713-727 (2001).
[2]. Kapur, J.N.: Maximum-entropy Models in Science and Engineering, John Wiley and Sons, 1989.
[3]. Otsu, N.: A threshold selection method from gray-level histograms, IEEE Transactions on Systems, Man and Cybernetics 9 (1), 62-66 (1979).
[4]. Portes de Albuquerque, M., Esquef, I.A., Gesualdi Mello, A.R.: Image thresholding using Tsallis entropy, Pattern Recognition Letters 25, 1059-1065 (2004).
[5]. Kennedy, J., Eberhart, R.C.: Particle swarm optimization, IEEE International Conference on Neural Networks, 4, 1942-1948 (1995).
[6]. http://www.cleveralgorithms.com/nature-inspired/swarm/pso.html
[7]. Akay, B.: A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding, Appl. Soft Comput. J. (2012), http://dx.doi.org/10.1016/j.asoc.2012.03.072.
[8]. Storn, R. and Price, K.: Differential Evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces, Technical Report TR-95-012, March 1995, ftp.ICSI.Berkeley.edu/pub/techreports/1995/tr-95-012.ps.Z.
[9]. Price, K.: An introduction to differential evolution, In Corne, D., Dorigo, M. and Glover, F. (eds.) New Ideas in Optimization, pp. 79-108, McGraw Hill, 1999.
[10]. Price, K.: Differential evolution: a fast and simple numerical optimizer. In Smith, M., Lee, M., Keller, J., and Yen, J. (eds.) Biennial Conference of the North American Fuzzy Information Processing Society, IEEE Press, New York, NY, pp. 524-527 (1996).
[11]. Price, K.V.: Differential evolution vs. the functions of the 2nd ICEO, Proc. IEEE International Conference on Evolutionary Computation, pp. 153-157 (1997).
[12]. Becerra, R.L. and Coello Coello, C.A.: Culturizing differential evolution for constrained optimization, Proc. The Fifth Mexican International Conference in Computer Science, pp. 304-311 (2004).
[13]. Gonzalez, R.C. and Woods, R.E.: Digital Image Processing, Prentice Hall, Upper Saddle River, NJ, 2002.
[14]. Sezgin, M. and Sankur, B.: Survey over image thresholding techniques and quantitative performance evaluation, Journal of Electronic Imaging 13 (1), pp. 146-165 (2004).
[15]. Omran, M.G., Engelbrecht, A.P., Salman, A.: Dynamic clustering using particle swarm optimization with application in image segmentation, Pattern Analysis & Applications 8 (4), 332-344 (2006).
[16]. Maitra, M., Chatterjee, A.: A hybrid cooperative-comprehensive learning based PSO algorithm for image segmentation using multilevel thresholding. Expert Systems with Applications 34 (2), 1341-1350 (2008).
[17]. Chander, A., Chatterjee, A., Siarry, P.: A new social and momentum component adaptive PSO algorithm for image segmentation. Expert Systems with Applications 38 (5), 4998-5004 (2011).
[18]. Zhang, Y., Huang, D., Ji, M., Xie, F.: Image segmentation using PSO and PCM with Mahalanobis distance. Expert Systems with Applications 38 (7), 9036-9040 (2011).
[19]. Lee, C.Y., Leou, J.J., Hsiao, H.H.: Saliency-directed color image segmentation using modified particle swarm optimization. Signal Processing 92 (1), 1-18 (2012).
[20]. Zahara, E., Fan, S.K.S. and Tsai, D.M.: Optimal multi-thresholding using a hybrid optimization approach, Pattern Recognition Letters 26, pp. 1082-1095 (2005).
[21]. Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.A.: Image thresholding using differential evolution (2006).
[22]. Aslantas, V., Tunckanat, M.: Differential evolution algorithm for segmentation of wound images (2007).
[23]. Rahnamayan, S., Tizhoosh, H.R.: Image thresholding using micro opposition based differential evolution. In: Proceedings of IEEE CEC 2008, pp. 1409-1416 (2008).
[24]. Kumar, S., Pant, M., Ray, A.K.: Differential evolution embedded Otsu's method for optimized image thresholding. In: Proceedings of the World Congress on Information and Communication Technology (WICT-11), pp. 325-329 (2011).
[25]. Pavan, K.K., Srinivas, V.S., Srikrishna, A., Reddy, B.E.: Automatic tissue segmentation in medical images using differential evolution. J. Appl. Sci. 12 (6), 587-592 (2012).
[26]. Horng, M.H.: Multilevel thresholding selection based on the artificial bee colony algorithm for image segmentation. Expert Syst. Appl. 38 (11), 13785-13791 (2011).
[27]. Zhang, Y., Wu, L.: Optimal multi-level thresholding based on maximum Tsallis entropy via an artificial bee colony approach. Entropy 13 (4), 841-859 (2011).
[28]. Ma, M., Liang, J., Guo, M., Fan, Y., Yin, Y.: SAR image segmentation based on artificial bee colony algorithm. Appl. Soft Comput. 11 (8), 5205-5214 (2011).
[29]. Akay, B., Karaboga, D.: Wavelet packets optimization using artificial bee colony algorithm. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 89-94 (2011).
[30]. Horng, M.H.: Multilevel thresholding selection based on the artificial bee colony algorithm for image segmentation. Expert Syst. Appl. 38 (11), 13785-13791 (2011).
[31]. Kumar, S., Pant, M., Ray, A.K.: Differential evolution embedded Otsu's method for optimized image thresholding. In: Proceedings of the World Congress on Information and Communication Technology (WICT-11), pp. 325-329 (2011).
[32]. Akay, B., Karaboga, D.: Wavelet packets optimization using artificial bee colony algorithm. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), pp. 89-94 (2011).
[33]. Horng, M.H.: A multilevel image thresholding using the honey bee mating optimization, Applied Mathematics and Computation 215, pp. 3302-3310 (2010).
[34]. Karaboga, D.: An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.