Abstract: Association rule mining is a data mining task on which a great deal of academic research has
been done and many algorithms have been proposed. Among evolutionary algorithms (EAs), the Genetic
Algorithm (GA) and Particle Swarm Optimization (PSO) are well suited for mining association rules. The
bottleneck in both GA and PSO is setting precise values for their control parameters for the problem at
hand: the parameters of both algorithms have to be tuned. This paper proposes an adaptive methodology
for parameter control of both GA and PSO. In the Adaptive Genetic Algorithm (AGA) the mutation rate is
varied, while in Adaptive Particle Swarm Optimization (APSO) the acceleration coefficients are adjusted
through Estimation of Evolution State (EES) and the inertia weight adaptation is based on fitness values.
Both methods, tested on five datasets from the UCI repository, proved to generate association rules with
better accuracy and rule measures when compared to simple GA and PSO.
Keywords: Association Rule Mining, Genetic Algorithm, Particle Swarm Optimization, Adaptive GA,
Adaptive PSO, Estimation of Evolution State.
1. Introduction
Association rule mining aims at extracting interesting correlations or patterns among sets of items in
transaction databases or other data repositories. It is one of the most important and well researched
techniques of data mining. The application area of data mining varies from market analysis to business
intelligence, and has now been extended to the medical domain, temporal/spatial data analysis and web
mining. Hence the accuracy of the association rules mined and the relationships between attributes have
become important issues. The standard association rule mining methods such as Apriori [5,7] and the FP
growth tree [6,7] scan the whole dataset for each attribute match, increasing the input/output overhead of
the system. The rules generated have a single objective, aiming at accuracy alone, and the number of
rules generated is vast; pruning and summarization are needed to filter the significant rules.
The efficiency of association rule mining can be enhanced by
Reducing the number of passes over the database
Making the process multiobjective
Sustaining the search space efficiently
Evolutionary Algorithms (EAs) such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO)
provide solutions meeting the above three requirements for mining association rules [1,2,3]. Both
methods have proven to generate association rules with better predictive accuracy and reduced
execution time. However, one laborious aspect of all EAs, including PSO and GA, is performing the
appropriate parameter adjustments [11].
In solving problems with GA and PSO, their properties affect their performance. The properties of both
GA and PSO depend on the parameter settings, and hence the user needs to tune the parameters to
optimize performance. The interaction between the parameters is a complex process, and a single
parameter will have a different effect depending on the values of the others. Without prior knowledge of
the problem, parameter tuning is difficult and time consuming, as different combinations of parameter
values have to be tried to find the best setting. The two major approaches to parameter setting are
parameter tuning and parameter control. Parameter tuning, the commonly practiced approach, amounts
to finding appropriate values for the parameters before running the algorithm. Parameter control steadily
modifies the control parameter values during the run. This can be achieved through deterministic,
adaptive or self-adaptive techniques.
Deterministic parameter control modifies the strategy parameters using a deterministic rule, without any
feedback from the search. Since the parameter adjustments should rely on the current status of the
problem, this method becomes unreliable for most problems. In the self-adaptive approach, by contrast,
the parameters to be controlled are encoded into the candidate solution, which may result in a deadlock:
obtaining a good solution depends on finding a good parameter setting, but obtaining a good parameter
setting depends on finding a good solution, a chicken-and-egg causality dilemma. Moreover, extra bits
are required to store these strategy parameters, so the dimensionality of the search space is increased.
The corresponding search space thus becomes larger, and the complexity of the problem is increased.
Therefore the adaptive method is the solution. The adaptive operator in the genetic algorithm proposed
by Srinivas [12] is an improvement of the basic genetic algorithm; by using an adaptive operator, high
convergence speed and high convergence precision have been obtained. Adaptive parameter control is
applied for inertia control in Eberhart and Kennedy [13], for linearly decreasing inertia over the
generations in Arumugam and Rao [14], and through a fuzzy logic controller in Luo and Yi [15]. It was also
used for the acceleration coefficients in Ratnaweera et al. [16] and Arumugam and Rao [14] by balancing
the cognitive and the social components. Analyzing how the parameters change over the course of
evolution, and adapting them accordingly, can enhance the performance of an EA. In this paper a
parameter control mechanism based on Estimation of Evolution State (EES) is proposed for adapting the
acceleration coefficients in PSO, and the inertia weight is adapted during the evolutionary process based
on fitness values.
The rest of the paper is organized as follows. Section 2 introduces the preliminaries of association rules,
GA and PSO. Section 3 reviews the literature related to the proposed methodology. Section 4 describes
the proposed adaptive methods. Section 5 reports the experimental settings, presents the results and
discussion, and draws conclusions.
2. Preliminaries
This section briefly discusses association rules and their related factors. The basic features of
multiobjective optimization, the Genetic Algorithm and Particle Swarm Optimization are also discussed.
2.1 Association Rule
Association rules are a class of important regularities in data. Association rule mining is commonly stated
as follows [4]: Let I = {i1, i2, ..., in} be a set of n binary attributes called items. Let D = {t1, t2, ..., tm} be a set
of transactions called the database. Each transaction in D has a unique transaction ID and contains a
subset of the items in I. A rule is defined as an implication of the form X → Y, where X, Y ⊆ I and
X ∩ Y = ∅. The itemsets X and Y are called the antecedent (left-hand side, LHS) and consequent
(right-hand side, RHS) of the rule. Often rules are restricted to only a single item in the consequent.
There are two basic measures for association rules: support and confidence. The support of an
association rule is defined as the percentage/fraction of records that contain X ∪ Y out of the total number
of records in the database. The count for each item is increased by one every time the item is
encountered in a different transaction T in database D during the scanning process. Support is calculated
using the following equation:

Support(X → Y) = σ(X ∪ Y) / |D|    (1)

where σ(Z) denotes the number of transactions containing all items of Z. The confidence of an
association rule is defined as the percentage/fraction of the transactions that contain X ∪ Y out of the
total number of records that contain X; when this percentage exceeds the confidence threshold, an
interesting association rule X → Y can be generated:

Confidence(X → Y) = Support(X ∪ Y) / Support(X)    (2)

Confidence is a measure of the strength of the association rule.
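As a concrete illustration of equations (1) and (2), the two measures can be computed over a small
hand-made transaction database (the item names below are purely illustrative):

```python
# Toy transaction database: each transaction is a set of items.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]

def support(itemset, db):
    """Fraction of transactions that contain every item in `itemset` (eq. 1)."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(antecedent, consequent, db):
    """support(X union Y) / support(X) (eq. 2)."""
    return support(antecedent | consequent, db) / support(antecedent, db)

print(support({"bread", "milk"}, transactions))       # 0.5 (2 of 4 transactions)
print(confidence({"bread"}, {"milk"}, transactions))  # 0.666... ((2/4) / (3/4))
```

The rule {bread} → {milk} would therefore pass a confidence threshold of 0.6 but not one of 0.7.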
2.2 Multiobjective Optimization
A general minimization problem of M objectives can be mathematically stated as: Given x = [x1,
x2, . . . , xd], where d is the dimension of the decision variable space,

Minimize: f(x) = [f1(x), f2(x), . . . , fM(x)], subject to:
gj(x) ≤ 0, j = 1, 2, . . . , J, and
hk(x) = 0, k = 1, 2, . . . , K,

where fi(x) is the ith objective function, gj(x) is the jth inequality constraint, and hk(x) is the kth equality
constraint.
A solution is said to dominate another solution if it is not worse than that solution in all the objectives and
is strictly better than that in at least one objective. The solutions over the entire solution space that are not
dominated by any other solution are called Pareto-optimal solutions.
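The dominance relation described above can be checked in a few lines of code; this sketch assumes
each solution is a tuple of objective values to be minimized:

```python
def dominates(a, b):
    """True if solution `a` dominates `b` (minimization): `a` is no worse
    than `b` in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Solutions not dominated by any other solution (Pareto-optimal set)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

pts = [(1, 4), (2, 2), (3, 1), (3, 3)]
print(pareto_front(pts))  # [(1, 4), (2, 2), (3, 1)]; (3, 3) is dominated by (2, 2)
```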
Association rule mining using GA and PSO is treated as a multiobjective problem where the objectives
are
Predictive Accuracy
Laplace
Conviction
Leverage
Predictive Accuracy measures the effectiveness of the rules mined. The mined rules must have high
predictive accuracy.

Predictive Accuracy = |X & Y| / |X|    (3)

where |X & Y| is the number of records that satisfy both the antecedent X and the consequent Y, and |X|
is the number of records satisfying the antecedent X.
Laplace is a confidence estimator that takes support into account, becoming more pessimistic as the
support of X decreases [34,35]. It ranges within [0, 1] and is defined as

Laplace(X → Y) = (|X & Y| + 1) / (|X| + 2)    (4)

Conviction is sensitive to rule direction and attempts to measure the degree of implication of a rule [33]. It
ranges within [0.5, ∞); values far from 1 indicate interesting rules.

Conviction(X → Y) = (1 − Support(Y)) / (1 − Confidence(X → Y))    (5)

Leverage, also known as the Piatetsky-Shapiro measure [36], measures how much more counting is
obtained from the co-occurrence of the antecedent and consequent than expected under independence.
It ranges within [−0.25, 0.25] and is defined as

Leverage(X → Y) = Support(X ∪ Y) − Support(X) × Support(Y)    (6)
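A sketch of the four objectives, computed from record counts and assuming the standard definitions of
Laplace, conviction and leverage (equations 3-6 follow these forms):

```python
import math

def rule_measures(n_xy, n_x, n_y, n_total):
    """Compute the four rule-quality objectives from record counts:
    n_xy = |X & Y|, n_x = |X|, n_y = |Y|, n_total = |D|."""
    conf = n_xy / n_x
    acc = conf                                   # predictive accuracy (eq. 3)
    laplace = (n_xy + 1) / (n_x + 2)             # eq. 4: pessimistic for small |X|
    supp_y = n_y / n_total
    # eq. 5: conviction is infinite when confidence is exactly 1
    conviction = math.inf if conf == 1 else (1 - supp_y) / (1 - conf)
    leverage = n_xy / n_total - (n_x / n_total) * supp_y  # eq. 6
    return acc, laplace, conviction, leverage

# A rule matching 40 of 100 records, antecedent in 50, consequent in 60:
print(rule_measures(n_xy=40, n_x=50, n_y=60, n_total=100))
```

For this example the rule has accuracy 0.8, conviction 2.0 and leverage 0.1, so it would be considered
interesting by the conviction and leverage criteria.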
2.3 Particle Swarm Optimization
Each particle p, at some iteration t, has a position x and a displacement velocity v. The particle best
(pBest) and global best (gBest) positions are stored in the associated memory. The velocity and position
are updated using equations (7) and (8) respectively:

vid = ω · vid + c1 · rand() · (pBestid − xid) + c2 · rand() · (gBestd − xid)    (7)
xid = xid + vid    (8)

where
ω is the inertia weight,
vi is the velocity of the ith particle,
xi is the position of the ith (current) particle,
i is the particle number,
d is the dimension of the search space,
rand() is a random number in (0, 1),
c1 is the individual (cognitive) factor,
c2 is the societal (social) factor,
pBest is the particle best, and
gBest is the global best.
Inertia weight controls the impact of the velocity history into the new velocity and balances global and
local searches. Suitable fine-tuning of cognitive and social parameters c1 and c2 result in faster
convergence of the algorithm and alleviate the risk of settling in one of the local minima.
The Pseudo code for PSO is given below.
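As an illustration of the update rules (7) and (8), a minimal PSO for a continuous minimization problem
can be sketched as follows (this is a generic sketch, not the paper's implementation; the parameter
values are common defaults):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.72, c1=1.49, c2=1.49):
    """Minimal PSO: velocity update per eq. (7), position update per eq. (8)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]                                       # inertia
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])  # cognitive
                             + c2 * random.random() * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]                                           # eq. (8)
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x), dim=2)  # sphere function
print(best)  # a point near the optimum [0, 0]
```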
3. Literature review
There are many methods to control the parameter setting during an EA run. Researchers have proposed
different self-adaptive approaches in EA to make the parameters evolve by themselves. Back [23] embeds
the mutation rate into chromosomes in GA to observe the performance of the self-adaptive approach in
different functions. Spears [24] adds an extra bit in the schematic of chromosome to investigate the
relative probability of operating two-point crossover and uniform crossover in the GA. Hop and
Tabucanon [25] presented a new and original approach to solve the lot size problem using an adaptive
genetic algorithm with an automatic self-adjustment of three operator rates namely, the rate of crossover
operation, the rate of mutation operation and the rate of reproduction operation. Adaptive directed
mutation operator in real coded genetic algorithm was applied to solve complex function optimization
problems [31]. It enhances the abilities of GAs in searching global optima as well as in speeding
convergence by integrating the local directional search strategy and the adaptive random search strategies.
A hybrid and adaptive fitness function, in which both filter and wrapper approaches were applied, was
proposed in [32] for feature selection via genetic algorithm to generate different subsets for the individual
classifiers. In adaptive particle swarm optimization, the evolution direction of each particle is redirected
dynamically by adjusting the two sensitive parameters of PSO, the acceleration coefficients, during the
evolution process [30].
Several strategies [26,27,28] have been proposed to tune the contribution of various sub-algorithms
adaptively according to their previous performance, and have shown pretty good effect.
4. Methodology
Any heuristic search can be characterized by two concepts, exploitation and exploration. These concepts
are often mutually conflicting: if exploitation of a search is increased, then exploration decreases, and
vice versa. The manipulation of control parameters in GA and PSO balances this trade-off. This paper
proposes adaptive parameter control for both the Genetic Algorithm and Particle Swarm Optimization.
4.1 Adaptive Genetic Algorithm (AGA)
Genetic algorithms, when applied for mining association rules, perform a global search and cope better
with attribute interaction. With roulette wheel selection, the parents for crossover and mutation are
selected based on their fitness values: a candidate with a high fitness value has a high chance of
selection. The crossover and mutation operations are governed by the crossover and mutation rates,
usually defined by the user. The efficiency of the rules mined by the genetic algorithm depends mainly on
the mutation rate, while the crossover rate affects the convergence rate [37]. Therefore tuning the
mutation rate becomes an important criterion for mining association rules with GA. A higher mutation rate
generates chromosomes strongly deviated from the original values, resulting in higher exploration time;
a lower mutation rate results in a cluster of reproduced chromosomes crowded towards the global
optimum region, limiting the search space. Hence setting the mutation rate adaptively, changing it from
generation to generation based on feedback and the fitness value, leads to a better solution.
The algorithm for the adaptive genetic algorithm for mining association rules is given below:
{
    Initialize population;
    while (termination condition not reached)
    {
        Perform selection;
        Perform crossover and mutation;
        Evaluate fitness of each individual;
        Adapt the mutation rate;
    }
}
The mutation rate is made adaptive as given in the equation below.
(8)
where pm(n+1) is the (n+1)th-generation mutation rate, pm(1) is the first-generation mutation rate, fi(n) is
the fitness of individual itemset i in generation n, fmax(n+1) is the highest fitness among the (n+1)th-generation
individuals, m is the number of itemsets, and α is the adjustment factor, which is set within the
range [0, 1].
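The adaptation feeds the population's fitness back into the mutation rate. A simple sketch in that spirit
(the paper's exact update is equation (8) above; the rule below, which lowers the rate on improvement
and raises it on stagnation, is an illustrative stand-in):

```python
def adapt_mutation_rate(pm, fitnesses, prev_best, alpha=0.5,
                        pm_min=0.001, pm_max=0.3):
    """Feedback rule: lower the mutation rate when the best fitness improves
    (exploit), raise it slightly when the best fitness stagnates (explore).
    `alpha` plays the role of the adjustment factor in [0, 1]."""
    best = max(fitnesses)
    if best > prev_best:                      # progress: reduce disruption
        pm *= 1 - alpha * (best - prev_best) / best
    else:                                     # stagnation: inject diversity
        pm *= 1 + alpha * 0.1
    return min(max(pm, pm_min), pm_max), best

# One generation's update, starting from a mutation rate of 0.08:
pm, best = adapt_mutation_rate(0.08, [0.60, 0.80, 0.90], prev_best=0.85)
```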
4.2 Adaptive Particle Swarm Optimization (APSO)
A swarm [17] consists of an integer number M of particles xi moving within the search space S ⊆ Rd,
each representing a potential solution of the problem:

Find x* = arg min x∈S F(x),

where F is the fitness function associated with the problem, which we consider to be a minimization
problem without loss of generality.
PSO is mainly conducted by three key parameters important for the speed, convergence and efficiency of
the algorithm [10]: the inertia weight ω and two positive acceleration coefficients c1 and c2. The inertia
weight controls the impact of the velocity history on the new velocity. The acceleration parameters are
typically two positive constants, called the cognitive parameter c1 and the social parameter c2.
The role of the inertia weight is considered critical for the PSO algorithm's convergence behavior. As it
balances global and local search, it has been suggested to decrease it linearly with time, usually in a way
that first emphasizes global search. Suitable fine-tuning of the cognitive and social parameters c1 and c2
may result in faster convergence of the algorithm and alleviate the risk of settling in one of the local
minima. The pseudocode for the adaptive PSO is given below.
/* Ns: size of the swarm, C: maximum number of iterations, Of : the final output */
1) t = 0, randomly initialize S0:
      Initialize xi, ∀i, i ∈ {1, . . ., Ns}
      Initialize vi, ∀i, i ∈ {1, . . ., Ns}
      pBesti ← xi, ∀i, i ∈ {1, . . ., Ns}
      gBest ← best of the xi
2) for t = 1 to t = C
      for i = 1 to i = Ns
         adjust parameters (ωi, c1i, c2i)
         update vi (eq. 7) and xi (eq. 8); evaluate fitness; update pBesti and gBest
3) Of ← gBest and stop
The step "adjust parameters (ωi, c1i, c2i)" in the above pseudocode is achieved through the adaptive
mechanism proposed here. The proposed approach divides the evolution into four states based on
evolutionary state estimation: Convergence, Exploration, Exploitation and Jumping out.
Estimation of Evolutionary State (EES)
Based on the search behaviors and the population distribution characteristics of PSO, the EES is
performed as follows:
1. The mean distance between particles is calculated using the Euclidean distance measure for each
particle i:

di = (1 / (N − 1)) · Σ j=1..N, j≠i ‖xi − xj‖    (9)

where N is the population size, and xi and xj are the ith and jth particles in the population respectively.
2. Calculate the evolutionary state estimator e, defined as

e = (dg − dmin) / (dmax − dmin)    (10)

where dg is the distance measure of the gBest particle, and dmax and dmin are the maximum and
minimum distance measures respectively from step 1.
3. Record the evolutionary factor e for 100 generations for each dataset individually.
4. Classify the estimator e into the states Exploration, Exploitation, Convergence and Jumping out for
each dataset through fuzzy classification techniques.
5. The intervals arrived at through fuzzification are shown in the table below.
Dataset | Convergence | Exploitation | Exploration | Jumping out
Lenses | 0.0, 0.3 | 0.1, 0.4 | 0.2, 0.7 | 0.6, 1
Car Evaluation | 0.0, 0.15 | 0.15, 0.25 | 0.1, 0.3 | 0.3, 1
Habermans Survival | 0.0, 0.4 | 0.3, 0.7 | 0.6, 0.9 | 0.8, 1
Postoperative Patient | 0, 0.5 | 0.2, 0.6 | 0.4, 0.8 | 0.7, 1
Zoo | 0.0, 0.15 | 0.1, 0.3 | 0.2, 0.4 | 0.3, 1
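Steps 1-4 above can be sketched directly from equations (9) and (10); the interval table is then used as a
lookup (overlapping fuzzy intervals are resolved here simply by order, which is an implementation choice
of this sketch):

```python
import math

def mean_distance(i, positions):
    """Eq. (9): mean Euclidean distance from particle i to all other particles."""
    xi, n = positions[i], len(positions)
    return sum(math.dist(xi, xj) for j, xj in enumerate(positions) if j != i) / (n - 1)

def evolutionary_factor(positions, gbest_index):
    """Eq. (10): e = (d_g - d_min) / (d_max - d_min), always in [0, 1]."""
    dists = [mean_distance(i, positions) for i in range(len(positions))]
    d_g = dists[gbest_index]
    return (d_g - min(dists)) / (max(dists) - min(dists))

def classify_state(e, intervals):
    """Map the factor e to a state; the first matching interval wins."""
    for state, (lo, hi) in intervals.items():
        if lo <= e <= hi:
            return state
    return "Convergence"

# Fuzzified intervals for the Lenses dataset, taken from the table above.
lenses = {"Convergence": (0.0, 0.3), "Exploitation": (0.1, 0.4),
          "Exploration": (0.2, 0.7), "Jumping out": (0.6, 1.0)}
```

A tightly clustered swarm whose gBest lies close to the other particles yields a small e, which the Lenses
intervals classify as Convergence.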
State | c1 | c2
Exploration | Increase | Decrease
Exploitation | Increase Slightly | Decrease Slightly
Convergence | Increase Slightly | Increase Slightly
Jumping out | Decrease | Increase
Exploration: During exploration, particles should be allowed to explore as many optimal regions as
possible. This avoids crowding over a single optimum, probably a local one, and explores the target
thoroughly. An increase in the value of c1 and a decrease in c2 facilitate this process.
Exploitation: In this state the particles group towards the historical best positions of each particle; the
local information of the particles aids this process. A slight increase in c1 advances the search around the
particle best (pBest) positions. At the same time, a slight decrease in c2 avoids deception by local optima,
as the final global position has yet to be explored.
Convergence: In this state the swarm identifies the global optimum. All the other particles in the swarm
should be led towards the global optimum region. A slight increase in the value of c2 helps this process.
To speed up convergence, a slight increase in the value of c1 is also adopted.
Jumping out: The global best (gBest) particle moves away from local optima towards the global optimum,
taking it away from the crowding cluster. Once any particle in the swarm reaches this region, all particles
should follow it rapidly. A large c2 along with a relatively small c1 value helps to obtain this goal.
The adjustments on the acceleration coefficients should be minimal; hence the maximum increment or
decrement between two generations is bounded by the range [0.05, 0.1]. The sum of the acceleration
coefficients is limited to 4.0; when the sum exceeds this limit, both c1 and c2 are normalized:

ci = 4.0 · ci / (c1 + c2), i = 1, 2    (11)
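The per-state adjustments and the normalization of equation (11) can be sketched as follows; treating a
"slight" change as half of the bounded step is an assumption of this sketch:

```python
import random

# (direction for c1, direction for c2); 0.5 encodes a "slight" change.
DELTA = {
    "Exploration":  (+1.0, -1.0),
    "Exploitation": (+0.5, -0.5),
    "Convergence":  (+0.5, +0.5),
    "Jumping out":  (-1.0, +1.0),
}

def adapt_coefficients(c1, c2, state, lo=0.05, hi=0.1):
    """Adjust c1, c2 per the state table; bound the per-generation step to
    [lo, hi] and renormalize so that c1 + c2 never exceeds 4.0 (eq. 11)."""
    step = random.uniform(lo, hi)
    d1, d2 = DELTA[state]
    c1, c2 = c1 + d1 * step, c2 + d2 * step
    if c1 + c2 > 4.0:
        scale = 4.0 / (c1 + c2)
        c1, c2 = c1 * scale, c2 * scale
    return c1, c2
```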
Inertia Weight Adaptation
The inertia weight controls the impact of previous flying experience, which is utilized to keep the balance
between exploration and exploitation. The particle adjusts its trajectory according to its own best
experience and the information of its neighbors. In addition, the inertia weight is an important
convergence factor: the smaller the inertia weight, the faster the convergence of PSO. A linear decrease
in inertia weight may gradually swerve the particles away from their global optimum. Hence a nonlinear
adaptation of the inertia weight, as proposed in the equation below, is the solution. The global best
particle is derived based on the fitness values of the particles in the swarm, and the proposed
methodology adapts the inertia weight based on the fitness values exhibited by the particles.
(12)
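Since the concrete form of equation (12) is not reproduced here, the sketch below is a hypothetical
fitness-based adaptation in the stated spirit: fitter particles receive a smaller inertia weight (faster
convergence near good regions) and weaker particles a larger one, varied nonlinearly between the
common bounds 0.4 and 0.9:

```python
import math

def adapt_inertia(f_i, f_best, f_worst, w_min=0.4, w_max=0.9):
    """Hypothetical nonlinear, fitness-based inertia weight (minimization):
    r = 0 for the best particle (w -> w_min), r = 1 for the worst (w -> w_max)."""
    if f_best == f_worst:
        return w_max
    r = (f_i - f_best) / (f_worst - f_best)
    return w_min + (w_max - w_min) * (1 - math.exp(-3 * r)) / (1 - math.exp(-3))
```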
5. Experimental Results and Discussions
Five datasets from the University of California Irvine (UCI) repository [39], Lenses, Habermans Survival,
Car Evaluation, Postoperative Patient and Zoo, have been used for the experiments. The results are
compared with the traditional PSO and GA performance [38]. The experiments were developed using
Java and run in a Windows environment. The best of five executions was recorded. The number of
iterations was fixed at one hundred. The datasets considered for the experiments are listed in Table 1.
Table 1. Datasets considered for the experiments

Dataset | Attributes | Instances | Attribute characteristics
Lenses | 4 | 24 | Categorical
Car Evaluation | 6 | 1728 | Categorical, Integer
Habermans Survival | 3 | 310 | Integer
Postoperative Patient | 8 | 87 | Categorical, Integer
Zoo | 16 | 101 | Categorical, Binary, Integer
In the adaptive GA methodology the mutation rate was made self-adaptive, based on the analysis of
control parameters [37], where the mutation operation was found to influence the accuracy of the system
while the crossover operation affects only the convergence rate. Hence the crossover rate is kept at a
fixed value. For the mutation rate, in addition to the feedback from the earlier mutation rate, the fitness
value is also considered during adaptation. This enhances the accuracy of the association rules mined.
The predictive accuracy achieved using the adaptive method and traditional GA is given in Table 2.
Table 2. Accuracy Comparison between GA and Adaptive GA

Dataset | GA Accuracy | GA Generations | AGA Accuracy | AGA Generations
Lenses | 75 | 38 | 87.5 | 35
Habermans Survival | 52 | 36 | 68 | 28
Car Evaluation | 85 | 29 | 96 | 21
Postoperative Patient | 84 | 63 | 92 | 52
Zoo | 82 | 58 | 91 | 47
The predictive accuracy of the rules mined by the AGA is improved in comparison with the traditional GA
methodology. The convergence rate has also improved due to the effective confinement of the search
space by the adaptation methodology. The mutation rate of the adaptive GA at the final iteration is noted
and substituted for the static mutation rate in traditional GA. The predictive accuracy of the association
rules mined by this method is compared with the results of the original GA and the adaptive GA, as
plotted in figure 1.
[Figure 1. Predictive accuracy of GA, AGA, and GA with the AGA final mutation rate, for the five datasets.]

Rule measures (Laplace, Conviction, Leverage) for the association rules mined by AGA:

Dataset | Laplace | Conviction | Leverage
Lenses | 0.65201 | Infinity | 0.0324
Habermans Survival | 0.64782 | Infinity | 0.0569
Car Evaluation | 0.652488 | Infinity | 0.04356
Postoperative Patient | 0.6528 | Infinity | 0.0674
Zoo | 0.6524 | Infinity | 0.0765
The Laplace and Leverage values away from 1 for all the datasets indicate that the rules generated are of
interest. The conviction value of infinity again signifies the importance of the rules generated.
The APSO methodology for mining association rules adapts the acceleration coefficients based on the
estimation of the evolutionary state. The estimation factor e determines in which of the four states
(Exploration, Convergence, Exploitation, Jumping out) a particle lies, and the acceleration coefficients are
adjusted accordingly. The inertia weight is adapted based on the fitness values. The velocity modification
then depends on the state in which the particle lies. This balances exploration and exploitation, thus
escaping premature convergence. The predictive accuracy of the association rules mined improves
through the evolution process.
The adaptation of the acceleration coefficients c1 and c2 across the changes of state in the estimation of
evolution state for the Zoo dataset is shown in figure 2.
[Figure 2. Adaptation of the acceleration coefficients c1 and c2 over the generations for the Zoo dataset.]

[Figure 3. Predictive accuracy over the iterations for the five datasets (Car, Haberman, Lens, Postop, Zoo).]
Rule measures (Laplace, Conviction, Leverage) for the association rules mined by APSO:

Dataset | Laplace | Conviction | Leverage
Lenses | 0.52941 | Infinity | 0.026
Car Evaluation | 0.502488 | Infinity | 0.002394
Postoperative Patient | 0.5028 | Infinity | 0.0301
Zoo | 0.5024 | Infinity | 0.0249
The Laplace measure, being away from 1, indicates that the antecedent values are dependent on the
consequent values and hence the rules generated are significant. The conviction measure being infinity
for all datasets shows that the rules generated are interesting. The Leverage measure being far from 1
again indicates the interestingness of the rules generated. The predictive accuracy achieved from the
adaptive PSO methodology is compared with that of traditional PSO in figure 4. The adaptive PSO
performs better than the traditional PSO for all five datasets when applied for association rule mining.
[Figure 4. Predictive accuracy comparison between PSO and APSO for the five datasets.]
[Figure 5. Comparison of PSO and APSO across the five datasets (scale 0-40).]
[Figure 6. Execution time of PSO and APSO for the five datasets (scale 0-12000).]
The major drawback of the traditional PSO is its premature convergence, where the particles fix some
local optimum as the target (global search area) and all particles converge locally. One of the objectives
of adaptive PSO is to avoid this premature convergence, thereby balancing exploration and exploitation.
The iteration number at which the predictive accuracy is highest is plotted for both the APSO and PSO
methodologies for the datasets used, in figure 7.
[Figure 7. Iteration number at which the highest predictive accuracy is reached, for PSO and APSO, per dataset.]
When tested on five datasets from the UCI repository, both the AGA and APSO generated association
rules with better predictive accuracy and good rule set measures. The execution time of both methods for
mining association rules is found to be optimal taking into account the accuracy achieved.
References
1. P. Deepa Shenoy, K.G. Srinivasa, K.R. Venugopal, Lalit M. Patnaik, "Evolutionary approach for mining association rules on dynamic databases," in: Proc. of PAKDD, LNAI 2637, Springer-Verlag, 2003, pp. 325-336.
2. Jacinto Mata Vázquez, José Luis Álvarez Macías, José Cristóbal Riquelme Santos, "Discovering numeric association rules via evolutionary algorithm," PAKDD, 2002, pp. 40-51.
3. R.J. Kuo, C.M. Chao, Y.T. Chiu, "Application of particle swarm optimization to association rule mining," Applied Soft Computing, vol. 11, issue 1, Jan. 2011.
4. R. Agrawal, T. Imielinski, A. Swami, "Mining association rules between sets of items in large databases," in: P. Buneman, S. Jajodia (eds.), SIGMOD '93: Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, ACM Press, New York, 1993, pp. 207-216.
5. R. Agrawal, R. Srikant, "Fast algorithms for mining association rules in large databases," Proceedings of the 20th International Conference on Very Large Data Bases (VLDB '94), Morgan Kaufmann, 1994, pp. 478-499.
6. J. Han, J. Pei, Y. Yin, "Mining frequent patterns without candidate generation," in: Proc. SIGMOD '00, Dallas, TX, May 2000, pp. 1-12.
7. J. Han, M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2001.
8. J. Kennedy, R.C. Eberhart, "Particle swarm optimization," Proceedings of the IEEE International Conference on Neural Networks 4 (1995), pp. 1942-1948.
9. J. Kennedy, R.C. Eberhart, Swarm Intelligence, Morgan Kaufmann, 2001.
10. X. Yao, Y. Liu, G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation 3 (2), 1999, pp. 82-102.
11. G. Ueno, K. Yasuda, N. Iwasaki, "Robust adaptive particle swarm optimization," IEEE International Conference on Systems, Man and Cybernetics 4, 2005, pp. 3915-3920.
12. M. Srinivas, L.M. Patnaik, "Adaptive probabilities of crossover and mutation in genetic algorithms," IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 4, 1994, pp. 656-667.
13. R. Eberhart, J. Kennedy, "A new optimizer using particle swarm theory," in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Piscataway, NJ, IEEE Service Center, 1995, pp. 39-43.
14. M.S. Arumugam, M.V.C. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing 8 (1), 2008, pp. 324-336.
15. Q. Luo, D. Yi, "A co-evolving framework for robust particle swarm optimization," Applied Mathematics and Computation 199 (2), 2007, pp. 611-622.
16. A. Ratnaweera, S.K. Halgamuge, H.C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation 8 (3), 2004, pp. 240-255.
17. A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, 2006.
18. Y. Shi, R.C. Eberhart, "Parameter selection in particle swarm optimization," in: Proceedings of the 7th Conference on Evolutionary Programming, New York, 1998, pp. 591-600.
19. K.E. Parsopoulos, M.N. Vrahatis, "UPSO: A unified particle swarm optimization scheme,"
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37. K. Indira, S. Kanmani, "Association Rule Mining Using Genetic Algorithm: The role of