2011 Ninth International Conference on ICT and Knowledge Engineering

An Improved PSO Approach for Solving Non-Convex Optimization Problems


P. Vasant
Department of Fundamental & Applied Sciences, Universiti Teknologi PETRONAS, Malaysia

T. Ganesan
Department of Mechanical Engineering, Universiti Teknologi PETRONAS, Malaysia

I. Elamvazuthi
Department of Electronic & Electrical Engineering, Universiti Teknologi PETRONAS, Malaysia

Abstract: The aim of this paper is to propose an improved particle swarm optimization (PSO) procedure for non-convex optimization problems. The approach embeds classical methods (the Kuhn-Tucker (KT) conditions and the Hessian matrix) into the fitness function, producing a semi-classical hybrid PSO algorithm (HPSO). The classical component improves the capability of the PSO algorithm to search for optimal solutions in non-convex scenarios. In this work, the refined HPSO algorithm was developed and tested against four engineering design problems: the optimization of the design of a pressure vessel (P1), the optimization of the design of a tension/compression spring (P2) and two design optimization problems in engineering (P3 and P4). The computational performance of the HPSO algorithm was then compared against the best optimal solutions from previous work on the same engineering problems. Comparative studies and analysis were carried out on the optimized results. It was observed that the HPSO algorithm provided a better minimum with higher-quality constraint satisfaction than the PSO approaches in previous work.
Keywords: Kuhn-Tucker (KT) conditions, non-convex optimization, particle swarm optimization (PSO), semi-classical hybrid particle swarm optimization (HPSO), engineering design problems.
I. INTRODUCTION
In recent years, many engineering design problems have been classified as non-convex constrained optimization problems (see [1], [2] and [3]). Constrained optimization techniques (meta-heuristics) such as genetic algorithms [4], genetic programming [5] and particle swarm optimization (PSO) [6] have been developed and applied to these problems.
Particle Swarm Optimization (PSO) is an optimization method based on the movement and intelligence of swarms; it integrates the concept of social interaction into problem solving and decision making. PSO was developed by James Kennedy and Russell Eberhart [6] in 1995. A particle swarm models the social structure of simple creatures that form groups to pursue common objectives such as food searching and predator-prey interactions. The governing principle is that an individual benefits from taking part in the activity shared by the majority of the group. Recently, PSO has been applied to various optimization problems in electric power systems, including economic dispatch (see Phuangpornpitak et al [7]).
As observed in [8], [9] and [10], classical optimization
methods such as the Augmented Lagrange, Penalty and
Karush-Kuhn-Tucker (KKT) techniques can be used to modify
existing artificial intelligence algorithms such as neural
networks. In [9] and [10], the augmented Lagrange Hopfield neural network (HNN) was used to solve the economic load dispatch problem, which involves only equality constraints. Similarly, in this work, the Kuhn-Tucker (KT) conditions [11] and the determinant of the Hessian matrix [12] are used to modify certain components of the PSO fitness function, providing the algorithm with the capacity to handle the nonlinearities encountered in engineering design problems. The HPSO algorithm is then used to solve four engineering design problems: the optimization of the design of a pressure vessel (P1) [13], the optimization of the design of a tension/compression spring (P2) [14] and two design optimization problems in engineering (P3 [15], [16] and P4 [17]). The results are then compared against the results obtained by other methods in [16], [17] and [18].
This paper is organized as follows: Section 2 presents the
problem description for P1, P2, P3 and P4. Section 3 discusses
the development of the HPSO approach. The analysis and
computational results are included in Section 4. In Section 5,
some discussions on the computational experiments are
provided. Section 6 presents the concluding remarks and
recommendations for future research work.
II. DESCRIPTION OF THE ENGINEERING DESIGN APPLICATION DATA
Four engineering design problems are considered in this work: the optimization of the design of a pressure vessel (P1) [13], the optimization of the design of a tension/compression spring (P2) [14] and two design optimization problems in engineering (P3 [15], [16] and P4 [17]).
The problem P1 is a pressure vessel design optimization problem. The design of a compressed air storage tank with a working static pressure of 3,000 psi and a minimum volume of 750 ft^3 is considered. This cylindrical vessel is capped (closed) at both ends by hemispherical heads. The tank is constructed of rolled steel plate and its shell is

made in two halves. These two halves are joined by the longitudinal welds to form the cylinder. The objective is to minimize the total cost (including the cost of the material that forms the welds [13]). The design variables involved are the shell thickness x_1, the thickness of the head x_2, the inner radius x_3, and the length of the cylindrical section of the vessel x_4. The variables x_1 and x_2 are discrete, taking integer multiples of 0.0625 inches. The formal statement of problem P1 is as follows:
Minimize:

f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3    (1)

subject to

g_1(x) = -x_1 + 0.0193 x_3 \le 0    (2)

g_2(x) = -x_2 + 0.00954 x_3 \le 0    (3)

g_3(x) = -\pi x_3^2 x_4 - (4/3)\pi x_3^3 + 1296000 \le 0    (4)

g_4(x) = x_4 - 240 \le 0    (5)

0.0625 \le x_1, x_2 \le 6.1875,  10 \le x_3, x_4 \le 200    (6)

where x_1, x_2, x_3 and x_4 are the decision variables, g_1(x), g_2(x), g_3(x) and g_4(x) are the constraints and f(x) is the objective function.
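To make the formulation concrete, the short C++ routine below (an illustrative sketch written for this section, not the authors' program) evaluates the objective (1) and the constraints (2)-(5); as a sample input it uses the PSO solution [18] listed later in Table I.

#include <cstdio>

/* Illustrative sketch (not the authors' program): evaluates the P1
   objective (1) and constraints (2)-(5); g_i(x) <= 0 means satisfied. */
static double f(const double x[4]) {
    return 0.6224 * x[0] * x[2] * x[3] + 1.7781 * x[1] * x[2] * x[2]
         + 3.1661 * x[0] * x[0] * x[3] + 19.84 * x[0] * x[0] * x[2];
}

static void g(const double x[4], double out[4]) {
    const double pi = 3.141592653589793;
    out[0] = -x[0] + 0.0193 * x[2];                               /* (2) */
    out[1] = -x[1] + 0.00954 * x[2];                              /* (3) */
    out[2] = -pi * x[2] * x[2] * x[3]
           - (4.0 / 3.0) * pi * x[2] * x[2] * x[2] + 1296000.0;   /* (4) */
    out[3] = x[3] - 240.0;                                        /* (5) */
}

int main(void) {
    const double x[4] = {0.8125, 0.4375, 42.0984, 176.637};  /* PSO [18], Table I */
    double gv[4];
    g(x, gv);
    std::printf("f(x) = %.2f\n", f(x));          /* approx. 6059.71, cf. Table I */
    for (int i = 0; i < 4; ++i)
        std::printf("g%d(x) = %11.4f (%s)\n", i + 1, gv[i],
                    gv[i] <= 0.0 ? "satisfied" : "violated");
    return 0;
}

The same routine, together with the bounds (6), can serve as the feasibility part of a fitness evaluation.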
The second problem considered in this work is the optimization of the design of a tension/compression spring (P2). The aim of this problem is to minimize the weight of a tension/compression spring, subject to certain constraints. The constraints considered are the minimum deflection, the shear stress, the surge frequency, and limits on the outside diameter and on the design variables. There are three design variables (or decision variables) in this problem: the wire diameter x_1, the mean coil diameter x_2, and the number of active coils x_3. The formal representation of problem P2 is as follows:
Minimize:

f(x) = (x_3 + 2) x_2 x_1^2    (7)

subject to

g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0    (8)

g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0    (9)

g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0    (10)

g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0    (11)

0.05 \le x_1 \le 2,  0.25 \le x_2 \le 1.3,  2 \le x_3 \le 15    (12)

where x_1, x_2 and x_3 are the decision variables, g_1(x), g_2(x), g_3(x) and g_4(x) are the constraints and f(x) is the objective function.
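As a quick consistency check, substituting the HPSO solution reported later in Table II into (7) gives f(x) = (3.663 + 2)(0.3880)(0.0611)^2 \approx 0.0082, in agreement with the tabulated value.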
The optimization problem in chemical engineering, P3, studied in [15] and [16], is as follows:
Minimize:

f(x) = 2 x_1 + x_2    (13)

subject to

g_1(x) = -16 x_1 x_2 + 1 \le 0    (14)

g_2(x) = 4 x_1^2 + 4 x_2^2 - 1 \le 0    (15)

x_1, x_2 \in [0, 1]    (16)

where x_1 and x_2 are the decision variables, g_1(x) and g_2(x) are the constraints and f(x) is the objective function.
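As a quick consistency check, the HPSO solution reported later in Table III gives f(x) = 2(0.1315) + 0.4732 = 0.7362, in agreement with the tabulated value.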
The optimization problem in engineering, P4, was presented in the works of [17]. The formal statement of this problem is as follows:
Minimize:

f(x) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2    (17)

subject to

g_1(x) = 4.84 - (x_1 - 0.05)^2 - (x_2 - 2.5)^2 \ge 0    (18)

g_2(x) = x_1^2 + (x_2 - 2.5)^2 - 4.84 \ge 0    (19)

x_1, x_2 \in [0, 6]    (20)

where x_1 and x_2 are the decision variables, g_1(x) and g_2(x) are the constraints and f(x) is the objective function.
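As a quick consistency check, the HPSO solution reported later in Table IV gives f(x) = (2.2858^2 + 2.4862 - 11)^2 + (2.2858 + 2.4862^2 - 7)^2 = (-3.2889)^2 + (1.4670)^2 \approx 12.969, matching the tabulated value.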
III. HYBRID PARTICLE SWARM OPTIMIZATION
The PSO algorithm is a technique that originates from two different ideas. The first is the observation of the swarming or flocking habits of certain animals (for instance, birds, bees and ants); the second comes from the study of evolutionary computation. The PSO algorithm works by searching the search space for candidate solutions and evaluating them against a fitness function associated with the problem criterion. The candidate solutions are analogous to particles in motion (swarming) through the fitness landscape in search of the optimal solution. Initially, the PSO algorithm chooses some candidate solutions (randomly, or set with some a priori knowledge). Each particle's position and velocity (the candidate solutions) are then evaluated against the fitness function. If the fitness criterion is not satisfied, the individual and social components are updated with an update rule, and the velocity and position of the particles are then updated. This procedure is repeated iteratively until all candidate solutions satisfy the fitness criterion and thus converge to a fixed position. It is important to note that the velocity and position updating rules are crucial to the optimization capabilities of the PSO algorithm. The velocity of each particle in motion (swarming) is updated using the following equation:
v_i(n+1) = w v_i(n) + c_1 r_1 [\hat{x}_i(n) - x_i(n)] + c_2 r_2 [g(n) - x_i(n)]    (21)

where each particle is identified by the index i, v_i(n) is the particle velocity, x_i(n) is the particle position, \hat{x}_i(n) is the particle's own best position and g(n) is the swarm's best position at iteration n. The parameters w, c_1, c_2, r_1 and r_2 are usually defined by the user and are typically constrained by the following inequalities:

w \in [0, 1.2],  c_1 \in [0, 2],  c_2 \in [0, 2],  r_1 \in [0, 1],  r_2 \in [0, 1]    (22)
The term w v_i(n) in equation (21) is the inertial term, which keeps the particle moving in the same direction as its original direction. The inertial coefficient w serves as a dampener or an accelerator during the particle's motion. The term c_1 r_1 [\hat{x}_i(n) - x_i(n)], also known as the cognitive component, functions as the particle's memory: the particle tends to return to the region of the search space where it had a very high fitness value. The term c_2 r_2 [g(n) - x_i(n)], known as the social component, moves the particle toward the locations found by the swarm in the previous iterations. After the computation of the particle velocity, the particle position is then calculated as follows:

x_i(n+1) = x_i(n) + v_i(n+1)    (23)

The iterations continue until all candidate solutions are at their fittest positions in the fitness landscape and some user-defined stopping criterion is met. For more comprehensive texts on PSO methods, refer to [16], [17] and [18].
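A minimal C++ sketch of the update rules (21) and (23) for a single particle is given below (illustrative only; the function and variable names are ours, and rand01() stands in for any uniform random number generator).

#include <cstdlib>

/* Illustrative sketch of the velocity update (21) and position update (23)
   for one particle with d decision variables; pbest is the particle's own
   best position and gbest is the swarm's best position. */
static double rand01(void) { return std::rand() / (double)RAND_MAX; }

void update_particle(double x[], double v[], const double pbest[],
                     const double gbest[], int d,
                     double w, double c1, double c2) {
    for (int k = 0; k < d; ++k) {
        const double r1 = rand01(), r2 = rand01();
        v[k] = w * v[k]
             + c1 * r1 * (pbest[k] - x[k])    /* cognitive component */
             + c2 * r2 * (gbest[k] - x[k]);   /* social component    */
        x[k] += v[k];                         /* position update (23) */
    }
}

Drawing fresh r_1 and r_2 for every dimension, as above, is one common convention; drawing them once per particle, as (21) is written, is equally valid.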
In this work, the fitness criterion described above is enhanced by using the determinant of the Hessian matrix and the KT conditions. The KT conditions are necessary conditions for optimality. Let the multivariate optimization problem be generalized as follows:

Min f(x) subject to

g_1(x_1, ..., x_n) \le b_1
g_2(x_1, ..., x_n) \le b_2
...
g_m(x_1, ..., x_n) \le b_m    (24)

where g_i(x_1, ..., x_n) are the constraints, f(x) is the objective function, n is the maximum number of decision variables, m is the maximum number of constraints and i, j \in N. Then the Lagrangian of the system can be expressed as follows:

L(x_j, \lambda_i) = f(x_j) - \sum_{i=1}^{m} \lambda_i g_i(x_j),  i \in [1, m], j \in [1, n]    (25)
It can be observed that the vector x_j is a stationary point if, for some \lambda_i \in R, the stationarity condition holds. The stationarity equation (the differential of the Lagrangian) is as follows:

\nabla L(x_j, \lambda_i) = \frac{\partial f(x_j)}{\partial x_j} - \sum_{i=1}^{m} \lambda_i \frac{\partial g_i(x_j)}{\partial x_j} = 0,  j \in [1, n]    (26)
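As an illustration, applying (25) and (26) to problem P3 in (13)-(15) gives

L(x, \lambda) = 2x_1 + x_2 - \lambda_1(-16 x_1 x_2 + 1) - \lambda_2(4 x_1^2 + 4 x_2^2 - 1),
\nabla_{x_1} L = 2 + 16\lambda_1 x_2 - 8\lambda_2 x_1 = 0,
\nabla_{x_2} L = 1 + 16\lambda_1 x_1 - 8\lambda_2 x_2 = 0,

which are exactly the stationarity equations (38) and (39) derived in Section III-C below.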
However, the KT conditions only identify the stationary points; they do not indicate whether the points are locally minimal or maximal. To overcome this setback, the Hessian matrix of the differential Lagrangian is employed. If the determinant of the Hessian matrix of the Lagrangian is positive definite (det(H(\nabla L(x_j, \lambda_i))) > 0), then the point is locally convex (a minimum point), and vice versa. Therefore, the fitness criterion of the HPSO algorithm is embedded with the KT conditions [11] and the Hessian [12] of the Lagrangian to enhance its search accuracy for the local optimum.
A. Formulations for Problem P1
For problem P1, the stationarity equations, where i \in [1, 4], are as follows:

\nabla_{x_1} L(x, \lambda_i) = 0.6224 x_3 x_4 + 6.3322 x_1 x_4 + 39.68 x_1 x_3 + \lambda_1    (27)

\nabla_{x_2} L(x, \lambda_i) = 1.7781 x_3^2 + \lambda_2    (28)

\nabla_{x_3} L(x, \lambda_i) = 0.6224 x_1 x_4 + 3.5562 x_2 x_3 + 19.84 x_1^2 - 0.0193\lambda_1 - 0.00954\lambda_2 + \lambda_3 (2\pi x_3 x_4 + 4\pi x_3^2)    (29)

\nabla_{x_4} L(x, \lambda_i) = 0.6224 x_1 x_3 + 3.1661 x_1^2 + \lambda_3 \pi x_3^2 - \lambda_4    (30)

For j \in [1, 4], \lambda_1 and \lambda_2 can be calculated directly, since the necessary condition of optimality is that the Lagrangian is conserved (the stationarity condition). The Hessian matrix of the differential Lagrangian is represented as follows:

H(\nabla L(x_j, \lambda_i)) = \frac{\partial^2 L}{\partial x_i \partial x_j}    (31)

In this work, following the notation in equation (32), the entries of the Hessian matrix of the differential Lagrangian are written as:

L_j^i = \frac{\partial^2 L}{\partial x_i \partial x_j}    (32)
L_j^i =
\begin{pmatrix}
L_1^1 & L_2^1 & L_3^1 & L_4^1 \\
L_1^2 & L_2^2 & L_3^2 & L_4^2 \\
L_1^3 & L_2^3 & L_3^3 & L_4^3 \\
L_1^4 & L_2^4 & L_3^4 & L_4^4
\end{pmatrix}  for i, j \in [1, 4]    (33)

The determinant of the Hessian matrix of the differential Lagrangian is then solved using Laplace expansion [19].
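A compact recursive implementation of the Laplace (cofactor) expansion, shown here as an illustrative C++ sketch rather than the authors' code, is:

#include <cstdio>

/* Determinant of an n x n matrix (n <= N) by Laplace expansion along the
   first row; the O(n!) cost is acceptable for the n <= 4 cases here. */
const int N = 4;

double det(const double a[N][N], int n) {
    if (n == 1) return a[0][0];
    double sum = 0.0, sign = 1.0;
    for (int p = 0; p < n; ++p) {             /* expand along row 0 */
        double m[N][N];                       /* minor of entry (0, p) */
        for (int i = 1; i < n; ++i) {
            int col = 0;
            for (int j = 0; j < n; ++j)
                if (j != p) m[i - 1][col++] = a[i][j];
        }
        sum += sign * a[0][p] * det(m, n - 1);
        sign = -sign;
    }
    return sum;
}

int main(void) {
    const double h[N][N] = {{2, 1, 0, 0}, {1, 2, 1, 0},
                            {0, 1, 2, 1}, {0, 0, 1, 2}};
    std::printf("det = %.2f\n", det(h, N));   /* prints det = 5.00 */
    return 0;
}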
B. Formulations for Problem P2
For problem P2, the stationarity equations, where i \in [1, 3], are as follows (each is the partial derivative of the Lagrangian L = f - \sum_i \lambda_i g_i built from (7)-(11), with the derivatives of (9) written in simplified form):

\nabla_{x_1} L(x, \lambda_i) = 2 x_1 x_2 (x_3 + 2) - \lambda_1 \frac{4 x_2^3 x_3}{71785 x_1^5} - \lambda_2 \left[ \frac{-3 x_2 (x_1^2 - 6 x_1 x_2 + 4 x_2^2)}{12566 x_1^4 (x_2 - x_1)^2} - \frac{1}{2554 x_1^3} \right] + \lambda_3 \frac{140.45}{x_2^2 x_3} - \frac{2}{3}\lambda_4    (34)

\nabla_{x_2} L(x, \lambda_i) = x_1^2 (x_3 + 2) + \lambda_1 \frac{3 x_2^2 x_3}{71785 x_1^4} - \lambda_2 \frac{x_1^2 - 8 x_1 x_2 + 4 x_2^2}{12566 x_1^3 (x_2 - x_1)^2} - \lambda_3 \frac{280.9 x_1}{x_2^3 x_3} - \frac{2}{3}\lambda_4    (35)

\nabla_{x_3} L(x, \lambda_i) = x_1^2 x_2 + \lambda_1 \frac{x_2^3}{71785 x_1^4} - \lambda_3 \frac{140.45 x_1}{x_2^2 x_3^2}    (36)
In problem P2, the distribution of the \lambda_i in the differential Lagrangian equations leaves \lambda_1 as a free variable. Therefore, the values of \lambda_i can only be obtained iteratively, by first guessing the free variable \lambda_1. The notation used in P1 applies similarly in P2. The Hessian matrix of the differential Lagrangian takes the following form:

L_j^i =
\begin{pmatrix}
L_1^1 & L_2^1 & L_3^1 \\
L_1^2 & L_2^2 & L_3^2 \\
L_1^3 & L_2^3 & L_3^3
\end{pmatrix}  for i, j \in [1, 3]    (37)

The determinant of the Hessian matrix of the differential Lagrangian is then solved using Laplace expansion [19].
C. Formulations for Problem P3
For problem P3, the stationarity equations, where i \in [1, 2], are as follows:

\nabla_{x_1} L(x, \lambda_i) = 2 + 16\lambda_1 x_2 - 8\lambda_2 x_1    (38)

\nabla_{x_2} L(x, \lambda_i) = 1 + 16\lambda_1 x_1 - 8\lambda_2 x_2    (39)

As in problem P1, for j \in [1, 2], \lambda_1 and \lambda_2 can be calculated directly from the stationarity condition. The Hessian matrix of the differential Lagrangian is as follows:

L_j^i =
\begin{pmatrix}
L_1^1 & L_2^1 \\
L_1^2 & L_2^2
\end{pmatrix}
=
\begin{pmatrix}
-8\lambda_2 & 16\lambda_1 \\
16\lambda_1 & -8\lambda_2
\end{pmatrix}  for i, j \in [1, 2]    (40)

The determinant of the Hessian matrix of the differential Lagrangian is then solved using Laplace expansion [19].
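For this 2 x 2 case, the Laplace expansion reduces to the familiar formula, shown here as a worked example:

det(H) = (-8\lambda_2)(-8\lambda_2) - (16\lambda_1)(16\lambda_1) = 64\lambda_2^2 - 256\lambda_1^2,

so the local convexity test det(H) > 0 holds whenever |\lambda_2| > 2|\lambda_1|.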
D. Formulations for Problem P4
For problem P4, the stationarity equations, where i \in [1, 2], are as follows:

\nabla_{x_1} L(x, \lambda_i) = 4x_1^3 + 4x_1 x_2 - 42x_1 + 2x_2^2 - 14 + 2\lambda_1 x_1 - 0.1\lambda_1 - 2\lambda_2 x_1    (41)

\nabla_{x_2} L(x, \lambda_i) = 4x_2^3 + 4x_1 x_2 + 2x_1^2 - 26x_2 - 22 + 2\lambda_1 x_2 - 5\lambda_1 - 2\lambda_2 x_2 + 5\lambda_2    (42)

As in the previous problems, for j \in [1, 2], \lambda_1 and \lambda_2 can be calculated directly, since the necessary condition of optimality is that the Lagrangian is conserved (the stationarity condition). The Hessian matrix of the differential Lagrangian is generated as follows:

L_j^i =
\begin{pmatrix}
L_1^1 & L_2^1 \\
L_1^2 & L_2^2
\end{pmatrix}  for i, j \in [1, 2]    (43)

The determinant of the Hessian matrix of the differential Lagrangian is then solved using Laplace expansion [19].
The fitness criterion in the PSO algorithm is enhanced using the KT conditions and the Hessian matrix. The PSO algorithm runs through the search space and identifies local minimum points with the aid of the Hessian. If these points satisfy the KT conditions, the solution is printed. This procedure is repeated until the position and the velocity of the particles in the swarm converge. The detailed mechanism of the program is represented in the algorithm below:
Step 1: Initialize the number of particles i and the algorithm parameters w, c_1, c_2, r_1, r_2, n_o.
Step 2: Set the initial position x_i(n) and velocity v_i(n).
Step 3: Compute the individual and social influences.
Step 4: Compute the position x_i(n+1) and velocity v_i(n+1) at the next iteration.
Step 5: Compute \lambda_i.
Step 6: Evaluate the fitness function:
    IF the candidate solutions are locally convex (det(H(\nabla L(x_j, \lambda_i))) > 0) and the KT conditions hold, explore the convex neighborhood for optimal solutions and go to Step 7.
    ELSE update the position x_i and velocity v_i and go to Step 3.
Step 7: Print solutions.
    IF the particles' velocity and position have converged, halt.
    ELSE update the position x_i and velocity v_i and go to Step 3.

where n is the number of iterations.
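The loop in Steps 1-7 can be sketched in C++ as below. This is an illustrative reconstruction, not the authors' program: it runs the PSO updates (21) and (23) on problem P3 (13)-(16), and the classical test of Step 6 is abstracted into a single predicate, here stubbed with a feasibility check of (14)-(15); the full KT/Hessian test of this section would take its place.

#include <cstdio>
#include <cstdlib>

/* Illustrative HPSO-style skeleton for problem P3 (13)-(16); not the
   authors' code. Step 6's classical test is stubbed (see classical_check). */
const int P = 20;   /* number of particles */
const int D = 2;    /* decision variables  */

static double rand01(void) { return std::rand() / (double)RAND_MAX; }

static double f(const double x[D]) { return 2.0 * x[0] + x[1]; }      /* (13) */

static double violation(const double x[D]) {
    const double g1 = -16.0 * x[0] * x[1] + 1.0;                      /* (14) */
    const double g2 = 4.0 * x[0] * x[0] + 4.0 * x[1] * x[1] - 1.0;    /* (15) */
    return (g1 > 0.0 ? g1 : 0.0) + (g2 > 0.0 ? g2 : 0.0);
}

/* Step 6 stub: accept near-feasible candidates only. The analytical test
   (KT conditions and det(H) > 0) would be applied here instead. */
static bool classical_check(const double x[D]) { return violation(x) < 1e-6; }

static double fitness(const double x[D]) { return f(x) + 1e3 * violation(x); }

int main(void) {
    double x[P][D], v[P][D], pb[P][D], pbf[P], gb[D], gbf;
    const double w = 0.7, c1 = 1.5, c2 = 1.5;        /* within bounds (22) */
    for (int i = 0; i < P; ++i) {                    /* Steps 1-2 */
        for (int k = 0; k < D; ++k) {
            x[i][k] = rand01(); v[i][k] = 0.0; pb[i][k] = x[i][k];
        }
        pbf[i] = fitness(x[i]);
    }
    gbf = pbf[0]; gb[0] = pb[0][0]; gb[1] = pb[0][1];
    for (int n = 0; n < 200; ++n) {                  /* main loop */
        for (int i = 0; i < P; ++i) {
            const double fi = fitness(x[i]);         /* Steps 5-6 */
            if (fi < pbf[i]) {
                pbf[i] = fi;
                for (int k = 0; k < D; ++k) pb[i][k] = x[i][k];
            }
            if (fi < gbf && classical_check(x[i])) {
                gbf = fi;
                for (int k = 0; k < D; ++k) gb[k] = x[i][k];
            }
            for (int k = 0; k < D; ++k) {            /* Steps 3-4: (21), (23) */
                v[i][k] = w * v[i][k] + c1 * rand01() * (pb[i][k] - x[i][k])
                                      + c2 * rand01() * (gb[k] - x[i][k]);
                x[i][k] += v[i][k];
                if (x[i][k] < 0.0) x[i][k] = 0.0;    /* bounds (16) */
                if (x[i][k] > 1.0) x[i][k] = 1.0;
            }
        }
    }
    std::printf("best f = %.4f at (%.4f, %.4f)\n", gbf, gb[0], gb[1]); /* Step 7 */
    return 0;
}

With this stub the skeleton behaves as a penalized PSO; replacing classical_check with the \lambda_i computation of Step 5 and the determinant test of (31)-(33) yields the HPSO of this section.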

IV. COMPUTATIONAL RESULTS & ANALYSIS

The HPSO algorithm developed in this work was programmed in the C++ programming language on a personal computer (PC) with an Intel dual-core processor running at 2 GHz under the Windows Vista operating system. A Visual C++ compiler was used in this work.
A. Computational Results for Problem P1
In problem P1, the objective function f(x), the design variables, the number of iterations and the execution time obtained by the HPSO algorithm are compared against the modified PSO algorithm [18]. In problem P1, a better optimal value of the objective function f(x) is achieved by the HPSO algorithm than by the PSO algorithm [18]. In terms of constraint satisfaction, the HPSO algorithm performed very well.
The progression of the objective function f(x) with respect to the number of iterations n for the HPSO algorithm is shown in Fig. 1. In Fig. 1, the objective function f(x) obtained using the HPSO algorithm has a maximum value of 5703.96 at the 1st iteration and a minimum value of 2019.53 at the 31st iteration. The progression of the objective function with respect to the iterations using the HPSO algorithm for problem P1 is stable and convergent.

Figure 1: The objective function, f(x) with respect to the number of iterations, n for the HPSO algorithm for P1

The computational results are given in Table I.
TABLE I: COMPARISON OF THE OPTIMIZATION RESULTS FOR P1

P1                     HPSO      PSO [18]
f(x)                   2019.53   6059.71
x_1                    0.9594    0.8125
x_2                    4.4807    0.4375
x_3                    11.5579   42.0984
x_4                    75.8150   176.637
No. of iterations      31        -
Execution time (secs)  0.09      -

B. Computational Results for Problem P2
For problem P2, the objective function f(x), the design variables, the number of iterations and the execution time obtained by the HPSO algorithm are compared against the modified PSO algorithm [18].
The progression of the objective function f(x) with respect to the number of iterations n for the HPSO algorithm is shown in Fig. 2.

Figure 2: The objective function, f(x) with respect to the number of iterations, n for the HPSO algorithm for P2

In Fig. 2, the objective function f(x) obtained using the HPSO algorithm has a maximum value of 0.0105989 at the 1st iteration and a minimum value of 0.00820813 at the 13th iteration. The optimal value is reached at the 13th iteration with good constraint satisfaction. The computational results are given in Table II.

TABLE II: COMPARISON OF THE OPTIMIZATION RESULTS FOR P2

P2                     HPSO      PSO [15]
f(x)                   0.0082    0.0127
x_1                    0.0611    0.0517
x_2                    0.3880    0.3568
x_3                    3.663     11.2871
No. of iterations      13        -
Execution time (secs)  0.106     -
C. Computational Results for Problem P3
The objective function f(x), the design variables, the number of iterations and the execution time obtained by the HPSO algorithm in problem P3 were compared against the modified genetic algorithm (GA) approach [15]. The progression of the objective function f(x) with respect to the number of iterations n for the HPSO algorithm is shown in Fig. 3.

Figure 3: The objective function, f(x) with respect to the number of iterations, n for the HPSO algorithm for P3

In Fig. 3, the objective function f(x) obtained using the HPSO algorithm has a maximum value of 23.8422 at the 1st iteration and a minimum value of 0.736292 at the 56th iteration. The optimal value is reached at the 28th iteration with good constraint satisfaction. The computational results are given in Table III.

TABLE III: COMPARISON OF THE OPTIMIZATION RESULTS FOR P3

P3                     HPSO      GA [15]
f(x)                   0.7363    0.74178
x_1                    0.1315    0.1294
x_2                    0.4732    0.4830
No. of iterations      56        -
Execution time (secs)  0.1671    -

D. Computational Results for Problem P4
The objective function f(x), the design variables, the number of iterations and the execution time obtained by the HPSO algorithm in problem P4 are compared against the improved harmony search (IHS) approach [17]. The progression of the objective function f(x) with respect to the number of iterations n for the HPSO algorithm is shown in Fig. 4. In Fig. 4, the objective function f(x) obtained using the HPSO algorithm has a maximum value of 50835.3 at the 1st iteration and a minimum value of 12.9691 at the 70th iteration.

Figure 4: The objective function, f(x) with respect to the number of iterations, n for the HPSO algorithm for P4

The optimal value is reached at the 34th iteration with good constraint satisfaction. The computational results are given in Table IV.

TABLE IV: COMPARISON OF THE OPTIMIZATION RESULTS FOR P4

P4                     HPSO      IHS [17]
f(x)                   12.9691   13.5908
x_1                    2.2858    2.2468
x_2                    2.4862    2.3819
No. of iterations      70        -
Execution time (secs)  0.2113    -


V. DISCUSSION ON COMPUTATIONAL EXPERIMENTS

The objective function f(x) obtained using the HPSO algorithm is observed to decline in a decaying fashion with respect to the iterations for problem P1, where, as mentioned previously, the objective function decreases parabolically until convergence. The HPSO algorithm performs stable computation without divergent solutions.
From the results, it can be seen that the HPSO approach performs better in finding the optimal solution than the other approaches in [16], [17] and [18]. In terms of implementation, a pure PSO or meta-heuristic method is easier to develop than the HPSO approach, owing to the analytical component, which may involve lengthy derivations.
In P1, the PSO algorithm [15] is seen to have violated the constraint g_1(x) to a very negligible degree. The HPSO algorithm, however, satisfies all constraints in P1, P2, P3 and P4. Thus, it can be said that the HPSO algorithm performs well in terms of feasibility on these test problems. In problems P1, P2, P3 and P4, the HPSO algorithm is a better minimizer than the other algorithms in [16], [17] and [18]. The PSO algorithm may have slight difficulties in handling the nonlinearities existing in the problems, whereas the HPSO algorithm handles them very well. The HPSO algorithm also takes very little computational time, approximately a fraction of a second (see Tables I-IV).
Due to the employment of a more accurate fitness criterion (the KT conditions and the Hessian matrix), derived analytically from the problem statement, the adaptability of the HPSO algorithm to the problem is better than that of the other algorithms. This, in effect, minimizes the objective function further. Therefore, the analytical derivation of the fitness function improves the robustness of the algorithm. Hybridizing the HPSO method with algorithms like Tabu search (see [20], [21] and [22]) may provide a more efficient system for handling situations with multiple constraints in large-scale problems, and thus pave the way toward solutions closer to the global optimum. From this work, it can also be inferred that implementing more analytical methods to improve the fitness function would yield a more generic algorithm for solving a wide range of problems. Due to the PSO component of the HPSO algorithm, the stability and convergence of the computations are assured, as seen in Figs. 1-4. A global solution with better computational time may be obtained by strengthening the analytical methods employed in the fitness function of the HPSO algorithm. The stopping criterion for the HPSO algorithm is set such that the program halts when the KT conditions are satisfied, the determinant of the Hessian matrix is positive definite and the position/velocity of the particles have converged.

VI. CONCLUSIONS & RECOMMENDATIONS
In this work, a new local minimum was reached for the objective functions of all the test problems using the HPSO algorithm. The HPSO algorithm developed in this work is an enhancement of the PSO algorithm in terms of its capacity to handle highly non-convex problems. The KT conditions and the Hessian matrix improve the accuracy with which the PSO algorithm searches for optimal solutions. More numerical experiments (similar to this work) with different analytically enhanced intelligent techniques, such as the Genetic Algorithm (GA) [23], Genetic Programming (GP) [24], Tabu Search (TS) [20], [21], [22] and hybrid neuro-GP [25], should be carried out.

ACKNOWLEDGMENT
The authors would like to thank Universiti Teknologi
PETRONAS for supporting this work through a STIRF grant
(STIRF CODE NO: 90/10.11).
REFERENCES
[1] J. V. Outrata, M. Kocvara, and J. Zowe, Nonsmooth Approach to
Optimization Problems with Equilibrium Constraints, Nonconvex
Optimization and its Applications, Kluwer Academic Publishers,
Dordrecht, The Netherlands, 1998.
[2] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont,
Massachusetts, second ed, 1999.
[3] Y. Chen and M. Florian, The nonlinear bilevel programming problem: Formulations, regularity and optimality conditions, Optimization, Vol. 32, pp. 193-209, 1995.
[4] J.H. Holland, Adaptation in Natural and Artificial Systems: An
Introductory Analysis with Applications to Biology, Control and
Artificial Intelligence, MIT Press, USA. 1992.
[5] J.R. Koza, Genetic Programming: On the Programming of Computers by means of Natural Selection, MIT Press, USA, 1992.
[6] J. Kennedy and R. Eberhart, Particle Swarm Optimization, IEEE
Proceedings of the International Conference on Neural Networks:
Perth, Australia, 1995.
[7] N. Phuangpornpitak, W. Prommee, S.Tia and W. Phuangpornpitak, A
Study of Particle Swarm Technique for Renewable Energy Power
Systems, PEA-AIT International Conference on Energy and
Sustainable Development: Issues and Strategies,Thailand, pp.1-7, 2010.
[8] Vo Ngoc Dieu and Weerakorn Ongsakul, Economic Dispatch with
Emission and Transmission Constraints by Augmented Lagrange
Hopfield Network, Transaction in Power System
Optimization, www.pcoglobal.com/gjto.htm, pp. 77-83, 2010.
[9] V. N. Dieu and W. Ongsakul, Enhanced merit order and augmented
Lagrange Hopfield network for ramp rate constrained unit
commitment, Proc. of IEEE Power System Society General Meeting,
Canada, 2006.
[10] Y.Huang and C.Yu, Improved Lagrange Nonlinear Programming
Neural Networks for Inequality constraints, Proceedings of
International Joint Conference on Neural Networks, Orlando, Florida,
USA, 2007.
[11] H.W. Kuhn and A.W. Tucker, Nonlinear programming, Proceedings of the 2nd Berkeley Symposium, Berkeley: University of California Press, pp. 481-492, 1951.
[12] Binmore & Davies, Calculus Concepts and Methods, Cambridge University Press, p. 190, 2007.
[13] E. Sandgren, Nonlinear Integer and Discrete Programming in Mechanical Design Optimization, Journal of Mechanical Design - T. ASME, Vol. 112, No. 2, pp. 223-229, 1990.
[14] A. Belegundu., A Study of Mathematical Programming Methods for
Structural Optimization, PhD thesis, Department of Civil
Environmental Engineering, University of Iowa, Iowa, 1982.
[15] Swaney, R.E. Global solution of algebraic nonlinear programs.
AIChE Annl. Mtg., Chicago, IL. 1990.
[16] C. Cao, J. Gu, B. Jiao, Z. Xin and X. Gu, Optimizing Constrained Non-convex NLP Problems in Chemical Engineering Field by a Novel Modified Goal Programming Genetic Algorithm, GEC 2009, Shanghai, China, June 12-14, 2009.
[17] M. Mahdavi, M. Fesanghary and E. Damangir, An improved harmony search algorithm for solving optimization problems, Applied Mathematics and Computation, Vol. 188, pp. 1567-1579, 2007.
[18] C.A. Coello Coello, Solving Engineering Optimization Problems with the Simple Constrained Particle Swarm Optimizer, Informatica, Vol. 32, pp. 319-326, 2008.
[19] Harvey E. Rose, Linear Algebra: A Pure Mathematical Approach, Springer, pp. 57-60, 2002.
[20] F. Glover, Tabu Search, Part I, ORSA Journal on Computing, Vol.1,
No.3, pp. 190-206, 1989.
[21] J.A. Bland and G.P. Dawson, Tabu Search and Design Optimization, Computer-Aided Design, Vol. 23, No. 3, pp. 195-201, 1991.
[22] Arne Thesen, Design and Evaluation of Tabu Search Algorithms for Multiprocessor Scheduling, Journal of Heuristics, Vol. 4, pp. 141-160, 1998.
[23] J.H. Holland, Adaptation in Natural and Artificial Systems: An
Introductory Analysis with Applications to Biology, Control and
Artificial Intelligence, MIT Press, USA., 1992.
[24] J.R. Koza, Genetic Programming: On the Programming of Computers
by means of Natural Selection, MIT Press, USA, 1992.
[25] T. Ganesan, P. Vasant and I. Elamvazuthi, Optimization of nonlinear geological structure mapping using hybrid neuro-genetic techniques, Mathematical and Computer Modelling, doi:10.1016/j.mcm.2011.07.012, 2011.



