
Generalized Reduced Gradient Method

Session: 2016-17

Submitted To:
Dr. Rajiv Kumar Dohare
(Assistant Prof., Chemical Engineering Dept.)

Submitted By:
KM Soni (1724)
Harshit Bajpai (1727)
Rajat K. Agrawal (1730)
Komal Dhanda (1744)
Pritam Agarwala (1747)
Introduction

The generalized reduced gradient (GRG) algorithm was first developed in the late 1960s by Jean Abadie and has since been refined by several other researchers.

It works by extending methods for linearly constrained problems to problems with nonlinear constraints.
Algorithm

1. Compute the gradient of f(x) at the current point xc, giving ∇f(xc).
2. If the current point xc is close enough to being optimal, stop.
3. Compute a search direction dc using the gradient ∇f(xc) and perhaps other information such as the previous search direction.
4. Determine how far to move along the current search direction dc, starting from the current point xc. This distance αc is most often an approximation of the value of α that minimizes the objective function f(xc + α dc) and is used to determine the next point xn = xc + αc dc.
5. Replace the current point xc by the next point xn and return to step 1.
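As a concrete illustration of steps 1 to 5, here is a minimal Python sketch of one such descent loop. The names (`gradient_descent`, `grad_f`) and the backtracking step-length rule are our illustrative assumptions, not part of the slides:

```python
import numpy as np

def gradient_descent(f, grad_f, x0, tol=1e-6, max_iter=500):
    """Minimal sketch of the five-step loop described above."""
    x_c = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x_c)                    # step 1: gradient at the current point
        if np.linalg.norm(g) < tol:        # step 2: close enough to optimal?
            break
        d_c = -g                           # step 3: steepest-descent direction
        alpha = 1.0                        # step 4: approximate the minimizing step
        while f(x_c + alpha * d_c) > f(x_c) - 1e-4 * alpha * (g @ g) and alpha > 1e-12:
            alpha *= 0.5                   # backtrack until sufficient decrease
        x_c = x_c + alpha * d_c            # step 5: replace x_c by x_n
    return x_c

# Example use on a trivial quadratic:
x_min = gradient_descent(lambda x: float(x @ x), lambda x: 2 * x, [3.0, -1.0])
```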
Generalized Reduced Gradient Method
ALGORITHM
Cost function:  y = y(x1, x2, ..., xn)
Constraints:    φ1(x1, x2, ..., xn) = 0
                φ2(x1, x2, ..., xn) = 0
                ...
                φm(x1, x2, ..., xn) = 0

n     : number of variables
n − m : number of decision variables
m     : number of constraints
Assumed: n = 4, m = 2

We want to optimize the cost function

    y = y(x1, x2, x3, x4)

subject to

    φ1(x1, x2, x3, x4) = 0
    φ2(x1, x2, x3, x4) = 0

Choose x3 and x4 as the two decision variables. Taking the total differential of each constraint:

    dφ1 = (∂φ1/∂x1) dx1 + (∂φ1/∂x2) dx2 + (∂φ1/∂x3) dx3 + (∂φ1/∂x4) dx4 = 0
    dφ2 = (∂φ2/∂x1) dx1 + (∂φ2/∂x2) dx2 + (∂φ2/∂x3) dx3 + (∂φ2/∂x4) dx4 = 0
Writing φij = ∂φi/∂xj, this becomes

    [φ11  φ12] [dx1]     [φ13  φ14] [dx3]
    [φ21  φ22] [dx2] = − [φ23  φ24] [dx4]
      (m×m)    (m×1)     (m×(n−m))  ((n−m)×1)

Thus, with J = [φ11 φ12; φ21 φ22] the (m×m) Jacobian of the dependent variables,

    [dx1]        [φ13  φ14] [dx3]
    [dx2] = −J⁻¹ [φ23  φ24] [dx4] = −J⁻¹ (k3 dx3 + k4 dx4)

where k3 = (φ13, φ23)ᵀ and k4 = (φ14, φ24)ᵀ.
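In code, the linearized constraints amount to a small linear solve: given chosen changes in the decision variables, the changes in the dependent variables follow from dx_dep = −J⁻¹ K dx_dec. A numpy sketch with made-up illustrative numbers:

```python
import numpy as np

J = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # constraint partials w.r.t. dependent vars x1, x2 (m x m)
K = np.array([[0.5, -1.0],
              [2.0,  0.5]])     # constraint partials w.r.t. decision vars x3, x4 (m x (n-m))
dx_dec = np.array([0.1, -0.2])  # chosen changes in x3, x4

# dx_dep = -J^{-1} K dx_dec, done as a solve rather than an explicit inverse
dx_dep = np.linalg.solve(J, -K @ dx_dec)
print(dx_dep)  # first-order changes in x1, x2 that keep phi1 = phi2 = 0
```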
A first-order expansion of the cost function about the old point gives

    ynew = yold + Σ_{i=1}^{n} (∂y/∂xi)|x_old · dxi

or, writing yi = ∂y/∂xi,

    ynew = yold + [y1  y2] (dx1, dx2)ᵀ + [y3  y4] (dx3, dx4)ᵀ

Substituting for dx1 and dx2 from the previous part, we have

    ynew = yold + (−[y1  y2] J⁻¹ k3 + y3) dx3 + (−[y1  y2] J⁻¹ k4 + y4) dx4

Therefore we define the generalized reduced gradient as

    GRGi = −[y1 ... ym](1×m) J⁻¹(m×m) ki(m×1) + yi ,   for i = n−m+1, ..., n

so that

    ynew = yold + Σ_{i=n−m+1}^{n} GRGi · Δxi

If GRGi > 0, choose Δxi < 0 for minimization.
If GRGi < 0, choose Δxi > 0 for minimization.
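The boxed formula maps directly onto a few lines of numpy. This is a sketch under our own naming (`reduced_gradient` is not from the slides); the caller supplies the partial derivatives:

```python
import numpy as np

def reduced_gradient(grad_y_dep, grad_y_dec, J, K):
    """GRGi = -[y1 ... ym] J^{-1} ki + yi for every decision variable i.

    grad_y_dep: (m,)     dy/dx for the dependent variables
    grad_y_dec: (n-m,)   dy/dx for the decision variables
    J:          (m, m)   constraint partials w.r.t. dependent variables
    K:          (m, n-m) constraint partials w.r.t. decision variables (columns ki)
    """
    w = np.linalg.solve(J.T, grad_y_dep)  # w = J^{-T} grad_y_dep, avoids forming J^{-1}
    return grad_y_dec - K.T @ w
```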
The overall procedure (flowchart):

1. Choose the (n − m) decision variables and their step sizes Δxi.
2. Initialize the decision variables.
3. Move towards the constraints (solve for the dependent variables).
4. Calculate GRGi.
5. If GRGi > 0, take Δxi < 0; if GRGi < 0, take Δxi > 0. Set xi+1 = xi + Δxi and move towards the constraints again.
6. Estimate yi+1 = yi + Σ_{i=n−m+1}^{n} GRGi · Δxi. If yi+1 ≥ yi, decrease Δxi and repeat from step 3; if yi+1 < yi, accept the move and return to step 4.
7. When GRGi ≈ 0 for all i, stop: xOptimum = xi+1.
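Read as code, the flowchart is an outer loop over trial steps in the decision variables. A hedged Python sketch, where `solve_constraints` stands in for the "move towards constraints" box and `grg` for the reduced-gradient evaluation (both placeholder names of ours):

```python
import numpy as np

def grg_minimize(y, grg, solve_constraints, x_dec0, step, tol=1e-6, max_iter=100):
    x_dec = np.asarray(x_dec0, dtype=float)
    x = solve_constraints(x_dec)               # move onto the constraints
    for _ in range(max_iter):
        g = grg(x)                             # calculate GRGi
        if np.all(np.abs(g) < tol):            # GRGi ~ 0 for all i: optimum found
            break
        trial_dec = x_dec - np.sign(g) * step  # GRGi > 0 -> dxi < 0, and vice versa
        trial = solve_constraints(trial_dec)   # restore feasibility
        if y(trial) < y(x):                    # y decreased: accept the move
            x_dec, x = trial_dec, trial
        else:                                  # y did not decrease: shrink the step
            step = step * 0.5
    return x
```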
EXAMPLE: Minimize

    f(x1, x2, x3) = (x1 − x2)² + (x2 − x3)⁴

subject to

    g1(X) = x1(1 + x2²) + x3⁴ − 3 = 0
    −3 ≤ xi ≤ 3,  i = 1, 2, 3

STEP 1:
We choose arbitrarily the independent and dependent variables as

    Y = (y1, y2)ᵀ = (x1, x2)ᵀ ,   Z = {z1} = {x3}

Let the starting vector be

    X1 = (−2.6, 2, 2)ᵀ

with f(X1) = 21.16.
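The starting data are easy to verify numerically (a quick check of our own, not part of the slides):

```python
import numpy as np

f  = lambda x: (x[0] - x[1])**2 + (x[1] - x[2])**4
g1 = lambda x: x[0] * (1 + x[1]**2) + x[2]**4 - 3

X1 = np.array([-2.6, 2.0, 2.0])
print(f(X1))   # ~ 21.16, as stated
print(g1(X1))  # ~ 0, so X1 satisfies the constraint
```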

STEP 2: Compute the GRG at X1. The required partial derivatives are

    ∂f/∂x1 = 2(x1 − x2)
    ∂f/∂x2 = −2(x1 − x2) + 4(x2 − x3)³
    ∂f/∂x3 = −4(x2 − x3)³
    ∂g1/∂x1 = 1 + x2²
    ∂g1/∂x2 = 2 x1 x2
    ∂g1/∂x3 = 4 x3³

We find, at X1:

    ∇Y f = (∂f/∂x1, ∂f/∂x2)ᵀ|X1
         = (2(−2.6 − 2),  −2(−2.6 − 2) + 4(2 − 2)³)ᵀ
         = (−9.2, 9.2)ᵀ

    ∇Z f = ∂f/∂x3 |X1 = −4(x2 − x3)³ |X1 = 0

    [C] = [∂g1/∂x1  ∂g1/∂x2]X1 = [5  −10.4]

    [D] = [∂g1/∂x3]X1 = [32]

    [D]⁻¹ = [1/32] ,  so  [D]⁻¹[C] = (1/32)[5  −10.4] = [0.15625  −0.325]

    GR = ∇Y f − ([D]⁻¹[C])ᵀ ∇Z f
       = (−9.2, 9.2)ᵀ − (0.15625, −0.325)ᵀ (0)
       = (−9.2, 9.2)ᵀ
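The same numbers fall out of a direct numpy computation (a verification sketch; here Y = (x1, x2) are the independent variables and Z = {x3} the dependent one, so [D] plays the role of J):

```python
import numpy as np

X1 = np.array([-2.6, 2.0, 2.0])
grad_f = np.array([2*(X1[0] - X1[1]),
                   -2*(X1[0] - X1[1]) + 4*(X1[1] - X1[2])**3,
                   -4*(X1[1] - X1[2])**3])
C = np.array([[1 + X1[1]**2, 2*X1[0]*X1[1]]])  # [5, -10.4]
D = np.array([[4*X1[2]**3]])                   # [32]

G_R = grad_f[:2] - np.linalg.solve(D, C).T @ grad_f[2:]
print(G_R)  # [-9.2, 9.2], matching the hand calculation
```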

STEP 3: Since the components of GR are not zero, the point X1 is not optimum, and hence we go to step 4.

STEP 4: We use the steepest descent method and take the search direction as

    S = −GR = (9.2, −9.2)ᵀ
STEP 5: We find the optimal step length λ along S.

(a) Considering the design variables, we use eq. (7.111) to obtain:

    For y1 = x1:  λ = (3 − (−2.6)) / 9.2 = 0.6087
    For y2 = x2:  λ = (−3 − 2) / (−9.2) = 0.5435

Thus the smaller value gives λ1 = 0.5435. The corresponding change in the dependent variable is

    T = −([D]⁻¹[C]) S = −[0.15625  −0.325] (9.2, −9.2)ᵀ = −4.4275

and hence:

    For z1 = x3:  λ = (−3 − 2) / (−4.4275) = 1.1293

Thus, λ2 = 1.1293.
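Both λ values are just distances to the box bounds −3 ≤ xi ≤ 3 along the move direction; a small helper (our own sketch) makes that explicit:

```python
def max_step_to_bounds(x, direction, lower=-3.0, upper=3.0):
    """Largest lam >= 0 keeping lower <= x + lam*direction <= upper."""
    lams = []
    for xi, di in zip(x, direction):
        if di > 0:
            lams.append((upper - xi) / di)
        elif di < 0:
            lams.append((lower - xi) / di)
    return min(lams) if lams else float("inf")

print(max_step_to_bounds([-2.6, 2.0], [9.2, -9.2]))  # 0.5435 (= lam1)
print(max_step_to_bounds([2.0], [-4.4275]))          # 1.1293 (= lam2)
```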
(b) The upper bound on λ is given by the smaller of λ1 and λ2, which is equal to 0.5435. By expressing

    X = (Y, Z)ᵀ + λ (S, T)ᵀ

we obtain

    X = (x1, x2, x3)ᵀ = (−2.6 + 9.2λ,  2 − 9.2λ,  2 − 4.4275λ)ᵀ

and hence

    f(λ) = f(X) = (−2.6 + 9.2λ − 2 + 9.2λ)² + (2 − 9.2λ − 2 + 4.4275λ)⁴
                = 518.78064 λ⁴ + 338.56 λ² − 169.28 λ + 21.16

df/dλ = 0 gives

    2075.1225 λ³ + 677.12 λ − 169.28 = 0

from which we can find the root as λ* ≈ 0.22. Since λ* is less than the upper bound value 0.5435, we use λ*.
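The root of the cubic can be confirmed numerically (verification sketch):

```python
import numpy as np

# df/dlam = 2075.1225 lam^3 + 677.12 lam - 169.28 = 0
roots = np.roots([2075.1225, 0.0, 677.12, -169.28])
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(real_roots)  # one real root, approximately 0.22
```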
(c) The new vector Xnew is given by

    Xnew = (Yold + λ* S,  Zold + λ* T)ᵀ
         = (−2.6 + 0.22(9.2),  2 − 0.22(9.2),  2 − 0.22(4.4275))ᵀ
         = (−0.576, −0.024, 1.02595)ᵀ

with

    dY = λ* S = (2.024, −2.024)ᵀ ,   dZ = λ* T = {−0.97405}
Now we need to check whether this vector is feasible. Since

    g1(Xnew) = (−0.576)[1 + (−0.024)²] + (1.02595)⁴ − 3 = −2.4684 ≠ 0

the vector Xnew is infeasible. Hence, we hold Ynew constant and modify Znew using Newton's method:

    dZ = [D]⁻¹ [−g(X) − [C] dY]

Since

    [D] = [dg1/dz1] = [4 x3³] = [4(1.02595)³] = [4.319551]
    g1(X) = {−2.4684}
    [C] = [∂g1/∂y1  ∂g1/∂y2] = [−1.104  −3.5258]

we get

    dz = (1 / 4.319551) (2.4684 − [−1.104  −3.5258] (2.024, −2.024)ᵀ) = −0.5633

We have

    znew = zold + dz = 2 − 0.5633 = 1.4367


so that

    Xnew = (Yold + dY,  Zold + dZ)ᵀ = (−0.576, −0.024, 1.4367)ᵀ

The constraint becomes

    g1(Xnew) = (−0.576)[1 + (−0.024)²] + (1.4367)⁴ − 3 = 0.0642 ≠ 0


Since this Xnew is still infeasible, we would apply Newton's method again at the current Xnew. In the present case, instead of repeating the Newton iteration, we can find the value of Znew = {x3}new directly by satisfying the constraint:

    g1(X) = (−0.576)[1 + (−0.024)²] + x3⁴ − 3 = 0

which gives

    x3 = (2.4237)^0.25 = 1.2477

Hence

    Xnew = (−0.576, −0.024, 1.2477)ᵀ
    f(Xnew) = (−0.576 + 0.024)² + (−0.024 − 1.2477)⁴ = 2.9201
Next we go to step 1.

STEP 1:
We do not have to change the set of independent and dependent variables, and hence we go to the next step.

STEP 2:
We compute the GRG at the current X:

    ∇Y f = (∂f/∂x1, ∂f/∂x2)ᵀ
         = (2(−0.576 + 0.024),  −2(−0.576 + 0.024) + 4(−0.024 − 1.2477)³)ᵀ
         = (−1.104, −7.1225)ᵀ

    ∇Z f = ∂f/∂z1 = ∂f/∂x3 = −4(−0.024 − 1.2477)³ = 8.2265

    [C] = [∂g1/∂x1  ∂g1/∂x2] = [1 + (−0.024)²  2(−0.576)(−0.024)] = [1.000576  0.027648]

    [D] = [∂g1/∂x3] = [4 x3³] = [4(1.2477)³] = [7.7694]

    [D]⁻¹[C] = (1 / 7.7694)[1.000576  0.027648] = [0.128784  0.003558]

    GR = ∇Y f − ([D]⁻¹[C])ᵀ ∇Z f
       = (−1.104, −7.1225)ᵀ − (0.128784, 0.003558)ᵀ (8.2265)
       = (−2.1634, −7.1518)ᵀ
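As before, the hand computation can be reproduced in numpy (verification sketch):

```python
import numpy as np

X = np.array([-0.576, -0.024, 1.2477])
grad_Y = np.array([2*(X[0] - X[1]),
                   -2*(X[0] - X[1]) + 4*(X[1] - X[2])**3])  # [-1.104, -7.1225]
grad_Z = np.array([-4*(X[1] - X[2])**3])                    # [8.2265]
C = np.array([[1 + X[1]**2, 2*X[0]*X[1]]])                  # [1.000576, 0.027648]
D = np.array([[4*X[2]**3]])                                 # [7.7694]

G_R = grad_Y - np.linalg.solve(D, C).T @ grad_Z
print(G_R)  # approximately [-2.1634, -7.1518]
```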

Since GR ≠ 0, we need to proceed to the next step.


It can be seen that the value of the objective function has been reduced from an initial value of 21.16 to 2.9201 in one iteration.
