
TABLE 2.2
Gradient-Based Optimization Methods

Steepest Descent
  $S^0 = -\nabla f(x^0)$
  Solve $(S^0)^T \nabla f(x^0 + \alpha^0 S^0) = 0$ for $\alpha^0$
  $S^{k+1} = -\nabla f(x^{k+1})$
  Solve $(S^{k+1})^T \nabla f(x^{k+1} + \alpha^{k+1} S^{k+1}) = 0$ for $\alpha^{k+1}$

Conjugate Gradient
  $S^0 = -\nabla f(x^0)$
  $\alpha^0 = -\dfrac{\nabla^T f(x^0)\, S^0}{(S^0)^T H(x^0)\, S^0}$
  $S^{k+1} = -\nabla f(x^{k+1}) + S^k\, \dfrac{\nabla^T f(x^{k+1})\, \nabla f(x^{k+1})}{\nabla^T f(x^k)\, \nabla f(x^k)}$
  $\alpha^{k+1} = -\dfrac{\nabla^T f(x^{k+1})\, S^{k+1}}{(S^{k+1})^T H(x^{k+1})\, S^{k+1}}$

Newton's Method
  $S^0 = -[H(x^0)]^{-1} \nabla f(x^0)$
  $\alpha^0 = 1$
  $S^{k+1} = -[H(x^{k+1})]^{-1} \nabla f(x^{k+1})$
  $\alpha^{k+1} = 1$
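To make the steepest-descent column of Table 2.2 concrete, the following is a minimal NumPy sketch (not from the book) that applies the iteration $S^k = -\nabla f(x^k)$ with an exact line search to an assumed quadratic test function $f(x) = \tfrac{1}{2}x^T A x - b^T x$, for which the line-search condition $(S^k)^T \nabla f(x^k + \alpha^k S^k) = 0$ has a closed-form solution.

import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=500):
    # Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite)
    # using the steepest-descent column of Table 2.2 with exact line searches.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = A @ x - b                     # gradient of the quadratic at x^k
        if np.linalg.norm(g) < tol:
            break
        S = -g                            # search direction S^k = -grad f(x^k)
        # Exact line search: (S^k)^T grad f(x^k + alpha S^k) = 0 reduces to
        # alpha = (g^T g) / (g^T A g) for this quadratic.
        alpha = (g @ g) / (g @ (A @ g))
        x = x + alpha * S                 # x^{k+1} = x^k + alpha^k S^k
    return x

# Usage on an assumed 2x2 test problem:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(steepest_descent(A, b, np.zeros(2)))   # converges to the solution of A x = b

Newton's method replaces the line search with $\alpha^k = 1$ and the direction with $-[H(x^k)]^{-1}\nabla f(x^k)$, trading per-iteration cost (a Hessian solve) for faster convergence near the optimum.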

optimize  $Z = z(x)$
s.t.  $h(x) = 0$          (2.19)
Here, the main concern is to obtain an optimal solution for $z(x)$ that also complies with the set of equality constraints. Two strategies to ensure this are presented here: the method of Lagrange multipliers and the generalized reduced gradient method.

2.5.1 Method of Lagrange Multipliers


In this method, the optimization problem presented in Equation 2.19 is reformulated to obtain an objective function that involves the original objective, $z(x)$, and the entire set of equality constraints, $h_i(x) = 0$, where $i = 1, 2, \ldots, m$. The resultant expression is known as the Lagrangian function and is expressed as follows:
optimize  $L = z(x) + \sum_{i=1}^{m} \lambda_i h_i(x)$          (2.20)
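As a brief illustration (not taken from the text), consider the assumed toy problem of minimizing $z(x) = x_1^2 + x_2^2$ subject to $h(x) = x_1 + x_2 - 1 = 0$. The Lagrangian from Equation 2.20 is $L = x_1^2 + x_2^2 + \lambda(x_1 + x_2 - 1)$, and setting its partial derivatives with respect to $x_1$, $x_2$, and $\lambda$ equal to zero gives $x_1 = x_2 = 1/2$ and $\lambda = -1$. A short SymPy sketch of this calculation:

import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lambda", real=True)
z = x1**2 + x2**2              # original objective z(x)
h = x1 + x2 - 1                # single equality constraint h(x) = 0
L = z + lam * h                # Lagrangian of Equation 2.20 with m = 1

# Stationarity: set the partial derivatives of L with respect to x1, x2,
# and lambda equal to zero and solve the resulting system.
stationary = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam])
print(stationary)              # {x1: 1/2, x2: 1/2, lambda: -1}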
