
Nonlinear Programming
Line Search Techniques & Gradient-Based Methods
Line Search

Line search techniques are in essence optimization algorithms for one-dimensional minimization problems. They are often regarded as the backbone of nonlinear optimization algorithms.

Typically, these techniques search a bracketed interval. Often, unimodality is assumed.

Unimodality is a term used in several contexts in mathematics. Originally, it relates to possessing a unique mode.
Unimodal function

A unimodal function is one that has only one peak in a given interval. Thus a function of one variable is said to be unimodal on a given interval [a, b] if it has either a unique minimum or a unique maximum on [a, b].
Mathematically: let x* be a minimum point of the function f(x), which is unimodal on [a, b]. Then for any two points x_1 < x_2 in [a, b]:

(i) f(x_1) > f(x*) for x_1 < x*, and
(ii) f(x*) < f(x_2) for x_2 > x*.
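For example, f(x) = (x - 1)^2 is unimodal on [0, 3]: it decreases toward its unique minimum at x* = 1 and increases afterwards, so both conditions above hold.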
Direct Search Method / Line Search

The idea of direct search methods is to identify the interval of uncertainty that is known to include the optimum solution point.

Measure of effectiveness: let

L_1 : width of the initial interval of uncertainty, and
L_n : width of the interval of uncertainty after n experiments.

Then the measure of effectiveness of any search technique is defined as the reduction ratio

e = L_n / L_1.
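As a quick illustration (numbers assumed, not from the slides): if a technique narrows an initial interval of width L_1 = 1 down to L_n = 0.05 after n experiments, its measure of effectiveness is e = 0.05 / 1 = 0.05, i.e. a 95% reduction of the uncertainty.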
The procedure locates the optimum by iteratively narrowing the interval of uncertainty to any desired level of accuracy.

Two algorithms to study:
Fibonacci Search Method
Golden Section Method
Fibonacci Search Method

The search interval is reduced according to Fibonacci numbers, which are calculated as

F_n = F_{n-1} + F_{n-2} for n >= 2, where F_0 = 1 and F_1 = 1.

The first few Fibonacci numbers are

F_0 = 1, F_1 = 1, F_2 = 2, F_3 = 3, F_4 = 5, F_5 = 8, F_6 = 13, F_7 = 21, F_8 = 34, ...

This property of the Fibonacci numbers is used to create a search algorithm that requires only one function evaluation at each iteration.
Fibonacci Search Method

Let L_1 = b - a. Then choose x_1 and x_2 in [a, b] such that

x_1 = a + (F_{N-2} / F_N) L_1
x_2 = a + (F_{N-1} / F_N) L_1

(the two interior points are placed symmetrically inside the interval, a < x_1 < x_2 < b).
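A minimal Python sketch of the Fibonacci search, assuming a unimodal objective f on [a, b]; the function name, the eps nudge at the final step, and the midpoint return value are implementation choices of this sketch, not part of the slides:

```python
def fibonacci_search(f, a, b, n, eps=1e-6):
    """Minimize a unimodal function f on [a, b] using n function evaluations.

    The final interval of uncertainty has width roughly (b - a) / F_n;
    the midpoint of that interval is returned. Assumes n >= 3.
    """
    # Fibonacci numbers F_0 .. F_n with F_0 = F_1 = 1.
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])

    L1 = b - a
    # Interior points as on the slide:
    #   x1 = a + (F_{N-2}/F_N) L1,  x2 = a + (F_{N-1}/F_N) L1
    x1 = a + F[n - 2] / F[n] * L1
    x2 = a + F[n - 1] / F[n] * L1
    f1, f2 = f(x1), f(x2)

    for k in range(1, n - 1):
        if f1 > f2:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - k - 1] / F[n - k] * (b - a)
            if k == n - 2:                       # last step: points coincide,
                x2 += eps                        # so nudge the new one slightly
            f2 = f(x2)
        else:                                    # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - k - 2] / F[n - k] * (b - a)
            if k == n - 2:
                x1 -= eps
            f1 = f(x1)

    # Final comparison using the eps-shifted point.
    if f1 > f2:
        a = x1
    else:
        b = x2
    return 0.5 * (a + b)
```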
Example 1

Find the minimum of f(x) = x^2 - 2x, 0 <= x <= 1.5, within an interval of uncertainty of 0.25 L_0, where L_0 is the original interval of uncertainty.
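Assuming the sketch above, Example 1 can be run as follows. The required ratio L_n / L_0 <= 0.25 means 1 / F_n <= 0.25, so n = 4 evaluations (F_4 = 5, reduction 1/5 = 0.2) are enough; the true minimizer of x^2 - 2x is x = 1:

```python
x_best = fibonacci_search(lambda x: x**2 - 2 * x, 0.0, 1.5, n=4)
print(x_best)   # midpoint of a final interval of width ~0.3 that contains x = 1
```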
Example 2

Find the minimum of f(x) = (x - 1)(x - 2)(x - 3), 1 <= x <= 3, within an interval of uncertainty of 0.13 L_0, where L_0 is the original interval of uncertainty.
Golden Section Search Method

For large N, the Fibonacci fraction F_{N-1} / F_N converges to the golden section ratio (0.618034...).

Just like the Fibonacci method, this method maintains a uniform reduction strategy. If I_k denotes the interval of uncertainty at iteration k, then

I_k = I_{k+1} + I_{k+2}   and   τ = I_{k+1} / I_k = 0.618...

This number τ is referred to as the golden section ratio.
Golden Section Search Method

Let L_1 = b - a. Then choose x_1 and x_2 in [a, b] such that

x_1 = a + (1 - τ) L_1 = b - τ L_1
x_2 = a + τ L_1 = b - (1 - τ) L_1

(again a < x_1 < x_2 < b, placed symmetrically inside the interval).
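A minimal Python sketch of the golden section search under the same unimodality assumption; the tolerance-based stopping rule and the midpoint return value are choices made for this sketch:

```python
import math

def golden_section_search(f, a, b, tol=1e-5):
    """Minimize a unimodal function f on [a, b] by golden section search.

    Each iteration shrinks the interval by the factor tau ~ 0.618 and stops
    once its width falls below tol; the midpoint of the final interval is returned.
    """
    tau = (math.sqrt(5.0) - 1.0) / 2.0      # golden section ratio, ~0.618034
    x1 = a + (1 - tau) * (b - a)            # x1 = a + (1 - tau) L1, as on the slide
    x2 = a + tau * (b - a)                  # x2 = a + tau L1
    f1, f2 = f(x1), f(x2)

    while b - a > tol:
        if f1 > f2:                         # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + tau * (b - a)
            f2 = f(x2)
        else:                               # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (1 - tau) * (b - a)
            f1 = f(x1)

    return 0.5 * (a + b)
```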
Unconstrained Gradient-Based Optimization Methods

Gradient-based techniques are motivated by the fact that f decreases most rapidly at a point P in R^n in the direction of -∇f(P).

The iterates {x^(k)}, which converge to the minimizer x* of f, are computed by an iterative procedure of the form

x^(k+1) = x^(k) + α^(k) d^(k).

At each step k, α^(k) minimizes the scalar-valued function

g(α) = f(x^(k) - α ∇f(x^(k))).
Unconstrained Gradient-Based Optimization Methods

Three methods to study:
Steepest Descent Method
Conjugate Gradient Method
Newton Method
Steepest Descent

d^(k) = -∇f(x^(k))

x^(k+1) = x^(k) - α^(k) ∇f(x^(k)),   with α^(k) > 0.

To determine α^(k), consider g(α^(k)) = f(x^(k) - α^(k) ∇f(x^(k))), which is a function of α^(k).
Steepest Descent

Possibilities for choosing α^(k):

(1) Constant step size, i.e. α^(k) = α = constant:

x^(k+1) = x^(k) - α ∇f(x^(k))

Advantage: simple.
Disadvantage: no guidance on which value of α to choose.
If α is too large: the iteration may diverge.
If α is too small: convergence is very slow.

(2) Variable step size.
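A minimal sketch of constant-step steepest descent in Python; the stopping rule, the example objective f(x, y) = x^2 + 2y^2, and its hand-coded gradient are assumptions made for illustration:

```python
import numpy as np

def steepest_descent(grad_f, x0, alpha=0.1, tol=1e-6, max_iter=1000):
    """Steepest descent with a constant step size alpha.

    grad_f returns the gradient of f at a point; iteration stops when the
    gradient norm falls below tol or after max_iter steps.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g                   # x^(k+1) = x^(k) - alpha * grad f(x^(k))
    return x

# Illustration with an assumed objective f(x, y) = x^2 + 2 y^2 (gradient hand-coded):
x_star = steepest_descent(lambda x: np.array([2.0 * x[0], 4.0 * x[1]]),
                          x0=[3.0, -2.0], alpha=0.1)
print(x_star)                               # approaches the minimizer (0, 0)
```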
Newton Method

Starting with k = 0, the minimizer x* of f is computed by the following iterative procedure:

x^(k+1) = x^(k) - [H(x^(k))]^{-1} ∇f(x^(k))

Advantages/Disadvantages:
Good properties (fast convergence) if started near the solution.
However, it needs modifications if started far away from the solution.
Also, the (inverse) Hessian is expensive to calculate.
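A minimal sketch of the Newton iteration, assuming callables for the gradient and Hessian are available; solving H d = ∇f(x) instead of forming the inverse Hessian is a standard implementation choice, not something stated on the slide:

```python
import numpy as np

def newton_method(grad_f, hess_f, x0, tol=1e-8, max_iter=50):
    """Newton's method: x^(k+1) = x^(k) - [H(x^(k))]^{-1} grad f(x^(k)).

    The Newton step is obtained by solving H d = grad f(x) rather than
    explicitly inverting the Hessian.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess_f(x), g)   # Newton direction
        x = x - d
    return x
```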
Conjugate Gradient Method

Conjugate direction methods can be regarded as somewhat in between steepest descent and Newton's method, having the positive features of both of them.

Motivation: the desire to accelerate the slow convergence of steepest descent, while avoiding the expensive evaluation, storage, and inversion of the Hessian.

Application: Conjugate direction methods are invariably invented and analyzed for the quadratic problem:

Minimize f(x) = (1/2) x^T Q x - b^T x

Note: the condition for optimality is ∇f = Qx - b = 0, or Qx = b (a linear equation).
Conjugate Gradient Method

Definition: Given a symmetric matrix Q, two vectors d_1 and d_2 are said to be Q-orthogonal, or Q-conjugate (with respect to Q), if d_1^T Q d_2 = 0.

Note that orthogonal vectors (d_1^T d_2 = 0) are a special case of conjugate vectors (take Q = I).

If nonzero vectors d_0, d_1, ..., d_k are conjugate with respect to a symmetric positive definite n x n matrix Q, then they are linearly independent.
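As a small worked example (the matrix and vectors are assumed for illustration): with Q = [[2, 1], [1, 2]], the vectors d_1 = (1, 0)^T and d_2 = (1, -2)^T are Q-conjugate, since Q d_2 = (0, -3)^T and hence d_1^T Q d_2 = 0, even though d_1 and d_2 are not orthogonal in the ordinary sense (d_1^T d_2 = 1).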
Conjugate Gradient Method

The conjugate gradient method is the conjugate direction method that is obtained by selecting the successive direction vectors as a conjugate version of the successive gradients obtained as the method progresses. You generate the conjugate directions as you go along.

Search direction at iteration k:

d_{k+1} = -g_{k+1} + β_k d_k

Three advantages:
1) The gradient is always nonzero and linearly independent of all previous direction vectors.
2) A simple formula determines the new direction; it is only slightly more complicated than steepest descent.
3) The process makes good progress because it is based on gradients.
Conjugate Gradient Method (Algorithm)

1. Start at any x_0. Find g_0 = ∇f(x_0) = Q x_0 - b and set d_0 = -g_0.

2. Using d_k, calculate the new point

x_{k+1} = x_k + α_k d_k,   where α_k = -(g_k^T d_k) / (d_k^T Q d_k)   and g_k = ∇f(x_k).

3. Calculate the new conjugate gradient direction d_{k+1} as

d_{k+1} = -g_{k+1} + β_k d_k,   where β_k = (g_{k+1}^T Q d_k) / (d_k^T Q d_k).
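A minimal Python sketch of this algorithm for the quadratic problem f(x) = (1/2) x^T Q x - b^T x; the small 2 x 2 test system at the end is assumed data for illustration only:

```python
import numpy as np

def conjugate_gradient(Q, b, x0, tol=1e-10):
    """Conjugate gradient method for f(x) = 0.5 x^T Q x - b^T x,
    i.e. for solving Q x = b with Q symmetric positive definite.
    """
    x = np.asarray(x0, dtype=float)
    g = Q @ x - b                           # g_0 = grad f(x_0)
    d = -g                                  # d_0 = -g_0
    for _ in range(len(b)):                 # at most n steps for an n x n Q
        if np.linalg.norm(g) < tol:
            break
        Qd = Q @ d
        alpha = -(g @ d) / (d @ Qd)         # alpha_k = -g_k^T d_k / d_k^T Q d_k
        x = x + alpha * d                   # x_{k+1} = x_k + alpha_k d_k
        g = Q @ x - b                       # g_{k+1}
        beta = (g @ Qd) / (d @ Qd)          # beta_k = g_{k+1}^T Q d_k / d_k^T Q d_k
        d = -g + beta * d                   # d_{k+1} = -g_{k+1} + beta_k d_k
    return x

# Assumed 2 x 2 test data: the minimizer is reached in (at most) 2 steps.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(Q, b, x0=np.zeros(2)))   # ~ [0.0909, 0.6364]
```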
Conjugate Gradient Method (Algorithm)

Theorem: Let f(x) be a quadratic function with a symmetric positive definite Hessian matrix. If we successively take optimal steps along the conjugate directions d_0, d_1, ..., d_{n-1}, we reach the minimum point in exactly n iterations.

Note: Non-quadratic conjugate gradient methods: Fletcher-Reeves method.