OPTIMISATION PROBLEM
NEED IDENTIFICATION
CHOOSE DESIGN VARIABLES
FORMULATE CONSTRAINTS
FORMULATE OBJECTIVE FUNCTION
SET UP VARIABLE BOUNDS
CHOOSE OPTIMISATION ALGORITHM
OBTAIN SOLUTION(S)
Using Symmetry
By symmetry, A1 = area of AB and DE, A2 = area of AC and CE, A3 = area of BC and CD, A4 = area of BD (E = 200 GPa). Member forces (tension positive):
In AB (= DE): −(P/2) csc θ
In BC (= CD): +(P/2) csc γ
In AC (= CE): +(P/2) cot θ
In BD: −(P/2)(cot θ + cot γ)
Stress Constraints
P csc θ / (2 A1) ≤ S_yc
P cot θ / (2 A2) ≤ S_yt
P csc γ / (2 A3) ≤ S_yt
P (cot θ + cot γ) / (2 A4) ≤ S_yc
Stability consideration
Buckling of the compression members AB, BD, and DE. Euler buckling conditions:
P / (2 sin θ) ≤ π E A1² / (1.281 l²)
P (cot θ + cot γ) / 2 ≤ π E A4² / (5.76 l²)
Stiffness Constraint
Maximum vertical deflection at C: δ_max = 2 mm. Thus
(P l / E)(0.566/A1 + 0.500/A2 + 2.236/A3 + 2.700/A4) ≤ δ_max
Objective Function
Hence, minimize the total material volume:
1.132 A1 l + 2 A2 l + 1.789 A3 l + 1.2 A4 l
Bounds of Variables
Set lower and upper bounds for the four cross-sectional areas. Say all four areas lie between 10 and 500 mm²:
10 × 10⁻⁶ ≤ A1, A2, A3, A4 ≤ 500 × 10⁻⁶ m²
Optimization Problem Formulation
Minimize
1.132 A1 l + 2 A2 l + 1.789 A3 l + 1.2 A4 l
subject to
S_yc − P / (2 A1 sin θ) ≥ 0,
S_yt − P cot θ / (2 A2) ≥ 0,
S_yt − P / (2 A3 sin γ) ≥ 0,
S_yc − P (cot θ + cot γ) / (2 A4) ≥ 0,
π E A1² / (1.281 l²) − P / (2 sin θ) ≥ 0,
π E A4² / (5.76 l²) − P (cot θ + cot γ) / 2 ≥ 0,
δ_max − (P l / E)(0.566/A1 + 0.500/A2 + 2.236/A3 + 2.700/A4) ≥ 0,
10 × 10⁻⁶ ≤ A1, A2, A3, A4 ≤ 500 × 10⁻⁶
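This formulation is a standard constrained NLP and can be handed to an off-the-shelf solver. The sketch below uses SciPy's SLSQP; P, l, θ, γ, S_yt and S_yc are assumed placeholder values chosen only to make the script run (only E = 200 GPa, δ_max = 2 mm and the area bounds come from the slides).

```python
# Minimal sketch: the truss formulation above fed to SciPy's SLSQP.
# P, l, theta, gamma, S_yt, S_yc are ASSUMED placeholder values.
import numpy as np
from scipy.optimize import minimize

E = 200e9                                       # Pa (from the slides)
dmax = 2e-3                                     # m, max deflection at C (from the slides)
P, l = 100e3, 1.0                               # assumed load (N) and span parameter (m)
theta, gamma = np.radians(60), np.radians(30)   # assumed member angles
S_yt, S_yc = 250e6, 150e6                       # assumed yield strengths (Pa)
cot = lambda a: 1.0 / np.tan(a)

def volume(A):                                  # objective: total material volume
    return l * (1.132*A[0] + 2*A[1] + 1.789*A[2] + 1.2*A[3])

cons = [  # all written as g(A) >= 0, as in the formulation
    {'type': 'ineq', 'fun': lambda A: S_yc - P/(2*A[0]*np.sin(theta))},
    {'type': 'ineq', 'fun': lambda A: S_yt - P*cot(theta)/(2*A[1])},
    {'type': 'ineq', 'fun': lambda A: S_yt - P/(2*A[2]*np.sin(gamma))},
    {'type': 'ineq', 'fun': lambda A: S_yc - P*(cot(theta)+cot(gamma))/(2*A[3])},
    {'type': 'ineq', 'fun': lambda A: np.pi*E*A[0]**2/(1.281*l**2) - P/(2*np.sin(theta))},
    {'type': 'ineq', 'fun': lambda A: np.pi*E*A[3]**2/(5.76*l**2) - P*(cot(theta)+cot(gamma))/2},
    {'type': 'ineq', 'fun': lambda A: dmax - (P*l/E)*(0.566/A[0] + 0.5/A[1] + 2.236/A[2] + 2.7/A[3])},
]
bounds = [(10e-6, 500e-6)] * 4                  # 10 mm^2 <= A_i <= 500 mm^2
res = minimize(volume, x0=[100e-6]*4, method='SLSQP', bounds=bounds, constraints=cons)
print(res.x, res.fun)
```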
Constraints
- Equality: h(x) = const
- Inequality: g(x) ≤ y
Objective functions
- Maximize or minimize.
- The principle of duality can be used to convert a maximization problem into a minimization problem and vice versa: maximizing f(x) is equivalent to minimizing −f(x).
Minimize f(x)
subject to
g_j(x) ≥ 0,  j = 1, 2, 3, …, J
h_k(x) = 0,  k = 1, 2, 3, …, K
x_i^L ≤ x_i ≤ x_i^U,  i = 1, 2, 3, …, N
[Figure: half-car suspension model with sprung mass m_s (bounce q2 and pitch q3), front and rear unsprung masses m_f, m_r (q1, q4), spring stiffnesses k_f, k_r and dampers α_f, α_r.]
F2 = k_f d2,  F3 = α_f ḋ2,  F4 = k_r d4,  F5 = α_r ḋ4,  F6 = k_r d3
d4 = q2 − l2 q3 − q4
Constraint
Say maximum jerk: max |d³q2/dt³| ≤ 18
Objective function
Minimize max |q2(t)| / A
where A = road excitation amplitude
Limits
0 ≤ k_f, k_r ≤ 2 kg/mm
0 ≤ α_f, α_r ≤ 300 kg/(m/s)
Subject to 18 − max(|d³q2/dt³|) ≥ 0
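To make the formulation concrete, the sketch below shows how it could be wired to a solver. Here simulate_q2 is a hypothetical placeholder (not from the slides) standing in for an integration of the half-car equations of motion; the jerk bound 18 and the variable limits follow the slides, while A, dt and the starting point are assumed.

```python
# Sketch only: simulate_q2() is a hypothetical placeholder for integrating
# the half-car model and returning samples of q2(t) at time step dt.
import numpy as np
from scipy.optimize import minimize

A = 0.05          # assumed road excitation amplitude (m)
dt = 1e-3         # assumed sampling step of the simulation (s)

def simulate_q2(kf, kr, alpha_f, alpha_r):
    """Integrate the half-car model; returns an array of q2(t) samples."""
    raise NotImplementedError   # model integration goes here

def max_jerk(q2):
    # third finite difference approximates d^3 q2 / dt^3
    return np.max(np.abs(np.diff(q2, 3) / dt**3))

def objective(x):
    return np.max(np.abs(simulate_q2(*x))) / A

cons = [{'type': 'ineq', 'fun': lambda x: 18.0 - max_jerk(simulate_q2(*x))}]
bounds = [(0, 2.0), (0, 2.0), (0, 300.0), (0, 300.0)]  # kf, kr (kg/mm); dampers (kg/(m/s))
res = minimize(objective, x0=[1.0, 1.0, 150.0, 150.0],
               bounds=bounds, constraints=cons, method='SLSQP')
```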
CLASSICAL OPTIMIZATION
TECHNIQUES
SINGLE-VARIABLE OPTIMIZATION
MULTI-VARIABLE OPTIMIZATION
- WITH NO CONSTRAINTS
- WITH EQUALITY CONSTRAINTS
- WITH INEQUALITY CONSTRAINTS
A FUNCTION f(x), FOR THE DIFFERENT VALUES OF x, CAN HAVE:
- RELATIVE OR LOCAL MINIMUM at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small |h|
- RELATIVE OR LOCAL MAXIMUM at x = x* if f(x*) ≥ f(x* + h) for all sufficiently small |h|
- ABSOLUTE OR GLOBAL MINIMUM at x* if f(x*) ≤ f(x) for all x in the domain
- ABSOLUTE OR GLOBAL MAXIMUM at x* if f(x*) ≥ f(x) for all x in the domain
UNIMODAL FUNCTION
A unimodal function is one that has only one peak (maximum) or valley (minimum) in a given interval.
Mathematically, a function f(x) is unimodal if
(i) x1 < x2 < x* implies that f(x2) < f(x1), and
(ii) x2 > x1 > x* implies that f(x1) < f(x2),
where x* is the minimum point.
[Figures: successive points x1, x2, x3, … placed along the interval from a; the three-point pattern a ≤ x1 < xm < x2 within an interval of length L.]
EXAMPLE
Minimize f(x) = x² + 54/x
Step 3: Is f(x1) < f(xm)? NO. Iteration continues.
Fitting a quadratic through three points (x1, f1), (x2, f2), (x3, f3):
a1 = (f2 − f1)/(x2 − x1);  a2 = [ (f3 − f1)/(x3 − x1) − a1 ] / (x3 − x2)
The minimum lies at
x̄ = (x1 + x2)/2 − a1/(2 a2)
Algorithm
S1: Let x1 be the initial point and Δ the step size; x2 = x1 + Δ.
S2: Compute f(x1) and f(x2).
S3: If f(x1) > f(x2), let x3 = x1 + 2Δ; else x3 = x1 − Δ. Compute f(x3).
S4: Determine F̄min = min(f1, f2, f3) and the corresponding point Xmin.
S5: Compute x̄ from the quadratic estimate above.
S6: If |F̄min − f(x̄)| and |Xmin − x̄| are small, the optimum is the best of all known points; TERMINATE.
S7: Else save the best point and its neighbouring points, and go to S4.
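A compact Python sketch of S1–S7; the retention rule in S7 is interpreted here as keeping the three lowest-f points, which is one reasonable reading of "save the best point and neighbouring points".

```python
# Sketch of successive quadratic point estimation (S1-S7 above).
def quadratic_min(x1, x2, x3, f1, f2, f3):
    """Minimizer of the quadratic through (x1,f1), (x2,f2), (x3,f3)."""
    a1 = (f2 - f1) / (x2 - x1)
    a2 = ((f3 - f1) / (x3 - x1) - a1) / (x3 - x2)
    return (x1 + x2) / 2 - a1 / (2 * a2)

def quadratic_estimation(f, x1, delta, tol=1e-6, iters=100):
    x2 = x1 + delta                                        # S1
    x3 = x1 + 2 * delta if f(x1) > f(x2) else x1 - delta   # S3
    pts = [x1, x2, x3]
    for _ in range(iters):                                 # S4-S7
        xmin = min(pts, key=f)                             # best known point
        xbar = quadratic_min(*sorted(pts), *[f(x) for x in sorted(pts)])
        if abs(f(xmin) - f(xbar)) < tol and abs(xmin - xbar) < tol:
            return min(xbar, xmin, key=f)                  # S6: optimum found
        pts = sorted(pts + [xbar], key=f)[:3]              # S7 (simplified retention)
    return min(pts, key=f)

# e.g. quadratic_estimation(lambda x: x**2 + 54/x, 1.0, 0.5) -> about 3.0
```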
Bracketing Method based on the unimodal property of the objective function
1) Assume an interval [a, b] with a minimum in the range.
2) Consider two points x1 and x2 within the interval (x1 < x2).
3) Find the values of f at x = x1, x2 and compare f(x1) and f(x2):
a) If f(x1) < f(x2), eliminate [x2, b] and set the new interval [a, x2].
b) If f(x1) > f(x2), eliminate [a, x1] and set the new interval [x1, b].
c) If f(x1) = f(x2), eliminate [a, x1] and [x2, b] and set the new interval [x1, x2].
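One elimination step of rules (a)–(c), as a tiny Python sketch (f is assumed unimodal on [a, b]):

```python
# One region-elimination step, rules (a)-(c); f assumed unimodal on [a, b].
def eliminate(f, a, b, x1, x2):
    f1, f2 = f(x1), f(x2)
    if f1 < f2:
        return a, x2      # (a) minimum cannot lie in [x2, b]
    if f1 > f2:
        return x1, b      # (b) minimum cannot lie in [a, x1]
    return x1, x2         # (c) minimum lies between x1 and x2
```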
[Figure: the three cases of the region-elimination rule, showing f1 = f(x1) and f2 = f(x2) on the interval [a, b].]
FIBONACCI METHOD
APPLICATION
To find the minimum of a function of one variable, even if the function is not continuous.
LIMITATIONS:
- The initial interval of uncertainty, i.e. the range in which the optimum lies, has to be known.
- The function being optimized has to be unimodal in the initial interval of uncertainty.
- The exact optimum cannot be located by this method; only an interval, known as the final interval of uncertainty, will be known. The final interval of uncertainty can be made as small as desired by increasing the number of experiments n.
Advantage of the Fibonacci Series
Fibonacci series: F0 = F1 = 1; Fn = Fn−1 + Fn−2
Hence: 1, 1, 2, 3, 5, 8, 13, 21, 34, …
Lk − Lk* = L*k+1, hence one point is always pre-calculated.

Fibonacci Search Algorithm
Step 1: L = b − a; k = 2; decide n.
Step 2: L*k = (Fn−k+1 / Fn+1) L;  x1 = a + L*k;  x2 = b − L*k.
Step 3: Compute either f(x1) or f(x2) (whichever is not computed earlier). Use the region-elimination rule and set new a and b.
Step 4: If k = n, TERMINATE; else k = k + 1 and GO TO Step 2.
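A minimal Python sketch of Steps 1–4. For clarity this version re-evaluates both interior points on each pass, whereas the actual method reuses the pre-calculated point so that only one new experiment is needed per iteration.

```python
# Minimal sketch of Fibonacci search on [a, b] with n experiments,
# using L*_k = (F_{n-k+1} / F_{n+1}) * L from Step 2 above.
def fibonacci_search(f, a, b, n):
    F = [1, 1]
    while len(F) < n + 2:            # F_0 = F_1 = 1, F_k = F_{k-1} + F_{k-2}
        F.append(F[-1] + F[-2])
    L = b - a                        # original interval length
    for k in range(2, n + 1):
        Lk = (F[n - k + 1] / F[n + 1]) * L
        x1, x2 = a + Lk, b - Lk
        if f(x1) < f(x2):            # region-elimination rule
            b = x2
        else:
            a = x1
    return a, b                      # final interval of uncertainty

# e.g. fibonacci_search(lambda x: x**2 + 54/x, 0, 5, 10)
```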
THE PROCEDURE
The sequence of Fibonacci numbers is defined as F0 = F1 = 1; Fn = Fn−1 + Fn−2, i.e. 1, 1, 2, 3, 5, 8, 13, 21, 34, …
Let the initial interval of uncertainty L0 = b − a be defined by [a, b], and let n = total number of experiments be known. (Each function evaluation is termed an experiment.)
Define L2* = (Fn−2 / Fn) L0. The first two experiment points are x1 = a + L2* and x2 = b − L2*.
Discard part of the interval based on the unimodal property. The new search interval contains one previous experiment point, at a distance L2* from one end (the old end), or L2 − L2* from the new end.
Again discard an interval based on the unimodal property. The new interval of search obtained is L3 = (Fn−2 / Fn) L0.
The ratio Lj / L0 = Fn−j+1 / Fn (so Ln / L0 = 1/Fn) will permit us to determine n, the required number of experiments, to achieve the desired accuracy in locating the optimum point.
After conducting n − 1 experiments, the remaining interval will contain one experiment point precisely at its middle. The n-th experiment point also has to be placed there; since no new information can be obtained by placing it exactly there, the n-th experiment point is instead placed very close to the remaining valid experiment point. This enables the final interval of uncertainty to be narrowed to within 0.5 Ln−1.
FLOWCHART OF THE ALGORITHM
[Flowchart: Enter N, A, B. Set F0 = F1 = 1, j = 2, and L2* = (FN−2 / FN)(B − A). Place x1 = A + L2* and x2 = B − L2*; if L2* > Lj/2, place x1 = B − L2* and x2 = A + L2* instead, so that x1 always lies to the left of x2. Calculate f1 = f(x1) and f2 = f(x2) and compare: if f1 > f2, set A = x1; if f2 > f1, set B = x2; if f1 = f2, set A = x1 and B = x2. Find the new L2* = FN−j (B − A) / FN−(j−2), set j = j + 1, and repeat until j = N; then print A, B and LN = B − A. Stop.]
Golden Section Method
APPLICATION
To find the minimum of a function of one variable, even if the function is not continuous.
The golden section method is the same as the Fibonacci method, except that in the Fibonacci method:
1. the ratio is not constant in every iteration;
2. the total number of experiments to be conducted is specified at the beginning, whereas in the golden section method this is not required.
The procedure is the same as the Fibonacci method, except that the location of the first two experiments is given by L2* = L0/φ² = 0.382 L0. The desired accuracy can be specified to stop the procedure.
Historical background of the Golden Section
The value φ has a historical background. It was believed by the ancient Greeks that a building having sides d and b such that (d + b)/d = d/b = φ would have the most pleasing properties.
Equivalently 1 + (1/φ) = φ, which gives φ = (1 + 5^0.5)/2 = 1.618.
It is also found in Euclid's geometry that the division of a line into two unequal parts, such that the ratio of the whole to the larger part equals the ratio of the larger to the smaller part, is known as the golden section or golden mean.
In every iteration the range is reduced by a factor r: the original range AD is reduced to AC in the 1st iteration (points B and C are already calculated). If the next range, in iteration 2, has to be AB, then AB = r·AC; HENCE 1 − r = r², WHOSE SOLUTION IS THE GOLDEN NUMBER r = 0.618.
[Figure: line A–B–C–D with segments of length r and 1 − r.]
Golden Section Search Algorithm
Step 1: L = b − a; k = 1; decide ε; map a, b to a = 0, b = 1.
Step 2: Lw = b − a;  w1 = a + 0.618 Lw;  w2 = b − 0.618 Lw.
Step 3: Compute either f(w1) or f(w2) (whichever is not computed earlier). Use the region-elimination rule and set new a and b.
Step 4: If |Lw| < ε, TERMINATE; else k = k + 1 and GO TO Step 2.
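A minimal Python sketch of the algorithm, using the same 0.618 placement as Step 2 (w1 is the right point, w2 the left); one previous point is reused, so only one new evaluation is needed per iteration.

```python
# Minimal sketch of golden-section search; w1 = a + 0.618*Lw (right point),
# w2 = b - 0.618*Lw (left point), as in Step 2 above.
def golden_section(f, a, b, eps=1e-5):
    Lw = b - a
    w1, w2 = a + 0.618 * Lw, b - 0.618 * Lw   # note w2 < w1
    f1, f2 = f(w1), f(w2)
    while (b - a) > eps:
        if f2 < f1:                  # eliminate [w1, b]
            b, w1, f1 = w1, w2, f2
            w2 = b - 0.618 * (b - a)
            f2 = f(w2)               # only one new evaluation per iteration
        else:                        # eliminate [a, w2]
            a, w2, f2 = w2, w1, f1
            w1 = a + 0.618 * (b - a)
            f1 = f(w1)
    return (a + b) / 2

# e.g. golden_section(lambda x: x**2 + 54/x, 0, 5) -> about 3.0
```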
An Example – Golden Section
The function f(x) = 0.65 − [0.75/(1 + x²)] − 0.65 x tan⁻¹(1/x) is minimized using the golden section method with n = 6, A = 0, B = 3.
The locations of the first two experiment points are defined by L2* = 0.382 L0 = (0.382)(3.0) = 1.146:
x1 = 1.1460, x2 = 3.0 − 1.1460 = 1.854, with f1 = −0.208, f2 = −0.115.
Since f1 < f2, delete [x2, 3.0] based on the assumption of unimodality; the new interval of uncertainty is [0, x2] = [0, 1.854].
The third experiment point is placed at x3 = 0 + (x2 − x1) = 1.854 − 1.146 = 0.708, with f3 = −0.288943 and f1 = −0.208. Since f3 < f1, delete the interval [x1, x2]. The new interval of uncertainty is [0, x1] = [0, 1.146].
The fourth experiment point is placed at x4 = 0 + (x1 − x3) = 0.438, with f4 = −0.308951 and f3 = −0.288943. Since f4 < f3, delete the interval [x3, x1]. The new interval of uncertainty is [0, x3] = [0, 0.7080].
The fifth experiment point is placed at x5 = 0 + (x3 − x4) = 0.27, with f4 = −0.308951 and f5 = −0.278. Since f4 < f5, delete [0, x5]. The new interval of uncertainty is [x5, x3] = [0.27, 0.7080].
The last experiment point is placed at x6 = x5 + (x3 − x4) = 0.54, with f4 = −0.308951 and f6 = −0.308234. Since f4 < f6, delete [x6, x3]. The new interval of uncertainty is [x5, x6] = [0.27, 0.54].
The ratio of the final interval to the initial interval is 0.09.
Bounding Phase Method (for bracketing the optimum)
Step 1: Choose an initial guess x0 and an increment Δ. Set k = 0.
Step 2: If f(x0 − |Δ|) > f(x0) > f(x0 + |Δ|), then Δ is positive;
if f(x0 − |Δ|) < f(x0) < f(x0 + |Δ|), then Δ is negative;
else go to Step 1.
Step 3: Set x(k+1) = x(k) + 2^k Δ.
Step 4: If f(x(k+1)) < f(x(k)), set k = k + 1 and go to Step 3;
else the minimum lies in the interval (x(k−1), x(k+1)); TERMINATE.
If Δ is large, the bracketing accuracy is poor.
Bounding Phase Method – An example
Minimize f(x) = x² + 54/x.
Step 1: Choose an initial guess x0 = 0.6 and an increment Δ = 0.5. Set k = 0.
Step 2: Calculate f(x0 − |Δ|) = f(0.1) = 540.010, f(x0) = f(0.6) = 90.360, and f(x0 + |Δ|) = f(1.1) = 50.301.
We observe f(0.1) > f(0.6) > f(1.1); therefore Δ is positive, Δ = +0.5.
Step 3: x1 = x0 + 2⁰ Δ = 1.1.
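The method above, as a short Python sketch:

```python
# Sketch of the bounding phase method (Steps 1-4 above).
def bounding_phase(f, x0, delta):
    # Step 2: decide the sign of the increment
    if f(x0 - abs(delta)) > f(x0) > f(x0 + abs(delta)):
        d = abs(delta)
    elif f(x0 - abs(delta)) < f(x0) < f(x0 + abs(delta)):
        d = -abs(delta)
    else:
        return (x0 - abs(delta), x0 + abs(delta))   # already bracketed
    # Steps 3-4: expand with exponentially growing steps x(k+1) = x(k) + 2^k d
    k = 0
    x_prev, x_curr = x0, x0 + d
    x_next = x_curr + 2 * d
    while f(x_next) < f(x_curr):
        k += 1
        x_prev, x_curr = x_curr, x_next
        x_next = x_curr + 2 ** (k + 1) * d
    return (x_prev, x_next)   # minimum lies in (x(k-1), x(k+1))

# e.g. bounding_phase(lambda x: x**2 + 54/x, 0.6, 0.5) -> (2.1, 8.1)
```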
Proof:
It is given that
f'(x*) = lim (h→0) [ f(x* + h) − f(x*) ] / h ....................(1)
Since x* is a relative minimum, f(x* + h) ≥ f(x*) for all sufficiently small |h|. Hence the quotient in (1) is ≥ 0 if h > 0 and ≤ 0 if h < 0; since the limit must be the same from both sides, f'(x*) = 0.
Expanding f about x* by Taylor's theorem,
f(x* + h) = f(x*) + h f'(x*) + (h²/2!) f''(x*) + ....... + (h^(n−1)/(n−1)!) f^(n−1)(x*) + (h^n/n!) f^(n)(x* + θh)   for some 0 < θ < 1.
Since f'(x*) = f''(x*) = ....... = f^(n−1)(x*) = 0, this reduces to
f(x* + h) − f(x*) = (h^n/n!) f^(n)(x* + θh).
If n is even, h^n ≥ 0, so near x* the sign of f(x* + h) − f(x*) is the sign of f^(n)(x*): x* is a relative minimum if f^(n)(x*) > 0 and a relative maximum if f^(n)(x*) < 0. If n is odd, h^n changes sign with h, and x* is neither a minimum nor a maximum (an inflection point).
EXAMPLES
W = c_p T1 [ (p2/p1)^((k−1)/k) + (p3/p2)^((k−1)/k) − 2 ]

where c_p is the specific heat of the gas at constant pressure, k is the ratio of the specific heat at constant pressure to that at constant volume of the gas, and T1 is the temperature at which the gas enters the compressor. Find the pressure p2 at which inter-cooling should be done to minimize the work input to the compressor. Also determine the minimum work done on the compressor.
dW/dp2 = c_p T1 [ ((k−1)/k) (1/p1)^((k−1)/k) p2^(−1/k) − ((k−1)/k) p3^((k−1)/k) p2^((1−2k)/k) ] = 0

which yields
p2 = (p1 p3)^(1/2)
d²W/dp2² = c_p T1 ((k−1)/k) [ −(1/k) (1/p1)^((k−1)/k) p2^(−(1+k)/k) + ((2k−1)/k) p3^((k−1)/k) p2^((1−3k)/k) ]
At p2 = (p1 p3)^(1/2):

d²W/dp2² = 2 c_p T1 ((k−1)/k)² p1^((1−3k)/(2k)) p3^(−(1+k)/(2k)) > 0

so p2 = (p1 p3)^(1/2) is indeed a minimum, and

W_min = 2 c_p T1 [ (p3/p1)^((k−1)/(2k)) − 1 ]
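The stationarity result can be checked symbolically; a small sketch (assuming SymPy is available):

```python
# Symbolic check that p2 = sqrt(p1*p3) makes dW/dp2 vanish.
import sympy as sp

p1, p2, p3, k, cp, T1 = sp.symbols('p1 p2 p3 k c_p T_1', positive=True)
W = cp * T1 * ((p2/p1)**((k - 1)/k) + (p3/p2)**((k - 1)/k) - 2)
dW = sp.diff(W, p2)
print(sp.simplify(dW.subs(p2, sp.sqrt(p1*p3))))   # prints 0
```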
Gradient-based Methods
- The algorithms require derivative information.
- In many real-world problems it is difficult to obtain information about derivatives, because of the computations involved or the nature of the problem.
- Still, gradient methods are effective and popular; they are recommended for problems where derivative information is available.
- An optimum of a differentiable function occurs where the gradient is zero, so the search process terminates when the gradient is zero.
Overview of Methods
- Newton-Raphson method
- Bisection method
- Secant method
- Cubic search method
Newton-Raphson Method
- The goal of unconstrained optimization: reach as small a derivative as possible.
- A linear approximation of the first derivative of the function at a point is expressed using the Taylor series expansion, and equated to zero to find the next guess.
- If the current point at iteration t is x(t), the next iterate is governed by

x(t+1) = x(t) − f'(x(t)) / f''(x(t))
Newton-Raphson Method – Algorithm
Step 1: Choose an initial guess x(1) and a small number ε. Set k = 1. Compute f'(x(1)).
Step 2: Compute f''(x(k)).
Step 3: Calculate x(k+1) = x(k) − f'(x(k)) / f''(x(k)). Compute f'(x(k+1)).
Step 4: If |f'(x(k+1))| < ε, TERMINATE; else set k = k + 1 and go to Step 2.
Newton-Raphson Method
- Convergence depends on the initial point and the nature of the objective function.
- In practice, gradients have to be computed numerically. At a point x(t), using the central difference method:

f'(x(t)) = [ f(x(t) + Δx(t)) − f(x(t) − Δx(t)) ] / (2 Δx(t))   (2)
f''(x(t)) = [ f(x(t) + Δx(t)) − 2 f(x(t)) + f(x(t) − Δx(t)) ] / (Δx(t))²   (3)

- The parameter Δx(t) is usually taken to be small, about 1% of x(t):
Δx(t) = 0.01 |x(t)|   (4)
- The first derivative requires two function evaluations and the second derivative requires three function evaluations.
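Putting the algorithm and equations (2)–(4) together, a minimal Python sketch (the guard at x = 0 is an added assumption, since (4) would give a zero increment there):

```python
# Sketch of Newton-Raphson with numerically computed derivatives, eqs. (2)-(4).
def newton_raphson(f, x, eps=1e-3, max_iter=50):
    def derivatives(x):
        dx = 0.01 * abs(x) if x != 0 else 1e-4   # eq. (4); guard at x = 0 assumed
        d1 = (f(x + dx) - f(x - dx)) / (2 * dx)            # f'(x), eq. (2)
        d2 = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2    # f''(x), eq. (3)
        return d1, d2
    for _ in range(max_iter):
        d1, d2 = derivatives(x)
        if abs(d1) < eps:        # Step 4: terminate on a small |f'(x)|
            break
        x = x - d1 / d2          # Step 3: Newton step
    return x

# e.g. newton_raphson(lambda x: x**2 + 54/x, 1.0) converges to about 3.0
```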
Newton-Raphson Method
Consider the minimization problem f(x) = x² + 54/x.
Step 1: We choose an initial guess x(1) = 1, a termination factor ε = 10⁻³, and an iteration counter k = 1, computing derivatives using (2). The small increment from (4) is 0.01. The computed derivative is −52.005, whereas the exact derivative at x(1) is found to be −52. It is accepted, and we proceed to Step 2.
Step 2: The exact second derivative of the function at x(1) = 1 is found to be 110. The second derivative computed using (3) is 110.011, which is close to the exact value.
Step 3: We compute the next guess,
x(2) = x(1) − f'(x(1)) / f''(x(1)) = 1 − (−52.005)/110.011 = 1.473
Newton-Raphson Method
The derivative computed using (2) at this point is found to be f'(x(2)) = −21.944.
Step 4: Since |f'(x(2))| is not less than ε, we increment k to 2 and go to Step 2. This completes one iteration of the Newton-Raphson method.
Step 2 (second iteration): f''(x(2)) = 35.796.
Step 3 (second iteration): The next guess is x(3) = 2.086, with f'(x(3)) = −8.239.
Step 4 (second iteration): Since |f'(x(3))| is not less than ε, we increment k to 3 and go to Step 2.
Newton-Raphson Method
Step 2 (third iteration): The second derivative at the point is f''(x(3)) = 13.899.
Step 3 (third iteration): The new point is calculated as x(4) = 2.679, with f'(x(4)) = −2.167. (Nine function evaluations so far.)
Step 4 (third iteration): Since the absolute value of this derivative is not smaller than ε, the search proceeds to Step 2.
After three more iterations, x(7) = 3.0001 and f'(x(7)) = −4 × 10⁻⁸, small enough to terminate the algorithm.
Since at every iteration first- and second-order derivatives are evaluated, a total of three function values are evaluated per iteration.
Bisection Method
- Computation of the second derivative is avoided; only the first derivative is used.
- Both the value and the sign of the first derivative at two points are used to eliminate a certain portion of the search space.
- The method is similar to the region-elimination methods discussed earlier.
- The algorithm assumes unimodality of the function.
Bisection Method
- Using derivative information, the minimum is said to be bracketed in the interval (a, b) if the two conditions f'(a) < 0 and f'(b) > 0 are satisfied.
- The method requires two initial boundary points bracketing the minimum.
- Derivatives at the two boundary points and at the middle point are calculated and compared.
- Of the three points, the two consecutive points whose derivatives have opposite signs are chosen for the next iteration.
Bisection Method – Algorithm:
Step 1: Choose two points a and b such that f'(a) < 0 and f'(b) > 0. Also choose a small number ε. Set x1 = a and x2 = b.
Step 2: Calculate z = (x2 + x1)/2 and evaluate f'(z).
Step 3: If |f'(z)| ≤ ε, TERMINATE;
else if f'(z) < 0, set x1 = z and go to Step 2;
else if f'(z) > 0, set x2 = z and go to Step 2.
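A short sketch of Steps 1–3 in Python, taking the derivative function f' as input:

```python
# Sketch of the derivative-bisection method (Steps 1-3 above); fp is f'.
def bisection(fp, a, b, eps=1e-3, max_iter=100):
    assert fp(a) < 0 < fp(b)          # minimum bracketed in (a, b)
    x1, x2 = a, b
    for _ in range(max_iter):
        z = (x1 + x2) / 2
        g = fp(z)
        if abs(g) <= eps:
            return z
        if g < 0:
            x1 = z                    # minimum cannot lie in the left half
        else:
            x2 = z                    # minimum cannot lie in the right half
    return (x1 + x2) / 2

# e.g. bisection(lambda x: 2*x - 54/x**2, 2, 5) -> about 3.0
```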
Bisection Method
The sign of the first derivative at the midpoint of the current search region is used to eliminate half of the search region:
- if the derivative is negative, the minimum cannot lie in the left half of the search region;
- if the derivative is positive, the minimum cannot lie in the right half of the search space.
Bisection Method
Consider again the function f(x) = x² + 54/x.
Step 1: Choose two points a = 2 and b = 5, such that f'(a) = −9.501 and f'(b) = 7.841 are of opposite sign; ε = 10⁻³.
Step 2: Calculate z = (x1 + x2)/2 = 3.5 and compute f'(z) = 2.591.
Step 3: Since f'(z) > 0, the right half of the search space is eliminated: x1 remains 2 and x2 = z = 3.5. Thus one iteration is completed.
At each iteration one half of the search region is eliminated, but here the decision about which half to eliminate depends on the derivative at the midpoint of the interval.
Bisection Method
Step 2 (second iteration): z = (2 + 3.5)/2 = 2.750 and f'(z) = −1.641.
Step 3 (second iteration): Since f'(z) < 0, x1 = 2.750 and x2 = 3.50.
Step 2 (third iteration): z = 3.125 and f'(z) = 0.720.
Step 3 (third iteration): Since |f'(z)| is not less than ε, the iteration continues.
At the end of 10 function evaluations the interval is (2.750, 3.125), bracketing the minimum point x* = 3.0; the guess of the minimum point is the midpoint, x = 2.938. This process continues until a vanishing derivative is found.
Secant Method
- Both the magnitude and the sign of derivatives are used to create a new point.
- The derivative of the function is assumed to vary linearly between the two chosen boundary points.
- If at two points x1 and x2 the quantity f'(x1) f'(x2) ≤ 0, the linear approximation of the derivative between x1 and x2 will have a zero at the point z given by

z = x2 − f'(x2) / [ (f'(x2) − f'(x1)) / (x2 − x1) ]   (5)
Secant Method
- In this method, in one iteration, more than half the search space may be eliminated, depending on the gradient values at the chosen points.
Algorithm:
Step 1: Same as the bisection method, except that Step 2 becomes:
Step 2: Calculate the new point z using (5) and evaluate f'(z).
- This algorithm also requires only one gradient evaluation per iteration; thus only two function values are required per iteration.
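A short sketch of the algorithm in Python, again taking f' as input:

```python
# Sketch of the secant algorithm; fp is the derivative f'.
def secant(fp, x1, x2, eps=1e-3, max_iter=100):
    for _ in range(max_iter):
        z = x2 - fp(x2) / ((fp(x2) - fp(x1)) / (x2 - x1))   # eq. (5)
        g = fp(z)
        if abs(g) <= eps:
            return z
        if g < 0:
            x1 = z      # minimum lies to the right of z
        else:
            x2 = z      # minimum lies to the left of z
    return z

# e.g. secant(lambda x: 2*x - 54/x**2, 2, 5) -> about 3.0
```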
Secant Method
Considering the problem f(x) = x² + 54/x:
Step 1: The initial points a = 2 and b = 5 have derivatives f'(a) = −9.501 and f'(b) = 7.841, with opposite signs. Choose ε = 10⁻³ and set x1 = 2 and x2 = 5.
Step 2: Calculate the new point using equation (5):

z = 5 − f'(5) / [ (f'(5) − f'(2)) / (5 − 2) ] = 3.644
Secant Method
Step 3: Since f'(z) > 0, we eliminate the right part (the region (z, b)) of the original search region. The amount of eliminated search space is (b − z) = 1.356, which is less than half the search space, (b − a)/2 = 1.5. Now x1 = 2 and x2 = 3.644, and we proceed with the next iteration.
Step 2 (second iteration): z = 3.228 and f'(z) = 1.127.
Step 3 (second iteration): The right part of the search space is eliminated since f'(z) > 0; x1 = 2 and x2 = 3.228 for the next iteration.
Secant Method
Step 2 (third iteration): z = 3.101 and f'(z) = 0.586.
Step 3 (third iteration): Since |f'(z)| is not less than ε, the next iteration proceeds.
At the end of 10 function evaluations the minimum is estimated at x = 3.037, closer to the true minimum (x* = 3.0) than the bisection method's estimate after the same effort.
Cubic Search Method
- Similar to the successive quadratic point estimation method, except that derivatives are used to reduce the number of required initial points. For example, for the cubic

f̄(x) = a0 + a1(x − x1) + a2(x − x1)(x − x2) + a3(x − x1)²(x − x2)

a0, a1, a2, a3 are unknowns, and therefore at least four points are required to determine the function.
- The function can also be determined by specifying the function value as well as the derivative at just two points,
i.e. (x1, f1, f1') and (x2, f2, f2'), and by setting the derivative of the cubic to zero. The estimated minimum is

x̄ = x2, if μ < 0
x̄ = x2 − μ(x2 − x1), if 0 ≤ μ ≤ 1   (6)
x̄ = x1, if μ > 1

where
z = 3(f1 − f2)/(x2 − x1) + f1' + f2'
w = [(x2 − x1)/|x2 − x1|] (z² − f1' f2')^(1/2)
μ = (f2' + w − z) / (f2' − f1' + 2w)
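The interpolation step (6), as a small Python sketch; on the bracketing pair from the worked example below (x1 = 2.5, x2 = 4.5) it reproduces x̄ ≈ 3.03.

```python
import math

# Sketch of the cubic interpolation step, eq. (6).
def cubic_estimate(x1, f1, g1, x2, f2, g2):
    """Estimate the minimizer from (x1, f1, f1') and (x2, f2, f2')."""
    z = 3 * (f1 - f2) / (x2 - x1) + g1 + g2
    w = math.copysign(1.0, x2 - x1) * math.sqrt(z * z - g1 * g2)
    mu = (g2 + w - z) / (g2 - g1 + 2 * w)
    if mu < 0:
        return x2
    if mu > 1:
        return x1
    return x2 - mu * (x2 - x1)

# e.g. cubic_estimate(2.5, 27.85, -3.641, 4.5, 32.25, 6.333) -> about 3.03
```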
Cubic Search Method
- As in Powell's successive quadratic estimation method, the minimizer of the estimate f̄(x) is used to estimate the true minimum of the objective function.
- The estimate and one of the earlier two points are used to find the next estimate of the true minimum point; the retained pair is chosen so that the product of the derivatives at the two points is negative (the pair brackets the minimum).
- This procedure is continued until the desired accuracy is achieved.
Cubic Search Method – Algorithm:
Step 1: Choose an initial point x(0), a step size Δ, and two termination parameters ε1 and ε2. Compute f'(x(0)). If f'(x(0)) > 0, set Δ = −Δ. Set k = 0.
Step 2: Compute x(k+1) = x(k) + 2^k Δ.
Step 3: Evaluate f'(x(k+1)).
If f'(x(k+1)) f'(x(k)) ≤ 0, set x1 = x(k), x2 = x(k+1), and go to Step 4;
else set k = k + 1 and go to Step 2.
Cubic Search Method
Step 4: Calculate the point x̄ using (6).
Step 5: If f(x̄) < f(x1), go to Step 6;
else set x̄ = x̄ − (x̄ − x1)/2 until f(x̄) ≤ f(x1) is achieved.
Step 6: Compute f'(x̄).
If |f'(x̄)| ≤ ε1 and |(x̄ − x1)/x̄| ≤ ε2, TERMINATE;
else if f'(x̄) f'(x1) < 0, set x2 = x̄;
else set x1 = x̄. Go to Step 4.
Cubic Search Method
- The method is most effective if the exact derivative is available.
- Bracketing of the minimum point is achieved in the first three steps; the bracketing algorithm is similar to the bounding phase method.
- Except in the first iteration, the function value as well as the first derivative are calculated at only one new point, so only two new function evaluations are required per iteration.
- The first iteration requires repetitive execution of Steps 2 and 3 to obtain the bracketing points.
Cubic Search Method
- If the new point x̄ is better than x1, one of the two points is eliminated, depending on which of them brackets the true minimum together with x̄.
- If the new point x̄ is worse than x1, the best two among the points x1, x2 and x̄ will bracket the minimum.
- Excessive derivative computations are avoided by simply modifying the point x̄ (Step 5).
Cubic Search Method
Considering the function f(x) = x² + 54/x:
Step 1: Choose an initial point x(0) = 1, a step size Δ = 0.5, and termination parameters ε1 = ε2 = 10⁻³. The derivative of the function at x(0) is f'(x(0)) = −52.005. Since f'(x(0)) < 0, Δ = +0.5 and k = 0.
Step 2: x(1) = x(0) + 2⁰ Δ = 1 + 1(0.5) = 1.5.
Step 3: f'(x(1)) = −21.002. Hence f'(x(0)) f'(x(1)) is not less than or equal to 0.
Step 2: Hence k = 1 and x(2) = 1.5 + 2¹(0.5) = 2.5.
Step 3: The derivative f'(x(2)) = −3.641 does not make the product negative. With k = 2, x(3) = 2.5 + 2²(0.5) = 4.5 and f'(x(3)) = 6.333, which makes the product f'(x(2)) f'(x(3)) negative.
So x1 = x(2) = 2.5 and x2 = x(3) = 4.5.
Cubic Search Method
Step 4: The estimated point is calculated with z = −3.907, w = 6.190 and μ = 0.735:
x̄ = 4.5 − 0.735(4.5 − 2.5) = 3.030
Step 5: f(x̄) = 27.003. Since f(x̄) < f(x1) = 27.850, go to Step 6.
Step 6: f'(x̄) = 0.178, so the termination criteria are not satisfied. Since f'(x̄) f'(x1) < 0, set x2 = x̄ = 3.030.
This is the end of iteration one; go to Step 4.
Cubic Search Method
Step 4 (second iteration): x1 = 2.5 with f'(x1) = −3.641, and x2 = 3.030 with f'(x2) = 0.178. The new estimate is x̄ = 2.999.
Step 5 (second iteration): f(x̄) = 27.00 < f(x1).
Step 6 (second iteration): f'(x̄) = −0.007; x2 = 3.030 and x1 = 2.999 for the next iteration.
The method is faster than Powell's quadratic method since derivatives are employed; Powell's method is preferred if the derivatives have to be evaluated numerically.