MATLAB Exercises
Assoc. Prof. Dr. Pelin GÜNDEŞ
Introduction to MATLAB
• % All text after the % sign is a comment. MATLAB ignores anything to the right of the % sign.
• help command_name
• == Equality comparison (used within an if construct)
• + Addition
• - Subtraction
• * Multiplication
• / Division
• ^ Power
c=
4
e=
6
% why did only c and e echo on the screen?
Exercise
>> who % lists all the variables on the screen
Your variables are:
a b c d e
A=
1.5000
a=
A=
1.5000
Exercise
>> one=a;two=b;three=c;
>> % assigning values to new variables
ans =
2 3
AA1 =
2 3
Exercise
>> size(AA1) % AA1 is a one by two matrix
ans =
1 2
ans =
2 5
3 6
4 7
Exercise
>> B1=A1' % the transpose of matrix A1 is assigned to B1. B1 is a
% three by two matrix
B1 =
2 5
3 6
4 7
C1 =
29 56
56 110
Exercise
>> C2=B1*A1
C2 =
29 36 43
36 45 54
43 54 65
C1 =
29 56
56 110
C3 =
29 56 1
56 110 2
Exercise
>> C2
C2 =
29 36 43
36 45 54
43 54 65
C3 =
29 56 1
56 110 2
43 54 65
Exercise
>> C4=C2*C3
C4 =
4706 7906 2896
5886 9882 3636
7066 11858 4376
C5 =
841 2016 43
2016 4950 108
1849 2916 4225
Exercise
>> C6=inverse(C2)
??? Undefined function or variable 'inverse'.
>> lookfor inverse % this command will find all files where it comes
% across the word “inverse” in the initial comment lines. The
% command we need appears to be INV which says inverse of a
% matrix. The actual command is in lower case. To find out how to use
% it:
>> help inv
inv(C2) % inverse of C2
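The matrix algebra from this session can be checked outside MATLAB; the following is a minimal sketch in Python with NumPy (the translation is an assumption, not part of the original session). Note that C2 = B1*A1 is rank-deficient, so the inverse is demonstrated on C1 = A1*B1 instead:

```python
import numpy as np

A1 = np.array([[2, 3, 4],
               [5, 6, 7]])
B1 = A1.T                  # B1 = A1'
C1 = A1 @ B1               # [[29, 56], [56, 110]], as in the session
C2 = B1 @ A1               # 3 by 3, but rank 2, so it has no inverse

print(np.linalg.matrix_rank(C2))           # 2
C1inv = np.linalg.inv(C1)                  # C1 is invertible, det(C1) = 54
print(np.allclose(C1 @ C1inv, np.eye(2)))  # True
```

This also shows why `inv(C2)` is risky: C2 is the product of a 3 by 2 and a 2 by 3 matrix, so its rank is at most 2 and MATLAB's inv would warn about singularity.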
Exercise
>> for i=1:20
f(i)=i^2;
end % The for loop is terminated with “end”
>> plot(sin(0.01*f),cos(0.03*f))
>> xlabel('sin(0.01*f)')
>> ylabel('cos(0.03*f)')
>> legend('Example')
>> title('A Plot Example')
>> grid
>> exit % finished with MATLAB
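The loop that fills f(i) = i^2 can also be written without an explicit loop; here is a vectorized sketch in Python/NumPy (an assumed translation, with the plotting calls omitted):

```python
import numpy as np

# MATLAB: for i=1:20, f(i)=i^2; end
f = np.arange(1, 21, dtype=float) ** 2

x = np.sin(0.01 * f)    # abscissa of the MATLAB plot command
y = np.cos(0.03 * f)    # ordinate
print(f[0], f[-1])      # 1.0 400.0
```

With matplotlib, plt.plot(x, y) followed by plt.xlabel('sin(0.01*f)') and plt.ylabel('cos(0.03*f)') would reproduce the labeled curve.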
Graphical optimization
Minimize f(x1,x2) = (x1-3)^2 + (x2-2)^2
g1(x1,x2): x1 + x2 ≤ 7
[C1,h1] = contour(x1,x2,ineq1,[7,7],'r-');
clabel(C1,h1);
set(h1,'LineWidth',2)
% ineq1 is plotted at the contour value of 7
hold on % allows multiple plots
k1 = gtext('g1');
set(k1,'FontName','Times','FontWeight','bold','FontSize',14,'Color','red')
% will place the string 'g1' on the plot where the mouse is clicked
Example 1 cont'd
[C2,h2] = contour(x1,x2,ineq2,[0,0],'r--');
clabel(C2,h2);
set(h2,'LineWidth',2)
k2 = gtext('g2');
set(k2,'FontName','Times','FontWeight','bold','FontSize',14,'Color','red')
[C3,h3] = contour(x1,x2,eq1,[8,8],'b-');
clabel(C3,h3);
set(h3,'LineWidth',2)
k3 = gtext('h1');
set(k3,'FontName','Times','FontWeight','bold','FontSize',14,'Color','blue')
% will place the string 'h1' on the plot where the mouse is clicked
[C4,h4] = contour(x1,x2,eq2,[4,4],'b--');
clabel(C4,h4);
set(h4,'LineWidth',2)
k4 = gtext('h2');
set(k4,'FontName','Times','FontWeight','bold','FontSize',14,'Color','blue')
Example 1 cont'd
[C,h] = contour(x1,x2,f1,'g');
clabel(C,h);
set(h,'LineWidth',1)
% the equality and inequality constraints are not written with 0 on the
% right hand side. If you do write them that way you would have to
% include [0,0] in the contour commands
xlabel(' x_1 values','FontName','times','FontSize',12,'FontWeight','bold');
% label for x-axes
ylabel(' x_2 values','FontName','times','FontSize',12,'FontWeight','bold');
set(gca,'xtick',[0 2 4 6 8 10])
set(gca,'ytick',[0 2.5 5.0 7.5 10])
k5 = gtext({'Chapter 2: Example 1','pretty graphical display'})
set(k5,'FontName','Times','FontSize',12,'FontWeight','bold')
clear C C1 C2 C3 C4 h h1 h2 h3 h4 k1 k2 k3 k4 k5
grid
hold off
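The same graphical construction can be sketched in Python with matplotlib (an assumed equivalent of MATLAB's contour/clabel; the grid spacing and output file name are choices made here, not taken from the slides):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # off-screen backend, no display needed
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 101)
x1, x2 = np.meshgrid(x, x)
f1 = (x1 - 3)**2 + (x2 - 2)**2             # objective from Example 1
ineq1 = x1 + x2                            # g1: x1 + x2 <= 7

cs_f = plt.contour(x1, x2, f1, colors="g")           # objective contours
cs_g = plt.contour(x1, x2, ineq1, [7], colors="r")   # constraint boundary
plt.clabel(cs_f)
plt.clabel(cs_g)
plt.xlabel("x_1 values")
plt.ylabel("x_2 values")
plt.savefig("example1.png")

# the unconstrained minimum of f1 sits at the grid point nearest (3, 2)
i, j = np.unravel_index(np.argmin(f1), f1.shape)
print(x1[i, j], x2[i, j])
```

Since (3, 2) satisfies g1 (3 + 2 <= 7), the constrained and unconstrained minima coincide here, which the contour plot makes visible.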
Example 1 cont'd
Objective function
Subject to:
g1(X): 0.4x1 + 0.6x2 ≤ 8.5
g2(X): 3x1 + x2 ≤ 25
g3(X): 3x1 + 6x2 ≤ 70
x1 ≥ 0; x2 ≥ 0
Subject to:
g1(X): 0.4x1 + 0.6x2 + x3 = 8.5
g2(X): 3x1 + x2 + x4 = 25
g3(X): 3x1 + 6x2 + x5 = 70
x1 ≥ 0; x2 ≥ 0; x3 ≥ 0; x4 ≥ 0; x5 ≥ 0
f(x) = 12 + (x-1)^2 (x-2)(x-3)
g1(x,y): 20x + 15y ≤ 30
g2(x,y): x^2/4 + y^2 ≤ 1
Nonlinear programming
x=sym('x') %defining x as a single symbolic object
x =
x
f=12+(x-1)^2*(x-2)*(x-3) % define the function symbolically
f =
12+(x-1)^2*(x-2)*(x-3)
diff(f) % first derivative
ans =
2*(x-1)*(x-2)*(x-3)+(x-1)^2*(x-3)+(x-1)^2*(x-2)
diff(f,2) % second derivative
ans =
2*(x-2)*(x-3)+4*(x-1)*(x-3)+4*(x-1)*(x-2)+2*(x-1)^2
diff(f,3) % third derivative
ans =
24*x-42
g1=20*x+15*y-30 % define g1
g1 =
20*x+15*y-30
Nonlinear programming
g2=0.25*x+y-1; % define g2
% g1,g2 can only have partial derivatives
% independent variables have to be identified
diff(g1,x) % partial derivative
ans =
20
ans =
15
Nonlinear programming
g=[g1;g2] % g column vector based on g1, g2
g =
[ 20*x+15*y-30]
[ 1/4*x+y-1]
jacobian(g,[x y]) % Jacobian of g with respect to x and y
ans =
[ 20, 15]
[ 1/4, 1]
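The symbolic session can be reproduced in Python with SymPy (an equivalent sketch; SymPy is assumed here as a stand-in for the MATLAB Symbolic Toolbox):

```python
import sympy as sp

x, y = sp.symbols("x y")
f = 12 + (x - 1)**2 * (x - 2) * (x - 3)

# third derivative, expanded, matches the MATLAB session
print(sp.expand(sp.diff(f, x, 3)))      # 24*x - 42

g1 = 20*x + 15*y - 30
g2 = sp.Rational(1, 4)*x + y - 1
g = sp.Matrix([g1, g2])
print(g.jacobian([x, y]))               # Matrix([[20, 15], [1/4, 1]])
```

The Jacobian rows are exactly the partial derivatives diff(g1,x), diff(g1,y), diff(g2,x), diff(g2,y) computed one at a time in the slides.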
Nonlinear programming
ezplot(f) % a plot of f for -2*pi <= x <= 2*pi (default)
ezplot(f,[0,4]) % a plot between 0<=x <=4
df=diff(f);
hold on
ezplot(df,[0,4]) %plotting function and derivative
%combine with MATLAB graphics- draw a line
line([0 4],[0 0],'Color','r')
g
g=
[ 20*x+15*y-30]
[ 1/4*x+y-1]
Nonlinear programming
% to evaluate g at x=1,y=2.5
subs(g,{x,y},{1,2.5})
ans =
27.5000
1.7500
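The subs evaluation can be mirrored in SymPy (an assumed equivalent of the MATLAB call):

```python
import sympy as sp

x, y = sp.symbols("x y")
g = sp.Matrix([20*x + 15*y - 30,
               sp.Rational(1, 4)*x + y - 1])

# evaluate g at x = 1, y = 2.5, as subs(g,{x,y},{1,2.5}) does in MATLAB
vals = g.subs({x: 1, y: sp.Rational(5, 2)})
print(vals)     # entries 55/2 and 7/4, i.e. 27.5 and 1.75
```
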
diary off
Nonlinear programming
tangent.m
% Illustration of the derivative
% Optimization Using MATLAB
% Dr. P.Venkataraman
%
% section 4.2.2
% This example illustrates the limiting process
% in the definition of the derivative
% In the figure animation
% 1. note the scales
% 2. as the displacement gets smaller the function
% and the straight line coincide, suggesting that
% the line is tangent to the curve at the point
%
Nonlinear programming
tangent.m
syms x f deriv % symbolic variables
f=12+(x-1)*(x-1)*(x-2)*(x-3); % definition of f(x)
deriv=diff(f); % computing the derivative
∇f = [∂f/∂x  ∂f/∂y]^T, for f = f(x,y)
Gradient and tangent line at a point
Gradient of the function
x=0:.05:3;
y=0:0.05:3;
[X Y]=meshgrid(x,y); % X,Y are matrices
f(x1,x2): (x1-1)^2 + (x2-1)^2 + x1 x2
0 ≤ x1 ≤ 3; 0 ≤ x2 ≤ 3
Unconstrained problem
x1=0:.05:4;
x2=0:0.05:4;
[X1 X2]=meshgrid(x1,x2); % X,Y are matrices
Minimize f(x1,x2): -x1 x2
Subject to h1(x1,x2): x1^2/4 + x2^2 = 1
0 ≤ x1 ≤ 3; 0 ≤ x2 ≤ 3
Example-Lagrange method
Minimize
F(x1,x2,λ1) = -x1 x2 + λ1 (x1^2/4 + x2^2 - 1)
Subject to:
h1(x1,x2): x1^2/4 + x2^2 = 1
0 ≤ x1 ≤ 3; 0 ≤ x2 ≤ 3
The necessary conditions are obtained as:
∂F/∂x1 = ∂f/∂x1 + λ1 ∂h1/∂x1 = 0
∂F/∂x2 = ∂f/∂x2 + λ1 ∂h1/∂x2 = 0
∂F/∂λ1 = h1 = 0
Example-Lagrange method
Applying the necessary conditions to the problem, we obtain:
∂F/∂x1 = -x2 + λ1 x1/2 = 0
∂F/∂x2 = -x1 + 2 λ1 x2 = 0
∂F/∂λ1 = h1 = x1^2/4 + x2^2 - 1 = 0
In MATLAB, there are two ways to solve the above equations: using the symbolic
support functions or the numerical support functions. The symbolic function is
solve and the numerical function is fsolve. The numerical technique is iterative
and requires you to choose an initial guess to start the procedure.
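The necessary conditions above can be solved symbolically; here is a sketch in Python/SymPy (assumed equivalent of the MATLAB solve call shown next):

```python
import sympy as sp

x1, x2, lam1 = sp.symbols("x1 x2 lam1", real=True)
F = -x1*x2 + lam1*(x1**2/4 + x2**2 - 1)    # the augmented function above

grad1 = sp.diff(F, x1)                      # -x2 + lam1*x1/2
grad2 = sp.diff(F, x2)                      # -x1 + 2*lam1*x2
h1 = x1**2/4 + x2**2 - 1

sols = sp.solve([grad1, grad2, h1], [x1, x2, lam1])
for s in sols:
    print(s, "f =", -s[0]*s[1])             # the minima have f = -1
```

Four stationary points appear; the two with λ1 = 1 give f* = -1 at (x1*, x2*) = (±√2, ±√2/2).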
Lagrange method-Example
% Necessary/Sufficient conditions for
% Equality constrained problem
% Minimize f(x1,x2) = -x1x2
%
%-------------------------
% symbolic procedure
%------------------------
% define symbolic variables
format compact
syms x1 x2 lam1 h1 F
% define F
F = -x1*x2 + lam1*(x1*x1/4 + x2*x2 -1);
h1 = x1*x1/4 +x2*x2 -1;
Lagrange method-Example
%the gradient of F
syms grad1 grad2
grad1 = diff(F,x1);
grad2 = diff(F,x2);
% optimal values satisfaction of necessary conditions
[lams1 xs1 xs2] = solve(grad1,grad2,h1,'x1,x2,lam1');
% the solution is returned as a vector of the three unknowns; in case of
% multiple solutions lams1 is the solution vector for lam1, etc.
% IMPORTANT: the results are sorted alphabetically. fprintf is used to
% print a string in the command window; disp is used to print the values
% of a matrix
f = -xs1.*xs2;
fprintf('The solution (x1*,x2*,lam1*, f*):\n'), ...
disp(double([xs1 xs2 lams1 f]))
Lagrange method-Example
%------------------------------
% Numerical procedure
%----------------------------
% solution to non-linear system using fsolve see help fsolve
% the unknowns have to be defined as a vector
% the functions have to be set up in a m-file
% side constraints: 0 <= x1 <= 3; 0 <= x2 <= 3
% x is a vector
% x(1) = x1, x(2) = x2, x(3) = lam1
ret=[(-x(2) + 0.5*x(1)*x(3)), ...
(-x(1) + 2*x(2)*x(3)), ...
(0.25*x(1)*x(1) + x(2)*x(2) -1)];
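The numerical route can be sketched with scipy.optimize.fsolve (an assumed stand-in for MATLAB's fsolve; the initial guess is a choice made here):

```python
import numpy as np
from scipy.optimize import fsolve

def eqns(x):
    # x[0] = x1, x[1] = x2, x[2] = lam1, the same residuals as the m-file
    return [-x[1] + 0.5*x[0]*x[2],
            -x[0] + 2.0*x[1]*x[2],
            0.25*x[0]*x[0] + x[1]*x[1] - 1.0]

sol = fsolve(eqns, [1.0, 1.0, 1.0])       # initial guess required
print(sol, np.max(np.abs(eqns(sol))))     # residuals are essentially zero
```

Which stationary point fsolve returns depends on the initial guess; checking that the residuals vanish and that h1 is satisfied is the reliable test.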
Scaling-Example
Minimize
• Numerical calculations are driven by larger magnitudes. The second inequality will
be ignored in relation to the other functions even though the graphical solution
indicates that g3 is active.
• In practice, this is also extended to the variables. This is referred to as scaling the
variables and scaling the functions. Many current software packages scale the
problem without user intervention.
Scaling-Example
Scaling variables: The presence of side constraints in problem
formulation allows a natural definition of scaled variables. The user
defined upper and lower bounds are used to scale each variable between 0
and 1. Therefore,
x̃i = (xi - xi^l) / (xi^u - xi^l);  x̃i: scaled ith variable
xi = x̃i (xi^u - xi^l) + xi^l
In the original problem, the above equations are used to substitute for the
original variables, after which the problem can be expressed in terms of the
scaled variables.
Scaling-Example
• An alternate formulation is to use only the upper value of the side constraint to
scale the design variable:
x̂i = xi / xi^u
xi = xi^u x̂i
• While the above option limits the higher scaled value to 1, it does not set the
lower scaled value to zero. For the example of this section, there is no necessity
for scaling the design variables since their order of magnitude is one, which is
exactly what scaling attempts to achieve.
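The two scaling formulas can be sketched in Python (the helper names and the bound values are hypothetical, chosen only for illustration):

```python
import numpy as np

def scale(x, xl, xu):
    """Map each variable from its side constraints [xl, xu] onto [0, 1]."""
    return (x - xl) / (xu - xl)

def unscale(xt, xl, xu):
    """Recover the original variables from the scaled ones."""
    return xt * (xu - xl) + xl

xl = np.array([0.0, 10.0])     # hypothetical lower bounds
xu = np.array([4.0, 50.0])     # hypothetical upper bounds
x  = np.array([1.0, 30.0])
xt = scale(x, xl, xu)
print(xt)                      # 0.25 and 0.5
print(unscale(xt, xl, xu))     # recovers 1.0 and 30.0
```

The alternate formulation of the bullet above corresponds to `x / xu`, which caps the scaled value at 1 but does not force the lower value to 0.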
>> syms g1 x1 x2
>> g1=7.4969*10^5*x1*x1+40000*x1-9.7418*10^6*(x1^4-x2^4)
g1 =
749690*x1^2+40000*x1-9741800*x1^4+9741800*x2^4
>> subs(g1,{x1,x2},{0.6,0.6})
ans =
2.9389e+005
Scaling-Example
>> syms g2 x1 x2
>> g2=(5000+1.4994*10^5*x1)*(x1*x1+x1*x2+x2*x2)-1.7083*10^7*(x1^4-x2^4)
g2 =
(5000+149940*x1)*(x1^2+x1*x2+x2^2)-17083000*x1^4+17083000*x2^4
>> subs(g2,{x1,x2},{0.6,0.6})
ans =
1.0256e+005
Scaling-Example
>> syms g3 x1 x2
>> g3=1.9091*10^(-3)*x1+6.1116*10^(-4)-0.05*(x1^4-x2^4)
g3 =
8804169777779727/4611686018427387904*x1+5636956054044165/9223372036854775808-1/20*x1^4+1/20*x2^4
>> subs(g3,{x1,x2},{0.6,0.6})
ans =
0.0018
Scaling-Example
• The scaling constants for the equations are calculated as:
g̃10 = 293888.4;  g̃20 = 102561.12;
g̃30 = 1.7570E-03;  g̃40 = 1
• The first three constraints are divided through by their scaling constants. The last
equation is unchanged. The objective function has a coefficient of one. The scaled
problem is:
f̃ = x1^2 - x2^2
subject to
g̃1: 2.5509 x1^2 + 0.1361 x1 - 33.148(x1^4 - x2^4) ≤ 0
g̃2: (0.0488 + 1.4619 x1)(x1^2 + x1 x2 + x2^2) - 166.5641(x1^4 - x2^4) ≤ 0
g̃3: 1.0868 x1 + 0.3482 - 28.4641(x1^4 - x2^4) ≤ 0
g̃4: x2 - x1 + 0.001 ≤ 0
fs = x1s*x1s -x2s*x2s;
Fs = fs + b1s*g1s+ b2s*g2s + b3s*g3s +b4s*g4s;
Scaling-Example
% the gradient of F
syms grad1s grad2s
grad1s = diff(Fs,x1s);
grad2s = diff(Fs,x2s);
% solution
[xs1 xs2] = solve(g1s,g3s,'x1s,x2s');
fss = xs1.^2 - xs2.^2;
gs1 = 2.5509*xs1.*xs1 +0.1361*xs1-33.148*(xs1.^4-xs2.^4);
gs2 = (.0488+ 1.4619*xs1).*(xs1.*xs1+xs1.*xs2+xs2.*xs2) ...
-166.5641*(xs1.^4-xs2.^4);
gs3 = 1.0868*xs1+0.3482 -28.4641*(xs1.^4-xs2.^4);
gs4 = xs2-xs1 +0.001;
Scaling-Example
fprintf('\n\nThe solution *** Case a ***(x1*,x2*, f*, g1, g2 g3 g4):\n'), ...
disp(double([xs1 xs2 fss gs1 gs2 gs3 gs4]))
%unlike the previous case all the solutions are displayed
%
x1s=double(xs1(1));
fprintf('\n x1 = '),disp(x1s)
x2s=double(xs2(1));
fprintf('\n x2 = '),disp(x2s)
Scaling-Example
fprintf('\nConstraint:')
fprintf('\ng1: '),disp(subs(g1s))
fprintf('\ng2: '),disp(subs(g2s))
fprintf('\ng3: '),disp(subs(g3s))
fprintf('\ng4: '),disp(subs(g4s))
b2s=0.0; b4s = 0.0;
[b1s b3s]=solve(subs(grad1s),subs(grad2s),'b1s,b3s');
There are n+l+m unknowns. The same number of equations is required to solve the
problem. These are provided by the FOC or the Kuhn-Tucker conditions.
Scaling-Example
• n equations are obtained as:
∂F/∂xi = ∂f/∂xi + λ1 ∂h1/∂xi + ... + λl ∂hl/∂xi + β1 ∂g1/∂xi + ... + βm ∂gm/∂xi = 0;  i = 1,2,...,n
hk(x1, x2, ..., xn) = 0;  k = 1,2,...,l
• m equations are applied through the 2^m cases. This implies that there are 2^m
possible solutions. Each case sets either the multiplier βj or the corresponding
inequality constraint gj to zero. If the multiplier is set to zero, then the
corresponding constraint must be feasible for an acceptable solution. If the
constraint is set to zero (active constraint), then the corresponding multiplier
must be positive for a minimum.
Scaling-Example
• With this in mind, the m equations can be expressed as:
βj gj = 0;  if βj = 0 then gj ≤ 0
            if gj = 0 then βj ≥ 0
• If the above equations are not met, the design is not acceptable.
• In our example, n=2 and m=4, so there are 2^4 = 16 cases that must be investigated as
part of the Kuhn-Tucker conditions. It can also be identified that g1 and g3 are
active constraints. If g1 and g3 are active constraints, then the multipliers β1 and β3
must be positive. By the same reasoning, the multipliers associated with the inactive
constraints g2 and g4, that is β2 and β4, must be set to zero. This information on the
active constraints can be used to solve for x1* and x2*, as this is a system of two
equations in two unknowns.
Newton-Raphson Method
Minimize
f(α) = (α-1)^2 (α-2)(α-3)
Subject to
g(α): 0.75α^2 - 1.5α - 1 ≤ 0
0 ≤ α ≤ 4
Solution:
syms al f g phi phial
f = (al-1)^2*(al-2)*(al-3);
g = -1 - 1.5*al + 0.75*al*al;
phi = diff(f);     % phi(alpha) = f'(alpha)
phial = diff(phi); % phial(alpha) = f''(alpha)
% Newton-Raphson: alpha(i+1) = alpha(i) - phi(alpha(i))/phial(alpha(i))
ezplot(f,[0 4])
l1 =line([0 4],[0 0]);
Newton-Raphson Method
set(l1,'Color','k','LineWidth',1,'LineStyle','-')
hold on
ezplot(phi,[0 4])
grid
hold off
xlabel('\alpha')
ylabel('f(\alpha), \phi(\alpha)')
title('Example 5.1')
axis([0 4 -2 10])
alpha(1) = 0.5;
fprintf('iterations alpha phi(i) d(alpha) phi(i+1) f\n')
Newton-Raphson Method
for i = 1:20;
index(i) = i;
al = alpha(i);
phicur(i)=subs(phi);
delalpha(i) = -subs(phi)/subs(phial);
al = alpha(i)+delalpha(i);
phinext(i)=subs(phi);
fun(i)=subs(f);
if (i > 1)
l1=line([alpha(i-1) alpha(i)], [phicur(i-1),phicur(i)]);
set(l1,'Color','r','LineWidth',2)
pause(2)
end
if (abs(phinext(i)) <= 1.0e-08) % the convergence or the stopping criterion
disp([index' alpha' phicur' delalpha' phinext' fun'])
return
else
alpha(i+1)=al;
end
end
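The iteration alpha(i+1) = alpha(i) - phi(alpha(i))/phi'(alpha(i)) can be sketched in plain Python for this example, with the derivatives of f written out by hand (a translation of the m-file logic, not part of the original code):

```python
# f(a) = (a-1)^2 (a-2)(a-3) expands to a^4 - 7a^3 + 17a^2 - 17a + 6
phi  = lambda a: 4*a**3 - 21*a**2 + 34*a - 17    # phi(a) = f'(a)
dphi = lambda a: 12*a**2 - 42*a + 34             # phi'(a) = f''(a)

a = 0.5                                          # starting guess from the slides
for i in range(20):
    a = a - phi(a) / dphi(a)                     # Newton-Raphson update
    if abs(phi(a)) <= 1.0e-8:                    # stopping criterion from the code
        break
print(i + 1, a)                                  # converges to the critical point a = 1
```

From 0.5 the iterates move 0.5 → 0.80 → 0.94 → 0.99 → 1, the quadratic convergence typical of Newton-Raphson near a simple root of phi.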
Newton-Raphson Method
• The optimal value of the variable that minimizes the polynomial is then
considered to approximate the optimal value of the variable for the original
function.
Subject to 0 ≤ α ≤ 4
Solution: Two elements need to be understood prior to the following discussion. The first
concerns the evaluation of the polynomial, and the second concerns the inclusion of
the expression
Subject to 0 ≤ α ≤ 4
Example
• The polynomial is completely defined if the coefficients b0, b1, b2 are known. To
determine them, three data points [(α1,f1), (α2,f2), (α3,f3)] are generated from the equation
f(α) = (α-1)^2 (α-2)(α-3)
• This sets up a linear system of three equations in three unknowns by requiring that the
values of the function and the polynomial must be the same at the three points. The
solution of this system of equations is the values of the coefficients. The consideration of
the expression
Subject to 0 ≤ α ≤ 4
α = 0; f(0) = 6;   α1 = 0; f1 = 6
α = 1; f(1) = 0;   α2 = 1; f2 = 0
α = 2; f(2) = 0;   this cannot be α2 as the minimum is not yet trapped
α = 4; f(4) = 18;  α3 = 4; f3 = 18
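The quadratic through the three retained points (0,6), (1,0), (4,18) can be checked with NumPy (a sketch; polyfit is an assumed stand-in for solving the 3-by-3 linear system the slides describe):

```python
import numpy as np

alpha = np.array([0.0, 1.0, 4.0])
fval  = np.array([6.0, 0.0, 18.0])

# three points and degree 2 give an exact fit: b2*a^2 + b1*a + b0
b2, b1, b0 = np.polyfit(alpha, fval, 2)
print(b0, b1, b2)             # approximately 6, -9, 3

a_min = -b1 / (2*b2)          # vertex of the parabola
print(a_min)                  # approximately 1.5
```

The polynomial minimum at α ≈ 1.5 is then taken as the approximation to the minimizer of the original quartic.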
Scanning procedure
• A process such as that illustrated is essential as it is both
indifferent to the problem being solved and can be
programmed easily. The important requirement of any such
process is to ensure that the minimum lies between the limits
established by the procedure. This procedure is developed as
MATLAB m-file below.
format compact
% ntrials are used to bisect/double values of da
if (ns ~= 0) ntrials = ns;
else ntrials = 20; % default
end
return;
else
das = 0.5*das;
end
end
Scanning procedure
fprintf('\ncannot decrease function value \n')
• The input to the function is the name of the function m-file, for example
'Example5_1', the start value for the scan (a0), the scanning interval (da), and the
number of scanning steps (ns). The function outputs a vector of two values. The first
is the value of the variable and the second is the corresponding value of the
function.
• The function referenced by the code must be a MATLAB m file in the same
directory (Example5_1.m)
PolyApprox_1Var(functname,order,lowbound,intvlstep,ntrials)
format compact
lowval = 0.0
up = UpperBound_1Var(functname,lowbound,intvlstep,ntrials)
upval = up(1)
PolyApprox_1Var.m
if (order == 2)
val1 = lowval + (upval -lowval)*.5
f1 = feval(functname,lowval)
f2 = feval(functname,val1)
f3 = feval(functname,upval)
• The input to the function is the name of the function; the order (2 or 3) of the
approximation; lowbound - the start value of the scan passed to
UpperBound_1Var.m; intvlstep - the scanning interval passed to
UpperBound_1Var.m; ntrials - the number of scanning steps passed to
UpperBound_1Var.m.
• The output of the program is a vector of two values. The first element of the
vector is the location of the minimum of the approximating polynomial, and
the second is the function value at this location.
• The function referenced by the code must be a MATLAB m-file in the same
directory (Example5_1.m). The input for example Example5_1 is the value
at which the function needs to be computed, and its output is the value of
the function.
• Usage: Value=PolyApprox_1Var('Example5_1',2,0,1,10)
Golden Section Method
• If f2 > f1
GoldSection_1Var.m
• The code translates the algorithm for the golden section method into
MATLAB code.
• Usage: Value=GoldSection_1Var('Example5_1',0.001,0,1,10)
GoldSection_1Var.m
% Numerical Techniques - 1 D optimization
% Generic Golden Section Method - Single Variable
% copyright (code) Dr. P.Venkataraman
% An m-file to apply the Golden Section Method
%************************************
% requires: UpperBound_1Var.m
%***************************************
% the following information is passed to the function
% the name of the function 'functname'
% this function should be available as a function m-file
% and should return the value of the function
% the tolerance 0.001
GoldSection_1Var.m
% following needed for UpperBound_1Var
• The number of iterations depends on the tolerance expected in the final result and is
known prior to the start of the iterations.
• This is a significant improvement relative to the Newton-Raphson method, where the
number of iterations cannot be predicted a priori.
Example 2: Minimize
f(x1,x2,x3) = (x1-x2)^2 + 2(x2-x3)^2 + 3(x3-x1)^2
Golden Section Method
Extension for the multivariable case
Usage: UpperBound_1Var('Example5_1',0,1,10)
Usage: UpperBound_nVar('Example5_2',x,s,0,1,10)
The code is run from the command window using the following listing:
x=[0 0 0];
s=[0 0 6];
Value=GoldSection_nVar('Example5_2',0.001,x,s,0,1,10)
The result is:
Value =
0.1000 1.2000 0 0 0.5973
α(1) = 0.1; f(α(1)) = 1.2; x1 = 0; x2 = 0; x3 = 0.5973
GoldSection_nVar.m
% Ch 5: Numerical Techniques - 1 D optimization
% Golden Section Method - many variables
% copyright (code) Dr. P.Venkataraman
% An m-file to apply the Golden Section Method
%************************************
% requires: UpperBound_nVar.m
%***************************************
% the following information is passed to the function
% the name of the function 'functname'
% this function should be available as a function m-file
% and should return the value of the function
% corresponding to a design vector given a vector
% the tolerance: 0.001
% following needed for UpperBound_nVar
% the current position vector x
% the current search direction s
% the initial value lowbound
% the incremental value intvl
% the number of scanning steps ntrials
GoldSection_nVar.m
% the function returns a row vector of the following
% alpha(1),f(alpha1), design variables at alpha(1)
% for the last iteration
% sample calling statement
% GoldSection_nVar('Example5_2',0.001,[0 0 0 ],[0 0 6],0,0.1,10)
function ReturnValue = ...
GoldSection_nVar(functname,tol,x,s,lowbound,intvl,ntrials)
format compact;
% find upper bound
upval = UpperBound_nVar(functname,x,s,lowbound,intvl,ntrials);
au=upval(1); fau = upval(2);
if (tol == 0) tol = 0.0001; %default
end
eps1 = tol/(au - lowbound);
tau = 0.38197;
GoldSection_nVar.m
nmax = round(-2.078*log(eps1)); % no. of iterations
aL = lowbound; xL = x + aL*s; faL = feval(functname,xL);
a1 = (1-tau)*aL + tau*au; x1 = x + a1*s; fa1 = feval(functname,x1);
a2 = tau*aL + (1 - tau)*au; x2 = x + a2*s; fa2 = feval(functname,x2);
% storing all the four values for printing
% remember to suppress printing after debugging
fprintf('start \n')
fprintf('alpha(low) alpha(1) alpha(2) alpha(up) \n')
avec = [aL a1 a2 au;faL fa1 fa2 fau];
disp([avec])
for i = 1:nmax
if fa1 >= fa2
aL = a1; faL = fa1;
a1 = a2; fa1 = fa2;
a2 = tau*aL + (1 - tau)*au; x2 = x + a2*s;
fa2 = feval(functname,x2);
au = au; fau = fau; % not necessary -just for clarity
GoldSection_nVar.m
fprintf('\niteration '),disp(i)
fprintf('alpha(low) alpha(1) alpha(2) alpha(up) \n')
avec = [aL a1 a2 au;faL fa1 fa2 fau];
disp([avec])
else
au = a2; fau = fa2;
a2 = a1; fa2 = fa1;
a1 = (1-tau)*aL + tau*au; x1 = x + a1*s;
fa1 = feval(functname,x1);
aL = aL; faL = faL; % not necessary
fprintf('\niteration '),disp(i)
fprintf('alpha(low) alpha(1) alpha(2) alpha(up) \n')
avec = [aL a1 a2 au;faL fa1 fa2 fau];
disp([avec])
end
end
% returns the value at the last iteration
ReturnValue =[a1 fa1 x1];
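The bracket-shrinking logic of the m-file can be condensed into a small single-variable Python sketch (a fixed bracket is assumed here instead of calling UpperBound_1Var, and the test function is a simple parabola chosen for illustration):

```python
def golden_section(f, aL, au, tol=1.0e-3):
    """Shrink [aL, au] by the golden-section ratio until narrower than tol."""
    tau = 0.38197
    a1 = (1 - tau)*aL + tau*au
    a2 = tau*aL + (1 - tau)*au
    f1, f2 = f(a1), f(a2)
    while (au - aL) > tol:
        if f1 >= f2:                 # minimum lies in [a1, au]
            aL, a1, f1 = a1, a2, f2  # a2 becomes the new a1, reuse f2
            a2 = tau*aL + (1 - tau)*au
            f2 = f(a2)
        else:                        # minimum lies in [aL, a2]
            au, a2, f2 = a2, a1, f1  # a1 becomes the new a2, reuse f1
            a1 = (1 - tau)*aL + tau*au
            f1 = f(a1)
    return 0.5*(aL + au)

print(golden_section(lambda a: (a - 2.0)**2, 0.0, 4.0))   # close to 2.0
```

Each pass reuses one of the two interior function values, so only one new function evaluation is needed per iteration, the same economy the MATLAB code achieves with its if fa1 >= fa2 branch.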
Pattern Search
• The pattern search method is a minor modification of the univariate method
with a major impact. In the univariate method, each design variable
(considered a coordinate) provides a search direction.
• The SOC are hardly ever applied. One of the reasons is that it would
involve the computation of an n x n second derivative matrix which is
considered computationally expensive, particularly if the evaluation
of the objective function requires a call to a finite element method for
generating required information.
• Another reason for not calculating the Hessian is that the existence
of the second derivative in a real design problem is not certain, even
when it is computationally possible or feasible. For problems that
can be described by symbolic calculations, MATLAB should be able
to handle computation of the second derivative at the possible solution
and its eigenvalues.
Gradient Based Methods
• Without the SOC, these methods require the user's vigilance to ensure that the
solution obtained is a minimum rather than a maximum or a saddle point. A
simple way to verify this is to perturb the objective function through
perturbations in the design variables at the solution and verify that it is a local
minimum.
• This brings up an important property of these methods: they only find local
minima. Usually, the minimum found will be close to the design where the
iterations are begun. Before concluding the design exploration, it is necessary
to execute the method from several starting points to discover whether other
minima exist and to select the best one by head-to-head comparison. The bulk
of the existing unconstrained and constrained optimization methods belong to
this category.
• Four methods are presented. The first is the Steepest Descent Method.
While this method is not used in practice, it provides an excellent example
for understanding the algorithmic principles of the gradient-based
techniques. The second is the conjugate gradient technique, which is a
classical workhorse, particularly in industry usage. The third and the fourth
belong to the category of Variable Metric Methods, or Quasi-Newton methods
as they are also called.
Steepest Descent Method
• The gradient of a function at a point is the
direction of the most rapid increase in the value
of the function at that point. The descent
direction can be obtained by reversing the gradient
(or multiplying it by -1). The next step is to
regard the descent vector as a search direction,
since we are attempting to decrease the function
through successive iterations.
• SteepestDescent.m:
SteepestDescent.m
• For two variables it will draw the contour plot.
• For two variables, the design vector changes can be seen
graphically in slow motion with steps in different colour.
• The design variables, the function value, and the square of the
length of the gradient vector (called the KT value) at each iteration
are displayed in the Command window at completion of the number
of iterations.
• The gradient of the function is numerically computed using first
forward finite difference. The gradient computation is therefore
automatic.
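The two ingredients just described, a forward finite-difference gradient and a descent step along its negative, can be sketched in Python (an assumed translation of the idea behind SteepestDescent.m; the backtracking line search and its constants are choices made here, not taken from the m-file):

```python
import numpy as np

def fd_grad(f, x, h=1.0e-7):
    """Gradient by first forward finite differences, as the slides describe."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def steepest_descent(f, x, iters=2000, tol=1.0e-5):
    for _ in range(iters):
        g = fd_grad(f, x)
        if np.linalg.norm(g) < tol:        # stopping criterion on the gradient
            break
        a = 1.0                            # backtracking line search along -g
        while f(x - a*g) > f(x) - 1.0e-4*a*g.dot(g) and a > 1e-12:
            a *= 0.5
        x = x - a*g
    return x

# Example 2 objective: f = (x1-x2)^2 + 2(x2-x3)^2 + 3(x3-x1)^2
f = lambda x: (x[0]-x[1])**2 + 2*(x[1]-x[2])**2 + 3*(x[2]-x[0])**2
xs = steepest_descent(f, np.array([1.0, 2.0, 3.0]))
print(f(xs))    # approaches the minimum value 0
```

The m-file replaces the crude backtracking step with a proper one-dimensional minimization (golden section plus polynomial approximation) along the descent direction, but the skeleton is the same.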
%***************************************
% complete the other stopping criteria
%****************************************
end
len=length(as);
%for kk = 1:nvar
designvar=xs(length(as),:);