• In the linearization of the function f about the current value of the design Xp, only
the first variation is used. The neighboring value of the function can be
expressed as:

f~(X) ≈ f(Xp) + ∇f(Xp)^T ΔX,   ΔX = X - Xp
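As a numeric illustration of this first-order expansion, here is a hedged Python sketch (the book works in MATLAB); the function and the expansion point Xp = (3, 2) are the ones used with Example 7.1 in this chapter:

```python
# First-order (linear) expansion of the Example 7.1 objective
# f(x1, x2) = x1^4 - 2 x1^2 x2 + x1^2 + x1 x2^2 - 2 x1 + 4
# about the current design Xp = (3, 2).

def f(x1, x2):
    return x1**4 - 2*x1**2*x2 + x1**2 + x1*x2**2 - 2*x1 + 4

def grad_f(x1, x2):
    # analytic partial derivatives of f
    df_dx1 = 4*x1**3 - 4*x1*x2 + 2*x1 + x2**2 - 2
    df_dx2 = -2*x1**2 + 2*x1*x2
    return df_dx1, df_dx2

xp = (3.0, 2.0)
fp = f(*xp)                      # f at the expansion point
g1, g2 = grad_f(*xp)             # gradient at the expansion point

def f_tilde(x1, x2):
    # linearized (first-variation) approximation of f about Xp
    return fp + g1*(x1 - xp[0]) + g2*(x2 - xp[1])

# compare the linear model against the true f near Xp
print(f(3.1, 2.05), f_tilde(3.1, 2.05))
```

Near Xp the linear model tracks f closely; the error grows with the distance ΔX, which is why SLP re-expands the functions about each new design.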
Direct methods for constrained optimization
• This is illustrated in the figure using the objective function expanded about the
current design x1 = 3 and x2 = 2. The curved lines f(X) are the contours of the
objective function below. The straight lines f~(X) are the contours of the
linearized function (lines of constant value of the function), obtained through
the following:
• The expansion of the functions about the current design xb requires their
values and gradients there, which can be obtained using
f1 = double(subs(f,{x1,x2},{xb(1),xb(2)}));
g1 = double(subs(g,{x1,x2},{xb(1),xb(2)}));
h1 = double(subs(h,{x1,x2},{xb(1),xb(2)}));
gf1 = double(subs(gradf1,{x1,x2},{xb(1),xb(2)}));
gf2 = double(subs(gradf2,{x1,x2},{xb(1),xb(2)}));
gh1 = double(subs(gradh1,{x1,x2},{xb(1),xb(2)}));
gh2 = double(subs(gradh2,{x1,x2},{xb(1),xb(2)}));
gg1 = double(subs(gradg1,{x1,x2},{xb(1),xb(2)}));
gg2 = double(subs(gradg2,{x1,x2},{xb(1),xb(2)}));
x11 = -5:.2:5;
x22 = -5:.2:5;
x1len = length(x11);
x2len = length(x22);
for i = 1:x1len
    for j = 1:x2len
        xbv = [x11(i) x22(j)];
        fnew(j,i) = f1 + [gf1 gf2]*xbv';
        hnew(j,i) = h1 + [gh1 gh2]*xbv';
        gnew(j,i) = g1 + [gg1 gg2]*xbv';
    end
end
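The same grid construction can be sketched in Python (hedged: the point values and gradients below are Example 7.1's f, h, and g evaluated by hand at the design (3, 2); the book computes them with symbolic MATLAB):

```python
# Build the linearized f, h, g on a grid of design changes (dx1, dx2),
# mirroring the MATLAB double loop above.

f1, h1, g1 = 64.0, 11.0, 4.25          # f, h, g at the expansion point (3, 2)
gf = (92.0, -6.0)                      # gradient of f at (3, 2)
gh = (6.0, 4.0)                        # gradient of h at (3, 2)
gg = (1.5, 3.0)                        # gradient of g at (3, 2)

dx = [-5 + 0.2*k for k in range(51)]   # same grid as -5:.2:5

# fnew[j][i] mirrors fnew(j,i): rows follow dx2, columns follow dx1
fnew = [[f1 + gf[0]*x1 + gf[1]*x2 for x1 in dx] for x2 in dx]
hnew = [[h1 + gh[0]*x1 + gh[1]*x2 for x1 in dx] for x2 in dx]
gnew = [[g1 + gg[0]*x1 + gg[1]*x2 for x1 in dx] for x2 in dx]

# at (dx1, dx2) = (0, 0) each linearization returns its point value
print(fnew[25][25], hnew[25][25], gnew[25][25])
```

These arrays are exactly the data the contour calls below plot: straight-line contours of the linearized functions over the change in design.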
minf = min(min(fnew));
maxf = max(max(fnew));
mm = (maxf - minf)/5.0;
mvect = [(minf+mm) (minf+1.5*mm) (minf+2*mm) (minf+2.5*mm) ...
    (minf+3.0*mm) (minf+4*mm) (minf+4.5*mm)];
[c1,fc] = contour(x11,x22,fnew,mvect,'b'); clabel(c1);
set(fc,'LineWidth',2)
grid
xlabel('\delta x_1')
ylabel('\delta x_2')
title('Example 7.1: Sequential Linear Programming')
hold on
[c2,hc]=contour(x11,x22,hnew,[0,0],'g');
set(hc,'LineWidth',2,'LineStyle','--')
grid
[c3,gc]=contour(x11,x22,gnew,[0,0],'r');
set(gc,'LineWidth',2,'LineStyle',':')
contour(x11,x22,gnew,[1,1],'k')
grid
hold off
Indirect methods for constrained optimization
• These methods were developed to take advantage of
codes that solve unconstrained optimization problems.
They are also referred to as Sequential Unconstrained
Minimization Techniques (SUMT).
• The idea behind the approach is to repeatedly call the unconstrained
optimization algorithm, each time starting from the solution of the previous
iteration. The unconstrained algorithm itself executes many iterations, so a
robust unconstrained minimizer is required to handle a large class of
problems. The BFGS method has proven robust over a large class of problems.
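The outer SUMT loop can be sketched as follows (a hedged Python sketch, not the book's code: plain gradient descent stands in for a robust minimizer like BFGS, and the toy problem, minimize (x - 2)^2 subject to h(x) = x - 1 = 0, is an assumption for illustration):

```python
def grad_descent(dF, x0, lipschitz, iters=200):
    # stand-in for a robust unconstrained minimizer (the text uses BFGS);
    # the step is sized from the known curvature to keep the sketch stable
    x, step = x0, 0.9 / lipschitz
    for _ in range(iters):
        x = x - step * dF(x)
    return x

x, r = 0.0, 1.0
for _ in range(10):                                # SUMT outer iterations
    dF = lambda x, r=r: 2*(x - 2) + 2*r*(x - 1)    # gradient of (x-2)^2 + r*(x-1)^2
    x = grad_descent(dF, x, lipschitz=2 + 2*r)     # warm start from previous x
    r *= 3.0                                       # stiffen the penalty
print(x)                                           # approaches the constrained minimum x = 1
```

Each outer iteration starts the inner minimizer from the previous solution, so the unconstrained subproblems get easier even as the penalty grows.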
• A preprocessing task involves transforming the constrained problem into an
unconstrained one. This is largely accomplished by augmenting the objective
function with additional functions that grow significantly when the constraints
are violated. These functions are referred to as penalty functions.
• There was significant activity in this area at one time, which led to entire
families of penalty function methods. The first of these was the Exterior
Penalty Function (EPF) method. The first version of ANSYS that incorporated
optimization in its finite element program relied on the EPF. The EPF had
several shortcomings.
• In this chapter, only the EPF is addressed, largely out of academic interest.
In view of the excellent performance of the direct methods, these methods
will probably not be used today for continuous problems. They are once
again important in global optimization techniques for constrained problems.
• The second method presented in this section, the Augmented Lagrangian
Method (ALM), is the best of the Sequential Unconstrained Minimization
Techniques. Its exceedingly simple implementation, its quality of solution,
and its ability to generate information on the Lagrange multipliers allow it
to seriously challenge the direct techniques.
Exterior Penalty Function Method
• The optimization problem is transformed to an unconstrained one:

Minimize F(X, rh, rg) = f(X) + P(X, rh, rg)

where P(X, rh, rg) is the penalty function, and rh and rg are penalty constants (also
called multipliers).
• The penalty function is expressed as:

P(X, rh, rg) = rh * Σk hk(X)^2 + rg * Σj [max(0, gj(X))]^2

In the above equation, if the equality constraints are not zero, their values are
squared, multiplied by the penalty parameter, and added to the objective
function. If an inequality constraint is in violation, it too is squared and added to
the objective function after being amplified by the penalty multiplier. In a sense,
if the constraints are not satisfied they are penalized, hence the function's name.
• It can be shown that the transformed unconstrained problem solves the original
constrained problem as the multipliers rh and rg approach infinity.
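The penalty function described above can be written out directly. A minimal Python sketch (the book uses MATLAB), using Example 7.1's constraints h = x1^2 + x2^2 - 2 and g = 0.25 x1^2 + 0.75 x2^2 - 1 (g is satisfied when g <= 0):

```python
def P(x1, x2, rh, rg):
    h = x1**2 + x2**2 - 2
    g = 0.25*x1**2 + 0.75*x2**2 - 1
    # equality: always squared; inequality: squared only when violated (g > 0)
    return rh*h**2 + rg*max(0.0, g)**2

print(P(1.0, 1.0, 5, 5))   # feasible point: h = 0, g = 0, so no penalty
print(P(2.0, 2.0, 5, 5))   # infeasible point: h = 6, g = 3, heavily penalized
```

At a feasible design the penalty vanishes and F reduces to f; away from feasibility the penalty dominates, which is what drives the unconstrained minimizer back toward the constraints.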
• The solution returned from DFP will solve the above equation for a known
value of the multipliers. The solution returned from DFP can therefore be
considered a function of the multipliers:

X* = X*(rh, rg)
• One reason for the term Exterior Penalty is that at the end of each
SUMT iteration the design will be infeasible (until the solution is
obtained). This implies that the method determines design values
that approach the feasible region from the outside. This is a
serious drawback if the method fails prematurely, as it often does:
the information generated so far is valueless because the designs were
never feasible.
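The exterior behavior can be seen in closed form on a hedged toy problem (an assumption for illustration, not the chapter's example): minimize (x - 2)^2 subject to h(x) = x - 1 = 0. The penalized minimizer is x*(r) = (2 + r)/(1 + r), so h(x*(r)) = 1/(1 + r) > 0 for every finite r:

```python
# Every intermediate SUMT design violates the constraint; feasibility
# is only approached from the outside as the penalty r grows.
for r in (1.0, 10.0, 100.0, 1000.0):
    x_star = (2 + r) / (1 + r)         # minimizer of (x-2)^2 + r*(x-1)^2
    print(r, x_star - 1)               # constraint violation h(x*) > 0
```

If the run is stopped early, none of the designs produced so far satisfy the constraint, which is exactly the drawback described above.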
• As seen in the example below, the EPF severely increases the nonlinearity
of the problem creating conditions for the method to fail.
• It is expected that the increase in nonlinearity is balanced by a closer
starting value for the design, as each SUMT iteration starts closer to the
solution than the previous one.
• In the following slides, the EPF method is applied to an example through a
series of calculations rather than through the translation of the algorithm
into MATLAB code.
• There are a couple of changes with respect to algorithm A7.2. To resolve
the penalty function with respect to the inequality constraint, the constraint
is assumed to always be in violation so that the return from the max
function is the constraint function itself. This will drive the inequality
constraint to be active which we know to be true for this example.
• Numerical implementation as outlined in the algorithm should allow the
determination of inactive constraints.
• Instead of the numerical implementation of the unconstrained problem, an
analytical solution is determined using MATLAB symbolic computation.
• The figure below is the contour plot of the original graphical solution for
the example.
• The figure below is the contour plot of the transformed unconstrained
function for values of rh = 1 and rg = 1. The increase in nonlinearity is
readily apparent.
• The figure below is the plot for rh = 5 and rg = 5. This and the previous figure
suggest several points that satisfy the first-order conditions. Their closeness
makes it difficult for any numerical technique to find the optimum. It is
clear that the EPF severely increases the nonlinearity of the problem.
• The code below uses symbolic manipulation for the exterior
penalty function method. Since the code uses symbolic
manipulation, actually drawing the plot takes some time.
Evaluating the data numerically will make a big difference.
• The code is an m file that will calculate the solution and the
values of the function for a predetermined set of values of the
penalty multipliers.
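The numeric evaluation the text recommends can be sketched in Python (hedged: plain function handles replace the symbolic subs calls; rh = rg = 5, matching the listing):

```python
# Evaluate the penalized function F = f + rh*h^2 + rg*max(0,g)^2
# numerically on the plotting grid, instead of symbolic substitution.

def f(x1, x2):
    return x1**4 - 2*x1**2*x2 + x1**2 + x1*x2**2 - 2*x1 + 4

def F(x1, x2, rh=5.0, rg=5.0):
    h = x1**2 + x2**2 - 2
    g = max(0.0, 0.25*x1**2 + 0.75*x2**2 - 1)   # penalize only violations
    return f(x1, x2) + rh*h**2 + rg*g**2

xs = [-2 + 0.2*k for k in range(36)]            # same grid as -2:.2:5
Fval = [[F(x1, x2) for x1 in xs] for x2 in xs]  # rows follow x2, as in MATLAB
print(Fval[15][15])                              # F at (x1, x2) = (1, 1)
```

At (1, 1) both constraints evaluate to zero, so F reduces to f = 3; this is why the contour levels in the listing start just above 3.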
• The code requires two inputs from the user at different stages. The first
input is the values for the multipliers for which the solution should be
obtained.
• The list of solutions (there are nine for this problem) is displayed in the
Command Window.
• The user identifies the solution that satisfies the side constraints and enters
it at the prompt (note: it must be entered as a vector). The program
then prints out the values of the various functions involved in the example.
% Optimization with MATLAB; Dr. P. Venkataraman
% Chapter 7, Section 7.2.1
% Exterior Penalty Function Method
% Example 7.1
% Symbolic calculations and plotting
format compact
syms x1 x2 rh rg f g h F grad1 grad2
f = x1^4 - 2*x1*x1*x2 + x1*x1 + x1*x2*x2 - 2*x1 + 4;
h = x1*x1 + x2*x2 - 2;
g = 0.25*x1*x1 +0.75*x2*x2 -1;
%F = f + rh*h*h + rg*g*g;
%grad1 = diff(F,x1);
%grad2 = diff(F,x2);
% choose values for rh and rg
rh = 5; rg = 5;
x11 = -2:.2:5;
x22 = -2:.2:5;
x1len = length(x11);
x2len = length(x22);
for i = 1:x1len
    for j = 1:x2len
        gval = subs(g,{x1 x2},{x11(i) x22(j)});
        if gval < 0
            gval = 0;
        end
        hval = subs(h,{x1 x2},{x11(i) x22(j)});
        Fval(j,i) = subs(f,{x1 x2},{x11(i) x22(j)}) ...
            + rg*gval*gval + rh*hval*hval;
    end
end
c1 = contour(x11,x22,Fval,[3.1 4 5 6 10 20 50 100 200 500]); clabel(c1);
grid
xlabel('x_1')
ylabel('x_2')
strng = strcat('Example 7.1: r_h = ', num2str(rh), ', r_g = ', num2str(rg));
title(strng)
• The following is posted from the command window for both of the penalty
multipliers set to 25.
• In the above run, the equality and the inequality constraints are not
satisfied. Another m file for implementing the Exterior Penalty Function
method is pasted below:
X* = X*(λ, β, rh, rg)
• At the end of each SUMT iteration, the values of the multipliers and penalty
constants are updated. The latter are usually geometrically scaled, but unlike
the EPF they do not have to be driven to infinity for convergence.
Augmented Lagrangian Method
ALM.m code
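The ALM.m listing itself is not reproduced here, but the multiplier update it implements can be sketched in Python on a hedged toy problem (an assumption for illustration): minimize (x - 2)^2 subject to h(x) = x - 1 = 0, with augmented function A = f + λ h + rh h^2:

```python
def inner_min(lam, rh):
    # stationary point of A: 2(x - 2) + lam + 2*rh*(x - 1) = 0
    return (4 - lam + 2*rh) / (2 + 2*rh)

lam, rh = 0.0, 1.0                     # penalty stays FIXED: no rh -> infinity
for _ in range(30):                    # ALM outer iterations
    x = inner_min(lam, rh)             # unconstrained minimization of A
    lam += 2*rh*(x - 1)                # multiplier update from the residual h(x)
print(x, lam)
```

The iteration converges to x = 1 and λ = 2, the true Lagrange multiplier (from 2(x - 2) + λ = 0 at x = 1), with a modest fixed penalty. This is the key advantage over the EPF noted above: the multiplier update, not an unbounded penalty, enforces the constraint.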