MEL 806
Thermal System Simulation (2-0-2)
Dr. Prabal Talukdar
Associate Professor
Department of Mechanical Engineering
IIT Delhi
Introduction
Univariate Search
A univariate search involves optimizing the objective function with respect to one variable at a time. The multivariable problem is therefore reduced to a series of single-variable optimization problems, with the process converging to the optimum as the variables are alternated.
Graphical presentation
The method
A starting point is chosen based on available information on the system, or as a point away from the boundaries of the region.
First, one of the variables, say x, is held constant and the function is
optimized with respect to the other variable y.
Point A represents the optimum thus obtained. Then y is held constant at the value at point A and the function is optimized with respect to x to obtain the optimum given by point B.
Again, x is held constant at the value at point B and y is varied to obtain
the optimum, given by point C.
This process is continued, alternating the variable that is varied while the others are held constant, until the optimum is attained.
This is indicated by the change in the objective function from one step to the next becoming less than a chosen convergence criterion or tolerance.
Example
The objective function U, which represents the cost of a fan and duct system, is given in terms of the design variables x and y, where x represents the fan capacity and y the duct length, as

U(x, y) = x^2/6 + 3y + 4/(xy)
Calculations

x          y          U
0.5        1.632993   9.839626
1.944161   0.828139   5.598794
2.437957   0.739531   5.427791
2.531677   0.725714   5.422513
2.547644   0.723436   5.422363
2.550314   0.723057   5.422359
2.550760   0.722994   5.422359
2.550834   0.722983   5.422359
2.550847   0.722982   5.422359
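The alternating one-variable optimizations can be sketched in code. The cost function used here, U(x, y) = x^2/6 + 3y + 4/(xy), is an assumption inferred so that it reproduces the tabulated iterates; each one-variable minimum is taken analytically by setting the corresponding partial derivative to zero.

```python
import math

def U(x, y):
    # assumed cost function, inferred to match the table above
    return x**2 / 6 + 3*y + 4 / (x*y)

def univariate_search(x=0.5, tol=1e-6):
    """Optimize one variable at a time until U changes by less than tol."""
    u_prev = float("inf")
    while True:
        y = math.sqrt(4 / (3*x))   # dU/dy = 3 - 4/(x*y^2) = 0 for fixed x
        x = (12 / y) ** (1/3)      # dU/dx = x/3 - 4/(x^2*y) = 0 for fixed y
        u = U(x, y)
        if abs(u_prev - u) < tol:
            return x, y, u
        u_prev = u

x, y, u = univariate_search()
print(f"x = {x:.6f}, y = {y:.6f}, U = {u:.6f}")   # ≈ 2.5508, 0.7230, 5.4224
```

Starting from x = 0.5, the successive (x, y, U) values follow the table, converging once the change in U drops below the chosen tolerance.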
Steepest Ascent/Descent Method
The steepest ascent/descent method is a very efficient search method for multivariable optimization and is widely used for a variety of applications, including thermal systems.
It is a hill-climbing technique in that it attempts to move
toward the peak, for maximizing the objective function, or
toward the valley, for minimizing the objective function,
over the shortest possible path.
The method is termed steepest ascent in the former
case and steepest descent in the latter.
Steepest ascent method, shown in terms of (a) the climb toward the peak of a hill and (b) constant U contours.
At each trial point, the gradient vector is determined and the search is moved along this vector, the direction being chosen so that U increases if a maximum is sought, or U decreases if a minimum is of interest.
The direction represented by the gradient vector is given by the relationship between the changes in the independent variables. Denoting these by x1, x2, ..., xn, we have

Δx1/(∂U/∂x1) = Δx2/(∂U/∂x2) = ... = Δxn/(∂U/∂xn)
First approach
Choose a starting point. Select Δx. Calculate the derivatives.
Decide the direction of movement, i.e., whether Δx is positive or negative.
Calculate Δy. Obtain the new values of x, y, and U.
Calculate the derivatives again at this point. Repeat the previous steps to attain a new point.
This procedure is continued until the change in the variables between two consecutive iterations is within a desired convergence criterion.
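A minimal sketch of this first approach, again assuming the cost function U(x, y) = x^2/6 + 3y + 4/(xy) inferred from the univariate-search table. One safeguard is added here that is not in the description above: the step is scaled so that the larger of the two variable changes equals Δx, which prevents Δy from blowing up when ∂U/∂x is near zero.

```python
def U(x, y):
    # assumed cost function (inferred from the univariate-search table)
    return x**2 / 6 + 3*y + 4 / (x*y)

def grad(x, y):
    # partial derivatives dU/dx and dU/dy
    return x/3 - 4/(x**2 * y), 3 - 4/(x * y**2)

def steepest_descent(x=0.5, y=0.5, dx=0.001, max_iter=100_000):
    """Fixed-increment steepest descent: move against the gradient, with
    the larger of the two variable changes fixed at dx."""
    u = U(x, y)
    for _ in range(max_iter):
        ux, uy = grad(x, y)
        g = max(abs(ux), abs(uy))
        if g == 0:                 # stationary point reached
            break
        xn, yn = x - dx * ux / g, y - dx * uy / g
        un = U(xn, yn)
        if un >= u:                # step no longer reduces U: near the minimum
            break
        x, y, u = xn, yn, un
    return x, y, u

x, y, u = steepest_descent()
print(f"x = {x:.4f}, y = {y:.4f}, U = {u:.4f}")
```

With Δx = 0.001 the search settles close to the univariate-search optimum; the attainable accuracy is on the order of the chosen Δx.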
Second Approach
Choose a starting point.
Calculate the derivatives.
Decide the direction of movement, i.e., whether x must increase or decrease.
Vary x, using a chosen step size Δx and calculating the corresponding Δy. Continue to vary x until the optimum in U is reached.
Obtain the new values of x, y, and U. Calculate the derivatives again at this point and move in the direction given by the derivatives.
This procedure is continued until the change in the variables from one trial point to the next is within a desired amount.
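A sketch of the second approach under the same assumed cost function U(x, y) = x^2/6 + 3y + 4/(xy): at each stage the gradient fixes a direction, x and y are marched along that direction in increments of Δx until U stops decreasing, and only then is a fresh gradient computed.

```python
def U(x, y):
    # assumed cost function (inferred from the univariate-search table)
    return x**2 / 6 + 3*y + 4 / (x*y)

def grad(x, y):
    # partial derivatives dU/dx and dU/dy
    return x/3 - 4/(x**2 * y), 3 - 4/(x * y**2)

def steepest_descent_linesearch(x=0.5, y=0.5, dx=0.001, max_outer=1000):
    u = U(x, y)
    for _ in range(max_outer):
        ux, uy = grad(x, y)
        g = max(abs(ux), abs(uy))
        if g == 0:                             # stationary point reached
            break
        sx, sy = -dx * ux / g, -dx * uy / g    # increment along descent line
        moved = False
        for _ in range(100_000):               # march until U stops decreasing
            un = U(x + sx, y + sy)
            if un >= u:
                break
            x, y, u = x + sx, y + sy, un
            moved = True
        if not moved:                          # no step improves U: done
            break
    return x, y, u

x, y, u = steepest_descent_linesearch()
print(f"x = {x:.4f}, y = {y:.4f}, U = {u:.4f}")
```

Compared with the first approach, far fewer gradient evaluations are needed, since each gradient is reused for an entire march along its direction.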
Example Problem
Consider the simple problem discussed before and apply the two approaches just discussed for the steepest ascent/descent method to obtain the minimum cost U.
The starting point is taken as x = y = 0.5, and results are obtained for different values of Δx.
Multivariable Constrained
Optimization
We now come to the problem of constrained
optimization, which is much more involved than
the various unconstrained optimization cases
considered thus far.
The number of independent variables must be larger than the number of equality constraints; otherwise, these constraints may simply be used to determine the variables, and no optimization is possible.
Penalty Function Method
The basic approach of this method is to convert the constrained problem into an unconstrained one by constructing a composite function using the objective function and the constraints.
Let us consider the optimization problem given by the equations

U(x1, x2, ..., xn) → optimum
subject to
Gi(x1, x2, ..., xn) = 0,  i = 1, 2, ..., m
Example Problem
In a two-component system, the cost is the objective function given by the expression

U(x, y) = 2x^2 + 5y

where x and y represent the specifications of the two components. These variables are also linked by mass conservation to yield the constraint

G(x, y) = xy - 12 = 0

Solve this problem by the penalty function method to obtain minimum cost.
The new objective function V(x, y), consisting of the objective function and the constraint, is defined as

V(x, y) = 2x^2 + 5y + r(xy - 12)^2

where r is the penalty parameter.
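A sketch of the penalty-function solution: for each penalty parameter r, the composite function V(x, y) = 2x^2 + 5y + r(xy - 12)^2 is minimized as an unconstrained problem, and r is increased so the constraint is enforced ever more tightly. The gradient-descent routine, the backtracking line search, the schedule of r values, and the clamp keeping the iterates in the physically meaningful region x, y > 0 are all implementation choices assumed here, not from the original.

```python
def penalty_min(r, x, y, steps=20_000):
    """Minimize V(x, y) = 2x^2 + 5y + r*(x*y - 12)^2 by gradient descent
    with a backtracking (Armijo) line search."""
    def V(x, y):
        return 2*x*x + 5*y + r*(x*y - 12)**2
    for _ in range(steps):
        gx = 4*x + 2*r*(x*y - 12)*y    # dV/dx
        gy = 5 + 2*r*(x*y - 12)*x      # dV/dy
        t, g2 = 1.0, gx*gx + gy*gy
        # shrink the step until it gives a sufficient decrease in V
        while t > 1e-16 and V(x - t*gx, y - t*gy) > V(x, y) - 0.5*t*g2:
            t *= 0.5
        # clamp to keep the iterates in the physical region x, y > 0
        x, y = max(x - t*gx, 1e-3), max(y - t*gy, 1e-3)
    return x, y

x, y = 1.0, 1.0
for r in (1.0, 10.0, 100.0):
    x, y = penalty_min(r, x, y)
    print(f"r={r:5.0f}  x={x:.4f}  y={y:.4f}"
          f"  U={2*x*x + 5*y:.4f}  G={x*y - 12:.4f}")
```

As r grows, the constraint residual G shrinks and (x, y) approaches the constrained optimum of U subject to xy = 12; for finite r the constraint is only approximately satisfied, which is characteristic of the penalty approach.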