
OPERATIONS RESEARCH
Prof. M. P. Biswal, Department of Mathematics, IIT Kharagpur, Kharagpur-721302
E-mail: mpbiswal@maths.iitkgp.ernet.in

Optimization is the act of obtaining the best result under given restrictions. In several engineering design problems, engineers have to take many technological and managerial decisions at several stages. The objective of such decisions is either to minimize the effort required or to maximize the desired benefit.

The optimum-seeking methods are known as Optimization Techniques. They form part of Operations Research (OR). OR is a branch of Mathematics concerned with techniques for finding best solutions.

SOME APPLICATIONS: 1. Optimal Design of Solar Systems, 2. Electrical Network Design, 3. Energy Model and Planning, 4. Optimal Design of Components of a System,

5. Planning and Analysis of Existing Operations, 6. Optimal Design of Motors, Generators and Transformers, 7. Design of Aircraft for Minimum Weight, 8. Optimal Design of Bridge and Building.

Optimization Techniques are divided into two different types, namely Linear Models and Non-Linear Models. First we shall discuss all the Linear Models. Later we shall discuss the Non-Linear Models. The mathematical statement of a linear model is as follows:

Find x1, x2, x3, ..., xn so as to

max: Z = ∑_{j=1}^{n} c_j x_j    (1)

subject to

∑_{j=1}^{n} a_ij x_j ≤ b_i,  i = 1, 2, 3, ..., m    (2)

x_j ≥ 0,  j = 1, 2, 3, ..., n    (3)

Linear Models are known as Linear Programming Problems (LPP).


(LPP-I):
max: Z = ∑_{j=1}^{n} c_j x_j    (4)
subject to
∑_{j=1}^{n} a_ij x_j (≤, =, ≥) b_i,  i = 1, 2, 3, ..., m    (5)
x_j ≥ 0,  j = 1, 2, 3, ..., n    (6)

(LPP-II):
min: Z = ∑_{j=1}^{n} c_j x_j    (7)
subject to
∑_{j=1}^{n} a_ij x_j (≤, =, ≥) b_i,  i = 1, 2, 3, ..., m    (8)
x_j ≥ 0,  j = 1, 2, 3, ..., n    (9)

After introducing slack, surplus and artificial variables, a LPP can be put in standard form.

(1.) Add a slack variable x_{n+i} for
∑_{j=1}^{n} a_ij x_j ≤ b_i,  b_i ≥ 0:
∑_{j=1}^{n} a_ij x_j + x_{n+i} = b_i,  x_{n+i} ≥ 0

(2.) Subtract a surplus variable x_{n+i} and add an artificial variable x_{n+i+1}, where x_{n+i}, x_{n+i+1} ≥ 0, for
∑_{j=1}^{n} a_ij x_j ≥ b_i,  b_i ≥ 0:
∑_{j=1}^{n} a_ij x_j - x_{n+i} + x_{n+i+1} = b_i

(3.) Add an artificial variable x_{n+i} for
∑_{j=1}^{n} a_ij x_j = b_i,  b_i ≥ 0:
∑_{j=1}^{n} a_ij x_j + x_{n+i} = b_i,  x_{n+i} ≥ 0.
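As a small illustration of rules (1.)-(3.), the following Python sketch builds the augmented constraint matrix; the helper name to_standard_form, the sense strings '<=', '=', '>=' and the NumPy layout are choices of this sketch rather than anything prescribed in the notes.

```python
import numpy as np

def to_standard_form(A, b, senses):
    """Augment A x (<=, =, >=) b into A_std x = b using slack, surplus and
    artificial columns, following rules (1.)-(3.) above. senses[i] is one of
    '<=', '=', '>='; every b_i is assumed to be >= 0."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    extra_cols = []        # columns appended to the right of A
    artificial_idx = []    # positions of artificial variables (needed later by Big-M / Phase-I)
    for i, sense in enumerate(senses):
        if sense == '<=':                      # rule (1.): one slack column
            col = np.zeros(m); col[i] = 1.0
            extra_cols.append(col)
        elif sense == '>=':                    # rule (2.): surplus (-1) plus artificial (+1)
            col = np.zeros(m); col[i] = -1.0
            extra_cols.append(col)
            art = np.zeros(m); art[i] = 1.0
            extra_cols.append(art)
            artificial_idx.append(n + len(extra_cols) - 1)
        else:                                  # rule (3.): artificial column only
            art = np.zeros(m); art[i] = 1.0
            extra_cols.append(art)
            artificial_idx.append(n + len(extra_cols) - 1)
    A_std = np.hstack([A, np.column_stack(extra_cols)]) if extra_cols else A
    return A_std, np.asarray(b, dtype=float), artificial_idx

# Constraints of EXAMPLE-2 further below: 2x1 + 3x2 = 12 and 2x1 + x2 >= 8.
A_std, b_std, art = to_standard_form([[2, 3], [2, 1]], [12, 8], ['=', '>='])
print(A_std)   # three extra columns: artificial for row 1; surplus and artificial for row 2
print(art)     # indices of the artificial variables
```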

After introducing slack, surplus and artificial variables, a LPP can be put in the standard form:

(LPP-I):
max: Z = ∑_{j=1}^{N} c_j x_j    (10)
subject to
∑_{j=1}^{N} a_ij x_j = b_i,  i = 1, 2, 3, ..., m    (11)
x_j ≥ 0,  j = 1, 2, 3, ..., N    (12)

(LPP-II):
min: Z = ∑_{j=1}^{N} c_j x_j    (13)
subject to
∑_{j=1}^{N} a_ij x_j = b_i,  i = 1, 2, 3, ..., m    (14)
x_j ≥ 0,  j = 1, 2, 3, ..., N    (15)

SOLUTION PROCEDURES: A LPP can be solved by the following methods: 1. Graphical Method (Only for 2-variable problems), 2. Simplex Method,

3. Big-M Method/ Charnes Penalty Method, 4. Two-Phase Simplex Method, 5. Revised Simplex Method, 6. Dual Simplex Method, 7. Primal-Dual Simplex Method, 8. Interior Point Method.

BASIC SOLUTION: Given a system AX = b of m linear equations in n variables (n > m), the system is consistent and has infinitely many solutions if r(A) = m, m < n,

i.e. the rank of A is m where m < n.

We may select any m variables out of the n variables and set the remaining (n - m) variables to zero. The system AX = b becomes B X_B = b, where |B| ≠ 0. Then it has the solution X_B = B^{-1} b, and X_B is called a basic solution. The maximum possible number of basic solutions is C(n, m) = n! / (m! (n - m)!).

EXAMPLE: Find the Basic Solutions of:
x1 + x2 + x3 = 10
x1 + 4x2 + x4 = 16

Sl.   Non-Basic Variables      Basic Variables
1.    x1 = 0, x2 = 0           x3 = 10, x4 = 16
2.    x1 = 0, x3 = 0           x2 = 10, x4 = -24
3.    x1 = 0, x4 = 0           x2 = 4,  x3 = 6
4.    x2 = 0, x3 = 0           x1 = 10, x4 = 6
5.    x2 = 0, x4 = 0           x1 = 16, x3 = -6
6.    x3 = 0, x4 = 0           x1 = 8,  x2 = 2

There are six Basic Solutions. Only four are Basic Feasible Solutions. Sl. No. (2) and (5) are not Basic Feasible Solutions (B.F.S.).
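The table above can be reproduced mechanically. The short NumPy sketch below (an illustration only; the 1e-12 tolerance is an arbitrary choice here) enumerates all C(4, 2) = 6 choices of basic variables and flags which basic solutions are feasible.

```python
import numpy as np
from itertools import combinations

# The system A x = b of the example: x1 + x2 + x3 = 10, x1 + 4x2 + x4 = 16.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 4.0, 0.0, 1.0]])
b = np.array([10.0, 16.0])
m, n = A.shape

for basis in combinations(range(n), m):        # choose m basic columns out of n
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:          # |B| = 0: this choice gives no basic solution
        continue
    x_B = np.linalg.solve(B, b)                # X_B = B^{-1} b
    feasible = bool(np.all(x_B >= -1e-12))     # a B.F.S. additionally needs X_B >= 0
    names = ", ".join(f"x{i + 1} = {v:g}" for i, v in zip(basis, x_B))
    print(names, "-> B.F.S." if feasible else "-> not feasible")
```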

Some Definitions and Theorems: Point in n-dimensional space: A point X = (x1, x2, x3, ..., xn)^T has n coordinates x_i, i = 1, 2, 3, ..., n. Each of them is a real number.

Line Segment in n-dimensions: Let X1 be the coordinates of A and X2 be the coordinates of B. The line segment joining these two points is given by X(λ), i.e. L = {X(λ) | X(λ) = λ X1 + (1 - λ) X2, 0 ≤ λ ≤ 1}.

Hyper-plane: A hyper-plane H is defined as: H = {X | C^T X = b}, i.e. c1 x1 + c2 x2 + ... + cn xn = b. A hyper-plane has (n - 1) dimensions in an n-dimensional space. In 2-dimensional space a hyper-plane is a line.

In 3-dimensional space it is a plane. A hyper-plane divides the n-dimensional space into two closed half spaces:
(i) c1 x1 + c2 x2 + ... + cn xn ≤ b
(ii) c1 x1 + c2 x2 + ... + cn xn ≥ b

Convex Set: A convex set S is a collection of points such that if X1 and X2 are any two points in the set, the line segment joining them is also in the set S. Let X = λ X1 + (1 - λ) X2, 0 ≤ λ ≤ 1. If X1, X2 ∈ S, then X ∈ S.

Convex Polyhedron and Polytope: A convex polyhedron is a set S (a set of points) which is common to one or more half spaces. A convex polyhedron that is bounded is called a convex polytope.

Extreme Point: It is a point in the convex set S which does not lie on a line segment joining two other points of the set. Feasible Solution: In a LPP, any solution X which satisfies AX = b and X ≥ 0 is called a feasible solution.

Basic Solution: This is a solution in which (n - m) variables are set equal to zero in AX = b. It has m equations and n unknowns, n > m. Basis: The collection of variables which are not set equal to zero to obtain the basic solution is the basis.

Basic Feasible Solution (B.F.S.): A basic solution which satisfies the condition X ≥ 0 is called a B.F.S. Non-Degenerate B.F.S.: It is a B.F.S. which has exactly m positive x_i out of n. Optimal Solution: A B.F.S. which optimizes (Max/Min) the objective function is called an optimal solution.

Theorem 1: The intersection of any number of convex sets is also convex.
Proof: Let R1, R2, ..., Rk be convex sets and let their intersection be R, i.e.
R = ∩_{i=1}^{k} R_i

[Figure: two overlapping convex sets R1 and R2 with points X1 and X2 lying in the intersection R = R1 ∩ R2.]

Let X1 and X2 ∈ R. Then X1, X2 ∈ R_i for each i, and since every R_i is convex, X = λ X1 + (1 - λ) X2 ∈ R_i, 0 ≤ λ ≤ 1, for i = 1, 2, ..., k. Hence
X ∈ R = ∩_{i=1}^{k} R_i

Theorem 2: The feasible region of a LPP forms a convex set.
Proof: The feasible region of a LPP is defined as: S = {X | AX = b, X ≥ 0}.

Let the points X1 and X2 be in the feasible set S, so that AX1 = b, X1 ≥ 0; AX2 = b, X2 ≥ 0. Let X = λ X1 + (1 - λ) X2. Now we have:
A[λ X1 + (1 - λ) X2] = λ b + (1 - λ) b = b
⟹ AX = b.

Thus the point X satisfies the constraints if 0 ≤ λ ≤ 1, since λ ≥ 0 and 1 - λ ≥ 0 give X ≥ 0.

Theorem 3: In general a LPP has either one optimal solution, or no optimal solution, or an infinite number of optimal solutions. Any local minimum/maximum solution is a global minimum/maximum solution of a LPP.

(LPP-I)
max: Z = C^T X    (16)
subject to AX = b    (17)
X ≥ 0    (18)
X is a maximizing point of the LPP.

(LPP-II)
min: Z = C^T X    (19)
subject to AX = b    (20)
X ≥ 0    (21)
X is a minimizing point of the LPP.

Theorem 4: Every B.F.S. is an extreme point of the convex set of the feasible region.
Proof: Let X = (x1, x2, x3, ..., xm, x_{m+1}, x_{m+2}, ..., xn)^T be a B.F.S. of the LPP, where x1, x2, x3, ..., xm are the basic variables. Now x1 = b̄1, x2 = b̄2, x3 = b̄3, ..., xm = b̄m with x1, x2, ..., xm ≥ 0, and the non-basic components x_{m+1} = x_{m+2} = ... = xn = 0.

The feasible region forms a convex set. To show that X is an extreme point, we must show that there do not exist distinct feasible solutions Y and Z such that X = λ Y + (1 - λ) Z, 0 < λ < 1. Let Y = (y1, y2, y3, ..., ym, y_{m+1}, y_{m+2}, ..., yn)^T and Z = (z1, z2, z3, ..., zm, z_{m+1}, z_{m+2}, ..., zn)^T.

The last (n - m) components give: λ y_j + (1 - λ) z_j = 0, j = m+1, m+2, ..., n. Since λ > 0, 1 - λ > 0, y_j ≥ 0, z_j ≥ 0, this gives y_j = z_j = 0, j = m+1, m+2, ..., n. Then Y and Z are solutions with the same basis B, and since B X_B = b has the unique solution X_B = B^{-1} b, it follows that Y = Z = X. So X is an extreme point, by contradiction.

Theorem 5: Let S be a closed bounded convex polyhedron with p extreme points X_i, i = 1, 2, ..., p. Then any vector X ∈ S can be written as:
X = ∑_{i=1}^{p} λ_i X_i,  where ∑_{i=1}^{p} λ_i = 1,  λ_i ≥ 0.

Theorem 6: Let S be a closed convex polyhedron. Then the minimum of a linear function over S is attained at an extreme point of S.
Proof: Suppose X minimizes the objective function Z = C^T X over S and the minimum does not occur at an extreme point. From the definition of the minimum, C^T X < C^T X_i, i = 1, 2, ..., p, where X_1, ..., X_p are the p extreme points. For 0 ≤ λ_i ≤ 1,
λ_i C^T X < λ_i C^T X_i,  i = 1, 2, ..., p,
so that
∑_{i=1}^{p} λ_i C^T X < ∑_{i=1}^{p} C^T λ_i X_i.
Now taking ∑_{i=1}^{p} λ_i = 1, λ_i ≥ 0 and X = ∑_{i=1}^{p} λ_i X_i (Theorem 5), we get
C^T X = ∑_{i=1}^{p} λ_i C^T X < C^T ( ∑_{i=1}^{p} λ_i X_i ) = C^T X,

which is a contradiction. Hence minimum occurs at an extreme point only. Similarly, maximum occurs at an extreme point only.

Graphical Methods for a LPP (Only for 2-Variable Problems): Step 1: Define the coordinate system and plot the axes. Associate each axis with a variable. Step 2: Plot all the constraints. A constraint represents either a line or a region.

Step 3: Identify the solution space (feasible region). Feasible region is the intersection of all the constraints. If there is no feasible region the problem is infeasible. Step 4: Identify the extreme points of the feasible region.

Step 5: For each extreme point determine the value of the objective function. The point that maximizes/minimizes this value is optimal.

EXAMPLE: max: Z = x1 + 3x2 subject to
x1 + x2 ≤ 10
x1 + 4x2 ≤ 16
x1, x2 ≥ 0

[Figure: feasible region OABC in the (x1, x2)-plane, bounded by the line x1 + x2 = 10 (through (0, 10) and A(10, 0)), the line x1 + 4x2 = 16 (through C(0, 4) and B(8, 2)) and the coordinate axes; extreme points O(0, 0), A(10, 0), B(8, 2), C(0, 4).]

Extreme points of the Feasible region are O, A, B, and C.
At O(0, 0), Z = 0; at A(10, 0), Z = 10; at B(8, 2), Z = 14; at C(0, 4), Z = 12.
The maximum value of the objective function is 14. The maximizing point is B(8, 2).
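Steps 2-5 can be mimicked numerically for this example. The sketch below (my own illustration; the feasibility test and tolerances are ad hoc) intersects the boundary lines pairwise, keeps the feasible intersection points, and evaluates Z at each of them.

```python
import numpy as np
from itertools import combinations

# Boundary lines of the example written as a . x = b; the last two are the axes x1 = 0, x2 = 0.
lines = [([1.0, 1.0], 10.0), ([1.0, 4.0], 16.0), ([1.0, 0.0], 0.0), ([0.0, 1.0], 0.0)]

def feasible(x, tol=1e-9):
    return (x[0] + x[1] <= 10 + tol and x[0] + 4 * x[1] <= 16 + tol
            and x[0] >= -tol and x[1] >= -tol)

vertices = []
for (a1, b1), (a2, b2) in combinations(lines, 2):
    M = np.array([a1, a2]); rhs = np.array([b1, b2])
    if abs(np.linalg.det(M)) < 1e-12:          # parallel lines: no intersection point
        continue
    x = np.linalg.solve(M, rhs)
    if feasible(x):
        vertices.append(x)

best = max(vertices, key=lambda x: x[0] + 3 * x[1])
print([tuple(v.round(3)) for v in vertices])                 # O, A, B and C
print("max Z =", best[0] + 3 * best[1], "at", tuple(best))   # Z = 14 at B(8, 2)
```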

We apply Simplex Method to solve a standard LPP in the form:


max: z = ∑_{j=1}^{n} c_j x_j + d
subject to:
∑_{j=1}^{n} a_ij x_j ≤ b_i,  i = 1, 2, ..., m
x1, x2, ..., xn ≥ 0

This problem can be reformulated as:


max: z = ∑_{j=1}^{n} c_j x_j + d
subject to
-∑_{j=1}^{n} a_ij x_j + b_i = z_i,  i = 1, 2, ..., m
x1, x2, ..., xn ≥ 0
z1, z2, ..., zm ≥ 0 (Slack Variables)

To solve the problem, we present the problem in a tabular form called Simplex Tableau.
      x1    x2   ...   xv   ...   xn     1
     a11   a12   ...  a1v   ...  a1n    b1   = z1
     a21   a22   ...  a2v   ...  a2n    b2   = z2
      .     .          .          .      .
     au1   au2   ...  auv   ...  aun    bu   = zu
      .     .          .          .      .
     am1   am2   ...  amv   ...  amn    bm   = zm
     -c1   -c2   ...  -cv   ...  -cn     d   = z

Simplex Tableau: The point x1 = x2 = ... = xn = 0 is an extreme point. The values of the nonbasic variables x1, x2, ..., xn are zero. The values of the basic variables are z1 = b1, z2 = b2, ..., zm = bm. The value of the objective function is z = d at x1 = x2 = ... = xn = 0.

Steps of the Simplex Algorithm: Step 1: Select the most negative element in the last row of the simplex tableau. If no negative element exists, then the maximum value of the LPP is d and a maximizing point is x1 = x2 = . . . = xn = 0. Stop the method.

Step 2: Suppose Step 1 gives the element -c_v at the bottom of the v-th column. Form all positive ratios of the elements in the last column to the corresponding elements in the v-th column, that is, form the ratios b_i/a_iv for which a_iv > 0. The element, say a_uv, which produces the smallest ratio b_u/a_uv is called the pivotal element.

If the elements of the v -th column are all negative or zero the problem is called unbounded. Stop else go to Step 3. Step 3: Form a new Simplex Tableau using the following rules: (a) Interchange the role of xv and zu . That is relabel the row and column of the pivotal element while keeping other labels unchanged.

(b) Replace the pivotal element (p > 0) by its reciprocal 1/p i.e. auv by 1/auv . (c) Replace the other elements of the row of the pivotal element by the (row elements/pivotal element). (d) Replace the other elements of the column of the pivotal element by the (negative of the column elements/ pivotal element).

(e) Replace every other element (say s) of the Tableau by an element of the form:
s' = (p s - q r) / p
where p is the pivotal element and q and r are the Tableau elements for which p, q, r, s form a rectangle:
     p ... q
     r ... s
(Step 3 leads to a new Tableau that represents an equivalent LPP.)
Step 4: Go to Step 1.
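The pivot rules (a)-(e) and the iteration of Steps 1-4 can be written compactly. The sketch below is this write-up's own NumPy rendering of the tableau arithmetic (it tracks only the numbers, not the row/column labels); the function names and tolerances are assumptions of the sketch.

```python
import numpy as np

def pivot(T, u, v):
    """One exchange step on the condensed tableau T, rules (b)-(e) of Step 3.
    T has shape (m+1, n+1): row m is the objective row, column n holds the constants."""
    T = T.astype(float).copy()
    p = T[u, v]
    row, col = T[u, :].copy(), T[:, v].copy()
    T -= np.outer(col, row) / p      # rule (e): s -> (p*s - q*r)/p
    T[u, :] = row / p                # rule (c): pivot row divided by p
    T[:, v] = -col / p               # rule (d): pivot column negated and divided by p
    T[u, v] = 1.0 / p                # rule (b): p -> 1/p
    return T

def simplex(T):
    """Steps 1, 2 and 4 above; assumes the starting point (all constants >= 0) is feasible."""
    while True:
        obj = T[-1, :-1]
        if np.all(obj >= -1e-9):           # Step 1: nothing negative -> maximum reached (= d)
            return T
        v = int(np.argmin(obj))            # entering column: most negative entry
        col = T[:-1, v]
        if np.all(col <= 1e-12):           # Step 2: column non-positive -> unbounded LPP
            raise ValueError("unbounded LPP")
        ratios = np.full(col.shape, np.inf)
        ratios[col > 1e-12] = T[:-1, -1][col > 1e-12] / col[col > 1e-12]
        u = int(np.argmin(ratios))         # pivotal row: smallest ratio b_i / a_iv
        T = pivot(T, u, v)

# EXAMPLE-1 below: max z = x1 + 3x2, x1 + x2 <= 100, x1 + 2x2 <= 110, x1 + 4x2 <= 160.
T0 = np.array([[ 1,  1, 100],
               [ 1,  2, 110],
               [ 1,  4, 160],
               [-1, -3,   0]], dtype=float)
print(simplex(T0))    # the maximum value (135) appears in the bottom-right corner
```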

EXAMPLE-1: max: z = x1 + 3x2 subject to
x1 + x2 ≤ 100
x1 + 2x2 ≤ 110
x1 + 4x2 ≤ 160
x1, x2 ≥ 0

Adding slack variables z1, z2, z3 ≥ 0, we express the constraints as:
x1 + x2 + z1 = 100, i.e. -x1 - x2 + 100 = z1
x1 + 2x2 + z2 = 110, i.e. -x1 - 2x2 + 110 = z2
x1 + 4x2 + z3 = 160, i.e. -x1 - 4x2 + 160 = z3
Now the problem can be put in tabular form with z = x1 + 3x2, d = 0.

Initial Simplex Tableau:

      x1    x2      1
       1     1    100   = z1
       1     2    110   = z2
       1     4    160   = z3
      -1    -3      0   = z

Table-1:

      x1     z3      1
     3/4   -1/4     60   = z1
     2/4   -2/4     30   = z2
     1/4    1/4     40   = x2
    -1/4    3/4    120   = z

Table-2 (OPTIMAL TABLEAU):

      z2     z3      1
    -3/2    1/2     15   = z1
       2     -1     60   = x1
    -1/2    1/2     25   = x2
     1/2    1/2    135   = z

where z* = 135, x1* = 60, x2* = 25, z1* = 15, z2* = 0, z3* = 0.
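The optimum can be cross-checked with an off-the-shelf solver; the call below uses SciPy's linprog (which minimizes, so the objective is negated) purely as a verification aid, not as the tableau procedure above.

```python
from scipy.optimize import linprog

res = linprog(c=[-1, -3],                       # maximize x1 + 3x2  <=>  minimize -(x1 + 3x2)
              A_ub=[[1, 1], [1, 2], [1, 4]],
              b_ub=[100, 110, 160],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)    # expected from the optimal tableau: x = (60, 25), z = 135
```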

EXAMPLE-2: max: z = 2x1 + 3x2 subject to
2x1 + 3x2 = 12
2x1 + x2 ≥ 8
x1, x2 ≥ 0

We express 2x1 + 3x2 = 12 as: 2x1 + 3x2 + z1 = 12, z1 ≥ 0, where z1 is an artificial variable; and 2x1 + x2 ≥ 8 as: 2x1 + x2 - x3 + z2 = 8,

where z2 ≥ 0 is an artificial variable and x3 is a surplus variable. We reformulate the new objective function as:
max: z = 2x1 + 3x2 - M(z1 + z2)
       = 2x1 + 3x2 - M(20 - 4x1 - 4x2 + x3)
       = (2 + 4M) x1 + (3 + 4M) x2 - M x3 - 20M,
where M is a very large positive number. This method is called the Big-M method.

To solve the problem we transform the problem into a Tabular form.
Initial Simplex Tableau:

         x1       x2    x3       1
          2        3     0      12   = z1
          2        1    -1       8   = z2
     -2-4M    -3-4M      M    -20M   = z

Table-1:

        x1         z1    x3        1
       2/3        1/3     0        4   = x2
       4/3       -1/3    -1        4   = z2
     -4M/3   (3+4M)/3     M    12-4M   = z

Table-2 (OPTIMAL):

       z2      z1     x3     1
     -1/2     1/2    1/2     2   = x2
      3/4    -1/4   -3/4     3   = x1
        M     M+1      0    12   = z

Optimal: z* = 12, x1* = 3, x2* = 2.

Since there is a zero in the last row of the Tableau, further iteration is possible.
Table-3 (ALTERNATE OPTIMAL SOLUTION):

       z2      z1     x2     1
       -1       1      2     4   = x3
        0     1/2    3/2     6   = x1
        M     M+1      0    12   = z

This is also an optimal Tableau, with z* = 12, x1* = 6, x2* = 0. So this LPP has several optimal solutions:
X* = λ X1* + (1 - λ) X2*, where 0 ≤ λ ≤ 1,
i.e. X* = λ (3, 2)^T + (1 - λ)(6, 0)^T = (6 - 3λ, 2λ)^T, 0 ≤ λ ≤ 1.
This problem can also be solved by the Two-Phase Simplex Method.
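Below is a sketch of the Big-M construction with SciPy's linprog used as the LP solver: the variables are ordered x1, x2, x3 (surplus), z1, z2 (artificials) as above, and M = 1e6 stands in for the "very large" penalty. Both the ordering and the numeric value of M are choices of this sketch.

```python
from scipy.optimize import linprog

M = 1e6                                    # any sufficiently large penalty works for this instance
c = [-2, -3, 0, M, M]                      # minimize -(2x1 + 3x2) + M(z1 + z2)
A_eq = [[2, 3, 0, 1, 0],                   # 2x1 + 3x2           + z1      = 12
        [2, 1, -1, 0, 1]]                  # 2x1 +  x2 - x3           + z2 =  8
res = linprog(c, A_eq=A_eq, b_eq=[12, 8], bounds=[(0, None)] * 5, method="highs")
x1, x2, x3, z1, z2 = res.x
print(x1, x2, z1, z2, 2 * x1 + 3 * x2)     # artificials ~ 0 and z = 12, as in the tableau
```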

Two-Phase Simplex Method: Phase-I: In this Phase, an artificial objective function f is used in which we minimize the sum of the artificial variables to zero. We try to drive all the artificial variables out of the basis to make them zero. If they cannot be removed, i.e. all cannot be made zero, we conclude that the problem is infeasible.

If all the artificial variables are zero in Phase-I, we go to Phase-II. Phase-II: In Phase-II, we replace the artificial objective function f by the original objective function z using the last Tableau of Phase-I. Then we apply the usual Simplex Method until an optimal solution is reached.

If an artificial variable remains in the basis at zero value at the end of Phase-I, we modify the departing-variable rule. An artificial variable must not become positive from zero. So we allow an artificial variable with a negative y_ij value to depart. It is an important point to note.

Example: Two-Phase Simplex Method: max: z = 5x1 + 4x2 subject to
x1 + x2 ≥ 2
5x1 + 4x2 ≤ 20
x1, x2 ≥ 0

Introduce surplus and artificial variables: x1 + x2 ≥ 2 becomes x1 + x2 - x3 + z1 = 2, x3 ≥ 0, z1 ≥ 0, where z1 is an artificial variable (a basic variable) and x3 is a surplus variable. It can be written as: z1 = 2 - x1 - x2 + x3.

Introduce a slack variable: 5x1 + 4x2 ≤ 20 becomes 5x1 + 4x2 + z2 = 20, z2 ≥ 0, where z2 is a slack variable. It can be written as: z2 = 20 - 5x1 - 4x2, where z2 is a basic variable.

For the Phase-I method we formulate an artificial objective function f for minimization, i.e. min: f = z1. This is equivalent to: max: (-f) = -z1 = x1 + x2 - x3 - 2.

Phase-I Problem:
max: (-f) = x1 + x2 - x3 - 2
subject to
-x1 - x2 + x3 + 2 = z1
-5x1 - 4x2 + 20 = z2
x1, x2, x3, z1, z2 ≥ 0

We start the Phase-I procedure with the artificial objective function.
Phase-I: Initial Simplex Tableau:

      x1    x2    x3     1
       1     1    -1     2   = z1
       5     4     0    20   = z2
      -1    -1     1    -2   = -f

Phase-I: Table-1:

      z1    x2    x3     1
       1     1    -1     2   = x1
      -5    -1     5    10   = z2
       1     0     0     0   = -f

Optimal Phase-I Solution: z1 = 0 (artificial variable), f = 0 (artificial objective function).

Phase-II Formulation: Set the z1 column elements to zero. Then we replace the artificial objective function by the original objective function z. From Table-1, x1 = 2 - x2 + x3 (with z1 = 0), so
z = 5x1 + 4x2 = 5(-x2 + x3 + 2) + 4x2 = -x2 + 5x3 + 10.

Phase-II: Initial Simplex Tableau:

      z1    x2    x3     1
       0     1    -1     2   = x1
       0    -1     5    10   = z2
       0     1    -5    10   = z

There is a negative element in the last row of the Simplex Tableau.

Phase-II: Optimal Simplex Tableau:

      z1    x2    z2     1
       0   4/5   1/5     4   = x1
       0  -1/5   1/5     2   = x3
       0     0     1    20   = z

Optimal: x1* = 4, x2* = 0, z* = 20.

Phase-II: Alternate Optimal Solution:

      z1    x1    z2     1
       0   5/4   1/4     5   = x2
       0   1/4   1/4     3   = x3
       0     0     1    20   = z

Optimal: x1* = 0, x2* = 5, z* = 20.

This problem has an infinite number of optimal solutions:
X* = λ X1* + (1 - λ) X2*, where 0 ≤ λ ≤ 1,
i.e. X* = λ (4, 0)^T + (1 - λ)(0, 5)^T = (4λ, 5 - 5λ)^T, 0 ≤ λ ≤ 1.
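The two phases can also be sketched with SciPy's linprog standing in for the tableau pivots (illustration only; the variable layout x1, x2, x3, z1, z2 mirrors the construction above): Phase-I minimizes the artificial variable, and Phase-II re-optimizes the true objective with the artificial pinned at zero.

```python
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([[1, 1, -1, 1, 0],     # x1 + x2 - x3 + z1      = 2
                 [5, 4,  0, 0, 1]],    # 5x1 + 4x2         + z2 = 20
                dtype=float)
b_eq = np.array([2.0, 20.0])

# Phase-I: minimize f = z1 (the sum of artificial variables).
phase1 = linprog([0, 0, 0, 1, 0], A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
if phase1.fun > 1e-8:
    print("Phase-I minimum > 0: the LPP is infeasible")
else:
    # Phase-II: maximize z = 5x1 + 4x2, keeping the artificial variable fixed at zero.
    phase2 = linprog([-5, -4, 0, 0, 0], A_eq=A_eq, b_eq=b_eq,
                     bounds=[(0, None)] * 3 + [(0, 0), (0, None)], method="highs")
    print(phase2.x[:2], -phase2.fun)   # one optimal point; z = 20 as in the tableaus
```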

Duality Theory for LPP:
Primal Program (P):

max: z = ∑_{j=1}^{n} c_j x_j
subject to
∑_{j=1}^{n} a_ij x_j ≤ b_i,  i = 1, 2, 3, ..., m
x_j ≥ 0,  j = 1, 2, 3, ..., n

With respect to the above Primal Problem (P) we find a Dual Problem (D) as:
Dual Program (D):

min: z = ∑_{i=1}^{m} b_i y_i
subject to
∑_{i=1}^{m} a_ij y_i ≥ c_j,  j = 1, 2, 3, ..., n
y_i ≥ 0,  i = 1, 2, 3, ..., m
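For the canonical pair above (max, Ax ≤ b, x ≥ 0 versus min, A^T y ≥ c, y ≥ 0), writing down the dual is purely mechanical; the tiny helper below (its name and return convention are this sketch's own) simply transposes the data.

```python
import numpy as np

def dual_of(c, A, b):
    """Dual of the canonical primal  max c.x  s.t.  A x <= b, x >= 0:
    namely  min b.y  s.t.  A^T y >= c, y >= 0  (costs and right-hand sides swap roles)."""
    A = np.asarray(A, dtype=float)
    return np.asarray(b, dtype=float), A.T, np.asarray(c, dtype=float)

# Example-1 below: max x1 + 3x2 s.t. x1 + x2 <= 100, x1 + 2x2 <= 110, x1 + 4x2 <= 160.
dual_cost, dual_A, dual_rhs = dual_of([1, 3], [[1, 1], [1, 2], [1, 4]], [100, 110, 160])
print(dual_cost)   # objective of (D): 100 y1 + 110 y2 + 160 y3
print(dual_A)      # constraint matrix of (D): A^T y >= c
print(dual_rhs)    # right-hand side of (D): (1, 3)
```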

Example-1:
Primal Program (P): max: z = x1 + 3x2 subject to
x1 + x2 ≤ 100
x1 + 2x2 ≤ 110
x1 + 4x2 ≤ 160
x1, x2 ≥ 0

Dual Program (D): min: z = 100y1 + 110y2 + 160y3 subject to
y1 + y2 + y3 ≥ 1
y1 + 2y2 + 4y3 ≥ 3
y1, y2, y3 ≥ 0

Example-2:
Primal Program (P): max: z = x1 + 3x2 subject to
x1 + x2 ≤ 100
x1 + 2x2 = 110
x1 + 4x2 = 160
x1, x2 ≥ 0

Dual Program (D): min: z = 100y1 + 110y2 + 160y3 subject to
y1 + y2 + y3 ≥ 1
y1 + 2y2 + 4y3 ≥ 3
y1 ≥ 0; y2, y3 are free.

Example-3:
Primal Program (P): min: z = x1 + 3x2 subject to
x1 + x2 ≥ 100
x1 + 2x2 ≥ 110
x1 + 4x2 ≥ 160
x1, x2 ≥ 0

Dual Program (D): max: z = 100y1 + 110y2 + 160y3 subject to
y1 + y2 + y3 ≤ 1
y1 + 2y2 + 4y3 ≤ 3
y1, y2, y3 ≥ 0

Example-4:
Primal Program (P): min: z = x1 + 3x2 subject to
x1 + x2 ≥ 100
x1 + 2x2 ≥ 110
x1 + 4x2 = 160
x1, x2 ≥ 0

Dual Program (D): max: z = 100y1 + 110y2 + 160y3 subject to
y1 + y2 + y3 ≤ 1
y1 + 2y2 + 4y3 ≤ 3
y1, y2 ≥ 0; y3 is free.

Example-5:
Primal Program (P): max: z = 10x1 + 20x2 + 30x3 subject to
x1 + x2 + x3 = 60
x1 + 5x2 + 10x3 = 410
x1, x2, x3 ≥ 0

Dual Program (D): min: z = 60y1 + 410y2 subject to
y1 + y2 ≥ 10
y1 + 5y2 ≥ 20
y1 + 10y2 ≥ 30
y1, y2 are free.

Theorem 1: The Primal LPP (P) is consistent and has a maximum value M_P if and only if its Dual (D) is consistent and has a minimum value M_D. Moreover M_P = M_D.
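Theorem 1 can be checked numerically on Example-1 above; the SciPy calls below (a verification sketch, not part of the lecture's method) solve the primal and the dual and print the two optimal values.

```python
from scipy.optimize import linprog

A = [[1, 1], [1, 2], [1, 4]]
b = [100, 110, 160]
c = [1, 3]

primal = linprog([-ci for ci in c], A_ub=A, b_ub=b, method="highs")     # max c.x, Ax <= b, x >= 0
dual = linprog(b, A_ub=[[-1, -1, -1], [-1, -2, -4]], b_ub=[-1, -3],     # min b.y, A^T y >= c, y >= 0
               method="highs")
print("M_P =", -primal.fun, " M_D =", dual.fun)    # the two values coincide (here both are 135)
```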

Theorem 2: If X satisfies the constraints of the Primal Program (P) and Y satisfies the constraints of the Dual Program (D), then
∑_{j=1}^{n} c_j x_j ≤ ∑_{i=1}^{m} b_i y_i
Equality holds if and only if
either x_j = 0 or ∑_{i=1}^{m} a_ij y_i = c_j,  j = 1, 2, ..., n, and
either y_i = 0 or ∑_{j=1}^{n} a_ij x_j = b_i,  i = 1, 2, ..., m.
To solve the dual program we may use the Dual Simplex Method.
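The complementary slackness conditions of Theorem 2 can likewise be checked on Example-1: with optimal primal and dual points obtained from linprog, every product x_j ((A^T y)_j - c_j) and y_i ((A x)_i - b_i) should vanish. Again, this is only a verification sketch.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 1], [1, 2], [1, 4]], dtype=float)
b = np.array([100, 110, 160], dtype=float)
c = np.array([1, 3], dtype=float)

x = linprog(-c, A_ub=A, b_ub=b, method="highs").x          # optimal primal point
y = linprog(b, A_ub=-A.T, b_ub=-c, method="highs").x       # optimal dual point

print(np.round(x * (A.T @ y - c), 9))   # either x_j = 0 or sum_i a_ij y_i = c_j
print(np.round(y * (A @ x - b), 9))     # either y_i = 0 or sum_j a_ij x_j = b_i
```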

Some Discrete Models:

max / min: z = ∑_{j=1}^{n} c_j x_j
subject to
∑_{j=1}^{n} a_ij x_j = b_i,  i = 1, 2, 3, ..., m
x_j = 0, 1, 2, 3, ...,  for all j

To solve this discrete LPP we use two different methods: 1. Gomory Cutting Plane Method, 2. Branch and Bound Method. Further, if the decision variables are binary (0/1), the additive algorithm may be used to solve the problem.
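Below is a compact sketch of the Branch and Bound idea (an illustration of this write-up, not the exact algorithm of the notes): each LP relaxation is solved with scipy.optimize.linprog, and branching is done on the first fractional component. The small instance at the end is made up for demonstration and is not taken from the lecture.

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, tol=1e-6):
    """Maximize c.x s.t. A_ub x <= b_ub, x >= 0 and x integer, assuming bounded relaxations."""
    best_val, best_x = -math.inf, None
    stack = [[(0, None)] * len(c)]            # each node = per-variable (lower, upper) bounds
    while stack:
        bounds = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success:                    # infeasible node: prune
            continue
        if -res.fun <= best_val + tol:         # bound: relaxation cannot beat the incumbent
            continue
        frac = [j for j, xj in enumerate(res.x) if abs(xj - round(xj)) > tol]
        if not frac:                           # integral relaxation optimum: new incumbent
            best_val, best_x = -res.fun, np.round(res.x)
            continue
        j, xj = frac[0], res.x[frac[0]]        # branch on the first fractional variable
        lo, hi = bounds[j]
        left, right = list(bounds), list(bounds)
        left[j] = (lo, math.floor(xj))         # subproblem with x_j <= floor(xj)
        right[j] = (math.ceil(xj), hi)         # subproblem with x_j >= ceil(xj)
        stack.extend([left, right])
    return best_x, best_val

# Made-up instance: max 5x1 + 4x2 s.t. 6x1 + 4x2 <= 24, x1 + 2x2 <= 6, x integer >= 0.
print(branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6]))   # optimum: x = (4, 0), value 20
```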

