INSTITUTE OF TECHNOLOGY
FACULTY OF MECHANICAL AND INDUSTRIAL ENGINEERING
INDUSTRIAL ENGINEERING PROGRAM
OPERATIONS RESEARCH
Compiled Notes
November, 2015
Table of Contents
CHAPTER 1..................................................................................................... 6
INTRODUCTION ........................................................................................... 6
1.1 History of Operations Research .................................................................................................... 6
1.2 Definitions of Operations Research .............................................................................................. 7
1.3 Applications of Operations Research ............................................................................................ 8
1.4 Model and Modeling in Operations Research............................................................................. 10
1.4.1 Different Models .................................................................................................................. 12
1.4.2 Design Optimization ............................................................................................................ 16
CHAPTER 2................................................................................................... 19
LINEAR PROGRAMMING ......................................................................... 19
2.1 Linear programming Model ........................................................................................................ 21
2.2 Solving linear programming ....................................................................................................... 26
2.2.1 Graphical Method ................................................................................................................ 26
2.2.2 Simplex Method ................................................................................................................... 32
2.2.3 Big M Method ...................................................................................................................... 35
2.3 Duality and Sensitivity ................................................................................................................ 37
2.3.1 Primal and Duality ............................................................................................................... 37
2.3.2 Economic Interpretation of the Dual Problem ..................................................................... 39
2.3.3 Sensitivity Analysis.............................................................................................................. 40
CHAPTER 3................................................................................................... 43
INTEGER PROGRAMMING ...................................................................... 43
3.1 Introduction ................................................................................................................................. 43
3.2 Integer Programming Models ..................................................................................................... 43
3.3 Methods of solving Integer Programming ............................................................................ 46
3.3.1 Gomory's All Integer Programming Algorithm .................................................................. 46
CHAPTER 4 ........................................................................................................................... 54
TRANSPORTATION AND ASSIGNMENT MODEL ...................................................... 54
4.1 Introduction ................................................................................................................................. 54
4.2 Solution Mechanism for Transportation Problems ..................................................................... 56
4.3 Special cases in transportation .................................................................................................... 58
4.4 Assignment problem ................................................................................................................... 59
4.4.1 Solving Methods for Assignment Problem .......................................................................... 61
CHAPTER 5................................................................................................... 63
DECISION ANALYSIS................................................................................. 63
5.1 Introduction ................................................................................................................................. 63
5.2 Types of Decision Making Environments .................................................................................. 66
5.2.1 Type 1 Decision Making under Certainty ............................................................................ 66
5.2.2 Type 2 Decision Making under Risk ................................................................................... 66
5.2.3 Type 3 Decision Making under Uncertainty ........................................................................ 66
5.3 Decision Analysis with Additional Information - Bayesian Analysis ........................................ 75
CHAPTER 6................................................................................................... 83
GAME PROGRAMMING ............................................................................ 83
6.1 Introduction ................................................................................................................................. 83
6.2 Two-Persons-Zero-Sum Game ................................................................................................... 85
6.3 Pure Strategies ............................................................................................................................ 88
6.4 Mixed Strategy ............................................................................................................................ 90
6.4.1 Rule of Dominance .............................................................................................................. 90
6.5 Methods of Solving Mixed strategies ......................................................................................... 93
6.5.1 Algebraic Method ................................................................................................................ 93
6.5.2 Graphical Method ................................................................................................................ 94
6.5.3 Linear programming (LP) Method ....................................................................................... 95
Amare Matebu Kassa (Dr.-Ing) Page 3
CHAPTER 7 ........................................................................................................................... 99
MARKOV ANALYSIS .......................................................................................................... 99
7.1 Introduction ................................................................................................................................. 99
7.2 State and Transition Probabilities ............................................................................................. 101
7.3 Multi –Period Transition Probabilities and the Transition Matrix ............................................ 107
7.3.1 Steady- State Probability (Future State prediction) ........................................................... 109
7.4 Special Cases in Markov Chains ............................................................................................... 110
There are three important factors behind the rapid development in the use of the OR approach:
1. The economic and industrial boom after World War II resulted in continuous
mechanization and automation.
2. Many Operations Researchers continued their research after World War II.
3. Analytic power was made available by high-speed computers.
During the 1950s, there was substantial progress in the application of OR techniques to civilian activities, along with great interest in professional development and education in OR. In 1948, an OR club was formed in England; it later changed its name to the Operational Research Society of the UK.
At its foundation, OR's primary applications were in support of military operations, such as operating radar systems and anti-submarine warfare. Following the war, numerous peacetime applications emerged, leading to the use of OR and management science in many industries.
- Operations research is the application of scientific methods, techniques and tools to problems involving the operations of systems, so as to provide those in control of the operations with optimum solutions to the problems.
- OR is the application of the scientific method to the study of the operations of large, complex organizations or activities.
- OR is the application of the scientific method to the analysis and solution of managerial decision problems.
- OR is the application of the methods of science to complex problems in the direction and management of large systems of men, machines, materials, and money in industry, business, government and defense (OR Society, UK).
Operations research can be applied in any business organization (both profit-making and non-profit-making). Some of the application areas are listed as follows.
Manufacturing
- Aggregate production planning, assembly line, blending, inventory control
- Employment, training, layoffs and quality control
- Transportation, planning and scheduling
Facilities planning
- Location and size of warehouse or new plant
- Logistics, layout and engineering design
- Transportation, planning and scheduling
Finance and accounting
- Capital budgeting, cost allocation and control, and financial planning
Marketing
- Sales effort allocation and assignment
- Predicting customer loyalty
Purchasing, procurement and exploration
- Optimal buying and reordering with or without price quantity discount
Optimization is everywhere. It is embedded in language, and part of the way we think:
- firms want to maximize value to shareholders
- people want to make the best choices
- we want the highest quality at the lowest price
- when playing games, we want the best strategy
- when we have too much to do, we want to optimize the use of our time
The scientific approach to decision making requires the use of one or more mathematical
models. A mathematical model is a mathematical representation of the actual situation that
may be used to make better decisions or clarify the situation.
Benefits of Modeling
Economy - it is often less costly to analyze decision problems using models.
Timeliness - models often deliver needed information more quickly than their real-world
counterparts.
Feasibility - models can be used to do things that would be impossible or impractical to do in reality.
Models give us insight and understanding that improves decision making.
Figure 1.2 General framework of OR application
Dynamic Programming
A DP model describes a process in terms of states, decisions, transitions and returns.
The process begins in some initial state where a decision is made.
Markov Chains
A stochastic process that can be observed at regular intervals such as every day or
every week can be described by a matrix which gives the probabilities of moving to
each state from every other state in one time interval.
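The matrix description above can be sketched in a few lines of Python. The two-state chain below is invented for illustration (a hypothetical machine that is either running or down); the point is only the mechanics of multiplying a state-probability vector by the transition matrix:

```python
# Hypothetical two-state chain: state 0 = machine running, state 1 = machine down.
P = [[0.9, 0.1],     # P[i][j] = probability of moving from state i to state j
     [0.5, 0.5]]     # in one time interval (numbers invented for illustration)

def step(dist, P):
    """Advance the state distribution by one interval: dist' = dist * P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]            # start in state 0 with certainty
for _ in range(50):          # repeated multiplication approaches the steady state
    dist = step(dist, P)
print([round(p, 3) for p in dist])   # [0.833, 0.167]
```

Repeating the one-step multiplication drives the distribution toward the steady-state probabilities discussed in Chapter 7.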
Simulation
It is often difficult to obtain a closed form expression for the behavior of a stochastic
system.
Simulation is a very general technique for estimating statistical measures of complex
systems.
A system is modeled as if the random variables were known.
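A minimal simulation sketch, with invented numbers: estimating expected daily sales when daily demand is uniform on 0-9 units and 6 units are stocked (both the distribution and the stock level are hypothetical choices made for this illustration):

```python
import random

random.seed(42)          # fix the seed so the run is reproducible
stock = 6                # units stocked each day (hypothetical)
n = 100_000              # number of simulated days
total = 0
for _ in range(n):
    demand = random.randint(0, 9)   # hypothetical demand: uniform on 0..9
    total += min(demand, stock)     # units actually sold that day
estimate = total / n
print(round(estimate, 2))   # close to the exact expected sales of 3.9
```

Here the exact answer (3.9) is easy to compute directly; simulation earns its keep when no such closed-form expression exists.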
Example 1
A crop farmer has to decide which crops to grow yearly on 6 hectares of land. There are two crops: corn and potato. The yearly return per hectare of corn is 20,000 Birr and that of potato is 10,000 Birr. Every hectare grown with corn requires 20 hours of labor and every hectare of potato requires 40 hours. The available number of labor hours is 200. Environmental requirements impose that at most two-thirds of the available surface can be used for growing corn, and crop rotation requirements tell us that only half of the area can be occupied by potato. The question is how much corn and potato to grow, taking the restrictions into account, such that the return is as high as possible.
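Taking the data above at face value, a brute-force grid search over the two planting areas illustrates the model numerically (a sketch only; Chapter 2 develops the proper LP solution methods, and all names here are choices made for the illustration):

```python
# Brute-force grid search at 0.01-hectare resolution (a sketch, not an LP method).
best = (0.0, 0.0, 0.0)                     # (return in Birr, corn ha, potato ha)
for i in range(601):
    corn = i / 100
    for j in range(601):
        potato = j / 100
        if (corn + potato <= 6                       # 6 hectares of land
                and 20 * corn + 40 * potato <= 200   # 200 labor hours
                and corn <= 6 * 2 / 3                # at most 2/3 of the land
                and potato <= 6 / 2):                # at most half of the land
            z = 20000 * corn + 10000 * potato
            if z > best[0]:
                best = (z, corn, potato)
print(best)   # (100000.0, 4.0, 2.0): 4 ha corn, 2 ha potato, 100,000 Birr
```

The search recovers the intuitive answer: plant as much corn as the environmental limit allows (4 ha) and fill the remaining land with potato (2 ha).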
Design a beer can to hold at least the specified amount of beer and meet other design requirements. The cans will be produced in billions, so it is desirable to minimize the cost of
manufacturing. Since the cost can be related directly to the surface area of the sheet metal
used, it is reasonable to minimize the sheet metal required to fabricate the can.
Fabrication, handling, aesthetic, shipping considerations and customer needs impose the
following restrictions on the size of the can:
1. The diameter of the can should be no more than 8 cm. Also, it should not be less than
3.5 cm.
2. The height of the can should be no more than 18 cm and no less than 8 cm.
3. The can is required to hold at least 400 ml of fluid.
The objective is to minimize the surface area of the sheet metal used:
f(D, H) = πDH + (π/2)D²   (Non-linear)
The constraints must be formulated in terms of the design variables, the diameter D (cm) and the height H (cm).
The first constraint is that the can must hold at least 400 ml of fluid:
(π/4)D²H ≥ 400   (Non-linear)
The size restrictions give the remaining constraints:
3.5 ≤ D ≤ 8,  8 ≤ H ≤ 18   (Linear)
The problem has two independent design variables and five explicit constraints. The objective function and the first constraint are nonlinear in the design variables, whereas the remaining constraints are linear.
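Assuming a closed cylindrical can (surface area = wall plus two circular ends, volume = πD²H/4), the design space can be explored numerically. The sketch below is a brute-force illustration, not a formal optimization method: for each candidate diameter it takes the smallest height that satisfies the volume and height limits and keeps the cheapest design.

```python
import math

best = None
for i in range(350, 801):                      # D from 3.5 cm to 8.0 cm in 0.01 steps
    D = i / 100
    H = max(8.0, 1600 / (math.pi * D * D))     # smallest height holding 400 ml
    if H > 18:
        continue                               # height limit violated
    area = math.pi * D * H + math.pi * D * D / 2   # wall + two ends
    if best is None or area < best[0]:
        best = (area, D, H)
print(best)   # D a little under 8 cm, H a little over 8 cm, area about 300.5 cm^2
```

The search suggests the height bound H ≥ 8 is essentially active and the diameter sits just below its upper bound, which is where the nonlinear objective and the volume constraint balance.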
The development of linear programming has been ranked among the most important
scientific advances of the mid-20th century, and we must agree with this assessment. Its
impact since just 1950 has been extraordinary. Today it is a standard tool that has saved many
thousands or millions of dollars for most companies or businesses of even moderate size in
the various industrialized countries of the world; and its use in other sectors of society has
been spreading rapidly. A major proportion of all scientific computation on computers is
devoted to the use of linear programming. Dozens of textbooks have been written about
linear programming, and published articles describing important applications now number in
the hundreds.
What is the nature of this remarkable tool, and what kinds of problems does it address? You
will gain insight into this topic as you work through subsequent examples. However, a verbal
summary may help provide perspective. Briefly, the most common type of application
involves the general problem of allocating limited resources among competing activities in a
best possible (i.e., optimal) way. More precisely, this problem involves selecting the level of
certain activities that compete for scarce resources that are necessary to perform those
activities. The choice of activity levels then dictates how much of each resource will be
consumed by each activity. The variety of situations to which this description applies is
diverse, indeed, ranging from the allocation of production facilities to products to the
allocation of national resources to domestic needs, from portfolio selection to the selection of
shipping patterns, from agricultural planning to the design of radiation therapy, and so on.
However, the one common ingredient in each of these situations is the necessity for
allocating resources to activities by choosing the levels of those activities.
Linear programming uses a mathematical model to describe the problem of concern. The
adjective linear means that all the mathematical functions in this model are required to be
linear functions. The word programming does not refer here to computer programming;
rather, it is essentially a synonym for planning. Thus, linear programming involves the
planning of activities to obtain an optimal result, i.e., a result that reaches the specified goal
best (according to the mathematical model) among all feasible alternatives. Although
allocating resources to activities is the most common type of application, linear programming
has numerous other important applications as well. In fact, any problem whose mathematical
model fits the very general format for the linear programming model is a linear programming
problem. Furthermore, a remarkably efficient solution procedure, called the simplex method,
is available for solving linear programming problems of even enormous size. These are some
of the reasons for the tremendous impact of linear programming in recent decades.
Linear programming is used for facility location decisions and as a "what-if" tool. It is also used to find the shortest path between locations in a network. LP-based techniques can be used to locate:
- manufacturing facilities,
- distribution centres,
- warehouse/storage facilities, etc.
Formulation of an LP Model
1. Identify the decision variables and express them in algebraic symbols (such as X1, X2, etc.).
2. Identify all the constraints or limitations (scarce resources such as time, labor, raw materials, etc.) and express them as linear equations or inequalities.
3. Identify the objective function (what the decision maker wants to achieve) and express it as a linear function of the decision variables.
Requirements of an LP Problem
Example 1
The KADISCO Company owns a small paint factory that produces both interior and exterior
house paints for wholesale distribution. Two basic raw materials, A and B, are used to
manufacture the paints. The maximum availability of A is 6 tons a day; that of B is 8 tons a
day. The daily requirements of the raw materials per ton of interior and exterior paints are
summarized in the following table.
                    Tons of Raw Material per Ton of Paint
                 Exterior    Interior    Maximum Availability (tons)
Raw Material A       1           2                  6
Raw Material B       2           1                  8
The wholesale price is $3,000 per ton of exterior paint and $2,000 per ton of interior paint. How much interior and exterior paint should the company produce daily to maximize gross income?
Define
XE = Tons of exterior paint to be produced
XI = Tons of interior paint to be produced
Maximize Z = 3000XE + 2000XI
Subject to:
XE + 2XI ≤ 6 (1) (availability of raw material A)
2XE + XI ≤ 8 (2) (availability of raw material B)
-XE + XI ≤ 1 (3) (restriction in production)
XI ≤ 2 (4) (demand restriction)
XE, XI ≥ 0
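As a numerical check on this formulation, the sketch below enumerates all pairwise intersections of the constraint lines and evaluates the objective at every feasible corner point — a brute-force stand-in for the graphical method of Section 2.2.1, reading constraint (3) as -XE + XI ≤ 1. The variable names are choices made here:

```python
from itertools import combinations

# Each constraint as (a, b, c) meaning a*XE + b*XI <= c, including non-negativity.
cons = [(1, 2, 6),     # raw material A
        (2, 1, 8),     # raw material B
        (-1, 1, 1),    # production restriction
        (0, 1, 2),     # demand restriction
        (-1, 0, 0),    # XE >= 0
        (0, -1, 0)]    # XI >= 0

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                              # parallel lines: no corner point
    x = (c1 * b2 - c2 * b1) / det             # Cramer's rule for the crossing
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        z = 3000 * x + 2000 * y               # daily gross income
        if best is None or z > best[0]:
            best = (z, x, y)
print(best)   # income about 12666.67 at XE = 3.33, XI = 1.33
```

The optimum sits where the two raw-material constraints intersect, which is exactly what the graphical method would show.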
Example 3: Advertisement
Dorian makes luxury cars and jeeps for high-income men and women. It wishes to advertise
with 1 minute spots in comedy shows and football games. Each comedy spot costs $50 and is
seen by 7M high-income women and 2M high-income men. Each football spot costs $100
and is seen by 2M high-income women and 12M high-income men. How can Dorian reach
28M high-income women and 24M high-income men at the least cost?
Solution:
The decision variables:
X1 = the number of comedy spots
X2 = the number of football spots
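Under the data above, the formulation can be completed and solved by checking the corner points of the feasible region (a sketch; the cost and coverage coefficients below restate the problem statement, with audiences in millions):

```python
# Objective and constraint data restated from the problem (costs in $).
def cost(x1, x2):
    return 50 * x1 + 100 * x2

def feasible(x1, x2, eps=1e-9):
    return (7 * x1 + 2 * x2 >= 28 - eps        # reach 28M high-income women
            and 2 * x1 + 12 * x2 >= 24 - eps   # reach 24M high-income men
            and x1 >= -eps and x2 >= -eps)

# Candidate corners: the two single-constraint intercepts and the intersection
# of the two coverage lines (Cramer's rule on 7x1 + 2x2 = 28, 2x1 + 12x2 = 24).
det = 7 * 12 - 2 * 2
corners = [(0, 14), (12, 0),
           ((28 * 12 - 24 * 2) / det, (7 * 24 - 2 * 28) / det)]
best = min((c for c in corners if feasible(*c)), key=lambda c: cost(*c))
print(best, cost(*best))   # optimal plan is about (3.6, 1.4) at cost 320
```

The least-cost plan buys about 3.6 comedy spots and 1.4 football spots (a fractional answer — one motivation for the integer programming methods of Chapter 3).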
Solution:
The problem is to find a diet (a choice of the numbers of units of the two foods) that meets all minimum nutritional requirements at minimal cost. Formulate the problem as an LP.
3X1 + X2 ≥ 8
4X1 + 3X2 ≥ 19
X1 + 3X2 ≥ 7
X1, X2 ≥ 0
Bryant's Pizza, Inc. is a producer of frozen pizza products. The company makes a net income
of $1.00 for each regular pizza and $1.50 for each deluxe pizza produced. The firm currently
has 150 pounds of dough mix and 50 pounds of topping mix. Each regular pizza uses 1 pound
of dough mix and 4 ounces (16 ounces= 1 pound) of topping mix. Each deluxe pizza uses 1
pound of dough mix and 8 ounces of topping mix. Based on the past demand per week,
Bryant can sell at least 50 regular pizzas and at least 25 deluxe pizzas. The problem is to
determine the number of regular and deluxe pizzas the company should make to maximize
net income. Formulate this problem as an LP problem.
Solution:
Let X1 and X2 be the numbers of regular and deluxe pizzas, respectively; then the LP formulation is:
Maximize: Z = X1 + 1.5X2
Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≥ 50
X2 ≥ 25
X1 , X2 ≥ 0
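Since whole pizzas are made, the model can also be checked by brute force over integer production plans (a sketch; the continuous LP optimum happens to be integral here, so the two answers agree):

```python
# Enumerate integer production plans within the stated demand bounds.
best = max(
    (x1 + 1.5 * x2, x1, x2)
    for x1 in range(50, 151)           # at least 50 regular can be sold
    for x2 in range(25, 101)           # at least 25 deluxe can be sold
    if x1 + x2 <= 150                  # 150 lb of dough mix, 1 lb per pizza
    and 0.25 * x1 + 0.5 * x2 <= 50     # 50 lb of topping mix (4 oz / 8 oz)
)
print(best)   # (175.0, 100, 50): net income $175 at 100 regular, 50 deluxe
```

Both resource constraints are binding at the optimum: all the dough and all the topping mix are used.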
Decision Variables:
X1= number of X-pods to be produced
X2= number of BlueBerrys to be produced
Graphical Solution
Can be used when there are two decision variables
1. Plot the constraint equations at their limits by converting each equation to an
equality
2. Identify the feasible solution space
3. Create an iso-profit line based on the objective function
4. Move this line outwards until the optimal point is identified
Substituting X2 = 40 (corner point 3) into 4X1 + 3X2 = 240:
4X1 + 3(40) = 240
4X1 + 120 = 240
X1 = 30
The optimal value will always be at a corner point
Find the objective function value at each corner point and choose the one with the
highest profit
Point 1: (X1 = 0, X2 = 0) Profit $7(0) + $5(0) = $0
Point 2: (X1 = 0, X2 = 80) Profit $7(0) + $5(80) = $400
Point 4: (X1 = 50, X2 = 0) Profit $7(50) + $5(0) = $350
Point 3: (X1 = 30, X2 = 40) Profit $7(30) + $5(40) = $410
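The corner-point evaluation above can be scripted directly, using the points and the $7X1 + $5X2 profit function exactly as given:

```python
# Corner points and profit function from the example above.
corners = {1: (0, 0), 2: (0, 80), 3: (30, 40), 4: (50, 0)}
profit = {p: 7 * x1 + 5 * x2 for p, (x1, x2) in corners.items()}
best = max(profit, key=profit.get)
print(best, profit[best])   # 3 410
```

Point 3 wins with a profit of $410, matching the hand calculation.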
Example 2: Minimization:
X1 = number of tons of black-and-white picture chemical produced
X2 = number of tons of color picture chemical produced
Realistic linear programming problems often have several decision variables and many constraints. Such problems cannot be solved graphically; instead, an algorithm such as the simplex procedure is used. The simplex method is thus the most widely used analytical method for solving linear programming problems.
Objective Function
Optimize (Max or Min) Z = c1x1 + c2x2 + ... + cnxn
Subject to the constraints:
a11x1 + a12x2 + ... + a1nxn (≤, =, ≥) b1
a21x1 + a22x2 + ... + a2nxn (≤, =, ≥) b2
...
am1x1 + am2x2 + ... + amnxn (≤, =, ≥) bm
and x1, x2, ..., xn ≥ 0
Example:
2X1 + 3X2 - 4X3 + X4 ≤ -50 gives
-2X1 - 3X2 + 4X3 - X4 ≥ 50 when both sides are multiplied by -1 (multiplying an inequality by a negative number reverses its direction)
iii. Three types of additional variables, namely:
a. slack variables (S),
b. surplus variables (-S), and
c. artificial variables (A)
are added to the given LP problem to convert it into standard form for two reasons:
- to convert an inequality into an equation, so as to have a standard form of the LP model, and
- to obtain an initial basic feasible solution from which to start the simplex method.
The summary of the extra variables needed in the given LP problem to convert it into standard form is given below.

Type of      Extra variables to be added       Coefficient of extra variables   Presence of variables in
constraint                                     in the objective function        the initial solution mix
                                               Max Z          Min Z
≤            Add only slack variable           0              0                Yes
≥            Subtract surplus variable         0              0                No
             and add artificial variable       -M             +M               Yes
=            Add artificial variable           -M             +M               Yes
Some Definitions
Solution: the values of the decision variables that satisfy the constraints.
Feasible solution: any solution that also satisfies the non-negativity restrictions.
Basic solution: for a set of m simultaneous equations in n unknowns (n > m), a solution obtained by setting n - m of the variables equal to zero and solving the m equations in the remaining m unknowns.
Basic feasible solution: a feasible solution that is also basic.
Optimum feasible solution: any basic feasible solution which optimizes the objective function.
Degenerate solution: a basic solution in which one or more basic variables equal zero.
2. Test of optimality
i. If all Zj - Cj ≥ 0, then the basic feasible solution is optimal (maximization case)
ii. If all Zj - Cj ≤ 0, then the basic feasible solution is optimal (minimization case)
Example: 1
Solve the problem using the simplex approach
Max. Z=300x1 +250x2
Subject to:
2x1 + x2 ≤ 40 (labor)
x1 + 3x2 ≤ 45 (machine)
x1 ≤ 12 (marketing)
x1, x2 ≥ 0
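The pivoting steps of the method can be sketched as a compact tableau simplex in Python. This is an illustrative implementation for maximization problems with all ≤ constraints and non-negative right-hand sides (the form of this example), not a robust production solver; the function name and structure are choices made here:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b[i] >= 0)."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; bottom row holds [-c | 0 | 0].
    T = [A[i][:] + [1.0 if k == i else 0.0 for k in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-v for v in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))            # slack variables start basic
    while True:
        j = min(range(n + m), key=lambda k: T[-1][k])
        if T[-1][j] >= -1e-9:
            break                            # no negative reduced cost: optimal
        ratios = [(T[i][-1] / T[i][j], i) for i in range(m) if T[i][j] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, r = min(ratios)                   # minimum-ratio test picks pivot row
        basis[r] = j
        piv = T[r][j]
        T[r] = [v / piv for v in T[r]]
        for i in range(m + 1):
            if i != r and T[i][j]:
                f = T[i][j]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = T[i][-1]
    return x, T[-1][-1]

# The example above: Max Z = 300x1 + 250x2.
x, z = simplex([300, 250], [[2, 1], [1, 3], [1, 0]], [40, 45, 12])
print(x, z)   # [12.0, 11.0] 6350.0
```

The marketing bound x1 ≤ 12 binds first; the machine constraint then fixes x2 = 11, giving Z = 6350.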
The ABC printing company is facing a tight financial squeeze and is attempting to cut costs
wherever possible. At present it has only one printing contract, and luckily the book is selling
well in both the hardcover and paperback editions. It has just received a request to print more
copies of this book in either the hardcover or paperback form. The printing cost for hardcover
books is birr 600 per 100 while that for paperback is only birr 500 per 100. Although the
company is attempting to economize, it does not wish to lay off any employee. Therefore, it
feels obliged to run its two printing presses at least 80 and 60 hours per week, respectively.
Press I can produce 100 hardcover books in 2 hours or 100 paperback books in 1 hour. Press
II can produce 100 hardcover books in 1 hour or 100 paperback books in 2 hours. Determine
how many books of each type should be printed in order to minimize costs.
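Rather than carrying out the Big-M tableau by hand here, the optimum can be verified numerically from the corner points (a sketch; x1 and x2 below denote hundreds of hardcover and paperback books per week, an encoding assumed from the stated hourly rates):

```python
# x1, x2: hundreds of hardcover / paperback books per week (assumed encoding).
def cost(x1, x2):
    return 600 * x1 + 500 * x2             # printing cost in birr

def feasible(x1, x2, eps=1e-9):
    return (2 * x1 + x2 >= 80 - eps        # Press I: at least 80 hours
            and x1 + 2 * x2 >= 60 - eps    # Press II: at least 60 hours
            and x1 >= -eps and x2 >= -eps)

det = 2 * 2 - 1 * 1                        # Cramer's rule for the two press lines
corners = [(0, 80), (60, 0),
           ((80 * 2 - 60 * 1) / det, (2 * 60 - 1 * 80) / det)]
best = min((c for c in corners if feasible(*c)), key=lambda c: cost(*c))
print(best, round(cost(*best), 2))   # about (33.33, 13.33), cost 26666.67 birr
```

In the Big-M method the same answer emerges from a tableau with surplus and artificial variables; the numeric check above only confirms which corner is cheapest.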
Every LP has another LP associated with it, which is called its dual. The first way of stating a linear problem is called the primal of the problem. The second way of stating the same problem is called the dual. The optimal solutions of the primal and the dual are equivalent, but they are derived through alternative procedures.
The dual contains economic information useful to management, and it may also be easier to
solve, in terms of less computation than the primal problem. Corresponding to every LP,
there is another LP. The given problem is called the primal. The related problem to the given
problem is known as the dual.
Primal                          Dual
Objective is minimization       Objective is maximization (and vice versa)
Example: A Dakota workshop wants to produce desks, tables, and chairs with the available resources of timber, finishing hours and carpentry hours, as given in the table below. The selling prices and available resources are also given in the table. Formulate this problem as primal and dual problems.
The first dual constraint is associated with desks, the second with tables, and the third with
chairs. Decision variable y1 is associated with Timber, y2 with finishing hours, and y3 with
carpentry hours. Suppose an entrepreneur wants to purchase all of Dakota's resources. The total price that should be paid for these resources is 48y1 + 20y2 + 8y3. Since the cost of purchasing the resources is to be minimized:
Min w = 48y1 + 20y2 + 8y3 is the objective function for Dakota dual.
In setting resource prices, the prices must be high enough to induce Dakota to sell.
For example, the entrepreneur must offer Dakota at least $60 for a combination of resources
that includes 8 board feet of timber, 4 finishing hours, and 2 carpentry hours because Dakota
could, if it wished, use the resources to produce a desk that could be sold for $60. Since the
entrepreneur is offering 8y1 + 4y2 + 2y3 for the resources used to produce a desk, he or she
must choose y1, y2, and y3 to satisfy: 8y1 + 4y2 + 2y3 ≥ 60. Similar reasoning shows that at
least $30 must be paid for the resources used to produce a table.
Thus y1, y2, and y3 must satisfy: 6y1 + 2y2 + 1.5y3 ≥ 30
Likewise, at least $20 must be paid for the combination of resources used to produce one
chair. Thus y1, y2, and y3 must satisfy: y1 + 1.5y2 + 0.5y3 ≥ 20. The solution to the Dakota
dual yields prices for timber, finishing hours, and carpentry hours.
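The dual just formulated can be checked numerically by enumerating basic solutions: every corner of the dual feasible region is the intersection of three tight constraints. The sketch below uses a tiny Gaussian-elimination helper; all names are choices made here:

```python
from itertools import combinations

# Dual data from the text: minimize w = 48y1 + 20y2 + 8y3 subject to the three
# resource-pricing constraints (all ">=") and y1, y2, y3 >= 0.
cons = [([8, 4, 2], 60),       # desk:  8y1 + 4y2 + 2y3   >= 60
        ([6, 2, 1.5], 30),     # table: 6y1 + 2y2 + 1.5y3 >= 30
        ([1, 1.5, 0.5], 20),   # chair: y1 + 1.5y2 + 0.5y3 >= 20
        ([1, 0, 0], 0), ([0, 1, 0], 0), ([0, 0, 1], 0)]   # non-negativity

def solve3(rows):
    """Gauss-Jordan elimination on three equations a.y = b; None if singular."""
    M = [row[:] + [rhs] for row, rhs in rows]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-9:
            return None
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(3):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[r][3] for r in range(3)]

best = None
for trio in combinations(cons, 3):         # each corner: three tight constraints
    y = solve3(trio)
    if y is None:
        continue
    if all(sum(a * v for a, v in zip(row, y)) >= rhs - 1e-7 for row, rhs in cons):
        w = 48 * y[0] + 20 * y[1] + 8 * y[2]
        if best is None or w < best[0]:
            best = (w, y)
print(best)   # w = 280 at resource prices y = (0, 10, 10)
```

Timber gets a zero price (it is not fully used), while finishing and carpentry hours are each worth $10 — the shadow prices that sensitivity analysis interprets.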
Sensitivity analysis and parametric linear programming are the two techniques that evaluate
the relationship between the optimal solution and the changes in the LP model parameters.
Sensitivity analysis is the study of sensitivity of the optimal solution of an LP problem due to
discrete variations (changes) in its parameters.
The degree of sensitivity of the solution due to these variations can range from no change at
all to a substantial change in the optimal solution of the given LP problem. Thus, in
sensitivity analysis, we determine the range over which the LP model parameters can change
without affecting the current optimal solution.
3.1 Introduction
In linear programming, each of the decision variables, as well as the slack and/or surplus variables, is allowed to take any fractional value. However, there are certain practical problems in which fractional values of the decision variables have no significance. For example, it does not make sense to speak of 1.5 men working on a project or 1.6 machines in a workshop. An integer solution can, however, be obtained by rounding off the optimum values of the variables to the nearest integers. This approach is easy in terms of the economy of effort, time and cost that might be required to derive an integer solution, but the rounded solution may not satisfy all the given constraints. Secondly, the value of the objective function so obtained may not be the optimal value. All such difficulties can be avoided if the given problem, where an integer solution is required, is solved by integer programming techniques.
There are certain decision problems where decision variables make sense only if they have
integer values in the solution. Capital budgeting, construction scheduling, plant location and
size, routing and shipping schedules, batch size, capacity expansion, fixed charge, etc., are a few problems which demonstrate the areas of application of integer programming.
There are various methods for solving integer programming problems. In this class, we shall see two methods, namely:
1. Gomory's cutting plane method
2. Branch and bound method
An iterative procedure for the solution of an all integer programming problem by Gomory's
cutting plane method may be summarized in the following steps.
Step 1 Initialization Formulate the standard integer LP problem. If there are any non-integer
coefficients in the constraint equations, convert them into integer coefficients. Solve it by
simplex method, ignoring the integer requirement of variables.
If there is more than one variable with the same largest fraction, then choose the one that has the smallest contribution to the maximization LP problem or the largest to the minimization LP problem.
Step 4 Obtain the new solution: add the cutting plane generated in Step 3 to the bottom of the current optimal simplex table. Find a new optimal solution by using the dual simplex method, i.e., choose the entering variable with the smallest ratio {(Cj - Zj)/Yij ; Yij < 0}, and return to Step 2. The process is repeated until all basic variables with integer requirements assume non-negative integer values.
The procedure for solving an integer linear programming problem can be explained through the flow chart shown in figure 3.1 below.
Rounding non-integer solution values up to the nearest integer value can result in an
infeasible solution.
A feasible solution is ensured by rounding down non-integer solution values but may
result in a less than optimal (sub-optimal) solution.
Gomory's cutting-plane method was developed by R. E. Gomory in 1956 to solve integer linear programming problems using the dual simplex method. It is based on the generation of a sequence of linear inequalities called cuts. A 'cut' removes part of the feasible region of the corresponding LP problem while leaving intact the feasible region of the integer linear programming problem. The hyperplane boundary of a cut is called the cutting plane.
Rounding off this solution to X1 = 2, X2 = 2 does not satisfy both of the constraints, and therefore the solution is infeasible. The dots in figure 3.1, also referred to as lattice points, represent all
of the integer solutions that lie within the feasible solution space of LP problem. However, it
is difficult to evaluate every such point to determine the value of the objective function.
Figure 3.1 suggests that we can find a solution to the problem when it is formulated as an LP problem (which by chance could contain integers). The optimal integer solution to the given LP problem is: x1 = 0, x2 = 3 and Max Z = 48. Notice that the optimal lattice point is not even adjacent to the most desirable LP problem solution corner.
Remark: Reducing the feasible region by adding extra constraints (cuts) can never improve the optimal objective function value; usually it makes it worse. If Z_IP represents the maximum value of the objective function in an ILP problem and Z_LP the maximum value of the objective function in the corresponding LP relaxation, then Z_IP ≤ Z_LP.
Subject to:
Optimal Solution:
Z = $1,055.56
x1 = 2.22 presses
x2 = 5.55 lathes
Example 3.1 Solve the following integer programming problem using Gomory’s
cutting plane algorithm:
Maximize: Z = X1 + X2
Subject to the constraints:
3X1 + 2X2 ≤ 5
X2 ≤ 2
and X1 , X2 ≥ 0 and are integers
In table 3.1, since all cj - Zj ≤ 0, the optimal solution is: X1 = 1/3, X2 = 2 and Max Z = 7/3.
Step 2: In the current optimal solution, not all basic variables are integers, so the
solution is not acceptable. Since both decision variables X1 and X2 are required to take
integer values, a pure integer cut is developed under the assumption that all the variables are
integers, as explained in step 3.
Step 3: Since X1 is the only basic variable whose value is a non-negative fraction, we shall
consider the first row for generating the Gomory cut. Considering X1-equation as the source
row in Table 3.1, we write:
1/3 = x1 + 0 x2 + 1/3 s1 – 2/3 s2 (X1 – source row)
The factoring of the x1 – source row yields:
(0 + 1/3) = (1 + 0) x1 + (0+1/3) s1 + (-1 +1/3) s2
Notice that each of the non-integer coefficients is factored into integer and fractional parts in
such a manner that the fractional part is strictly positive.
Rearrange the equation so that all of the integer coefficients appear on the left-hand side. This
gives:
1/3 + (s2 – x1) = 1/3 s1 + 1/3 s2
Since (s2 – x1) must be an integer in any integer solution, and the right-hand side is always
non-negative, the right-hand side must be at least 1/3. This yields the Gomory cut
1/3 s1 + 1/3 s2 ≥ 1/3, which is appended to the optimal simplex table with a Gomory slack
variable sg1.
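The factoring step above, splitting each coefficient into an integer part and a strictly positive fractional part, can be sketched in Python with exact fractions; the row data below is taken from the X1-source row.

```python
from fractions import Fraction
from math import floor

def gomory_cut(coeffs, rhs):
    """Split each coefficient into integer + fractional parts and return
    the cut data: sum(frac_j * var_j) >= frac(rhs)."""
    fracs = [c - floor(c) for c in coeffs]   # fractional parts, all in [0, 1)
    f0 = rhs - floor(rhs)                    # fractional part of the right-hand side
    return fracs, f0

# X1-source row: x1 + 0*x2 + (1/3)s1 - (2/3)s2 = 1/3
coeffs = [Fraction(1), Fraction(0), Fraction(1, 3), Fraction(-2, 3)]
fracs, f0 = gomory_cut(coeffs, Fraction(1, 3))
# fracs = [0, 0, 1/3, 1/3], f0 = 1/3: the cut is (1/3)s1 + (1/3)s2 >= 1/3
```

Note that the fractional part of -2/3 is -2/3 - (-1) = 1/3, which is why the cut's s2 coefficient is positive.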
Step 4 Apply the dual simplex method to find the new optimal solution. The key row and key
column are marked in table 3.2. The new solution is shown in table 3.3.
The solution given in table 3.3 is: X1 = 0, X2 = 2, sg1 = 1 and Max Z = 2. This also satisfies
the integer requirements.
The concept behind this method is to divide the entire feasible solution space of LP problem
into smaller parts called sub-problems and then search each of them for an optimal solution.
This approach is useful in those cases where there is a large number of feasible solutions and
enumeration of those becomes economically impractical. The branch and bound method
starts by imposing feasible upper and/or lower bounds on the decision variables in each
sub-problem. This helps reduce the number of simplex iterations needed to arrive at the
optimal solution, because each sub-problem whose bound is worse than the current feasible
bound is discarded and only the remaining sub-problems are examined.
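The divide, bound and prune logic can be illustrated with a small sketch. The example below is not the ILP from this chapter; it is an illustrative branch and bound for a tiny 0/1 knapsack problem, using the fractional (relaxed) value of each sub-problem as the bound for discarding it.

```python
def fractional_bound(items, capacity, value):
    # Upper bound for a sub-problem: fill the remaining capacity greedily,
    # allowing a fraction of the last item (the relaxation of the problem).
    bound = value
    for v, w in items:
        if w <= capacity:
            capacity -= w
            bound += v
        else:
            bound += v * capacity / w
            break
    return bound

def branch_and_bound(items, capacity):
    # items are (value, weight); sort by value/weight for a tight greedy bound
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0
    def explore(i, cap, value):
        nonlocal best
        best = max(best, value)
        if i == len(items):
            return
        # prune: this sub-problem cannot beat the current feasible bound
        if fractional_bound(items[i:], cap, value) <= best:
            return
        v, w = items[i]
        if w <= cap:                      # branch 1: take item i
            explore(i + 1, cap - w, value + v)
        explore(i + 1, cap, value)        # branch 2: skip item i
    explore(0, capacity, 0)
    return best

print(branch_and_bound([(60, 10), (100, 20), (120, 30)], 50))  # prints 220
```

The same three-part structure (branching on a variable, bounding the sub-problem with a relaxation, pruning dominated sub-problems) carries over to ILPs, where the bound comes from the LP relaxation.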
4.1 Introduction
One important application of LP is in the area of physical distribution (transportation) of
goods and services from several supply origins to several demand destinations. A
transportation problem can be expressed mathematically as an LP model, but it typically
involves many variables and constraints and therefore takes a long time to solve with the
general simplex method. The objective is to determine the number of units which should be
shipped from each origin to each destination in order to satisfy the required quantity of goods
or services at each demand destination. The structure of the transportation problem involves
a large number of shipping routes from several supply origins to several demand
destinations, and the aim is to determine the optimum transportation schedule that minimizes
total transportation cost or time. Although the transportation problem can be solved using
standard LP, the specialized transportation algorithm is computationally more efficient.
[General transportation table: origins with capacities ai; destinations with requirements (bi)
b1, b2, b3, ..., bn; the cell entries give the unit costs Cij.]
Minimize Z = Σi Σj Cij Xij (total transportation cost)
Subject to:
Σj=1 to n Xij = ai ; for i = 1, 2, ..., m (Supply Constraints)
Σi=1 to m Xij = bj ; for j = 1, 2, ..., n (Demand Constraints)
Xij ≥ 0 for all i and j
A necessary and sufficient condition for the existence of a feasible solution to the
transportation problem is:
Total supply = Total demand, that is, Σi=1 to m ai = Σj=1 to n bj
In other words, the total capacity (or supply) must equal the total requirement (or demand).
Transportation Problem
[Figure: transportation network with factories F1, F2 and F3 (supplies of 5,000, 6,000 and
2,500 units) shipping to warehouses W1, W2, W3 and W4 (demands of 6,000, 4,000, 2,000
and 1,500 units); the labels on the arcs give the unit transportation costs.]
4.2.1 North West Corner Method (NWCM) and Least Cost Method
1. Begin with the upper left hand cell (Left, upper most in the table), & allocate as many
units as possible to that cell. This will be the smaller amount of either the row supply
or the column demand. Adjust the row & column quantities to reflect the allocation.
2. Subtract from the row supply & from the column demand the amount allocated.
3. If the column demand is now zero, move to the cell next to the right, if the row supply
is zero, move down to the cell in the next row. If both are zero, move first to the next
cell on the right then down one cell.
4. Once a cell is identified as per step (3), it becomes a northwest cell. Allocate to it an
amount as per step (1).
5. Repeat steps (1) - (4) until all the supply and demand have been allocated.
Example 4.1
A company has three production facilities S1, S2, and S3 with production capacities of 7, 9, and
18 units (in 100s) per week of a product, respectively. These units are to be shipped to four
warehouses D1, D2, D3, and D4 with requirements of 5, 8, 7 and 14 units (in 100s) per week,
respectively. The transportation costs (in Birr) per unit between factories and warehouses are
given in the table below.
D1 D2 D3 D4 Capacity
S1 19 30 50 10 7
S2 70 30 40 60 9
S3 40 8 70 20 18
Demand 5 8 7 14 34
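The North West Corner steps above can be sketched in a few lines of Python; applied to the Example 4.1 data, the allocation gives an initial (not necessarily optimal) basic feasible solution.

```python
def north_west_corner(supply, demand):
    """Allocate starting from the top-left cell, moving right when a
    column's demand is exhausted and down when a row's supply is."""
    supply, demand = supply[:], demand[:]
    i = j = 0
    alloc = {}
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        if q > 0:
            alloc[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if demand[j] == 0:     # column satisfied: move right first (step 3)
            j += 1
        else:                  # row exhausted: move down
            i += 1
    return alloc

cost = [[19, 30, 50, 10],
        [70, 30, 40, 60],
        [40,  8, 70, 20]]
alloc = north_west_corner([7, 9, 18], [5, 8, 7, 14])
total = sum(q * cost[i][j] for (i, j), q in alloc.items())
print(total)   # initial total transportation cost: 1015 Birr
```

The allocations are S1-D1: 5, S1-D2: 2, S2-D2: 6, S2-D3: 3, S3-D3: 4 and S3-D4: 14, which can then be improved by an optimality test such as MODI or stepping stone.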
Thus, the problem is how the assignments should be made so as to optimize the given
objective. Some of the problems where assignment technique may be useful are: Assignment
of workers to machines, salesmen to different sales areas, classes to rooms, etc.
Assignment models are special type of transportation models where
Number of sources = Number of destinations
Each capacity and requirement value = 1
It can be solved by using the methods such as:
Enumeration method
Simplex method
Transportation method
Hungarian method
The Hungarian method was developed by the Hungarian mathematician D. König.
Given n resources (or facilities) and n activities (or jobs), and the effectiveness (in terms of
cost, profit, time, etc.) of each resource (facility) for each activity (job), the problem lies in
assigning each resource to one and only one activity so that the given measure of
effectiveness is optimized.
Let Xij denote the assignment of the ith machine to the jth job, with Xij = 1 if machine i is
assigned to job j and Xij = 0 otherwise. Then:
Minimize Z = Σi=1 to n Σj=1 to n Cij Xij (Objective Function)
Subject to:
Σj=1 to n Xij = 1 ; for i = 1, 2, ..., n (Resource Availability)
Σi=1 to n Xij = 1 ; for j = 1, 2, ..., n (Activity Requirement)
Assignment Problem:
The above problem can be presented as an LP as follows. The cost matrix is:

        J1   J2   J3
P1      20   15   31
P2      17   16   33
P3      18   19   27

Min Z = 20X11 + 15X12 + 31X13 + 17X21 + 16X22 + 33X23 + 18X31 + 19X32 + 27X33
Subject to the constraints of
a. P (operator) constraints:
X11 + X12 + X13 = 1 (operator one)
X21 + X22 + X23 = 1 (operator two)
X31 + X32 + X33 = 1 (operator three)
b. J (job) constraints:
X11 + X21 + X31 = 1 (job one)
X12 + X22 + X32 = 1 (job two)
X13 + X23 + X33 = 1 (job three)
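Since n = 3, the enumeration method listed earlier is practical here: there are only 3! = 6 complete assignments to check. A minimal sketch:

```python
from itertools import permutations

cost = [[20, 15, 31],   # P1 -> J1, J2, J3
        [17, 16, 33],   # P2
        [18, 19, 27]]   # P3

# each permutation p assigns operator i to job p[i]; pick the cheapest
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
best_cost = sum(cost[i][best[i]] for i in range(3))
print(best, best_cost)   # (1, 0, 2) 59: P1->J2, P2->J1, P3->J3
```

Enumeration grows as n!, which is exactly why the Hungarian method is preferred for anything beyond small n.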
Example 4.2
A computer center has three programmers. The center wants three application programs to be
developed. The head of the computer center, after studying carefully the programs to be
developed, estimates the computer time in minutes required by the experts for the application
programs as follows.
1 120 100 80
2 80 90 110
5.1 Introduction
The success or failure that an individual or organization experiences depends to a large
extent on the ability to make appropriate decisions. Making a decision requires an
enumeration of feasible and viable alternatives (courses of action or strategies), the projection
of consequences associated with different alternatives, and a measure of effectiveness (or an
objective) by which the most preferred alternative is identified. Decision theory provides an
analytical and systematic approach to the study of decision-making. In other words, decision
theory provides a method of rational decision-making wherein data concerning the occurrence
of different outcomes (consequences) may be evaluated to enable the decision-maker to
identify the most suitable alternative (or course of action).
Decision models useful in helping decision-makers make the best possible decisions are
classified according to the degree of certainty. The scale of certainty can range from
complete certainty to complete uncertainty. The region that falls between these two
extreme points corresponds to decision-making under risk (probabilistic problems).
Irrespective of the type of decision model, there are certain essential characteristics which are
common to all as listed below.
Decision alternatives There is a finite number of decision alternatives available with the
decision-maker at each point in time when a decision is made. The number and type of such
alternatives may depend on the previous decisions made and on what has happened
subsequent to those decisions. These alternatives are also called courses of action (actions,
acts or strategies) and are under control and known to the decision-maker. These may be
described numerically such as, stocking 100 units of a particular item, or non-numerically
such as, conducting a market survey to know the likely demand of an item.
State of nature A possible future condition (consequence or event) resulting from the choice
of a decision alternative depends upon certain factors beyond the control of the decision-
maker. These factors are called states of nature (future). For example, if the decision is to
The states of nature are mutually exclusive and collectively exhaustive with respect to any
decision problem. The states of nature may be described numerically such as, demand of 100
units of an item or non-numerically such as, employees strike, etc.
Payoff A numerical value resulting from each possible combination of alternatives and states
of nature is called a payoff. The payoff values are always conditional values because of the
unknown states of nature.
Solution: Let D1, D2 and D3 be the poor, moderate and high demand, respectively. Then the
payoff is given by:
Payoff = Sales revenue - Cost
The calculations for payoff (in thousands) for each pair of alternative demand (course of
action) and the types of product (state of nature) are shown below:
D1A = 3 x 25 - 25 - 3 x 12 = 14     D2A = 7 x 25 - 25 - 7 x 12 = 66
D1B = 3 x 25 - 35 - 3 x 9 = 13      D2B = 7 x 25 - 35 - 7 x 9 = 77
D1C = 3 x 25 - 53 - 3 x 7 = 1       D2C = 7 x 25 - 53 - 7 x 7 = 73
D3A = 11 x 25 - 25 - 11 x 12 = 118
D3B = 11 x 25 - 35 - 11 x 9 = 141
D3C = 11 x 25 - 53 - 11 x 7 = 145
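The calculations above can be reproduced with a short script using the figures from the example: unit price 25, fixed costs 25, 35 and 53 and unit variable costs 12, 9 and 7 for products A, B and C, and demand levels of 3, 7 and 11 (thousand units).

```python
price = 25
products = {"A": (25, 12), "B": (35, 9), "C": (53, 7)}  # (fixed cost, unit cost)
demands = {"D1": 3, "D2": 7, "D3": 11}                  # thousands of units

# payoff = sales revenue - fixed cost - variable cost
payoff = {(d, p): q * price - fixed - q * var
          for d, q in demands.items()
          for p, (fixed, var) in products.items()}

print(payoff[("D3", "C")])   # 145, the largest payoff in the table
```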
In the absence of knowledge about the probability of any state of nature (future) occurring,
the decision- maker must arrive at a decision only on the actual conditional payoff values,
together with a policy (attitude). There are several different criteria of decision-making in this
situation. The criteria that we will discuss in this section include:
i) Maximax or Minimin
1. Maximax or Minimin
In this criterion the decision-maker ensures that he/she should not miss the opportunity to
achieve the largest possible profit (maximax) or lowest possible cost (minimin). Thus, he/she
selects the alternative (decision choice or course of action) that represents the maximum of
the maxima (or minimum of the minima) payoffs (consequences or outcomes).
The working method is summarized as follows:
a) Locate the maximum (or minimum) payoff values corresponding to
each alternative (or course of action), then .
b) Select an alternative with best anticipated payoff value (maximum
for profit and minimum for cost).
2. Maximin or Minimax
In this criterion the decision-maker ensures that he/she would earn no less (or pay no more)
than some specified amount. Thus, he/she selects the alternative that represents the maximum
of the minima (or minimum of the maxima in case of loss) payoff in case of profits.
The working method is summarized as follows:
a) Locate the minimum (or maximum in case of profit) payoff value in case of loss' (or
cost) data corresponding to each alternative, then
b) Select an alternative with the best anticipated payoff value (maximum for profit and
minimum for loss or cost).
This criterion is also known as the opportunity loss decision criterion or minimax regret
decision criterion, because the decision-maker feels regret after adopting a wrong course of
action (or strategy) that results in an opportunity loss of payoff.
Payoff Tables
Payoff table analysis can be applied when:
There is a finite set of discrete decision alternatives.
The outcome of a decision is a function of a single future event.
In a Payoff table
The rows correspond to the possible decision alternatives.
The columns correspond to the possible future events.
Events (states of nature) are mutually exclusive and collectively exhaustive.
The table entries are the payoffs.
In the maximax criterion the decision maker selects the decision that will result in the
maximum of maximum payoffs; an optimistic criterion.
In the maximin criterion the decision maker selects the decision that will reflect the maximum
of the minimum payoffs; a pessimistic criterion.
Regret is the difference between the payoff from the best decision and all other decision
payoffs. The decision maker attempts to avoid regret by selecting the decision alternative that
minimizes the maximum regret.
The Hurwicz criterion is a compromise between the maximax and maximin criteria.
A coefficient of optimism, α, is a measure of the decision maker's optimism.
The Hurwicz criterion multiplies the best payoff by α and the worst payoff by 1 - α
for each decision, and the best result is selected.
The equal likelihood (or Laplace) criterion multiplies the decision payoff for each state of
nature by an equal weight, thus assuming that the states of nature are equally likely to occur.
Decision Values
Apartment building $50,000(.5) + 30,000(.5) = 40,000
Office building $100,000(.5) - 40,000(.5) = 30,000
Warehouse $30,000(.5) + 10,000(.5) = 20,000
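The criteria above can be sketched for the three real-estate alternatives, using the payoffs under good and poor economic conditions from the Decision Values list:

```python
payoffs = {"Apartment": [50000, 30000],     # [good conditions, poor conditions]
           "Office":    [100000, -40000],
           "Warehouse": [30000, 10000]}

maximax = max(payoffs, key=lambda d: max(payoffs[d]))   # optimistic
maximin = max(payoffs, key=lambda d: min(payoffs[d]))   # pessimistic

# minimax regret: regret = best payoff in each state minus the decision's payoff
best = [max(p[s] for p in payoffs.values()) for s in range(2)]
regret = {d: max(best[s] - p[s] for s in range(2)) for d, p in payoffs.items()}
min_regret = min(regret, key=regret.get)

alpha = 0.5   # Hurwicz coefficient of optimism (illustrative choice)
hurwicz = max(payoffs,
              key=lambda d: alpha * max(payoffs[d]) + (1 - alpha) * min(payoffs[d]))

# equal likelihood (Laplace): weight each of the two states by 1/2
laplace = max(payoffs, key=lambda d: sum(payoffs[d]) / 2)

print(maximax, maximin, min_regret, hurwicz, laplace)
```

With these numbers, maximax selects Office while the other four criteria all select Apartment, illustrating how the choice of criterion reflects the decision maker's attitude toward risk.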
Decision Making without Probabilities -Summary of Criteria Results
A dominant decision is one that has a better payoff than another decision under each
state of nature.
The appropriate criterion is dependent on the “risk” personality and philosophy of
the decision maker.
Expected value is computed by multiplying each decision outcome under each state of
nature by the probability of its occurrence.
The expected opportunity loss is the expected value of the regret for each decision.
The expected value and expected opportunity loss criterion result in the same
decision.
Knowledge of sample (survey) information can be used to revise the probability estimates
for the states of nature. Prior to obtaining this information, the probability estimates for the
states of nature are called prior probabilities. With knowledge of conditional probabilities for
the outcomes or indicators of the sample or survey information, these prior probabilities can
be revised by employing Bayes' Theorem. The outcomes of this analysis are called posterior
probabilities or branch probabilities for decision trees.
Bayesian analysis uses additional information to alter the marginal probability of the
occurrence of an event.
A conditional probability is the probability that an event will occur given that another
event has already occurred.
An economic analyst provides additional information for the real estate investment decision,
in the form of conditional probabilities:
g = good economic conditions
p = poor economic conditions
P = positive economic report
N = negative economic report
P(P|g) = 0.80     P(N|g) = 0.20
P(P|p) = 0.10     P(N|p) = 0.90
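Given prior probabilities for the states of nature, the posteriors follow directly from Bayes' Theorem. The conditionals are those above; the priors P(g) = 0.60 and P(p) = 0.40 used below are assumed here purely for illustration.

```python
# priors (assumed here for illustration)
P_g, P_p = 0.60, 0.40
# conditionals from the economic analyst: P(P|g) and P(P|p)
P_pos_g, P_pos_p = 0.80, 0.10

# Bayes' Theorem: P(g|P) = P(P|g)P(g) / [P(P|g)P(g) + P(P|p)P(p)]
P_pos = P_pos_g * P_g + P_pos_p * P_p
post_g = P_pos_g * P_g / P_pos
post_p = P_pos_p * P_p / P_pos
print(round(post_g, 3), round(post_p, 3))   # 0.923 0.077
```

A positive economic report thus sharply revises the probability of good conditions upward, which is exactly how branch probabilities are obtained for decision trees.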
By expected value, the decision should be not to purchase insurance, yet people almost
always do purchase insurance.
Utility is a measure of personal satisfaction derived from money.
Utiles are units of subjective measures of utility.
Risk averters forgo a high expected value to avoid a low-probability disaster.
Risk takers take a chance for a bonanza on a very low-probability event in lieu of a
sure thing.
a. Determine the best decision without probabilities using the 5 criteria of decision
making with uncertainty.
b. Determine best decision with probabilities assuming 0.70 probabilities of good
conditions, 0.30 of poor conditions. Use expected value and expected opportunity loss
criteria.
c. Compute expected value of perfect information.
d. Develop a decision tree with expected value at the nodes.
e. Given the following, P(P|g) = 0.70, P(N|g) = 0.30, P(P|p) = 0.20, P(N|p) = 0.80,
determine posterior probabilities using Bayes’ rule.
f. Perform a decision tree analysis using the posterior probability obtained in part e.
6.1 Introduction
In business and economics literature, the term 'game' refers to the general situation of conflict
and competition in which two or more competitors (or participants) are involved in decision-
making activities in anticipation of certain outcomes over a period of time. The competitors
are referred as players. A player may be an individual, a group of individuals, or an
organization. Few examples of competitive and conflicting decision environment involving
the interaction between two or more competitors where techniques of theory of games may be
used to resolve them are: (i) pricing of products, where a firm's ultimate sales are determined
not only by the price levels it selects but also by the prices its competitors set, (ii) various TV
networks have found that program success is largely dependent on what the competitors
presents in the same time slot; the outcomes of one networks programming decisions have,
therefore, been increasingly influenced by the corresponding decisions made by other
networks, (iii) the success of a business tax strategy depends greatly on the position taken by
the internal revenue service regarding the expenses that may be disallowed, and (iv) the
success of an advertising/marketing campaign depends largely on the various types of
services offered to the customers, etc.
As an area of academic study, theory of games provides a series of mathematical models that
may be quite useful in explaining interactive decision-making concepts. But as a practical
tool, it is limited in scope. However, such models provide an opportunity to a competitor to
evaluate not only his personal alternatives (courses of action), but also the evaluation of the
opponent's (or competitor's) possible choices in order to win the game.
The models in the theory of games can be classified depending upon the following factors:
Number of players: If a game involves only two players (competitors), then it is called a two-
person game. However, if the number of players is more, the game is referred to as n-person
game.
The particular strategy (or complete plan) by which a player optimizes his gains or losses
without knowing the competitor's strategies is called optimal strategy. The expected outcome
per play when players follow their optimal strategy is called the value of the game:
Generally, two types of strategies are employed by players in a game.
a. Pure Strategy It is the decision rule which is always used by the player to select the
particular strategy (course of action). Thus, each player knows in advance of all strategies
out of which he always selects only one particular strategy regardless of the other player's
strategy, and the objective of the players is to maximize gains or minimize losses.
b. Mixed Strategy Courses of action that are to be selected on a particular occasion with
some fixed probability are called mixed strategies. Thus, there is a probabilistic situation
and objective of the players is to maximize expected gains or to minimize expected losses
by making choice among pure strategies with fixed probabilities.
Two-person zero-sum games play a central role in the development of the game theory.
A game with only two players, say player A and player B is called a two-person zero-sum
game, if one player's gain is equal to the loss of other player so that total sum is zero.
Payoff matrix: The payoffs (a quantitative measure of satisfaction a player gets at the end of
the play) in terms of gains or losses, when players select their particular strategies (courses of
action), can be represented in the form of a matrix, called the payoff matrix. Since the game
is zero-sum, the gain of one player is equal to the loss of other and vice-versa. In other words,
one player's payoff table would contain the same amounts in payoff table of other player with
the sign changed. Thus, it is sufficient to construct payoff table only for one of the players.
Suppose player A has m strategies represented by the subscripted letters A1, A2, ..., Am and
player B has n strategies represented by the subscripted letters B1, B2, ..., Bn; the numbers m
and n need not be equal. The total number of possible outcomes is therefore m x n. Here, it is
assumed that each player knows not only his own list of possible courses of action but also
that of his opponent. For convenience, it is assumed that player A is always the gainer whereas player
B a loser. Let aij be the payoff which player A gains from player B if player A chooses
strategy i and player B chooses strategy j. Then the payoff matrix is shown in the table 6.1.
By convention, the rows of the payoff matrix denote player A's strategies and the columns
denote player B's strategies. Since player A is assumed to be the gainer always, so he wishes
to gain as large a payoff aij as possible while player B. will do his best to reach as small a
value of aij as possible. Of course, the gain to player B and loss to A must be - aij.
1. Each player has available to him a finite number of possible strategies (courses of action).
The list may not be the same for each player.
2. Player A attempts to maximize gains and player B minimize losses.
3. The decisions of both players are made individually prior to the play with no
communication between them.
4. The decisions are made simultaneously and also announced simultaneously so that neither
player has an advantage resulting from direct knowledge of the other player's decision.
5. Both the players know not only possible payoffs to themselves but also of each other.
Example:
Consider the following game in which player I has two choices from which to select, and
player II has three alternatives for each choice of player I. The payoff matrix ‘a’ is given
below.
Player II
j1 j2 j3
Player I i1 4 1 3
i2 2 3 4
a = Payoff Matrix
In the payoff matrix, the two rows (i1 and i2) represent the two possible strategies that player I
can employ, and the three columns (j1, j2, and j3) represent the three possible strategies that
player II can employ.
The payoff matrix is oriented to player I, meaning that a positive aij is a gain for player I and
a loss for player II, and a negative aij is a gain for player II and a loss for player I. For
example, if player I uses strategy 2 and player II uses strategy 1, player I receives a21 = 2
units and player II thus loses 2 units.
Clearly, in our example player II always loses; however, the objective is to minimize the
payoff to player I.
In a game situation it is assumed that the pay-off table is known to all players.
It uses Minimax and Maximin principles (Game with saddle point). If the maximin value
equals the minimax value, then the game is said to have a saddle (equilibrium) point and the
corresponding strategies are called optimal strategies. The amount of payoff, i.e. V at an
equilibrium point is known as the value of the game. A game may have more than one saddle
point. A game with no saddle point is solved by strategies with fixed probabilities.
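Checking for a saddle point is mechanical: compute the maximin and minimax values and compare them. A sketch using the 2 x 3 payoff matrix from the example above:

```python
a = [[4, 1, 3],
     [2, 3, 4]]   # rows: player I's strategies, columns: player II's strategies

maximin = max(min(row) for row in a)        # best of the row minima
minimax = min(max(col) for col in zip(*a))  # best of the column maxima
print(maximin, minimax)   # 2 3: unequal, so no saddle point
```

Since maximin (2) is less than minimax (3), this game has no saddle point and must be solved with mixed strategies.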
Example 1
Two companies, A and B, sell two products. Company A advertises in Radio (A1), Television
(A2), and Newspaper (A3). Company B in addition to using Radio (B1), Television (B2), and
Newspaper (B3), also uses mails and brochures (B4). Depending on the effectiveness of each
advertising campaign, one company can capture a portion of the market from the others.
The following matrix (on table 6.1) summarizes the percentage of the market captured or lost
by company A.
Table 6.1
Company B
Strategies B1 B2 B3 B4
Company A A1 8 -2 9 -3
A2 6 5 6 8
A3 -2 4 -9
Determine the optimal strategy for the game, and the value of the game?
The solution of the game is based on the principle of securing the best of the worst for each
player (Maximin and Minimax Principles)
Solution
The rules of dominance are used to reduce the size of the payoff matrix. These rules help in
deleting certain rows and/or columns of the payoff matrix which are inferior (less attractive)
to at least one of the remaining rows and/or columns (strategies) in terms of payoffs to both
the players. Rows and/or columns once deleted will never be used for determining the
optimum strategy for both the players.
The rules of dominance are especially used for the evaluation of two-person zero-sum games
without saddle (equilibrium) point. Certain dominance principles are stated as follows:
1. For player B who is assumed to be the loser, if each element in a column, say Cr, is
greater than or equal to the corresponding element in another column, say Cs, in the
payoff matrix, then the column Cr, is dominated by column Cs, and therefore, column
Cr, can be deleted from the payoff matrix. In other words, player B will lose more by
choosing strategy for Cr, column than by choosing strategy for column Cs therefore, he
will never use strategy corresponding to column Cr.
Remark: Rules (principles) of dominance discussed are used when the payoff matrix
is a profit matrix for the player A and a loss matrix for player B. Otherwise the
principle gets reversed.
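The dominance rules can be applied mechanically. In the sketch below (with an illustrative payoff matrix, rows belonging to the maximizing player and columns to the minimizing player), a row is deleted when every entry is less than or equal to the corresponding entry of some other row, and a column is deleted when every entry is greater than or equal to some other column's.

```python
def reduce_by_dominance(a):
    """Repeatedly delete dominated rows and columns of a payoff matrix."""
    a = [row[:] for row in a]
    changed = True
    while changed:
        changed = False
        # row dominance: the row player never uses a weakly smaller row
        for r in range(len(a)):
            if any(s != r and all(a[s][j] >= a[r][j] for j in range(len(a[0])))
                   for s in range(len(a))):
                del a[r]
                changed = True
                break
        if changed:
            continue
        # column dominance: the column player never uses a weakly larger column
        for c in range(len(a[0])):
            if any(s != c and all(a[i][s] <= a[i][c] for i in range(len(a)))
                   for s in range(len(a[0]))):
                for row in a:
                    del row[c]
                changed = True
                break
    return a

print(reduce_by_dominance([[9, 7, 2], [11, 8, 4], [4, 1, 7]]))  # [[8, 4], [1, 7]]
```

Here row 1 is dominated by row 2 and, after its removal, the first column is dominated by the second, leaving a 2 x 2 game.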
Consider the following game between two companies, I and II, in which company I has three
strategies (1, 2 and 3) and company II has three strategies (A, B and C). The values in the
table are the percentage increase or decrease in market share for company I.
Table 6.2

Strategies   A    B    C
    1        9    7    2
    2       11    8    4
    3        4    1    7

Determine an optimal strategy for companies I and II, and also find the value of the game.
Solution
The first step is to check the payoff table for any dominant strategies. Doing so, we find that
strategy 2 dominates strategy 1, and strategy B dominates strategy A. Thus, strategies 1 and A
can be eliminated from the payoff table.
Table 6.4: Payoff table after dominance, with maximin and minimax criteria

Strategies   B    C
    2        8    4
    3        1    7
The graphical method is useful for games where the payoff matrix is of size 2 x n or m x 2,
i.e., games with mixed strategies in which one of the players in the two-person zero-sum
game has only two undominated pure strategies. Optimal strategies for both the players
assign non-zero probabilities to the same number of pure strategies.
Player A Player B
B1 B2 B3 B4
A1 2 2 3 -2
A2 4 3 2 6
Solution
The game does not have a saddle point. If the probability of player A’s playing A1 and A2 in
the strategy matrix is denoted by P1 and P2, respectively, where p2= 1-p1, then the expected
payoff (gain) to player A will be:
For an m x n payoff matrix, let aij be the element in the ith row and jth column of the game
payoff matrix. Then the expected gain to player A against B's pure strategy j is:

V = Σi=1 to m Pi aij , for j = 1, 2, 3, ..., n
To obtain values of probability Pi, the values of the game to player A for all strategies by
player B must be at least equal to V. Thus to maximize the minimum expected gains, it is
necessary that:
Similarly, player B has a similar problem with the inequalities of the constants reversed, i.e.
minimize the expected loss. Since minimizing of V is equivalent to maximizing 1/V,
therefore, the resulting linear programming problem can be stated as:
Maximize Zq = 1/V = y1 + y2 + ... + yn
Subject to the constraints
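For a 2 x n game, the graphical method amounts to maximizing the lower envelope of the n expected-payoff lines Ej(p1) = a1j p1 + a2j (1 - p1) over p1 in [0, 1]. A sketch using player A's 2 x 4 payoff matrix from above: the candidate optima are the endpoints p1 = 0 and 1 plus the pairwise intersections of the lines.

```python
from fractions import Fraction

a = [[2, 2, 3, -2],
     [4, 3, 2, 6]]   # player A's 2 x 4 payoff matrix
n = len(a[0])

def envelope(p):
    # player A's guaranteed expected gain when playing A1 with probability p
    return min(a[0][j] * p + a[1][j] * (1 - p) for j in range(n))

# candidate values of p: endpoints plus intersections of every pair of lines
candidates = {Fraction(0), Fraction(1)}
for j in range(n):
    for k in range(j + 1, n):
        denom = (a[0][j] - a[1][j]) - (a[0][k] - a[1][k])
        if denom != 0:
            p = Fraction(a[1][k] - a[1][j], denom)
            if 0 <= p <= 1:
                candidates.add(p)

p1 = max(candidates, key=envelope)
print(p1, envelope(p1))   # 4/9 22/9
```

So player A plays A1 with probability 4/9 and A2 with probability 5/9, and the value of the game is 22/9.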
Examples of competitive situations that can be organized into two-person, zero-sum games:
a union negotiating a new contract with management;
two armies participating in a war game;
two politicians in conflict over a proposed legislative, one attempting to secure its
passage and the other attempting to defeat it;
a firm trying to increase its market share with a new product and a competitor
attempting to minimize the firm’s gains; and
A contractor negotiating with a government agent for a contract on a project, etc.
Problem – 1
Solve the game with the following payoff matrix:

 I    20   15   12   35
 II   25   14    8   10
 III  40    2   10    5
 IV   -5    4   11    0
Problem – 2
In a game of matching coins with two players, suppose A wins one unit of value when there
are two heads, wins nothing when there are two tails, and loses 1/2 unit of value when there is
one head and one tail. Determine the payoff matrix, the best strategies for each player and the
value of the game to A.
Problem – 3
For the following payoff matrix, transform the zero-sum game into an equivalent linear
programming problem and solve it by using simplex method.
Player B
Player A B1 B2 B3
A1 1 -1 3
A2 3 5 -3
A3 6 2 -2
7.1 Introduction
Markov chain models, developed by the Russian mathematician Andrei A. Markov in 1905,
are a particular class of probabilistic models known as stochastic processes. In a general
stochastic process the current state of a system may depend on all of its previous states, but in
a Markov process (sequence of events) the current state of a system depends only on its
immediately preceding state. For example, consider the following few systems.
In all these examples, the respective system may be in one of several possible states. These
states describe all possible conditions of a system (or process):
i. In marketing example, states may be expressed in terms of the brand that a customer is
presently using.
ii. In production example, a machine can be in one of two states: Working or not at any
point in time.
iii. In the financial transaction example, the accounts receivable can fall into one of the states:
cash sale, credit sale or uncollectible money.
iv. A student can specialize in only few management functional areas and not in all areas at
the same time.
The movement of these systems from one state to another is a Markov process because
outcomes are purely random and the probabilities of various outcomes depend only on the
existing state.
Markov chains are classified by their order. The case in which the probability of occurrence
of each state depends only upon the immediately preceding state is said to be a first order
Markov chain. In second order Markov chains, it is assumed that the probability of
occurrence in the forthcoming period depends upon the states in the last two periods.
Similarly, in third order Markov chains, it is assumed that the probability of a state in the
forthcoming period depends upon the states in the last three periods.
Predicting future states involves knowing the system's likelihood, or probability, of changing
from one state to another. These probabilities can be collected and placed in a matrix. Such a
matrix is called the matrix of transition probabilities and shows the likelihood that the system
will change from one state to another from one time period to the next. This enables us to
predict future states (or conditions) as well.
Illustration Let there be three brands A, B and C of a product (such as toothpaste, refined oil,
soap, etc.) satisfying the same need and which may be readily substituted for each other. A
buyer can buy any one of the three brands at any point of time; therefore, there are three
states corresponding to each brand. Thus, at the time of buying, the decision of changing the
brand may result in a change from one state (brand) to another. From marketing research
point of view, it is assumed that numbers of states (brands) are finite and the decision of
change of brand is taken periodically so that such changes will occur over a period of time.
In general, let
Si = finite number of possible outcomes (i = 1, 2... m) of each of the sequence of experiments
or events (In above illustration events are purchases and the possible outcomes are three
brands of the product).
m = number of states
Pij = conditional probability of outcome Sj for any particular experiment (or event) given that
outcome Si occurred in the immediately preceding experiment (or event), that is, the
probability of being in state j in the future given the current state i.
The outcomes S1, S2, ..., Sm are called states and the numbers Pij are called transition
probabilities. If we assume that the process begins in some particular state, then we can
calculate the probability of states relating to the overall sequence of events. Each time a new
state is reached, the process is said to have moved one step ahead.
If Pij are assumed to be constant over time, i.e. these do not change from one event to another
of the sequence, then the Markov chain is said to be stationary otherwise said to be non-
stationary or time dependent. All conditional, one step, state probabilities can be represented
as elements of a square matrix of transition probabilities as follows:
In the transition matrix of the Markov chain, Pij = 0 when no transition occurs from state i to
state j, and Pij = 1 when, from state i, the system can move only to state j at the next
transition. Each row of the transition matrix represents a one-step transition probability
distribution over all states. This means

    Σ (j = 1 to m) Pij = 1   for all i,   and   0 ≤ Pij ≤ 1.
The states of the system and transition probabilities can also be represented by two types of
diagrams:
1. Transition diagram: It shows the transition probabilities (or shifts) that can occur in any
situation. The arrows from each state indicate the possible states to which a system can move
from the given state. The matrix of transition probabilities which corresponds to the figure is
shown below.
Let the probabilities of shifting from s1 to s1 itself and to s2, as well as from state s2 to s1
and to s2 itself, be represented as elements of a transition matrix of the Markov chain as follows:
Consider a discrete-time, finite-state (S) Markov chain {Xt, t = 1, 2, 3, ...} with stationary
transition probabilities P[Xt+1 = j | Xt = i] = Pij, for i, j ∈ S.
Let P = (Pij) denote the matrix of transition probabilities.
The n-step transition probabilities between Xt and Xt+n are denoted P(n)ij, and the n-step
transition matrix is P(n) = P^n.
Consider the brand-switching problem at a gasoline service station: the probability of
customers changing service station over time is as given in the table below.
[Transition probability table between stations NOC and OiLibya]
One of the purposes of Markov analysis is to predict the future. The elements of the n-step
transition matrix P^n = [P(n)ij] (m × m) are obtained by repeatedly multiplying the transition
matrix P by itself:

    P^n = P^(n−1) × P

Let V denote a vector of state probabilities (for example, V1 = the vector of state
probabilities at period t = 1); then Vn = V1 P^(n−1). Each row i of P^n represents the state
probability distribution after n transitions, given that the process starts out in state i.
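The matrix-power relations above can be sketched numerically. Since the station table's probabilities were not reproduced here, the matrix values below are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

# A numerical sketch of the relations above. The values are
# hypothetical; row i gives the probabilities of moving from
# station i to each station in the next period.
P = np.array([[0.8, 0.2],    # NOC -> NOC, NOC -> OiLibya
              [0.3, 0.7]])   # OiLibya -> NOC, OiLibya -> OiLibya

P2 = np.linalg.matrix_power(P, 2)   # two-step matrix: P^2 = P x P

V1 = np.array([0.5, 0.5])    # hypothetical initial state probabilities
V2 = V1 @ P                  # state probabilities after one transition
V3 = V1 @ P2                 # state probabilities after two transitions
print(P2, V2, V3)
```

Note that each row of P2 again sums to 1, as any transition matrix must.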
Using the transition matrix, solve the customers' brand-switching problem (between NOC and
OiLibya) for month 3:

    V2 (month 2) = V1 P
    V3 (month 3) = V2 P = V1 P²

Similarly, we can determine the probabilities of the customers' brand-switching problem for
months 4, 5, 6, 7, 8, 9, ....
In general, Vi+1 = Vi P.
Example: For the gasoline-service probability matrix, determine the steady-state probabilities.
Solution
To determine the steady-state probabilities, we solve V = V P (together with the condition that
the state probabilities sum to 1); equivalently, we iterate Vi+1 = Vi P until the vector no
longer changes.
Steady-State Probability

              1     2     3
         1   0.4   0.6    0
    T =  2   0.3   0.7    0
         3    0     0    1.0
3. Absorbing state: a state is said to be an absorbing (trapping) state if, once entered, the
system never leaves it. This situation occurs whenever a transition probability on the diagonal
of the matrix (from upper left to lower right) is equal to 1.
State 3 in this transition matrix is referred to as an absorbing or trapping state. Once state 3
is reached, there is a 1.0 probability that the system remains in it in all succeeding time
periods.
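The absorbing behaviour can be checked numerically. The sketch below uses a matrix in which state 3 has a diagonal transition probability of 1, as the text describes:

```python
import numpy as np

# A numerical check of absorbing behaviour: state 3 has a diagonal
# transition probability of 1, while states 1 and 2 interchange.
T = np.array([[0.4, 0.6, 0.0],
              [0.3, 0.7, 0.0],
              [0.0, 0.0, 1.0]])

# Raising T to a high power shows each starting state's long-run fate.
T_limit = np.linalg.matrix_power(T, 50)
print(T_limit.round(4))
# Rows for states 1 and 2 approach (1/3, 2/3, 0), the steady state of
# the sub-chain {1, 2}; the row for state 3 stays at (0, 0, 1): once
# the absorbing state is entered, it is never left.
```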
Problem 1
Two manufacturers, A and B, are competing with each other in a restricted market. Over the
years, A's customers have exhibited a high degree of loyalty, as measured by the fact that
customers use A's product 80 per cent of the time. Also, former customers purchasing the
product from B have switched back to A's product 60 per cent of the time.
a) Construct and interpret the state transition matrix in terms of:
Amare Matebu Kassa (Dr.-Ing) Page 111
i) retention and loss, and
ii) retention and gain.
b) Calculate the probability of a customer purchasing A's product at the end of the second
period.
Problem 2
There are three dairies A, B and C in a small town which supply all the milk consumed in
the town. Assume that the initial consumer sample is composed of 1,000 respondents
distributed over three dairies A, B and C. It is known by all the dairies that consumers
switch from one dairy to another due to advertising, price and dissatisfaction. All these
dairies maintain records of the number of their customers and the dairy from which they
obtained each new customer. The following table illustrates the flow of customers over an
observation period of one month. Assume that the matrix of transition probabilities
remains fairly stable and at the beginning of period one, market shares are A = 25%, B =
45% and C = 30%. Construct the state transition probability matrix to analyze the
problem.
    Dairy    Customers at       Gains    Losses    Customers at
             start of period                       end of period
      A           250             62        50          262
      B           450             53        60          443
      C           300             50        55          295
    Total       1,000            165       165        1,000
Problem 3
The number of units of an item that are withdrawn from inventory on a day-to-day basis
is a Markov chain process in which requirements for tomorrow depend on today's
requirements. A one day transition matrix is given below:
Number of units withdrawn from inventory.
Problem 4
On January 1 (this year), Bakery A had 40 per cent of the local market share while
the other two bakeries, B and C, had 40 per cent and 20 per cent of the market
share, respectively. Based upon a study by a marketing research firm, the following
facts were compiled: Bakery A retains 90 per cent of its customers while gaining 5
per cent of B's customers and 10 per cent of C's customers. Bakery B retains 85
per cent of its customers while gaining 5 per cent of A's customers and 7 per cent
of C's customers. Bakery C retains 83 per cent of its customers and gains 5 per
cent of A's customers and 10 per cent of B's customers. What will each firm's
share be on January 1 next year, and what will each firm's market share be at
equilibrium?
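Problem 4 is fully specified, so a sketch of how it can be set up follows; the transition matrix is built directly from the stated retention and gain percentages, and the equilibrium is approximated by repeated multiplication:

```python
import numpy as np

# One-step transition matrix from the stated retention/gain figures
# (row = current bakery, column = bakery purchased from next period).
P = np.array([[0.90, 0.05, 0.05],   # from A: retain 90%, lose 5% to B, 5% to C
              [0.05, 0.85, 0.10],   # from B: retain 85%, lose 5% to A, 10% to C
              [0.10, 0.07, 0.83]])  # from C: retain 83%, lose 10% to A, 7% to B

V1 = np.array([0.40, 0.40, 0.20])   # shares on January 1 this year

V2 = V1 @ P                         # shares one year later
print(V2.round(3))                  # A = 0.400, B = 0.374, C = 0.226

# Equilibrium: iterate V <- V P until the vector stops changing.
V = V1.copy()
for _ in range(500):
    V = V @ P
print(V.round(4))                   # approximate equilibrium shares
```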
Queuing theory can be applied to a wide variety of operational situations where imperfect
matching between the customers and service facilities is caused by one's inability to predict
accurately the arrival and service times of customers. In particular, it can be used to
determine the level of service (either the service rate or the number of service facilities) that
balances the following two conflicting costs:
Obviously, an increase in the existing service facilities would reduce the customer's waiting
time. Conversely, decreasing the level of service results in longer queue(s). This means that
an increase (decrease) in the level of service increases (decreases) the cost of operating the
service facilities but decreases (increases) the cost of waiting. Figure 8.1 illustrates both
types of costs as a function of the level of service. The optimum service level is the one that
minimizes the sum of the two costs.
Of the two types of costs mentioned above, the cost of waiting is the more difficult to
estimate. When the customer is a human being, waiting cost is measured in terms of loss of
sales or goodwill, since a customer who is kept waiting may take his or her business
elsewhere. If the customer is a machine waiting for repair, the cost of waiting is measured in
terms of the cost of lost production.
Telephone conversation.
The landing of aircraft.
The unloading and loading of ships.
The scheduling of patients in clinics.
The service in custom offices.
Machine breakdown and repairs.
The timing of traffic lights.
Car washing.
Restaurant service.
Flow in production.
Layout of manufacturing systems.
Queuing theory is a mathematical approach to the analysis of systems that involve waiting in
line or queues. When a customer leaves a waiting line, the opportunity to make a profit by
providing the service is lost. The decision maker is now faced with a question of balancing
this opportunity cost against the expense of additional capacity.
Queue
A line of waiting customers who require service from one or more service
providers.
Queuing system
waiting room + customers + service provider
8.3.1 Arrival Characteristics
Size of the population
Unlimited (infinite) or limited (finite)
Pattern of arrivals
Scheduled or random, often a Poisson distribution
Behavior of arrivals
Wait in the queue and do not switch lines
No balking or reneging
The conventional notation for the characteristics of a queuing situation can be given in the
following format, also called the Kendall-Lee notation:

    Kendall-Lee notation → (a/b/c) : (d/e/f)
Where
a = arrivals distribution; e.g. Poisson (M)
b = departures (service-time) distribution; e.g. constant (D), exponential (M)
c = number of parallel servers; c = 1, 2, 3, 4, ...
d = queue discipline; FIFO, LIFO, SIRO, GD (any type)
e = maximum number allowed in the system (in queue plus in service); finite (Q) or infinite (∞)
f = size of the calling source; finite (N) or infinite (∞)
λ= arrival rate
µ = service rate
Recall that the system includes both the queue and the service facility.
The relationship between Ns and Ts (and likewise between Nq and Tq) is known as Little's
formula and is given as:

    Ns = λeff Ts ,   Nq = λeff Tq

The parameter λeff = λ when all arriving customers can join the system. Otherwise, if some
customers cannot join because the system is full, then λeff < λ.
The expected time in the system exceeds the expected time in the queue by the mean service
time:

    Ts = Tq + 1/μ ..........(5)

Multiplying both sides of equation (5) by λeff, which together with Little's formula gives:

    Ns = Nq + λeff/μ ..........(6)

Thus, by definition, the difference between the average number in the system, Ns, and the
average number in the queue, Nq, must equal the average number of busy servers, c̄. This
gives us:

    c̄ = Ns − Nq = λeff/μ ..........(7)
Let us see the variety of queuing models with the assumption of:
Poisson distribution arrivals
FIFO discipline
A single-service phase
For this single-channel, single-phase model with unlimited queue, (M/M/1) : (FIFO/∞/∞),
the standard results are (with ρ = λ/μ < 1):

    P0 = 1 − λ/μ
    Pn = (1 − λ/μ)(λ/μ)^n
    Ns = λ/(μ − λ)
    Nq = λ²/[μ(μ − λ)] = Ns − λ/μ
    Ts = 1/(μ − λ)
    Tq = λ/[μ(μ − λ)]
single channel
single phase
Poisson arrival rate
exponential service rate
finite or limited queue length: at most Q customers in the system

    Ns = (λ/μ) · [1 − (Q + 1)(λ/μ)^Q + Q (λ/μ)^(Q+1)] / {(1 − λ/μ)[1 − (λ/μ)^(Q+1)]}

    Ts = Tq + 1/μ
multiple channel
single phase
Poisson arrival rate
exponential service rate
unlimited queue length.
    P0 = 1 / [ Σ (n = 0 to c−1) (λ/μ)^n / n!  +  (λ/μ)^c / (c! (1 − λ/(cμ))) ]

    Nq = [ (λ/μ)^c λμ / ((c − 1)! (cμ − λ)²) ] · P0

    Tq = Nq / λ = [ (λ/μ)^c / (c! · cμ · (1 − λ/(cμ))²) ] · P0

    Ns = Nq + λ/μ ,   Ts = Tq + 1/μ

    Pn = [(λ/μ)^n / n!] · P0 ,             for 0 ≤ n ≤ c
    Pn = [(λ/μ)^n / (c! c^(n−c))] · P0 ,   for n > c
single channel
single phase
Poisson arrival rate
exponential service rate
unlimited queue length
finite calling population
    P0 = 1 / [ Σ (n = 0 to N) (N! / (N − n)!) (λ/μ)^n ] ,   N = population size

    Pn = (N! / (N − n)!) (λ/μ)^n · P0 ,   n = 1, 2, 3, ..., N

    Nq = N − [(λ + μ)/λ] (1 − P0)
    Ns = Nq + (1 − P0)
    Tq = Nq / [(N − Ns) λ]
    Ts = Tq + 1/μ
Solution:
Given λ = 4 customers per hour arrive and μ = 5 customers per hour can be served.
(M/M/1) : (FIFO/∞/∞)

    P0 = 1 − λ/μ = 1 − 4/5 = 0.20
         probability of no customers in the system

    Ns = λ/(μ − λ) = 4/(5 − 4) = 4 customers on average in the queuing system

    Nq = λ²/[μ(μ − λ)] = (4)²/[5(5 − 4)] = 3.2 customers on average in the queuing line

    Ts = 1/(μ − λ) = 1/(5 − 4) = 1 hr on average in the system

    Tq = λ/[μ(μ − λ)] = 4/[5(5 − 4)] = 0.80 hr (48 min) average time in the queue
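The single-channel formulas can be collected into a small helper; the sketch below reproduces the worked example with λ = 4 and μ = 5 per hour:

```python
# A small helper that evaluates the (M/M/1) formulas above; the call
# reproduces the worked example with lam = 4 and mu = 5 per hour.
def mm1_metrics(lam, mu):
    """Return (P0, Ns, Nq, Ts, Tq) for a stable M/M/1 queue (lam < mu)."""
    assert lam < mu, "queue is unstable unless lam < mu"
    P0 = 1 - lam / mu                  # probability of an empty system
    Ns = lam / (mu - lam)              # average number in the system
    Nq = lam ** 2 / (mu * (mu - lam))  # average number in the queue
    Ts = 1 / (mu - lam)                # average time in the system
    Tq = lam / (mu * (mu - lam))       # average time in the queue
    return P0, Ns, Nq, Ts, Tq

P0, Ns, Nq, Ts, Tq = mm1_metrics(4, 5)
print(round(P0, 2), Ns, Nq, Ts, Tq)   # 0.2 4.0 3.2 1.0 0.8
```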
(M/M/2) : (FIFO/∞/∞), with λ = 4, μ = 5 and c = 2:

    P0 = 1 / [ Σ (n = 0 to c−1) (λ/μ)^n / n!  +  ((λ/μ)^c / c!) · cμ/(cμ − λ) ]

       = 1 / [ (1/0!)(4/5)^0 + (1/1!)(4/5)^1 + (1/2!)(4/5)^2 · (2)(5)/((2)(5) − 4) ]

       = 1 / [1 + 0.8 + 0.32 × (10/6)] ≈ 0.429

    Tq = [ (λ/μ)^c / (c! · cμ · (1 − λ/(cμ))²) ] · P0 = Nq/λ ≈ 0.038 hr

    Ts = Tq + 1/μ = 0.038 + 1/5 ≈ 0.238 hr

Adding a second channel thus raises the probability of an empty system, P0, from 0.20 to
about 0.429.
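The multi-channel computation is easy to get wrong by hand, so a sketch of the (M/M/c) formulas, used to re-check the two-channel example, may help:

```python
import math

# A sketch of the multi-channel (M/M/c) formulas above, used to re-check
# the two-channel example with lam = 4, mu = 5 and c = 2.
def mmc_metrics(lam, mu, c):
    rho = lam / (c * mu)                   # utilization per channel
    assert rho < 1, "system is unstable unless lam < c * mu"
    r = lam / mu
    P0 = 1 / (sum(r ** n / math.factorial(n) for n in range(c))
              + r ** c / (math.factorial(c) * (1 - rho)))
    Nq = P0 * r ** c * rho / (math.factorial(c) * (1 - rho) ** 2)
    Tq = Nq / lam                          # Little's formula
    Ts = Tq + 1 / mu
    Ns = Nq + r
    return P0, Nq, Tq, Ts, Ns

P0, Nq, Tq, Ts, Ns = mmc_metrics(4, 5, 2)
print(round(P0, 3), round(Tq, 3), round(Ts, 3))   # 0.429 0.038 0.238
```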
Example 2
Arrivals at a telephone booth are considered to be Poisson distributed, with an average time
of 10 minutes between arrivals. The length of a phone call is assumed to be distributed
exponentially, with an average time per call of 3 minutes. Then:
a) What is the probability that a person arriving at the booth will have to wait?
b) What is the average number of people that will be waiting in the system?
c) The telephone department will install another booth when convinced that a person
will have to wait at least 3 minutes for the phone. By how much must the flow of rate
of arrivals be increased in order to justify a second booth?
Solution:
The model is (M/M/1) : (FIFO/∞/∞).
Given λ = 1/10 customers per min and µ = 1/3 customers per min.

a) The probability that an arrival has to wait is the probability that the booth is busy:

    P(wait) = 1 − P0 = 1 − (1 − λ/μ) = λ/μ = 3/10 = 0.3

b) The average number of people in the system is

    Ns = λ/(μ − λ) = (1/10)/(1/3 − 1/10) = 3/7 ≈ 0.43 customers

c) A second booth is justified when Tq ≥ 3 min. Setting

    Tq = λ1/[μ(μ − λ1)] = 3

and solving for λ1 with μ = 1/3 gives λ1 = 1/6. The required increase in the arrival rate is
therefore

    λ1 − λ = 1/6 − 1/10 = 10/60 − 6/60 = 4/60 = 1/15 customers per minute.
Problem 1:
A television repairman finds that the time spent on his jobs has an exponential distribution
with a mean of 30 minutes. If he repairs sets in the order in which they came in, and if the
arrival of sets follows a Poisson distribution approximately with an average rate of 10 per 8-
hour day, what is the repairman's expected idle time each day? How many jobs are ahead of
the average set just brought in?
Problem 2
In a railway marshalling yard, goods trains arrive at a rate of 30 trains per day. Assume that
the inter-arrival time follows an exponential distribution and that the service time (the time
taken to hump a train) is also exponential, with an average of 36 minutes.
Calculate:
a) expected queue size (line length)
b) probability that the queue size exceeds 10.
If the input of trains increases to an average of 33 per day, what will be the change in (a)
and (b)?
Problem 3
Arrivals at a telephone booth are considered to be Poisson with an average time of 10 minutes
between one arrival and the next. The length of phone call is assumed to be distributed
exponentially, with mean 3 minutes.
a) What is the probability that a person arriving at the booth will have to wait?
b) The telephone department will install a second booth when convinced that an arrival
would expect waiting for at least 3 minutes for a phone call. By how much should the
flow of arrivals increase in order to justify a second booth?
c) What is the average length of the queue that forms from time to time?
d) What is the probability that it will take him more than 10 minutes altogether to wait for
the phone and complete his call?
Often, we cannot find a mathematical technique to solve a model that has been constructed
with great difficulty. Also, it often does not seem possible to simplify the model in any way
without destroying its acceptability as a reasonable representation of 'reality'.
Consequently, the model cannot be manipulated to formulate a decision strategy or to obtain
some appreciation of potential responses when the model is subjected to various situations or
assumptions. However, a model once constructed may permit us to predict what will be the
consequences of taking a certain action. In particular, we could 'experiment' on the model by
'trying' alternative actions or parameters and compare their consequences. This
'experimentation' allows us to answer 'what if' questions relating to the effects of our
assumptions on the model response.
For example, suppose that we have built a model for some problem, but we neither have the
means nor the knowhow to solve it! The model could still be used by simply substituting and
comparing the consequences of using various parameters into the model. Analyzing a model
in this way is referred to as simulating the model.
As the complexity of a model increases, simulation is often the only remaining tool for
analysis. This is the case for problems which exhibit many sources of uncertainty. For such
problems, simulation seeks to replicate the uncertainty in the model and to assess the model's
response to events made to occur (i.e. simulated) with a frequency characterized by pre-
specified probabilities.
Where is simulation used? As implied earlier, it is used in almost all conceivable fields,
restricted only by our imagination and our ability to translate such imagination into a set of
computer directives. The following are some simple examples which will make us appreciate
what simulation is and what we can do with it.
Location of ambulances: In such studies, it is not known exactly where the demand for
ambulance services will arise and, therefore, simulation is required to test alternative
locations.
Design of computer systems: In general, it is difficult to specify future needs. Even the best
forecasts are just that, forecasts! Simulation is then useful to highlight the basic
characteristics of a computer system configuration (its speed, size, computing qualities,
memory and so on.) in terms of the equipment used, the management of the computer system,
the costs of such configurations and the resulting computational service which it provides to
users.
Shop floor management: The management of modern shop-floor factories is extremely
complex, involving many persons, machines, product parts, robots, material-handling
systems, television and computer-aided vision and inspection systems. Even if we were able
to model some specific aspects of the shop floor, integrating them into a single mathematical
model is simply very difficult. Provided we can construct a model (however complex it may
be) of the shop floor, simulation is very useful to assess its behavior and how it responds to
certain policies (such as scheduling jobs to be performed on the basis of their due-dates, with
earliest dates jobs performed first, by comparing alternative configurations and routing
control policies, and so. on). It can also be used to test alternative work flows, the
introduction of a new machinery, and in some cases the use of completely new manufacturing
and management technologies (such as testing 'Japanese approaches': Just-in-Time
manufacturing, flexible manufacturing, etc.). In such cases, simulation is a proper framework
which provides insights regarding the 'new' shop floor before it is implemented. It can,
thereby, provide an experience and a learning tool which will give us greater confidence in
the productivity and the management of the future system, at a relatively small cost.
Design of a queuing system: In queuing theory, the problem of balancing the cost of
waiting, against the cost of idle time of service facilities in the system arises due to the
probabilistic nature of the inter-arrival times of customers and the time taken to complete the
service to the customer. Thus, instead of manually trying out designs with actual data, we
process the data on computers and obtain the expected values of various characteristics of
the queuing system, such as server idle time, average waiting time, queue length, etc.
Other problems, such as aircraft management and routing, scheduling of bank tellers and
location of bank branches, the deployment of fire stations, routing and dispatching when
roads are not secured (where materials sent might not, potentially, reach their destination), the
location and the utilization of recreational facilities (such as parks, public swimming pools,
etc.), and many other problems have been studied or could be studied through simulation.
Today, a modern game like Monopoly simulates the competitive arena of real estate. Many
have played baseball with a deck of cards which has hits, strikeouts and walks, with a
cardboard diamond and plastic chips as runners. The distribution of hits, runs, outs, etc., in
the deck of cards serves as a realistic reflection of the overall frequency with which each will
occur in real life. Other games people play generate experience, understanding and
knowledge, which is basically what we will be trying to do through simulation.
What is new in modern simulation? It is the availability of computers, which makes it
possible for us to deal with an extraordinarily large quantity of details that can be
incorporated into a model.
An agreed definition for the word simulation has not been reached so far; however, a few
definitions are stated below:
A simulation of a system or an organism is the operation of a model or simulator
which is a representation of the system or organism. The model is amenable to
manipulation which would be impossible, too expensive or impractical to perform on
the entity it portrays. The operation of the model can be studied and, from it, properties
concerning the behaviour of the actual system can be inferred. (by Shubik)
This definition is broad enough to be applied equally to military war games, business games,
economic models, etc. In this view simulation involves logical and mathematical constructs
that can be manipulated on a digital computer using iterations or successive trials.
Simulation is the process of designing a model of a real system and conducting
experiments with this model for the purpose of understanding the behavior (within the
limits imposed by a criterion or set of criteria) for the operation of the system. (by
Shannon)
For Operations Research, simulation is a problem solving technique which uses a computer-
aided experimental approach to study problems that cannot be analyzed using direct and
formal analytical methods. As a result simulation can be thought of as a last resort technique.
It is not a technique which should be applied in all cases. However, Table 9.1 highlights what
simulation is and what it is not.
In the context of the above-defined inventory problem, the demand (consumption rate), lead
time and safety stock are identified as decision variables. These variables measure the
performance of the system in terms of total inventory cost under the decision rule: when to
order.
Disadvantages
1. Sometimes simulation models are expensive and take a long time to develop. For
example, a corporate planning model may take a long time to develop and prove
expensive also.
2. It is the trial and error approach that produces different solutions in repeated runs. This
means it does not generate optimal solutions to problems.
3. Each application of simulation is ad hoc to a great extent.
4. The simulation model does not produce answers by itself. The user has to provide all the
constraints for the solutions which he wants to examine.
Simulation is an alternative form of analysis when the problem situations are too complex to
be represented by the concise mathematical techniques.
Discrete event simulation has applications in a wide range of sectors including manufacturing
and service sectors.
automotive
healthcare
electronics
pharmaceuticals
food and beverage
packaging
logistics, etc.
To use simulation, it is first necessary that you learn how to generate the sample random
events that make up complex models. Once this is done, it is possible to use the computer to
reproduce the process through which chance is generated in real life. In this manner, a
problem that involves many interrelationships can be evaluated for its aggregate behavior,
and this behavior can be assessed as a function of a set of given parameters. Thus, process
generation (simulating chance processes) and modeling are the two fundamental techniques
that we need in simulation.
The most elementary and important type of process is the random process, which requires for
its simulation the selection of samples (or events) drawn from a given distribution, so that
repetition of this selection process will yield a frequency distribution of sample values that
faithfully matches the original distribution. When these samples are generated through some
mechanical or electronic means, they are pseudo-random numbers (for they are not truly
random, since they are generated by a machine). Alternately, it is possible to use a table of
random numbers, where selecting numbers in any consistent manner will yield numbers that
behave as if they were drawn from a uniform distribution.
There are several ways of generating random numbers, such as: random number generators
(which are an inbuilt feature of spreadsheets and many computer languages), tables (see
appendix), a roulette wheel, etc.
Random numbers between 00 and 99 are used to obtain values of random variables that have
a known discrete probability distribution in which the random variable of interest can assume
one of a finite number of different values. In some applications, however, the random
variables are continuous, that is, they can assume any real value according to a continuous
probability distribution. For example, in queuing theory applications, the amount of time a
server spends with a customer is such a random variable which might follow an exponential
distribution.
One characteristic of some systems that makes them difficult to solve analytically is that they
consist of random variables represented by probability distributions. Thus, a large
proportion of the applications of simulations are for probabilistic models (Stochastic
simulation).
The term Monte Carlo has become synonymous with probabilistic simulation in
recent years.
Monte Carlo is a technique for selecting numbers randomly from a probability
distribution.
A common way to generate pseudo-random numbers is the multiplicative congruential
method, defined by the recurrence relation

    rn = p × rn−1 (modulo m)

where p and m are positive integers, p < m, rn−1 is a k-digit number, and modulo m means
that rn is the remainder when p × rn−1 is divided by m. This means rn and p × rn−1 differ by
an integer multiple of m.
To start the process of generating random numbers, the first random number (also called the
seed), r0, is specified by the user. Then, using the above recurrence relation, a sequence of
k-digit random numbers can be generated, with period h < m, at which point the number r0
occurs again.
For illustration, let p = 35, m = 100, and arbitrarily start with r0 = 57. Since m − 1 = 99 is a
2-digit number, the recurrence will generate 2-digit random numbers:

    r1 = p × r0 (modulo 100) = 35 × 57 = 1,995 → remainder 95
    r2 = p × r1 (modulo 100) = 35 × 95 = 3,325 → remainder 25
    r3 = p × r2 (modulo 100) = 35 × 25 = 875  → remainder 75
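The recurrence above can be sketched in a few lines:

```python
# A minimal sketch of the multiplicative congruential recurrence used in
# the illustration above (p = 35, m = 100, seed r0 = 57).
def congruential(p, m, seed, count):
    """Generate `count` pseudo-random numbers via r_n = p * r_{n-1} mod m."""
    numbers, r = [], seed
    for _ in range(count):
        r = (p * r) % m
        numbers.append(r)
    return numbers

print(congruential(35, 100, 57, 5))   # [95, 25, 75, 25, 75]
```

Note how this particular choice of p and r0 quickly collapses into the short cycle 25, 75, which is exactly the kind of behaviour that makes careful selection of the constants necessary.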
The choice of r0 and p for any given value of m requires great care, and the method is also
not a truly random process, because the sequence of numbers generated is completely
determined by the input data. Thus, the numbers generated through this process are pseudo-
random numbers: they are reproducible and hence not random.
The random numbers generated by computer software are uniformly distributed decimal
fractions between 0 and 1. The software works on the concept of the cumulative distribution
function of the random variable for which we are seeking to generate random numbers.
For example, for the negative exponential distribution with density function

    f(x) = λ e^(−λx) ,   0 ≤ x < ∞,

the cumulative distribution function is given by

    F(x) = ∫ (0 to x) λ e^(−λx) dx = 1 − e^(−λx)

or,   e^(−λx) = 1 − F(x)
      −λx = log[1 − F(x)]
or,   x = −(1/λ) log[1 − F(x)]

If r = F(x) is a uniformly distributed random decimal fraction between 0 and 1, then the
exponential variable associated with r is given by:

    x = −(1/λ) log(1 − r) = −(1/λ) log r

This is an exponential process generator, since 1 − r is itself a uniform random number and
can be replaced by r.
Remark
i. We can pick up random numbers from random table, or
ii. Use built-in Excel formula to generate random numbers.
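The inverse-transform derivation above can be sketched directly; averaging many generated values should recover the distribution's mean, 1/λ:

```python
import math
import random

# A sketch of the inverse-transform generator derived above: a uniform
# random fraction r in (0, 1) is mapped to x = -(1/lam) * log(r), which
# is exponentially distributed with rate lam.
def exponential_variate(lam, r):
    """Exponential sample with rate lam from a uniform fraction r."""
    return -math.log(r) / lam

random.seed(42)                          # fixed seed for reproducibility
lam = 0.5
samples = [exponential_variate(lam, random.random()) for _ in range(10_000)]
print(sum(samples) / len(samples))       # sample mean, close to 1/lam = 2
```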
The demand per day for tires at a tire distributor is a discrete random variable defined as in
Table 9.3.

    Demand per day   Frequency (days)
          0               10
          1               20
          2               40
          3               60
          4               40
          5               30
       Total             200

Solution:
Table 9.4 Probability of Demand

    Demand   Frequency   Probability   Demand × Probability
       0         10          0.05             0
       1         20          0.10             0.10
       2         40          0.20             0.40
       3         60          0.30             0.90
       4         40          0.20             0.80
       5         30          0.15             0.75

Expected demand = 0 + 0.1 + 0.4 + 0.9 + 0.8 + 0.75
                = 2.95 tires
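The Monte Carlo idea can be sketched against Table 9.4: a uniform random fraction selects a demand value via the cumulative probabilities, and the long-run average approaches the expected demand of 2.95:

```python
import random

# Monte Carlo sampling from the demand distribution of Table 9.4.
demand = [0, 1, 2, 3, 4, 5]
prob = [0.05, 0.10, 0.20, 0.30, 0.20, 0.15]

cum = []                       # cumulative distribution: 0.05, 0.15, ...
total = 0.0
for p in prob:
    total += p
    cum.append(total)

def simulate_demand(days):
    draws = []
    for _ in range(days):
        r = random.random()            # uniform fraction in [0, 1)
        for d, c in zip(demand, cum):
            if r < c:                  # first interval containing r
                draws.append(d)
                break
        else:                          # guard against float round-off
            draws.append(demand[-1])
    return draws

random.seed(7)
draws = simulate_demand(100_000)
print(sum(draws) / len(draws))         # close to 2.95
```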
Consider the case of a drive-in market which consists of one cash register (the service
facility) and a single queue of customers. The inter-arrival times and service times are as
given in tables 'a' and 'b'. Assume that the time intervals between customer arrivals are
discrete random variables.
    Average queue length = 8 customers / 10 customers = 0.80 customers
    Average time in the system = 24.5 min / 10 customers = 2.45 min per customer
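Since tables 'a' and 'b' are not reproduced here, the sketch below uses hypothetical discrete distributions to show how such a hand simulation is mechanized: draw the next inter-arrival and service times, track when the register becomes free, and accumulate each customer's time in the system (waiting plus service):

```python
import random

# Hand-simulation of a single-queue, single-register system with
# hypothetical inter-arrival and service-time distributions (the
# original tables 'a' and 'b' are not reproduced in these notes).
arrival_times = [1, 2, 3, 4]            # minutes between arrivals (hypothetical)
arrival_probs = [0.2, 0.3, 0.3, 0.2]
service_times = [1, 2, 3]               # minutes of service (hypothetical)
service_probs = [0.4, 0.4, 0.2]

def simulate(customers, seed=1):
    random.seed(seed)
    clock, free_at, total_time = 0.0, 0.0, 0.0
    for _ in range(customers):
        clock += random.choices(arrival_times, arrival_probs)[0]
        start = max(clock, free_at)             # wait if register is busy
        free_at = start + random.choices(service_times, service_probs)[0]
        total_time += free_at - clock           # waiting + service time
    return total_time / customers

avg = simulate(1000)
print(round(avg, 2))   # average time in the system, in minutes
```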
When a machine breaks down, it must be repaired; and it takes either one, two, or
three days for the repair to be completed, according to the discrete probability
distribution shown in table (I)
The company would like to know if it should implement a machine maintenance program
at a cost of $20,000 per year that would reduce the frequency of breakdowns and thus the
time for repair.
The maintenance program would result in the following continuous probability
function for time between breakdowns
    f(x) = x/18 ,   0 ≤ x ≤ 6 weeks
Where x = Weeks between machine breakdowns
The reduced repair time resulting from the maintenance program is defined by the discrete
probability distribution shown in table (II).
10.1 Introduction
Often decision-making process involves several decisions to be taken at different times. For
example problems of inventory control, evaluation of investment opportunities, long-term
corporate planning, and so on require sequential decision-making. The mathematical
technique of optimizing such a sequence of interrelated decisions over a period of time is
called dynamic programming. It uses the idea of recursion to solve a complex problem,
broken into a series of interrelated (sequential) decision stages (also called sub problems)
where the outcome of a decision at one stage affects the decision at each of the next stages.
The word dynamic has been used because time is explicitly taken into consideration.
Dynamic programming (DP) differs from linear programming in two ways:
i. In DP, there is no set procedure (algorithm), as in LP, that can be used to solve all
problems. DP is a technique that allows us to break up the given problem into a sequence
of easier and smaller sub-problems, which are then solved in stages.
ii. LP gives a one-time-period (single-stage) solution, whereas DP considers decision-making
over time and solves each sub-problem optimally.
Stage: The dynamic programming problem can be decomposed or divided into a sequence of
smaller sub-problems called stages. At each stage there are a number of decision alternatives
(courses of action) and a decision is made by selecting the most suitable alternative. Stages
very often represent different time periods in the planning period of the problem, places,
people or other entities. For example, in the replacement problem each year is a stage; in the
salesman allocation problem, each territory represents a stage.
State: Each stage in a dynamic programming problem has a certain number of states
associated with it. These states represent various conditions of the decision process at a stage.
Return function: At each stage, a decision is made which can affect the state of the system at
the next stage and help in arriving at the optimal solution at the current stage. Every decision
that is made has its own merit in terms of worth or benefit associated with it and can be
described in an algebraic equation form. This equation is generally called a return function,
since for every set of decisions a return on each decision is obtained. This return function in
general depends on the state variable as well as the decision made at a particular stage. An
optimal policy or decision at a stage yields the optimal (maximum or minimum) return for a
given value of the state variable.
Figure 10.1 depicts the decision alternatives known at each stage for their evaluation. The
range of such decision alternatives and their associated returns at a particular stage is a
function of the state input to the stage itself. The state input to a stage is the output from the
previous (larger number) stage and the previous stage output is a function of the state input to
itself, and the decision taken at that stage. Thus to evaluate any stage we need to know the
values of the state input to it (there may be more than one state inputs to a stage) and the
decision alternatives and their associated returns at the stage.
Step 1: Identify the problem decision variables and specify objective function to be
optimized under certain limitations, if any.
Step 2: Decompose (or divide) the given problem into a number of smaller sub-problems (or
stages). Identify the state variables at each stage.
Step 3: Write down a general recursive relationship for computing the optimal policy. Decide
whether to allow the forward or the backward method to solve the problem.
Step 4: Construct appropriate tables to show the required values of the return function at each
stage.
Step 5: Determine the overall optimal policy or decisions and its value at each stage.
A traveler wants to travel from city 1(SF) to city10 (NY). He/she has to travel by four
different stage coaches, in order to complete the journey. The different routes and time (hrs)
are given. What is the minimum time taken by traveler from SF to NY?
Solution:
Decomposing the network into four stages and applying the backward recursion from the destination (NY, city 10) to the origin (SF, city 1) gives the shortest route, with
Minimum travel time = 15 hours
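The backward recursion of Steps 1-5 can be sketched in code. Since the original route figure is not reproduced in these notes, the travel times below are hypothetical assumptions, chosen so that the recursion reproduces the 15-hour minimum stated above; the city numbering (1 = SF, 10 = NY) follows the problem statement.

```python
# Backward dynamic-programming solution of the stagecoach problem.
# NOTE: the travel times are hypothetical (the original route table is
# not reproduced here); they are chosen so the minimum matches the
# 15-hour answer stated in the text.
times = {
    1: {2: 4, 3: 6, 4: 3},   # stage 1 decision: SF (city 1) -> 2, 3, 4
    2: {5: 7, 6: 3, 7: 4},   # stage 2 decisions: cities 2-4 -> 5, 6, 7
    3: {5: 2, 6: 4, 7: 5},
    4: {5: 6, 6: 5, 7: 8},
    5: {8: 5, 9: 4},         # stage 3 decisions: cities 5-7 -> 8, 9
    6: {8: 3, 9: 6},
    7: {8: 4, 9: 2},
    8: {10: 5},              # stage 4 decisions: cities 8-9 -> NY (city 10)
    9: {10: 6},
}

def shortest_route(times, origin=1, destination=10):
    """Return (min_time, route) using the backward recursion
    f(s) = min over next cities j of [ t(s, j) + f(j) ]."""
    f = {destination: 0}     # optimal time-to-go from each city
    best_next = {}
    # Evaluate cities in decreasing order, so f(j) is known when needed.
    for city in sorted(times, reverse=True):
        f[city], best_next[city] = min(
            (t + f[j], j) for j, t in times[city].items()
        )
    route, city = [origin], origin
    while city != destination:
        city = best_next[city]
        route.append(city)
    return f[origin], route

best_time, route = shortest_route(times)
print(best_time, route)      # -> 15 [1, 2, 6, 8, 10]
```

With these assumed times the optimal policy is 1 -> 2 -> 6 -> 8 -> 10, for a total of 4 + 3 + 3 + 5 = 15 hours; note that each stage's decision depends only on the state (current city), not on how that city was reached.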
Problem 1
The general NLP problem can be stated mathematically in the following form:
Optimize (Max or Min) Z= f(x1, x2, …, xn)
Subject to constraints
gi(x1, x2, …, xn) {≤, =, ≥} bi ;  i = 1, 2, …, m
xj ≥ 0 for all j= 1,2,…,n
Here f(x1, x2, …, xn) and gi(x1, x2, …, xn) are real-valued functions of the n decision variables, and at least one of them is non-linear.
where V = volume (demand in units),
P = price,
Cf = fixed cost, and
Cv = variable cost per unit.
The demand function is
V = 1,500 – 24.6P…………………………………..(2)
Substituting equation (2) and the values of Cf and Cv into the profit function, differentiating with respect to P, and setting the derivative equal to zero, the equation becomes
1,696.8 – 49.2P = 0
P = $34.49
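The stated derivative implies a variable cost of Cv = $8 (since 1,500 + 24.6Cv = 1,696.8), while Cf cancels out when differentiating and so does not affect the optimal price. A short sketch of the first-order condition, assuming the profit function takes the standard form Z = VP − Cf − CvV:

```python
# Optimal price from the first-order condition dZ/dP = 0, assuming
# Z = V*P - Cf - Cv*V with demand V = 1500 - 24.6*P.
# Cv = 8 is inferred from the derivative 1,696.8 - 49.2P given in the
# text (1500 + 24.6*Cv = 1696.8); Cf drops out of the derivative.
Cv = 8.0

# Z(P) = 1500P - 24.6P^2 - Cf - Cv*(1500 - 24.6P)
# dZ/dP = 1500 - 49.2P + 24.6*Cv = 1696.8 - 49.2P = 0
P_opt = (1500 + 24.6 * Cv) / 49.2
V_opt = 1500 - 24.6 * P_opt
print(round(P_opt, 2), round(V_opt, 1))   # -> 34.49 651.6
```

At P = $34.49 the company sells about 651.6 units; because the profit function is concave in P (the P² coefficient is negative), this stationary point is indeed a maximum.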
The method of Lagrange multipliers (λ) finds the stationary points (maxima, minima, etc.) of a function of several variables when the variables are subject to constraints. The Lagrange multiplier λ reflects the change in the objective function resulting from a unit change in the right-hand-side value of a constraint.
To optimize Z = f(x1, x2, …, xn) ……………………………. (1)
subject to m constraints
g1(x1, x2, …, xn) = 0
g2(x1, x2, …, xn) = 0
. . . ……………………………. (2)
gm(x1, x2, …, xn) = 0
form the Lagrangian function
L(x1, …, xn, λ1, …, λm) = f(x1, …, xn) – λ1g1 – λ2g2 – … – λmgm ……………… (3)
and set all of its partial derivatives equal to zero:
∂L/∂xj = 0, j = 1, 2, …, n and ∂L/∂λi = 0, i = 1, 2, …, m ……………… (4)
for x1, x2, …, xn, λ1, λ2, …, λm (n + m equations for n + m unknowns).
Note: Since λ1, λ2, …, λm appear in L only as multipliers of g1, g2, …, gm, the m equations ∂L/∂λi = 0 simply reproduce the original constraints (2).
The solutions obtained for x1, x2, …, xn from these equations are the values of these variables at the stationary points of the function f.
Example
The furniture company makes chairs and tables. The company has developed the following non-linear programming model to determine the optimal number of chairs (x1) and tables (x2) to produce each day in order to maximize profit, given a constraint on the available mahogany wood.
Maximize Z = 280x1 – 6x1² + 160x2 – 3x2² ($)
Subject to 20x1 + 10x2 = 800 ft²
Determine the optimal solution to this model using
a) the substitution method, and
b) the method of Lagrange multiplier.
Solution:
(a) Solve using the substitution method
Solve the constraint for x1:
x1 = 40 – 0.5x2……………………………(1)
and substitute this expression into the objective function:
Z = 280(40 – 0.5x2) – 6(40 – 0.5x2)² + 160x2 – 3x2²
  = 1,600 + 260x2 – 4.5x2²
Setting dZ/dx2 = 260 – 9x2 = 0 gives x2 = 28.89 tables, so x1 = 40 – 0.5(28.89) = 25.56 chairs and Z = $5,355.56.
(b) Solve using the method of Lagrange multipliers
Form the Lagrangian function
L = 280x1 – 6x1² + 160x2 – 3x2² – λ(20x1 + 10x2 – 800)
and set its partial derivatives equal to zero:
∂L/∂x1 = 280 – 12x1 – 20λ = 0 --------------------------(1)
∂L/∂x2 = 160 – 6x2 – 10λ = 0 ---------------------------(2)
∂L/∂λ = –(20x1 + 10x2 – 800) = 0 ----------------------(3)
Solve the three equations simultaneously. To eliminate λ, multiply equation (2) by 2 and subtract equation (1):
40 + 12x1 – 12x2 = 0, so x2 = x1 + 10/3
Substituting this into equation (3), 20x1 + 10(x1 + 10/3) = 800, gives x1 = 25.56 chairs and x2 = 28.89 tables.
Then,
Z = $5,355.56.
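Because the three first-order conditions are linear in x1, x2, and λ, the optimum can be double-checked by solving the 3×3 linear system directly. A minimal pure-Python sketch, using exact rational arithmetic so no rounding error enters the elimination:

```python
# Verify the Lagrange solution by solving the linear first-order system:
#   12*x1           + 20*lam = 280    (from dL/dx1 = 0)
#            6*x2   + 10*lam = 160    (from dL/dx2 = 0)
#   20*x1 + 10*x2            = 800    (the mahogany constraint)
from fractions import Fraction as F

A = [[F(12), F(0),  F(20)],
     [F(0),  F(6),  F(10)],
     [F(20), F(10), F(0)]]
b = [F(280), F(160), F(800)]

# Gaussian elimination (no pivoting needed for this small system).
n = 3
for k in range(n):
    for i in range(k + 1, n):
        if A[i][k]:
            r = A[i][k] / A[k][k]
            A[i] = [a - r * c for a, c in zip(A[i], A[k])]
            b[i] = b[i] - r * b[k]
x = [F(0)] * n
for i in range(n - 1, -1, -1):
    x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]

x1, x2, lam = x                       # x1 = 230/9, x2 = 260/9
Z = 280 * x1 - 6 * x1**2 + 160 * x2 - 3 * x2**2
print(float(x1), float(x2), float(Z)) # ~25.56 chairs, ~28.89 tables, Z ~ 5355.56
```

The exact solution is x1 = 230/9 and x2 = 260/9, matching the rounded values above; with the sign convention L = Z − λ(g − 800) used here the multiplier comes out as λ = −4/3, i.e. one more ft² of wood would lower profit by about $1.33, since the constraint forces more production than the unconstrained optimum.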
Kuhn-Tucker Conditions
The optimal solution of a general non-linear programming problem can be identified using a set of necessary conditions developed by Kuhn and Tucker.
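For a maximization problem with an inequality constraint g(x) ≤ b, these conditions require stationarity of the Lagrangian, primal feasibility, a non-negative multiplier, and complementary slackness. A minimal numerical illustration on a small hypothetical one-variable problem (an assumption for illustration, not one of the problems below):

```python
# Kuhn-Tucker conditions illustrated on a small hypothetical problem:
#   maximize f(x) = 10x - x^2   subject to   g(x) = x <= 4.
# The unconstrained maximum (x = 5) is infeasible, so the constraint
# binds at the optimum x* = 4 with multiplier lam = f'(x*) = 2.
x_star, lam, b = 4.0, 2.0, 4.0

df = 10 - 2 * x_star          # f'(x*)
dg = 1.0                      # g'(x*)

# 1. Stationarity: f'(x*) - lam * g'(x*) = 0
assert df - lam * dg == 0
# 2. Primal feasibility: g(x*) <= b
assert x_star <= b
# 3. Dual feasibility: lam >= 0
assert lam >= 0
# 4. Complementary slackness: lam * (g(x*) - b) = 0
assert lam * (x_star - b) == 0
print("All four Kuhn-Tucker conditions hold at x* = 4")
```

Had the constraint not been binding (e.g. x ≤ 6), complementary slackness would force λ = 0 and the conditions would reduce to the ordinary stationarity condition f'(x*) = 0.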
Problem 2
Find the optimum solution of the following constrained multivariable problem.
Minimize Z = X1² + (X2 + 1)² + (X3 – 1)²
Subject to the constraint:
X1 + 5X2 -3X3 = 6
Problem 3
Obtain the necessary conditions for the optimum solution of the following problem.
Minimize f(X1, X2) = 3e^(2X1+1) + 2e^(X2+5)
subject to the constraint:
g(X1, X2) = X1 + X2 -7 = 0
Problem 4
Solve the following problem by using the method of Lagrangian multipliers.
Minimize Z = X1² + X2² + X3²
Subject to the constraints:
X1 + X2 + 3X3 = 2
5X1 + 2X2 + X3 = 5