
BAHIRDAR UNIVERSITY

INSTITUTE OF TECHNOLOGY
FACULTY OF MECHANICAL AND INDUSTRIAL ENGINEERING
INDUSTRIAL ENGINEERING PROGRAM

OPERATIONS RESEARCH
Compiled Notes

Amare Matebu Kassa (Dr.-Ing)

November, 2015
Table of Contents

CHAPTER 1
INTRODUCTION
1.1 History of Operations Research
1.2 Definitions of Operations Research
1.3 Applications of Operations Research
1.4 Model and Modeling in Operations Research
1.4.1 Different Models
1.4.2 Design Optimization

CHAPTER 2
LINEAR PROGRAMMING
2.1 Linear programming Model
2.2 Solving linear programming
2.2.1 Graphical Method
2.2.2 Simplex Method
2.2.3 Big M Method
2.3 Duality and Sensitivity
2.3.1 Primal and Duality
2.3.2 Economic Interpretation of the Dual Problem
2.3.3 Sensitivity Analysis

CHAPTER 3
INTEGER PROGRAMMING
3.1 Introduction
3.2 Integer Programming Models
3.3 Methods of solving Integer Programming
3.3.1 Gomory's All Integer Programming Algorithm
3.3.2 Branch and Bound Method

CHAPTER 4
TRANSPORTATION AND ASSIGNMENT MODEL
4.1 Introduction
4.2 Solution Mechanism for Transportation Problems
4.3 Special cases in transportation
4.4 Assignment problem
4.4.1 Solving Methods for Assignment Problem

CHAPTER 5
DECISION ANALYSIS
5.1 Introduction
5.2 Types of Decision Making Environments
5.2.1 Type 1 Decision Making under Certainty
5.2.2 Type 2 Decision Making under Risk
5.2.3 Type 3 Decision Making under Uncertainty
5.3 Decision Analysis with Additional Information - Bayesian Analysis

CHAPTER 6
GAME PROGRAMMING
6.1 Introduction
6.2 Two-Persons-Zero-Sum Game
6.3 Pure Strategies
6.4 Mixed Strategy
6.4.1 Rule of Dominance
6.5 Methods of Solving Mixed strategies
6.5.1 Algebraic Method
6.5.2 Graphical Method
6.5.3 Linear programming (LP) Method

CHAPTER 7
MARKOV ANALYSIS
7.1 Introduction
7.2 State and Transition Probabilities
7.3 Multi-Period Transition Probabilities and the Transition Matrix
7.3.1 Steady-State Probability (Future State prediction)
7.4 Special Cases in Markov Chains

CHAPTER 8
QUEUING ANALYSIS
8.1 Introduction
8.2 Application of Queuing Theory
8.3 Characteristics of Queuing System
8.3.1 Arrival Characteristics
8.3.2 Queue discipline
8.4 Queuing Models

CHAPTER 9
SIMULATION
9.1 Introduction
9.2 Definition of simulation
9.3 Steps of simulation process
9.4 Advantages and disadvantages of simulation
9.5 Types of Models
9.6 Types of Simulation
9.7 Stochastic simulation and random numbers
9.7.1 Monte Carlo Simulation
9.7.2 Random Number Generation
9.7.3 Computer Generator

CHAPTER 10
DYNAMIC PROGRAMMING & NON-LINEAR PROGRAMMING
10.1 Introduction
10.2 General Algorithms for DP
10.3 Non-Linear Programming Problems (NLPPs)
10.3.1 Unconstrained Optimization Problem
10.3.2 Constrained NLPPs with Equality
10.3.3 Constrained NLPPs with Inequality


CHAPTER 1
INTRODUCTION

1.1 History of Operations Research

Operations research is a relatively new discipline. It is generally agreed that operations
research (OR) came into existence as a discipline during World War II, when there was a
critical need to manage scarce resources. During the war, British military leaders asked
scientists and engineers to analyze several military problems:
 Deployment of radar
 Management of convoy, bombing, antisubmarine, and mining operations.
The result was called Military Operations Research, later Operations Research. Operations
research originated in Great Britain during World War II to bring mathematical or
quantitative approaches to bear on military operations. It started in the UK and was further
developed in the USA. Teams of scientists were established to study the strategic and tactical
problems involved in military operations. The objective was to find the most effective
utilization of limited military resources by the use of quantitative techniques.

There are three important factors behind the rapid development in the use of OR approach:
1. The economic and industrial boom after World War II resulted in continuous
mechanization and automation.
2. Many Operations Researchers continued their research after World War II.
3. Analytic power was made available by high-speed computers.

During the 1950s, there was substantial progress in the application of OR techniques to civilian
activities, along with great interest in professional development and education in OR. In
1948, an OR club was formed in England; it later changed its name to the Operational
Research Society of the UK.

At its founding, the primary applications of OR were in support of military operations, such as
radar deployment and antisubmarine warfare. Following the war, numerous peacetime
applications emerged, leading to the use of OR and management science in many industries
and occupations. In 1952, the Operations Research Society of America (ORSA) was founded,
and by the 1960s OR groups had been formed in several organizations.
This analytical approach is known by several different names:
 Operations Research (OR)
 Operational Research (UK)
 Decision Sciences (DS)
 Systems Science
 Mathematical Modeling
 Industrial Engineering
 Critical Systems strategic thinking
 Success Science (S), and Systems Analysis and Design

Because of OR's multi-disciplinary character and application in varied fields, it has a bright


future, provided people devoted to OR study can help meet the needs of society. However, in
order to make the future of OR brighter, its specialists have to make good use of the
opportunities available to them.

1.2 Definitions of Operations Research

Operations research is the application of scientific methods, techniques and tools to problems
involving the operations of systems so as to provide those in control of the operations with optimum
solutions to the problems. OR is the application of the scientific method to the study of the operations
of large, complex organizations or activities. OR is the application of the scientific method to the
analysis and solution of managerial decision problems.

OR is the application of the methods of science to complex problems in the direction and management
of large systems of men, machines, materials, and money in industry, business, government and
defense (OR Society, UK).

Operations research is essentially a collection of mathematical techniques and tools which in


conjunction with a systems approach is applied to solve practical decision problems of an economic or
engineering nature (George, 1978).

In summary, Operations research incorporates the following issues:


 Application of scientific method
 Study of large and complex systems



 Analysis of managerial problems
 Finding optimal solution

1.3 Applications of Operations Research

Operations research can be applied in any business organization (both profit-making and
non-profit-making organizations). Some of the application areas are listed below.
 Manufacturing
- Aggregate production planning, assembly line, blending, inventory control
- Employment, training, layoffs and quality control
- Transportation, planning and scheduling
 Facilities planning
- Location and size of warehouse or new plant
- Logistics, layout and engineering design
- Transportation, planning and scheduling
 Finance and accounting
- Capital budgeting, cost allocation and control, and financial planning
 Marketing
- Sales effort allocation and assignment
- Predicting customer loyalty
 Purchasing, procurement and Exploration
- Optimal buying and reordering with or without price quantity discount



Some of the successful applications of operations research are listed in Table 1.1 below.

Company | Year | Problem | Techniques Used | Annual Savings
Hewlett Packard | 1998 | Designing buffers into production line | Queuing models | $280 million
Taco Bell | 1998 | Employee scheduling | IP, Forecasting, Simulation | $13 million
Procter & Gamble | 1997 | Redesign production & distribution system | Transportation models | $200 million
Delta Airlines | 1994 | Assigning planes to routes | Integer Programming | $100 million
AT&T | 1993 | Call center design | Queuing models, Simulation | $750 million
Yellow Freight Systems, Inc. | 1992 | Design trucking network | Network models, Forecasting, Simulation | $17.3 million
San Francisco Police Dept. | 1989 | Patrol scheduling | Linear Programming | $11 million
Bethlehem Steel | 1989 | Design an ingot mold stripper | Integer Programming | $8 million
North American Van Lines | 1988 | Assigning loads to drivers | Network modeling | $2.5 million
Citgo Petroleum | 1987 | Refinery operations & distribution | Linear Programming, Forecasting | $70 million
United Airlines | 1986 | Scheduling reservation personnel | LP, Queuing, Forecasting | $6 million
Dairyman's Creamery | 1985 | Optimal production levels | Linear Programming | $48,000
Phillips Petroleum | 1983 | Equipment replacement | Network modeling | $90,000

Operations research includes the following basic characteristics.


 Managerial Decision Making
 Scientific approach
 System approach
 Mathematical models
 Computers

Operations research provides rational basis for decision making:


 Solves the type of complex problems that turn up in the modern business environment
 Builds mathematical and computer models of organizational systems composed of
people, machines, and procedures
 Uses analytical and numerical techniques to make predictions and decisions based on
these models
Why must we learn the decision-making process?
 Organizations are becoming more complex.
 Environments are changing so rapidly that past practices are no longer adequate.



 The costs of making bad decisions have increased.

Optimization is everywhere.
It is embedded in language, and part of the way we think.
 firms want to maximize value to shareholders
 people want to make the best choices
 We want the highest quality at the lowest price
 When playing games, we want the best strategy
 When we have too much to do, we want to optimize the use of our time.

Mathematical optimization is nearly everywhere:


 Agriculture
 Military
 Production Management
 Financial Management
 Marketing Management
 Personnel Management
 Health care
 Transportation
 Construction
 Telecommunications, etc.

1.4 Model and Modeling in Operations Research

Definition: A model is an abstract description of a decision situation. A model is a


representation of the reality that captures "the essence" of reality. It is a schematic description
of a system, theory, or phenomenon that accounts for its known or inferred properties and
may be used for further study of its characteristics. A model is never a one-to-one image
of reality. The main characteristic of OR is that it tries to quantify aspects of decision problems
on the basis of models.



Figure 1.1 Levels of abstraction in the model development

The scientific approach to decision making requires the use of one or more mathematical
models. A mathematical model is a mathematical representation of the actual situation that
may be used to make better decisions or clarify the situation.

Benefits of Modeling
 Economy - it is often less costly to analyze decision problems using models.
 Timeliness - models often deliver needed information more quickly than their real-world
counterparts.
 Feasibility - models can be used to do things that would be impossible.
 Models give us insight and understanding that improves decision making.

Figure 1.2 General framework of OR application



Table 1.2 Methods of Operations research

1.4.1 Different Models

Linear Programming models


 A single objective function, representing either a profit to be maximized or a cost to
be minimized, and a set of constraints that circumscribe the decision variables.
 The objective function and constraints all are linear functions of the decision
variables.
 Software has been developed that is capable of solving problems containing millions
of variables and tens of thousands of constraints.

Nonlinear Programming models


 The objective and/or any constraint is nonlinear.
 In general, much more difficult to solve than linear.
 Most (if not all) real world applications require a nonlinear model.
 In order to make the problems tractable, we often approximate using linear functions.

Dynamic Programming
 A DP model describes a process in terms of states, decisions, transitions and returns.
 The process begins in some initial state where a decision is made.



Stochastic models
 In many practical situations the attributes of a system randomly change over time.
 Examples include the number of customers in a checkout line, congestion on a
highway, the number of items in a warehouse, and the price of a financial security.

Markov Chains
 A stochastic process that can be observed at regular intervals such as every day or
every week can be described by a matrix which gives the probabilities of moving to
each state from every other state in one time interval.

Simulation
 It is often difficult to obtain a closed form expression for the behavior of a stochastic
system.
 Simulation is a very general technique for estimating statistical measures of complex
systems.
 A system is modeled as if the random variables were known.

Network Flow Programming


 A special case of the more general linear program.
 Includes such problems as the transportation problem, the assignment problem, the
shortest path problem, the maximum flow problem, and the minimum cost flow
problem.



Figure 1.3 The OR problem solving Schema

Steps for solving OR problem

 Formulate the problem to be solved


 Build a mathematical model
 Select appropriate tool necessary to solve the problem
 If necessary, simplify by introducing appropriate assumptions
 Perform the analysis
 Implement findings

Example: 1
A crop farmer has to decide which crops to grow yearly on 6 hectares of land. There are two
crops: corn and potato. The yearly return per hectare of corn is 20,000 Birr and that of potato
is 10,000 Birr. Every hectare grown with corn requires 20 hours of labor and every hectare of
potato requires 40 hours. The available number of labor hours is 200. Environmental
requirements impose that at most two-thirds of the available surface can be used for growing
corn, and crop rotation requirements tell us that only half of the area can be occupied by
potato. The question is how much corn and potato to grow, taking the restrictions into
account, such that the return is as high as possible.



Solution:
Define decision variable
Let X1 and X2 be the number of hectares of corn and potato grown, respectively. The
following decision model represents the problem.
Max {w = 20,000X1 + 10,000X2} objective function
Subject to:
20X1 + 40X2 ≤ 200 (labor)
X1 ≤ 4 (Environment) constraints
X2 ≤ 3 (Crop rotation)
X1 + X2 ≤ 6 (Land)
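
As a quick numerical check of this formulation, the small Python sketch below solves the same model with SciPy's linprog routine (this assumes SciPy is available; since linprog minimizes, the return coefficients are negated):

    from scipy.optimize import linprog

    c = [-20000, -10000]              # maximize 20000*X1 + 10000*X2 (linprog minimizes)
    A_ub = [[20, 40],                 # labor:          20X1 + 40X2 <= 200
            [1, 0],                   # environment:     X1 <= 4
            [0, 1],                   # crop rotation:   X2 <= 3
            [1, 1]]                   # land:            X1 + X2 <= 6
    b_ub = [200, 4, 3, 6]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
    print(res.x, -res.fun)            # expected: about X1 = 4, X2 = 2, return = 100,000 Birr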

Mathematical Modeling and Optimization Building Blocks


 data (Actual Situation and Requirements, Control Parameters)
e.g., number of sites, unit capacities, demand forecasts, available resources
 model (variables, constraints, objective function)
e.g., how much to produce, how much to ship, (decision variables, unknowns)
 optimization algorithm and solver
e.g., simplex algorithm, B&B algorithm, outer approximation...
 optimal solution (Suggested Values of the Variables)
e.g. production plan, unit-connectivity, feed concentrations.

1.4.2 Design Optimization


Optimization is a component of the design process. The design of systems can be formulated as
problems of optimization where a measure of performance is to be optimized while satisfying
all the constraints.

 Design variables – a set of parameters that describes the system (dimensions,


material, load, …)
 Design constraints – all systems are designed to perform within a given set of
constraints. The constraints must be influenced by the design variables (max. or min.
values of design variables).
 Objective function – a criterion is needed to judge whether or not a given design is
better than another (cost, profit, weight, deflection, stress, etc).
 The formulation of an optimization problem is extremely important; care should
always be exercised in defining and developing expressions for the constraints.
 The optimum solution will only be as good as the formulation.

Problem Formulation (Design of a two-bar structure)

 The problem is to design a two-member bracket to support a force W without


structural failure. Since the bracket will be produced in large quantities, the design
objective is to minimize its mass while also satisfying certain fabrication and space
limitation.



Example – Design of a Beer Can

Design a beer can to hold at least the specified amount of beer and meet other design
requirements. The cans will be produced in billions, so it is desirable to minimize the cost of
manufacturing. Since the cost can be related directly to the surface area of the sheet metal
used, it is reasonable to minimize the sheet metal required to fabricate the can.

Fabrication, handling, aesthetic, shipping considerations and customer needs impose the
following restrictions on the size of the can:
1. The diameter of the can should be no more than 8 cm. Also, it should not be less than
3.5 cm.
2. The height of the can should be no more than 18 cm and no less than 8 cm.
3. The can is required to hold at least 400 ml of fluid.



Design variables
D = diameter of the can (cm)
H = height of the can (cm)
Objective function
The design objective is to minimize the surface area of the sheet metal:

Minimize S = πDH + (π/2)D²    (Non-linear)

The constraints must be formulated in terms of the design variables.
The first constraint is that the can must hold at least 400 ml of fluid:

(π/4)D²H ≥ 400    (Non-linear)

The other constraints on the size of the can are:

3.5 ≤ D ≤ 8 and 8 ≤ H ≤ 18    (Linear)

The problem has two independent design variables and five explicit constraints. The objective
function and the first constraint are nonlinear in the design variables, whereas the remaining
constraints are linear.
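
A minimal Python sketch of this design problem is given below, assuming SciPy is available. The surface-area objective and the volume constraint are the nonlinear expressions above, and the size limits enter as simple bounds on D and H:

    import numpy as np
    from scipy.optimize import minimize

    def area(x):                        # sheet metal: side wall + two circular ends
        D, H = x
        return np.pi * D * H + np.pi * D**2 / 2

    volume = {"type": "ineq",           # (pi/4) * D^2 * H >= 400, written as >= 0
              "fun": lambda x: np.pi * x[0]**2 * x[1] / 4 - 400}

    bounds = [(3.5, 8.0), (8.0, 18.0)]  # 3.5 <= D <= 8 and 8 <= H <= 18
    res = minimize(area, x0=[6.0, 12.0], bounds=bounds, constraints=[volume], method="SLSQP")
    print(res.x, res.fun)               # roughly D = 8 cm, H = 8 cm, area about 300 cm^2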



CHAPTER 2
LINEAR PROGRAMMING

The development of linear programming has been ranked among the most important
scientific advances of the mid-20th century, and we must agree with this assessment. Its
impact since just 1950 has been extraordinary. Today it is a standard tool that has saved many
thousands or millions of dollars for most companies or businesses of even moderate size in
the various industrialized countries of the world; and its use in other sectors of society has
been spreading rapidly. A major proportion of all scientific computation on computers is
devoted to the use of linear programming. Dozens of textbooks have been written about
linear programming, and published articles describing important applications now number in
the hundreds.

What is the nature of this remarkable tool, and what kinds of problems does it address? You
will gain insight into this topic as you work through subsequent examples. However, a verbal
summary may help provide perspective. Briefly, the most common type of application
involves the general problem of allocating limited resources among competing activities in a
best possible (i.e., optimal) way. More precisely, this problem involves selecting the level of
certain activities that compete for scarce resources that are necessary to perform those
activities. The choice of activity levels then dictates how much of each resource will be
consumed by each activity. The variety of situations to which this description applies is
diverse, indeed, ranging from the allocation of production facilities to products to the
allocation of national resources to domestic needs, from portfolio selection to the selection of
shipping patterns, from agricultural planning to the design of radiation therapy, and so on.
However, the one common ingredient in each of these situations is the necessity for
allocating resources to activities by choosing the levels of those activities.

Linear programming uses a mathematical model to describe the problem of concern. The
adjective linear means that all the mathematical functions in this model are required to be
linear functions. The word programming does not refer here to computer programming;
rather, it is essentially a synonym for planning. Thus, linear programming involves the
planning of activities to obtain an optimal result, i.e., a result that reaches the specified goal
best (according to the mathematical model) among all feasible alternatives. Although
allocating resources to activities is the most common type of application, linear programming
has numerous other important applications as well. In fact, any problem whose mathematical
model fits the very general format for the linear programming model is a linear programming
problem. Furthermore, a remarkably efficient solution procedure, called the simplex method,
is available for solving linear programming problems of even enormous size. These are some
of the reasons for the tremendous impact of linear programming in recent decades.

Linear Programming (LP) is a mathematical procedure for determining optimal allocation of


scarce resources. LP deals with a class of programming problems where both the objective
function to be optimized is linear and all relations among the variables corresponding to
resources are linear. Any LP problem consists of an objective function and a set of
constraints. In most cases, constraints come from the environment in which you work to
achieve your objective.

 Some of the major application areas to which LP can be applied are:


 Work scheduling
 Production planning & Production process
 Capital budgeting
 Financial planning
 Blending (e.g. Oil refinery management)
 Farm planning
 Distribution
 Multi-period decision problems
 Inventory model
 Financial models

Linear programming is used for facility location decisions. Linear programming is also used
as a “what-if” tool, and it can be used to find optimal (shortest) paths between locations in a network.
 LP-based techniques can be used to locate
 manufacturing facilities,
 distribution centres,
 warehouse/storage facilities etc.



taking into consideration factors such as
 facility/distribution capacities,
 customer demand,
 budget constraints,
 Quality of service to customers etc.

2.1 Linear programming Model

 Objective: the goal of an LP model is maximization or minimization


 Decision variables: amounts of either inputs or outputs
 Feasible solution space: the set of all feasible combinations of decision variables as
defined by the constraints
 Constraints: limitations that restrict the available alternatives
 Parameters: numerical values
 Linearity: the impact of decision variables is linear in constraints and objective
function.
 Divisibility: non-integer values of decision variables are acceptable.
 Certainty: values of parameters are known and constant.
 Non-negativity: negative values of decision variables are unacceptable.

Formulation of a LP Model
1. Identify the decision variables and express them in algebraic symbols. (like X1, X2,
etc..)
2. Identify all the constraints or limitations and express them as equations (scarce resources
like time, labor, raw materials, etc.).
3. Identify the objective function and express it as a linear function (what the decision maker
wants to achieve).

Requirements of a LP Problem

1. LP problems seek to maximize or minimize some quantity (usually profit or cost)


expressed as an objective function.



2. The presence of restrictions, or constraints, limits the degree to which we can
pursue our objective.
Linear programming
1. LP model formulation
 An LP is one of the bedrocks of OR
 It is a tool for solving optimization problems
2. Any linear program consists of 4 parts:
 a set of decision variables,
 the objective function,
 and a set of constraints
 Sign Restrictions

General Mathematical Formulation of LP


Optimize (Maximize or Minimize)
Z = c1 x1 + c2 x2+…+cn xn
Subject to:
a11 x1 + a12 x2 +…+ a1n xn (≤, =, ≥ ) b1
a21 x1 + a22 x2 +…+ a2n xn (≤, =, ≥ ) b2
. . . . . . .
. . . . . . .
am1 x1 + am2 x2 +…+ amn xn (≤, =, ≥ ) bm
and x1, x2, …xn ≥ 0

Example 1

The KADISCO Company owns a small paint factory that produces both interior and exterior
house paints for wholesale distribution. Two basic raw materials, A and B, are used to
manufacture the paints. The maximum availability of A is 6 tons a day; that of B is 8 tons a
day. The daily requirements of the raw materials per ton of interior and exterior paints are
summarized in the following table.
Tons of Raw Material per Ton of Paint
Exterior Interior Maximum Availability (tons)
Raw Material A 1 2 6
Raw Material B 2 1 8



A market survey has established that the daily demand for the interior paint cannot exceed
that of exterior paint by more than 1 ton. The survey also showed that the maximum demand
for the interior paint is limited to 2 tons daily.

The wholesale price is $3,000 per ton of exterior paint and $2,000 per ton of interior paint. How
much interior and exterior paint should the company produce daily to maximize gross
income?

Define
XE = Tons of exterior paint to be produced
XI = Tons of interior paint to be produced
Maximize Z = 3000XE + 2000XI
Subject to:
XE + 2XI ≤ 6 (1) (availability of raw material A)
2XE + XI ≤ 8 (2) (availability of raw material B)
-XE + XI ≤ 1 (3) (Restriction in production)
XI ≤ 2 (4) (demand restriction)
XE, XI ≥ 0
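
As a check, the sketch below solves the KADISCO model with SciPy's linprog (SciPy assumed available; the income is negated because linprog minimizes):

    from scipy.optimize import linprog

    c = [-3000, -2000]                 # maximize 3000*XE + 2000*XI (linprog minimizes)
    A_ub = [[1, 2],                    # raw material A:   XE + 2XI <= 6
            [2, 1],                    # raw material B:  2XE +  XI <= 8
            [-1, 1],                   # demand gap:      -XE +  XI <= 1
            [0, 1]]                    # interior demand:         XI <= 2
    b_ub = [6, 8, 1, 2]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    print(res.x, -res.fun)             # about XE = 3.33, XI = 1.33 tons, income about 12,666.7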

Example 2. Production-Mix Example



Let:
X1 = number of units of XJ201 produced
X2 = number of units of XM897 produced
X3 = number of units of TR29 produced
X4 = number of units of BR788 produced
Maximize profit = 9X1 + 12X2 + 15X3 + 11X4
subject to .5X1 + 1.5X2 +1.5X3 +1X4 ≤ 1,500 hours of wiring
3X1 + 1X2 + 2X3 + 3X4 ≤ 2,350 hours of drilling
2X1 + 4X2 + 1X3 + 2X4 ≤ 2,600 hours of assembly
.5X1 + 1X2 + .5X3 + .5X4 ≤ 1,200 hours of inspection
X1 ≥ 150 units of XJ201
X2 ≥ 100 units of XM897
X3 ≥ 300 units of TR29
X4 ≥ 400 units of BR788

Example 3: Advertisement
Dorian makes luxury cars and jeeps for high-income men and women. It wishes to advertise
with 1 minute spots in comedy shows and football games. Each comedy spot costs $50 and is
seen by 7M high-income women and 2M high-income men. Each football spot costs $100
and is seen by 2M high-income women and 12M high-income men. How can Dorian reach
28M high-income women and 24M high-income men at the least cost?

Solution:
The decision variables:
X1 = the number of comedy spots
X2 = the number of football spots

Min Z = 50X1 + 100X2


S.T 7X1 + 2X2 ≥ 28
2X1 + 12 X2 ≥ 24
X1, X2 ≥ 0



Example 4: Post Office
A PO requires different numbers of employees on different days of the week. Union rules
state each employee must work 5 consecutive days and then receive two days off. Find the
minimum number of employees needed.

Monday Tuesday Wednesday Thursday Friday Saturday Sunday

Staff 17 13 15 19 14 16 11
Needed

Solution:

Let the decision variables be xi (the number of employees starting on day i). An employee who
starts on day i works days i through i + 4, so the objective is to minimize x1 + x2 + … + x7
subject to meeting the staffing requirement on each day, as sketched below.
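
A sketch of this standard formulation is given below, assuming SciPy is available and assuming each employee works the five consecutive days starting on his or her start day; integrality is relaxed, so the result only gives a lower bound on the number of employees:

    from scipy.optimize import linprog

    need = [17, 13, 15, 19, 14, 16, 11]           # Mon .. Sun staff requirements
    c = [1] * 7                                    # minimize x1 + ... + x7

    # Day j is covered by everyone who started within the previous five days
    # (including day j itself); linprog wants <=, so the >= rows are negated.
    A_ub = [[-1 if (j - i) % 7 <= 4 else 0 for i in range(7)] for j in range(7)]
    b_ub = [-n for n in need]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    print(res.x, res.fun)    # the LP relaxation is fractional (about 22.3), so at least 23 employees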

Example 5: A Diet Problem


Suppose the only foods available in your local store are potatoes and steak. The decision
about how much of each food to buy is to be made entirely on dietary and economic
considerations. We have the nutritional and cost information in the following table:

Solution:
The problem is to find a diet (a choice of the numbers of units of the two foods) that meets all
minimum nutritional requirements at minimal cost. Formulate the problem as an LP.



Minimize {Z = $25X1 + $50X2}

3X1 + X2 ≥8
4X1 + 3X2 ≥ 19
X1 + 3X2 ≥7
X1 , X2 ≥0

Example 6: Blending Problem

Bryant's Pizza, Inc. is a producer of frozen pizza products. The company makes a net income
of $1.00 for each regular pizza and $1.50 for each deluxe pizza produced. The firm currently
has 150 pounds of dough mix and 50 pounds of topping mix. Each regular pizza uses 1 pound
of dough mix and 4 ounces (16 ounces= 1 pound) of topping mix. Each deluxe pizza uses 1
pound of dough mix and 8 ounces of topping mix. Based on the past demand per week,
Bryant can sell at least 50 regular pizzas and at least 25 deluxe pizzas. The problem is to
determine the number of regular and deluxe pizzas the company should make to maximize
net income. Formulate this problem as an LP problem.

Solution:
A Blending Problem
Let X1 and X2 be the number of regular and deluxe pizza, then the LP formulation is:

Maximize: { Z = X1 + 1.5 X2 }
Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≥ 50
X2 ≥ 25
X1 , X2 ≥ 0

2.2 Solving linear programming

2.2.1 Graphical Method


A. Corner Point Method
1. Define the problem mathematically
2. Graph the constraints by treating each inequality as an equality



3. Locate the feasible region and the corner points
4. Find out the value of the objective function at these points
5. Find out the optimal solution and the optimal value of the objective function, if it exists

B. Iso-Profit or Iso-Cost Line Method


1. Define the problem mathematically
2. Graph the constraints by treating each inequality as an equality
3. Locate the feasible region and the corner points
4. Draw out a line having the slope of Objective Function Equation (this is called Iso-
Cost / Profit Line in Minimization and Maximization problems respectively)
somewhere in the middle of the feasible region.
5. Move this line away from origin (in case of Maximization) or towards Origin (in case
of Minimization) until it touches the extreme point of the feasible region
6. If a single point is encountered, that point is optimal and its coordinates are the
solution. If the iso-profit/cost line coincides with a constraint line at the extreme, then
this is a case of multiple optimum solutions.

Example 1. The product-mix problem at Shader Electronics


 Two products
1. Shader X-pod, a portable music player
2. Shader BlueBerry, an internet-connected color telephone
 Determine the mix of products that will produce the maximum profit

Decision Variables:
X1= number of X-pods to be produced
X2= number of BlueBerrys to be produced



Objective Function:
Maximize Profit = $7X1 + $5X2

There are three types of constraints


 Upper limits where the amount used is ≤ the amount of a resource
 Lower limits where the amount used is ≥ the amount of the resource
 Equalities where the amount used is = the amount of the resource
Constraints:
4X1 + 3X2 ≤ 240 (hours of electronic time)
2X1 + 1X2 ≤ 100 (hours of assembly time)

Graphical Solution
 Can be used when there are two decision variables
1. Plot the constraint equations at their limits by converting each equation to an
equality
2. Identify the feasible solution space
3. Create an iso-profit line based on the objective function
4. Move this line outwards until the optimal point is identified



Iso-Profit Line Solution Method
Choose a possible value for the objective function
$210 = 7X1 + 5X2
Solve for the axis intercepts of the function and plot the line
X2 = 42, X1 = 30



Corner-Point Method



 The optimal value will always be at a corner point
 Find the objective function value at each corner point and choose the one with the
highest profit
Point 1: (X1 = 0, X2 = 0) Profit $7(0) + $5(0) = $0
Point 2 : (X1 = 0, X2 = 80) Profit $7(0) + $5(80) = $400
Point 4 : (X1 = 50, X2 = 0) Profit $7(50) + $5(0) = $350
Solve for the intersection of two constraints
4X1 + 3X2 ≤ 240 (electronics time)
2X1 + 1X2 ≤ 100 (assembly time)
4X1 + 3X2 = 240
- 4X1 - 2X2 = -200
+ 1X2 = 40

4X1+3(40) = 240
4X1+120 = 240
X1=30
Having found all four corner points, evaluate the objective function at each and choose the
one with the highest profit:
Point 1: (X1 = 0, X2 = 0) Profit $7(0) + $5(0) = $0
Point 2: (X1 = 0, X2 = 80) Profit $7(0) + $5(80) = $400
Point 4: (X1 = 50, X2 = 0) Profit $7(50) + $5(0) = $350
Point 3: (X1 = 30, X2 = 40) Profit $7(30) + $5(40) = $410
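
The same calculation can be checked numerically; the sketch below (assuming NumPy is available) solves the two binding constraints for their intersection and evaluates the profit at every corner point:

    import numpy as np

    A = np.array([[4.0, 3.0],          # 4X1 + 3X2 = 240 (electronics time)
                  [2.0, 1.0]])         # 2X1 + 1X2 = 100 (assembly time)
    b = np.array([240.0, 100.0])
    point3 = np.linalg.solve(A, b)     # intersection -> X1 = 30, X2 = 40

    profit = lambda x1, x2: 7 * x1 + 5 * x2
    for x1, x2 in [(0, 0), (0, 80), (50, 0), tuple(point3)]:
        print((x1, x2), profit(x1, x2))   # the best corner is (30, 40) with profit $410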

Solving Minimization Problems

 Formulated and solved in much the same way as maximization problems


 In the graphical approach an iso-cost line is used
 The objective is to move the iso-cost line inwards until it reaches the lowest cost
corner point

Example 2: Minimization:
X1 = number of tons of black-and-white picture chemical produced
X2 = number of tons of color picture chemical produced



Minimize total cost = 2,500X1 + 3,000X2
Subject to:
X1≥ 30 tons of black-and-white chemical
X2≥ 20 tons of color chemical
X1 + X2 ≥ 60 tons total
X1, X2 ≥ 0 (non-negativity requirements)
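
A quick numerical check of this minimization (assuming SciPy is available) is sketched below; the ≥ constraint is rewritten in ≤ form by negating both sides, and the lower limits on X1 and X2 enter as bounds:

    from scipy.optimize import linprog

    c = [2500, 3000]                      # minimize total cost
    A_ub = [[-1, -1]]                     # X1 + X2 >= 60  rewritten as  -X1 - X2 <= -60
    b_ub = [-60]
    bounds = [(30, None), (20, None)]     # X1 >= 30 tons, X2 >= 20 tons

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(res.x, res.fun)                 # expected: X1 = 40, X2 = 20, cost = 160,000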

2.2.2 Simplex Method

Realistic linear programming problems often have several decision variables and many
constraints. Such problems cannot be solved graphically; instead, an algorithm such as the
simplex procedure is used. The simplex method is thus the most effective analytical method of
solving linear programming problems.

The simplex method is an ITERATIVE or “step by step” method or repetitive algebraic


approach that moves automatically from one basic feasible solution to another basic feasible
solution, improving the objective each time, until the optimal solution is reached.

Objective Function
Optimize (Max. or Min.)  Z = Σj cj xj,  j = 1, 2, …, n
Subject to the constraints:
Σj aij xj (≤, =, ≥) bi,  i = 1, 2, …, m
and the non-negativity restrictions:
xj ≥ 0,  j = 1, 2, …, n

General Mathematical Formulation of LP


Optimize (Maximize or Minimize)
Z = c1 x1 + c2 x2+…+cn xn
Subject to:
a11 x1 + a12 x2 +…+ a1n xn (≤, =, ≥) b1
a21 x1 + a22 x2 +…+ a2n xn (≤, =, ≥) b2
.
.
am1 x1 + am2 x2 +…+ amn xn (≤, =, ≥ ) bm

and x1, x2, …xn ≥ 0

1. The standard form of LP problem


i. All the constraints should be expressed as equations by adding slack or surplus and/or
artificial variables.
ii. The right hand side of each constraint should be made non-negative; if it is not, this
should be done by multiplying both sides of the resulting constraint by -1

Example:
2X1 + 3X2 - 4X3 + X4 ≤ -50 becomes
-2X1 - 3X2 + 4X3 - X4 ≥ 50 (multiplying both sides by -1 reverses the direction of the inequality)
iii. Three types of additional variables, namely
a. Slack Variable (S)
b. Surplus variable (-S), and
c. Artificial variables (A) are added in the given LP problem to convert it into
standard form for two reasons:
 To convert an inequality into equation to have a standard form of an LP model,
and



 To get an initial feasible solution represented by the columns of an identity
matrix.

The summary of the extra variables needed in the given LP problem to convert it into standard
form is given below.

Type of constraint | Extra variables to be added | Coefficient in objective (Max Z) | Coefficient in objective (Min Z) | Present in the initial solution mix?
≤ | Add only a slack variable | 0 | 0 | Yes
≥ | Subtract a surplus variable, and | 0 | 0 | No
  | add an artificial variable | -M | +M | Yes
= | Add an artificial variable | -M | +M | Yes

Some Definitions
 Solution: pertains to the values of the decision variables that satisfy the constraints
 Feasible solution: Any solution that also satisfies the non-negativity restrictions
 Basic Solution: For a set of m simultaneous equations in n unknowns (n>m), a
solution obtained by setting n- m of the variables equal to zero and solving the m
equation in m unknowns is called basic solution.
 Basic Feasible solution: A feasible solution that is also basic
 Optimum Feasible solution: Any basic feasible solution which optimizes the objective
function
 Degenerate Solution: when one or more basic variables become equal to zero.

2. Test of optimality
i. If all Zj - Cj ≥ 0, then the basic feasible solution is optimal (maximization case)
ii. If all Zj - Cj ≤ 0, then the basic feasible solution is optimal (minimization case)

3. Variable to enter the basis


i. A variable that has the highest negative value in the Zj-Cj row (Maximization case)
ii. A variable that has the most positive value in the Zj-Cj row(Minimization case)



4. Variable to leave the basis
The leaving variable is the basic variable in the row with the minimum non-negative
replacement ratio (this rule applies to both the maximization and minimization cases).

Steps in simplex methods:


Step 1: Formulate LP Model
Step 2: Standardize the problem
Step 3: Obtain the initial simplex tableau
Step 4: check optimality (optimality test)
Step 5: Choose the “incoming” or “entering” variables
Step 6: Choose the “leaving “or “outgoing” variable
Step 7: Repeat steps 4-6 until an optimum basic feasible solution is obtained, i.e., until all
entries in the Cj – Zj row are either negative or zero (for a maximization problem).

Example: 1
Solve the problem using the simplex approach
Max. Z=300x1 +250x2
Subject to:
2x1 + x2 ≤ 40 (Labor)
x1 + 3x2 ≤ 45 (Machine)
x1 ≤ 12 (Marketing)
x1, x2 ≥ 0
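
Before working through the tableau iterations, it can help to know the answer they should reach. The sketch below (SciPy assumed available) is a quick numerical check, not the simplex tableau itself:

    from scipy.optimize import linprog

    res = linprog([-300, -250],                      # maximize 300x1 + 250x2 (negated)
                  A_ub=[[2, 1], [1, 3], [1, 0]],     # labor, machine, marketing rows
                  b_ub=[40, 45, 12],
                  method="highs")
    print(res.x, -res.fun)                           # expected: x1 = 12, x2 = 11, Z = 6,350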

2.2.3 Big M Method

Minimize Z with inequalities of constraints in “≥” form.


There are two methods to solve minimization LP problems:
1. Direct method/Big M-method/: Using artificial variables
2. Conversion method: Minimization by maximizing the dual



Surplus Variable (-s):
A variable inserted in a greater than or equal to constraint to create equality. It represents the
amount of resource usage above the minimum required usage. A surplus variable is subtracted
from a ≥ constraint in the process of converting the constraint to standard form. Neither the
slack nor the surplus variable can take a negative value; they must be positive or zero.

Example: let us consider 5x1+2x2 ≥ 20


When x1 = 4.5 and
x2 = 2 ==> 5(4.5) + 2(2) - s = 20 ==> s = 6.5
When x1= 2 and x2= 5 ==> s= 0
But when x1= 0 and x2= 0 (No production)
==> s = -20 (This is mathematically unaccepted).
 Thus, in order to avoid the mathematical contradiction, we have to add artificial
variable (A)
 Artificial variable (A): Artificial variable is a variable that has no meaning in a
physical sense but acts as a tool to create an initial feasible LP solution.

Following are the characteristics of Big-M Method:


1. High penalty cost (or profit) is assumed as M
2. M is assigned to artificial variable A in the objective function Z.
3. Big-M method can be applied to minimization as well as maximization problems
with the following distinctions:
 Minimization problems: -Assign +M as coefficient of artificial variable A in
an objective function Z.
 Maximization problems: -Here –M is assigned as coefficient of artificial
variable A in the objective function Z.
4. Coefficient of S (slack/surplus) takes zero values in the objective function Z
5. For minimization problem, the incoming variable corresponds to a highest
positive value of Zj- Cj.
6. The solution is optimal when all the values of Zj - Cj are non-positive (for the
minimization case).



Example 1: Big Method

The ABC printing company is facing a tight financial squeeze and is attempting to cut costs
wherever possible. At present it has only one printing contract, and luckily the book is selling
well in both the hardcover and paper back editions. It has just received a request to print more
copies of this book in either the hardcover or paperback form. The printing cost for hardcover
books is birr 600 per 100 while that for paperback is only birr 500 per 100. Although the
company is attempting to economize, it does not wish to lay off any employee. Therefore, it
feels obliged to run its two printing presses at least 80 and 60 hours per week, respectively.
Press I can produce 100 hardcover books in 2 hours or 100 paperback books in 1 hour. Press
II can produce 100 hardcover books in 1 hour or 100 paperbacks books in 2 hours. Determine
how many books of each type should be printed in order to minimize costs.

Example 2: Big-M method


Minimize Z = 25x1 + 30x2
Subject to:
20x1 + 15x2 ≥ 100
2x1 + 3x2 ≥ 15
x1, x2 ≥ 0
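
A quick numerical check of Example 2 is sketched below, assuming SciPy is available; it bypasses the Big-M tableau and simply reports the optimum the method should reach:

    from scipy.optimize import linprog

    res = linprog([25, 30],                          # minimize 25x1 + 30x2
                  A_ub=[[-20, -15], [-2, -3]],       # the two >= constraints, negated into <= form
                  b_ub=[-100, -15],
                  method="highs")
    print(res.x, res.fun)                            # about x1 = 2.5, x2 = 3.33, Z = 162.5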

2.3 Duality and Sensitivity

Every LP has another LP associated with it, which is called its dual. The first way of stating
a linear problem is called the primal of the problem; the second way of stating the same
problem is called the dual. The optimal solutions for the primal and the dual are equivalent,
but they are derived through alternative procedures.

2.3.1 Primal and Duality

The dual contains economic information useful to management, and it may also be easier to
solve, in terms of less computation than the primal problem. Corresponding to every LP,
there is another LP. The given problem is called the primal. The related problem to the given
problem is known as the dual.



The dual of the dual is the primal. If the primal has an optimal solution, the dual will also have
an optimal solution; if the primal has no optimal solution, neither will the dual. Whether we
solve the primal or the dual, the optimal objective value remains the same.

Primal | Dual
Objective is minimization | Objective is maximization, and vice versa
≥ type constraints | ≤ type constraints
Number of columns | Number of rows
Number of rows | Number of columns
Number of decision variables | Number of constraints
Number of constraints | Number of decision variables
Coefficients of the objective function | RHS values
RHS values | Coefficients of the objective function

Finding the Dual of an LP


Define the variables for a max problem to be z, x1, x2, …,xn and the variables for a min
problem to be w, y1, y2, …, yn.

Normal max problem:
Maximize z = c1x1 + c2x2 + … + cnxn
subject to: ai1x1 + ai2x2 + … + ainxn ≤ bi (i = 1, 2, …, m), xj ≥ 0 (j = 1, 2, …, n)

Its dual:
Minimize w = b1y1 + b2y2 + … + bmym
subject to: a1jy1 + a2jy2 + … + amjym ≥ cj (j = 1, 2, …, n), yi ≥ 0 (i = 1, 2, …, m)

Normal min problem:
Minimize w = b1y1 + b2y2 + … + bmym
subject to: a1jy1 + a2jy2 + … + amjym ≥ cj (j = 1, 2, …, n), yi ≥ 0 (i = 1, 2, …, m)

Its dual is the normal max problem above.


Finding the dual to a max problem in which all the variables are required to be nonnegative
and all the constraints are ≤ constraints (called a normal max problem) is shown in the next
section.

2.3.2 Economic Interpretation of the Dual Problem

Example: The Dakota workshop wants to produce desks, tables, and chairs with the available
resources of timber, finishing hours and carpentry hours, as given in the table below. The
selling prices and available resources are also given in the table. Formulate this problem as a
primal and a dual problem.

Resource | Desk | Table | Chair | Availability
Timber | 8 board ft | 6 board ft | 1 board ft | 48 board ft
Finishing | 4 hours | 2 hours | 1.5 hours | 20 hours
Carpentry | 2 hours | 1.5 hours | 0.5 hours | 8 hours
Selling price | $60 | $30 | $20 |

Interpreting the Dual of the Dakota (Max) Problem

The first dual constraint is associated with desks, the second with tables, and the third with
chairs. Decision variable y1 is associated with Timber, y2 with finishing hours, and y3 with
carpentry hours. Suppose an entrepreneur wants to purchase all of Dakota’s resources. The



entrepreneur must determine the price he or she is willing to pay for a unit of each of
Dakota’s resources.

To determine these prices we define:


y1 = price paid for 1 board ft of timber
y2 = price paid for 1 finishing hour
y3 = price paid for 1 carpentry hour
The resource prices y1, y2, and y3 should be determined by solving the Dakota dual.

The total price that should be paid for these resources is 48 y1 + 20y2 + 8y3. Since the cost of
purchasing the resources is to be minimized:
Min w = 48y1 + 20y2 + 8y3 is the objective function for Dakota dual.
In setting resource prices, the prices must be high enough to induce Dakota to sell.
For example, the entrepreneur must offer Dakota at least $60 for a combination of resources
that includes 8 board feet of timber, 4 finishing hours, and 2 carpentry hours because Dakota
could, if it wished, use the resources to produce a desk that could be sold for $60. Since the
entrepreneur is offering 8y1 + 4y2 + 2y3 for the resources used to produce a desk, he or she
must choose y1, y2, and y3 to satisfy: 8y1 + 4y2 + 2y3 ≥ 60. Similar reasoning shows that at
least $30 must be paid for the resources used to produce a table.
Thus y1, y2, and y3 must satisfy: 6y1 + 2y2 + 1.5y3 ≥ 30

Likewise, at least $20 must be paid for the combination of resources used to produce one
chair. Thus y1, y2, and y3 must satisfy: y1 + 1.5y2 + 0.5y3 ≥ 20. The solution to the Dakota
dual yields prices for timber, finishing hours, and carpentry hours.
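
The sketch below (SciPy assumed available) solves both the Dakota primal and this dual with linprog and illustrates strong duality: the two optimal objective values should coincide.

    from scipy.optimize import linprog

    # Primal: maximize 60x1 + 30x2 + 20x3 subject to the three resource limits.
    primal = linprog([-60, -30, -20],
                     A_ub=[[8, 6, 1], [4, 2, 1.5], [2, 1.5, 0.5]],
                     b_ub=[48, 20, 8], method="highs")

    # Dual: minimize 48y1 + 20y2 + 8y3 subject to the three >= price constraints.
    dual = linprog([48, 20, 8],
                   A_ub=[[-8, -4, -2], [-6, -2, -1.5], [-1, -1.5, -0.5]],
                   b_ub=[-60, -30, -20], method="highs")

    print(-primal.fun, dual.fun)    # both objective values should be about 280
    print(dual.x)                   # resource prices, e.g. y1 = 0, y2 = 10, y3 = 10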

2.3.3 Sensitivity Analysis

In an LP model, the input data (also known as parameters) such as:


i) Profit (cost) contribution Cj per unit of decision variable
ii) Availability of resources (bj)
iii) Consumption of resources per unit of decision variables (aij) are assumed constant and
known with certainty during a planning period.



However, in real-world situations some data may change over time because of the dynamic
nature of business: such changes in any of these parameters may raise doubt on the validity of
the optimal solution of the given LP model. Thus, the decision maker in such situation would
like to know how sensitive the optimal solution is to the changes in the original input data
values.

Sensitivity analysis and parametric linear programming are the two techniques that evaluate
the relationship between the optimal solution and the changes in the LP model parameters.
Sensitivity analysis is the study of sensitivity of the optimal solution of an LP problem due to
discrete variations (changes) in its parameters.

The degree of sensitivity of the solution due to these variations can range from no change at
all to a substantial change in the optimal solution of the given LP problem. Thus, in
sensitivity analysis, we determine the range over which the LP model parameters can change
without affecting the current optimal solution.

Example: Sensitivity analysis


A company wants to produce three products: A, B and C. The unit profit on these products is
Birr 4, Birr 6, and Birr 2, respectively. These products require two types of resources, human
power and raw material. The LP model formulated for determining the optimal product mix
is as follows:

Maximize Z = 4x1 + 6x2 + 2x3


Subject to the constraints:
i) Human power constraint
X1 + X2 +X3 ≤ 3
ii) Raw material constraint
X1 + 4X2 + 7X3 ≤ 9
and X1, X2, X3 ≥ 0
Where X1, X2, and X3 = number of units of product A, B and C, respectively to be produced.
a. Find the optimal product mix and the corresponding profit of the company.



b. Find the range of the profit contribution of product C (i.e coefficient C3 of variable
X3) in the objective function such that current optimal product mix remains
unchanged.
c. What shall be the new optimal product mix when the profit per unit from product C is
increased from Birr 2 to Birr 10?
d. Find the range of the profit contribution of product A (i.e coefficient C1 of variable
X1) in the objective function such that current optimal product mix remains
unchanged.
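
For parts (a) and (c), a quick numerical sketch is shown below (SciPy assumed available): the original model is solved first, and then re-solved with the unit profit of product C raised from Birr 2 to Birr 10. The ranging questions (b) and (d) still require the sensitivity information from the optimal simplex tableau.

    from scipy.optimize import linprog

    A_ub = [[1, 1, 1],        # human power:  X1 + X2 + X3 <= 3
            [1, 4, 7]]        # raw material: X1 + 4X2 + 7X3 <= 9
    b_ub = [3, 9]

    original = linprog([-4, -6, -2], A_ub=A_ub, b_ub=b_ub, method="highs")
    revised  = linprog([-4, -6, -10], A_ub=A_ub, b_ub=b_ub, method="highs")

    print(original.x, -original.fun)   # about X = (1, 2, 0) with profit 16 Birr
    print(revised.x, -revised.fun)     # about X = (2, 0, 1) with profit 18 Birr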



CHAPTER 3
INTEGER PROGRAMMING

3.1 Introduction
In linear programming, each decision variable, as well as each slack and/or surplus variable,
is allowed to take any fractional value. However, there are certain practical problems in
which fractional values of the decision variables have no significance. For example, it does
not make sense to speak of 1.5 workers on a project or 1.6 machines in a workshop. An integer
solution can, of course, be obtained by rounding off the optimum values of the variables to the
nearest integers. This approach is easy in terms of the effort, time and cost required to derive
an integer solution, but the rounded solution may not satisfy all the given constraints.
Secondly, the value of the objective function so obtained may not be the optimal value. All
such difficulties can be avoided if the given problem, where an integer solution is required, is
solved by integer programming techniques.

There are certain decision problems where the decision variables make sense only if they have integer values in the solution. Capital budgeting, construction scheduling, plant location and size, routing and shipping schedules, batch sizes, capacity expansion, fixed charge problems, etc., are a few areas of application of integer programming.

3.2 Integer Programming Models


There are three types of models:

• Pure Integer Model: all decision variables are required to have integer solution values.
• 0-1 Integer Model: all decision variables are required to have integer values of zero or one.
• Mixed Integer Model: some of the decision variables (but not all) are required to have integer values.



Pure integer linear programming problem in its standard form can be stated as follows:
Maximize Z = C1X1 +C2X2 +….. + CnXn
Subject to the Constraints:
a11x1 + a12x2 + …..+ a1nxn = b1
a21x1 + a22x2 + …..+ a2nxn = b2
a31x1 + a32x2 + …..+ a3nxn = b3
. . . .
. . . .
am1x1 + am2x2 + …..+ amnxn = bm
and X1, X2, …, Xn ≥ 0 and are integers.
Example: Pure integer Model

• A machine shop is obtaining new presses and lathes.
• Marginal profitability: each press $100/day; each lathe $150/day.
• Resource constraints: $40,000 budget, 200 sq. ft. floor space.
• Machine purchase prices and space requirements:

 Machine    Required floor space (ft2)    Purchase price
 Press      15                            $8,000
 Lathe      30                            $4,000

Integer Programming Model:


Let x1 = number of presses
x2 = number of lathes
Maximize Z = $100x1 + $150x2
subject to:
$8,000x1 + 4,000x2 ≤ $40,000
15x1 + 30x2 ≤ 200 ft2
x1, x2 ≥ 0 and integer
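A minimal solver sketch for this model (an illustration, not part of the notes; it assumes Python with SciPy 1.9 or later, whose milp routine wraps a mixed-integer solver):

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    c = np.array([-100.0, -150.0])                  # milp minimizes, so profits are negated
    A = np.array([[8000.0, 4000.0],                 # budget constraint
                  [15.0, 30.0]])                    # floor-space constraint
    cons = LinearConstraint(A, lb=-np.inf, ub=[40000.0, 200.0])

    res = milp(c, constraints=cons,
               integrality=np.ones(2),              # both variables must be integer
               bounds=Bounds(0, np.inf))
    print(res.x, -res.fun)                          # expected: x = (1, 6), Z = $1,000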

A 0 - 1 Integer Model - example


■ Recreation facilities selection to maximize daily usage by residents.
■ Resource constraints: $120,000 budget; 12 acres of land.



■ Selection constraint: either swimming pool or tennis center (not both).

 Recreation Facility    Expected Usage (people/day)    Cost ($)    Land Requirement (acres)
 Swimming pool          300                            35,000      4
 Tennis Center           90                            10,000      2
 Athletic field         400                            25,000      7
 Gymnasium              150                            90,000      3

Integer Programming Model:


Let x1 = construction of a swimming pool
x2 = construction of a tennis center
x3 = construction of an athletic field
x4 = construction of a gymnasium
Maximize Z = 300x1 + 90x2 + 400x3 + 150x4
subject to:
$35,000x1 + 10,000x2 + 25,000x3 + 90,000x4 ≤ $120,000
4x1 + 2x2 + 7x3 + 3x4 ≤ 12 acres
x1 + x2 ≤ 1 facility
x1, x2, x3, x4 = 0 or 1
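A corresponding solver sketch (same assumption as before: SciPy 1.9+); the 0-1 restriction is modelled by integer variables bounded between 0 and 1:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    c = -np.array([300.0, 90.0, 400.0, 150.0])       # maximize expected daily usage
    A = np.array([[35000, 10000, 25000, 90000],      # budget
                  [4, 2, 7, 3],                      # land (acres)
                  [1, 1, 0, 0]])                     # pool or tennis centre, not both
    cons = LinearConstraint(A, lb=-np.inf, ub=[120000, 12, 1])

    res = milp(c, constraints=cons,
               integrality=np.ones(4), bounds=Bounds(0, 1))
    print(res.x, -res.fun)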

A Mixed Integer Model


 $250,000 available for investments providing greatest return after one year.
 Data:
 Condominium cost $50,000/unit; $9,000 profit if sold after one year.
 Land cost $12,000/ acre; $1,500 profit if sold after one year.
 Municipal bond cost $8,000/bond; $1,000 profit if sold after one year.
 Only 4 condominiums, 15 acres of land, and 20 municipal bonds available.

Integer Programming Model:


Let : x1 = condominiums purchased
x2 = acres of land purchased



x3 = bonds purchased
Maximize Z = $9,000x1 + 1,500x2 + 1,000x3
subject to:
50,000x1 + 12,000x2 + 8,000x3 ≤ $250,000
x1 ≤ 4 condominiums
x2 ≤ 15 acres
x3 ≤ 20 bonds
x1, x3 ≥ 0 and integer; x2 ≥ 0 (acres of land need not be a whole number, which makes the model mixed)
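A sketch of the mixed model under the same SciPy assumption; the integrality vector marks only the condominium and bond counts as integer, while acres of land stay continuous:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    c = -np.array([9000.0, 1500.0, 1000.0])          # maximize one-year profit
    cons = LinearConstraint([[50000, 12000, 8000]], lb=-np.inf, ub=[250000])

    res = milp(c, constraints=cons,
               integrality=np.array([1, 0, 1]),      # x1, x3 integer; x2 continuous
               bounds=Bounds([0, 0, 0], [4, 15, 20]))
    print(res.x, -res.fun)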

3.3 Methods of solving Integer Programming

There are various methods for solving integer programming problems. In this class, we shall
see two methods – namely:
1. Gomory’s cutting plane method

2. Branch and Bound method

3.3.1 Gomory's All Integer Programming Algorithm

An iterative procedure for the solution of an all integer programming problem by Gomory's
cutting plane method may be summarized in the following steps.

Step 1 Initialization Formulate the standard integer LP problem. If there are any non-integer
coefficients in the constraint equations, convert them into integer coefficients. Solve it by
simplex method, ignoring the integer requirement of variables.

Step 2 Test the optimality


a) Examine the optimal solution. If all basic variables (i.e. XBi = bi ≥ 0) have integer values,
the integer optimal solution has been derived and the procedure should be terminated. The
current optimal solution obtained in Step 1 is the optimal basic feasible solution to the integer
linear programming.
b) If one or more basic variables with integer requirements have non-integer solution values,
then go to Step 3.



Step 3 Generate cutting plane: choose a row r corresponding to a variable X, which has the
largest fractional value fr and generate the cutting plane (a Gomory constraint).
- fr = Sgr - Σj frj Xj,   where 0 ≤ frj < 1 and 0 < fr < 1 (the summation runs over the non-basic variables Xj, and Sgr is a new non-negative slack variable).

If more than one variable has the same largest fractional value, then choose the one that has the smallest contribution to a maximization LP problem or the largest to a minimization LP problem.

Step 4 Obtain the new solution: add the cutting plane generated in Step 3 to the bottom of the current optimal simplex table. Find a new optimal solution by using the dual simplex method, i.e., choose the entering variable with the smallest ratio {(Cj - Zj)/Yij : Yij < 0}, and return to Step 2. The process is repeated until all basic variables with integer requirements assume non-negative integer values.
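The fractional parts fr and frj used in Step 3 are easy to compute. The short sketch below (an illustration only, not from the notes) generates them from one source row; the numbers used are the row b = 1/3 with non-basic coefficients 1/3 and -2/3 that appears in Example 3.1 later in this section.

    import math

    def gomory_cut(rhs, coeffs):
        """Return (f_r, [f_rj]) so that the cut reads  sum(f_rj * x_j) >= f_r."""
        f_r = rhs - math.floor(rhs)                   # fractional part of the right-hand side
        f_rj = [a - math.floor(a) for a in coeffs]    # fractional parts of the row coefficients
        return f_r, f_rj

    # Source row of Example 3.1:  x1 + (1/3) s1 - (2/3) s2 = 1/3
    print(gomory_cut(1/3, [1/3, -2/3]))
    # expected (as fractions): f_r = 1/3, f_rj = [1/3, 1/3]
    # -> the cut (1/3) s1 + (1/3) s2 >= 1/3, i.e. Cut 1 in the example below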

The procedure for solving an integer linear programming problem is summarized in the flow chart shown in figure 3.1 below.

Figure 3.1 Flow chart for solving ILP problem



Integer Programming Graphical Solution

• Rounding non-integer solution values up to the nearest integer value can result in an infeasible solution.
• Rounding non-integer solution values down ensures a feasible solution, but it may be less than optimal (sub-optimal).

Gomory's cutting-plane method

Gomory's cutting-plane method was developed by R. E. Gomory in 1956 to solve integer linear programming problems using the dual simplex method. It is based on the generation of a sequence of linear inequalities, each called a cut. A 'cut' removes part of the feasible region of the corresponding LP problem while leaving intact the feasible region of the integer linear programming problem. The hyperplane boundary of a cut is called the cutting plane.

Consider the following linear integer programming (LIP) problem


Maximize Z= 14x1 + 16x2
subject to the constraints
4x1 + 3x2 ≤ 12
6x1 + 8x2 ≤ 24
and x1, x2 ≥ 0 and are integers.
Relaxing the integer requirements, the problem is solved graphically using figure 3.2. We obtain the optimal solution of this LP problem as:
X1 = 1.71, X2 = 1.71 and Max Z = 51.42
This solution does not satisfy the integer requirement of variables x1 and x2. The feasible region (solution space) formed by the constraints is marked OABC in figure 3.2.

Rounding off this solution to x1 = 2, x2 = 2 does not satisfy both constraints and therefore the rounded solution is infeasible. The dots in figure 3.2, also referred to as lattice points, represent all of the integer solutions that lie within the feasible solution space of the LP problem. However, it is difficult to evaluate every such point to determine the value of the objective function.

Figure 3.2 suggests that we can find a solution to the problem when it is formulated as an LP problem (which by chance could contain integers). It may be noted that the optimal lattice



point C lies at the corner of the solution space OABC obtained by cutting away the small
portion above the dotted line. This suggests a solution procedure that successively cuts down
(reduces) the feasible solution space until an integer-valued corner is found.

Figure 3.2 Concept of cutting plane

The optimal integer solution to the given LP problem is: x1 = 0, x2 = 3 and Max Z = 48.
Notice that its lattice point is not even adjacent to the most desirable LP problem solution
corner.

Remark: Reducing the feasible region by adding extra constraints (cuts) can never improve the objective function value; usually it makes it worse. If ZIP represents the maximum value of the objective function in the ILP problem and ZLP the maximum value in the corresponding LP problem, then ZIP ≤ ZLP.
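Because this example is tiny, the statement can be checked by brute force; the sketch below (an illustration only, not part of the notes) enumerates the lattice points:

    # Enumerate all lattice points of the example and confirm x1 = 0, x2 = 3, Z = 48.
    best = max(
        (14 * x1 + 16 * x2, x1, x2)
        for x1 in range(4) for x2 in range(4)
        if 4 * x1 + 3 * x2 <= 12 and 6 * x1 + 8 * x2 <= 24
    )
    print(best)   # (48, 0, 3) -- below the LP relaxation value of about 51.4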

Example of Gomory’s method -


Graphical Solution of Machine Shop Model

Maximize Z = $100x1 + $150x2

Subject to:

8,000x1 + 4,000x2 ≤ $40,000
15x1 + 30x2 ≤ 200 ft2
x1, x2 ≥ 0 and integer



Figure 3.3 Feasible solution space with integer solution points

Optimal solution of the LP relaxation (non-integer):
Z = $1,055.56
x1 = 2.22 presses
x2 = 5.55 lathes
Example 3.1 Solve the following integer programming problem using Gomory’s
cutting plane algorithm:

Maximize: Z = X1 + X2
Subject to the constraints:
3X1 + 2X2 ≤ 5
X2 ≤ 2
and X1 , X2 ≥ 0 and are integers



Solution
Step 1 Obtain the optimal solution to the LP problem ignoring the integer restriction by the
simplex method. The optimal solution is shown in table 3.1.

Table 3.1 Optimal non-integer solution

                                    Cj      1      1       0       0
 CB    Variables in basis (B)    Solution values b(=XB)    X1     X2      S1      S2
 1     X1                         1/3                       1      0       1/3    -2/3
 1     X2                         2                         0      1       0       1
       Z = 7/3                    Cj - Zj                   0      0      -1/3    -1/3

In table 3.1, since all Cj - Zj ≤ 0, the optimal solution is: X1 = 1/3, X2 = 2 and Max Z = 7/3.

Step 2: In the current optimal solution, all basic variables in the basis are not integers and the
solution is not acceptable. Since both decision variables X1 and X2 are assumed to take on
integer value, a pure integer cut is developed under the assumption that all the variables are
integers, as explained in step 3.

Step 3: Since X1 is the only basic variable whose value is fractional, we consider the first row for generating the Gomory cut. Taking the X1-equation as the source row in Table 3.1, we write:
1/3 = x1 + 0 x2 + 1/3 s1 – 2/3 s2 (X1 – source row)
The factoring of the x1 – source row yields:
(0 + 1/3) = (1 + 0) x1 + (0+1/3) s1 + (-1 +1/3) s2
Notice that each of the non-integer coefficients is factored into integer and fractional parts in
such a manner that the fractional part is strictly positive.
Rearrange the equation so that all of the integer coefficients appear on the left-hand side. This
gives:
1/3 + (s2 – x1) = 1/3 s1 + 1/3 s2

Since s1 and s2 are non-negative, the right-hand side is non-negative; and because s2 - x1 takes an integer value in any integer solution, the left-hand side must satisfy:

1/3 ≤ 1/3 s1 + 1/3 s2
1/3 + sg1 = 1/3 s1 + 1/3 s2, or -1/3 = sg1 - 1/3 s1 - 1/3 s2   (Cut 1)



where sg1 is the new non-negative (integer) slack variable.

Adding the Gomory cut at the bottom of table 3.1 gives table 3.2.

Table 3.2 Improved solution


                                    Cj      1      1       0       0       0
 CB    Variables in basis (B)    Solution values b(=XB)    X1     X2      S1      S2      Sg1
 1     X1                         1/3                       1      0       1/3    -2/3     0
 1     X2                         2                         0      1       0       1       0
 0     Sg1                       -1/3                       0      0      -1/3    -1/3     1
       Z = 7/3                    Cj - Zj                   0      0      -1/3    -1/3     0
       Ratio (Cj - Zj)/Y3j for Y3j < 0                                     1       1       -

Step 4 Apply the dual simplex method to find the new optimal solution. The key row and key
column are marked in table 3.2. The new solution is shown in table 3.3.

Table 3.3 New optimal solution


                                    Cj      1      1       0       0       0
 CB    Variables in basis (B)    Solution values b(=XB)    X1     X2      S1      S2      Sg1
 1     X1                         0                         1      0       0      -1       1
 1     X2                         2                         0      1       0       1       0
 0     S1                         1                         0      0       1       1      -3
       Z = 2                      Cj - Zj                   0      0       0       0      -1

The solution given in table 3.3 is: X1 = 0, X2 = 2, S1 = 1 and Max Z = 2. This solution also satisfies the integer requirements.



3.3.2 Branch and Bound Method
• The traditional approach to solving integer programming problems.
• The set of feasible solutions is partitioned into smaller subsets.
• The smaller subsets are evaluated until the best solution is found.
• The method is a tedious and complex mathematical process.

The concept behind this method is to divide the entire feasible solution space of the LP problem into smaller parts, called sub-problems, and then search each of them for an optimal solution. This approach is useful in those cases where there is a large number of feasible solutions and complete enumeration becomes economically impractical. The branch and bound method proceeds by imposing upper and/or lower bounds on the decision variables to create the sub-problems, and by computing a bound on the objective function value of each sub-problem. This reduces the number of sub-problems that must be solved by the simplex method, because any sub-problem whose bound is worse than the current best feasible solution is discarded and only the remaining sub-problems are examined.

The branch and bound algorithm

Splitting of the original LP problem into sub-problems B and C (see figure).



CHAPTER 4
TRANSPORTATION AND ASSIGNMENT MODEL

4.1 Introduction
One important application of LP is in the area of physical distribution (transportation) of goods and services from several supply origins to several demand destinations. A transportation problem can be expressed mathematically as an LP model, but it involves many variables and constraints and therefore takes a long time to solve by the ordinary simplex method.

The objective is to determine the number of units that should be shipped from each origin to each destination in order to satisfy the required quantity of goods or services at every demand destination. The structure of the transportation problem involves a large number of shipping routes from several supply origins to several demand destinations, and the aim is to determine the optimum transportation schedule that minimizes total transportation cost or time. Although a transportation problem can be solved using standard LP, the transportation algorithm is computationally more efficient.

The problem can be represented in the form of transportation table.


 Factories      W1           W2           W3          ...    Wn           Capacity (ai)
 F1             C11 (X11)    C12 (X12)    C13 (X13)   ...    C1n (X1n)    a1
 F2             C21 (X21)    C22 (X22)    C23 (X23)   ...    C2n (X2n)    a2
 ...            ...          ...          ...         ...    ...          ...
 Fm             Cm1 (Xm1)    Cm2 (Xm2)    Cm3 (Xm3)   ...    Cmn (Xmn)    am
 Requirement
 (bj)           b1           b2           b3          ...    bn           Σ ai = Σ bj

Here Cij is the unit transportation cost from factory Fi to warehouse Wj, and Xij is the quantity shipped on that route.



Mathematically, it can be expressed as:

Minimize Z = Σi Σj Cij Xij,   i = 1, 2, ..., m; j = 1, 2, ..., n    (objective function)

Subject to:
Σj Xij = ai   for i = 1, 2, ..., m    (supply constraints)
Σi Xij = bj   for j = 1, 2, ..., n    (demand constraints)
Xij ≥ 0 for all i and j               (non-negativity condition)

A necessary and sufficient condition for the existence of a feasible solution to the transportation problem is:
Total supply = Total demand, i.e.  Σi ai = Σj bj    (also called the rim condition)

That is, the total capacity (or supply) must equal the total requirement (or demand).

The characteristics of a transportation problem are as follows:

• A limited supply of one commodity is available at certain sources or origins.
• There is a demand for the commodity at several destinations.
• The quantities of supply at each source and the demand at each destination are constant.
• The shipping or transportation costs per unit from each source to each destination are assumed to be constant.
• No shipments are allowed between sources or between destinations.
• All supply and demand quantities are given in whole numbers (integers).
• The problem is to determine how many units should be shipped from each source to each destination so that all demands are satisfied at the minimum total shipping cost.
Uses of transportation techniques:
• Reduce distribution or transportation cost.
• Improve competitiveness of products.
• Assist proper location of warehouses.
• Assist proper location of new factories or plants being planned.
• Close down warehouses which are found costly and uneconomical.
The objectives of the transportation problem are:
• To identify the optimal (minimum-cost) shipping routes.
• To identify the maximum amount that can be shipped over the optimum route.
• To determine the total transportation cost or the profit of transportation.

Transportation problem network: origins (sources of supply) F1, F2 and F3, with supplies of 5,000, 6,000 and 2,500 units, ship to destination (demand) centres W1, W2, W3 and W4, with demands of 6,000, 4,000, 2,000 and 1,500 units; the number on each arc of the network is the unit shipping cost.

4.2 Solution Mechanism for Transportation Problems


• Find an initial solution:
  - North West Corner Method
  - Least/Minimum Cost Method
  - Vogel's Approximation Method
• Find the optimal solution:
  - Stepping Stone Method
  - MODI (Modified Distribution) Method

Solution algorithm for a transportation problem

Step 1: Formulate the problem and set up in the matrix form


Step 2: Obtain an initial basic feasible solution. Any of the three methods (NWCM, LCM or VAM) can be used to find the initial feasible solution.



• The initial solution obtained by any of the three methods must satisfy the following conditions:
  - The solution must be feasible: it must satisfy all the supply and demand constraints.
  - The number of positive allocations must equal m + n - 1, where m = number of rows and n = number of columns.

Step 3: Test the initial solution for optimality


Step 4: Repeat step 3 until an optimal solution is reached

4.2.1 North West Corner Method (NWCM) and Least Cost Method

1. Begin with the upper left hand cell (Left, upper most in the table), & allocate as many
units as possible to that cell. This will be the smaller amount of either the row supply
or the column demand. Adjust the row & column quantities to reflect the allocation.
2. Subtract from the row supply & from the column demand the amount allocated.
3. If the column demand is now zero, move to the cell next to the right, if the row supply
is zero, move down to the cell in the next row. If both are zero, move first to the next
cell on the right then down one cell.
4. Once a cell is identified as per step (3), it becomes a northwest cell. Allocate to it an
amount as per step (1).
5. Repeat the above steps (1) - (4) until all the remaining supply and demand is gone.
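A compact sketch of the NWCM allocation loop described above (an illustration only, not from the notes; plain Python):

    def north_west_corner(supply, demand):
        supply, demand = list(supply), list(demand)
        m, n = len(supply), len(demand)
        alloc = [[0] * n for _ in range(m)]
        i = j = 0
        while i < m and j < n:
            q = min(supply[i], demand[j])      # allocate as much as possible to cell (i, j)
            alloc[i][j] = q
            supply[i] -= q
            demand[j] -= q
            if supply[i] == 0:                 # row supply exhausted: move down a row
                i += 1
            else:                              # column demand exhausted: move one cell right
                j += 1
        return alloc

    # Supplies and demands of Example 4.1 below; with its unit costs this
    # starting allocation prices out at Birr 1,015.
    print(north_west_corner([7, 9, 18], [5, 8, 7, 14]))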

Steps in Least Cost Method (LCM)


• Since the objective is to minimize the total transportation cost, we must try to transport as much as possible through those routes (cells) where the unit transport cost is lowest.
• This method takes into account the minimum unit cost of transportation for obtaining the initial solution and can be summarized as follows.
Step 1: select the cell with lowest unit cost in the entire transportation table and allocate as
much as possible to this cell and eliminate (line out) that row or column.
Step 2: After adjusting the supply and demand for all uncrossed-out rows and columns repeat
the procedure with the next lowest unit cost among the remaining rows and columns of the
transportation table.



Step 3: Repeat the procedure until the entire available supply at various sources and demand
at various destinations is satisfied.

Example 4.1
A company has three production facilities S1, S2 and S3 with production capacities of 7, 9 and 18 units (in 100s) per week of a product, respectively. These units are to be shipped to four warehouses D1, D2, D3 and D4 with requirements of 5, 8, 7 and 14 units (in 100s) per week, respectively. The transportation costs (in Birr) per unit between the factories and the warehouses are given in the table below.

           D1     D2     D3     D4     Capacity
 S1        19     30     50     10        7
 S2        70     30     40     60        9
 S3        40      8     70     20       18
 Demand     5      8      7     14       34

Based on the above information, attempt the following questions:


1. Formulate the transportation problem as an LP model to minimize the total
transportation cost.
2. Determine the total transportation cost of the initial solution using:
i) North West Corner Method
ii) Least Cost Method
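For reference, a solver sketch (an illustration only, assuming SciPy is available) that computes the minimum-cost plan of Example 4.1 as a plain LP, against which the NWCM and LCM starting solutions can be compared:

    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([[19, 30, 50, 10],
                     [70, 30, 40, 60],
                     [40,  8, 70, 20]])
    supply = [7, 9, 18]
    demand = [5, 8, 7, 14]

    m, n = cost.shape
    A_eq, b_eq = [], []
    for i in range(m):                       # supply (row) constraints
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_eq.append(row); b_eq.append(supply[i])
    for j in range(n):                       # demand (column) constraints
        col = np.zeros(m * n); col[j::n] = 1
        A_eq.append(col); b_eq.append(demand[j])

    res = linprog(cost.flatten(), A_eq=A_eq, b_eq=b_eq)
    print(res.x.reshape(m, n), res.fun)      # optimal shipping plan and minimum total cost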

4.3 Special cases in transportation


1. Unbalanced supply and demand
• For a feasible solution to exist, it is necessary that the total supply equals the total demand.
• A situation may, however, arise in which the total available supply is not equal to the total demand. The following two cases may then arise:
a) If total supply exceeds total demand, an additional column called a dummy demand centre can be added to the transportation table to absorb the excess supply.



b) If total demand exceeds total supply, a dummy row called a dummy supply centre can be added to the transportation table to account for the excess demand quantity.

2. Degeneracy in Transportation Problems

• Degeneracy is a condition that occurs when the number of occupied cells in a solution is less than the number of rows plus the number of columns minus one (m + n - 1) in the transportation table.
• A solution is called degenerate when the number of occupied cells is less than the required number, m + n - 1.
• To resolve degeneracy, we proceed by allocating a very small quantity close to zero to one or more unoccupied cells so as to get m + n - 1 occupied cells.

4.4 Assignment problem

An assignment problem is a particular case of the transportation problem where the objective is to assign a number of resources to an equal number of activities so as to minimize total cost or maximize total profit of the allocation. The problem of assignment arises because available resources such as men, machines, etc., have varying degrees of efficiency for performing different activities. Therefore, the cost, profit or time of performing the different activities differs.

Thus, the problem is how the assignments should be made so as to optimize the given
objective. Some of the problems where assignment technique may be useful are: Assignment
of workers to machines, salesmen to different sales areas, classes to rooms, etc.
Assignment models are a special type of transportation model where:
• the number of sources = the number of destinations, and
• each capacity and requirement value = 1.
The assignment problem can be solved by methods such as:
• the enumeration method,
• the simplex method,
• the transportation method, and
• the Hungarian method.
The Hungarian method is named after the Hungarian mathematician D. König, on whose work it is based.



The following are the assumptions:
• The number of jobs is equal to the number of machines or persons.
• Each man or machine is loaded with one and only one job.
• Each man or machine is independently capable of handling any of the jobs presented.
• The loading criterion must be clearly specified, such as "minimizing operating time", "maximizing profit", "minimizing production cost" or "minimizing throughput (production cycle) time".

General Mathematical formulation of AP

Given n resources (or facilities) and n activities (or jobs), and the effectiveness (in terms of cost, profit, time, etc.) of each resource (facility) for each activity (job), the problem lies in assigning each resource to one and only one activity so that the given measure of effectiveness is optimized.

The data matrix for this problem is shown on table below.

 Resources (workers)          Activities (jobs)                       Supply
                        J1         J2         ...        Jn
 W1                     C11        C12        ...        C1n            1
 W2                     C21        C22        ...        C2n            1
 ...                    ...        ...        ...        ...           ...
 Wn                     Cn1        ...        ...        Cnn            1
 Demand                 1          1          ...        1              n

Let Xij denote the Assignment of ith machine to jth job such that:

Xij = 1 if facility i is assigned to job j


0 otherwise



Minimize Z = Σi Σj Cij Xij,   i, j = 1, 2, ..., n    (objective function)

Subject to:
Σj Xij = 1   for i = 1, 2, ..., n    (resource availability)
Σi Xij = 1   for j = 1, 2, ..., n    (activity requirement)
Xij = 0 or 1 for all i and j,

where Cij represents the cost of assigning the ith person to the jth job.

Assignment Problem:
Consider three operators (P1, P2, P3) and three jobs (J1, J2, J3) with the following cost matrix:

         J1     J2     J3
 P1      20     15     31
 P2      17     16     33
 P3      18     19     27

The problem can be presented as an LP as follows:
Min Z = 20X11 + 15X12 + 31X13 + 17X21 + 16X22 + 33X23 + 18X31 + 19X32 + 27X33
Subject to the constraints:
a. Operator (P) constraints:
   X11 + X12 + X13 = 1   (operator one)
   X21 + X22 + X23 = 1   (operator two)
   X31 + X32 + X33 = 1   (operator three)
b. Job (J) constraints:
   X11 + X21 + X31 = 1   (job one)
   X12 + X22 + X32 = 1   (job two)
   X13 + X23 + X33 = 1   (job three)
Sign restrictions: Xij is either 0 or 1 for all i and j.

Since all Xij can be either 0 or 1, there will be one assignment in each operator constraint and one assignment in each job constraint.
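As a check on this small example, a sketch using SciPy's linear_sum_assignment (an illustration only, not part of the notes; the routine is a Hungarian-type solver of the kind described in the next section):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    cost = np.array([[20, 15, 31],    # rows: operators P1-P3, columns: jobs J1-J3
                     [17, 16, 33],
                     [18, 19, 27]])
    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows, cols)), cost[rows, cols].sum())
    # expected: P1 -> J2, P2 -> J1, P3 -> J3 with total cost 15 + 17 + 27 = 59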

4.4.1 Solving Methods for Assignment Problem


Hungarian Method
1. Perform row operation
2. Perform column operation
3. Make the assignments by Circles and crosses



4. If the number of assignments equals the number of rows (or columns), optimality has been reached and the procedure ends.
5. If not, draw the minimum number of vertical and horizontal lines covering all zeros, then:
• select the smallest uncovered element,
• subtract it from all uncovered elements,
• add it to all elements lying at the intersection of two lines, and
• go to step 3 and repeat until optimality is reached.

Special Cases of Assignment Problem


• The Hungarian method of assignment discussed above requires that the number of columns and rows in the assignment matrix be equal.
• However, when the given cost matrix is not a square matrix, the assignment problem is called an unbalanced problem. In such cases a dummy row (or column) is added in the matrix, with zeros as the cost elements, to make it a square matrix.
• For example, if the cost matrix is of order 4 x 3, a dummy column would be added with zero cost elements in that column.

Example 4.2
A computer center has three programmers. The center wants three application programs to be developed. The head of the computer center, after studying carefully the programs to be developed, estimates the computer time in minutes required by the experts for the application programs as follows.

 Programmers        Programs (estimated time in minutes)
                      A        B        C
 1                   120      100       80
 2                    80       90      110
 3                   110      140      120



CHAPTER 5
DECISION ANALYSIS

5.1 Introduction
The success or failure that an individual or organization experiences depends to a large extent on the ability to make appropriate decisions. Making a decision requires an enumeration of feasible and viable alternatives (courses of action or strategies), the projection of consequences associated with different alternatives, and a measure of effectiveness (or an objective) by which the most preferred alternative is identified. Decision theory provides an analytical and systematic approach to the study of decision-making. In other words, decision theory provides a method of rational decision-making wherein data concerning the occurrence of different outcomes (consequences) may be evaluated to enable the decision-maker to identify the most suitable alternative (or course of action).

Decision models useful in helping decision-makers make the best possible decisions are classified according to the degree of certainty. The scale of certainty can range from complete certainty to complete uncertainty. The region which falls between these two extreme points corresponds to decision making under risk (probabilistic problems).

Irrespective of the type of decision model, there are certain essential characteristics which are
common to all as listed below.

Decision alternatives There is a finite number of decision alternatives available with the
decision-maker at each point in time when a decision is made. The number and type of such
alternatives may depend on the previous decisions made and on what has happened
subsequent to those decisions. These alternatives are also called courses of action (actions,
acts or strategies) and are under control and known to the decision-maker. These may be
described numerically such as, stocking 100 units of a particular item, or non-numerically
such as, conducting a market survey to know the likely demand of an item.

State of nature A possible future condition (consequence or event) resulting from the choice of a decision alternative depends upon certain factors beyond the control of the decision-maker. These factors are called states of nature (futures). For example, if the decision is to


carry an umbrella or not, the consequence (get wet or do not) depends on what action nature
takes.

The states of nature are mutually exclusive and collectively exhaustive with respect to any
decision problem. The states of nature may be described numerically such as, demand of 100
units of an item or non-numerically such as, employees strike, etc.

Payoff A numerical value resulting from each possible combination of alternatives and states of nature is called a payoff. The payoff values are always conditional values because of the unknown states of nature.

A tabular arrangement of these conditional outcome (payoff) values is known as a payoff matrix, as shown in table 5.1.

Table 5.1 General form of payoff matrix

The decision-making process involves the following steps:


i. Identify and define the problem
ii. Listing of all possible future events, called states of nature, which can occur in the context
of the decision problem.
iii. Identification of all the courses of action (alternatives or decision choices) which are
available to the decision-maker
iv. Expressing the payoffs (Pij) resulting from each pair of course of action and state of
nature. These payoffs are normally expressed in a monetary value.
v. Apply an appropriate mathematical decision analysis model to select best course of
action from the given list.



Example 5.1 A firm manufactures three types of products. The fixed and variable costs are
given below:
Fixed Cost (Rs) Variable Cost per Unit (Rs)
Product A 25,000 12
Product B 35,000 9
Product C 53,000 7
The likely demand (units) of the products is given below:
Poor demand 3,000
Moderate demand 7,000
High demand 11,000
If the sale price of each type of product is Rs 25, then prepare the payoff matrix.

Solution: Let D1, D2 and D3 denote poor, moderate and high demand, respectively. Then the payoff is given by:
Payoff = Sales revenue - Cost
The calculations of the payoff (in thousands) for each pair of product type (course of action) and demand level (state of nature) are shown below:

D1A = 3 x 25 - 25 - 3 x 12 = 14        D2A = 7 x 25 - 25 - 7 x 12 = 66
D1B = 3 x 25 - 35 - 3 x 9 = 13         D2B = 7 x 25 - 35 - 7 x 9 = 77
D1C = 3 x 25 - 53 - 3 x 7 = 1          D2C = 7 x 25 - 53 - 7 x 7 = 73

D3A = 11 x 25 - 25 - 11 x 12 = 118
D3B = 11 x 25 - 35 - 11 x 9 = 141
D3C = 11 x 25 - 53 - 11 x 7 = 145

The payoff values are shown in table 5.2.


Table 5.2 payoff values
Product type Alternative Demand
D1 D2 D3
A 14 66 118
B 13 77 141
C 1 73 145
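A small sketch (an illustration only, not from the notes) that reproduces the payoff table of Example 5.1; all monetary figures are in thousands of Rs, as in the calculations above.

    # payoff = demand x price - fixed cost - demand x variable cost
    fixed = {"A": 25, "B": 35, "C": 53}          # fixed cost (Rs '000)
    variable = {"A": 12, "B": 9, "C": 7}         # variable cost per unit (Rs)
    demand = {"D1": 3, "D2": 7, "D3": 11}        # demand in '000 units
    price = 25                                   # selling price per unit (Rs)

    for product in ("A", "B", "C"):
        row = [demand[d] * price - fixed[product] - demand[d] * variable[product]
               for d in ("D1", "D2", "D3")]
        print(product, row)    # expected: A [14, 66, 118], B [13, 77, 141], C [1, 73, 145]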



5.2 Types of Decision Making Environments
Decisions are made based upon the data available about the occurrence of events as well as the decision situation (or environment). There are four types of decision-making environments: certainty, uncertainty, risk and conflict.

5.2.1 Type 1 Decision Making under Certainty


In this case the decision-maker has complete knowledge (perfect information) of the consequence of every decision choice (course of action or alternative) with certainty. Obviously, he will select the alternative that yields the largest return (payoff) for the known future (state of nature). For example, the decision to purchase either N.S.C. (National Saving Certificates), Indira Vikas Patra, or a deposit in N.S.S. (National Saving Scheme) is one in which it is reasonable to assume complete information about the future, because there is no doubt that the Indian government will pay the interest when it is due and the principal at maturity. In this decision model, only one possible state of nature (future) exists.

5.2.2 Type 2 Decision Making under Risk


In this case the decision-maker has less than complete knowledge with certainty of the
consequence of every decision choice (course of action). This means there is more than one
state of nature (future) and for which he makes an assumption of the probability with which
each state of nature will occur. For example, probability of getting head in the toss of a coin
is 0.5.

5.2.3 Type 3 Decision Making under Uncertainty


In this case the decision-maker is unable to specify the probabilities with which the various
states of nature (futures) will occur. Thus, decisions under uncertainty are taken with even
less information than decisions under risk. For example, the probability that Mr X will be the
prime minister of the country 15 years from now is not known.

In the absence of knowledge about the probability of any state of nature (future) occurring,
the decision- maker must arrive at a decision only on the actual conditional payoff values,
together with a policy (attitude). There are several different criteria of decision-making in this
situation. The criteria that we will discuss in this section include:
i) Maximax or Minimin



ii) Maximin or Minimax
iii) Equally likely
iv) Criterion of realism
v) Criterion of regret

1. Maximax or Minimin
In this criterion the decision-maker ensures that he/she should not miss the opportunity to
achieve the largest possible profit (maximax) or lowest possible cost (minimin). Thus, he/she
selects the alternative (decision choice or course of action) that represents the maximum of
the maxima (or minimum of the minima) payoffs (consequences or outcomes).
 The working method is summarized as follows:
a) Locate the maximum (or minimum) payoff values corresponding to
each alternative (or course of action), then .
b) Select an alternative with best anticipated payoff value (maximum
for profit and minimum for cost).

2. Maximin or Minimax
In this criterion the decision-maker ensures that he/she would earn no less (or pay no more)
than some specified amount. Thus, he/she selects the alternative that represents the maximum
of the minima (or minimum of the maxima in case of loss) payoff in case of profits.
 The working method is summarized as follows:
a) Locate the minimum (or maximum in case of profit) payoff value in case of loss' (or
cost) data corresponding to each alternative, then
b) Select an alternative with the best anticipated payoff value (maximum for profit and
minimum for loss or cost).

3. Equally likely Decision (Laplace)


Since the probabilities of states of nature are not known, it is assumed that all states of nature
will occur with equal probability, i.e. each state of nature is assigned an equal probability. As
states of nature are mutually exclusive and collectively exhaustive, so the probability of each
of these must be 1/(number of states of nature).



 The working method is summarized as follows:
a) Assign equal probability value to each state of nature by using the formula:
1 ÷ (number of states of nature).
b) Compute the expected (or average) payoff for each alternative (course of action) by
adding all the payoffs and dividing by the number of possible states of nature or by
applying the formula: (Probability of state of nature j) x (Payoff value for the
combination of alternative i and state of nature j.)
c) Select best expected payoff value (maximum for profit & minimum for cost).

4. Criterion of Realism (Hurwicz)


• This criterion suggests that a rational decision-maker should be neither completely optimistic nor pessimistic and, therefore, must display a mixture of both.
• Hurwicz, who suggested this criterion, introduced the idea of a coefficient of optimism (denoted by α) to measure the decision-maker's degree of optimism.
• This coefficient lies between 0 and 1, where 0 represents a completely pessimistic attitude about the future and 1 a completely optimistic attitude. Thus, if α is the coefficient of optimism, then (1 - α) represents the coefficient of pessimism.
• The Hurwicz approach suggests that the decision maker must select an alternative that maximizes H (criterion of realism) = α (maximum in column) + (1 - α) (minimum in column).

The working method is summarized as follows:


a) Decide the coefficient of optimism α (alpha) and then the coefficient of pessimism (1 - α).
b) For each alternative select the largest and lowest payoff values and multiply these by α and (1 - α), respectively. Then calculate the weighted average H using the above formula.
c) Select an alternative with best anticipated weighted average payoff value.

5. Criterion of Regret (Savage Criterion)

This criterion is also known as opportunity loss decision criterion or minimax regret decision
criterion because decision-maker feels regret after adopting a wrong course of action (or



alternative) resulting in an opportunity loss of payoff. Thus, he/she always intends to
minimize this regret.

 The working method is summarized as follows:


a) From the given payoff matrix, develop an opportunity-loss (or regret) matrix as follows:
i) Find the best payoff corresponding to each state of nature, and
ii) Subtract all other entries (payoff values) in that row from this value.
b) For each course of action (strategy or alternative) identify the worst or maximum regret
value. Record this number in a new row.
c) Select the course of action (alternative) with the smallest anticipated opportunity-loss
value.

Payoff Tables
• Payoff table analysis can be applied when:
  - there is a finite set of discrete decision alternatives, and
  - the outcome of a decision is a function of a single future event.
• In a payoff table:
  - the rows correspond to the possible decision alternatives,
  - the columns correspond to the possible future events,
  - events (states of nature) are mutually exclusive and collectively exhaustive, and
  - the table entries are the payoffs.

Decision Analysis -Components of Decision Making


■ A state of nature is an actual event that may occur in the future.
■ A payoff table is a means of organizing a decision situation, presenting the payoffs
from different decisions given the various states of nature.

Table 5.2 Payoff Table



Table 5.3 Decision Analysis: Decision Making without Probabilities

The decision-making criteria to compute are: maximax, maximin, minimax, minimax regret, Hurwicz, and equal likelihood.

Decision Making without Probabilities - Maximax Criterion

In the maximax criterion the decision maker selects the decision that will result in the
maximum of maximum payoffs; an optimistic criterion.

Table 5.4 Payoff table Illustrating a Maximax Decision

Decision Making without Probabilities - Maximin Criterion

In the maximin criterion the decision maker selects the decision that will reflect the maximum
of the minimum payoffs; a pessimistic criterion.



Table 5.5 Payoff table illustrating a Maximin Decision

Decision Making without Probabilities - Minimax Regret Criterion

Regret is the difference between the payoff from the best decision and all other decision
payoffs. The decision maker attempts to avoid regret by selecting the decision alternative that
minimizes the maximum regret.

Table 5.6 Regret table illustrating the Minimax Regret Decision

Decision Making without Probabilities - Hurwicz Criterion

• The Hurwicz criterion is a compromise between the maximax and maximin criteria.
• A coefficient of optimism, α, is a measure of the decision maker's optimism.
• The Hurwicz criterion multiplies the best payoff by α and the worst payoff by (1 - α) for each decision, and the best result is selected.



Decision values (with α = 0.4):
Apartment building $50,000(.4) + 30,000(.6) = 38,000
Office building $100,000(.4) - 40,000(.6) = 16,000
Warehouse $30,000(.4) + 10,000(.6) = 18,000

Decision Making without Probabilities -Equal Likelihood Criterion

The equal likelihood (or Laplace) criterion multiplies the decision payoff for each state of
nature by an equal weight, thus assuming that the states of nature are equally likely to occur.

Decision Values
Apartment building $50,000(.5) + 30,000(.5) = 40,000
Office building $100,000(.5) - 40,000(.5) = 30,000
Warehouse $30,000(.5) + 10,000(.5) = 20,000
Decision Making without Probabilities -Summary of Criteria Results

• A dominant decision is one that has a better payoff than another decision under each state of nature.
• The appropriate criterion is dependent on the "risk" personality and philosophy of the decision maker.

Criterion Decision (Purchase)


Maximax Office building
Maximin Apartment building
Minimax regret Apartment building
Hurwicz Apartment building
Equal likelihood Apartment building
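The summary above can be recomputed with the short sketch below (an illustration only, not from the notes), using the payoffs of the real-estate example and the Hurwicz coefficient α = 0.4 used earlier:

    # States of nature: (good economic conditions, poor economic conditions)
    payoff = {"Apartment building": (50000, 30000),
              "Office building":   (100000, -40000),
              "Warehouse":         (30000, 10000)}
    alpha = 0.4                                    # Hurwicz coefficient of optimism

    best_good = max(p[0] for p in payoff.values())
    best_poor = max(p[1] for p in payoff.values())

    print("Maximax         :", max(payoff, key=lambda d: max(payoff[d])))
    print("Maximin         :", max(payoff, key=lambda d: min(payoff[d])))
    print("Minimax regret  :",
          min(payoff, key=lambda d: max(best_good - payoff[d][0],
                                        best_poor - payoff[d][1])))
    print("Hurwicz         :", max(payoff, key=lambda d: alpha * max(payoff[d])
                                                         + (1 - alpha) * min(payoff[d])))
    print("Equal likelihood:", max(payoff, key=lambda d: sum(payoff[d]) / 2))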

Decision Making with Probabilities -Expected Value

• Expected value is computed by multiplying each decision outcome under each state of nature by the probability of its occurrence.



EV (Apartment) = $50,000(.6) + 30,000(.4) = 42,000
EV (Office) = $100,000(.6) - 40,000(.4) = 44,000
EV (Warehouse) = $30,000(.6) + 10,000(.4) = 22,000

Decision Making with Probabilities -Expected Opportunity Loss

• The expected opportunity loss is the expected value of the regret for each decision.
• The expected value and expected opportunity loss criteria result in the same decision.

EOL(Apartment) = $50,000(.6) + 0(.4) = 30,000


EOL(Office) = $0(.6) + 70,000(.4) = 28,000 selected
EOL(Warehouse) = $70,000(.6) + 20,000(.4) = 50,000

Decision Making with Probabilities -Decision Trees


A decision tree is a diagram consisting of decision nodes (represented as squares), probability
nodes (circles), and decision alternatives (branches).



Table 5.7 Payoff Table for Real Estate Investment Example

Figure 5.2 Decision Tree for Real Estate Investment Example

• The expected value is computed at each probability node:


EV(node 2) = .60($50,000) + .40(30,000) = $42,000
EV(node 3) = .60($100,000) + .40(-40,000) = $44,000
EV(node 4) = .60($30,000) + .40(10,000) = $22,000

• Branches with the greatest expected value are selected.



Figure 5.3 Decision Tree with Expected Value at Probability Nodes

Decision Making with Probabilities -Sequential Decision Trees

• A sequential decision tree is used to illustrate a situation requiring a series of decisions.
• It is used where a payoff table, limited to a single decision, cannot be used.
• The real estate investment example is modified to encompass a ten-year period in which several decisions must be made.

5.3 Decision Analysis with Additional Information - Bayesian Analysis


Bayes’ Theorem and Posterior Probabilities

Knowledge of sample (survey) information can be used to revise the probability estimates
for the states of nature. Prior to obtaining this information, the probability estimates for the
states of nature are called prior probabilities. With knowledge of conditional probabilities for
the outcomes or indicators of the sample or survey information, these prior probabilities can
be revised by employing Bayes' Theorem. The outcomes of this analysis are called posterior
probabilities or branch probabilities for decision trees.



Bayes’ Theorem provides a procedure to calculate these probabilities

• Bayesian analysis uses additional information to alter the marginal probability of the occurrence of an event.
• A conditional probability is the probability that an event will occur given that another event has already occurred.
• An economic analyst provides additional information for the real estate investment decision, forming conditional probabilities:
g = good economic conditions
p = poor economic conditions
P = positive economic report
N = negative economic report
P(P|g) = 0.80    P(N|g) = 0.20
P(P|p) = 0.10    P(N|p) = 0.90

• A posterior probability is the altered marginal probability of an event based on additional information.
• Prior probabilities for good or poor economic conditions in the real estate decision:
P(g) = 0.60; P(p) = 0.40
• Posterior probabilities by Bayes' rule:
P(g|P) = P(P|g)P(g)/[P(P|g)P(g) + P(P|p)P(p)]
       = (0.80)(0.60)/[(0.80)(0.60) + (0.10)(0.40)] = 0.923
• Posterior (revised) probabilities for the decision:
P(g|N) = 0.250,  P(p|P) = 0.077,  P(p|N) = 0.750
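The Bayes' rule revision above can be sketched in a few lines (an illustration only, not from the notes):

    def posterior(prior_g, prior_p, p_report_given_g, p_report_given_p):
        """P(g | report) for a two-state problem (good / poor conditions)."""
        num = p_report_given_g * prior_g
        return num / (num + p_report_given_p * prior_p)

    p_g_P = posterior(0.60, 0.40, 0.80, 0.10)   # positive economic report
    p_g_N = posterior(0.60, 0.40, 0.20, 0.90)   # negative economic report
    print(round(p_g_P, 3), round(p_g_N, 3))     # expected: 0.923 and 0.25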

Decision Analysis with Additional Information-Decision Trees with Posterior Probabilities


Decision tree with posterior probabilities differ from earlier versions in that:
■ Two new branches at beginning of tree represent report outcomes.
■ Probabilities of each state of nature are posterior probabilities from Bayes’ rule.

Figure 5.4 Decision Tree with Posterior Probabilities

EV (apartment building) = $50,000(.923) + 30,000(.077) = $48,460


EV (strategy) = $89,220(.52) + 35,000(.48) = $63,194



Figure 5.7 Decision Tree Analysis

Table 5.13 Payoff table for Auto Insurance Example

Expected Cost (insurance) = 0.992($500) + 0.008(500) = $500


Expected Cost (no insurance) = 0.992($0) + 0.008(10,000) = $80

The expected-cost decision is not to purchase insurance, yet people almost always do purchase insurance.
• Utility is a measure of personal satisfaction derived from money.
• Utiles are units of subjective measures of utility.
• Risk averters forgo a high expected value to avoid a low-probability disaster.
• Risk takers take a chance on a bonanza from a very low-probability event in lieu of a sure thing.



Decision Analysis - Example Problem Solution

Decision States of Nature


Good foreign Poor foreign
competitive conditions competitive conditions

Expand $ 800,000 $ 500,000


Maintain Status quo $ 1,300,000 $ -150,000
Sell now $ 320,000 $ 320,000

a. Determine the best decision without probabilities using the 5 criteria of decision
making with uncertainty.
b. Determine best decision with probabilities assuming 0.70 probabilities of good
conditions, 0.30 of poor conditions. Use expected value and expected opportunity loss
criteria.
c. Compute expected value of perfect information.
d. Develop a decision tree with expected value at the nodes.
e. Given the following: P(P|g) = 0.70, P(N|g) = 0.30, P(P|p) = 0.20, P(N|p) = 0.80, determine the posterior probabilities using Bayes' rule.
f. Perform a decision tree analysis using the posterior probability obtained in part e.

Step 1 (part a): Determine decisions without probabilities.


Maximax Decision: Maintain status quo
Decisions Maximum Payoffs
Expand $800,000
Status quo 1,300,000 (maximum)
Sell 320,000
Maximin Decision: Expand
Decisions Minimum Payoffs
Expand $500,000 (maximum)
Status quo -150,000
Sell 320,000



Minimax Regret Decision: Expand
Decisions Maximum Regrets
Expand $500,000 (minimum)
Status quo 650,000
Sell 980,000
Hurwicz ( = 0.3) Decision: Expand
Expand $800,000(0.3) + 500,000(0.7) = $590,000
Status quo $1,300,000(0.3) - 150,000(0.7) = $285,000
Sell $320,000(.3) + 320,000(0.7) = $320,000
Equal Likelihood Decision: Expand
Expand $800,000(0.5) + 500,000(0.5) = $650,000
Status quo $1,300,000(0.5) - 150,000(0.5) = $575,000
Sell $320,000(0.5) + 320,000(0.5) = $320,000

Step 2 (part b): Determine Decisions with EV and EOL.


Expected value decision: Maintain status quo
Expand $800,000(0.7) + 500,000(0.3) = $710,000
Status quo $1,300,000(0.7) - 150,000(0.3) = $865,000
Sell $320,000(0.7) + 320,000(0.3) = $320,000
Expected opportunity loss decision: Maintain status quo
Expand $500,000(0.7) + 0(0.3) = $350,000
Status quo 0(0.7) + 650,000(0.3) = $195,000
Sell $980,000(0.7) + 180,000(0.3) = $740,000
Step 3 (part c): Compute EVPI.
EV given perfect information = 1,300,000(0.7) + 500,000(0.3)
= $1,060,000
EV without perfect information = $1,300,000(0.7) - 150,000(0.3)
= $865,000
EVPI = $1,060, 000 - 865,000 = $195,000
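Parts (b) and (c) can be checked with the short sketch below (an illustration only, not from the notes):

    p_good, p_poor = 0.70, 0.30
    payoff = {"Expand": (800_000, 500_000),
              "Status quo": (1_300_000, -150_000),
              "Sell": (320_000, 320_000)}

    ev = {d: p_good * g + p_poor * p for d, (g, p) in payoff.items()}
    ev_perfect = (p_good * max(g for g, _ in payoff.values())
                  + p_poor * max(p for _, p in payoff.values()))
    evpi = ev_perfect - max(ev.values())

    print(ev)        # expected: 710,000 / 865,000 / 320,000
    print(evpi)      # expected: 195,000 (which equals the minimum EOL above)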

Step 4 (part d): Develop a decision tree.



Step 5 (part e): Determine posterior probabilities.
P(g|P) = P(P|g)P(g)/[P(P|g)P(g) + P(P|p)P(p)]
       = (0.70)(0.70)/[(0.70)(0.70) + (0.20)(0.30)] = 0.891
P(p|P) = 0.109
P(g|N) = P(N|g)P(g)/[P(N|g)P(g) + P(N|p)P(p)]
       = (0.30)(0.70)/[(0.30)(0.70) + (0.80)(0.30)] = 0.467
P(p|N) = 0.533

Step 6 (part f): Decision tree analysis.



CHAPTER 6
GAME PROGRAMMING

6.1 Introduction
In business and economics literature, the term 'game' refers to the general situation of conflict
and competition in which two or more competitors (or participants) are involved in decision-
making activities in anticipation of certain outcomes over a period of time. The competitors
are referred as players. A player may be an individual, a group of individuals, or an
organization. Few examples of competitive and conflicting decision environment involving
the interaction between two or more competitors where techniques of theory of games may be
used to resolve them are: (i) pricing of products, where a firm's ultimate sales are determined
not only by the price levels it selects but also by the prices its competitors set, (ii) various TV
networks have found that program success is largely dependent on what the competitors
presents in the same time slot; the outcomes of one networks programming decisions have,
therefore, been increasingly influenced by the corresponding decisions made by other
networks, (iii) success of a business tax strategy depends greatly on the position taken by the
internal revenue service regarding the expenses that may be disallowed, and (iv) the success of an advertising/marketing campaign depends largely on the various types of services offered to the customers, etc.

As an area of academic study, theory of games provides a series of mathematical models that
may be quite useful in explaining interactive decision-making concepts. But as a practical
tool, it is limited in scope. However, such models provide an opportunity to a competitor to
evaluate not only his personal alternatives (courses of action), but also the evaluation of the
opponent's (or competitor's) possible choices in order to win the game.

The models in the theory of games can be classified depending upon the following factors:
Number of players: If a game involves only two players (competitors), then it is called a two-
person game. However, if the number of players is more, the game is referred to as n-person
game.



Sum of gains and losses: If in a game sum of the gains to one player is exactly equal to the
sum of losses to another player, so that sum of the gains and losses equals zero, then the game
is said to be a zero-sum game. Otherwise it is said to be non-zero sum game.
Strategy The strategy for a player is the list of all possible actions (moves or courses of
action) that he will take for every payoff (outcome) that might arise. It is assumed that the
rules governing the choices are known in advance to the players. The outcome resulting from
a particular choice is also known to the players in advance and is expressed in terms of
numerical values (e.g. money, per cent of market share or utility). Here it is not necessary that
players have definite information about each other's strategies.

The particular strategy (or complete plan) by which a player optimizes his gains or losses
without knowing the competitor's strategies is called optimal strategy. The expected outcome
per play when players follow their optimal strategy is called the value of the game:
Generally, two types of strategies are employed by players in a game.
a. Pure Strategy It is the decision rule which is always used by the player to select the
particular strategy (course of action). Thus, each player knows in advance of all strategies
out of which he always selects only one particular strategy regardless of the other player's
strategy, and the objective of the players is to maximize gains or minimize losses.
b. Mixed Strategy Courses of action that are to be selected on a particular occasion with
some fixed probability are called mixed strategies. Thus, there is a probabilistic situation
and objective of the players is to maximize expected gains or to minimize expected losses
by making choice among pure strategies with fixed probabilities.

Therefore, based on the number of players, game theory is classified into two types:

i. two-player games, and
ii. multiple-player games.
Again, based on the outcomes of the game, it is classified into:
i. two-person zero-sum games, and
ii. n-person games.

Two-person zero-sum games play a central role in the development of the game theory.



6.2 Two-Persons-Zero-Sum Game

A game with only two players, say player A and player B is called a two-person zero-sum
game, if one player's gain is equal to the loss of other player so that total sum is zero.

Payoff matrix: The payoffs (a quantitative measure of satisfaction a player gets at the end of
the play) in terms of gains or losses, when players select their particular strategies (courses of
action), can be represented in the form of a matrix, called the payoff matrix. Since the game
is zero-sum, the gain of one player is equal to the loss of other and vice-versa. In other words,
one player's payoff table would contain the same amounts in payoff table of other player with
the sign changed. Thus, it is sufficient to construct payoff table only for one of the players.

Suppose player A has m strategies represented by the subscripted letters A1, A2, ..., Am and player B has n strategies represented by the subscripted letters B1, B2, ..., Bn. The numbers m and n need not be equal. The total number of possible outcomes is therefore m x n. Here, it is assumed that each player knows not only his own list of possible courses of action but also that of his opponent. For convenience, it is assumed that player A is always the gainer whereas player B is the loser. Let aij be the payoff which player A gains from player B if player A chooses strategy i and player B chooses strategy j. Then the payoff matrix is as shown in table 6.1.

Table 6.1 Payoff matrix



Figure 6.1 Flow chart of Game theory Approach

By convention, the rows of the payoff matrix denote player A's strategies and the columns denote player B's strategies. Since player A is assumed to be the gainer, he wishes to gain as large a payoff aij as possible, while player B will do his best to reach as small a value of aij as possible. Of course, the gain to player B and the loss to A from that outcome is -aij.



Assumptions of the Game

1. Each player has available to him a finite number of possible strategies (courses of action).
The list may not be the same for each player.
2. Player A attempts to maximize gains and player B minimize losses.
3. The decisions of both players are made individually prior to the play with no
communication between them.
4. The decisions are made simultaneously and also announced simultaneously so that neither
player has an advantage resulting from direct knowledge of the other player's decision.
5. Both the players know not only possible payoffs to themselves but also of each other.

Example:
Consider the following game in which player I has two choices from which to select, and
player II has three alternatives for each choice of player I. The payoff matrix ‘a’ is given
below.
                     Player II
                j1       j2       j3
 Player I  i1    4        1        3
           i2    2        3        4
 a = payoff matrix

In the payoff matrix, the two rows (i1 and i2) represent the two possible strategies that player I
can employ, and the three columns (j1, j2, and j3) represent the three possible strategies that
player II can employ.

The payoff matrix is oriented to player I, meaning that a positive aij is a gain for player I and a loss for player II, and a negative aij is a gain for player II and a loss for player I. For example, if player I uses strategy 2 and player II uses strategy 1, player I receives a21 = 2 units and player II thus loses 2 units.

Clearly, in our example player II always loses; however, the objective is to minimize the
payoff to player I.
• In a game situation it is assumed that the payoff table is known to all players.



• The game is organized in such a way that the player who wants to maximize the gain is on the left, and the player who wants to minimize the gain is on the top.
• The best strategy for each player is the optimal strategy.
• In a pure strategy game, each player adopts a single strategy as an optimal strategy.

6.3 Pure Strategies

It uses Minimax and Maximin principles (Game with saddle point). If the maximin value
equals the minimax value, then the game is said to have a saddle (equilibrium) point and the
corresponding strategies are called optimal strategies. The amount of payoff, i.e. V at an
equilibrium point is known as the value of the game. A game may have more than one saddle
point. A game with no saddle point is solved by strategies with fixed probabilities.

Example 1
Two companies, A and B, sell two products. Company A advertises in Radio (A1), Television
(A2), and Newspaper (A3). Company B in addition to using Radio (B1), Television (B2), and
Newspaper (B3), also uses mails and brochures (B4). Depending on the effectiveness of each
advertising campaign, one company can capture a portion of the market from the others.

The following matrix (on table 6.1) summarizes the percentage of the market captured or lost
by company A.
Table 6.1

                     Company B strategies
                     B1     B2     B3     B4
  Company A   A1      8     -2      9     -3
              A2      6      5      6      8
              A3     -2      4     -9      5

Determine the optimal strategy for each company and the value of the game.

The solution of the game is based on the principle of securing the best of the worst for each
player (the maximin and minimax principles).

• The application is similar to the criteria used in decision theory.
• The decision maker (Company A) is pessimistic and thus selects the strategy that
  maximizes gains from among the minimum possible outcomes (maximin).
• At the same time, the other player (Company B) attempts to minimize losses from
  among the maximum anticipated losses (minimax).

Solution

              B1     B2     B3     B4     Row min
      A1       8     -2      9     -3       -3
      A2       6      5      6      8        5   (maximin)
      A3      -2      4     -9      5       -9
  Col max      8      5      9      8
                     (minimax)

Analysis of the Result


If company A selects strategy A1, then regardless of what B does, the worst that can happen
is that A loses 3% of the market share to B. This is represented by the minimum value of the
entries in row 1. Similarly, strategy A2's worst outcome is for A to capture 5% of the
market share from B, and strategy A3's worst outcome is for A to lose 9% to B. These
results are listed in the "Row min" column. To achieve the best of the worst, Company A
chooses strategy A2 because it corresponds to the maximum value, i.e. the largest element in
the "Row min" column.

Next, consider Company B’s strategy


Because the given payoff matrix is for A, B's best-of-the-worst criterion requires determining
the minimax value. The result is that company B should select strategy B2. The optimal
solution of the game calls for selecting strategies A2 and B2, which means that both
companies should use television advertising. The payoff will be in favor of company A,
because its market share will increase by 5%. In this case, we say that the value of the game
is 5%, and that A and B are using a saddle-point solution.



The saddle-point solution precludes the selection of a better strategy by either company. If B
moves to another strategy (B1, B3, or B4), Company A can stay with strategy A2, which
ensures that B will lose a larger share of the market (6% or 8%). By the same token, A does
not want to use a different strategy, because if A moves to strategy A3, B can move to B3 and
realize a 9% increase in market share. A similar conclusion holds if A moves to A1, as B
can then move to B4 and realize a 3% increase in market share. The optimal saddle-point
solution of a game need not be a pure strategy. Instead, the solution may require mixing two
or more strategies randomly.

6.4 Mixed Strategy


An "equilibrium decision point", that is, a "saddle point" (also known as a minimax-maximin
point), represents a decision by two players upon which neither can improve by
unilaterally departing from it.
• When there is no saddle point, the players must choose their strategies randomly.
• This is the idea behind a mixed strategy.
• A mixed strategy for a player is defined as a probability distribution over the set of
  his pure strategies.

6.4.1 Rule of Dominance

The rules of dominance are used to reduce the size of the payoff matrix. These rules help in
deleting certain rows and/or columns of the payoff matrix which are inferior (less attractive)
to at least one of the remaining rows and/or columns (strategies) in terms of payoffs to both
the players. Rows and/or columns once deleted will never be used for determining the
optimum strategy for both the players.

The rules of dominance are especially used for the evaluation of two-person zero-sum games
without saddle (equilibrium) point. Certain dominance principles are stated as follows:
1. For player B, who is assumed to be the loser, if each element in a column, say Cr, is
greater than or equal to the corresponding element in another column, say Cs, in the
payoff matrix, then column Cr is dominated by column Cs and, therefore, column
Cr can be deleted from the payoff matrix. In other words, player B will lose more by
choosing the strategy for column Cr than by choosing the strategy for column Cs; therefore, he
will never use the strategy corresponding to column Cr.



2. For player A, who is assumed to be the gainer, if each element in a row, say Rr, is less
than or equal to the corresponding element in another row, say Rs, in the payoff matrix,
then row Rr is dominated by row Rs and, therefore, row Rr can be deleted from the
payoff matrix. In other words, player A will never use the strategy corresponding to row
Rr because he will gain less by choosing such a strategy.
3. A strategy, say k, can also be dominated if it is inferior (less attractive) to an average of
two or more other pure strategies. In this case, if the domination is strict, then strategy k
can be deleted. If strategy k dominates the convex linear combination of some other
pure strategies, then one of the pure strategies involved in the combination may be
deleted. The domination is decided as per rules 1 and 2 above.

Remark: The rules (principles) of dominance discussed above apply when the payoff matrix
is a profit matrix for player A and a loss matrix for player B. Otherwise the
principle gets reversed.

Consider the following game in which player I has two choices from which to select, and
player II has three alternatives for each choice of player I. The payoff matrix ‘a’ is given
below:
Table 6.2

                   Player II strategies
                     j1     j2     j3
  Player I    i1      4      1      3
              i2      3      2      4


Dominance occurs when all the payoffs for one strategy are better than the corresponding
payoffs for another strategy. In table 6.2 the payoffs 1 and 2 of strategy j2 are, for player II,
both lower than the corresponding payoffs of 4 and 3 for strategy j1 and the corresponding
payoffs of 3 and 4 for strategy j3. Since strategy j2 dominates strategies j1 and j3, these two
latter strategies can be eliminated from consideration.

Example 2: Mixed Strategies


Table 6.3: Payoff tables for Camera Companies

Company I Company II strategies


Strategies
A B C

1 9 7 2

2 11 8 4

3 4 1 7

The values in the table are the percentage increase or decrease in market share for company I.
Determine the optimal strategies for companies I and II, and also find the value of the game.

Solution
The first step is to check the payoff table for any dominant strategies. Doing so, we find that
strategy 2 dominates strategy 1, and strategy B dominates strategy A. Thus, strategies 1 and A
can be eliminated from the payoff table.

Table 6.4: Payoff table with strategies 1 and A eliminated (maximin and minimax criteria shown)



In a mixed strategy game, players switch decisions in response to the decision of the other
player and eventually return to the initial decisions, resulting in a closed loop.

Table 6.5: Payoff table with closed loop

6.5 Methods of Solving Mixed strategies


There are three methods for solving problems with mixed strategy: Algebraic Method,
Graphical Method, and Linear Programming Method

6.5.1 Algebraic Method


This method is based on expected gains and losses: each player determines a plan of (mixed)
strategies so that the expected gain of one player equals the expected loss of the other.

Consider the previous example:

Company I Company II strategies


Strategies
B C

2 8 4

3 1 7

Let p = probability that Company I uses strategy 2, and
(1 - p) = probability that Company I uses strategy 3.

• If Company II uses strategy B, the expected gain for Company I is:
  8p + (1 - p)(1) = 1 + 7p ---------------------------------(1)
• If Company II uses strategy C, the expected gain for Company I is:
  4p + (1 - p)(7) = 7 - 3p ----------------------------------(2)



Company I chooses p so that its expected gain is the same whichever strategy Company II
uses; therefore, equate equation (1) with equation (2):
7p + 1 = 7 - 3p
p = 6/10 = 0.6, i.e. p = 60% and 1 - p = 0.4 = 40%.
The value of the game is then V = 1 + 7(0.6) = 5.2.
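For readers who want to check this kind of calculation quickly, the following is a minimal Python sketch (not part of the original notes) of the expected gain-and-loss method for a 2 x 2 game without a saddle point; the function name solve_2x2 and the variable names are illustrative only.

# Algebraic (expected gain/loss) solution of a 2 x 2 zero-sum game.
# The payoff matrix is written from the row player's (Company I's) point of view.
def solve_2x2(a):
    (a11, a12), (a21, a22) = a
    denom = (a11 + a22) - (a12 + a21)        # assumes no saddle point, so denom != 0
    p = (a22 - a21) / denom                  # probability of the row player's first strategy
    q = (a22 - a12) / denom                  # probability of the column player's first strategy
    value = (a11 * a22 - a12 * a21) / denom  # value of the game
    return p, q, value

# Reduced camera-company game: rows = strategies 2 and 3, columns = B and C.
p, q, v = solve_2x2([[8, 4], [1, 7]])
print(p, 1 - p)   # 0.6 and 0.4, as derived above
print(q, 1 - q)   # Company II's probabilities for B and C
print(v)          # value of the game (5.2 for this matrix)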

6.5.2 Graphical Method

The graphical method is useful for games where the payoff matrix is of size 2 x n
or m x 2, i.e. games with mixed strategies in which one of the players in the two-person
zero-sum game has only two undominated pure strategies. Optimal strategies for both players
assign non-zero probabilities to the same number of pure strategies.

Example 3: Use graphical method in solving the value of the game.

Player A Player B
B1 B2 B3 B4
A1 2 2 3 -2
A2 4 3 2 6

Solution
The game does not have a saddle point. If the probabilities of player A playing A1 and A2 are
denoted by p1 and p2, respectively, where p2 = 1 - p1, then the expected payoff (gain) to
player A for each of B's pure strategies is:

  B's pure strategy     A's expected payoff
  B1                    2p1 + 4p2
  B2                    2p1 + 3p2
  B3                    3p1 + 2p2
  B4                   -2p1 + 6p2

These four expected-payoff lines can be plotted against p1 to solve the game: player A chooses
the value of p1 at which the lower envelope of the four lines is highest.
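Since the graph itself is not reproduced here, the following Python sketch evaluates the four expected-payoff lines numerically and picks the value of p1 that maximizes the lower envelope, which is what the graphical method does by inspection; numpy and the variable names are my own choices, not part of the original notes.

# Numeric version of the graphical method for a 2 x n game: player A chooses p1
# to maximize the minimum (lower envelope) of B's expected-payoff lines.
import numpy as np

payoff = np.array([[2, 2, 3, -2],     # row A1
                   [4, 3, 2,  6]])    # row A2

p1 = np.linspace(0, 1, 10001)                                     # probability of playing A1
lines = payoff[0][:, None] * p1 + payoff[1][:, None] * (1 - p1)   # one line per B strategy
lower_envelope = lines.min(axis=0)                                # A's worst case at each p1
best = lower_envelope.argmax()                                    # maximin point
print("p1 =", round(p1[best], 3), " value of the game =", round(lower_envelope[best], 3))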



6.5.3 Linear programming (LP) Method

The major advantage of using the linear programming technique is that it can solve mixed-strategy
games with payoff matrices of larger dimension. To illustrate the transformation of a game
problem into a linear programming problem, consider a payoff matrix of size m x n. Let aij be
the element in the ith row and jth column of the game payoff matrix, and let pi be the
probabilities of the m strategies (i = 1, 2, …, m) for player A. Then the expected gain of player
A, for each of player B's strategies, is given as follows.

For an m x n payoff matrix, let

V = value of the game,
pi = probabilities of player A's m strategies (i = 1, 2, 3, …, m),
qj = probabilities of player B's n strategies (j = 1, 2, 3, …, n),
aij = the element in the ith row and jth column of the game payoff matrix.

Then the expected gain of player A when B plays his jth pure strategy is

  Σ (i = 1 to m) aij pi ,   for j = 1, 2, 3, …, n

To obtain values of probability Pi, the values of the game to player A for all strategies by
player B must be at least equal to V. Thus to maximize the minimum expected gains, it is
necessary that:



a11P1 + a21P2+…+ am1Pm ≥ V
a12P1 + a22P2+…+ am2Pm ≥ V
. . .
. . .
a1nP1 + a2nP2+…+ amnPm ≥ V
where P1 + P2 + … + Pm = 1 and Pi ≥ 0.

Dividing both sides of the m inequalities and of the equation by V (V > 0) gives:


a11P1/V+ a21P2/V+…+ am1Pm/V ≥ 1
a12P1/V + a22P2/V+…+ am2Pm/V ≥ 1
. . .
. . .
a1nP1/V + a2nP2/V+…+ amnPm/V ≥ 1
P1/V +p2/V +…+ Pm/V= 1/V
Since the objective of player A is to maximize the value of the game, V which is equivalent
to minimizing 1/V, the resulting linear programming problem can be stated as follows:

Let xi = Pi/V (≥ 0). Then the problem for player A becomes:

Minimize Zp (= 1/V) = x1 + x2 + … + xm
subject to the constraints
a11x1 + a21x2 + … + am1xm ≥ 1
a12x1 + a22x2 + … + am2xm ≥ 1
.       .        .
.       .        .
a1nx1 + a2nx2 + … + amnxm ≥ 1
and x1, x2, …, xm ≥ 0,
where xi = Pi/V ≥ 0, i = 1, 2, …, m.

Similarly, player B has the corresponding problem with the inequalities reversed, since B wants
to minimize the expected loss. Because minimizing V is equivalent to maximizing 1/V,
the resulting linear programming problem can be stated as:

Maximize Zq (= 1/V) = y1 + y2 + … + yn
subject to the constraints
a11y1 + a12y2 + … + a1nyn ≤ 1
a21y1 + a22y2 + … + a2nyn ≤ 1
.       .        .
.       .        .
am1y1 + am2y2 + … + amnyn ≤ 1
and y1, y2, …, yn ≥ 0,
where yj = qj/V ≥ 0, j = 1, 2, …, n.
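As an illustration of the transformation above, the sketch below hands player A's LP to a general-purpose solver. It assumes scipy is available and uses the reduced 2 x 2 camera-company matrix from the algebraic method above (all entries positive, so V > 0); for matrices containing non-positive entries a constant would first have to be added to every element.

# Player A's LP:  minimize x1 + ... + xm  subject to  A^T x >= 1,  x >= 0,
# where xi = Pi / V.  Afterwards V = 1 / sum(x) and Pi = xi * V.
import numpy as np
from scipy.optimize import linprog

A = np.array([[8, 4],     # payoff matrix of the reduced camera-company game
              [1, 7]])
m, n = A.shape

c = np.ones(m)                                                   # minimize x1 + x2
res = linprog(c, A_ub=-A.T, b_ub=-np.ones(n), bounds=[(0, None)] * m)

V = 1.0 / res.x.sum()        # value of the game
P = res.x * V                # player A's optimal mixed strategy
print("value of the game:", round(V, 3))       # agrees with the algebraic result (5.2)
print("player A's strategy:", np.round(P, 3))  # (0.6, 0.4)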

Figure 6.2 Flow chart of game theory approach

Examples of competitive situations that can be organized into two-person, zero-sum games:
• a union negotiating a new contract with management;
• two armies participating in a war game;
• two politicians in conflict over a proposed piece of legislation, one attempting to secure
  its passage and the other attempting to defeat it;
• a firm trying to increase its market share with a new product and a competitor
  attempting to minimize the firm's gains; and
• a contractor negotiating with a government agent for a contract on a project, etc.



Problem -1
A company's management and the labour union are negotiating a new three-year settlement.
Each side has 4 strategies:
I - Hard and aggressive bargaining        II - Reasoning and logical approach
III - Legalistic strategy                 IV - Conciliatory approach
The cost to the company is given below for every pair of strategy choices.
What strategy will the two sides adopt? Also determine the value of the game.

                          Company strategies
  Union strategies       I      II     III     IV
         I              20      15     12      35
         II             25      14      8      10
         III            40       2     10       5
         IV             -5       4     11       0

Problem – 2
In a game of matching coins between two players, suppose A wins one unit of value when there
are two heads, wins nothing when there are two tails, and loses ½ unit of value when there is
one head and one tail. Determine the payoff matrix, the best strategy for each player and the
value of the game to A.

Problem – 3
For the following payoff matrix, transform the zero-sum game into an equivalent linear
programming problem and solve it by using simplex method.

Player B

Player A B1 B2 B3

A1 1 -1 3

A2 3 5 -3
A3 6 2 -2



CHAPTER 7
MARKOV ANALYSIS

7.1 Introduction
Markov chain models, developed by the Russian mathematician Andrei A. Markov in 1905,
are a particular class of probabilistic models known as stochastic processes. In a general
stochastic process the current state of a system may depend on all of its previous states,
whereas in a Markov process (a sequence of events) the current state depends only on the
immediately preceding state. For example, consider the following few systems.

i. Market place for a product and its competitive brands.


ii. Machines used to manufacture a product.
iii. Billing, credit and collection procedures involved in converting accounts receivable from
the product's sales into cash.
iv. Area of specialization chosen by a management student.

In all these examples, the respective system may be in one of several possible states. These
states describe all possible conditions of a system (or process):
i. In marketing example, states may be expressed in terms of the brand that a customer is
presently using.
ii. In production example, a machine can be in one of two states: Working or not at any
point in time.
iii. In the financial transaction example, an account receivable can fall into one of the states:
cash sale, credit sale or uncollectible (bad debt).
iv. A student can specialize in only a few management functional areas, not in all areas at
the same time.

The movement of these systems from one state to another is a Markov process because
outcomes are purely random and the probabilities of various outcomes depend only on the
existing state.

To recap, a Markov chain is a stochastic process in which the current state of a system
depends only on its immediately preceding state. Two ideas underlie the model:
• the probability of mutually dependent events, and
• the concept of chained events.

Characteristics of a Markov chain


As mentioned above the movement of a system from one state to another depending upon the
immediately preceding state with constant probability, forms the basis of Markov chain.
However, before a problem can be classified as a Markov chain process, following properties
must be satisfied
i. There are finite numbers of possible states.
ii. States are both collectively exhaustive and mutually exclusive.
iii. The transition probabilities depend only on the current state of the system, i.e. if current
state is known, the conditional probability of the next state is independent of the states
prior to the present state.
iv. The long-run probability of being in a particular state will be constant over time.
v. The transition probabilities of moving to alternative states in the next time period, given
a state in the current time period must sum to 1.0.

Markov chains are classified by their order. The case in which the probability of occurrence of
each state depends only upon the immediately preceding state is said to be a first-order Markov
chain. In a second-order Markov chain, it is assumed that the probability of occurrence in the
forthcoming period depends upon the states in the last two periods. Similarly, in a third-order
Markov chain, it is assumed that the probability of a state in the forthcoming period depends
upon the states in the last three periods.

Application of Markov Chain Analysis


The major applications of Markov analysis are in the following areas:
1) Personnel: determining future manpower requirements of an organization, taking into
consideration retirements, deaths, resignations, etc.
2) Finance: analysing customer accounts receivable behaviour.
3) Production: evaluating alternative maintenance policies, certain classes of
inventory and queuing problems, and inspection and replacement analysis.
4) Marketing: analysing and predicting customers' buying behaviour in terms of
loyalty to a particular product brand, switching patterns to other brands, and the market
share of the company versus its competitors.

7.2 State and Transition Probabilities

Predicting future states involves knowing the system's likelihood or probability of changing from
one state to another. These probabilities can be collected and placed in a matrix. Such a matrix
is called the matrix of transition probabilities and shows the likelihood that the system
will change from one state to another from one time period to the next. This enables us to
predict future states (or conditions).

Illustration: Let there be three brands A, B and C of a product (such as toothpaste, refined oil,
soap, etc.) satisfying the same need and which may be readily substituted for each other. A
buyer can buy any one of the three brands at any point of time; therefore, there are three
states corresponding to each brand. Thus, at the time of buying, the decision of changing the
brand may result in a change from one state (brand) to another. From marketing research
point of view, it is assumed that numbers of states (brands) are finite and the decision of
change of brand is taken periodically so that such changes will occur over a period of time.

In general, let
Si = finite number of possible outcomes (i = 1, 2... m) of each of the sequence of experiments
or events (In above illustration events are purchases and the possible outcomes are three
brands of the product).
m = number of states
Pij = Conditional probability of outcome Sj for any particular experiment (or event) given that
outcome si occurred for the immediately preceding experiment (or event), that is, probability
of being in state j in the future given the current state i.

The outcomes S1, S2 .. , Sm are called states and the numbers pij are called transition
probabilities. If we assume that the process begins in some particular state, then we can
calculate probability of states relating to the overall sequence of events. Each time a new state

is reached, the system is said to have stepped (or incremented) one step ahead. Each step
represents a time period (or condition) which results in another possible state.

If the Pij are assumed to be constant over time, i.e. they do not change from one event to another
of the sequence, then the Markov chain is said to be stationary; otherwise it is said to be non-
stationary or time dependent. All conditional one-step state probabilities can be represented
as elements of a square (m x m) matrix of transition probabilities P = [Pij].

In the transition matrix of the Markov chain, Pij = 0 when no transition occurs from state i to
state j, and Pij = 1 when, from state i, the system can move only to state j at the next
transition. Each row of the transition matrix represents a one-step transition probability
distribution over all states. This means

  Σ (j = 1 to m) Pij = 1 for all i,  and  0 ≤ Pij ≤ 1.

The states of the system and transition probabilities can also be represented by two types of
diagrams:

1. Transition diagram: It shows the transition probabilities (or shifts) that can occur in any
situation. The arrows from each state indicate the possible states to which a system can move
from the given state. The matrix of transition probabilities which corresponds to the figure is
shown below.



2. Probability tree diagram: Such diagrams are identical in nature to those developed in
earlier chapters but are used to illustrate only a limited number of transitions of a Markov
chain. The number in circle represents the state at the beginning of a transition. These trees
can also be used to evaluate and determine the probability that the given system will be in
any particular state at any particular time, given the current state.

Figure 7.1 Tree diagram



This figure represents two possible outcomes from an experiment with their assumed
probabilities of occurrence from one step to another, along with branches that may connect
them over a period of time.

Let the probabilities of shifting from state s1 to s1 itself and to s2, as well as from state s2 to s1
and to s2 itself, be represented as elements of a 2 x 2 transition matrix of the Markov chain:

     P =  | P11  P12 |
          | P21  P22 |

Here, P11 + P12 = 1 and P21 + P22 = 1.

Consider a discrete-time, finite-state (S) Markov chain {Xt, t = 1, 2, 3, …} with stationary
transition probabilities P[Xt+1 = j | Xt = i] = Pij, i, j ∈ S.
Let P = (Pij) denote the matrix of transition probabilities.
The transition probabilities between Xt and Xt+n are denoted P(n)ij, and the n-step transition
matrix is P(n) = P^n.

States and Transition Probabilities

Predicting future states involves knowing the system's likelihood or probability of changing from
one state to another.
• These probabilities can be collected and placed in a matrix.
• Such a matrix is called the matrix of transition probabilities (transition matrix).
• Each entry satisfies 0 ≤ Pij ≤ 1 and each row sums to 1.



Markov Analysis -Brand Switching Problem

Consider the brand switching problem in a gasoline service station, the probability of
customers changing the service station overtime is as given in table below.

Table 7.1: Probability of customer movement per month

This month Next month

NOC OiLibya

NOC 0.60 0.40

OiLibya 0.20 0.80

Figure 7.2 Transition diagram for the brand switching problem

The above table satisfies the Markov assumptions:

• The probabilities of moving from a given state to all other states sum to one.
• The probabilities apply to all participants in the system.
• The events are independent over time.



Suppose the service stations want to know the probability that a customer trades with them
in month 3, given that the customer trades with them in month 1.

Figure 7.4 Tree diagram

If a customer trades with NOC in month 1:

• The probability that the customer purchases gasoline from NOC in month 3 is
  0.36 + 0.08 = 0.44.
• The probability that the customer trades with OiLibya in month 3 is
  0.24 + 0.32 = 0.56.

Figure 7.5 Tree diagram



If a customer trades with OiLibya in month 1:
• The probability that the customer purchases gasoline from NOC in month 3 is
  0.12 + 0.16 = 0.28.
• The probability that the customer trades with OiLibya in month 3 is
  0.08 + 0.64 = 0.72.
Table 7.3: Probability of customer movement in month 3

  Starting state      Probability of trade in month 3
                        NOC        OiLibya
  NOC                   0.44         0.56
  OiLibya               0.28         0.72

7.3 Multi –Period Transition Probabilities and the Transition Matrix

One of the purposes of Markov analysis is to predict the future. The elements of the n-step
transition matrix P^n = [p(n)ij] (m x m) are obtained by repeatedly multiplying the transition
matrix P by itself:

  P^n = P^(n-1) x P

Let V denote a matrix of state probabilities (for example, V1 = the state probabilities at
period t = 1). Then Vn = V(n-1) P = V1 P^(n-1), where each row i of Vn represents the state
probability distribution after n transitions, given that the process starts out in state i.

Steps of Constructing Matrix of Transition Probabilities


In order to illustrate the Markov chain, a problem is presented in which the states of activities
are brands of products and transition probabilities represent the likelihood of customers
moving from one brand to another.



The steps of constructing a matrix of transition probabilities may be summarized as follows:
Step 1: Determine retention probabilities:
To determine the retention probabilities, divide the number of customers retained for the
period under review by the number of customers at the beginning of the period.

Step 2: Determine gains and losses probabilities:


i. For those customers who switch brands, show gains and losses among the brands for
completing the matrix of transition.
ii. To convert the customer switching of brands so that all gains and losses take the form
of transition probabilities, divide the number of customers that each entity has gained
(or lost) by the original number of customers it served.

Step 3: Develop the matrix of transition probabilities

• In a matrix of transition probabilities, retentions (as calculated in Step 1) are shown as
  values on the main diagonal. The rows in the matrix show the retention and loss of
  customers, while the columns show the retention and gain of customers.

Transition Matrix Probability

Using the transition matrix, solve the customers' brand-switching problem for month 3.

This month Next month

NOC OiLibya

NOC 0.60 0.40

OiLibya 0.20 0.80

Brand Switching Problem

P = the transition matrix
Vi = the matrix of state probabilities after i periods, i = 1, 2, 3, …

V1 (month 1):
• If a customer initially trades with NOC, P(1)11 = 1 and P(1)12 = 0;
• If a customer initially trades with OiLibya, P(1)21 = 0 and P(1)22 = 1.

V2(month 2) = V1P

Similarly we can determine the probabilities of customer’s brand switching problem for
Month 4, 5, 6, 7, 8, 9, ….
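The month-3 probabilities in Table 7.3 can be reproduced by raising the transition matrix to the required power; the short numpy sketch below is mine, not part of the original notes.

# Multi-period transition probabilities for the brand-switching problem:
# the n-step transition matrix is the one-step matrix P raised to the n-th power.
import numpy as np

P = np.array([[0.60, 0.40],    # rows: NOC, OiLibya (this month)
              [0.20, 0.80]])   # cols: NOC, OiLibya (next month)

P2 = np.linalg.matrix_power(P, 2)   # two transitions, i.e. month 3 given month 1
print(P2)                           # [[0.44 0.56]
                                    #  [0.28 0.72]] -- the values of Table 7.3

V1 = np.array([1.0, 0.0])           # customer starts with NOC in month 1
print(V1 @ P2)                      # probabilities of trading with NOC / OiLibya in month 3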

7.3.1 Steady- State Probability (Future State prediction)


As the number of periods increases, further changes in the state probabilities become smaller.
This means that the state probabilities may become constant and will eventually remain
unchanged. At that point the process has reached a steady state, and it will remain there until
outside actions change the transition probabilities:

  Vi+1 = Vi

Example, for the Gasoline service probability matrix, determine the Steady-state probabilities

Solution
To determine the steady-state probabilities, we set up the following equations.

Steady-State Probability

From our previous discussion:



Vi+1 = Vi P

For the first row of the matrix:

P(i+1)11 = 0.60 P(i)11 + 0.20 P(i)12
P(i+1)12 = 0.40 P(i)11 + 0.80 P(i)12

Once the steady state is reached, P(i+1)11 = P(i)11 and P(i+1)12 = P(i)12, so that

P11 = 0.6 P11 + 0.2 P12 ………………(1)
P12 = 0.4 P11 + 0.8 P12 ………………(2)
P11 + P12 = 1.0 ………………………(3)
P11 = 1.0 - P12 ………………………(4)

Substituting equation (4) into equation (2), we obtain:

P11 = 0.33 and P12 = 0.67

For the second row of the matrix:
P(i+1)21 = 0.60 P(i)21 + 0.20 P(i)22
P(i+1)22 = 0.40 P(i)21 + 0.80 P(i)22
Solving in the same manner as for row 1, the values for row 2 are:
P21 = 0.33 and P22 = 0.67
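The same steady-state probabilities can be obtained by solving v = vP together with the normalization condition; the following is a small Python sketch of that calculation (the matrix and the expected answer come from the example above, everything else is illustrative).

# Steady-state probabilities: solve v = vP together with v1 + v2 = 1.
import numpy as np

P = np.array([[0.60, 0.40],
              [0.20, 0.80]])
n = P.shape[0]

# Stack the balance equations (P^T - I) v = 0 with the normalization row sum(v) = 1
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(v, 2))     # approximately [0.33 0.67], as derived above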

7.4 Special Cases in Markov Chains


1. Transient state: A state is said to be transient if it is not possible to move to that state
from any other state except itself.

1 2 3

1 0.4 0.6 0

T= 2 0.3 0.7 0

3 1.0 0 0



State 3 is a transient state. Once state 3 is achieved, the system will never return to that state.
Both states 1 and 2 contain a zero probability of going to state 3. The system will move out of
state 3 to state 1 (with a 1.0 probability) but will never return to state 3.

2. Cycling processes: a cycling (or periodic) Markov chain process is one in which the
transition matrix (T) contains all zero elements on the diagonal and all other elements
are either 1 or 0.

There can be no steady-state conditions for such Markov chains.

3. Absorbing state: a state is said to be an absorbing (trapping) state if, once entered, the
system never leaves that state. This situation occurs if the corresponding transition
probability on the main diagonal (from upper left to lower right) is equal to 1.

State 3 in this transition matrix is referred to as an absorbing or trapping state. Once state 3 is
achieved, there is a 1.0 probability that it will be achieved in succeeding time periods.

4. Ergodic Markov chain
• A Markov chain is called an ergodic chain if it is possible to go from every state to
  every other state (not necessarily in one move),
• i.e. if it is possible to pass from any state to any other state in a finite number of steps.
Example: the gasoline service station brand-switching problem above is ergodic.

Problem 1
Two manufacturers A and B are competing with each other in a restricted market. Over the
year, A's customers have exhibited a high degree of loyalty as measured by the fact that
customers are using A's product 80 per cent of the time. Also former customers purchasing
the product from B have switched back to A's product 60 per cent of the time.
a) Construct and interpret the state transition matrix in terms of:
i) retention and loss, and
ii) retention and gain.
b) Calculate the probability of a customer purchasing A's product at the end of the second
period.

Problem 2
There are three dairies A, B and C in a small town which supply all the milk consumed in
the town. Assume that the initial consumer sample is composed of 1,000 respondents
distributed over three dairies A, B and C. It is known by all the dairies that consumers
switch from one dairy to another due to advertising, price and dissatisfaction. All these
dairies maintain records of the number of their customers and the dairy from which they
obtained each new customer. The following table illustrates the flow of customers over an
observation period of one month. Assume that the matrix of transition probabilities
remains fairly stable and at the beginning of period one, market shares are A = 25%, B =
45% and C = 30%. Construct the state transition probability matrix to analyze the
problem.

Table 7.4: Flow of customers over a period of one month

  Dairy    Period 1       Change during period       Period 2
          (customers)       Gain        Loss        (customers)
  A           250             62          50             262
  B           450             53          60             443
  C           300             50          55             295
  Total     1,000            165         165           1,000

Problem 3
The number of units of an item that are withdrawn from inventory on a day-to-day basis
is a Markov chain process in which requirements for tomorrow depend on today's
requirements. A one day transition matrix is given below:
Number of units withdrawn from inventory.



a) Construct a tree diagram showing inventory requirements on two consecutive
days.
b) Develop a two-day transition matrix.
c) Comment on how a two-day transition matrix might be helpful to a manager who
is responsible for inventory management.

Problem 4
On January 1 (this year), Bakery A had 40 per cent of its local market share while
the other two bakeries B and C had 40 per cent and 20 per cent, respectively, of
the market share. Based upon a study by a marketing research firm, the following
facts were compiled; Bakery A retains 90 per cent of its customers while gaining 5
per cent of B's customers and 10 per cent of C's customers. Bakery B retains 85
per cent of its customers while gaining 5 per cent of A's customers and 7 percent
of C's customers. Bakery C retains 83 per cent of its customers and gains 5 per
cent of A's customers and 10 per cent of B's customers. What will each firm's
share be on January 1 next year, and what will each firm's market share be at
equilibrium?



CHAPTER 8
QUEUING ANALYSIS
8.1 Introduction
A common situation occurring in everyday life is that of queuing or waiting in a line. Queues
(waiting lines) form at bus stops, ticket booths, doctors' clinics, bank counters, traffic lights
and so on. Queues are also found in industry, in shops where the machines wait to be
repaired; at a tool crib where the mechanics wait to receive tools, in a warehouse where parts
wait to be used and in telephone exchanges where incoming calls wait to mature.

In general, a queue is formed when either units requiring service (commonly referred to as
customers) wait for service, or the service facilities stand idle and wait for customers. Customers
wait when the total number of customers requiring service exceeds the number of
service facilities; service facilities stand idle when the total number of service facilities
exceeds the number of customers requiring service.

Queuing theory can be applied to a wide variety of operational situations where imperfect
matching between the customers and service facilities is caused by one's inability to predict
accurately the arrival and service times of customers. In particular, it can be used to
determine the level of service (either the service rate or the number of service facilities) that
balances the following two conflicting costs:

i) cost of offering the service


ii) cost incurred due to delay in offering service
The first cost is associated with the service facilities and their operation, and the second
represents cost of customer's waiting time.

Obviously, an increase in the existing service facilities would reduce the customer's waiting
time. Conversely, decreasing the level of service results in longer queues. This means that
an increase (decrease) in the level of service increases (decreases) the cost of operating the
service facilities but decreases (increases) the cost of waiting. Figure 8.1 illustrates both
types of costs as a function of the level of service. The optimum service level is one that
minimizes the sum of the two costs.



Figure 8.1 Queuing costs Vs Level of Services

Of the two types of costs mentioned above, the cost of waiting is difficult to estimate.
It is usually measured in terms of loss of sales or of goodwill when the customer is a human
being. If the customer is a machine waiting for repair, then the cost of waiting is measured in
terms of the cost of lost production.

8.2 Application of Queuing Theory


Many practical situations can be put in the queuing framework. Some examples are given
below.

 Telephone conversation.
 The landing of aircraft.
 The unloading and loading of ships.
 The scheduling of patients in clinics.
 The service in custom offices.
 Machine breakdown and repairs.
 The timing of traffic lights.
 Car washing.
 Restaurant service.
 Flow in production.
 Layout of manufacturing systems.



Table 8.1 some queuing problem examples
Problem Customers Service facilities
Determining the number of attendants Automobiles Petrol station attendants
required at a petrol station
Scheduling of patients in a clinic Patients Doctors

Determining the number of runways at an airport Aircraft Runways

Determining the size of a parking lot Automobiles Parking spaces

Determining the number of taxicabs for a fleet Public Taxicabs

Determining the capacity of a motel Motorists Lodging facilities

Determining the service rate for a harbour              Ships          Harbour

Queuing theory is a mathematical approach to the analysis of systems that involve waiting in
line or queues. When a customer leaves a waiting line, the opportunity to make a profit by
providing the service is lost. The decision maker is now faced with a question of balancing
this opportunity cost against the expense of additional capacity.

• Queuing theory was first developed by Agner Krarup Erlang (1878-1929)
  to solve telephone network congestion problems.

Figure 8.2 Queuing system

In general, queuing analysis is used to find out more about:

• the waiting time of customers,
• the queue length,
• the number of service facilities, and
• the busy period.
Information from the analysis (models) would help to take action either to reschedule the
arrivals or to change the type and number of service facilities.

8.3 Characteristics of Queuing System


1. Arrivals or inputs to the system
 Population size, behavior, statistical distribution
2. Queue discipline, or the waiting line itself
 Limited or unlimited in length, discipline of people or items in it
3. The service facility
 Design, statistical distribution of service times
 Consider the following figure

Fig 8.3 A Simple Representation of Queuing System

 Queue
 A line of waiting customers who require service from one or more service
providers.
 Queuing system
 waiting room + customers + service provider

8.3.1 Arrival Characteristics
• Size of the population
  - Unlimited (infinite) or limited (finite)
• Pattern of arrivals
  - Scheduled or random, often following a Poisson distribution
• Behavior of arrivals
  - Wait in the queue and do not switch lines
  - No balking or reneging

Fig. 8.4 Behavior of customer arrival

Table 8.2 Common Probability Distribution



Poisson distribution:

  P(x) = (λ^x e^(-λ)) / x! ,   x = 0, 1, 2, …

where P(x) = probability of x arrivals,
x = number of arrivals per unit of time,
λ = average arrival rate, and
e = 2.7183 (the base of the natural logarithms).
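A short Python illustration of this formula (the arrival rate λ = 4 per unit of time is an arbitrary illustrative value):

# Poisson arrival probabilities, P(x) = (lambda^x * e^(-lambda)) / x!
import math

def poisson_pmf(x, lam):
    return (lam ** x) * math.exp(-lam) / math.factorial(x)

lam = 4                      # illustrative average arrival rate per unit of time
for x in range(6):
    print(x, round(poisson_pmf(x, lam), 4))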

8.3.2 Queue discipline


 First-come-first served (FIFO) – Most common
 Last-come-first-served (LCFS)
 Service in random order (SIRO)
 General discipline (GD) i.e., any type of discipline
8.3.3 Service Characteristics
 Queuing system designs
 Single-channel system, multiple-channel system
 Single-phase system, multiphase system
 Service time distribution
 Constant service time
 Random service times, usually a negative exponential distribution

Figure 8.5 Negative Exponential Distributions



Figure 8.6 Single-channel, single-phase system

Figure 8.7 Single-channel, multiphase system

Figure 8.8 Multi-channel, single-phase system

Figure 8.9 Multi-channel, multiphase system



Figure 8.10 Examples for queuing elements

8.4 Queuing Models

 Widely used to estimate desired performance measures of the system


 Typical measures
 Server utilization
 Length of waiting lines
 Delays of customers
 Applications
 Determine the minimum number of servers needed at a service center
 Detection of performance bottleneck or congestion
 Evaluate alternative system designs

The conventional notation for the characteristics of a queuing situation can be given in the
following format; it is also called the Kendall-Lee notation:
Kendall-Lee notation → (a/b/c) : (d/e/f)
Where
a = Description of the arrivals distribution, Poisson Distribution (M)
b = Description of the departures (service time) distribution, constant (D) ,
Exponential (M)
c = Number of parallel servers; (C)=1, 2,3,4,…
d = Queue discipline; FIFO, LIFO, SIRO, GD (any type)
e = Maximum number allowed in the system (in queue plus in service); finite ( Q
numbers), infinite (∞)
f = Size of calling source; finite (N) or infinite (∞)

For example, the model (M/D/2): (FIFO/N/∞) uses:


 Poisson inter arrival time,
 Constant service time or deterministic service time
 2 parallel servers or channels,
 The queue discipline is First come first served,
 There is a limit of N customers in the entire system, and
 The size of the source from which customers arrive is infinite.

Steady State - Measure of Performance


The notations most commonly used to measure performance in a queuing situation are:

λ= arrival rate

λ eff = effective arrival rate

µ = service rate

n = number of customers in system

c = number of channels in multiple channel system

c̄ = mean number of busy servers

Q = maximum number of arrivals that can be in system

Ts = mean time in the system

Ns = mean number in the system

Tq = mean waiting time in queue

Nq = mean number in queue

P0 = probability of zero units

Pa = probability of maximum number in the system

Pn = probability of n units in the system.



Figure 8.11 Queuing Costs

Recall the system includes both the queue and the service facility

The relationship between Ns and Ts (and between Nq and Tq) is known as Little's formula and is
given as:

  Ns = λeff Ts   and   Nq = λeff Tq

The parameter λeff = λ when all arriving customers can join the system. Otherwise, if some
customers cannot join because the system is full, then λeff < λ.



A direct relationship also exists between Ts and Tq. By definition,

  Expected waiting time in system = Expected waiting time in queue + Expected service time

This can be expressed as

  Ts = Tq + 1/μ ………… (5)

Multiplying both sides of equation (5) by λeff, which together with Little's formula gives

  Ns = Nq + λeff/μ ………… (6)

Thus, by definition, the difference between the average number in the system, Ns, and the
average number in the queue, Nq, must equal the average number of busy servers, c̄. This gives us:

  c̄ = Ns - Nq = λeff/μ ………… (7)

The percentage utilization of the servers is then computed as:

  utilization = (c̄ / c) x 100

Let us now look at a variety of queuing models under the following assumptions:
• Poisson-distributed arrivals
• FIFO discipline
• a single service phase

i) Model 1: (M/M/1) : (FIFO/∞/∞)

• single channel
• single phase
• Poisson arrival rate
• exponential service rate
• unlimited queue length



Using the notation of the generalized model, we have λn = λ and μn = μ for all n = 0, 1, 2, …,
and also λeff = λ and λloss = 0, because all arriving customers can join the system.
• Arrivals are served on a FIFO basis and every arrival waits to be served regardless of
  the length of the queue.
• Arrivals are independent of preceding arrivals, but the average number of arrivals does
  not change over time.
• Arrivals are described by a Poisson probability distribution and come from an infinite
  population.
• Service times vary from one customer to the next and are independent of one another,
  but their average rate is known.
• Service times occur according to the negative exponential distribution.
• The service rate is faster than the arrival rate.

Equations for Model 1

  Ns = λ / (μ - λ)                  Ts = 1 / (μ - λ)

  Nq = λ² / [μ(μ - λ)]              Tq = λ / [μ(μ - λ)]

  P0 = 1 - λ/μ                      Pn = (1 - λ/μ)(λ/μ)^n

ii) Model 2: (M/D/1) : (FIFO/∞/∞)


• single channel
• single phase
• Poisson arrival rate
• constant service rate
• unlimited queue length.



  Ts = Tq + 1/μ                     Tq = λ / [2μ(μ - λ)]

  Ns = Nq + λ/μ                     Nq = λ² / [2μ(μ - λ)]

iii) Model 3: (M/M/1) : (FIFO/Q/∞)

 single channel
 single phase
 Poisson arrival rate
 exponential service rate
 finite or limited queue length , Q customers in the system

  Ts = Tq + 1/μ

  Ns = (λ/μ) · [1 - (Q + 1)(λ/μ)^Q + Q(λ/μ)^(Q+1)] / {(1 - λ/μ)[1 - (λ/μ)^(Q+1)]}

iv) Model 4: (M/M/c) : (FIFO/∞/∞)

 multiple channel
 single phase
 Poisson arrival rate
 exponential service rate
 unlimited queue length.


  P0 = 1 / [ Σ (n = 0 to c-1) (1/n!)(λ/μ)^n + (1/c!)(λ/μ)^c · (cμ / (cμ - λ)) ] ,  where ρ = λ/(cμ) < 1

  Pn = [(λ/μ)^n / n!] P0             for 0 ≤ n ≤ c
  Pn = [(λ/μ)^n / (c! c^(n-c))] P0   for n > c

  Nq = [ (λ/μ)^c λμ / ((c - 1)!(cμ - λ)²) ] P0

  Ns = Nq + λ/μ

  Tq = Nq / λ = [ μ (λ/μ)^c / ((c - 1)!(cμ - λ)²) ] P0

  Ts = Tq + 1/μ

v) Model 5: (M/M/1) : (FIFO/∞/N)

• single channel
• single phase
• Poisson arrival rate
• exponential service rate
• unlimited queue length
• finite calling population of size N

  P0 = 1 / [ Σ (n = 0 to N) [N! / (N - n)!] (λ/μ)^n ] ,   N = population size

  Pn = [N! / (N - n)!] (λ/μ)^n P0 ,   n = 1, 2, 3, …, N

  Nq = N - [(λ + μ)/λ] (1 - P0)

  Ns = Nq + (1 - P0)

  Tq = Nq / [(N - Ns) λ]

  Ts = Tq + 1/μ



Example 1
The new accounts loan officer of the Bank interviews all customers for new accounts. The
customers desiring to open new accounts arrive at the rate of four per hour according to a
Poisson distribution, and the accounts officer spends an average of 12 minutes with each
customer setting up a new account.
a) Determine the operating characteristics (P0, Ns, Nq, Ts, and Tq) for the system.
b) Add an additional accounts officer to the system described in the problem so that it is
now a multiple-server queuing system with two channels, and determine the operating
characteristic required in part (a).

Solution:
Given: λ = 4 customers arrive per hour,
μ = 5 customers are served per hour.

a) Determine the operating characteristics for the single-server system

(M/M/1): (FIFO/∞/∞)

  P0 = 1 - λ/μ = 1 - 4/5 = 0.20   probability of no customers in the system

  Ns = λ / (μ - λ) = 4 / (5 - 4) = 4   customers on average in the queuing system

  Nq = λ² / [μ(μ - λ)] = (4)² / [5(5 - 4)] = 3.2   customers on average in the queuing line

  Ts = 1 / (μ - λ) = 1 / (5 - 4) = 1   hr on average in the system

  Tq = λ / [μ(μ - λ)] = 4 / [5(5 - 4)] = 0.80   hr (48 min) average time in the queue



b) Determine the operating characteristics for the multiple-server system:

(M/M/2): (FIFO/∞/∞)

  P0 = 1 / [ Σ (n = 0 to c-1) (1/n!)(λ/μ)^n + (1/c!)(λ/μ)^c · (cμ / (cμ - λ)) ]

     = 1 / [ (1/0!)(4/5)^0 + (1/1!)(4/5)^1 + (1/2!)(4/5)^2 · ( (2)(5) / ((2)(5) - 4) ) ]

     = 0.429   probability that no customers are in the system

  Tq = [ μ (λ/μ)^c / ((c - 1)!(cμ - λ)²) ] P0 = Nq / λ

     = 0.038   hr (about 2.3 min) average time spent waiting in line

  Ts = Tq + 1/μ = 0.038 + 1/5

     = 0.238   hr (about 14.3 min) average time in the system

  Nq = [ (λ/μ)^c λμ / ((c - 1)!(cμ - λ)²) ] P0

     = 0.152   customer on average waiting to be served

  Ns = Nq + λ/μ = 0.152 + 4/5

     = 0.952   customer on average in the system

Operating performance with one versus two accounts officers

                  Single server               Double server
                  (one accounts officer)      (two accounts officers)

  P0              0.20                        0.429
  Ns              4 customers                 0.952 customers
  Nq              3.2 customers               0.152 customers
  Ts              1 hr                        0.238 hr (about 14.3 min)
  Tq              0.80 hr (48 min)            0.038 hr (about 2.3 min)
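The table above can be reproduced with a few lines of Python using the Model 1 and Model 4 formulas; this is only a sketch for checking the arithmetic, not part of the original notes.

# Operating characteristics for Example 1 with one server (M/M/1) and two servers (M/M/c).
import math

def mm1(lam, mu):
    p0 = 1 - lam / mu
    ns = lam / (mu - lam)
    nq = lam ** 2 / (mu * (mu - lam))
    ts = 1 / (mu - lam)
    tq = lam / (mu * (mu - lam))
    return p0, ns, nq, ts, tq

def mmc(lam, mu, c):
    r = lam / mu
    p0 = 1.0 / (sum(r ** n / math.factorial(n) for n in range(c))
                + (r ** c / math.factorial(c)) * (c * mu / (c * mu - lam)))
    nq = (r ** c * lam * mu / (math.factorial(c - 1) * (c * mu - lam) ** 2)) * p0
    ns = nq + r
    tq = nq / lam
    ts = tq + 1 / mu
    return p0, ns, nq, ts, tq

print(mm1(4, 5))      # -> (0.20, 4.0, 3.2, 1.0, 0.8)
print(mmc(4, 5, 2))   # -> approximately (0.429, 0.952, 0.152, 0.238, 0.038)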

Example 2
Arrivals at a telephone booth are considered to follow a Poisson distribution with an average
time of 10 minutes between arrivals. The length of a phone call is assumed to be distributed
exponentially with an average time per call of 3 minutes. Then:
a) What is the probability that a person arriving at the booth will have to wait?
b) What is the average number of people that will be waiting in the system?
c) The telephone department will install another booth when convinced that a person
will have to wait at least 3 minutes for the phone. By how much must the flow of rate
of arrivals be increased in order to justify a second booth?

Solution:
The model is (M/M/1): (FIFO/∞/ ∞)
Given: λ = 1/10 customers per minute,
μ = 1/3 customers per minute.



a) Probability that an arrival has to wait = P(Nq > 0) = 1 - P0 = 1 - (1 - λ/μ) = λ/μ
   = (1/10) / (1/3) = 3/10 = 0.3

b) Ns = λ / (μ - λ) = (1/10) / (1/3 - 1/10) = 3/7 ≈ 0.43 customers

c) The second booth is justified when Tq = 3 minutes, with μ = 1/3 customers per minute.

   Tq = λ1 / [μ(μ - λ1)]

   Setting Tq = 3 and solving for λ1 gives

   λ1 = 1/6 customers per minute

   The required increase in the arrival rate is therefore
   1/6 - 1/10 = (10 - 6)/60 = 4/60 = 1/15 customers per minute.
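Part (c) can also be checked numerically: the sketch below finds, by simple bisection, the arrival rate at which Tq = λ/[μ(μ - λ)] reaches 3 minutes (the code and variable names are illustrative).

# Find the arrival rate lam1 for which the expected waiting time in queue is 3 minutes.
mu = 1 / 3.0                                   # service rate, customers per minute

def tq(lam):
    return lam / (mu * (mu - lam))             # M/M/1 waiting time in queue

lo, hi = 0.0, mu - 1e-9                        # Tq rises from 0 towards infinity on this range
for _ in range(100):                           # bisection on the increasing function tq
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if tq(mid) < 3 else (lo, mid)

print(round(lo, 4))          # about 0.1667 = 1/6 customers per minute, as above
print(round(lo - 0.1, 4))    # required increase, about 0.0667 = 1/15 per minute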

Problem 1:

A television repairman finds that the time spent on his jobs has an exponential distribution
with a mean of 30 minutes. If he repairs sets in the order in which they came in, and if the
arrival of sets follows a Poisson distribution approximately with an average rate of 10 per 8-
hour day, what is the repairman's expected idle time each day? How many jobs are ahead of
the average set just brought in?

Problem 2
In a railway marshalling yard, goods trains arrive at a rate of 30 trains per day. Assuming that
the inter-arrival time follows an exponential distribution and the service time (the time taken
to hump a train) distribution is also exponential with an average of 36 minutes.
Calculate:
a) expected queue size (line length)
b) probability that the queue size exceeds 10.
If the input of trains increases to an average of 33 per day, what will be the change in (a)
and (b)?



Problem 3

Arrivals at telephone booth are considered to be Poisson with an average time of 10 minutes
between one arrival and the next. The length of phone call is assumed to be distributed
exponentially, with mean 3 minutes.
a) What is the probability that a person arriving at the booth will have to wait?
b) The telephone department will install a second booth when convinced that an arrival
would expect waiting for at least 3 minutes for a phone call. By how much should the
flow of arrivals increase in order to justify a second booth?
c) What is the average length of the queue that forms from time to time?
d) What is the probability that it will take him more than 10 minutes altogether to wait for
the phone and complete his call?



CHAPTER 9
SIMULATION
9.1 Introduction

Often, we do not find a mathematical technique that can solve a model which has been
constructed with great difficulty. Also, it often does not seem possible to simplify the model in
any way without destroying its acceptability as a reasonable representation of 'reality'.
Consequently, the model cannot be manipulated to formulate a decision strategy or to obtain
some appreciation of potential responses when the model is subjected to various situations or
assumptions. However, a model once constructed may permit us to predict what will be the
consequences of taking a certain action. In particular, we could 'experiment' on the model by
'trying' alternative actions or parameters and comparing their consequences. This
'experimentation' allows us to answer 'what if' questions relating the effects of our
assumptions to the model response.

For example, suppose that we have built a model for some problem, but we neither have the
means nor the knowhow to solve it! The model could still be used by simply substituting and
comparing the consequences of using various parameters into the model. Analyzing a model
in this way is referred to as simulating the model.

As the complexity of a model increases, simulation is often the only remaining tool for
analysis. This is the case for problems which exhibit many sources of uncertainty. For such
problems, simulation seeks to replicate the uncertainty in the model and to assess the model's
response to events made to occur (i.e. simulated) with a frequency characterized by pre-
specified probabilities.

Where is simulation used? As implied earlier, it is used in almost all conceivable fields,
restricted only by our imagination and our ability to translate such imagination into a set of
computer directives. The following are some simple examples which will make us appreciate
what simulation is and what we can do with it.

Location of ambulances: In such studies, it is not known exactly where the demand for
ambulance services will arise and, therefore, simulation is required to test alternative



scheduling rules of the ambulances, their locations (in a geographical area), the response time
to an emergency call and, of course; the overall service quality and the costs incurred if
ambulances were to be configured in a certain way (in types, quantity, location, scheduling
and staffing).

Design of computer systems: In general, it is difficult to specify future needs. Even the best
forecasts are just that, forecasts! Simulation is then useful to highlight the basic
characteristics of a computer system configuration (its speed, size, computing qualities,
memory and so on.) in terms of the equipment used, the management of the computer system,
the costs of such configurations and the resulting computational service which it provides to
users.

Shop floor management: The management of modem shop floor factories is extremely
complex, involving many persons, machines, products parts, robots, material handling
systems, television and computer-aided vision and inspection systems. Even if we were able
to manage some specific aspects of the shop floor, integrating them into a simple mathematical
model is very difficult, even when we can construct a model (however complex it may be)
of the shop floor. Simulation is very useful to assess its behavior and how it responds to
certain policies (such as scheduling jobs to be performed on the basis of their due-dates, with
earliest dates jobs performed first, by comparing alternative configurations and routing
control policies, and so. on). It can also be used to test alternative work flows, the
introduction of a new machinery, and in some cases the use of completely new manufacturing
.and management technologies (such as in testing 'Japanese approaches' Just-in-Time
manufacturing, flexible manufacturing, etc.). In such cases, simulation is a proper framework
which provides insights regarding the 'new' shop floor before it is implemented. It can,
thereby, provide an experience and a learning tool which will give us greater confidence in
the productivity and the management of the future system, at a relatively small cost.

Design of optimal replenishment policy: In inventory control, the problem of optimal


replenishment policy arises due to the probabilistic (stochastic) nature of demand and lead
time. Thus, instead of manually trying out an appropriate replenishment alternative for each
level of demand and lead time for a specific period and then selecting the best one, we



process the data (called an experiment) on the computer and obtain the results in a very short
time at a very small cost.

Design of a queuing system: In queuing theory, the problem of balancing the cost of
waiting, against the cost of idle time of service facilities in the system arises due to the
probabilistic nature of the inter-arrival times of customers and the time taken to complete the
service to the customer. Thus, instead of trying out in actual manually with data to design a
queuing system, we process the data on computers and obtain the expected value of various
characteristics of the queuing system such as idle time of servers, average waiting time,
queue length, etc.

Other problems, such as aircraft management and routing, scheduling of bank tellers and
location of bank branches, the deployment of fire stations, routing and dispatching when
roads are not secured (where materials sent might not, potentially, reach their destination), the
location and the utilization of recreational facilities (such as parks, public swimming pools,
etc.), and many other problems have been studied or could be studied through simulation. .

9.2 Definition of simulation


Simulation is by no means a recent operations research technique. Simulation is one of the
oldest analysis tools known to man - that is, the representation of the real world by numbers
and other symbols that can be readily manipulated. To gain a better grasp of the real world,
games such as chess to simulate battles, backgammon to simulate racing, and other games to
simulate hunting and diplomacy were already invented in antiquity.

Today, a modern game like Monopoly simulates the competitive arena of real estate. Many
have played baseball with a deck of cards which has hits, strikeouts and walks, with a
cardboard diamond and plastic chips as runners. The distribution of hits, runs, outs, etc.,
in the deck of cards serves as a realistic reflection of the overall average with which each will
occur in real life. Other games people play generate experiences, understanding and
knowledge, which is basically what we will be trying to do through simulation.

What is new in modern simulation? It is the availability of computers, which makes it possible
for us to deal with an extraordinarily large quantity of details which can be incorporated into
a model, and the ability to manipulate the model over many 'experiments' (i.e. replicating all
the possibilities that may be embedded in the external world, and events as they would seem to
recur). The modern use of the word simulation can be traced to the mathematicians Von
Neumann and Ulam in the late 1940s, when they developed the term 'Monte Carlo analysis'
while first trying to 'break' the casino at Monte Carlo (in which they did not succeed!) and
subsequently applying it to the solution of nuclear shielding problems that were either too
expensive for physical experimentation or too complicated for treatment by known
mathematical techniques.

No universally agreed definition of the word simulation has been reached so far; however, a few definitions are stated below:
A simulation of a system or an organism is the operation of a model or simulator which is a representation of the system or organism. The model is amenable to manipulation which would be impossible, too expensive or impractical to perform on the entity it portrays. The operation of the model can be studied and, from it, properties concerning the behaviour of the actual system can be inferred. (Shubik)

This definition is broad enough to apply equally to military war games, business games, economic models, etc. In this view, simulation involves logical and mathematical constructs that can be manipulated on a digital computer using iterations or successive trials.
Simulation is the process of designing a model of a real system and conducting experiments with this model for the purpose of understanding the behavior (within the limits imposed by a criterion or set of criteria) of the operation of the system. (Shannon)

Simulation is a numerical technique for conducting experiments on a digital computer, which involves certain types of mathematical and logical relationships necessary to describe the behaviour and structure of a complex real-world system over extended periods of time. (Naylor et al.)



A few other definitions of simulation are as under:
'X simulates Y' is true if and only if:
i) X and Y are formal systems,
ii) Y is taken to be the real system,
iii) X is taken to be an approximation to the real system, and
iv) the rules of validity in X are not error-free, otherwise X would become the real system.
Simulation is the use of a system model that has the designed characteristics of reality in order to produce the essence of actual operation. (Churchman)

For operations research, simulation is a problem-solving technique which uses a computer-aided experimental approach to study problems that cannot be analyzed using direct and formal analytical methods. As a result, simulation can be thought of as a technique of last resort. It is not a technique which should be applied in all cases. Table 9.1 highlights what simulation is and what it is not.

Table 9.1 Simulation – what it is and what it is not

It is:
 a technique which uses computers.
 an approach for reproducing the processes by which events of chance and change are created in a computer.
 a procedure for testing and experimenting on models to answer 'what if ..., then so and so ...' types of questions.

It is not:
 an analytical technique which provides an exact solution.
 a programming language, but it could be programmed into a set of commands which can form a language to facilitate the programming of simulation.



9.3 Steps of simulation process
The process of simulating a system consists of following steps:

Step 1 Identify the problem


If an inventory system is being simulated, then the problem may concern the determination of the order size (number of units to be ordered) when the inventory level falls to the reorder level (point).

Step 2 (a) Identify the decision variables


(b) Decide the performance criterion (objective) and decision rules

In the context of the inventory problem defined above, the demand (consumption rate), lead time and safety stock are identified as decision variables. These variables are used to measure the performance of the system in terms of total inventory cost under the decision rule of when to order.

Step 3 Construct a numerical model


A numerical model is constructed to be analyzed on the computer. Sometimes the model is
written in a particular simulation language which is suited for the problem under analysis.

Step 4 Validate the model


Validation of the model is necessary to ensure that it truly represents the system being analyzed and that the results will be reliable.

Step 5 Design the experiments


Conduct experiments with the simulation model by listing specific values of variables to be
tested (i.e. list courses of action for testing) at each trial (run).

Step 6 Run the simulation model


Run the model on the computer to get the results in the form of operating characteristics.



Step 7 Examine the results
Examine the results of the problem as well as their reliability and correctness. If the simulation process is complete, then select the best course of action (or alternative); otherwise, make the desired changes in the model's decision variables, parameters or design, and return to Step 3.

Figure 9.1 Steps of Simulation Process

9.4 Advantages and disadvantages of simulation


Advantages
1. This approach is suitable for analyzing large and complex real-life problems which cannot be solved by the usual quantitative methods.
2. It is useful for sensitivity analysis of complex systems. In other words, it allows the decision-maker to study the interaction of system variables and the effect of changes in these variables on system performance, in order to determine the desired one.
3. Simulation experiments are done with the model, not on the system itself. Simulation also allows the inclusion of additional information during analysis that most quantitative models do not permit. In other words, simulation can be used to 'experiment' on a model of a real situation without incurring the costs of operating on the system.
4. Simulation can be used as a pre-service test to try out new policies and decision rules for operating a system before running the risk of experimenting on the real system.
5. It is often the only 'remaining tool' when all other techniques become intractable or fail.

Disadvantages
1. Sometimes simulation models are expensive and take a long time to develop. For example, a corporate planning model may take a long time to develop and may also prove expensive.
2. It is a trial-and-error approach that produces different solutions in repeated runs. This means it does not generate optimal solutions to problems.
3. Each application of simulation is ad hoc to a great extent.
4. The simulation model does not produce answers by itself. The user has to provide all the constraints for the solutions that he or she wants to examine.

Simulation is an attempt to duplicate the features, appearance, and characteristics of a real system.
 The objectives of simulation are:
1. To imitate a real-world situation mathematically
2. To study its properties and operating characteristics
3. To draw conclusions and make action decisions based on the results of the simulation

Simulation is an alternative form of analysis when the problem situation is too complex to be represented by concise mathematical techniques.

9.5 Types of Models

 Illustrative models
 Abstract, symbolic models
 Verbal models
 Mathematical models
• Models for optimization (e.g. linear programming, game theory, etc.)
• Models for prediction (e.g. Markov models)
• Models for experimentation (e.g. simulation)
 Others (e.g. musical notation)
 Analogy models

Figure 9.2 Types of models

9.6 Types of Simulation


Systems may have a discrete or continuous state.

I. Based on the use of either a continuous or a discrete time representation:
1. Continuous simulation: The state changes all the time, not just at the times of discrete events.
For example, the water level in a reservoir with given inflow and outflow may change all the time.
2. Discrete event simulation: The system only changes at discrete points in time.
 For example, in an industrial plant, the system status changes when a new part arrives at a machine or when a machine breaks down.
 Between any two events, the status of the modeled system remains constant.
 To represent the changes in the system, it is necessary to describe the actions or events which cause the status of the system to change.

Discrete event simulation has applications in a wide range of manufacturing and service sectors, including:
 automotive
 healthcare
 electronics
 pharmaceuticals
 food and beverage
 packaging
 logistics, etc.

II. Based on the representation of the models


1. Analogue simulation (AS): In this type of simulation, an original physical system is replaced by an analogous physical model that is easier to manipulate.
Examples:
 Manned space flight
 Treadmills that simulate automobile tire wear in the laboratory
2. Computer simulation: In this form of simulation, systems are replicated with mathematical models, which are analyzed by computer.
 This form of simulation has become a very popular technique that has been applied to a wide variety of business problems.
 One reason for its popularity is that it offers a means of analyzing very complex systems that cannot be analyzed using other techniques.



9.7 Stochastic simulation and random numbers
In simulation, probability distributions are used to define outcomes numerically in a sample space by assigning a probability to each possible outcome. For example, if you flip a coin, the sample space {H, T} is the set of possible outcomes. Each outcome can occur with some probability, reflecting the element of chance. A random variable assigns a number to this element of chance. In statistics, these numbers are estimated to assess the uncertainty inherent in the model, but in simulation these variables are controlled numerically and used to mimic the elements of uncertainty defined in the model. This is done by generating (using the computer) outcomes with the same frequency as those encountered in the process being mimicked (simulated). In this manner many experiments (also called simulation runs) can be performed, leading to a collection of outcomes that have a frequency (probability) distribution similar to that of the model you wish to study.

To use simulation, it is first necessary to learn how to generate the sample random events that make up complex models. Once this is done, it is possible to use the computer to reproduce the process through which chance is generated in real life. In this manner a problem that involves many interrelationships can be evaluated for its aggregate behavior, and this behavior can be assessed as a function of a set of given parameters. Thus process generation (simulating chance processes) and modeling are the two fundamental techniques that we need in simulation.

The most elementary and important type of process is the random process, which requires for its simulation the selection of samples (or events) drawn from a given distribution, so that repetition of this selection process will yield a frequency distribution of sample values that faithfully matches the original distribution. When these samples are generated through some mechanical or electronic means, they are pseudo-random numbers (they are not truly random, since they are generated by a machine). Alternatively, it is possible to use a table of random numbers, where selecting numbers in any consistent manner will yield numbers that behave as if they were drawn from a uniform distribution.

There are several ways of generating random numbers, such as random number generators (which are a built-in feature of spreadsheets and many computer languages), random number tables (see appendix), a roulette wheel, etc.
Random numbers between 00 and 99 are used to obtain values of random variables that have
a known discrete probability distribution in which the random variable of interest can assume
one of a finite number of different values. In some applications, however, the random
variables are continuous, that is, they can assume any real value according to a continuous
probability distribution. For example, in queuing theory applications, the amount of time a
server spends with a customer is such a random variable which might follow an exponential
distribution.

9.7.1 Monte Carlo Simulation


The principle behind the Monte Carlo simulation technique is to represent the given system under analysis by a system described by some known probability distribution, and then to draw random samples from that probability distribution by means of random numbers. In case it is not possible to describe the system in terms of a standard probability distribution such as the normal, Poisson, exponential, gamma, etc., an empirical probability distribution can be constructed.

The Monte Carlo simulation technique consists of following steps:


i. Set up a probability distribution for the variables to be analyzed.
ii. Build a cumulative probability distribution for each random variable.
iii. Generate random numbers and assign an appropriate set of random numbers to represent the value or range (interval) of values of each random variable.
iv. Conduct the simulation experiment by means of random sampling.
v. Repeat step (iv) until the required number of simulation runs has been generated.
vi. Design and implement a course of action and maintain control.

One characteristic of some systems that makes them difficult to solve analytically is that they consist of random variables represented by probability distributions. Thus, a large proportion of the applications of simulation are to probabilistic models (stochastic simulation).
 The term Monte Carlo has become synonymous with probabilistic simulation in
recent years.
 Monte Carlo is a technique for selecting numbers randomly from a probability
distribution.



 In case it is not possible to describe a system in terms of standard probability
distribution such as normal, Poisson, exponential, etc., an empirical probability
distribution can be constructed.
 The Monte Carlo process is analogous to gambling devices.
 It may be used when the model contains elements that exhibit chance in their
behavior.

Figure 9.3 The Monte Carlo Process

The Monte Carlo technique consists of the following steps:


1. Set up probability distributions for important variables
2. Build a cumulative probability distribution for each variable
3. Establish an interval of random numbers for each variable
4. Generate random numbers
5. Simulate a series of trials by means of random sampling
6. Repeat step 5 until the required number of simulation runs has been generated.
7. Design and implement a course of action and maintain control



9.7.2 Random Number Generation
Monte Carlo simulation requires the generation of a sequence of random numbers. This
sequence of random numbers helps in choosing random observations (samples) from the
probability distribution.

a) Arithmetic computation: The nth random number rn, consisting of k digits, generated by using the multiplicative congruential method is given by:

rn = p × rn-1 (modulo m)

where p and m are positive integers, p < m, rn-1 is a k-digit number, and 'modulo m' means that rn is the remainder when p × rn-1 is divided by m. That is, rn and p × rn-1 differ by an integer multiple of m.

To start the process of generating random numbers, the first random number (also called the seed), r0, is specified by the user. Then, using the above recurrence relation, a sequence of k-digit random numbers with period h < m (at which point the number r0 occurs again) can be generated.

For illustration, let p = 35, m = 100 and arbitrarily start with r0 = 57. Since m – 1 = 99 is a 2-digit number, the method will generate 2-digit random numbers:
r1 = p × r0 (modulo m) = 35 × 57 (modulo 100) = 1,995 (modulo 100) = 95
r2 = p × r1 (modulo m) = 35 × 95 (modulo 100) = 3,325 (modulo 100) = 25
r3 = p × r2 (modulo m) = 35 × 25 (modulo 100) = 875 (modulo 100) = 75
The choice of r0 and p for any given value of m requires great care, and the method is not a random process, because the sequence of numbers generated is completely determined by the input data. Thus, the numbers generated through this process are pseudo-random numbers: they are reproducible and hence not truly random.



The above recurrence relation can also be used to generate random numbers as decimal fractions between 0 and 1 with a desired number of digits. For this, the relation un = rn / m is used to generate uniformly distributed decimal fractions between 0 and 1.
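As an illustrative sketch (not part of the original notes), the multiplicative congruential generator above can be coded in a few lines of Python; with p = 35, m = 100 and seed r0 = 57 it reproduces the sequence 95, 25, 75, … and shows why the choice of p and r0 matters (this particular choice has a very short period).

def congruential(p, m, seed, n):
    """Generate n pseudo-random integers r_k = p * r_{k-1} (mod m)."""
    r = seed
    numbers = []
    for _ in range(n):
        r = (p * r) % m              # remainder when p * r_{k-1} is divided by m
        numbers.append(r)
    return numbers

ints = congruential(p=35, m=100, seed=57, n=5)
print(ints)                          # [95, 25, 75, 25, 75] -> note the very short period
fractions = [r / 100 for r in ints]  # u_n = r_n / m gives decimal fractions in [0, 1)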

9.7.3 Computer Generator

The random numbers that are generated by computer software are uniformly distributed decimal fractions between 0 and 1. The software works on the concept of the cumulative distribution function of the random variable for which we are seeking to generate random numbers.

 For example, consider the negative exponential distribution with density function
f(x) = λ e^(–λx),  0 ≤ x < ∞.
 The cumulative distribution function is
F(x) = ∫[0 to x] λ e^(–λt) dt = 1 – e^(–λx),
or e^(–λx) = 1 – F(x).
 Taking logarithms on both sides,
–λx = log[1 – F(x)], or x = –(1/λ) log[1 – F(x)].
 If r = F(x) is a uniformly distributed random decimal fraction between 0 and 1, then the exponential variate associated with r is given by
x = –(1/λ) log(1 – r) = –(1/λ) log r.
 This is an exponential process generator, since 1 – r is itself a uniform random number and can therefore be replaced by r.
Remark
i. We can pick random numbers from a random number table, or
ii. use a built-in Excel formula to generate random numbers.
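A minimal Python sketch of this inverse-transform process generator follows (the rate λ = 0.5 and the sample size are chosen here purely for illustration):

import math
import random

def exponential_variate(lam, r=None):
    """Inverse transform: x = -(1/lambda) * log(r), with r uniform on (0, 1)."""
    if r is None:
        r = random.random()          # uniformly distributed decimal fraction
    return -(1.0 / lam) * math.log(r)

random.seed(42)
samples = [exponential_variate(lam=0.5) for _ in range(10000)]
print(sum(samples) / len(samples))   # should be close to 1/lambda = 2.0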



 While picking random numbers from the random number table, the starting point may be chosen at random: start with any number in any column or row, and proceed in the same column or row to the next number. However, a consistent, unvaried pattern should be followed in drawing the random numbers; we should not jump from one number to another indiscriminately.

Table 9.2 Random number generation using built-in Excel formulas

To simulate                                            Use built-in Excel formula
1) Random number r (0 ≤ r ≤ 1)                         =RAND()
2) Random number r (0 ≤ r ≤ 100)                       =RAND()*100
3) Continuous uniform distribution between a and b     =a+(b-a)*RAND()
4) Discrete uniform distribution between a and b       =INT(a+(b-a+1)*RAND())
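For readers working outside Excel, the same generators are available in Python's standard random module (a sketch; the bounds a and b below are arbitrary example values):

import random

r = random.random()             # 1) uniform random fraction, 0 <= r < 1
r100 = random.random() * 100    # 2) uniform random number, 0 <= r < 100
a, b = 5, 20                    # example bounds
xc = random.uniform(a, b)       # 3) continuous uniform between a and b
xd = random.randint(a, b)       # 4) discrete uniform integer between a and b (inclusive)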

Example: Demand and supply

The demand per day for tires at a tire distributor is a discrete random variable, with the frequency distribution given in Table 9.3.

Table 9.3 Probability distribution of demand for tires

Demand for tires     Frequency of demand (days)
0                    10
1                    20
2                    40
3                    60
4                    40
5                    30
Total                200 days



 Using random numbers from the given random number table:
a) simulate the demand for the next 10 days;
b) estimate the daily average demand for tires on the basis of the simulated data;
c) compare the result with the expected daily demand.

Solution:
Table 9.4 Probability of demand (relative frequencies 0.05, 0.10, 0.20, 0.30, 0.20 and 0.15 for demands of 0 to 5 tires)

Table 9.5 Assignment of random numbers (intervals 00–04, 05–14, 15–34, 35–64, 65–84 and 85–99 for demands of 0 to 5 tires, based on the cumulative probabilities)

The expected daily demand is

Expected demand = Σ (probability of i units) × (i units of demand), summed over i = 0, 1, …, 5
= (0.05)(0) + (0.10)(1) + (0.20)(2) + (0.30)(3) + (0.20)(4) + (0.15)(5)
= 0 + 0.1 + 0.4 + 0.9 + 0.8 + 0.75
= 2.95 tires
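The hand simulation can also be reproduced on a computer. The sketch below (not part of the original notes) draws its own uniform random numbers rather than reading them from the printed random number table, so the ten simulated demands, and hence the simulated average, will vary from run to run around the expected value of 2.95 tires; the seed is fixed only to make the run repeatable.

import random

# demand value -> probability (from Table 9.3: frequency / 200 days)
demand_dist = {0: 0.05, 1: 0.10, 2: 0.20, 3: 0.30, 4: 0.20, 5: 0.15}

# build cumulative probability break points for the Monte Carlo lookup
cumulative = []
running = 0.0
for demand, prob in demand_dist.items():
    running += prob
    cumulative.append((running, demand))

def simulate_day(r):
    """Map a uniform random fraction r in [0, 1) to a daily demand."""
    for upper, demand in cumulative:
        if r < upper:
            return demand
    return cumulative[-1][1]

random.seed(1)                                     # fixed seed so the run is repeatable
demands = [simulate_day(random.random()) for _ in range(10)]
print(demands, "average =", sum(demands) / 10)     # compare with the expected demand of 2.95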

Example 2: Simulation of a Queuing System

Consider the case of a drive-in market which consists of one cash register (the service facility) and a single queue of customers. The inter-arrival time and service time distributions are as given in tables (a) and (b). Assume that the time intervals between customer arrivals are discrete random variables.



For 10 customer arrivals at the cash register, calculate:
a) the average waiting time,
b) the average queue length,
c) the average time in the system.

Solution

First we have to develop the cumulative probability distributions in order to determine the random number ranges.


Once the simulation is complete, we can compute the operating characteristics from the simulation results as follows:
 Average waiting time = 12.5 min / 10 customers = 1.25 min per customer
 Average queue length = 8 customers / 10 customers = 0.80 customer
 Average time in the system = 24.5 min / 10 customers = 2.45 min per customer
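Because tables (a) and (b) are not reproduced above, the sketch below uses hypothetical inter-arrival and service time distributions purely to show how such a hand simulation is usually coded; with the actual tables substituted in, the three operating characteristics are computed exactly as in the formulas above (here 'average queue length' follows the same convention as the hand computation: customers who had to wait divided by total arrivals).

import random

# Hypothetical discrete distributions (minutes, probability) -- replace with tables (a) and (b)
interarrival_dist = [(1, 0.25), (2, 0.40), (3, 0.20), (4, 0.15)]
service_dist = [(1, 0.30), (2, 0.50), (3, 0.20)]

def draw(dist, r):
    """Monte Carlo lookup: map a uniform r in [0, 1) onto a discrete distribution."""
    upper = 0.0
    for value, prob in dist:
        upper += prob
        if r < upper:
            return value
    return dist[-1][0]

random.seed(7)
clock = 0.0                    # arrival time of the current customer
server_free_at = 0.0
total_wait = total_in_system = 0.0
customers_who_waited = 0

for _ in range(10):            # simulate 10 customer arrivals
    clock += draw(interarrival_dist, random.random())
    service = draw(service_dist, random.random())
    start = max(clock, server_free_at)
    wait = start - clock
    server_free_at = start + service
    total_wait += wait
    total_in_system += wait + service
    customers_who_waited += 1 if wait > 0 else 0

print("average waiting time   =", total_wait / 10, "min per customer")
print("average queue length   =", customers_who_waited / 10)
print("average time in system =", total_in_system / 10, "min per customer")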

Example 3: Simulation of a Machine Breakdown and Maintenance System

 A continuous probability distribution of the time between machine breakdowns is given by
f(x) = x/8,  0 ≤ x ≤ 4 weeks
where x = weeks between machine breakdowns.
 When a machine breaks down it must be repaired, and it takes either one, two, or three days for the repair to be completed, according to the discrete probability distribution shown in table (I).



Every time a machine breaks down, the cost to the company is an estimated $2,000 per day
in lost production until the machine is repaired.

The company would like to know if it should implement a machine maintenance program
at a cost of $20,000 per year that would reduce the frequency of breakdowns and thus the
time for repair.
 The maintenance program would result in the following continuous probability function for the time between breakdowns:
f(x) = x/18,  0 ≤ x ≤ 6 weeks
where x = weeks between machine breakdowns.

The reduced repair time resulting from the maintenance program is defined by the discrete
probability distribution shown in table (II).

Summary of the results
Option 1: Without the maintenance program
Cost = $84,000
Option 2: With the maintenance program
Cost = $20,000 + $42,000 = $62,000
Therefore, option 2 is better for the organization, giving a saving of
$84,000 – $62,000 = $22,000 per year.
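A rough Python sketch of this comparison is shown below. It is not the worked table from the notes: the repair-time distributions stand in for tables (I) and (II), which are not reproduced here, and the one-year (52-week) horizon and the random seed are assumptions, so the simulated costs will differ from the $84,000 and $62,000 quoted above. The time between breakdowns is sampled by the inverse transform of f(x) = x/8 (i.e. F(x) = x²/16, so x = 4√r) and, with the maintenance program, of f(x) = x/18 (x = 6√r).

import math
import random

def weeks_to_next_breakdown(r, with_maintenance=False):
    """Inverse transform: x = 4*sqrt(r) for f(x) = x/8, or x = 6*sqrt(r) for f(x) = x/18."""
    return 6 * math.sqrt(r) if with_maintenance else 4 * math.sqrt(r)

# Placeholder repair-time distributions (days, probability) standing in for tables (I) and (II)
repair_no_maint = [(1, 0.15), (2, 0.55), (3, 0.30)]
repair_with_maint = [(1, 0.40), (2, 0.50), (3, 0.10)]

def draw(dist, r):
    upper = 0.0
    for value, prob in dist:
        upper += prob
        if r < upper:
            return value
    return dist[-1][0]

def yearly_breakdown_cost(with_maintenance, cost_per_day=2000, horizon_weeks=52, seed=3):
    random.seed(seed)
    week, cost = 0.0, 0.0
    repair_dist = repair_with_maint if with_maintenance else repair_no_maint
    while True:
        week += weeks_to_next_breakdown(random.random(), with_maintenance)
        if week >= horizon_weeks:
            return cost
        cost += draw(repair_dist, random.random()) * cost_per_day   # $2,000 per repair day

print("without maintenance program:", yearly_breakdown_cost(False))
print("with maintenance program   :", yearly_breakdown_cost(True) + 20000)  # add program cost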



CHAPTER 10
DYNAMIC PROGRAMMING & NON-LINEAR PROGRAMMING

10.1 Introduction
Often the decision-making process involves several decisions to be taken at different times. For example, problems of inventory control, evaluation of investment opportunities, long-term corporate planning, and so on require sequential decision-making. The mathematical technique of optimizing such a sequence of interrelated decisions over a period of time is called dynamic programming. It uses the idea of recursion to solve a complex problem broken into a series of interrelated (sequential) decision stages (also called sub-problems), where the outcome of a decision at one stage affects the decisions at each of the subsequent stages.

The word dynamic has been used because time is explicitly taken into consideration.
Dynamic programming (DP) differs from linear programming in two ways:
i. In DP, there is no standard procedure (algorithm), as there is in LP, that can be used to solve all problems. DP is a technique that allows us to break up the given problem into a sequence of easier and smaller sub-problems, which are then solved in stages.
ii. LP gives a one-time-period (single-stage) solution, whereas DP considers decision-making over time and solves each sub-problem optimally.

Dynamic programming terminology


Regardless of the type or size of a dynamic programming problem, there are certain terms
and concepts that are common in every problem.

Stage: The dynamic programming problem can be decomposed or divided into a sequence of smaller sub-problems called stages. At each stage there are a number of decision alternatives (courses of action), and a decision is made by selecting the most suitable alternative. Stages very often represent different time periods in the planning horizon of the problem, or places, people or other entities. For example, in the replacement problem each year is a stage, while in the salesman allocation problem each territory represents a stage.

State: Each stage in a dynamic programming problem has a certain number of states associated with it. These states represent the various conditions of the decision process at that stage. The variables which specify the condition of the decision process, or describe the status of the system at a particular stage, are called state variables. These variables provide information for analyzing the possible effects that the current decision could have upon future courses of action. At any stage of the decision-making process there could be a finite or infinite number of states. For example, a specific city is the state variable at any stage of the shortest-route problem.

Return function: At each stage, a decision is made which can affect the state of the system at the next stage and help in arriving at the optimal solution at the current stage. Every decision that is made has its own merit, in terms of the worth or benefit associated with it, and can be described in the form of an algebraic equation. This equation is generally called a return function, since for every set of decisions a return on each decision is obtained. The return function in general depends on the state variable as well as on the decision made at the particular stage. An optimal policy or decision at a stage yields the optimal (maximum or minimum) return for a given value of the state variable.

Figure 10.1 depicts the decision alternatives available at each stage for evaluation. The range of such decision alternatives and their associated returns at a particular stage is a function of the state input to the stage itself. The state input to a stage is the output from the previous (higher-numbered) stage, and the previous stage's output is a function of the state input to it and of the decision taken at that stage. Thus, to evaluate any stage we need to know the values of the state input to it (there may be more than one state input to a stage) and the decision alternatives and their associated returns at that stage.

Figure 10.1 Information flow between stages



10.2 General Algorithms for DP

Step 1: Identify the problem's decision variables and specify the objective function to be optimized under the given limitations, if any.
Step 2: Decompose (or divide) the given problem into a number of smaller sub-problems (or stages). Identify the state variables at each stage.
Step 3: Write down a general recursive relationship for computing the optimal policy. Decide whether to use the forward or the backward method to solve the problem.
Step 4: Construct appropriate tables to show the required values of the return function at each stage.
Step 5: Determine the overall optimal policy or decisions and its value at each stage.

Example 1: Shortest Route Problem (Stagecoach Problem)

A traveler wants to travel from city 1 (SF) to city 10 (NY). He/she has to travel by four different stage coaches in order to complete the journey. The different routes and travel times (in hours) are given in the network diagram. What is the minimum time taken by the traveler from SF to NY?

Solution:
The stages of the network problem, used to determine the shortest route between the origin (SF) and the destination (NY), are given below.



To solve the problem, define problem stages, decision variables, state variables, return
function, and transition function



From the results of the dynamic programming recursion, the minimum time taken by the traveler from SF to NY is 15 hours, obtained by following the route identified in the stage-by-stage computations.
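The backward recursion that produces this kind of result can be sketched in Python as below. Because the network diagram with the actual travel times is not reproduced in these notes, the times dictionary contains invented placeholder values (chosen so that the illustrative optimum also comes out to 15 hours); with the real data substituted in, the same code applies unchanged.

# Backward recursion for a four-leg stagecoach (shortest-route) problem.
times = {
    (1, 2): 4, (1, 3): 7, (1, 4): 6,
    (2, 5): 3, (2, 6): 6, (2, 7): 5, (3, 5): 5, (3, 6): 4, (3, 7): 6,
    (4, 5): 6, (4, 6): 5, (4, 7): 4,
    (5, 8): 5, (5, 9): 7, (6, 8): 6, (6, 9): 8, (7, 8): 7, (7, 9): 6,
    (8, 10): 3, (9, 10): 4,
}
stages = [[1], [2, 3, 4], [5, 6, 7], [8, 9], [10]]   # city 1 = SF, city 10 = NY

f = {10: 0}        # f[city] = minimum time from that city to NY (return function)
policy = {}        # policy[city] = best next city (optimal decision at that state)
for stage in reversed(stages[:-1]):                  # evaluate the stages backwards
    for city in stage:
        f[city], policy[city] = min(
            (times[(city, nxt)] + f[nxt], nxt)
            for nxt in list(f) if (city, nxt) in times
        )

route = [1]
while route[-1] in policy:                           # recover the optimal route forwards
    route.append(policy[route[-1]])
print("minimum time:", f[1], "hours, route:", route)   # 15 hours via [1, 2, 5, 8, 10]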

Problem 1

Determine the values of U1, U2, and U3 so as to:

Maximize Z = U1 · U2 · U3
subject to the constraint:
U1 + U2 + U3 = 10
and U1, U2, U3 ≥ 0



10.3 Non-Linear Programming Problems (NLPPs)

The general NLP problem can be stated mathematically in the following form:
Optimize (max or min) Z = f(x1, x2, …, xn)
subject to the constraints
gi(x1, x2, …, xn) {≤, =, ≥} bi ;   i = 1, 2, …, m
xj ≥ 0 for all j = 1, 2, …, n
Here f(x1, x2, …, xn) and gi(x1, x2, …, xn) are real-valued functions of the n decision variables, and at least one of them is non-linear.

Several methods have been developed for solving NLPPs:


 Unconstrained problems→ Simple Calculus
 Constrained with equality sign (=)
→ Substitution method
→ Lagrangian (λ) multiplier method
 Constrained with inequality(≤, ≥)
→ Kuhn-Tucker conditions

10.3.1 Unconstrained Optimization Problem


Consider the following profit (Break-even) analysis model;

Z = VP - Cf- VCv (profit function)………………(1)

Where V = volume

P = price

Cf = fixed cost

Cv = variable cost per unit

The volume of sales as a function of price (the demand function) is given as

V= 1,500 – 24.6P…………………………………..(2)

Let, Cf = $10,000; Cv= $8

 Substituting equation (2) and the values of Cf and Cv into the profit function, the equation becomes:

Z = 1,696.8P – 24.6P² – 22,000



 To solve the problem we differentiate the profit function Z with respect to P and set the derivative equal to zero:

dZ/dP = 1,696.8 – 49.2P = 0

P = $34.49
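As a quick cross-check of this calculus (a sketch, not part of the original notes; any symbolic or numerical tool would do), SymPy reproduces the same stationary price:

import sympy as sp

P = sp.symbols('P')
Z = 1696.8 * P - 24.6 * P**2 - 22000       # profit as a function of price
stationary = sp.solve(sp.diff(Z, P), P)    # dZ/dP = 1,696.8 - 49.2 P = 0
print(stationary)                          # [34.4878...]  ~  $34.49
print(Z.subs(P, stationary[0]))            # maximum profit at that price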

10.3.2 Constrained NLPPs with Equality


Method of Lagrange Multipliers

The method of Lagrange multipliers (λ) finds the stationary points (maxima, minima, etc.) of a function of several variables when the variables are subject to equality constraints. The Lagrange multiplier λ reflects the change in the objective function resulting from a unit change in the right-hand-side value of a constraint.

To find the stationary points of a function f of n variables given by:

f = f(x1, x2, …, xn) ……………………………………….(1)

subject to m constraints

g1(x1, x2, …, xn) = 0
g2(x1, x2, …, xn) = 0
⋮                       ……………………………. (2)
gm(x1, x2, …, xn) = 0

The method of Lagrange multipliers consists of the following steps:


1. Introduce m new variables, called Lagrange multipliers λ1, λ2, …, λm (one for each
constraint equation).
2. Form the function L, called the Lagrangian and defined as:

L = f(x1, x2, …, xn) – λ1g1(x1, x2, …, xn) – λ2g2(x1, x2, …, xn) – … – λmgm(x1, x2, …, xn) ……………(3)



3. Take the partial derivatives of the Lagrangian L with respect to each of the variables and set them equal to zero:

∂L/∂xj = 0 (j = 1, 2, …, n)  and  ∂L/∂λi = 0 (i = 1, 2, …, m) ……………… (4)

Solve these equations for x1, x2, …, xn, λ1, λ2, …, λm (n + m equations in n + m unknowns).

Note: Since λ1, λ2, …, λm appear in L only as multipliers of g1, g2, …, gm, the m equations ∂L/∂λi = 0 are just the constraint equations given by equation (2).

The solutions obtained for x1, x2, …, xn in step 3 are the values of these variables at the stationary points of the function f.

Example
The Furniture Company makes chairs and tables. The company has developed the following non-linear programming model to determine the optimal number of chairs (x1) and tables (x2) to produce each day in order to maximize profit, given a constraint on the available mahogany wood.
Maximize Z = 280x1 – 6x1² + 160x2 – 3x2² ($)
subject to 20x1 + 10x2 = 800 ft²
Determine the optimal solution to this model using
a) the substitution method, and
b) the method of Lagrange multipliers.

Solution:
(a) Substitution method
Solve the constraint for x1:
x1 = 40 – 0.5x2 …………………………… (1)
and substitute this expression into the objective function:
Z = 280(40 – 0.5x2) – 6(40 – 0.5x2)² + 160x2 – 3x2²
Z = 1,600 + 260x2 – 4.5x2²
Differentiate Z with respect to x2, set the derivative equal to zero, and solve for x2:
dZ/dx2 = 260 – 9x2 = 0
x2 = 260/9 = 28.88 tables
x1 = 40 – 0.5x2 = 40 – 0.5(28.88) = 25.56 chairs
Then, Z = 280x1 – 6x1² + 160x2 – 3x2²
= 280(25.56) – 6(25.56)² + 160(28.88) – 3(28.88)²
= $5,355.56
(b) Method of Lagrange multipliers
Develop the Lagrangian function:
L = 280x1 – 6x1² + 160x2 – 3x2² – λ(20x1 + 10x2 – 800)
Differentiate L with respect to x1, x2, and λ and set the derivatives equal to zero:

∂L/∂x1 = 280 – 12x1 – 20λ = 0 …………… (1)
∂L/∂x2 = 160 – 6x2 – 10λ = 0 …………… (2)
∂L/∂λ = –(20x1 + 10x2 – 800) = 0 …………… (3)

Solve the three equations simultaneously. Eliminating λ from equations (1) and (2) and substituting into (3) gives

x1 = 25.56 chairs, x2 = 28.88 tables

Then,

Z = $5,355.56.
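The same stationary-point equations can be solved symbolically, for example with SymPy (a sketch, not part of the original notes):

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
L = 280*x1 - 6*x1**2 + 160*x2 - 3*x2**2 - lam*(20*x1 + 10*x2 - 800)

# Stationary-point conditions: dL/dx1 = dL/dx2 = dL/dlam = 0
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
print(sol)                                        # x1 = 230/9 ~ 25.56, x2 = 260/9 ~ 28.88

Z = 280*x1 - 6*x1**2 + 160*x2 - 3*x2**2
print(sp.N(Z.subs(sol)))                          # approximately 5355.56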



10.3.3 Constrained NLPPs with Inequality

Kuhn-Tucker Conditions
 The optimal solution of a general non-linear programming problem can be identified
by using a set of conditions developed by Kuhn and Tucker.

Problem 2
Find the optimum solution of the following constrained multivariable problem.
Minimize Z = X1² + (X2 + 1)² + (X3 – 1)²
subject to the constraint:
X1 + 5X2 – 3X3 = 6
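As a numerical cross-check for Problem 2 (a sketch, not the intended analytical derivation), SciPy's SLSQP solver handles the equality-constrained problem directly; it returns approximately X = (0.4, 1.0, -0.2) with Z = 5.6:

import numpy as np
from scipy.optimize import minimize

objective = lambda x: x[0]**2 + (x[1] + 1)**2 + (x[2] - 1)**2
constraint = {'type': 'eq', 'fun': lambda x: x[0] + 5*x[1] - 3*x[2] - 6}

result = minimize(objective, x0=np.zeros(3), method='SLSQP', constraints=[constraint])
print(result.x, result.fun)      # approximately [0.4, 1.0, -0.2] and 5.6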

Problem 3
Obtain the necessary conditions for the optimum solution of the following problem.
Minimize f(X1, X2) = 3e^(2X1 + 1) + 2e^(X2 + 5)
subject to the constraint:
g(X1, X2) = X1 + X2 – 7 = 0

Problem 4
Solve the following problem by using the method of Lagrange multipliers.
Minimize Z = X1² + X2² + X3²
subject to the constraints:
X1 + X2 + 3X3 = 2
5X1 + 2X2 + X3 = 5

