
7 Bayesian analysis

ZHAN, Wenjie (Professor)


School of Management, Huazhong
University of Science & Technology
Tel: 027-87556472
Email: wjzhan@mail.hust.edu.cn
7 Bayesian analysis
Objectives:
How to revise the prior probability in the
light of new information (Bayes' theorem).
The reliability of new information.
Assessing the value of new information.
7 Bayesian analysis
7.1 Introduction
7.2 Bayes' Theorem
7.3 Application of Bayesian analysis
7.4 Reliability of New Information
7.5 Assessing the Value of New Information
7.6 Examples for Bayesian analysis
7.1 Introduction
In this chapter we will look at the process of
revising initial probability estimates in the
light of new information.
The focus of our discussion will be Bayes'
theorem, which is named after an English
clergyman, Thomas Bayes, whose ideas were
published posthumously in 1763.
Bayes' theorem will be used as a normative
tool, telling us how we should revise our
probability assessments when new
information becomes available.
7.1 Introduction
In Bayes' theorem an initial probability
estimate is known as a prior probability.
When Bayes' theorem is used to modify a
prior probability in the light of new
information, the result is known as a
posterior probability.
Example 1: assistant's
forecast is high (1/6)
Prior probability:
A company's sales manager estimates that there is a 0.2 probability
that sales in the coming year will be high, a 0.7 probability that they
will be medium and a 0.1 probability that they will be low:
p (High)=0.2, p (Medium)=0.7, p (Low)=0.1
New information 1:
She then receives a sales forecast from her assistant and the
forecast suggests that sales will be high.
Posterior probability:
What should be the sales manager's revised estimates of the
probability of (a) high sales, (b) medium sales and (c) low sales?
p (High | high sales forecast)=?
p (Medium | high sales forecast)=?
p (Low | high sales forecast)=?
Example 1: assistant's
forecast is high (2/6)
The accuracy of the assistant's forecast
"high":
By examining the track record of the assistant's
forecasts she is able to obtain the following
probabilities:

P (high sales forecast | the market generated high sales) = 0.9


P (high sales forecast | the market generated medium sales) = 0.6
P (high sales forecast | the market generated low sales) = 0.3
Example 1: assistant's
forecast is high (3/6)
Example 1: assistant's
forecast is high (4/6)
The joint probabilities are:
P (high sales occur and high sales forecast) =
0.2 × 0.9 = 0.18
P (medium sales occur and high sales forecast)
= 0.7 × 0.6 = 0.42
P (low sales occur and high sales forecast) =
0.1 × 0.3 = 0.03
so the sum of the joint probabilities is 0.63
Example 1: assistant's
forecast is high (5/6)
Posterior probabilities:
P (high | high sales forecast )
= 0.18/0.63 = 0.2857
P (medium | high sales forecast )

= 0.42/0.63 = 0.6667
P (low | high sales forecast )

= 0.03/0.63 = 0.0476
Steps to get posterior probability
(6/6)
(1) Construct a tree with branches representing all the possible
events which can occur and write the prior probabilities for these
events on the branches.
(2) Extend the tree by attaching to each branch a new branch
which represents the new information which you have obtained. On
each branch write the conditional probability of obtaining this
information given the circumstance represented by the preceding
branch.
(3) Obtain the joint probabilities by multiplying each prior
probability by the conditional probability which follows it on the
tree.
(4) Sum the joint probabilities.
(5) Divide the appropriate joint probability by the sum of the joint
probabilities to obtain the required posterior probability.
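The five steps above can be sketched in a few lines of Python (a minimal illustration using the Example 1 figures; the variable names are mine):

```python
# Steps (1)-(2): prior probabilities, and the conditional probabilities of a
# "high" forecast given each actual sales level (Example 1 figures).
priors = {"high": 0.2, "medium": 0.7, "low": 0.1}
likelihoods = {"high": 0.9, "medium": 0.6, "low": 0.3}

# Step (3): joint probabilities; step (4): their sum, P(high forecast).
joints = {s: priors[s] * likelihoods[s] for s in priors}
total = sum(joints.values())          # 0.63

# Step (5): divide each joint probability by the sum.
posteriors = {s: joints[s] / total for s in joints}
print({s: round(p, 4) for s, p in posteriors.items()})
# high 0.2857, medium 0.6667, low 0.0476
```

The same five steps apply unchanged however many events the probability tree has.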
Example 2: assistant's
forecast is medium (1/3)
Prior probability:
A company's sales manager estimates that there is a 0.2 probability
that sales in the coming year will be high, a 0.7 probability that they
will be medium and a 0.1 probability that they will be low:
p (High)=0.2, p (Medium)=0.7, p (Low)=0.1
New information 1:
She then receives a sales forecast from her assistant and the
forecast suggests that sales will be medium.
Posterior probability:
What should be the sales manager's revised estimates of the
probability of (a) high sales, (b) medium sales and (c) low sales?
p (high | medium sales forecast)=?
p (medium| medium sales forecast)=?
p (low | medium sales forecast)=?
Example 2: assistant's
forecast is medium (2/3)
The accuracy of the assistant's forecast
"medium":
By examining the track record of the assistant's
forecasts she is able to obtain the following
probabilities:

P (medium sales forecast | market generated high sales) = 0.05


P (medium sales forecast | market generated medium sales) =0.2
P (medium sales forecast | market generated low sales) = 0.1
Example 2: assistant's
forecast is medium (3/3)
Prior Prob.   Cond. Prob.        Joint Prob.                Post. Prob.
p(H)=0.2      p(fore m|H)=0.05   p(fore m and H)=0.2×0.05   p(H|fore m)=0.01/0.16
                                 =0.01                      =0.0625
p(M)=0.7      p(fore m|M)=0.2    p(fore m and M)=0.7×0.2    p(M|fore m)=0.14/0.16
                                 =0.14                      =0.875
p(L)=0.1      p(fore m|L)=0.1    p(fore m and L)=0.1×0.1    p(L|fore m)=0.01/0.16
                                 =0.01                      =0.0625

p(fore m)=0.01+0.14+0.01=0.16
Example 3: assistant's
forecast is low (1/3)
Prior probability:
A company's sales manager estimates that there is a 0.2 probability
that sales in the coming year will be high, a 0.7 probability that they
will be medium and a 0.1 probability that they will be low:
p (High)=0.2, p (Medium)=0.7, p (Low)=0.1
New information 1:
She then receives a sales forecast from her assistant and the
forecast suggests that sales will be low.
Posterior probability:
What should be the sales manager's revised estimates of the
probability of (a) high sales, (b) medium sales and (c) low sales?
p (high | low sales forecast)=?
p (medium| low sales forecast)=?
p (low | low sales forecast)=?
Example 3: assistant's
forecast is low (2/3)
The accuracy of the assistant's forecast
"low":
By examining the track record of the assistant's
forecasts she is able to obtain the following
probabilities:

P (low sales forecast | market generated high sales) = 0.05


P (low sales forecast | market generated medium sales) =0.2
P (low sales forecast | market generated low sales) = 0.6
Example 3: assistant's
forecast is low (3/3)
Prior Prob.   Cond. Prob.        Joint Prob.                Post. Prob.
p(H)=0.2      p(fore l|H)=0.05   p(fore l and H)=0.2×0.05   p(H|fore l)=0.01/0.21
                                 =0.01                      =0.0476
p(M)=0.7      p(fore l|M)=0.2    p(fore l and M)=0.7×0.2    p(M|fore l)=0.14/0.21
                                 =0.14                      =0.6667
p(L)=0.1      p(fore l|L)=0.6    p(fore l and L)=0.1×0.6    p(L|fore l)=0.06/0.21
                                 =0.06                      =0.2857

p(fore l)=0.01+0.14+0.06=0.21
Summary of Examples 1-3:
Accuracy of assistant's forecast:
(1) Conditional Probabilities if real sales level is high:
p (forecast h| H) = 0.9
p (forecast m| H) = 0.05
p (forecast l |H) = 0.05

(2) Conditional Probabilities if real sales level is Medium:


p (forecast h| M) = 0.6
p (forecast m| M) =0.2
p (forecast l | M) =0.2

(3) Conditional Probabilities if real sales level is Low:


p (forecast h| L) = 0.3
p (forecast m| L) = 0.1
p (forecast l | L) = 0.6
Marg. Prob.     Prior Prob.  Cond. Prob.        Joint Prob.            Post. Prob.
p(fore h)=0.63  p(H)=0.2     p(fore h|H)=0.9    p(fore h and H)=0.18   p(H|fore h)=0.2857
                p(M)=0.7     p(fore h|M)=0.6    p(fore h and M)=0.42   p(M|fore h)=0.6667
                p(L)=0.1     p(fore h|L)=0.3    p(fore h and L)=0.03   p(L|fore h)=0.0476
p(fore m)=0.16  p(H)=0.2     p(fore m|H)=0.05   p(fore m and H)=0.01   p(H|fore m)=0.0625
                p(M)=0.7     p(fore m|M)=0.2    p(fore m and M)=0.14   p(M|fore m)=0.875
                p(L)=0.1     p(fore m|L)=0.1    p(fore m and L)=0.01   p(L|fore m)=0.0625
p(fore l)=0.21  p(H)=0.2     p(fore l|H)=0.05   p(fore l and H)=0.01   p(H|fore l)=0.0476
                p(M)=0.7     p(fore l|M)=0.2    p(fore l and M)=0.14   p(M|fore l)=0.6667
                p(L)=0.1     p(fore l|L)=0.6    p(fore l and L)=0.06   p(L|fore l)=0.2857
7.2 Bayes' theorem
If the events A and B are not
independent, the multiplication rule is:
p(A and B) = p(A) × p(B|A), so
p(B|A) = p(A and B) / p(A)
p(A and B) = p(B) × p(A|B), so
p(A|B) = p(A and B) / p(B)
7.2 Bayes' theorem
Let's suppose:
A1 = high;
A2 = medium;
A3 = low;
B = high forecast
We have:
A1, A2, and A3 are mutually exclusive events, and
P(A1) + P(A2) + P(A3) = 1; these values are the
prior probabilities.
Events Ai (i=1,2,3) and B are dependent.
7.2 Bayes' theorem
By examining the track record of the
assistant's forecasts, we have the following
conditional probabilities:
P(B|A1)=0.9
P(B|A2)=0.6
P(B|A3)=0.3
The problems here are to get the posterior
probability:
P(A1|B)=?
P(A2|B)=?
P(A3|B)=?
7.2 Bayes' theorem
According to the multiplication rule:
P(A1|B)= P (A1 and B) / P (B)
P(A2|B)= P (A2 and B) / P (B)
P(A3|B)= P (A3 and B) / P (B)

The two quantities we need are:
P(Ai and B), i=1,2,3, which is known as the joint
probability.
P(B).
7.2 Bayes' theorem
How to get P(Ai and B):
According to the multiplication rule, we have two
ways to get P(Ai and B):
(1) P(Ai and B) = P(Ai) × P(B|Ai)
(2) P(Ai and B) = P(B) × P(Ai|B)
We choose equation (1), because we know P(Ai)
and P(B|Ai) but do not know P(B) and P(Ai|B).
P(A1 and B) = P(A1) × P(B|A1) = 0.2 × 0.9 = 0.18
P(A2 and B) = P(A2) × P(B|A2) = 0.7 × 0.6 = 0.42
P(A3 and B) = P(A3) × P(B|A3) = 0.1 × 0.3 = 0.03
7.2 Bayes' theorem
Getting P(B) according to the theorem of total
probability: if A1, A2, ..., An are mutually exclusive and
constitute the sample space S, so that P(A1) +
P(A2) + ... + P(An) = 1, then for any event B in S:

P(B) = P(B and A1) + P(B and A2) + ... + P(B and An)
     = P(A1) × P(B|A1) + P(A2) × P(B|A2) + ... + P(An) × P(B|An)

That is, P(B) = the sum of the joint probabilities.

The equation must satisfy two conditions:
(1) A1, A2, ..., An are mutually exclusive;
(2) P(A1) + P(A2) + ... + P(An) = 1.
7.2 Bayes' theorem
Getting the posterior probability
according to the multiplication rule:
P(Ai|B) = P(Ai and B) / P(B)
        = P(Ai and B) / [P(A1 and B) + ... + P(An and B)]
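The whole update can be wrapped in one small helper (a sketch; the function name is mine). Applied to Example 2 (a "medium" forecast) it reproduces the posteriors 0.0625, 0.875, 0.0625:

```python
def bayes_update(priors, likelihoods):
    """priors: P(A_i); likelihoods: P(B|A_i). Returns the posteriors P(A_i|B)."""
    joints = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joints)               # P(B), by the theorem of total probability
    return [j / total for j in joints]

# Example 2: priors for (H, M, L) and P(medium forecast | each sales level)
post = bayes_update([0.2, 0.7, 0.1], [0.05, 0.2, 0.1])
print([round(p, 4) for p in post])    # [0.0625, 0.875, 0.0625]
```

The same call with the "high"-forecast likelihoods [0.9, 0.6, 0.3] gives the Example 1 posteriors.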
7.3 Application of Bayesian
analysis
The application of Bayesian analysis
involves the use of the posterior
probabilities, rather than the prior
probabilities, in the decision model.
Example 4: (1/9)
A retailer has to decide whether to hold a
large or a small stock of a product for the
coming summer season.
A payoff table for the courses of action and
outcomes is shown below:
Example 4: (2/9)
The following table shows the retailer's utilities
for the above sums of money (it can be
assumed that money is the only attribute which
he is concerned about):
Example 4: (3/9)
The retailer estimates that there is a 0.4
probability that sales will be low and a
0.6 probability that they will be high.
P(low) =0.4
P(high) =0.6
What level of stocks should he hold?
Example 4: (5/9)
Example 4: (6/9)

                              Decision
Future sales   Prior          Hold small   Hold large
level          probability    stocks       stocks
Low            0.4            0.5          0
High           0.6            0.8          1
Example 4: Decision with
posterior probability (6/9)
New information:
Before implementing his decision the retailer receives a sales
forecast which suggests that sales will be high.

Conditional Probability:
In the past when sales turned out to be high the forecast had
correctly predicted high sales on 75% of occasions.
p (high forecast | sales turned out to be high) = 0.75
However, in seasons when sales turned out to be low the
forecast had wrongly predicted high sales on 20% of
occasions.
p (high forecast | sales turned out to be low) = 0.2
Example 4: Posterior
probability (7/9)
Example 4: Decision tree with
posterior probability (8/9)
Example 4: Decision matrix
with posterior probability (9/9)

                              Decision
Future sales   Posterior      Hold small   Hold large
level          probability    stocks       stocks
Low            0.15           0.5          0
High           0.85           0.8          1
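The 0.15/0.85 posteriors and the resulting choice can be verified with a short sketch (assuming the rounded figures come from the exact values below; the names are mine):

```python
prior = {"low": 0.4, "high": 0.6}
like = {"low": 0.2, "high": 0.75}     # P(high forecast | actual sales level)

joints = {s: prior[s] * like[s] for s in prior}
total = sum(joints.values())          # P(high forecast) = 0.53
post = {s: joints[s] / total for s in joints}

utility = {"small": {"low": 0.5, "high": 0.8},
           "large": {"low": 0.0, "high": 1.0}}
eu = {a: sum(post[s] * utility[a][s] for s in post) for a in utility}
print({s: round(p, 2) for s, p in post.items()})   # low 0.15, high 0.85
print({a: round(v, 3) for a, v in eu.items()})     # large 0.849 beats small 0.755
```

Note that with the prior probabilities the expected utilities were 0.68 (small) and 0.60 (large), so the high sales forecast reverses the decision.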
7.4 Reliability of New Information
Example 5:
Suppose that a geologist is involved in a search for
new sources of natural gas in southern England.
In one particular location he is asked to estimate,
on the basis of a preliminary survey, the probability
that gas will be found in that location.
Having made his estimate, he will receive new
information from a test drilling.
Example 5:
Let us first consider a situation where the geologist is not
very confident about his estimate (prior probabilities) and
where the test drilling is very reliable.
The vague prior probability distribution that the
geologist can put forward is to assign probabilities of 0.5
to the two events "gas exists at the location" and "gas
does not exist at the location".
Suppose that having put forward the prior probabilities of
0.5 and 0.5, the result of the test drilling is received.
This indicates that gas is present and the result can be
regarded as 95% reliable. By this we mean that there is
only a 0.05 probability that it will give a misleading
indication.
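A minimal sketch of this update, with the 0.5/0.5 prior revised by a 95% reliable "gas present" indication (variable names are mine):

```python
prior_gas = 0.5                       # vague prior: gas / no gas equally likely
p_pos_given_gas = 0.95                # P(test says gas | gas present)
p_pos_given_no_gas = 0.05             # P(test says gas | gas absent)

joint_gas = prior_gas * p_pos_given_gas                # 0.475
joint_no_gas = (1 - prior_gas) * p_pos_given_no_gas    # 0.025
posterior_gas = joint_gas / (joint_gas + joint_no_gas)
print(round(posterior_gas, 2))        # 0.95
```

With a uniform prior, the posterior simply equals the reliability of the test.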
Probability tree
7.4 Reliability of new information
By considering the distance of the curves
from the diagonal line, it can be seen that the
more reliable the new information, the
greater will be the modification of the prior
probabilities.

Question: True or false?
At any given level of reliability, the modification is
relatively small where the prior probability is high,
while the modification is relatively large where the
prior probability is low.
7.4 Reliability of new information
Two extreme situations:
(1) If your prior probability of an event occurring is zero,
then the posterior probability will also be zero.
(2) If your prior probability of an event occurring is one,
then the posterior probability will also be one.
In these two extreme situations, whatever new
information you receive, no matter how reliable it is,
you will not change your prior probability into a
different posterior probability.
Question: Does Bayesian analysis work in the two
extreme situations?
7.5 Assessing the Value of
New Information
New information can remove or reduce the
uncertainty involved in a decision and thereby
increase the expected payoff/utility.
However, in many circumstances it may be expensive
to obtain information since it might involve, for
example, the use of scientific tests, the engagement
of the services of a consultant or the need to carry
out a market research survey.
If this is the case, then the question arises as to
whether or not it is worth obtaining the new
information. That is, we should assess the value of
new information.
The expected value of
perfect information (EVPI)
In many decision situations it is not possible to obtain
perfectly reliable information, but nevertheless the
concept of the expected value of perfect information
(EVPI) can still be useful.
We will use the following problem to show how the
value of perfect information can be measured.
For simplicity, we will assume that the decision maker
is neutral to risk so that the expected monetary value
criterion can be applied.
Example 6:
The Hewlett-Packard (HP) company specializes in
assembling and selling PC systems for use by family
doctor practices throughout the U.S.A. The company is
developing a new PC-based system. At present the
company is trying to decide on the manufacturing and
assembly process to be used. One aspect of this relates
to the keyboard that will be used in the system, which
will have specially labeled function keys.

The company has decided that it faces three
alternatives:
It can manufacture/assemble the keyboard itself.
It can buy the keyboards from a domestic manufacturer.
It can buy the keyboards from a manufacturer in the Far East.
Example 6:
Uncertainty in the decision problem
There is uncertainty as to which decision to take
because there is uncertainty over future sales.
To help simplify the situation, the company is
planning for one of three possible sales levels in the
future:
Low
Medium
High
Decision Tree
HP
  M  (manufacture):  High 0.2 → 55;  Medium 0.5 → 10;  Low 0.3 → -15
  BD (buy domestic): High 0.2 → 25;  Medium 0.5 → 30;  Low 0.3 → 10
  BA (buy abroad):   High 0.2 → 40;  Medium 0.5 → 20;  Low 0.3 → 5
Fig.2-3 Completed decision tree (pay-off and
probability)
Decision Matrix (Pay-off Table)
Table 2-3 Pay-off table: probability and pay-off ($000s)
                            Decision
Sales     Probability   Manufacture   Buy domestic   Buy abroad
level                   (M)           (BD)           (BA)
Low       0.3           -15           10             5
Medium    0.5           10            30             20
High      0.2           55            25             40
How to get the prior probability
Prior probability:
History data of sales levels for similar products
at HP:
         High (H)   Medium (M)   Low (L)   Sum
times    20         50           30        100

According to the relative frequency approach, we have the
prior probabilities as follows:
p (H)=20/100=0.2
p (M)=50/100=0.5
p (L)=30/100=0.3
To get new information
To get new information:
HP plans to ask a professional market research company
(ABC) to forecast the sales level of the new product for it.
ABC is famous for its forecast accuracy.

Posterior probability:
How to adjust HP's prior probabilities based on the
forecast of ABC:
p (H|forecast result)=?
p (M|forecast result)=?
p (L|forecast result)=?

Question: Does HP know the forecast result of the sales
level before paying the ABC company?
History data of forecast accuracy
for ABC
Table 1: History data of forecast results for ABC

                     forecast
Real sales level     h      m      l      Sum
H                    18     1      1      20
M                    5      40     5      50
L                    3      3      24     30
Sum                  26     44     30     100

Question: How to figure out the conditional
probabilities according to Table 1?
Conditional Probabilities for ABC
(1) Conditional Probabilities if real sales level is high:
p (forecast h|H)=18/20=0.9
p (forecast m|H)=1/20=0.05
p (forecast l |H)=1/20=0.05

(2) Conditional Probabilities if real sales level is Medium:


p (forecast h|M)=5/50=0.1
p (forecast m|M)=40/50=0.8
p (forecast l |M)=5/50=0.1

(3) Conditional Probabilities if real sales level is Low:


p (forecast h|L)=3/30=0.1
p (forecast m|L)=3/30=0.1
p (forecast l |L)=24/30=0.8
(1) How to adjust prior
probabilities if the forecast is h
We have following conditional probabilities:
p (forecast h|H)=18/20=0.9
p (forecast h|M)=5/50=0.1
p (forecast h|L)=3/30=0.1

How to get the following posterior probabilities:


p (H| forecast h)=?
p (M| forecast h)=?
p (L | forecast h)=?
(1) How to adjust prior
probabilities if the forecast is h
Prior Prob.   Cond. Prob.       Joint Prob.                Post. Prob.
p(H)=0.2      p(fore h|H)=0.9   p(fore h and H)=0.9×0.2    p(H|fore h)=0.18/0.26
                                =0.18                      =0.692
p(M)=0.5      p(fore h|M)=0.1   p(fore h and M)=0.1×0.5    p(M|fore h)=0.05/0.26
                                =0.05                      =0.192
p(L)=0.3      p(fore h|L)=0.1   p(fore h and L)=0.1×0.3    p(L|fore h)=0.03/0.26
                                =0.03                      =0.115

p(fore h)=0.18+0.05+0.03=0.26
(2) How to adjust prior
probabilities if the forecast is m
We have following conditional probabilities:
p (forecast m|H)=1/20=0.05
p (forecast m|M)=40/50=0.8
p (forecast m|L)=3/30=0.1

How to get the following posterior probabilities:


p (H|forecast m)=?
p (M| forecast m)=?
p (L | forecast m)=?
(2) How to adjust prior
probabilities if the forecast is m
Prior Prob.   Cond. Prob.        Joint Prob.                 Post. Prob.
p(H)=0.2      p(fore m|H)=0.05   p(fore m and H)=0.05×0.2    p(H|fore m)=0.01/0.44
                                 =0.01                       =0.022
p(M)=0.5      p(fore m|M)=0.8    p(fore m and M)=0.8×0.5     p(M|fore m)=0.4/0.44
                                 =0.4                        =0.909
p(L)=0.3      p(fore m|L)=0.1    p(fore m and L)=0.1×0.3     p(L|fore m)=0.03/0.44
                                 =0.03                       =0.068

p(fore m)=0.01+0.4+0.03=0.44
(3) How to adjust prior
probabilities if the forecast is l
We have following conditional probabilities:
p (forecast l|H)=1/20=0.05
p (forecast l|M)=5/50=0.1
p (forecast l|L)=24/30=0.8

How to get the following posterior probabilities:


p (H| forecast l)=?
p (M| forecast l)=?
p (L | forecast l)=?
(3) How to adjust prior
probabilities if the forecast is l
Prior Prob.   Cond. Prob.        Joint Prob.                 Post. Prob.
p(H)=0.2      p(fore l|H)=0.05   p(fore l and H)=0.05×0.2    p(H|fore l)=0.01/0.3
                                 =0.01                       =0.033
p(M)=0.5      p(fore l|M)=0.1    p(fore l and M)=0.1×0.5     p(M|fore l)=0.05/0.3
                                 =0.05                       =0.167
p(L)=0.3      p(fore l|L)=0.8    p(fore l and L)=0.8×0.3     p(L|fore l)=0.24/0.3
                                 =0.24                       =0.80

p(fore l)=0.01+0.05+0.24=0.3
How HP can predict the forecast
result of ABC before paying
                     forecast
Real sales level     h      m      l      Sum
H                    18     1      1      20
M                    5      40     5      50
L                    3      3      24     30
Sum                  26     44     30     100

Marginal probabilities:
p(fore h)=26/100=0.26
p(fore m)=44/100=0.44
p(fore l)=30/100=0.3
Marg. Prob.     Prior Prob.  Cond. Prob.        Joint Prob.            Post. Prob.
p(fore h)=0.26  p(H)=0.2     p(fore h|H)=0.9    p(fore h and H)=0.18   p(H|fore h)=0.692
                p(M)=0.5     p(fore h|M)=0.1    p(fore h and M)=0.05   p(M|fore h)=0.192
                p(L)=0.3     p(fore h|L)=0.1    p(fore h and L)=0.03   p(L|fore h)=0.115
p(fore m)=0.44  p(H)=0.2     p(fore m|H)=0.05   p(fore m and H)=0.01   p(H|fore m)=0.022
                p(M)=0.5     p(fore m|M)=0.8    p(fore m and M)=0.4    p(M|fore m)=0.909
                p(L)=0.3     p(fore m|L)=0.1    p(fore m and L)=0.03   p(L|fore m)=0.068
p(fore l)=0.3   p(H)=0.2     p(fore l|H)=0.05   p(fore l and H)=0.01   p(H|fore l)=0.033
                p(M)=0.5     p(fore l|M)=0.1    p(fore l and M)=0.05   p(M|fore l)=0.167
                p(L)=0.3     p(fore l|L)=0.8    p(fore l and L)=0.24   p(L|fore l)=0.80
How much should HP pay ABC
before knowing its forecast result?
The approach:
Determine how the different forecast results
improve HP's EMV.
HP's EMV without forecast
The EMV for the decision MU (manufacture) is:
55×0.2 + 10×0.5 - 15×0.3 = 11.5
The EMV for the decision BD (buy domestic) is:
25×0.2 + 30×0.5 + 10×0.3 = 23
The EMV for the decision BA (buy abroad) is:
40×0.2 + 20×0.5 + 5×0.3 = 19.5
Fig.2-3 Completed decision tree (pay-off and probability)
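The three prior-probability EMVs can be recomputed in a few lines (payoffs in $000s; the dictionary names are mine):

```python
priors = {"H": 0.2, "M": 0.5, "L": 0.3}
payoff = {"MU": {"H": 55, "M": 10, "L": -15},   # manufacture
          "BD": {"H": 25, "M": 30, "L": 10},    # buy domestic
          "BA": {"H": 40, "M": 20, "L": 5}}     # buy abroad

# EMV of each action = probability-weighted sum of its payoffs.
emv = {a: sum(priors[s] * payoff[a][s] for s in priors) for a in payoff}
print({a: round(v, 1) for a, v in emv.items()})   # MU 11.5, BD 23.0, BA 19.5
```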
(1) HP's EMV if forecast is h
The EMV for the decision MU is:
55×0.692 + 10×0.192 - 15×0.115 = 38.255
The EMV for the decision BD is:
25×0.692 + 30×0.192 + 10×0.115 = 24.21
The EMV for the decision BA is:
40×0.692 + 20×0.192 + 5×0.115 = 32.095
(EMVs computed with the posterior probabilities
p(H|fore h)=0.692, p(M|fore h)=0.192, p(L|fore h)=0.115;
MU is best.)
(2) HP's EMV if forecast is m
The EMV for the decision MU is:
55×0.022 + 10×0.909 - 15×0.068 = 9.28
The EMV for the decision BD is:
25×0.022 + 30×0.909 + 10×0.068 = 28.5
The EMV for the decision BA is:
40×0.022 + 20×0.909 + 5×0.068 = 19.4
(EMVs computed with the posterior probabilities
p(H|fore m)=0.022, p(M|fore m)=0.909, p(L|fore m)=0.068;
BD is best.)
(3) HP's EMV if forecast is l
The EMV for the decision MU is:
55×0.033 + 10×0.167 - 15×0.80 = -8.515
The EMV for the decision BD is:
25×0.033 + 30×0.167 + 10×0.80 = 13.835
The EMV for the decision BA is:
40×0.033 + 20×0.167 + 5×0.80 = 8.66
(EMVs computed with the posterior probabilities
p(H|fore l)=0.033, p(M|fore l)=0.167, p(L|fore l)=0.80;
BD is best.)
HP's EMV with forecast before
paying
Marg. Prob.       Best action   EMV with Post. Prob.
p(fore h)=0.26    MU            38.255
p(fore m)=0.44    BD            28.5
p(fore l)=0.3     BD            13.835

The EMV with forecast: 38.255×0.26 + 28.5×0.44 + 13.835×0.3 = 26.6368


The expected value of
imperfect information (EVII)
HP's highest EMV without forecast:
The EMV for the decision BD is:
25×0.2 + 30×0.5 + 10×0.3 = 23

HP's EMV with forecast, before paying:
38.255×0.26 + 28.5×0.44 + 13.835×0.3 = 26.6368

Improvement of EMV:
26.6368 - 23 = 3.6368 (EVII)
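The whole EVII calculation can be checked end to end (payoffs in $000s; the names are mine). Note that exact arithmetic gives EVII = 3.65; the 3.6368 figure reflects posteriors rounded to three decimals:

```python
priors = {"H": 0.2, "M": 0.5, "L": 0.3}
cond = {"h": {"H": 0.9,  "M": 0.1, "L": 0.1},   # P(forecast | real level)
        "m": {"H": 0.05, "M": 0.8, "L": 0.1},
        "l": {"H": 0.05, "M": 0.1, "L": 0.8}}
payoff = {"MU": {"H": 55, "M": 10, "L": -15},
          "BD": {"H": 25, "M": 30, "L": 10},
          "BA": {"H": 40, "M": 20, "L": 5}}

emv_with = 0.0
for f, like in cond.items():
    joints = {s: priors[s] * like[s] for s in priors}
    # sum(joint × payoff) equals P(forecast f) times the posterior EMV,
    # so the best action can be picked without normalising.
    emv_with += max(sum(joints[s] * payoff[a][s] for s in joints)
                    for a in payoff)

emv_without = max(sum(priors[s] * payoff[a][s] for s in priors) for a in payoff)
print(round(emv_with, 2), round(emv_with - emv_without, 2))   # 26.65 and 3.65
```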
The expected value of
perfect information (EVPI)
With perfect information HP would choose the best
action in each state: MU if sales are High (55), BD if
Medium (30), BD if Low (10).
The EMV with perfect information is:
55×0.2 + 30×0.5 + 10×0.3 = 29
Improvement of EMV with perfect information:
29 - 23 = 6 (EVPI)
Fig.2-3 Completed decision tree
(pay-off and probability)
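The EVPI figure follows from taking the best payoff in each state of nature (payoffs in $000s; a sketch with my own names):

```python
priors = {"H": 0.2, "M": 0.5, "L": 0.3}
payoff = {"MU": {"H": 55, "M": 10, "L": -15},
          "BD": {"H": 25, "M": 30, "L": 10},
          "BA": {"H": 40, "M": 20, "L": 5}}

# With perfect information we pick the best action per state, then average.
ev_wpi = sum(priors[s] * max(payoff[a][s] for a in payoff) for s in priors)
best_emv = max(sum(priors[s] * payoff[a][s] for s in priors) for a in payoff)
print(round(ev_wpi, 1), round(best_emv, 1))   # 29.0 and 23.0, so EVPI = 6
```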
Accuracy of New Information
Suppose the forecast accuracy of ABC improves
as follows:

                     forecast
Real sales level     h      m      l      Sum
H                    19     1      0      20
M                    2      46     2      50
L                    2      1      27     30
Sum                  23     48     29     100
7.6 Examples for Bayesian
analysis
Example 7: North Holt Farm
Example 8: Thompson Lumber
Example 7: North Holt Farm
(1/13)
A year ago a major potato producer suffered
serious losses when a virus affected the crop at
the company's North Holt farm.
Since then, steps have been taken to eradicate the
virus from the soil and the specialist who directed
these operations estimates, on the basis of
preliminary evidence, that there is a 70% chance
that the eradication program has been successful.
Example 7: North Holt Farm
(2/13)
The manager of the farm now has to decide
on his policy for the coming season and he
has identified two options:
(1) He could go ahead and plant a full crop of
potatoes. If the virus is still present an estimated
net loss of $20 000 will be incurred. However, if
the virus is absent, an estimated net return of $90
000 will be earned.
(2) He could avoid planting potatoes at all and
turn the entire acreage over to the alternative
crop. This would almost certainly lead to net
returns of $30 000.
Example 7: Decision matrix
without perfect information (3/13)
Situation of Virus   Probability   Plant potatoes   Plant alternative
Virus is absent      0.7           $90 000          $30 000
Virus is present     0.3           -$20 000         $30 000
EMV                                $57 000          $30 000
For simplicity, we will assume that the decision maker is neutral to
risk so that the expected monetary value criterion can be applied.
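Under risk neutrality, the prior-probability EMVs are easy to verify (a minimal sketch; the names are mine):

```python
p_absent, p_present = 0.7, 0.3
emv_potatoes = p_absent * 90_000 + p_present * (-20_000)   # plant a full crop
emv_alternative = 30_000                                   # near-certain return
print(round(emv_potatoes))   # 57000, so plant potatoes
```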
Example 7: North Holt Farm
New Information (4/13)
The manager is now informed that Ceres Laboratories
could carry out a test on the farm which will indicate
whether or not the virus is still present in the soil.
The manager has no idea as to how accurate the
indication will be or the fee which Ceres will charge.
However, he decides initially to work on the assumption
that the test is perfectly accurate. If this is the case,
what is the maximum amount that it would be worth
paying Ceres to carry out the test?
Example 7: Decision tree without
/with perfect information (5/13)
Example 7: Assessing the value
of perfect information (6/13)
Example 7: The expected value of
imperfect information (EVII) (7/13)
Suppose that, after making further enquiries, the
farm manager discovers that the Ceres test is not
perfectly reliable.
If the virus is still present in the soil the test has only
a 90% chance of detecting it, while if the virus has
been eliminated there is a 20% chance that the test
will incorrectly indicate its presence.
How much would it now be worth paying for the test?
To answer this question it is necessary to determine
the expected value of imperfect information (EVII).
Example 7: To get the conditional
probability (8/13)
If the virus is still present in the soil the test has
only a 90% chance of detecting it.
P (Test indicates present | Virus is present) =0.9
P (Test indicates absent | Virus is present) =0.1

If the virus has been eliminated there is a 20%


chance that the test will incorrectly indicate its
presence.
P (Test indicates present | Virus is absent) =0.2
P (Test indicates absent | Virus is absent) =0.8
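From these conditional probabilities and the 0.7/0.3 prior, the posteriors for each possible test result can be sketched as follows (the names are mine):

```python
prior = {"present": 0.3, "absent": 0.7}
p_pos = {"present": 0.9, "absent": 0.2}   # P(test indicates present | state)

joints_pos = {s: prior[s] * p_pos[s] for s in prior}
p_test_pos = sum(joints_pos.values())               # P(test says present) = 0.41
post_pos = {s: joints_pos[s] / p_test_pos for s in joints_pos}

joints_neg = {s: prior[s] * (1 - p_pos[s]) for s in prior}
p_test_neg = sum(joints_neg.values())               # P(test says absent) = 0.59
post_neg = {s: joints_neg[s] / p_test_neg for s in joints_neg}

print(round(post_pos["present"], 3))   # 0.659: virus present | test says present
print(round(post_neg["present"], 3))   # 0.051: virus present | test says absent
```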
Example 7: Decision tree without/with
imperfect information (9/13)
Example 7: If test indicates virus
present (10/13)
Example 7: If test indicates virus
absent (11/13)
Example 7: Decision tree without/with
imperfect information (12/13)
Example 7: EVII (13/13)
Example 8: Thompson Lumber
(1/10)
Table 1: Payoff Table for Thompson Lumber

                          STATE OF NATURE
ALTERNATIVE               FAVORABLE     UNFAVORABLE    EMV ($)
                          MARKET ($)    MARKET ($)
Construct a large plant   200,000       -180,000       10,000
Construct a small plant   100,000       -20,000        40,000
Do nothing                0             0              0
Probabilities             0.50          0.50
Largest EMV: $40,000 (construct a small plant)
(2/10)
                          State of Nature
Alternative               Favorable     Unfavorable    EMV
                          Market ($)    Market ($)
Construct a large plant   200,000       -180,000       10,000
Construct a small plant   100,000       -20,000        40,000
Do nothing                0             0              0
Perfect Information       200,000       0              EVwPI = 100,000
Compute EVwPI:
The best alternative with a favorable market is to build a large
plant with a payoff of $200,000. In an unfavorable market the
choice is to do nothing with a payoff of $0.
EVwPI = ($200,000)(0.5) + ($0)(0.5) = $100,000
Compute EVPI = EVwPI - max EMV = $100,000 - $40,000 = $60,000
The most we should pay for any information is $60,000.
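The EVwPI and EVPI figures can be reproduced with a short sketch (the dictionary names are mine):

```python
p = 0.5   # P(favorable) = P(unfavorable) = 0.5
payoffs = {"large": (200_000, -180_000),
           "small": (100_000, -20_000),
           "nothing": (0, 0)}

emv = {a: p * fav + (1 - p) * unfav for a, (fav, unfav) in payoffs.items()}
ev_wpi = p * 200_000 + (1 - p) * 0   # best payoff under each state of nature
evpi = ev_wpi - max(emv.values())
print(emv["small"], ev_wpi, evpi)    # 40000.0 100000.0 60000.0
```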
Thompson's Decision Tree (3/10)
                                              Payoffs
Construct      Favorable Market (0.5)        $200,000
Large Plant 1
               Unfavorable Market (0.5)     -$180,000
Construct      Favorable Market (0.5)        $100,000
Small Plant 2
               Unfavorable Market (0.5)      -$20,000
Do Nothing                                         $0
EMV for node 1 = (0.5)($200,000) + (0.5)(-$180,000) = $10,000
EMV for node 2 = (0.5)($100,000) + (0.5)(-$20,000) = $40,000
The alternative with the best EMV is selected.
Figure 3.3
Thompson's Complex Decision Tree:
Using Sample Information (4/10)
Thompson Lumber has two decisions to make,
with the second decision dependent upon the
outcome of the first:
First, whether or not to conduct their own marketing
survey, at a cost of $10,000, to help them decide
which alternative to pursue (large, small or no plant).
The survey does not provide perfect information.
Then, to decide which type of plant to build.
Note that the $10,000 cost was subtracted from each
of the first 10 branches. The $190,000 payoff was
originally $200,000 and the -$10,000 was originally $0.
(5/10)
Thompson's Complex Decision Tree
First Decision Point: survey or not. Second Decision Point: plant size.
Payoffs ($):
Conduct survey - favorable result:
  Large Plant (node 2): Favorable Market (0.78)    190,000
                        Unfavorable Market (0.22) -190,000
  Small Plant (node 3): Favorable Market (0.78)     90,000
                        Unfavorable Market (0.22)  -30,000
  No Plant                                         -10,000
Conduct survey - negative result:
  Large Plant (node 4): Favorable Market (0.27)    190,000
                        Unfavorable Market (0.73) -190,000
  Small Plant (node 5): Favorable Market (0.27)     90,000
                        Unfavorable Market (0.73)  -30,000
  No Plant                                         -10,000
No survey:
  Large Plant (node 6): Favorable Market (0.50)    200,000
                        Unfavorable Market (0.50) -180,000
  Small Plant (node 7): Favorable Market (0.50)    100,000
                        Unfavorable Market (0.50)  -20,000
  No Plant                                               0
(6/10)
Thompson's Complex Decision Tree
1. Given favorable survey results
(market favorable for sheds),
EMV(node 2) = EMV(large plant | positive survey)
= (0.78)($190,000) + (0.22)(-$190,000) = $106,400
EMV(node 3) = EMV(small plant | positive survey)
= (0.78)($90,000) + (0.22)(-$30,000) = $63,600
EMV for no plant = -$10,000

2. Given negative survey results,
EMV(node 4) = EMV(large plant | negative survey)
= (0.27)($190,000) + (0.73)(-$190,000) = -$87,400
EMV(node 5) = EMV(small plant | negative survey)
= (0.27)($90,000) + (0.73)(-$30,000) = $2,400
EMV for no plant = -$10,000
Thompson's Complex Decision Tree
(7/10)
3. Compute the expected value of the market survey,
EMV(node 1) = EMV(conduct survey)
= (0.45)($106,400) + (0.55)($2,400)
= $47,880 + $1,320 = $49,200
4. If the market survey is not conducted,
EMV(node 6) = EMV(large plant)
= (0.50)($200,000) + (0.50)(-$180,000) = $10,000
EMV(node 7) = EMV(small plant)
= (0.50)($100,000) + (0.50)(-$20,000) = $40,000
EMV for no plant = $0
5. Best choice is to seek marketing information.
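The node EMVs can be reproduced directly (a sketch; the survey cost is already netted out of the payoffs, so "no plant" is -$10,000 after a survey):

```python
def emv(p_fav, fav, unfav):
    """Two-outcome expected monetary value."""
    return p_fav * fav + (1 - p_fav) * unfav

node2 = emv(0.78, 190_000, -190_000)   # large plant | favorable survey
node3 = emv(0.78, 90_000, -30_000)     # small plant | favorable survey
node4 = emv(0.27, 190_000, -190_000)   # large plant | negative survey
node5 = emv(0.27, 90_000, -30_000)     # small plant | negative survey

# At each second decision point, the best of large/small/no plant is taken.
node1 = 0.45 * max(node2, node3, -10_000) + 0.55 * max(node4, node5, -10_000)
print(round(node1))   # 49200
```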
(8/10)
Thompson's Complex Decision Tree
EMVs rolled back through the tree ($):
Conduct survey (EMV = 49,200):
  Favorable result (0.45), best EMV = 106,400:
    Large Plant: 106,400;  Small Plant: 63,600;  No Plant: -10,000
  Negative result (0.55), best EMV = 2,400:
    Large Plant: -87,400;  Small Plant: 2,400;   No Plant: -10,000
Do not conduct survey (best EMV = 40,000):
  Large Plant: 10,000;  Small Plant: 40,000;  No Plant: 0
Thompson's Complex Decision Tree (9/10)
Expected Value of Sample Information
(10/10)
Thompson wants to know the actual value of
doing the survey.
EVSI = (expected value with sample information,
assuming no cost to gather it)
     - (expected value of the best decision
without sample information)
     = (EV with sample information + cost)
     - (EV without sample information)

EVSI = ($49,200 + $10,000) - $40,000 = $19,200

Thompson could have paid up to $19,200 for a market
study and still come out ahead, since the survey actually
costs $10,000.
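The EVSI arithmetic in a few lines (a sketch using the figures above):

```python
ev_with_sample = 49_200     # EMV at node 1 (payoffs net of the $10,000 survey)
survey_cost = 10_000
ev_without_sample = 40_000  # best EMV with no survey (small plant)

# Add the cost back, since EVSI assumes the information is free to gather.
evsi = (ev_with_sample + survey_cost) - ev_without_sample
print(evsi)   # 19200
```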
Homework 6-1:
(2) In January a sales manager estimates that there
is only a 30% chance that the sales of a new
product will exceed one million units in the coming
year. However, he is then handed the results of a
sales forecast. This suggests that the sales will
exceed one million units. The probability that this
indication will be given when sales will exceed a
million units is 0.8. However, the probability that the
forecast will give this indication when sales will not
exceed a million units is 0.4. Revise the sales
manager's estimate in the light of the sales forecast.
Homework 6-2:
(7) A company has just received some state of the art electronic
equipment from an overseas supplier. The packaging has been
damaged during delivery and the company must decide whether to
accept the equipment. If the equipment itself has not been damaged,
it could be sold for a profit of $10 000. However, if the batch is
accepted and it turns out to be damaged, a loss of $5000 will be
made. Rejection of the equipment will lead to no change in the
company's profit. After a cursory inspection, the company's engineer
estimates that there is a 60% chance that the equipment has not been
damaged. The company has another option. The equipment could be
tested by a local specialist company.
Q1: If their test is perfectly reliable, how much would it be worth paying
for the perfect information?
Q2: If their test is not perfectly reliable and has only an 80% chance
of giving a correct indication, how much would it be worth paying for
the imperfect information?
Assume that the company's objective is to maximize expected
profit and that it is risk neutral.
The end
