p(medium | high sales forecast) = 0.42/0.63 = 0.6667
p(low | high sales forecast) = 0.03/0.63 = 0.0476
Steps to get the posterior probability (6/6)
(1) Construct a tree with branches representing all the possible
events which can occur and write the prior probabilities for these
events on the branches.
(2) Extend the tree by attaching to each branch a new branch
which represents the new information which you have obtained. On
each branch write the conditional probability of obtaining this
information given the circumstance represented by the preceding
branch.
(3) Obtain the joint probabilities by multiplying each prior
probability by the conditional probability which follows it on the
tree.
(4) Sum the joint probabilities.
(5) Divide the appropriate joint probability by the sum of the joint
probabilities to obtain the required posterior probability.
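The five steps above can be sketched as a short Python function; the numbers below are the priors and the assistant's "high"-forecast likelihoods used in these slides.

```python
# Steps (1)-(5): revise prior probabilities in the light of new information.
def posteriors(priors, likelihoods):
    # (1)-(3) joint probabilities: prior x conditional along each branch
    joints = {e: priors[e] * likelihoods[e] for e in priors}
    # (4) sum the joint probabilities
    total = sum(joints.values())
    # (5) divide each joint probability by the sum
    return {e: joints[e] / total for e in joints}

# Priors and p(high forecast | real sales level) from the sales example
priors = {"High": 0.2, "Medium": 0.7, "Low": 0.1}
like_high = {"High": 0.9, "Medium": 0.6, "Low": 0.3}

post = posteriors(priors, like_high)
print(round(post["Medium"], 4))  # 0.42/0.63 = 0.6667
print(round(post["Low"], 4))     # 0.03/0.63 = 0.0476
```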
Example 2: assistant's forecast is medium (1/3)
Prior probability:
A company's sales manager estimates that there is a 0.2 probability
that sales in the coming year will be high, a 0.7 probability that they
will be medium and a 0.1 probability that they will be low:
p (High)=0.2, p (Medium)=0.7, p (Low)=0.1
New information 1:
She then receives a sales forecast from her assistant and the
forecast suggests that sales will be medium.
Posterior probability:
What should be the sales manager's revised estimates of the
probability of (a) high sales, (b) medium sales and (c) low sales?
p (high | medium sales forecast)=?
p (medium| medium sales forecast)=?
p (low | medium sales forecast)=?
Example 2: assistant's forecast is medium (2/3)
The accuracy of the assistant's forecast "medium":
By examining the track record of the assistant's forecasts she is able
to obtain the following probabilities:
p(fore m|H)=0.05, p(fore m|M)=0.2, p(fore m|L)=0.1
p(fore m) = 0.2×0.05 + 0.7×0.2 + 0.1×0.1 = 0.01 + 0.14 + 0.01 = 0.16
Example 3: assistant's forecast is low (1/3)
Prior probability:
A company's sales manager estimates that there is a 0.2 probability
that sales in the coming year will be high, a 0.7 probability that they
will be medium and a 0.1 probability that they will be low:
p (High)=0.2, p (Medium)=0.7, p (Low)=0.1
New information 1:
She then receives a sales forecast from her assistant and the
forecast suggests that sales will be low.
Posterior probability:
What should be the sales manager's revised estimates of the
probability of (a) high sales, (b) medium sales and (c) low sales?
p (high | low sales forecast)=?
p (medium| low sales forecast)=?
p (low | low sales forecast)=?
Example 3: assistant's forecast is low (2/3)
The accuracy of the assistant's forecast "low":
By examining the track record of the assistant's forecasts she is able
to obtain the following probabilities:
p(fore l|H)=0.05, p(fore l|M)=0.2, p(fore l|L)=0.6
p(fore l) = 0.2×0.05 + 0.7×0.2 + 0.1×0.6 = 0.01 + 0.14 + 0.06 = 0.21
Summary of Examples 1-3:
Accuracy of the assistant's forecast (conditional probabilities):
(1) If the real sales level is high: p(fore h|H)=0.9, p(fore m|H)=0.05, p(fore l|H)=0.05
(2) If the real sales level is medium: p(fore h|M)=0.6, p(fore m|M)=0.2, p(fore l|M)=0.2
(3) If the real sales level is low: p(fore h|L)=0.3, p(fore m|L)=0.1, p(fore l|L)=0.6
Posterior probabilities (priors p(H)=0.2, p(M)=0.7, p(L)=0.1):
Forecast h: p(fore h)=0.63; p(H|fore h)=0.2857, p(M|fore h)=0.6667, p(L|fore h)=0.0476
Forecast m: p(fore m)=0.16; p(H|fore m)=0.0625, p(M|fore m)=0.875, p(L|fore m)=0.0625
Forecast l: p(fore l)=0.21; p(H|fore l)=0.0476, p(M|fore l)=0.6667, p(L|fore l)=0.2857
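The whole summary can be reproduced programmatically; a minimal sketch using the priors and the assistant's conditional accuracies from Examples 1-3:

```python
# Priors and the assistant's track record p(forecast | real sales level)
priors = {"H": 0.2, "M": 0.7, "L": 0.1}
accuracy = {
    "H": {"h": 0.9, "m": 0.05, "l": 0.05},  # forecasts when sales were high
    "M": {"h": 0.6, "m": 0.2,  "l": 0.2},   # ... when sales were medium
    "L": {"h": 0.3, "m": 0.1,  "l": 0.6},   # ... when sales were low
}

def revise(forecast):
    """Posterior p(state | forecast): prior x likelihood, normalised."""
    joints = {s: priors[s] * accuracy[s][forecast] for s in priors}
    pf = sum(joints.values())          # p(forecast)
    return pf, {s: joints[s] / pf for s in joints}

for f in ("h", "m", "l"):
    pf, post = revise(f)
    print(f, round(pf, 2), {s: round(p, 4) for s, p in post.items()})
```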
7.2 Bayes' theorem
If the events A and B are not independent, the multiplication rule is:
p(A and B) = p(A) p(B|A), so
p(B|A) = p(A and B) / p(A)
Since p(A and B) = p(B and A) = p(B) p(A|B), this gives Bayes' theorem:
p(B|A) = p(B) p(A|B) / p(A)
Conditional probability:
In the past, when sales turned out to be high, the forecast had
correctly predicted high sales on 75% of occasions:
p(high forecast | sales turned out high) = 0.75
However, in seasons when sales turned out to be low, the forecast
had wrongly predicted high sales on 20% of occasions:
p(high forecast | sales turned out low) = 0.2
Example 4: Posterior probability (7/9)
Example 4: Decision tree with posterior probability (8/9)
Example 4: Decision matrix with posterior probability (9/9)
Posterior probability:
How should HP adjust its prior probabilities based on the accuracy of
ABC's forecast?
p(H | forecast result) = ?
p(M | forecast result) = ?
p(L | forecast result) = ?
ABC's track record over its last 100 forecasts:

                    Forecast
Real sales level    h    m    l   Sum
H                  18    1    1    20
M                   5   40    5    50
L                   3    3   24    30
Sum                26   44   30   100

(1) How to adjust prior probabilities if forecast is h
We have the following conditional probabilities:
p(forecast h|H) = 18/20 = 0.9
p(forecast h|M) = 5/50 = 0.1
p(forecast h|L) = 3/30 = 0.1
p(fore h) = 0.18 + 0.05 + 0.03 = 0.26
(2) How to adjust prior probabilities if forecast is m
We have following conditional probabilities:
p (forecast m|H)=1/20=0.05
p (forecast m|M)=40/50=0.8
p (forecast m|L)=3/30=0.1
p(fore m)=0.01+0.4+0.03=0.44
(3) How to adjust prior probabilities if forecast is l
We have following conditional probabilities:
p (forecast l|H)=1/20=0.05
p (forecast l|M)=5/50=0.1
p (forecast l|L)=24/30=0.8
p(fore l)=0.01+0.05+0.24=0.3
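All three adjustments follow mechanically from the track-record counts; a sketch that estimates the likelihoods from the table and applies Bayes' theorem:

```python
# ABC's track record over 100 past forecasts (rows: real level, cols: forecast)
counts = {
    "H": {"h": 18, "m": 1,  "l": 1},
    "M": {"h": 5,  "m": 40, "l": 5},
    "L": {"h": 3,  "m": 3,  "l": 24},
}
priors = {"H": 0.2, "M": 0.5, "L": 0.3}   # HP's prior estimates

def adjust(forecast):
    # likelihoods p(forecast | state) estimated from the row counts
    like = {s: counts[s][forecast] / sum(counts[s].values()) for s in counts}
    joints = {s: priors[s] * like[s] for s in priors}
    pf = sum(joints.values())             # p(forecast)
    return pf, {s: joints[s] / pf for s in joints}

for f in ("h", "m", "l"):
    pf, post = adjust(f)
    print(f, round(pf, 2), {s: round(p, 3) for s, p in post.items()})
```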
How HP predicts ABC's forecast result before paying for the forecast
                    Forecast
Real sales level    h    m    l   Sum
H                  18    1    1    20
M                   5   40    5    50
L                   3    3   24    30
Sum                26   44   30   100
Posterior probabilities for each possible forecast (priors p(H)=0.2, p(M)=0.5, p(L)=0.3):
Forecast h: p(fore h)=0.26; p(H|fore h)=0.692, p(M|fore h)=0.192, p(L|fore h)=0.115
Forecast m: p(fore m)=0.44; p(H|fore m)=0.023, p(M|fore m)=0.909, p(L|fore m)=0.068
Forecast l: p(fore l)=0.30; p(H|fore l)=0.033, p(M|fore l)=0.167, p(L|fore l)=0.80
How much should HP pay ABC before knowing its forecast result?
The approach: work out how much each possible forecast result would
improve HP's EMV.
HP's EMV without forecast (p(High)=0.2, p(Medium)=0.5, p(Low)=0.3):
MU: pay-offs 55 (high), 10 (medium), -15 (low); EMV = 55×0.2 + 10×0.5 - 15×0.3 = 11.5
BD: pay-offs 25 (high), 30 (medium), 10 (low); EMV = 25×0.2 + 30×0.5 + 10×0.3 = 23
BA: pay-offs 40 (high), 20 (medium), 5 (low); EMV = 40×0.2 + 20×0.5 + 5×0.3 = 19.5
The best decision without a forecast is BD, with EMV = 23.
Fig.2-3 Completed decision tree (pay-off and probability)
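A quick check of the three EMVs, with the option names and pay-offs as in the tree:

```python
# HP's pay-off table (rows: option, cols: real sales level)
payoffs = {
    "MU": {"High": 55, "Medium": 10, "Low": -15},
    "BD": {"High": 25, "Medium": 30, "Low": 10},
    "BA": {"High": 40, "Medium": 20, "Low": 5},
}
priors = {"High": 0.2, "Medium": 0.5, "Low": 0.3}

# EMV of each option = probability-weighted pay-off
emv = {opt: sum(priors[s] * pay[s] for s in priors) for opt, pay in payoffs.items()}
best = max(emv, key=emv.get)
print({opt: round(v, 1) for opt, v in emv.items()})  # MU 11.5, BD 23.0, BA 19.5
print(best)                                          # BD
```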
(1) HP's EMV if forecast is h:
With posteriors p(H|fore h)=0.692, p(M|fore h)=0.192, p(L|fore h)=0.115, the best
decision is MU, with EMV = 55×0.692 + 10×0.192 - 15×0.115 = 38.255.
(2) HP's EMV if forecast is m:
With posteriors p(H|fore m)=0.023, p(M|fore m)=0.909, p(L|fore m)=0.068:
MU = 9.28, BD = 28.5, BA = 19.4; the best decision is BD, with EMV = 28.5.
(3) HP's EMV if forecast is l:
With posteriors p(H|fore l)=0.033, p(M|fore l)=0.167, p(L|fore l)=0.80:
MU = -8.515, BD = 13.835, BA = 8.66; the best decision is BD, with EMV = 13.835.
EMV with forecast = 0.26×38.255 + 0.44×28.5 + 0.30×13.835 = 26.6368
Improvement of EMV: 26.6368 - 23 = 3.6368 (EVII)
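The with-forecast EMV and the EVII can be verified end to end. Note that exact arithmetic gives 26.65 (and EVII = 3.65); the slide's 26.6368 carries the rounding of the three-decimal posteriors.

```python
# Track-record counts, priors and pay-offs from the slides above
counts = {"H": {"h": 18, "m": 1, "l": 1},
          "M": {"h": 5, "m": 40, "l": 5},
          "L": {"h": 3, "m": 3, "l": 24}}
priors = {"H": 0.2, "M": 0.5, "L": 0.3}
payoffs = {"MU": {"H": 55, "M": 10, "L": -15},
           "BD": {"H": 25, "M": 30, "L": 10},
           "BA": {"H": 40, "M": 20, "L": 5}}

emv_with = 0.0
for f in ("h", "m", "l"):
    # joint p(state and forecast f) = prior x likelihood from the counts
    joints = {s: priors[s] * counts[s][f] / sum(counts[s].values()) for s in counts}
    # branch contribution: p(f) x best posterior EMV = best sum of joint x pay-off
    emv_with += max(sum(joints[s] * pay[s] for s in joints)
                    for pay in payoffs.values())

emv_without = max(sum(priors[s] * pay[s] for s in priors)
                  for pay in payoffs.values())
evii = emv_with - emv_without
print(round(emv_with, 2), round(evii, 2))   # 26.65 3.65
```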
The expected value of perfect information (EVPI)
With perfect information HP would choose the best option in each state:
MU if sales are high (pay-off 55), BD if medium (30) and BD if low (10).
The EMV with perfect information is: 55×0.2 + 30×0.5 + 10×0.3 = 29
Improvement of EMV with perfect information: 29 - 23 = 6 (EVPI)
Fig.2-3 Completed decision tree (pay-off and probability)
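A compact check of the EVPI calculation:

```python
priors = {"High": 0.2, "Medium": 0.5, "Low": 0.3}
payoffs = {"MU": {"High": 55, "Medium": 10, "Low": -15},
           "BD": {"High": 25, "Medium": 30, "Low": 10},
           "BA": {"High": 40, "Medium": 20, "Low": 5}}

# With perfect information HP first learns the state, then picks the best pay-off
ev_perfect = sum(priors[s] * max(pay[s] for pay in payoffs.values()) for s in priors)
# Without information HP must commit to one option up front
emv_best = max(sum(priors[s] * pay[s] for s in priors) for pay in payoffs.values())
evpi = ev_perfect - emv_best
print(round(ev_perfect, 1), round(evpi, 1))  # 29.0 6.0
```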
Accuracy of New Information
Suppose the forecast accuracy of ABC improves as follows:

                    Forecast
Real sales level    h    m    l   Sum
H                  19    1    0    20
M                   2   46    2    50
L                   2    1   27    30
Sum                23   48   29   100
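Rerunning the EVII calculation with the improved counts shows how much more the better forecast is worth; a sketch (the EVPI of 6 remains the upper bound):

```python
def evii(counts, priors, payoffs):
    """Expected value of an imperfect forecast, via Bayesian revision."""
    emv_with = 0.0
    for f in ("h", "m", "l"):
        # joint p(state and forecast f) from priors and row-count likelihoods
        joints = {s: priors[s] * counts[s][f] / sum(counts[s].values())
                  for s in counts}
        emv_with += max(sum(joints[s] * pay[s] for s in joints)
                        for pay in payoffs.values())
    emv_without = max(sum(priors[s] * pay[s] for s in priors)
                      for pay in payoffs.values())
    return emv_with - emv_without

priors = {"H": 0.2, "M": 0.5, "L": 0.3}
payoffs = {"MU": {"H": 55, "M": 10, "L": -15},
           "BD": {"H": 25, "M": 30, "L": 10},
           "BA": {"H": 40, "M": 20, "L": 5}}
old = {"H": {"h": 18, "m": 1, "l": 1},
       "M": {"h": 5, "m": 40, "l": 5},
       "L": {"h": 3, "m": 3, "l": 24}}
new = {"H": {"h": 19, "m": 1, "l": 0},
       "M": {"h": 2, "m": 46, "l": 2},
       "L": {"h": 2, "m": 1, "l": 27}}

# EVII rises from 3.65 to 4.8, still below the EVPI of 6
print(round(evii(old, priors, payoffs), 2), round(evii(new, priors, payoffs), 2))
```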
7.6 Examples for Bayesian
analysis
Example 7: North Holt Farm
Example 8:
Example 7: North Holt Farm
(1/13)
A year ago a major potato producer suffered
serious losses when a virus affected the crop at
the company's North Holt farm.
Since then, steps have been taken to eradicate the
virus from the soil and the specialist who directed
these operations estimates, on the basis of
preliminary evidence, that there is a 70% chance
that the eradication program has been successful.
Example 7: North Holt Farm
(2/13)
The manager of the farm now has to decide
on his policy for the coming season and he
has identified two options:
(1) He could go ahead and plant a full crop of
potatoes. If the virus is still present an estimated
net loss of $20 000 will be incurred. However, if
the virus is absent, an estimated net return of $90
000 will be earned.
(2) He could avoid planting potatoes at all and
turn the entire acreage over to the alternative
crop. This would almost certainly lead to net
returns of $30 000.
Example 7: Decision matrix without perfect information (3/13)

                                    Decision
Situation of virus    Probability   Plant potatoes   Plant alternative crop
Virus present         0.3           -$20 000         $30 000
Virus absent          0.7           $90 000          $30 000
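A quick expected-value comparison of the two options, using the pay-offs from the text and the specialist's 70% estimate:

```python
p_absent = 0.7                     # specialist's estimate that the virus is gone
p_present = 1 - p_absent

ev_potatoes = p_absent * 90_000 + p_present * (-20_000)
ev_alternative = 30_000            # near-certain return from the alternative crop

print(round(ev_potatoes))          # 57000
print("plant potatoes" if ev_potatoes > ev_alternative else "plant alternative crop")
```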
Figure 3.3 Thompson's Complex Decision Tree: Using Sample Information (4/10)
Thompson Lumber has two decisions to make, with the second decision
dependent upon the outcome of the first:
First, whether or not to conduct their own marketing survey, at a cost
of $10,000, to help them decide which alternative to pursue (large,
small or no plant). The survey does not provide perfect information.
Then, to decide which type of plant to build.
Note that the $10,000 cost was subtracted from each of the first 10
branches. The $190,000 payoff was originally $200,000 and the
-$10,000 was originally $0.
(5/10)
If the survey is favorable: Large plant EMV = 0.78×$190,000 + 0.22×(-$190,000) = $106,400;
Small plant EMV = 0.78×$90,000 + 0.22×(-$30,000) = $63,600; No plant = -$10,000.
Best decision: build the large plant ($106,400).
If the survey is unfavorable: Large plant EMV = 0.27×$190,000 + 0.73×(-$190,000) = -$87,400;
Small plant EMV = 0.27×$90,000 + 0.73×(-$30,000) = $2,400; No plant = -$10,000.
Best decision: build the small plant ($2,400).
EMV of conducting the survey = 0.45×$106,400 + 0.55×$2,400 = $49,200
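The branch EMVs can be checked from the net pay-offs. The survey-result probabilities 0.45 and 0.55 used at the end are not shown in this extract but are implied by the $49,200 figure.

```python
# Net pay-offs after the $10,000 survey cost: (favorable, unfavorable) market
payoff = {"Large": (190_000, -190_000),
          "Small": (90_000, -30_000),
          "None":  (-10_000, -10_000)}

def branch_emv(p_fav):
    """EMV of each plant size given the market probabilities on this branch."""
    return {plant: p_fav * fav + (1 - p_fav) * unfav
            for plant, (fav, unfav) in payoff.items()}

fav = branch_emv(0.78)     # market probabilities after a favorable survey
unfav = branch_emv(0.27)   # ... after an unfavorable survey
print(round(fav["Large"]), round(fav["Small"]))      # 106400 63600
print(round(unfav["Large"]), round(unfav["Small"]))  # -87400 2400

# Survey-result probabilities (assumed, implied by the $49,200 total)
emv_survey = 0.45 * max(fav.values()) + 0.55 * max(unfav.values())
print(round(emv_survey))                             # 49200
```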
Expected Value of Sample Information (10/10)
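EVSI compares the survey branch with the no-information branch of the tree, which is not reproduced in this extract. Assuming the standard Thompson Lumber figures (0.50/0.50 prior market probabilities and gross pay-offs of $200,000/-$180,000 for the large plant, $100,000/-$20,000 for the small plant), a sketch:

```python
# Assumed (standard textbook) figures not shown in this extract
prior_fav = 0.50                         # prior p(favorable market)
gross = {"Large": (200_000, -180_000),   # (favorable, unfavorable) market
         "Small": (100_000, -20_000),
         "None":  (0, 0)}
survey_cost = 10_000
emv_with_survey = 49_200                 # survey-branch EMV (net of cost)

# Best EMV with no information at all
emv_no_info = max(prior_fav * f + (1 - prior_fav) * u for f, u in gross.values())

# EVSI = (EMV with sample information + its cost) - EMV without information
evsi = (emv_with_survey + survey_cost) - emv_no_info
print(round(emv_no_info), round(evsi))   # 40000 19200
```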