
36-217, Spring 2015

Homework 3
Due February 12

1. You choose n random points in succession on the unit circle. (That is, on the perimeter, not in the
disk.) The position of each point is chosen uniformly over the circle, and the positions of all points are
independent. Let A be the event that all the points are contained in some semicircle. Let Ai denote
the event that all the points lie in the semicircle beginning at the i-th point and going clockwise for
180°, i = 1, . . . , n.
(a) Express A in terms of the Ai's.
(b) Are the Ai's disjoint?
(c) Show that
P(A) = n (1/2)^(n-1).
See the Figure below for examples.
You can assume that the probability that, for any two points, the line connecting them
goes through the center of the circle is 0.
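If helpful, the formula in part (c) can be checked empirically. Below is a rough Monte Carlo sketch; the function names, trial count, and seed are illustrative choices, not part of the assignment:

```python
import math
import random

TWO_PI = 2 * math.pi

def all_in_semicircle(angles):
    """True if every point lies in the semicircle that starts at
    some point of the sample and extends half the circle."""
    return any(
        all((b - a) % TWO_PI <= math.pi for b in angles)
        for a in angles
    )

def estimate_p(n, trials=100_000, seed=0):
    """Monte Carlo estimate of P(A) for n uniform points on the circle."""
    rng = random.Random(seed)
    hits = sum(
        all_in_semicircle([rng.uniform(0, TWO_PI) for _ in range(n)])
        for _ in range(trials)
    )
    return hits / trials
```

Comparing estimate_p(n) against the claimed formula for a few small n is a quick sanity check of an answer to part (c).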
2. A hunter has two hunting dogs. One day, on the trail of some animal, the hunter comes to a place
where the road diverges into two paths. He knows that each dog will choose the correct
path with probability p, independently of the other. The hunter decides to let each dog choose a
path and, if they agree, take that one, and if they disagree, to randomly pick a path. Is this strategy
better than just letting one of the two dogs decide on a path?
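The two strategies can also be compared empirically. A minimal simulation sketch of the two-dog strategy (the function name and parameters are my own; True encodes the correct path):

```python
import random

def two_dog_strategy(p, trials=100_000, seed=1):
    """Each dog picks the correct path with probability p, independently.
    If they agree, take that path; otherwise flip a fair coin."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        d1 = rng.random() < p  # dog 1 is right
        d2 = rng.random() < p  # dog 2 is right
        if d1 == d2:
            choice = d1
        else:
            choice = rng.random() < 0.5  # tie-break with a fair coin
        correct += choice
    return correct / trials
```

Running this for several values of p and comparing the returned frequency with p (the single-dog success probability) suggests how the two strategies compare.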
3. Suppose we want to generate the outcome of a flip of a fair coin but that all we have at our disposal
is a biased coin which lands on heads with some unknown probability 0 < p < 1, which need not be
equal to 1/2. Consider the following procedure for accomplishing our task:
(a) Flip the coin.
(b) Flip the coin again.
(c) If both flips land heads or both land tails, return to step (a).
(d) Let the result of the last flip be the result of the experiment.
Show that the result is equally likely to be either heads or tails. (Hint: you may want to use the
following result, proved in class: suppose A and B are disjoint events of positive probability such that
P(A) + P(B) < 1 and that independent trials of the experiment are performed. Then the probability
that A is observed before B is P(A)/(P(A) + P(B)).)
A simpler procedure would be to continue to flip until the last two flips are different and then let
the result be the outcome of the last flip. Show that this procedure, however, will not result in equally
likely outcomes if p ≠ 1/2.
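The pairing procedure in steps (a)-(d) is easy to simulate. A sketch (names are mine; True encodes heads):

```python
import random

def von_neumann_flip(p, rng):
    """Flip the biased coin in pairs until the two flips differ;
    report the result of the last flip of that pair."""
    while True:
        a = rng.random() < p
        b = rng.random() < p
        if a != b:
            return b

def estimate_heads(p, trials=100_000, seed=2):
    """Empirical frequency of heads produced by the procedure."""
    rng = random.Random(seed)
    return sum(von_neumann_flip(p, rng) for _ in range(trials)) / trials
```

Trying several biases p and checking the returned frequency gives an empirical preview of the result to be proved.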
4. A simplified model for the movement of the price of a stock supposes that on each day the stock's
price either moves up one unit with probability p or moves down one unit with probability 1 − p.
The changes on different days are assumed to be independent.

(a) What is the probability that after two days the stock will be at its original price?
(b) What is the probability that after three days the stock's price will have increased by one unit?
(c) Given that after three days the stock price has increased by one unit, what is the probability
that it went up on the first day?
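Because the daily moves are independent, parts (a)-(c) can be sanity-checked by brute-force enumeration of all up/down paths. A sketch using an illustrative value p = 1/3 (the problem leaves p general):

```python
from fractions import Fraction
from itertools import product

def path_probs(days, p):
    """Yield every sequence of ±1 daily moves with its probability."""
    for moves in product([+1, -1], repeat=days):
        prob = Fraction(1)
        for m in moves:
            prob *= p if m == +1 else 1 - p
        yield moves, prob

p = Fraction(1, 3)  # illustrative value, not given in the problem
# (a) back to the original price after 2 days
a = sum(pr for mv, pr in path_probs(2, p) if sum(mv) == 0)
# (b) up one unit after 3 days
b = sum(pr for mv, pr in path_probs(3, p) if sum(mv) == 1)
# (c) P(up on day 1 | up one unit after 3 days)
c = sum(pr for mv, pr in path_probs(3, p)
        if sum(mv) == 1 and mv[0] == +1) / b
```

Exact rational arithmetic (Fraction) makes the enumeration directly comparable with a closed-form answer.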
5. Sarah and Dick go target shooting. Suppose that each of Sarah's shots hits the target with probability
0 < p1 < 1, while each of Dick's shots hits the target with probability 0 < p2 < 1. Suppose they shoot
simultaneously at the same target and that the events that they each hit the target are independent.
If the wooden target is knocked over (indicating that it was hit), what is the probability that
(a) Both shots hit the target?
(b) Dick's shot hit the target?
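A candidate answer can be checked by simulating the shots and conditioning on the target being hit. A rough sketch with illustrative values of p1 and p2 (the problem keeps them general):

```python
import random

def hit_probabilities(p1, p2, trials=200_000, seed=4):
    """Simulate simultaneous independent shots and condition on a hit.
    Returns estimates of (P(both hit | hit), P(Dick hit | hit))."""
    rng = random.Random(seed)
    n_hit = n_both = n_dick = 0
    for _ in range(trials):
        sarah = rng.random() < p1
        dick = rng.random() < p2
        if sarah or dick:        # target knocked over
            n_hit += 1
            n_both += sarah and dick
            n_dick += dick
    return n_both / n_hit, n_dick / n_hit
```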
6. Consider a sequence of mutually independent trials, where each trial is equally likely to result in
any of the outcomes 1, 2 or 3. The sequence ends as soon as the three outcomes have appeared at
least once. Given that the outcome 3 is the last of the three outcomes to occur, find the conditional
probability that
(a) the first trial results in outcome 1;
(b) the first two trials both result in outcome 1.
Hint: use the events F1, S1 and L3, the events that the first trial results in outcome 1, that the second
trial results in outcome 1, and that the last trial results in outcome 3, respectively.
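The stopping rule is simple enough to simulate, which gives an empirical target for the conditional probabilities in (a) and (b). A sketch (names and trial counts are mine):

```python
import random

def roll_until_all_seen(rng):
    """Roll a fair 3-sided die until 1, 2 and 3 have all appeared;
    return the full sequence of outcomes."""
    seq, seen = [], set()
    while len(seen) < 3:
        x = rng.randrange(1, 4)
        seq.append(x)
        seen.add(x)
    return seq

def conditional_estimates(trials=200_000, seed=3):
    """Estimate P(first = 1 | 3 last) and P(first two = 1 | 3 last)."""
    rng = random.Random(seed)
    n_cond = n_a = n_b = 0
    for _ in range(trials):
        seq = roll_until_all_seen(rng)
        if seq[-1] == 3:          # outcome 3 was the last to appear
            n_cond += 1
            if seq[0] == 1:
                n_a += 1
                if seq[1] == 1:   # sequence always has length >= 3
                    n_b += 1
    return n_a / n_cond, n_b / n_cond
```

Note that the sequence ends exactly when the third distinct outcome first appears, so its final entry is the last outcome to occur.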
7. More on Bayes' theorem: prior, posterior and probability updating. Let A1, A2, . . . , An be
a partition of the sample space and let B and C be two events of positive probability such that
P(B ∩ C) > 0.
(a) The events A and B are said to be conditionally independent given C if
P(A ∩ B | C) = P(A | C) P(B | C).
Read about conditional independence in section 1.5 of the textbook. Show that if A and B are
conditionally independent given C then
P(A | B ∩ C) = P(A | C).
(b) Show the following conditional version of Bayes' theorem:
P(Ai | B ∩ C) = P(Ai | C) P(B | Ai ∩ C) / ∑_{j=1}^{n} P(Aj | C) P(B | Aj ∩ C).
(c) (Prior and posterior probabilities.) Suppose that a box contains one fair coin and a coin with a
head on each side. Suppose also that one coin is selected at random and then, when it is tossed,
a head is obtained. Let B1 be the event that the selected coin is fair, B2 the event that the coin
has two heads and H1 the event that a head is obtained when the selected coin is tossed. Use
Bayes' theorem to compute
P(B1 | H1).
Compare P(B1), the prior probability that the selected coin is fair, with P(B1 | H1), the posterior
probability that the selected coin is fair (the posterior probability takes into account the
additional information that the outcome of the coin toss was observed to be heads).

(d) (Posterior probability updating.) Now suppose that the same coin is tossed a second time and
that another head is obtained. Let H2 be the event that a head is obtained when the selected
coin is tossed for the second time. Assume that H1 and H2 are conditionally independent given
B1 and B2 (this means that the outcomes of two tosses of the same coin are independent). We
are now interested in P(B1 | H1 ∩ H2), the conditional probability that the selected coin is the fair
coin given that both the first and the second toss resulted in heads. There are two equivalent
ways of calculating P(B1 | H1 ∩ H2), described below (notice that in both cases you will need to
use the fact that H1 and H2 are conditionally independent given both B1 and B2).
i. Compute P(B1 | H1 ∩ H2) directly, using the version of Bayes' theorem given in class.
ii. Compute P(B1 | H1 ∩ H2) in two steps: first compute P(B1 | H1), the posterior probability
of B1 given H1. This conditional probability now serves as a new prior probability for the
next stage of the experiment, in which the same coin is tossed a second time. In order to
compute the new posterior probability you have to use the conditional version of Bayes'
theorem from part (b).
Compare P(B1 | H1 ∩ H2) with both P(B1 | H1) and P(B1). Explain.
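The two-step updating scheme in (d.ii) can be made concrete with exact arithmetic. A minimal sketch of the coin example (the dictionary encoding and function name are my own):

```python
from fractions import Fraction

# Problem 7(c)-(d): B1 = fair coin, B2 = two-headed coin, chosen at random.
prior = {"B1": Fraction(1, 2), "B2": Fraction(1, 2)}
p_heads = {"B1": Fraction(1, 2), "B2": Fraction(1)}  # P(heads | coin)

def bayes_update(prior, likelihood):
    """One application of Bayes' theorem: posterior ∝ prior × likelihood,
    renormalized to sum to 1."""
    unnorm = {b: prior[b] * likelihood[b] for b in prior}
    total = sum(unnorm.values())
    return {b: v / total for b, v in unnorm.items()}

post1 = bayes_update(prior, p_heads)   # P(Bi | H1): posterior after one head
post2 = bayes_update(post1, p_heads)   # P(Bi | H1 ∩ H2): yesterday's posterior
                                       # is today's prior

# Direct one-shot computation, valid by conditional independence of H1, H2:
direct = (prior["B1"] * p_heads["B1"] ** 2) / (
    prior["B1"] * p_heads["B1"] ** 2 + prior["B2"] * p_heads["B2"] ** 2
)
```

The sequential posterior post2["B1"] agrees with the direct computation, which is exactly the equivalence the two methods in (d) are meant to illustrate.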