
MTH6141 Random Processes, Spring 2012 Solutions to Exercise Sheet 1

1. (a) Hopefully everyone was able to draw the transition graph.
   (b) i. P(X1 = 3 | X0 = 2) = p2,3 = 1/3 (read off the matrix).
       ii. P(X2 = 3 | X1 = 2) = p2,3 = 1/3.
       iii. P(X2 = 3 | X1 = 2, X0 = 1) = P(X2 = 3 | X1 = 2) = 1/3, by the Markov property.
       iv. P(X2 = 3, X1 = 2 | X0 = 1) = P(X1 = 2 | X0 = 1) P(X2 = 3 | X1 = 2, X0 = 1) = (1/4)(1/3) = 1/12.
       v. P(X2 = 3 | X0 = 2) = P(X2 = 3, X1 = 1 | X0 = 2) + P(X2 = 3, X1 = 3 | X0 = 2) = p2,1 p1,3 + p2,3 p3,3 = 13/36.

2. (a) Hopefully everyone was able to draw the transition graph.
   (b) i. The only way in which we can have Xt ∈ {2, 3} is for the process to alternate between states 2 and 3 for t steps. It follows that if t is even

               P(Xt ∈ {2, 3} | X0 = 2) = p2,3 p3,2 · · · p2,3 p3,2 = (p2,3 p3,2)^{t/2} = (1/6)^{t/2},

           while if t is odd

               P(Xt ∈ {2, 3} | X0 = 2) = p2,3 p3,2 · · · p2,3 = p2,3 (p3,2 p2,3)^{(t−1)/2} = (1/3)(1/6)^{(t−1)/2}.
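The full transition matrix for question 2 is not reprinted in these solutions, but the probabilities used above pin it down: p2,1 = p2,3 = p2,4 = 1/3 and p3,1 = p3,2 = 1/2, with states 1 and 4 absorbing. Treating that reconstruction as an assumption, the following sketch checks the two closed forms for P(Xt ∈ {2, 3} | X0 = 2) against exact matrix powers:

```python
from fractions import Fraction as F

# Transition matrix for question 2, reconstructed from the entries used in
# the solution: p2,1 = p2,3 = p2,4 = 1/3, p3,1 = p3,2 = 1/2, and states
# 1 and 4 absorbing.  Rows/columns are states 1..4.
P = [
    [F(1),    F(0),    F(0),    F(0)],     # state 1 (absorbing)
    [F(1, 3), F(0),    F(1, 3), F(1, 3)],  # state 2
    [F(1, 2), F(1, 2), F(0),    F(0)],     # state 3
    [F(0),    F(0),    F(0),    F(1)],     # state 4 (absorbing)
]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, t):
    n = len(A)
    R = [[F(i == j) for j in range(n)] for i in range(n)]
    for _ in range(t):
        R = mat_mul(R, A)
    return R

def p_in_23(t):
    """Exact P(Xt in {2,3} | X0 = 2) via the t-step transition matrix."""
    Pt = mat_pow(P, t)
    return Pt[1][1] + Pt[1][2]

# Compare with the closed forms derived above.
for t in range(1, 12):
    if t % 2 == 0:
        formula = F(1, 6) ** (t // 2)
    else:
        formula = F(1, 3) * F(1, 6) ** ((t - 1) // 2)
    assert p_in_23(t) == formula
print("closed forms for P(Xt in {2,3} | X0 = 2) confirmed")
```

Exact rationals (`fractions.Fraction`) are used rather than floats so the comparison with the closed form is an equality, not an approximation.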

       ii. Let F = min{i : Xi = 4} (with the convention F = ∞ if Xi ≠ 4 for all i). If F ≤ t then Xt = 4, since once the process reaches state 4 it can never leave it. It follows that

               P(Xt = 4 | X0 = 2) = Σ_{i=1}^{t} P(F = i | X0 = 2).

           We have that

               P(F = i | X0 = 2) = { 0                        if i is even,
                                   { (1/6)^{(i−1)/2} · (1/3)  if i is odd.

           So

               P(Xt = 4 | X0 = 2) = 1/3 + (1/6)(1/3) + (1/6)^2 (1/3) + · · · + (1/6)^{(t−1)/2} (1/3)

           (when t is odd; when t is even the last term has exponent (t − 2)/2).
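The case formula for P(F = i | X0 = 2) can be sanity-checked numerically. As before, this assumes the transition matrix reconstructed from the entries the solution uses (p2,1 = p2,3 = p2,4 = 1/3, p3,1 = p3,2 = 1/2, states 1 and 4 absorbing); since state 4 is absorbing, P(F = i) is just the increase in the absorption probability at step i:

```python
from fractions import Fraction as F

# Reconstructed matrix for question 2 (rows/columns are states 1..4); the
# probabilities used in the solution force p2,1 = p2,3 = p2,4 = 1/3 and
# p3,1 = p3,2 = 1/2, with states 1 and 4 absorbing.
P = [
    [F(1),    F(0),    F(0),    F(0)],
    [F(1, 3), F(0),    F(1, 3), F(1, 3)],
    [F(1, 2), F(1, 2), F(0),    F(0)],
    [F(0),    F(0),    F(0),    F(1)],
]

def step(dist):
    """One step of the chain: multiply a distribution row vector by P."""
    return [sum(dist[i] * P[i][j] for i in range(4)) for j in range(4)]

def p_absorbed_at_4(t):
    """Exact P(Xt = 4 | X0 = 2)."""
    dist = [F(0), F(1), F(0), F(0)]  # start in state 2
    for _ in range(t):
        dist = step(dist)
    return dist[3]

def f_dist(i):
    """P(F = i | X0 = 2): because state 4 is absorbing, this equals the
    increase in the absorption probability between steps i-1 and i."""
    return p_absorbed_at_4(i) - p_absorbed_at_4(i - 1)

for i in range(1, 10):
    expected = F(0) if i % 2 == 0 else F(1, 6) ** ((i - 1) // 2) * F(1, 3)
    assert f_dist(i) == expected
print("case formula for P(F = i | X0 = 2) confirmed")
```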

           Using the formula for the sum of a geometric progression we get

               P(Xt = 4 | X0 = 2) = { (2/5)(1 − (1/6)^{t/2})      if t is even,
                                    { (2/5)(1 − (1/6)^{(t+1)/2})  if t is odd.

   (c) As t → ∞ we have that

           P(Xt ∈ {2, 3} | X0 = 2) → 0,    P(Xt = 4 | X0 = 2) → 2/5.

       So the process will eventually leave states 2 and 3 (ending up in either 1 or 4). The probability that it ends up in state 4 is 2/5.

3. (a) We take as state space {0, 1, 2, 3, 4, 5}, where state i indicates that the level of discount is 10i%. Let Xt be the level of discount a customer has in year t. The Xt form a Markov chain (since in a given year the event "a claim is made" is independent of what happens in previous years). We set X1 = 0 as there is no discount initially. The transition probabilities are pi,0 = p for all i, pi,i+1 = 1 − p for 0 ≤ i ≤ 4, p5,5 = 1 − p, and all others 0.
   (b) Hopefully everyone was able to draw the transition graph.
   (c) The probability that X4 = 2 given that X1 = 0 can be found by considering all paths from state 0 to state 2 of length 3. The only such path is 0, 0, 1, 2 and so

           P(X4 = 2 | X1 = 0) = p0,0 p0,1 p1,2 = p(1 − p)^2.

   (d) The discount is 0 in any year if and only if a claim was made in the previous year. Hence P(X16 = 0) = p, as there is probability p of making a claim in any given year. More formally you could say

           P(X16 = 0) = p0,0 P(X15 = 0) + p1,0 P(X15 = 1) + · · · + p5,0 P(X15 = 5)
                      = p (P(X15 = 0) + P(X15 = 1) + · · · + P(X15 = 5))
                      = p,

       where the second line comes from the fact that pi,0 = p for all i.

4. For each ordered pair of rooms i, j which are joined by a passage take a state labelled (i, j). Now if X0, X1, X2, . . . is the original (non-Markov) process, let Yn be the ordered pair (Xn−1, Xn) (that is, a state now expresses both
your current room and the room you have just come from). The process Y1, Y2, . . . is a Markov chain since if Yt = (i, j) then Yt+1 is chosen uniformly at random from all pairs (j, k) where k is joined to j and k ≠ i (the last condition is because we forbid backtracking). This distribution is unchanged by any extra knowledge about Yt−1, Yt−2, . . . and so we do have a Markov chain.

5. (a) Using basic linear algebra we get that the eigenvalues of P are 1 and 1/2, with eigenvectors (1, 1) and (1, −1/2) respectively. It follows that:

           P = ( 1    1  ) ( 1   0  ) ( 1/3   2/3 )
               ( 1  −1/2 ) ( 0  1/2 ) ( 2/3  −2/3 )

       So

           P^5 = ( 1    1  ) ( 1   0   ) ( 1/3   2/3 )  =  ( 1/3 + 1/48   2/3 − 1/48 )
                 ( 1  −1/2 ) ( 0  1/32 ) ( 2/3  −2/3 )     ( 1/3 − 1/96   2/3 + 1/96 )
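The matrix P itself is not reprinted in these solutions, but the eigen-data above determines it uniquely as P = [[2/3, 1/3], [1/6, 5/6]]. Taking that reconstruction as an assumption, a quick exact computation confirms that the fifth power agrees with the diagonalisation:

```python
from fractions import Fraction as F

# The 2-state chain of question 5: the unique stochastic matrix with
# eigenvalues 1, 1/2 and eigenvectors (1, 1), (1, -1/2) -- a reconstruction,
# since the sheet's P is not reproduced here.
P = [[F(2, 3), F(1, 3)],
     [F(1, 6), F(5, 6)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Compute P^5 by repeated multiplication.
P5 = [[F(1), F(0)], [F(0), F(1)]]
for _ in range(5):
    P5 = mat_mul(P5, P)

# Entries predicted by the diagonalisation: 1/3 +- c/2^5 and 2/3 -+ c/2^5.
assert P5[0][0] == F(1, 3) + F(1, 48)   # = 17/48
assert P5[0][1] == F(2, 3) - F(1, 48)   # = 31/48
assert P5[1][0] == F(1, 3) - F(1, 96)   # = 31/96
assert P5[1][1] == F(2, 3) + F(1, 96)   # = 65/96
print("P^5 agrees with the diagonalisation")
```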

   (b) The matrix P^5 gives the 5-step transition probabilities, so we can read off the matrix that

           P(X5 = 1 | X0 = 1) = p11^(5) = 1/3 + 1/48 = 17/48,
           P(X5 = 1 | X0 = 2) = p21^(5) = 1/3 − 1/96 = 31/96.

   (c) Similarly to part (a) we have that

           P^100 = ( 1    1  ) ( 1     0     ) ( 1/3   2/3 )  =  ( 1/3 + 2/(3·2^100)   2/3 − 2/(3·2^100) )
                   ( 1  −1/2 ) ( 0  1/2^100 ) ( 2/3  −2/3 )     ( 1/3 − 1/(3·2^100)   2/3 + 1/(3·2^100) )

       and so

           P(X100 = 1 | X0 = 1) = 1/3 + 2/(3·2^100),
           P(X100 = 1 | X0 = 2) = 1/3 − 1/(3·2^100).
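Raising P to the 100th power directly would take 100 multiplications; repeated squaring does it in a handful. Assuming the same reconstructed matrix P = [[2/3, 1/3], [1/6, 5/6]] as in part (a), this sketch checks the closed form for the 100-step probabilities exactly:

```python
from fractions import Fraction as F

# Reconstructed matrix for question 5 (see part (a)): the unique stochastic
# matrix with eigenvalues 1, 1/2 and eigenvectors (1, 1), (1, -1/2).
P = [[F(2, 3), F(1, 3)],
     [F(1, 6), F(5, 6)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    """Exponentiation by repeated squaring, so n = 100 needs ~7 products."""
    R = [[F(1), F(0)], [F(0), F(1)]]
    while n:
        if n & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        n >>= 1
    return R

P100 = mat_pow(P, 100)

# Entries predicted by the diagonalisation: both rows are within a factor
# of 2^-100 of the limiting value 1/3 (first column) and 2/3 (second).
assert P100[0][0] == F(1, 3) + F(2, 3) / 2**100
assert P100[1][0] == F(1, 3) - F(1, 3) / 2**100
print("P^100 matches the closed form")
```

Exact fractions make the point of part (d) vivid: the two 100-step probabilities differ only in the 2^−100 correction term.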

   (d) The probabilities in part (b) are fairly close and those in part (c) are very close. This is saying that the process "forgets" its starting state, in the sense that P(X100 = 1 | X0 = i) does not depend very much on i. If you were to calculate P(Xn = 1 | X0 = 1) and P(Xn = 1 | X0 = 2) for even larger n they would be even closer, and in the limit as n → ∞ they are equal.
