
Worksheet # 2

Statistics 150, Pitman, Spring 2013

Topics: Conditional independence, Markov property, transition probability matrices, matrix methods, reversibility.

Reading: Secs 4.1, 4.2, 4.3, 6.5, and the topic notes linked to above. Note: if you have a strong background in linear algebra you may appreciate Secs 4.4, 4.5, 4.6 and the associated exercises, but this material is beyond the scope of the present course.

Exercises: 4.1, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 4.10, 6.9, 6.10, plus 1, 2, 3 below.

Homework: 6.9 (extra assumptions, to be stated, are needed for the limit probabilities), 4.9, 4.10, and from this sheet: 3 e), 3 m).

1. Random walk on a graph. This is the Markov chain of Exercise 6.9 with d_ij = d_ji ∈ {0, 1} for all i, j in some finite set of states S. Say there is an (undirected) edge between i and j iff d_ij = d_ji = 1. The graph structure G = (S, E) is the set of states S together with the set E of unordered pairs of states which form the edges. Under what condition on the graph is this random walk an irreducible Markov chain?

2. Random chess moves. Let S be the set of squares on an 8 × 8 chessboard, labeled a1, . . . , a8, b1, . . . , b8, . . . , h1, . . . , h8. For each of the chess pieces except a pawn (i.e. king, queen, rook, bishop, knight), define a corresponding graph on S by d_ij = 1 iff the piece can get from square i to square j in a single move (see Wikipedia: Chess). So random walk on this graph describes random moves of the piece on an empty chessboard. For each piece, (a) what is the number of communicating classes of states in the associated Markov chain? (b) for each communicating class, what is its period? (c) for each communicating class, what is its steady-state distribution?
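For Problem 2 the knight is the hardest piece to reason about by hand. The sketch below (plain Python; the square labels and move offsets are the standard knight jumps, written out here for illustration) builds the knight-move graph, checks by breadth-first search that it is connected, so there is a single communicating class and the walk is irreducible in the sense of Problem 1, and then computes degrees, since for random walk on a graph the steady-state probability of a vertex is proportional to its degree (cf. Exercise 6.9).

```python
from collections import deque

# Knight-move graph on an 8x8 board; squares labeled a1..h8.
FILES = "abcdefgh"
SQUARES = [f + str(r) for f in FILES for r in range(1, 9)]

def knight_neighbors(sq):
    """Squares a knight can reach from sq in one move."""
    f, r = FILES.index(sq[0]), int(sq[1])
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    out = []
    for df, dr in jumps:
        nf, nr = f + df, r + dr
        if 0 <= nf < 8 and 1 <= nr <= 8:
            out.append(FILES[nf] + str(nr))
    return out

# Connectivity check: breadth-first search from a1 should reach
# all 64 squares, giving one communicating class (Problem 1).
seen = {"a1"}
queue = deque(["a1"])
while queue:
    for nb in knight_neighbors(queue.popleft()):
        if nb not in seen:
            seen.add(nb)
            queue.append(nb)
print(len(seen))  # 64

# Degrees: pi_i = d_i / sum_j d_j for random walk on a graph.
deg = {sq: len(knight_neighbors(sq)) for sq in SQUARES}
total = sum(deg.values())
print(total)        # 336
print(deg["b1"])    # 3, so pi_{b1} = 3/336 = 1/112
```

The same scaffold works for the other pieces by swapping in their move rules; only the period question then distinguishes, e.g., the knight (bipartite graph, period 2) from the king.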

3. Steady-state implies positive recurrence. This exercise offers a simplified approach to the results in Section 6.3.2. Suppose a Markov chain (Xn) with finite or countable state space S and transition matrix P is irreducible (i.e. for all i and j there exists n with P^n_ij > 0) and that there exists a steady-state probability distribution π (i.e. π_i ≥ 0, Σ_i π_i = 1 and πP = π). Write P_i for probabilities and E_i for expectations conditioned on X0 = i, and P_π := Σ_i π_i P_i for steady-state probabilities. Let T_i be the least n ≥ 1 such that Xn = i, with the convention T_i = ∞ if there is no such n. Show as simply as possible, and without using any results from Section 6.3.2, that for all i, j ∈ S and n ≥ 1:

(a) π = πP^n
(b) π_i > 0
(c) P_π(Xn = i) = π_i
(d) P_π(Xm ≠ i for 0 ≤ m ≤ n − 1) = P_π(Xm ≠ i for 1 ≤ m ≤ n)
(e) P_π(X0 = i, Xm ≠ i for 1 ≤ m ≤ n − 1) = P_π(Xm ≠ i for 1 ≤ m ≤ n − 1, Xn = i)
(f) P_π(X0 = i, T_i ≥ n) = P_π(T_i = n)
(g) π_i P_i(T_i ≥ n) = P_π(T_i = n)
(h) π_i E_i(T_i) = P_π(T_i < ∞)
(i) E_i(T_i) < ∞ (i.e. every state is positive recurrent: Lemma 6.3.2)
(j) E_j(T_i) < ∞
(k) P_j(T_i < ∞) = 1
(l) E_i(T_i) = 1/π_i (and hence π is unique: Theorem 6.3.5)
(m) Compute E_i(T_i) for random knight moves on an empty chessboard, and i = b1 (the usual starting square for one of the white knights).
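Part (m) shows how part (l) gets used in practice: granting (l) and the fact that the steady-state distribution of random walk on a graph is proportional to vertex degree (Exercise 6.9), E_{b1}(T_{b1}) reduces to a degree count. A minimal numerical check, using exact arithmetic (the move offsets below are the standard knight jumps, repeated here so the snippet stands alone):

```python
from fractions import Fraction

FILES = "abcdefgh"

def degree(sq):
    """Number of legal knight moves from square sq on an empty board."""
    f, r = FILES.index(sq[0]), int(sq[1])
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return sum(1 for df, dr in jumps
               if 0 <= f + df < 8 and 1 <= r + dr <= 8)

# pi_{b1} = d_{b1} / sum of all degrees; by (l), E_{b1}(T_{b1}) = 1/pi_{b1}.
total = sum(degree(f + str(r)) for f in FILES for r in range(1, 9))
pi_b1 = Fraction(degree("b1"), total)
print(pi_b1)      # 1/112
print(1 / pi_b1)  # expected return time to b1: 112
```

This is a check on the arithmetic, not a substitute for the derivation of (l) that the homework asks for.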
