
Stat 150 Stochastic Processes

Spring 2009

Lecture 18: Markov Chains: Examples


Lecturer: Jim Pitman

A nice collection of random walks on graphs is derived from the random movement of a chess piece on a chess board. The state space of each walk is the set of 8 × 8 = 64 squares on the board:

[Diagram: empty chess board, files a through h, ranks 1 through 8]

Each kind of chess piece at a state i on an otherwise empty chess board has some set of states j to which it can legally move. These states j are the neighbours of i in a graph whose vertices are the 64 squares of the board. Ignoring pawns, for each of the king, queen, rook, bishop and knight, if j can be reached in one step from i, the move can be reversed to reach i from j. Note the pattern of black and white squares, which is important in the following discussion.

The King

From an interior square the king has 8 possible moves, from an edge square 5, and from a corner square 3.

[Diagram: the king's moves from an interior square, an edge square, and a corner square]

In the graph, i ∼ j means that j is a king's move from i, and i is a king's move from j. Observe that N(i) := #(possible moves from i) ∈ {3, 5, 8}. From the general discussion of random walk on a graph in the previous lecture, the reversible equilibrium distribution is

    π_i = N(i) / Σ,  where  Σ := Σ_j N(j) = (6 × 6) · 8 + (4 × 6) · 5 + 4 · 3 = 420

(36 interior squares with 8 moves each, 24 edge squares with 5, and 4 corner squares with 3).
Question: Is this walk regular? That is, is there an m such that P^m(i, j) > 0 for all i, j? Yes: you can easily check that m = 7 works.

Rook and Queen

Very similar treatment; just change the values of N(i). Both walks are regular because P^2 is strictly positive: even the rook can get from any square to any other square in two moves.

Bishop

There are 32 white squares and 32 black squares, and the bishop stays on squares of one colour.

[Diagram: the bishop's moves along its diagonals]
Is this chain regular? No, because the bishop's walk is reducible: there are two disjoint sets of states, B and W. For each i, j ∈ B there is an N with P^N(i, j) > 0; in fact N = 3 works. Similarly, for each i, j ∈ W, P^3(i, j) > 0, and hence P^N(i, j) > 0 for all N ≥ 3. But B and W do not communicate.
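Both claims are easy to verify by computer. A sketch (not from the notes; the helper names are mine): every bishop move preserves the colour (r + c) mod 2, and within one colour class the boolean cube of the adjacency matrix is strictly positive.

```python
def bishop_moves(r, c, size=8):
    """Squares reachable in one bishop move from (r, c) on an empty board."""
    out = []
    for dr, dc in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        rr, cc = r + dr, c + dc
        while 0 <= rr < size and 0 <= cc < size:
            out.append((rr, cc))
            rr, cc = rr + dr, cc + dc
    return out

# Every bishop move preserves the colour, so B and W never communicate.
colour_preserved = all(
    (r + c) % 2 == (rr + cc) % 2
    for r in range(8) for c in range(8)
    for rr, cc in bishop_moves(r, c)
)

# Within one colour class, P^3 is strictly positive.
black = [(r, c) for r in range(8) for c in range(8) if (r + c) % 2 == 0]
idx = {sq: i for i, sq in enumerate(black)}
n = len(black)                                   # 32 squares
A = [[False] * n for _ in range(n)]
for sq in black:
    for nb in bishop_moves(*sq):
        A[idx[sq]][idx[nb]] = True

def bmul(X, Y):
    """Boolean matrix product on the 32 squares of one colour class."""
    return [[any(X[i][k] and Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

cube = bmul(bmul(A, A), A)
three_step_positive = all(all(row) for row in cube)   # P^3(i, j) > 0 on B
```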

Ordering the states so that the 32 squares of B come first (states 1, ..., 32) and the 32 squares of W come last (states 33, ..., 64), the transition matrix decomposes into two blocks, one for B → B and one for W → W, with zeros everywhere else. If the bishop


starts on a black square, it moves as a Markov chain with state space B, and the limit distribution is the equilibrium distribution for the random walk on B, with π_i = N(i)/Σ as before, but now Σ = Σ_{j ∈ B} N(j), a sum of 32 terms rather than a sum over all 64 squares. This is typical of a Markov chain with two disjoint communicating classes of states.

Knight

Is it regular? No: after an even number of steps n, P^n(i, j) > 0 only if j is of the same colour as i, and after an odd number only if the colours differ. So no matrix power of P is strictly positive in all entries, and hence the walk is not regular.

Periodicity

Fix a state i and look at the set {n : P^n(i, i) > 0}. For the knight's move,

    {n : P^n(i, i) > 0} = {2, 4, 6, 8, ...}.

You cannot return in an odd number of steps because the knight's square changes colour B → W → B at each step.

Definition: Say P is irreducible if for all states i, j there is an n with P^n(i, j) > 0. This condition is weaker than regularity, which requires a single n that works for all i, j at once.

To deal with periodic chains, suppose P is irreducible, and say a state i has period d ∈ {1, 2, 3, ...} if the greatest common divisor of {n : P^n(i, i) > 0} is d. E.g., d = 2 for each state i of the knight's walk.

Fact (for a proof see Feller, Vol. 1): in an irreducible chain, every state has the same period.

    d = 1: the chain is called aperiodic.
    d = 2, 3, ...: the chain is called periodic with period d.

In the periodic case there are d disjoint sets of states C1, ..., Cd (cyclically moving subclasses): P(i, j) > 0 only if i ∈ Cm and j ∈ Cm+1, with m + 1 taken mod d.

Lecture 18: Markov Chains: Examples

[Diagram: cyclically moving subclasses C1 → C2 → C3 → ... → C1]
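The knight's period can be checked numerically. A sketch (not in the original notes; helper names are mine): every knight move flips the square colour, and the return times to a corner in 1 through 8 steps are exactly {2, 4, 6, 8}, whose gcd is 2.

```python
from functools import reduce
from math import gcd

def knight_moves(r, c, size=8):
    """Squares reachable in one knight move from (r, c)."""
    jumps = ((1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2))
    return [(r + dr, c + dc) for dr, dc in jumps
            if 0 <= r + dr < size and 0 <= c + dc < size]

# Every knight move flips the square colour, so odd-length returns are impossible.
colour_flips = all(
    (r + c) % 2 != (rr + cc) % 2
    for r in range(8) for c in range(8)
    for rr, cc in knight_moves(r, c)
)

# Squares reachable in exactly n steps from the corner a1 = (0, 0), n = 1..8.
frontier = {(0, 0)}
return_times = []
for steps_taken in range(1, 9):
    frontier = {nb for sq in frontier for nb in knight_moves(*sq)}
    if (0, 0) in frontier:
        return_times.append(steps_taken)

period = reduce(gcd, return_times)   # gcd of {2, 4, 6, 8} = 2
```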

Note: If P(i, i) > 0 for some i, then i has period 1. Assuming P is irreducible, all states then have period 1 and the chain is aperiodic.

Fact: An irreducible, aperiodic chain on a finite state space S is regular.

Death and immigration chain

A population story. The state space is S = {0, 1, 2, ...}, and Xn ∈ S represents the number of individuals in some population at time n. Dynamics: between times n and n + 1, each individual present at time n dies with probability p and remains with probability q := 1 − p, independently of all the others; then an independent Poisson(λ) number of immigrants is added.

Problem: Describe the limit behaviour of Xn as n → ∞.

A natural first step is to write down the transition matrix. For i ≥ 0 and j ≥ 0, condition on the number k of survivors:
    P(i, j) = Σ_{k=0}^{i} (i choose k) q^k p^(i−k) · e^(−λ) λ^(j−k) / (j − k)! · 1(k ≤ j)
Now try to solve the equations πP = π directly. Very difficult! Idea: suppose we start at state 0, i.e. X0 = 0. Then

    X1 ~ Poi(λ)
    X2 ~ Poi(λq + λ), by the thinning and addition rules for the Poisson distribution
    X3 ~ Poi((λq + λ)q + λ)
    ...
    Xn ~ Poi((1 + q + ... + q^(n−1)) λ) = Poi(λ (1 − q^n) / (1 − q)) → Poi(λ/p).
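The recursion for the Poisson parameter can also be iterated directly. A minimal sketch, with illustrative parameter values (not from the lecture):

```python
lam, p = 2.0, 0.25     # illustrative immigration rate and death probability
q = 1 - p

# lambda_{n+1} = q * lambda_n + lam, starting from lambda_0 = 0 (i.e. X0 = 0)
lam_n = 0.0
trajectory = []
for _ in range(200):
    lam_n = q * lam_n + lam
    trajectory.append(lam_n)

limit = lam / p        # the parameter of the limiting Poisson distribution
```

With these values the iterates climb from λ = 2.0 toward λ/p = 8.0, matching the closed form λ(1 − q^n)/(1 − q).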


So given X0 = 0, Xn converges in distribution to Poi(λ/p).

If we start with X0 > 0, it is clear from the dynamics that we can write

    Xn = X̃n + Yn

where

    X̃n = the survivors of the initial population X0,
    Yn = a copy of the process given X0 = 0.

Note that X̃n → 0 (each initial individual eventually dies) and Yn converges in distribution to Poi(λ/p).
Conclusion: Xn converges in distribution to Poi(λ/p) no matter what X0 is.

This was an ad hoc analysis using special properties of the Poisson distribution. General idea: what do you expect in the limit? Answer: a stationary distribution π with πP = π. Here π is Poi(λ/p). Does πP = π? That is, does X0 ~ Poi(λ/p) imply X1 ~ Poi(λ/p), and so on? Check: say X0 ~ Poi(μ); then by the thinning and addition rules for the Poisson, as before, X1 ~ Poi(μq + λ). So

    X1 =d X0  ⟺  μq + λ = μ  ⟺  μ = λ/(1 − q) = λ/p.

Therefore the unique value of μ which makes Poi(μ) invariant for this chain is μ = λ/p. In fact, Poi(λ/p) is the unique stationary distribution for the chain. This follows from the previous result that, no matter what the distribution of X0, the distribution of Xn converges to Poi(λ/p): if π̃ were some other stationary distribution and we started the chain with X0 ~ π̃, then Xn ~ π̃ for every n, and the only way this can converge to Poi(λ/p) is if π̃ = Poi(λ/p).
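The convergence argument can be illustrated numerically: iterate a truncated version of the transition matrix from two different starting states and watch both distributions approach Poi(λ/p). A sketch with illustrative parameters and a hypothetical truncation level:

```python
from math import comb, exp, factorial

lam, p = 1.0, 0.5                  # illustrative parameters; lam/p = 2
q = 1 - p
N = 60                             # truncate the state space to {0, ..., N-1}

def P(i, j):
    """Death-and-immigration transition probability from i to j."""
    return sum(
        comb(i, k) * q**k * p**(i - k) * exp(-lam) * lam**(j - k) / factorial(j - k)
        for k in range(min(i, j) + 1)
    )

Pm = [[P(i, j) for j in range(N)] for i in range(N)]

def step(dist):
    """One step of the chain: dist -> dist * P on the truncated space."""
    return [sum(dist[i] * Pm[i][j] for i in range(N)) for j in range(N)]

d0 = [1.0] + [0.0] * (N - 1)                 # start at X0 = 0
d5 = [0.0] * 5 + [1.0] + [0.0] * (N - 6)     # start at X0 = 5
for _ in range(100):
    d0, d5 = step(d0), step(d5)

mu = lam / p
target = [exp(-mu) * mu**j / factorial(j) for j in range(N)]
gap = max(abs(a - b) for a, b in zip(d0, target))
```

After 100 steps both distributions agree with each other and with Poi(λ/p) to high precision, illustrating uniqueness of the stationary distribution.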
