
Chapter 4: Discrete Time Markov Chains (DTMC)

1 Definitions and Examples

1.1 Definitions

Definition 1.1 A stochastic process {Xn, n ≥ 0} is called a DTMC with state space S = {0, 1, 2, ∙∙∙} if

P(Xn+1 = j | Xn = i, Xn−1 = in−1, ∙∙∙, X1 = i1, X0 = i0) = P(Xn+1 = j | Xn = i) = Pij

for all states (i0, ∙∙∙, in−1), i, j and all n ≥ 0.

Notes:

(i.) The value Pij is called the one-step transition probability from i to j. Pij ≥ 0 for all i, j ≥ 0, and ∑_j Pij = 1 for all i = 0, 1, ∙∙∙.

(ii.) A DTMC has stationary transition probabilities: P(Xn+1 = j | Xn = i) = P(X1 = j | X0 = i) for all n, i and j.

(iii.) Let P = [Pij ] denote the matrix of (one-step) transition probabilities Pij , so that

          0     1     2    ∙∙∙
   0   |  P00   P01   P02  ∙∙∙ |
   1   |  P10   P11   P12  ∙∙∙ |
   ..  |  ..    ..    ..       |
   i   |  Pi0   Pi1   Pi2  ∙∙∙ |
   ..  |  ..    ..    ..       |
The sum in each row is 1.

(iv.) Is a DTMC completely characterized by P? Not quite: P contains only conditional probabilities, so by itself it does not determine P(X0 = i0, ∙∙∙, Xn = in). We also need to specify P(X0 = i) = ai; the row vector a = (ai) is known as the initial distribution of the DTMC. With that,

P(X0 = i0, ∙∙∙, Xn = in) = P(Xn = in | X0 = i0, ∙∙∙, Xn−1 = in−1) P(X0 = i0, ∙∙∙, Xn−1 = in−1)
                         = Pin−1,in P(X0 = i0, ∙∙∙, Xn−1 = in−1)
                         = ai0 Pi0,i1 ∙∙∙ Pin−1,in.

Often we know the initial state X0 = i0; then ai0 = 1.
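The path-probability formula is easy to evaluate numerically. Below is a minimal sketch in Python; the two-state matrix P and the initial distribution a are made-up numbers for illustration only.

```python
import numpy as np

# Hypothetical two-state chain: each row of P must sum to 1.
P = np.array([[0.8, 0.2],
              [0.7, 0.3]])
a = np.array([0.5, 0.5])              # initial distribution a_i = P(X0 = i)

def path_prob(path, P, a):
    """P(X0 = i0, ..., Xn = in) = a_{i0} P_{i0,i1} ... P_{in-1,in}."""
    prob = a[path[0]]
    for i, j in zip(path, path[1:]):  # multiply the one-step probabilities
        prob *= P[i, j]
    return prob

print(path_prob([0, 0, 1, 0], P, a))  # 0.5 * 0.8 * 0.2 * 0.7 = 0.056
```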

(v.) A DTMC can also be represented graphically by its transition diagram, a directed graph with one node for each state and a directed arc from node i to node j if Pij > 0.

(vi.) The procedure to formulate a stochastic process as a DTMC: (1) Define the state Xn (verbally) and the state space S of the process; (2) Argue for the Markov property P(Xn+1 = j | Xn = i, Xn−1 = in−1, ∙∙∙, X1 = i1, X0 = i0) = P(Xn+1 = j | Xn = i); (3) Calculate P(Xn+1 = j | Xn = i), which needs to be stationary, i.e., P(Xn+1 = j | Xn = i) = Pij. You can represent a Markov chain using a diagram with nodes, arrows, and probabilities, or using the one-step transition probability matrix P.

1.2 Examples of Markov Chains

Two-state DTMC: S = {1, 2} with

          1      2
   1  |   α     1−α |
   2  |  1−β     β  |

(i.) Weather forecast model: Let Xn be the weather on day n, with Xn = 1 if sunny and Xn = 2 if rainy; say α = 0.8 and β = 0.3.

(ii.) Stock price model: Let Xn = 1 if the price goes up on day n, and Xn = 2 otherwise.

(iii.) Clinical experiment for determining which of two drugs is better. Drug 1 (2) is effective with probability p1 (p2). Ethical reasons compel us to use the better drug more often. To ensure this, the play-the-winner rule is used: the first patient is given either drug 1 or 2 at random; if the nth patient is given drug i and it is observed to be effective (ineffective) for that patient, then the same (the other) drug is given to patient n + 1. Let Xn be the drug administered to the nth patient. Then {Xn} is a two-state DTMC with α = p1 and β = p2. Here n is not time but a patient index.

(iv.) A machine can be either up or down on the nth day. Pup,up = pu and Pdown,down = pd .

Slot Machine: Each play costs $1 and successive plays are independent. You win $j with probability pj, j = 0, 1, ∙∙∙, L, and you play until bankruptcy or until you have at least $K, where K < L. Let Xn be the amount you have after the nth play if that amount is < K, and Xn = K otherwise. Then {Xn} is a MC with

           0     1     2    ∙∙∙   K−1          K
   0   |   1     0     0    ∙∙∙    0           0           |
   1   |   p0    p1    p2   ∙∙∙   pK−1   ∑_{j=K}^{L} pj    |
   2   |   0     p0    p1   ∙∙∙   pK−2   ∑_{j=K−1}^{L} pj  |
   ..  |   ..    ..    ..          ..          ..          |
   K   |   0     0     0    ∙∙∙    0           1           |
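The pattern in the rows (shift by the $1 stake, lump every outcome that reaches $K into state K) is easy to mechanize. A minimal sketch in Python, with K, L, and the prize distribution chosen arbitrarily for illustration:

```python
import numpy as np

K, L = 5, 8                                  # assumed target and max prize
p = np.random.dirichlet(np.ones(L + 1))      # assumed pmf: p[j] = P(win $j)

P = np.zeros((K + 1, K + 1))
P[0, 0] = P[K, K] = 1.0                      # bankruptcy and target absorb
for i in range(1, K):
    for j, pj in enumerate(p):
        P[i, min(i - 1 + j, K)] += pj        # pay $1, win $j, cap at K
assert np.allclose(P.sum(axis=1), 1.0)       # every row is a distribution
```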

Random Walks: For iid steps Xn with P(Xn = j) = pj, the process {Sn, n ≥ 0} with S0 = 0 and Sn = X1 + ∙∙∙ + Xn is a random walk. Since Pij = P(Sn+1 = j | Sn = i) = P(Xn+1 = j − i) = pj−i, it is a Markov chain with

           ∙∙∙   −2     −1     0      1      2    ∙∙∙
   ..  |   ∙∙∙    .      .     .      .      .    ∙∙∙ |
   −2  |   ∙∙∙   p0     p1    p2     p3     p4    ∙∙∙ |
   −1  |   ∙∙∙   p−1    p0    p1     p2     p3    ∙∙∙ |
    0  |   ∙∙∙   p−2    p−1   p0     p1     p2    ∙∙∙ |
    1  |   ∙∙∙   p−3    p−2   p−1    p0     p1    ∙∙∙ |
    2  |   ∙∙∙   p−4    p−3   p−2    p−1    p0    ∙∙∙ |
   ..  |   ∙∙∙    .      .     .      .      .    ∙∙∙ |

Special cases: the steps Xn take values only in {−1, 0, 1} or {−1, 1}. For instance, consider a particle that moves on a doubly infinite one-dimensional lattice whose points are labeled ∙∙∙, −2, −1, 0, 1, 2, ∙∙∙. The motion of the particle from time n to time n + 1 is described by Pi,i+1 = pi, Pi,i−1 = qi, Pii = ri = 1 − pi − qi. Then

           ∙∙∙   −2     −1     0      1      2    ∙∙∙
   ..  |   ∙∙∙    .      .     .      .      .    ∙∙∙ |
   −2  |   ∙∙∙   r−2    p−2    0      0      0    ∙∙∙ |
   −1  |   ∙∙∙   q−1    r−1   p−1     0      0    ∙∙∙ |
    0  |   ∙∙∙    0     q0    r0     p0      0    ∙∙∙ |
    1  |   ∙∙∙    0      0    q1     r1     p1    ∙∙∙ |
    2  |   ∙∙∙    0      0     0     q2     r2    ∙∙∙ |
   ..  |   ∙∙∙    .      .     .      .      .    ∙∙∙ |

More examples:

(i.) Discrete-Time Queue: At most one car arrives at a service station at each time n = 0, 1, 2, ∙∙∙, and with probability p there is an arrival. Cars are served FCFS. If a car is in service at time n, its service finishes at time n + 1 with probability q. Let Xn be the number of cars in the station at time n, observed after arrivals and service completions. Then it is a random walk with S = {0, 1, 2, ∙∙∙}, pi = Pi,i+1 = p(1 − q), p0 = P0,1 = p, qi = Pi,i−1 = q(1 − p), ri = Pi,i = pq + (1 − p)(1 − q), and r0 = P0,0 = 1 − p:

           0          1                2                3           ∙∙∙
   0   |  1−p         p                0                0           ∙∙∙ |
   1   | (1−p)q   pq+(1−p)(1−q)      p(1−q)             0           ∙∙∙ |
   2   |   0        (1−p)q       pq+(1−p)(1−q)        p(1−q)        ∙∙∙ |
   3   |   0          0             (1−p)q        pq+(1−p)(1−q)     ∙∙∙ |
   ..  |   ..         ..              ..               ..           ∙∙∙ |

(ii.) Urn Models: two urns (A and B) contain a total of N white and N black balls, with N balls in each urn. An experiment consists of picking one ball at random from each urn and interchanging them. Let Xn be the number of white balls in urn A after the nth trial, and assume X0 = N. Then

Pi,i+1 = P(pick a white from B, which has N − i whites, and a black from A, which has N − i blacks) = ((N − i)/N)², for 0 ≤ i < N,
Pi,i−1 = (i/N)²,
Pi,i = 2 (i/N)((N − i)/N).

So it is a random walk with r0 = 0, p0 = 1, pi = Pi,i+1, qi = Pi,i−1, ri = Pi,i, rN = 0, qN = 1. This model was used initially to model the diffusion of gases across a permeable membrane.

          0          1                 2         ∙∙∙    i−1          i               i+1       ∙∙∙
   0  |   0          1                 0         ∙∙∙     0           0                0        ∙∙∙ |
   1  | (1/N)²  2(1/N)((N−1)/N)   ((N−1)/N)²     ∙∙∙     0           0                0        ∙∙∙ |
   .. |   ..         ..                ..                ..          ..               ..           |
   i  |   0          0                 0         ∙∙∙  (i/N)²  2(i/N)((N−i)/N)    ((N−i)/N)²    ∙∙∙ |
   .. |   ..         ..                ..                ..          ..               ..           |
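The transition probabilities depend only on i and N, so the whole matrix is two nested loops away. A small sketch in Python (N = 10 is an arbitrary choice):

```python
import numpy as np

N = 10                                        # balls per urn (assumed)
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):                        # i = whites currently in urn A
    if i < N:
        P[i, i + 1] = ((N - i) / N) ** 2      # white from B, black from A
    if i > 0:
        P[i, i - 1] = (i / N) ** 2            # black from B, white from A
    P[i, i] = 2 * i * (N - i) / N ** 2        # same-colored balls swapped
assert np.allclose(P.sum(axis=1), 1.0)
```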

A Simple Random Walk: When ri = 0, pi = p and qi = 1 − p, each step is +1 or −1 and the chain is called a simple random walk (it either goes up one step or down one step). Example: Gambler's Ruin (a simple random walk with barriers): two gamblers, A and B, with a combined fortune of $N, bet $1 each on the toss of a coin (with probability p for heads). A wins $1 on heads; B wins $1 on tails. The game ends when either gambler is broke. Let Xn be the fortune of gambler A after the nth toss; then the game ends when Xn = 0 or Xn = N. In either case, Xn+1 = Xn (assume the coin is still tossed, but with no money exchanged, for convenience).

           0     1     2    ∙∙∙   N−2   N−1    N
   0    |  1     0     0    ∙∙∙    0     0     0 |
   1    |  q     0     p    ∙∙∙    0     0     0 |
   ..   |  ..    ..    ..         ..    ..    .. |
   N−1  |  0     0     0    ∙∙∙    q     0     p |
   N    |  0     0     0    ∙∙∙    0     0     1 |

where q = 1 − p.

The same chain applies to an intoxicated person who takes a step toward home (at point 0) with probability q or toward a bar (at point N) with probability p, at random. He stays put forever as soon as he reaches either home or the bar. This is called the "drunkard's walk".
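Absorption probabilities for this chain can be estimated by simulation. A minimal Monte Carlo sketch in Python; for p = 1/2 the known answer is x0/N, which the estimate should approach:

```python
import numpy as np

rng = np.random.default_rng(0)

def win_prob(x0, N, p, n_paths=100_000):
    """Estimate P(A's fortune hits N before 0), starting from x0."""
    wins = 0
    for _ in range(n_paths):
        x = x0
        while 0 < x < N:                       # play until a barrier is hit
            x += 1 if rng.random() < p else -1
        wins += (x == N)
    return wins / n_paths

print(win_prob(3, 10, 0.5))   # theory: 3/10 when p = 1/2
```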

(i.) Deterministic Demand, Random Supply: A newspaper uses D rolls of newsprint every day, and supply is random. If on a particular day there are fewer than D rolls in the warehouse, the newspaper buys the shortfall from a more costly supplier to meet that day's demand. We want to model the inventory of newsprint rolls at the warehouse. Let Qn be the number of rolls received from the inexpensive source at the end of day n (assume P(Qn = k) = ak; imagine ordering a fixed amount whose yield is random), and let Yn be the inventory in the morning (after receiving Qn−1) before demand occurs. So

Yn+1 = max{Yn − D, 0} + Qn = { Yn − D + Qn,  if Yn > D,
                             { Qn,           if Yn ≤ D,

Pij = { P(Yn − D + Qn = j | Yn = i) = P(Qn = j − i + D) = aj−i+D,  if i > D,
      { P(Qn = j) = aj,                                            if i ≤ D.

            0     1     2     3    ∙∙∙    D     D+1    ∙∙∙
   0    |   a0    a1    a2    a3   ∙∙∙    aD    aD+1   ∙∙∙ |
   1    |   a0    a1    a2    a3   ∙∙∙    aD    aD+1   ∙∙∙ |
   ..   |   ..    ..    ..    ..          ..     ..        |
   D    |   a0    a1    a2    a3   ∙∙∙    aD    aD+1   ∙∙∙ |
   D+1  |   0     a0    a1    a2   ∙∙∙   aD−1    aD    ∙∙∙ |
   D+2  |   0     0     a0    a1   ∙∙∙   aD−2   aD−1   ∙∙∙ |
   ..   |   ..    ..    ..    ..          ..     ..        |
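The Pij formula translates directly into code. A sketch in Python; the demand D, the supply distribution (ak), and the truncation of the (infinite) state space at max_inv are all assumptions made for illustration:

```python
import numpy as np

D, M, max_inv = 2, 4, 20                    # assumed demand, max supply, cap
a = np.random.dirichlet(np.ones(M + 1))     # assumed pmf: a[k] = P(Q = k)

P = np.zeros((max_inv + 1, max_inv + 1))
for i in range(max_inv + 1):
    for k, ak in enumerate(a):
        j = max(i - D, 0) + k               # Y_{n+1} = max(Y_n - D, 0) + Q_n
        P[i, min(j, max_inv)] += ak         # lump overflow into the top state
assert np.allclose(P.sum(axis=1), 1.0)
```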

(ii.) Random Demand, Reliable Supply (the first part is Assignment 4.1): Demand on each day, Dn, is iid with P(Dn = j) = pj. Excess demand is lost. Orders are placed in the morning before demand occurs, according to an (s, S) policy, and arrive immediately. Let Xn be the available inventory in the morning, before ordering and demand occur:

Xn+1 = { Xn − min{Dn, Xn},  if Xn > s (do not order),
       { S − min{Dn, S},    if Xn ≤ s (order up to S).

               0             1      ∙∙∙    s       s+1     ∙∙∙   S−1    S
   0    | ∑_{j≥S} pj       pS−1     ∙∙∙   pS−s    pS−s−1   ∙∙∙   p1     p0 |
   ..   |     ..            ..             ..       ..           ..     .. |
   s    | ∑_{j≥S} pj       pS−1     ∙∙∙   pS−s    pS−s−1   ∙∙∙   p1     p0 |
   s+1  | ∑_{j≥s+1} pj      ps      ∙∙∙    p1      p0      ∙∙∙   0      0  |
   ..   |     ..            ..             ..       ..           ..     .. |
   S−1  | ∑_{j≥S−1} pj     pS−2     ∙∙∙  pS−s−1   pS−s−2   ∙∙∙   p0     0  |
   S    | ∑_{j≥S} pj       pS−1     ∙∙∙   pS−s    pS−s−1   ∙∙∙   p1     p0 |

Note that row S equals rows 0 through s: from Xn = S no order is placed, but Xn+1 = S − min{Dn, S} all the same.

If instead we define Yn as the inventory after delivery but before demand occurs, then

Yn+1 = { Yn − Dn,  if Yn − Dn > s (do not order),
       { S,        if Yn − Dn ≤ s (order up to S),

so the state space is {s+1, ∙∙∙, S}. Writing pˉk = ∑_{j>k} pj = P(Dn > k),

            s+1      s+2      s+3     ∙∙∙   S−1        S
   s+1  |   p0        0        0      ∙∙∙    0        pˉ0         |
   s+2  |   p1       p0        0      ∙∙∙    0        pˉ1         |
   s+3  |   p2       p1       p0      ∙∙∙    0        pˉ2         |
   ..   |   ..       ..       ..            ..         ..         |
   S−1  |  pS−s−2   pS−s−3   pS−s−4   ∙∙∙    p0      pˉS−s−2      |
   S    |  pS−s−1   pS−s−2   pS−s−3   ∙∙∙    p1    pˉS−s−1 + p0   |

Note the last column: if Yn+1 = S and Yn < S, an order must have been placed at the beginning of period n + 1. If Yn = S, however, Yn+1 = S can also occur without an order, namely when there was no demand in period n; this is the extra p0 in the bottom-right entry.
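A sketch of the {Yn} matrix in Python, using (s, S) = (2, 5) and the demand distribution from the inventory example in Section 3, so the result can be compared against the 3 × 3 matrix given there:

```python
import numpy as np

s, S = 2, 5
p = np.array([0.05, 0.1, 0.15, 0.25, 0.4, 0.05])   # P(D = j), j = 0..5

P = np.zeros((S - s, S - s))                       # states s+1, ..., S
for i in range(s + 1, S + 1):
    for d, pd in enumerate(p):
        y = i - d if i - d > s else S              # reorder up to S if <= s
        P[i - (s + 1), y - (s + 1)] += pd
print(P)        # rows 3,4,5: [.05 0 .95], [.1 .05 .85], [.15 .1 .75]
assert np.allclose(P.sum(axis=1), 1.0)
```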

Other application areas: (1) Genetics: physical characteristics (skin color, hair color, height, etc.) are passed from parents to children through genes; questions of interest include the composition of the gene pool and whether a gene becomes dominant or extinct. (2) Sociology: the evolution of social status (e.g., economic status, family name) or social traditions (e.g., societies that put emphasis on male heirs); similar questions as in genetics. (3) Manpower planning: organizations need to know how the composition of people at different levels evolves, much like the age composition of a population. (4) Telecommunication: when transmitting voice (telephone), data (computer), or images (video), all messages are divided into packets of equal length; when packets collide, some users' packets must be retransmitted in later slots, and the number of blocked users forms a MC.

Two questions: (1) If at time 0 the process is in state i, what is the probability that the process will be in state j at time n? (2) Suppose the process has been in operation for a long time; what is the proportion of time the process stays in each state?

2 Transient Behavior: marginal distribution of Xn


Suppose that P(X0 = i) = ai is the initial distribution. The marginal distribution of Xn can be written as

aj(n) = P(Xn = j) = ∑_{i∈S} P(Xn = j | X0 = i) P(X0 = i) = ∑_{i∈S} P(Xn = j | X0 = i) ai = ∑_{i∈S} ai Pij(n).

Pij(n) is called the n-step transition probability. Intuitively, the probability of going from state i to j in n steps should be made up of the probability of going from i to some intermediate state in k steps and then from that state to j in the remaining n − k steps.

Theorem 2.1 (Chapman-Kolmogorov Equations) The (n + m)-step transition probabilities satisfy

Pij(n+m) = ∑_{r∈S} Pir(n) Prj(m)   for all i, j ∈ S.

Proof:

Pij(n+m) = P(Xn+m = j | X0 = i)
         = ∑_{r∈S} P(Xn+m = j | X0 = i, Xn = r) P(Xn = r | X0 = i)
         = ∑_{r∈S} P(Xn+m = j | Xn = r) P(Xn = r | X0 = i)   (Markov property)
         = ∑_{r∈S} P(Xm = j | X0 = r) P(Xn = r | X0 = i)     (time-homogeneity)
         = ∑_{r∈S} Pir(n) Prj(m).

Let P(n) = [Pij(n)] be the matrix of n-step transition probabilities, with P(1) = P. Then the Chapman-Kolmogorov Equations imply P(n) = P(k) P(n−k) and the following.

Theorem 2.2 P(n) = Pn . That is, the n-step transition matrix can be found by multiplying P (the
1-step transition matrix) by itself n times.

Proof: For n = 2, P(2) = P(1) P(1) = PP = P2. Assuming that P(n−1) = Pn−1, the Chapman-Kolmogorov Equations give P(n) = P(n−1) P(1) = Pn−1 P = Pn.
Let a(n) = (aj(n)) be the row vector of marginal probabilities, with a(0) = a = (aj).

Theorem 2.3 The distribution of the Markov chain after n transitions is a(n) = aPn. That is, aj(n) = ∑_{i∈S} ai Pij(n).
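In code, Theorems 2.2 and 2.3 amount to a single matrix power. A sketch in Python, using the two-state weather chain from Section 3 (rain = 0, dry = 1):

```python
import numpy as np

P = np.array([[0.3, 0.7],      # P00, P01
              [0.2, 0.8]])     # P10, P11
a = np.array([1.0, 0.0])       # start in state 0 (rain), so a_0 = 1

Pn = np.linalg.matrix_power(P, 10)   # 10-step transition matrix P^10
print(Pn)                            # rows already close to (2/9, 7/9)
print(a @ Pn)                        # a^(10): distribution of X_10
```

Notice the rows of Pn are nearly identical, which is exactly the starting observation of the next section.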

3 Limiting Behavior
Note that (1) the rows of Pn become (nearly) identical once n is large enough, and (2) a(n+1) = a(n)P. For n large enough, a(n) ≈ a(n+1), and both are approximately the common row of Pn. If we call this row π, then it is the steady-state probability vector and satisfies π = πP. Below is another argument.

Suppose the process has been in operation for a long time and is in steady state. Then the probability that the process is in state i in a period, denoted πi, should be the same regardless of the initial state. Furthermore, πi equals the proportion of time the process stays in state i in steady state. Consider the process over periods 0, 1, ..., N.

• The average time (the number of time periods) the process spends in state i is πi N = the average
number of jumps out of state i = the average number of jumps into i.

• Among the πj N times the process leaves state j, a fraction Pji of them jump from j to i. So the total number of jumps from state j to state i is πj N Pji.

• The average number of jumps into i is πi N = ∑_j πj N Pji, or πi = ∑_j πj Pji (rate out of i = rate into i, the balance equation).

   i     Rate out of i = Rate into i
   0     π0 = π0 P00 + π1 P10 + π2 P20 + ∙∙∙ + πM PM0
   1     π1 = π0 P01 + π1 P11 + π2 P21 + ∙∙∙ + πM PM1
   2     π2 = π0 P02 + π1 P12 + π2 P22 + ∙∙∙ + πM PM2
   ..    ..
   M     πM = π0 P0M + π1 P1M + π2 P2M + ∙∙∙ + πM PMM

At least one of these M + 1 balance equations is redundant. Add the normalization equation ∑_{i=0}^{M} πi = 1; under some conditions, the solution is then unique.

• Matrix representation: π = πP and ∑_i πi = 1.
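Numerically, one convenient way to solve π = πP with ∑_i πi = 1 is to drop one (redundant) balance equation and substitute the normalization equation. A sketch in Python; it presumes the solution is unique, per the conditions mentioned above:

```python
import numpy as np

def stationary(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1],   # n-1 balance equations
                   np.ones(n)])              # normalization row
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Weather example below: P00 = 0.3, P10 = 0.2.
P = np.array([[0.3, 0.7],
              [0.2, 0.8]])
print(stationary(P))                         # [2/9, 7/9]
```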

Examples

Weather: Rain (0) or dry (1), with P00 = 0.3 and P10 = 0.2. Note that there is a redundant equation.

(π0, π1) = (π0, π1) | 0.3  0.7 |
                    | 0.2  0.8 |,

State 0: π0 = π0 P00 + π1 P10 = 0.3π0 + 0.2π1, or π1 = (0.7/0.2)π0 = 3.5π0,

π0 + π1 = π0 + 3.5π0 = 4.5π0 = 1, that is, π0 = 2/9 and π1 = 7/9.

Stock Market: If whether the stock price goes up (Xn = 1) or down (Xn = 0) tomorrow depends on what happened both today and yesterday, then Xn is not a Markov chain. However, Yn = (Xn−1, Xn) is a Markov chain, and the possible transitions are (∙, i) → (i, ∙). Suppose the probability that the price goes up tomorrow is 0.45 if Yn = (0, 0), 0.5 if Yn = (1, 0), 0.6 if Yn = (0, 1), and 0.7 if Yn = (1, 1); these are the numbers used below. Note that you can write out the balance equations using the matrix formulation π = πP or using the transition diagram (easier if P is sparse).

π0,0 = 0.55π0,0 + 0.5π1,0 ⇒ π1,0 = 0.9π0,0,

π0,1 = 0.45π0,0 + 0.5π1,0 = (0.45 + 0.5 × 0.9)π0,0 = 0.9π0,0,

π1,0 = 0.4π0,1 + 0.3π1,1 ⇒ π1,1 = (1/0.3)(0.9 − 0.4 × 0.9)π0,0 = 1.8π0,0.

Given π0,0 + π0,1 + π1,0 + π1,1 = (1 + 0.9 + 0.9 + 1.8)π0,0 = (23/5)π0,0 = 1, we get π0,0 = 5/23, π0,1 = π1,0 = 0.9π0,0 = 9/46, and π1,1 = 1.8 × 5/23 = 9/23. The proportion of days on which the stock price goes up is π1,1 + π0,1 (equivalently π1,1 + π1,0) = 27/46.
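The same answer drops out of the matrix formulation. A sketch in Python, with the states ordered (0,0), (0,1), (1,0), (1,1) and the transition probabilities recovered from the balance equations above:

```python
import numpy as np

# Rows/columns ordered (0,0), (0,1), (1,0), (1,1).
P = np.array([[0.55, 0.45, 0.0, 0.0],
              [0.0,  0.0,  0.4, 0.6],
              [0.5,  0.5,  0.0, 0.0],
              [0.0,  0.0,  0.3, 0.7]])
n = len(P)
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
pi = np.linalg.solve(A, np.append(np.zeros(n - 1), 1.0))
print(pi)              # [5/23, 9/46, 9/46, 9/23]
print(pi[1] + pi[3])   # up-days: 27/46 ~ 0.587
```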

Inventory problem: Suppose (s, S) = (2, 5) and daily demand follows (p0, p1, p2, p3, p4, p5) = (0.05, 0.1, 0.15, 0.25, 0.4, 0.05). Let Yn be the inventory level at the beginning of day n, after ordering but before demand is realized. Then the state space is {3, 4, 5}, and at most two events occur between observations of Yn and Yn+1: Dn is realized, and an order may be placed in period n + 1.

Yn+1 = { Yn − Dn,  if Yn − Dn > 2 (do not order; only demand occurred),
       { 5,        if Yn − Dn ≤ 2 (order up to 5; both events occurred).

      | p0   0    1−p0    |   | 0.05  0     0.95 |
P =   | p1   p0   1−p0−p1 | = | 0.1   0.05  0.85 |
      | p2   p1   1−p1−p2 |   | 0.15  0.1   0.75 |

(rows and columns ordered 3, 4, 5). The balance equations give

π4 = 0.05π4 + 0.1π5, or π5 = 9.5π4,

π3 = 0.05π3 + 0.1π4 + 0.15π5, so π3 = (1/0.95)(0.1π4 + 0.15 × 9.5π4) = 1.6π4.

With π3 + π4 + π5 = (1.6 + 1 + 9.5)π4 = 1, we get π4 = 0.083, π3 = 1.6π4 = 0.132, and π5 = 9.5π4 = 0.785.
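As a check, solving π = πP for this 3 × 3 matrix numerically (a Python sketch reusing the linear-solve recipe from above):

```python
import numpy as np

P = np.array([[0.05, 0.0,  0.95],   # states ordered 3, 4, 5
              [0.1,  0.05, 0.85],
              [0.15, 0.1,  0.75]])
n = len(P)
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
pi = np.linalg.solve(A, np.append(np.zeros(n - 1), 1.0))
print(pi.round(3))                  # [0.132, 0.083, 0.785]
```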

(i.) Let Z be the number of units sold in a day and Y the inventory level after ordering in a day. Then

E(Z|Y = 3) = ∑_{k=0}^{5} E(Z|Y = 3, Dn = k) P(Dn = k)
           = 0 × p0 + 1 × p1 + 2 × p2 + 3 × (p3 + p4 + p5) = 0.1 + 2 × 0.15 + 3 × 0.7 = 2.5,

E(Z|Y = 4) = ∑_{k=0}^{5} E(Z|Y = 4, Dn = k) P(Dn = k)
           = 0 × p0 + 1 × p1 + 2 × p2 + 3 × p3 + 4 × (p4 + p5)
           = 0.1 + 2 × 0.15 + 3 × 0.25 + 4 × 0.45 = 2.95,

E(Z|Y = 5) = ∑_{k=0}^{5} E(Z|Y = 5, Dn = k) P(Dn = k)
           = 0 × p0 + 1 × p1 + 2 × p2 + 3 × p3 + 4 × p4 + 5 × p5
           = 0.1 + 2 × 0.15 + 3 × 0.25 + 4 × 0.4 + 5 × 0.05 = 3.

So the average daily sales are

E(Z) = ∑_{i=3}^{5} E(Z|Y = i) P(Y = i) = 2.5π3 + 2.95π4 + 3π5 = 2.93.

If the selling price of each item is $10,000, the average daily revenue is $29,300.

(ii.) Let B be the demand lost in a day. Then

E(B) = ∑_{i=3}^{5} E(B|Y = i) P(Y = i) = (1 × p4 + 2 × p5)π3 + (1 × p5)π4
     = (0.4 + 2 × 0.05) × 0.132 + 0.05 × 0.083 = 0.07.

(iii.) Let X be the inventory at the end of a day. Then X = Y − Z and

E(X) = ∑_{i=3}^{5} E(X|Y = i) P(Y = i) = ∑_{i=3}^{5} E(Y − Z|Y = i) πi
     = ∑_{i=3}^{5} [i − E(Z|Y = i)] πi = (3 − 2.5)π3 + (4 − 2.95)π4 + (5 − 3)π5 = 1.72.

If h is the holding cost per unit per day, the average daily inventory cost is 1.72h.

(iv.) In general, if you have a Markov chain with a cost rate Ci incurred whenever the state is i, the long-run average cost for the system is ∑_i Ci πi, as illustrated in the sketch below.
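Parts (i.)–(iii.) are all instances of this ∑ Ci πi recipe with different "cost" vectors. A quick numerical check in Python, using the pmf and the stationary probabilities computed above:

```python
import numpy as np

p  = np.array([0.05, 0.1, 0.15, 0.25, 0.4, 0.05])  # demand pmf, j = 0..5
pi = np.array([0.132, 0.083, 0.785])               # pi_3, pi_4, pi_5
d  = np.arange(6)

sales = np.array([np.minimum(d, y) @ p for y in (3, 4, 5)])  # E(Z | Y = y)
lost  = np.array([np.maximum(d - y, 0) @ p for y in (3, 4, 5)])
print(pi @ sales)                          # E(Z) = 2.93
print(pi @ lost)                           # E(B) = 0.07
print(pi @ (np.array([3, 4, 5]) - sales))  # E(X) = 1.72
```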

(v.) Let Xn be the inventory before ordering. Then Ω = {0, 1, 2, 3, 4, 5} and

Xn+1 = { S − min{S, Dn} = 5 − Dn,  if Xn ≤ 2,
       { Xn − min{Xn, Dn},         if Xn > 2,

and
   
      | p5          p4    p3    p2    p1    p0 |   | 0.05  0.4   0.25  0.15  0.1   0.05 |
      | p5          p4    p3    p2    p1    p0 |   | 0.05  0.4   0.25  0.15  0.1   0.05 |
P =   | p5          p4    p3    p2    p1    p0 | = | 0.05  0.4   0.25  0.15  0.1   0.05 |
      | p3+p4+p5    p2    p1    p0    0     0  |   | 0.7   0.15  0.1   0.05  0     0    |
      | p4+p5       p3    p2    p1    p0    0  |   | 0.45  0.25  0.15  0.1   0.05  0    |
      | p5          p4    p3    p2    p1    p0 |   | 0.05  0.4   0.25  0.15  0.1   0.05 |

Since every row except rows 3 and 4 equals the common row (p5, p4, p3, p2, p1, p0), rewrite rows 3 and 4 relative to that common row:

      | p5            p4           p3           p2           p1          p0    |
      | p5            p4           p3           p2           p1          p0    |
P =   | p5            p4           p3           p2           p1          p0    |
      | p5+(p3+p4)    p4+(p2−p4)   p3+(p1−p3)   p2+(p0−p2)   p1−p1       p0−p0 |
      | p5+p4         p4+(p3−p4)   p3+(p2−p3)   p2+(p1−p2)   p1+(p0−p1)  p0−p0 |
      | p5            p4           p3           p2           p1          p0    |

Solving π = πP together with ∑_j πj = 1: using π0 + π1 + π2 + π5 = 1 − π3 − π4, the balance equations reduce to

π0 = p5 + (p3 + p4)π3 + p4π4 = 0.05 + 0.65π3 + 0.4π4,

π1 = p4 + (p2 − p4)π3 + (p3 − p4)π4 = 0.4 − 0.25π3 − 0.15π4,

π2 = p3 + (p1 − p3)π3 + (p2 − p3)π4 = 0.25 − 0.15π3 − 0.1π4,

π3 = p2 + (p0 − p2)π3 + (p1 − p2)π4 = 0.15 − 0.1π3 − 0.05π4,

π4 = p1 − p1π3 + (p0 − p1)π4 = 0.1 − 0.1π3 − 0.05π4,

π5 = p0 − p0π3 − p0π4 = 0.05 − 0.05π3 − 0.05π4.

From the state-4 equation we obtain π3 = 1 − 10.5π4. Then

π0 = 0.05 + 0.65π3 + 0.4π4 = 0.7 − 6.425π4,

π1 = 0.4 − 0.25π3 − 0.15π4 = 0.15 + 2.475π4,

π2 = 0.25 − 0.15π3 − 0.1π4 = 0.1 + 1.475π4,

π3 = 1 − 10.5π4,

π4 = π4,

π5 = 0.05 − 0.05π3 − 0.05π4 = 0.475π4.

Summing both sides, we have 1 = 1.95 − 11.5π4, or π4 = 0.083. Therefore π0 = 0.169, π1 = 0.354, π2 = 0.222, π3 = 0.133, and π5 = 0.039. E(X) = ∑ iπi = 1.72, which agrees with part (iii.).
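The same numbers can be checked by building P from the dynamics and solving the linear system directly (a Python sketch):

```python
import numpy as np

p = np.array([0.05, 0.1, 0.15, 0.25, 0.4, 0.05])
P = np.zeros((6, 6))
for i in range(6):
    for d, pd in enumerate(p):
        nxt = 5 - d if i <= 2 else i - min(i, d)   # order up to 5 if X <= 2
        P[i, nxt] += pd

n = 6
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
pi = np.linalg.solve(A, np.append(np.zeros(n - 1), 1.0))
print(pi.round(3))          # [0.169, 0.354, 0.222, 0.133, 0.083, 0.039]
print(pi @ np.arange(6))    # E(X) = 1.72, matching part (iii.)
```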

4 Summary
• Def: A discrete-time stochastic process {Xn : n ≥ 0} with state space S = {0, 1, ...} is a Markov chain if

P(Xn+1 = j | Xn = i, Xn−1 = in−1, ∙∙∙, X1 = i1, X0 = i0) = P(Xn+1 = j | Xn = i) = Pij.

It has the Markov and stationarity properties.

• You can represent a Markov chain with a diagram with nodes, arrows, and probabilities, or with the one-step transition probability matrix P.

• The procedure to formulate a stochastic process as a DTMC: (1) Define the state Xn (verbally) and the state space S of the process; (2) Argue for the Markov property P(Xn+1 = j | Xn = i, Xn−1 = in−1, ∙∙∙, X1 = i1, X0 = i0) = P(Xn+1 = j | Xn = i); (3) Calculate P(Xn+1 = j | Xn = i), which needs to be stationary, i.e., P(Xn+1 = j | Xn = i) = Pij.

• Two questions:

(i.) Pij(n) is the (i, j)th element of Pn. This is derived from the Chapman-Kolmogorov Equation P(n+m) = P(n)P(m).

(ii.) π can be obtained by solving π = πP and ∑_i πi = 1. If P is sparse, you can use the diagram; otherwise, work with the matrix form.

The meaning of πi: the long-run proportion of time the process is in state i; it is also called the steady-state, equilibrium, or stationary probability. In steady state, it is the probability that the process is in state i at any point in time.
