
Faculty of Science and Engineering, School of Mathematical Sciences

MAB210 Probability and Stochastic Modelling 1

Section 4: Introduction to Markov Chains


Preliminaries: An exercise

P4.1 Example
An insurer's No Claim Discount scale for motor insurance policyholders has five levels of discount: 0%, 10%, 20%, 30%, 40%. The rules for moving between these levels are as follows:
- at the end of a claim-free year, a policyholder moves in the next year to the next higher level of discount, or remains on 40% discount;
- at the end of a year with just one claim, a policyholder moves back two levels from 40%, or one level from the others, or remains on 0%;
- at the end of a year with two or more claims, a policyholder moves back to, or remains on, 0%.

For each policyholder, the probability that the number of claims in one year, N, takes the value n is given by

    P(N = n) = (3/4)(1/4)^n,   for n = 0, 1, 2, ...

Identify the possible changes in discount level from one year to the next, and the probabilities of those changes.


Formal Lecture Materials


Introduction to Markov Chains
4.1 Setting up a matrix

The example in the Section 4 preliminaries considers the probabilities of moving from one level of discount to another level from one year to the next. The easiest way to write out these probabilities is to put them in a matrix, where the row denotes the level of discount for this year and the column gives the level of discount for next year. The entries in the matrix are conditional probabilities.

Class Exercise: Set up the matrix of conditional probabilities through the following steps:

(i) Identify the possible levels of discount (done for you!). This gives the number of rows and columns of the matrix.

(ii) Mark your rows and columns: rows give the current level of discount of the policyholder, columns give the level of discount for next year.

(iii) Work through the matrix a row at a time. In the first row, the current level of discount is 0%. First identify which levels of discount are not possible next year if you are on 0% this year; these entries in the matrix are 0. For example, it is not possible to go from 0% to 20% in one year, so the entry in the 1st row, 3rd column of the matrix is 0.

(iv) For those levels next year that are possible, identify for each one what must happen in order to move from 0% this year to that level next year. Once you have identified what must happen, you will be able to write down the probability of it happening. This is the probability you enter in the matrix. For example, to move from 0% to 10% there must be no claims, and the probability of no claims is 3/4.

(v) Repeat for each row of the matrix. Remember that each row is a particular level of discount for the current year. For example, the second row of the matrix is for a 10% level in the current year. It is not possible to stay on 10% next year, so the 2nd row, 2nd column has 0 entered. It is possible to go from 10% to 20%: for that to happen there must be no claims this year. The probability of no claims is 3/4, so that is the entry in the 2nd row, 3rd column of the matrix.

Proceeding in these steps finally gives the matrix of conditional probabilities as

                            Next Year
                    0%     10%    20%    30%    40%
              0%   1/4    3/4     0      0      0
    This     10%   1/4     0     3/4     0      0
    Year     20%   1/16   3/16    0     3/4     0
             30%   1/16    0     3/16    0     3/4
             40%   1/16    0     3/16    0     3/4

The entries in each row have the same condition: they are all conditional on the same level of discount this year. The rows therefore sum to 1, because the entries in a row are the probabilities of all the possible levels next year, all conditional on the same starting level this year.
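As a quick numerical check, the matrix can be built directly from the claim probabilities and the movement rules. The following Python sketch is ours, not part of the original notes; it assumes the claim distribution P(N = n) = (3/4)(1/4)^n from P4.1, and the variable names are our own.

    import numpy as np

    # Claim probabilities for one year: P(N = n) = (3/4) * (1/4)^n
    p0 = 3 / 4            # no claims
    p1 = 3 / 16           # exactly one claim
    p2 = 1 - p0 - p1      # two or more claims (= 1/16)

    levels = [0, 10, 20, 30, 40]                  # discount levels (states)
    idx = {level: i for i, level in enumerate(levels)}

    P = np.zeros((5, 5))
    for level in levels:
        i = idx[level]
        # no claims: up one level, or stay on 40%
        P[i, idx[min(level + 10, 40)]] += p0
        # one claim: back two levels from 40%, back one level otherwise
        P[i, idx[20 if level == 40 else max(level - 10, 0)]] += p1
        # two or more claims: back to, or stay on, 0%
        P[i, idx[0]] += p2

    print(P)                  # matches the matrix above
    print(P.sum(axis=1))      # each row sums to 1

Coding the rules and checking that each row sums to 1 is a useful way to catch mistakes before doing any further calculations.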


4.2 Markov chains using the matrix

Consider again the example of Section 3 of the binary signal passing through stages. The example assumes that the probability of the signal changing or staying the same from one stage to the next depends only on its state at that stage. The signal takes integer or counting values (in this case one of the values 0, 1), and the probabilities of changing or staying the same from one stage to the next do not change over time. This is an example of a Markov chain. It has only 2 states (0, 1), whereas a general Markov chain can have any countable number of states, either a finite or an infinite number. The Markov property in general is that the process only remembers the last thing that happened to it, so that we only need to consider the probabilities of changes in the process from one time point to the next.

Example: In the binary signal example, let X_n denote the state of the signal at stage n. The example assumes that, for some probability p of the signal changing at a stage,

    P(X_{n+1} = 1 | X_n = 0) = P(X_{n+1} = 0 | X_n = 1) = p

and

    P(X_{n+1} = 0 | X_n = 0) = P(X_{n+1} = 1 | X_n = 1) = 1 - p,

and we used conditional probability to find, for example,

    P(X_2 = 0 | X_0 = 0) = (1 - p)^2 + p^2.

Obviously we can continue to use conditional probability to find the probabilities of different values of X_n from a particular starting signal, but continuing as above is going to get very messy. However, we can consider the matrix P, with rows denoting the current state of the signal, columns denoting the state of the signal at the next stage, and entries giving the probabilities of going from the row state to the column state at the next stage:

              0        1
    0       1 - p      p
    1         p      1 - p


Now consider

    P^2 = [ (1-p)^2 + p^2      2p(1-p)
             2p(1-p)        (1-p)^2 + p^2 ]

The top left element of P^2 is (1-p)^2 + p^2, which is the probability of going from 0 to 0 in two stages. Similarly, the top right element of P^2 is the probability of going from 0 to 1 in two stages. The elements of P^2 give the probabilities of all the possibilities in two stages, and similarly the elements of P^3 give the probabilities of all the possibilities in three stages. Notice that if we denote the matrix of n-stage probabilities by P^(n), then

    P^(n) = P^n.

The above applies no matter how many states the system could be in, that is, no matter what the size of the matrix: the elements of P^2 give the probabilities of going from state i to state j in two stages, while the elements of P^3 give the probabilities of going from state i to state j in three stages. For example, in the binary signal example,

    P(X_2 = 1 | X_0 = 0) = 2p(1-p),

the top right element of P^2. Note that this is also equal to P(X_2 = 0 | X_0 = 1), since this particular P is symmetric.

Thus by using matrices we can calculate the probabilities through stages more efficiently. Also, the properties of the process can be investigated by examining the behaviour of the matrix.
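A short sketch of how these multi-step probabilities can be computed numerically; this is our illustration rather than part of the notes, and the value p = 0.1 is an arbitrary choice.

    import numpy as np
    from numpy.linalg import matrix_power

    p = 0.1   # illustrative flip probability (any 0 < p < 1 works)
    P = np.array([[1 - p, p],
                  [p, 1 - p]])

    P2 = matrix_power(P, 2)   # 2-step transition probabilities
    P3 = matrix_power(P, 3)   # 3-step transition probabilities

    # P(X_2 = 1 | X_0 = 0) is the (0, 1) entry of P^2, equal to 2p(1-p)
    print(P2[0, 1], 2 * p * (1 - p))   # both 0.18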

In general, P is called the matrix of transition probabilities, or the transition matrix, or the matrix of 1-step probabilities. Note that this example is unusual in having a symmetric P; most transition matrices are not symmetric.


4.3 What happens when we keep multiplying the matrix: settling down

Consider the binary signal passing through stages again, but now consider higher and higher powers of the matrix: P^2, P^4, P^8, and so on.

Note what is happening: as n increases, every entry of P^n approaches 1/2, so that

    P(X_n = 0 | X_0 = 0) → 1/2,

and similarly for the other entries.

The system is forgetting its starting point and settling down to what is known as equilibrium behaviour. Not all Markov chains do this, and some have periodic behaviour, i.e. they do not settle down but cycle through states. If a Markov chain does settle down to equilibrium, where it forgets where it started, there is a very important result (not proved here!) that states that the equilibrium probabilities π = (π_0, π_1, ...) satisfy

    π P = π    subject to    Σ_i π_i = 1.

Example: In the binary signal, the equilibrium probabilities satisfy

    (π_0, π_1) [ 1-p    p
                  p    1-p ] = (π_0, π_1).

That is,

    π_0 (1-p) + π_1 p = π_0
    π_0 p + π_1 (1-p) = π_1.

These both give the same equation, namely,

    p π_0 = p π_1,   i.e.   π_0 = π_1.

But we also know that

    π_0 + π_1 = 1

and so

    2 π_0 = 1

so that

    π_0 = π_1 = 1/2.
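The same calculation can be done numerically. This sketch is ours, not from the notes: it solves π P = π by transposing the system and replacing one redundant equation with the normalisation Σ π_i = 1; again p = 0.1 is an arbitrary illustrative value.

    import numpy as np

    p = 0.1   # illustrative; the answer is 1/2 for any 0 < p < 1
    P = np.array([[1 - p, p],
                  [p, 1 - p]])

    # pi P = pi  <=>  (P^T - I) pi^T = 0; the equations are dependent,
    # so replace the last one with the normalisation sum(pi) = 1.
    A = P.T - np.eye(2)
    A[-1, :] = 1.0
    pi = np.linalg.solve(A, np.array([0.0, 1.0]))
    print(pi)   # [0.5 0.5]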


We will always need to use the fact that the sum of the probabilities is 1. The rows of P sum to 1, so the rows of P - I sum to 0 and P - I is a singular matrix (does not have an inverse). The equations π P = π are therefore not all independent, and one of them must be replaced by the equation Σ_i π_i = 1.
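A one-line numerical check of this singularity (our illustration, using the binary signal matrix with an arbitrary p):

    import numpy as np

    p = 0.1
    P = np.array([[1 - p, p],
                  [p, 1 - p]])
    print(np.linalg.det(P - np.eye(2)))   # ~0: P - I is singular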


4.4 Equilibrium/settling down in the insurance example

Now let us look again at the example in P4.1. The insurance company will need to know what happens if they use this model. In particular, they would like to know what proportion of their clients will be on the various levels of discount. They can find this out in the long run (or on average) by considering that the system is in equilibrium, or has settled down.

Let π = (π_0, π_1, π_2, π_3, π_4) denote the equilibrium probabilities of being on the 0%, 10%, 20%, 30%, 40% levels of discount respectively.

We need to solve π P = π subject to Σ_i π_i = 1, that is,

    (1)  π_0 = (1/4) π_0 + (1/4) π_1 + (1/16)(π_2 + π_3 + π_4)
    (2)  π_1 = (3/4) π_0 + (3/16) π_2
    (3)  π_2 = (3/4) π_1 + (3/16)(π_3 + π_4)
    (4)  π_3 = (3/4) π_2
    (5)  π_4 = (3/4) π_3 + (3/4) π_4

We will work from the simplest equations and will not have to consider the first equation.

From (5): (1/4) π_4 = (3/4) π_3, so π_4 = 3 π_3.

From (4): π_3 = (3/4) π_2, so also π_4 = (9/4) π_2.


From (3): substituting π_3 = (3/4) π_2 and π_4 = (9/4) π_2,

    π_2 = (3/4) π_1 + (3/16)(3/4 + 9/4) π_2 = (3/4) π_1 + (9/16) π_2,

so (7/16) π_2 = (3/4) π_1 and π_2 = (12/7) π_1.

From (2): π_1 = (3/4) π_0 + (3/16)(12/7) π_1 = (3/4) π_0 + (9/28) π_1, so (19/28) π_1 = (3/4) π_0 and π_1 = (21/19) π_0.

We also have

    π_0 + π_1 + π_2 + π_3 + π_4 = 1,

which in terms of π_0 is

    π_0 (1 + 21/19 + 36/19 + 27/19 + 81/19) = π_0 (184/19) = 1.

Therefore π_0 = 19/184 ≈ 0.103 and so π_1 = 21/184 ≈ 0.114. We can now easily work out the other values by substitution: π_2 = 36/184 ≈ 0.196, π_3 = 27/184 ≈ 0.147 and π_4 = 81/184 ≈ 0.440.

Hence, when the system is in equilibrium (has settled down), or as long run proportions, we expect 10.3% of the clients to be on 0% discount, 11.4% on 10% and 44% on the top level.

[Note that the equations are usually quite straightforward to solve, as most Markov chains have at least some zero entries in the matrix. Even though the above is a set of equations in 5 unknowns, the arithmetic was not particularly messy.]
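The hand calculation can be checked numerically with the same replace-one-equation method used for the binary signal. This sketch is ours, not part of the notes.

    import numpy as np

    # Transition matrix for the NCD example (states 0%, 10%, 20%, 30%, 40%)
    P = np.array([
        [1/4,  3/4,  0,    0,    0  ],
        [1/4,  0,    3/4,  0,    0  ],
        [1/16, 3/16, 0,    3/4,  0  ],
        [1/16, 0,    3/16, 0,    3/4],
        [1/16, 0,    3/16, 0,    3/4],
    ])

    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                  # replace last equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)

    print(pi)          # [0.103 0.114 0.196 0.147 0.440] (approximately)
    print(pi * 184)    # [19. 21. 36. 27. 81.], i.e. pi = (19, 21, 36, 27, 81)/184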


4.5 The steps in setting up the matrix

Summarising the steps described in (4.1):

- Identify the states of your Markov chain and write them down as the rows and columns of your matrix;
- Work through the matrix row by row;
- For each row, identify which states are not possible at the next step or stage, and enter 0 in those positions;
- Then, for the possible states at the next stage, identify what must happen to move to that state, and hence calculate or identify the probability of this happening.

Sometimes the last bullet point just involves identifying which probability goes in which position, and sometimes it involves more work.

Class Exercise: The probability that an office staff member who is at work one day will fall sick and be absent the next day is 0.01. If an office staff member is absent ill one day, the probability that he/she returns to work the next day is 0.9.

(i) Over a long period, what proportion of time will an office staff member be absent due to illness?

(ii) Suppose the above probabilities apply to office staff members in general, and that they fall sick and recover independently of each other. Consider an office with two office staff members. (A computational sketch of one way to set this up follows below.)
    (1) Set this situation up as a Markov chain, and
    (2) If both staff members are at work today, what is the probability that both will be absent ill in two days' time?
    (3) In the long run, what is the probability that both staff members will be absent on the same day?
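One way to set up part (ii), sketched in Python (ours, not from the notes): each staff member follows the two-state chain from part (i) with states W (at work) and S (absent sick), and because the two members behave independently, the two-person chain has the four states WW, WS, SW, SS with transition matrix the Kronecker product of the one-person matrix with itself. The state ordering and names here are our own choices.

    import numpy as np

    # One staff member: states W (at work), S (absent sick)
    Q = np.array([[0.99, 0.01],   # W -> W, W -> S
                  [0.90, 0.10]])  # S -> W, S -> S

    # (i) long-run proportion absent: solve pi Q = pi with sum(pi) = 1
    A = Q.T - np.eye(2)
    A[-1, :] = 1.0
    pi = np.linalg.solve(A, np.array([0.0, 1.0]))
    print(pi[1])   # about 0.011 (= 1/91)

    # (ii) two independent staff members: states WW, WS, SW, SS
    P2 = np.kron(Q, Q)

    # (2) both at work today -> both absent in two days: (WW, SS) entry of P2^2
    print(np.linalg.matrix_power(P2, 2)[0, 3])

    # (3) in the long run, both absent on the same day
    print(pi[1] ** 2)   # by independence, (1/91)^2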


Some Quick and Summary Questions for You


1. Three of the following twelve statements are incorrect. Which are incorrect?

(a) The transition matrix of a Markov chain gives the probabilities of transitions from each state to another state in one stage or step.
(b) The stage or step of a Markov chain can be a point in time or space, but consecutive points do not need to have equal distances between them.
(c) Consider a queue of customers at a supermarket checkout. The times at which the queue changes size could be used as the stages for some Markov chain.
(d) Consider the transition matrix of a Markov chain:
    (i) all the transition probabilities are conditional
    (ii) the rows sum to 1
    (iii) the columns sum to 1
    (iv) the matrix is symmetric
(e) In the transition matrix of a Markov chain, the (i, j) entry is P(X_{n+1} = j | X_n = i), where i is the starting state and j is the finishing state after 1 step/stage.

(f) A frog is jumping (or resting) amongst 3 lily pads, labelled 1, 2, 3, according to a Markov chain with a given transition matrix, for which:
    (i) the frog never rests on pad 2 except in between stages
    (ii) the minimum number of jumps to return to pad 2 is 2
    (iii) it is impossible for the frog to get to pad 3 from pad 1
    (iv) the probability of getting from pad 2 to pad 3 in 2 stages is 1/4


2. Suppose that in February, if it rains one day, the probability that it is fine the next is 1/2, and if it is fine, the probability that it is fine the next is 2/3. This is a Markov chain.
(i) Identify its transition matrix.
(ii) If it rains on Friday, what is the probability that it is fine on Sunday?
(iii) Is it correct to say that to find the long run probabilities of days being fine or rainy in February, one equation is ... ?

3. For a particular month, the following sequence of the weather on 24 consecutive days was observed, where F = fine and R = rain (in the sense of at least some precipitation).

FFFFFRRFFRRRFFFRFFFRRRRF

(i) What is the number of FF pairs in this sequence?
(ii) Use this sequence of observations to estimate the probability of a day with rain following a fine day.
(iii) Use this sequence of observations to estimate the probability of a fine day following a day with rain.

Answers:
1. (d) (iii) and (iv), (f) (iii)
2. (i) Transition matrix (rows: today, columns: tomorrow)

            F      R
    F      2/3    1/3
    R      1/2    1/2

   (ii) 7/12  (iii) No
3. (i) 9  (ii) 4/13  (iii) 2/5
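The numerical answers to questions 2 and 3 can be checked with a few lines of Python (our sketch, not part of the notes):

    import numpy as np

    # Q2: weather chain, rows/columns ordered (F, R)
    W = np.array([[2/3, 1/3],
                  [1/2, 1/2]])
    # rain on Friday -> fine on Sunday is the (R, F) entry of W^2
    print((W @ W)[1, 0])   # 7/12, about 0.583

    # Q3: estimate transition probabilities from the observed sequence
    seq = "FFFFFRRFFRRRFFFRFFFRRRRF"
    pairs = list(zip(seq, seq[1:]))
    ff = sum(a == "F" and b == "F" for a, b in pairs)
    fr = sum(a == "F" and b == "R" for a, b in pairs)
    rf = sum(a == "R" and b == "F" for a, b in pairs)
    f_days = sum(a == "F" for a, _ in pairs)   # F days with a following day
    r_days = sum(a == "R" for a, _ in pairs)   # R days with a following day
    print(ff)                  # 9 FF pairs
    print(fr, "/", f_days)     # 4 / 13: estimate of P(rain tomorrow | fine today)
    print(rf, "/", r_days)     # 4 / 10 = 2/5: estimate of P(fine tomorrow | rain today)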

