
Lecture 16: Introduction to Epistemic Game Theory

Alexander Wolitzky
Stanford University
Economics 180: Honors Game Theory

The last two lectures of this course will provide introductions to major areas of game theory that we don't have time to cover in depth. Today's lecture is on epistemic game theory: modeling knowledge in games, and studying the implications of these models for the predictions of game theory. Next lecture is on evolutionary game theory: the study of games played by large populations of "mechanical" players, and of the evolution of behavior in these games.
We cover two topics in epistemic game theory: modeling knowledge and common knowledge, and robustness of equilibria to incomplete information. We will see that the concept of common knowledge, which we have mentioned informally at several points in the class, has several important and surprising properties. In particular, common knowledge will be important for understanding when equilibria are robust to adding a small amount of incomplete information to the game (which is an important issue, because in any application we cannot be sure that we perfectly observe the game being played).

Knowledge and Common Knowledge

As in the rest of the course, we model a player's information by an information partition (i.e., a partition into information sets): Assume that the set of possible states of the world is an arbitrary finite set $\Omega$. (A state $\omega \in \Omega$ represents a possible move by nature, or equivalently an assignment of types to all players.) There is a common prior distribution $p$ on $\Omega$. Each player $i$ has an information partition $H_i$ of $\Omega$, with the interpretation that when the true state is $\omega$, all player $i$ knows is that the true state is somewhere in the set $h_i(\omega)$, the element of $H_i$ containing $\omega$ (what it means for $H_i$ to be a partition is that, for all $\omega, \omega' \in \Omega$, either $h_i(\omega) = h_i(\omega')$ or $h_i(\omega) \cap h_i(\omega') = \emptyset$). Assume without loss of generality that $p(\omega) > 0$ for all $\omega \in \Omega$.
As usual, player $i$'s posterior belief at information set $h_i$ is given by

$$p(\omega \mid h_i) = \frac{p(\omega)}{p(h_i)} = \frac{p(\omega)}{\sum_{\omega' \in h_i} p(\omega')} \quad \text{if } \omega \in h_i, \qquad p(\omega \mid h_i) = 0 \quad \text{if } \omega \notin h_i.$$
An event $E$ is a subset of $\Omega$. The event "player $i$ knows $E$" is denoted $K_i(E)$ and is given by

$$K_i(E) = \{\omega : h_i(\omega) \subseteq E\}.$$

This makes sense: the states $\omega$ at which player $i$ knows that event $E$ has occurred are exactly those states $\omega$ at which, for any state $\omega'$ that player $i$ finds possible when the true state is $\omega$, event $E$ has occurred at $\omega'$. Similarly, the event "everyone knows $E$" is denoted $K_N(E)$ and is given by

$$K_N(E) = \bigcap_{i=1}^{n} K_i(E) = \left\{\omega : \bigcup_{i=1}^{n} h_i(\omega) \subseteq E\right\}.$$
Next, the event "everyone knows that everyone knows $E$" is denoted $K_N^2(E)$ and is given by

$$K_N^2(E) = K_N(K_N(E)) = \left\{\omega : \bigcup_{i=1}^{n} h_i(\omega) \subseteq K_N(E)\right\}.$$

Inductively, let

$$K_N^m(E) = \left\{\omega : \bigcup_{i=1}^{n} h_i(\omega) \subseteq K_N^{m-1}(E)\right\},$$

and let

$$K_N^{\infty}(E) = \bigcap_{m=1}^{\infty} K_N^m(E).$$

We can now define common knowledge.

Definition 1 Event $E$ is common knowledge at state $\omega$ if $\omega \in K_N^{\infty}(E)$.



Thus, event $E$ is common knowledge if everyone knows that everyone knows that ... $E$ has occurred.
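To make the operators concrete, here is a small Python sketch (purely illustrative and not from the lecture; the two-player state space and partitions below are made up) that computes $K_i(E)$, $K_N(E)$, and iterated mutual knowledge on a finite state space:

```python
def cell(partition, w):
    """Return h_i(w): the element of player i's partition containing state w."""
    return next(h for h in partition if w in h)

def K(partition, E, states):
    """K_i(E) = {w : h_i(w) is a subset of E}."""
    return frozenset(w for w in states if cell(partition, w) <= frozenset(E))

def K_everyone(partitions, E, states):
    """K_N(E) = intersection over players i of K_i(E)."""
    result = frozenset(states)
    for partition in partitions:
        result &= K(partition, E, states)
    return result

def K_everyone_m(partitions, E, states, m):
    """m-th order mutual knowledge K_N^m(E): apply K_N to E m times."""
    E = frozenset(E)
    for _ in range(m):
        E = K_everyone(partitions, E, states)
    return E

# Hypothetical example: two players, four states.
states = [0, 1, 2, 3]
H1 = [frozenset({0, 1}), frozenset({2, 3})]
H2 = [frozenset({0}), frozenset({1, 2}), frozenset({3})]
E = {0, 1, 2}

print(K(H1, E, states))                       # frozenset({0, 1}): states where player 1 knows E
print(K_everyone([H1, H2], E, states))        # frozenset({0, 1}): states where everyone knows E
print(K_everyone_m([H1, H2], E, states, 2))   # frozenset({0}): 2nd-order mutual knowledge of E
```

On a finite state space the decreasing sequence $K_N^m(E)$ stabilizes after finitely many steps, so iterating `K_everyone` until nothing changes computes $K_N^{\infty}(E)$.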
Let's illustrate the concept of common knowledge with a famous logic puzzle:
A desert island is inhabited by a primitive tribe of skilled logicians who share the inviolable social norm that, if one of them should ever learn that she has a black dot on her forehead, she must kill herself the following night. In fact, everyone in the tribe has a black dot on her forehead. However, the tribe has sensibly banned all reflective surfaces from the island, as well as all discussion of the dots, so that, while everyone knows whether everyone else has a black dot on her forehead, no one knows whether she has one herself.

One day, a shipwrecked sailor washes up on the island. He regains consciousness surrounded by the tribespeople and asks, "Why do some people on this island have black dots on their foreheads?" He then enjoys a lovely visit to the island, until the morning of the 100th day after his arrival, when he wakes up to find all the tribespeople dead.

Question: How big was the tribe?
Answer: Reason by induction on the number of dots, $m$. If $m = 1$, the sailor's announcement informs the tribesperson with the dot that she has a dot (as she can see that no one else has one), so she kills herself the first night after his arrival. If $m = 2$, then each of the two tribespeople who see only one person with a dot reasons that that person would have killed herself during the first night if the true number of dots had been $m = 1$, and therefore they kill themselves during the second night. By induction, if the number of black dots is $m$, the $m$ people who see only $m-1$ people with dots reason that they would have killed themselves during the $(m-1)$st night if the true number of dots had been $m-1$, and therefore kill themselves during the $m$th night. Since everyone kills themselves on the 100th night, it must be that the tribe had 100 members.
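The induction can also be checked mechanically. The Python sketch below is purely illustrative (it hard-codes the standard solution rule, namely that an islander with a dot who sees $k$ dots and has observed no deaths dies on night $k+1$, rather than deriving the rule from first principles) and simulates the deaths night by night:

```python
def black_dot_deaths(dots):
    """dots[i] is True if islander i has a black dot. Returns {islander: night of death},
    following the standard solution: after the announcement that at least one dot exists,
    an islander with a dot who sees k dots (and has observed no deaths) dies on night k + 1."""
    n = len(dots)
    if not any(dots):          # if no one had a dot, the announcement could not have been made
        return {}
    deaths, alive, night = {}, set(range(n)), 0
    while any(dots[i] for i in alive):
        night += 1
        # An islander with a dot who sees k dots reasons: had there been only k dots,
        # those k people would all have died on night k; since they did not, she
        # concludes on night k + 1 that she has a dot herself.
        tonight = {i for i in alive
                   if dots[i] and night == 1 + sum(dots[j] for j in range(n) if j != i)}
        deaths.update({i: night for i in tonight})
        alive -= tonight
    return deaths

print(max(black_dot_deaths([True] * 100).values()))    # 100: everyone dies on the 100th night
```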
The apparent paradox in the puzzle is that all the sailor says is that $m \geq 1$, which is something that everyone already knew. In fact, letting $\omega$ be the state where everyone has a dot and letting $E$ be the event that someone has a dot, even before the sailor's announcement it was the case that

$$\omega \in K_N^{99}(E),$$

so that the event $E$ was already 99th-order mutual knowledge. But, crucially, it was also the case that

$$\omega \notin K_N^{100}(E),$$

so that the event $E$ was not 100th-order mutual knowledge, and therefore not common knowledge. The point of the logic puzzle is that it can make a big difference whether a fact is high-order mutual knowledge or common knowledge.

Agreeing to Disagree

It turns out that common knowledge is important for more than designing fun logic puzzles. Our main interest in this lecture will be in the connection between common knowledge and the robustness of equilibria to incomplete information. However, we would be remiss not to present the first and most famous theorem about common knowledge, Robert Aumann's "agreeing to disagree" theorem. The theorem says that rational players cannot agree to disagree about the probability of an event, meaning that if their posterior beliefs about an event are common knowledge, then they must be equal. The intuition is that if a player knows that her opponent's belief is different from her own, she should realize that her opponent must have some information that she does not have, and she should revise her own belief to take this into account. (However, we will see that this intuition is incomplete, because Aumann's theorem fails if players know each other's beliefs but do not commonly know them.)

Theorem 1 (Aumann) Suppose it is common knowledge at $\omega$ that player $i$'s posterior probability of event $E$ is $q_i$ and that player $j$'s posterior probability of event $E$ is $q_j$. Then $q_i = q_j$.
The following lemma is key. In what follows, let $M$ be the finest common coarsening, or meet, of the partitions $H_i$: this means that $M$ is a partition of $\Omega$ with the property that

$$h_i(\omega) \subseteq M(\omega) \quad \text{for all } i \in N \text{ and } \omega \in \Omega,$$

where $M(\omega)$ is the element of $M$ containing $\omega$, and that in addition there is no finer partition of $\Omega$ with this property. The lemma says that $M(\omega)$ is the smallest event that's common knowledge at $\omega$.
Lemma 1 Event $E$ is common knowledge at $\omega$ if and only if $M(\omega) \subseteq E$.

Proof. Note that

$$K_N(M(\omega)) = \left\{\omega' : \bigcup_{i=1}^{n} h_i(\omega') \subseteq M(\omega)\right\} = M(\omega).$$

(This says that $M(\omega)$ is mutually known whenever it occurs, in which case we say that $M(\omega)$ is a public event.) Hence, $K_N^m(M(\omega)) = M(\omega)$ for all $m$, and therefore $K_N^{\infty}(M(\omega)) = M(\omega)$. Thus, if $M(\omega) \subseteq E$ then we have

$$\omega \in M(\omega) = K_N^{\infty}(M(\omega)) \subseteq K_N^{\infty}(E),$$

where the last inclusion uses the simple fact that if $X \subseteq Y$ then $K_N^{\infty}(X) \subseteq K_N^{\infty}(Y)$.

Conversely, suppose that $E$ is common knowledge at $\omega$, and suppose toward a contradiction that there exists $\omega' \in M(\omega)$ such that $\omega' \notin E$. Then there exists a sequence of states $\omega_0, \ldots, \omega_L$ such that $\omega_0 = \omega$, $\omega_L = \omega'$, and $\omega_l \in h_{i(l)}(\omega_{l-1})$ for some player $i(l) \in N$, for all $l \in \{1, \ldots, L\}$. (Such a chain exists because $\omega'$ lies in the same cell of the meet $M$ as $\omega$, and $M$ links any two states in the same cell by a chain of players' information sets.) But then player $i(L)$ does not know $E$ at state $\omega_{L-1}$, and by induction on $l$ we see that $\omega \notin K_N^L(E)$, contradicting the hypothesis that $E$ is common knowledge at $\omega$.
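For a concrete illustration (a Python sketch, not part of the lecture; the example partitions are made up), the meet can be computed by merging any two states that share some player's information set, and Lemma 1 then gives a direct test for common knowledge:

```python
def meet(partitions, states):
    """Finest common coarsening M of the players' partitions: two states lie in the
    same cell of M iff they are linked by a chain of players' information sets."""
    parent = {w: w for w in states}          # union-find over states

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w

    for partition in partitions:
        for h in partition:                  # merge all states in the same information set
            h = list(h)
            for w in h[1:]:
                parent[find(w)] = find(h[0])
    cells = {}
    for w in states:
        cells.setdefault(find(w), set()).add(w)
    return [frozenset(c) for c in cells.values()]

def M_cell(partitions, states, w):
    """M(w): the element of the meet containing w."""
    return next(c for c in meet(partitions, states) if w in c)

def is_common_knowledge(partitions, states, E, w):
    """Lemma 1: E is common knowledge at w iff M(w) is a subset of E."""
    return M_cell(partitions, states, w) <= frozenset(E)

# Hypothetical example: the meet merges {0,1} (player 1's cell) with {1,2} (player 2's cell).
states = [0, 1, 2, 3]
H1 = [frozenset({0, 1}), frozenset({2}), frozenset({3})]
H2 = [frozenset({0}), frozenset({1, 2}), frozenset({3})]
print(meet([H1, H2], states))                               # cells {0, 1, 2} and {3}
print(is_common_knowledge([H1, H2], states, {0, 1, 2}, 0))  # True
print(is_common_knowledge([H1, H2], states, {0, 1}, 0))     # False: M(0) = {0,1,2} is not in E
```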
Given Lemma 1, the idea of the proof of Aumann's theorem is that if the players' posteriors are common knowledge at $\omega$, then they must be constant on $M(\omega)$ (by Lemma 1), and therefore they must all equal $\frac{p(E \cap M(\omega))}{p(M(\omega))}$.

Proof of Aumann's Theorem. Let $M(\omega) = \bigcup_k h_i^k$, where the $h_i^k$ are elements of player $i$'s partition $H_i$. Since it is common knowledge at $\omega$ that player $i$'s posterior probability of event $E$ is $q_i$, Lemma 1 implies that player $i$'s posterior probability of event $E$ remains $q_i$ at every state $\omega' \in M(\omega)$. (To see this, let $F_i$ be the event that player $i$'s posterior probability of $E$ is $q_i$. Lemma 1 says that if $F_i$ is common knowledge at $\omega$, then $M(\omega) \subseteq F_i$.) Therefore,

$$q_i = \frac{p\left(E \cap h_i^k\right)}{p\left(h_i^k\right)} \quad \text{for all } k.$$

Hence,

$$p\left(E \cap h_i^k\right) = q_i \, p\left(h_i^k\right) \quad \text{for all } k,$$

and summing over $k$ yields

$$p(E \cap M(\omega)) = q_i \, p(M(\omega)).$$

But by the same reasoning for player $j$,

$$p(E \cap M(\omega)) = q_j \, p(M(\omega)).$$

Hence, $q_i = q_j$.
Here is an example that shows that it is not enough to assume the players know each other's beliefs (rather than commonly knowing them). Let

$$\Omega = \{\omega_1, \omega_2, \omega_3, \omega_4\}, \quad p = \left(\tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}\right), \quad H_1 = \{\{\omega_1, \omega_2\}, \{\omega_3, \omega_4\}\}, \quad H_2 = \{\{\omega_1, \omega_2, \omega_3\}, \{\omega_4\}\}.$$

Consider the event $E = \{\omega_1, \omega_4\}$. At state $\omega_1$, we have

$$q_1(E) = p(\{\omega_1, \omega_4\} \mid \{\omega_1, \omega_2\}) = \tfrac{1}{2}, \qquad q_2(E) = p(\{\omega_1, \omega_4\} \mid \{\omega_1, \omega_2, \omega_3\}) = \tfrac{1}{3}.$$

In addition, player 1 knows that player 2's information set is $\{\omega_1, \omega_2, \omega_3\}$, so she knows that $q_2(E) = \tfrac{1}{3}$. And player 2 knows that player 1's information set is either $\{\omega_1, \omega_2\}$ or $\{\omega_3, \omega_4\}$, and in either case $q_1(E) = \tfrac{1}{2}$. So we have an example where players know each other's beliefs, yet they disagree. The reason why this example does not contradict Aumann's theorem is that beliefs are not common knowledge at $\omega_1$: in particular, at $\omega_1$ player 2 doesn't know that player 1 knows that $q_2(E) = \tfrac{1}{3}$, as player 2 finds state $\omega_3$ possible, and at state $\omega_3$ player 1 believes that with probability $\tfrac{1}{2}$ the state is $\omega_3$, in which case $q_2(E) = \tfrac{1}{3}$, and with probability $\tfrac{1}{2}$ the state is $\omega_4$, in which case $q_2(E) = 1$.
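A quick numerical check of this example (a Python sketch; the helper names are mine, not the lecture's):

```python
from fractions import Fraction

# States w1..w4 written as 1..4 for readability, with the uniform prior.
states = [1, 2, 3, 4]
p = {w: Fraction(1, 4) for w in states}
H1 = [frozenset({1, 2}), frozenset({3, 4})]
H2 = [frozenset({1, 2, 3}), frozenset({4})]
E = frozenset({1, 4})

def cell(partition, w):
    """The information set containing state w."""
    return next(h for h in partition if w in h)

def posterior(partition, w):
    """p(E | h_i(w)): the player's posterior probability of E at state w."""
    h = cell(partition, w)
    return sum(p[v] for v in h & E) / sum(p[v] for v in h)

print(posterior(H1, 1))                     # q1(E) at w1: 1/2
print(posterior(H2, 1))                     # q2(E) at w1: 1/3
# Player 2's posterior is 1/3 on her whole information set {w1, w2, w3}, but at w3
# player 1 is not sure of it: she finds w4 possible, where player 2's posterior is 1.
print(posterior(H2, 3), posterior(H2, 4))   # 1/3 and 1
```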

Robustness to Incomplete Information: The Email Game

The black dots puzzle and Aumann's theorem show that assuming that something is common knowledge can have very strong implications. When we write down a game, we are always implicitly assuming that the description of the game is common knowledge. The topic for the rest of the lecture is whether the predictions of game theory (such as the prediction that players play an equilibrium of the game we write down) are overly sensitive to this assumption. We start with an example that suggests that our predictions may be very sensitive in this way, and then present a result suggesting that things may not be as bad as they first seem.
Consider the following pair of payoff matrices:

$$
\begin{array}{c|cc}
 & A & B \\ \hline
A & 8,\,8 & -10,\,1 \\
B & 1,\,-10 & 0,\,0
\end{array}
\qquad\qquad
\begin{array}{c|cc}
 & A & B \\ \hline
A & 0,\,0 & -10,\,1 \\
B & 1,\,-10 & 8,\,8
\end{array}
$$

The set of states is $\Omega = \{0, 1, 2, \ldots\}$. In state 0, payoffs are given by the matrix on the right; in every other state, payoffs are given by the matrix on the left. Player 1's information partition is

$$\{0\}, \{1, 2\}, \{3, 4\}, \ldots, \{2n-1, 2n\}, \ldots$$

Player 2's information partition is

$$\{0, 1\}, \{2, 3\}, \ldots, \{2n, 2n+1\}, \ldots$$

The prior probability of state 0 is $\tfrac{2}{3}$, and the prior probability of state $n \geq 1$ is $\tfrac{1}{3}\varepsilon(1-\varepsilon)^{n-1}$.

The interpretation is as follows: Player 1 observes whether the true payoff matrix is the one on the left or the one on the right. If the payoff matrix is the one on the left, she sends a message (an email) to player 2 telling him this, but the message fails to get through with probability $\varepsilon$. If the message does get through, then player 2 sends a confirmation message back to player 1, and this message also fails to get through with probability $\varepsilon$. The players keep sending confirmation messages back and forth, until a message fails.
Note that in state $n > 0$, $n$ messages have been sent and $n-1$ messages have been received. In particular, for high $n$, player 2 knows the payoffs are given by the left matrix (where (A, A) is a strict Nash equilibrium), knows that player 1 knows that he knows this, and so on for strings of length less than $n$. However, for no $n$ is it common knowledge that payoffs are given by the left matrix. As in the black dots puzzle, this failure of common knowledge makes a big difference for behavior, as the following result shows.

Proposition 1 For all $\varepsilon > 0$, the email game has a unique Bayesian Nash equilibrium. In it, both players always play B.
Proof. We argue by induction on $n$ that in any BNE both players play B in state $n$. In state 0, player 1 knows payoffs are given by the right matrix. Since B is a dominant strategy in the right matrix, she plays B. Now, the probability of state 0 given information set $\{0, 1\}$ is more than $\tfrac{1}{2}$, so since player 1 plays B in state 0, player 2 gets expected payoff at least

$$\tfrac{1}{2}(8) + \tfrac{1}{2}(0) = 4$$

from playing B and gets expected payoff at most

$$\tfrac{1}{2}(-10) + \tfrac{1}{2}(8) = -1$$

from playing A, so he plays B at information set $\{0, 1\}$ (and therefore at state 0).
Now suppose that in any BNE both players play B in state $n$. Note that the probability of state $n$ given information set $\{n, n+1\}$ is

$$\frac{\tfrac{1}{3}\varepsilon(1-\varepsilon)^{n-1}}{\tfrac{1}{3}\varepsilon(1-\varepsilon)^{n-1} + \tfrac{1}{3}\varepsilon(1-\varepsilon)^{n}} = \frac{1}{2-\varepsilon} > \frac{1}{2}.$$

Thus, the player on move at information set $\{n, n+1\}$ faces opposing action B with probability greater than $\tfrac{1}{2}$, and knows that payoffs are given by the left matrix, so her expected payoff from playing B is at least $0$, while her expected payoff from playing A is at most

$$\tfrac{1}{2}(-10) + \tfrac{1}{2}(8) = -1.$$

She therefore plays B at information set $\{n, n+1\}$. Given this, repeating the same argument implies that the other player plays B at information set $\{n+1, n+2\}$.
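The posterior computation in the induction step is easy to verify numerically (a Python sketch using the prior specified above; the function names are mine):

```python
def prior(state, eps):
    """Email game prior: p(0) = 2/3, p(n) = (1/3) * eps * (1 - eps)**(n - 1) for n >= 1."""
    return 2 / 3 if state == 0 else (1 / 3) * eps * (1 - eps) ** (state - 1)

def prob_state_n_at_info_set(n, eps):
    """Posterior probability of state n at the information set {n, n + 1}, for n >= 1."""
    return prior(n, eps) / (prior(n, eps) + prior(n + 1, eps))

eps = 0.1
for n in [1, 5, 50]:
    q = prob_state_n_at_info_set(n, eps)
    print(n, q, q > 0.5)   # always 1 / (2 - eps), roughly 0.526 here, independent of n
```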

Robustness to Incomplete Information: Common r-Belief

At first glance, the email game suggests that equilibria can be very sensitive to small failures of common knowledge: at state $n$, it is $(n-1)$st-order mutual knowledge that payoffs are given by the left matrix, where (A, A) is a strict equilibrium, yet (A, A) is never played in equilibrium. However, whether this conclusion is valid depends on whether or not we should really consider this failure of common knowledge to be small. An alternative definition of what it means to be "close" to common knowledge is common $r$-belief.
Definition 2 The event "player $i$ $r$-believes event $E$" is denoted $B_i^r(E)$ and is given by

$$B_i^r(E) = \{\omega : p(E \mid h_i(\omega)) \geq r\}.$$

The event "everyone $r$-believes event $E$" is denoted $B_N^{r,1}(E)$ and is given by

$$B_N^{r,1}(E) = \bigcap_{i=1}^{n} B_i^r(E).$$

Inductively define $B_N^{r,m}(E)$ by

$$B_N^{r,m}(E) = B_N^{r,1}\left(B_N^{r,m-1}(E)\right).$$

The event "event $E$ is common $r$-belief" is denoted $B_N^{r,\infty}(E)$ and is given by

$$B_N^{r,\infty}(E) = \bigcap_{m=1}^{\infty} B_N^{r,m}(E).$$
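On a finite state space these operators can be computed in the same style as the knowledge operators above. The Python sketch below is illustrative (the representation of partitions and the prior as a dictionary is an assumption of mine); because the iterates $B_N^{r,m}(E)$ need not be nested, it intersects them until they start repeating, which is exact on a finite state space since the sequence of iterates is eventually periodic:

```python
def B_r(partition, prior, E, r, states):
    """B_i^r(E) = {w : p(E | h_i(w)) >= r}."""
    out = set()
    for w in states:
        h = next(c for c in partition if w in c)
        p_E_given_h = sum(prior[v] for v in h if v in E) / sum(prior[v] for v in h)
        if p_E_given_h >= r:
            out.add(w)
    return frozenset(out)

def B_r_everyone(partitions, prior, E, r, states):
    """B_N^{r,1}(E): the intersection of B_i^r(E) over all players i."""
    result = frozenset(states)
    for partition in partitions:
        result &= B_r(partition, prior, E, r, states)
    return result

def common_r_belief(partitions, prior, E, r, states):
    """B_N^{r,inf}(E): intersection of the iterates B_N^{r,m}(E) over m >= 1.
    The iterates eventually repeat on a finite state space, and every iterate
    already appears before the first repetition, so this intersection is exact."""
    seen, current = [], frozenset(E)
    while True:
        current = B_r_everyone(partitions, prior, current, r, states)
        if current in seen:
            break
        seen.append(current)
    result = frozenset(states)
    for event in seen:
        result &= event
    return result
```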

Note that in the email game, at no state is it common $r$-belief that payoffs are given by the left matrix, for any $r \geq \tfrac{1}{2}$. Indeed, this failure of common $r$-belief is the key reason why the (A, A) equilibrium is not robust in the email game: as the next result shows, strict equilibria are always robust to relaxing the assumption that payoffs are common knowledge to the assumption that payoffs are common $r$-belief for high enough $r$.
Proposition 2 Fix a complete information game $G$ and a strict Nash equilibrium $s$ of $G$. For any $\varepsilon > 0$ there exist $\bar{r} < 1$ and $\bar{q} < 1$ such that for any $r > \bar{r}$, any $q > \bar{q}$, and any incomplete information game $\tilde{G}$ in which with probability $q$ it is common $r$-belief that payoffs are as in $G$, there is a Bayesian Nash equilibrium $\tilde{s}$ of $\tilde{G}$ such that

$$p\left(\omega : \tilde{s}(\omega) = s\right) > 1 - \varepsilon.$$

Proof. Let

$$E = \{\omega : \text{it is common } r\text{-belief at } \omega \text{ that payoffs are as in } G\}.$$

Suppose every player $i$ plays $s_i$ at every state $\omega \in B_i^r(E)$. Then at every state $\omega \in B_i^r(E)$, player $i$ assigns probability at least $r$ to payoffs being as in $G$ and the state being in $B_j^r(E)$ for all $j \neq i$ (as for every $\omega' \in E$, every player $j$ $r$-believes $E$), and therefore assigns probability at least $r$ to facing opposing strategy profile $s_{-i}$. Hence, playing $s_i$ in state $\omega \in B_i^r(E)$ yields expected payoff at least

$$r\, u_i(s_i, s_{-i}) + (1-r)\,\underline{u},$$

where $u_i$ is the payoff function in $G$ and $\underline{u}$ is the minimum feasible payoff in the game (where the minimum is taken over players, action profiles, and states of the world), while playing an alternative action $s_i'$ yields expected payoff at most

$$r\, u_i(s_i', s_{-i}) + (1-r)\,\bar{u},$$

where $\bar{u}$ is the maximum feasible payoff in the game. Since $s$ is a strict equilibrium of $G$, playing $s_i$ in state $\omega \in B_i^r(E)$ is optimal for $r$ close enough to 1.

It remains only to define $\tilde{s}_i(\omega)$ at states $\omega \notin B_i^r(E)$. This may be done by taking an arbitrary Bayesian Nash equilibrium of the game where players are constrained to play $s_i$ at states $\omega \in B_i^r(E)$ (which exists as the game is finite).
To tie this result back to the email game, consider a truncated version of that game where player 2 does not respond after receiving $n$ messages, so that the state is capped at $2n$. Then state $2n$ is common $(1-\varepsilon)$-belief when it occurs, as player 2 knows the state is $2n$, player 2 believes that player 1 believes that the state is $2n$ with probability $1-\varepsilon$, player 2 believes that player 1 believes that player 2 believes that the state is $2n$ with probability $1-\varepsilon$, and so on. Hence, the above result says that there is an equilibrium in which (A, A) is played in state $2n$ in the truncated email game.¹ Thus, the sensitivity of the (A, A) equilibrium to the failure of common knowledge in the email game may be traced to the unbounded state space in the game.

¹ Technically, it doesn't exactly say this, because the result only covers the case where there is a single complete information game that payoffs are known to be close to. But it can easily be generalized to cover the email game: see pp. 566-567 of Fudenberg and Tirole for details.

