Larry Ausubel
Matthew Chesnes
Updated: January 1, 2005
1.1
Preferences
Define the set of possible consumption bundles (each an n×1 vector) as X. X is the set of
alternatives.
Usually all elements of X should be non-negative, and X should be closed and convex.
Define the following relations:
≻ : Strictly Preferred,
⪰ : Weakly Preferred,
∼ : Indifferent.
If x ≻ y, then x ⪰ y and not y ⪰ x.
If x ∼ y, then x ⪰ y and y ⪰ x.
Usually we assume a few things in problems involving preferences.
Rational Assumptions
Completeness: A consumer can rank any 2 consumption bundles: x ⪰ y and/or y ⪰ x.
Transitivity: If x ⪰ y and y ⪰ z, then x ⪰ z. The lack of this property leads to a money
pump.
Continuity Assumption
⪰ is continuous if it is preserved under limits. Suppose:
{y^n} → y and {x^n} → x.
If x^n ⪰ y^n for all n, then x ⪰ y and ⪰ is continuous.
The continuity assumption is violated with lexicographic preferences, where one good
matters much more than the other. Suppose good 1 matters more than good 2, such
that you would only consider the relative quantities of good 2 if the quantity of good
1 was the same in both bundles. For example:
x^n = (1 + 1/n, 0), y^n = (1, 100).
Then x^n ≻ y^n for all n, but in the limit x = (1, 0) and y = (1, 100), so y ≻ x and
continuity fails.
Monotonicity Assumptions
⪰ is strongly monotone if y ≥ x, y ≠ x ⇒ y ≻ x.
⪰ is monotone if y >> x ⇒ y ≻ x.
⪰ is convex if:
y ⪰ x, z ⪰ x ⇒ αy + (1 − α)z ⪰ x for all α ∈ [0, 1].
⪰ is strictly convex if:
y ⪰ x, z ⪰ x and y ≠ z ⇒ αy + (1 − α)z ≻ x for all α ∈ (0, 1).
u(x) = min{α ≥ 0 : αe ⪰ x}.
Observe that the set in the definition of u(·) is nonempty, since by monotonicity, we
can choose α > max{x_1, . . . , x_L}. By continuity, the minimum is attained and has the
property αe ∼ x. We conclude that u(·) can be used as a utility function that represents
⪰. QED.
See graph [G-1.4] in notes. e is just the unit vector. For any given bundle x, we can
take a multiple αe of e such that we are indifferent between the bundle and αe.
Suppose x = (6, 3) and we find that (6, 3) ∼ (4, 4); then u(x) = 4. So the utility
function can map any bundle into a number, so we have created a way to move from
something real, like a person's preferences, to something more abstract, like a utility
function.
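This construction is easy to simulate. The sketch below is illustrative and not from the notes: it assumes preferences represented by u(x) = x_1^{1/2} x_2^{1/2}, and recovers a bundle's utility by bisecting on α, so for x = (6, 3) it returns √18 ≈ 4.24 (the (6, 3) ∼ (4, 4) figure in the text is just a hypothetical indifference).

```python
# Sketch (assumed example): numerically construct u(x) = min{alpha >= 0 : alpha*e >= x}
# for a preference relation represented by u = x1^0.5 * x2^0.5.
def prefers(a, b):
    """True if bundle a is weakly preferred to b under the assumed utility."""
    util = lambda x: x[0] ** 0.5 * x[1] ** 0.5
    return util(a) >= util(b)

def constructed_u(x, tol=1e-10):
    """Bisection for the smallest alpha with alpha*(1,1) weakly preferred to x."""
    lo, hi = 0.0, 1.0 + max(x)          # by monotonicity, alpha > max(x) suffices
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers((mid, mid), x):
            hi = mid
        else:
            lo = mid
    return hi

print(round(constructed_u((6.0, 3.0)), 6))   # 4.242641 (= sqrt(18))
```

The same routine works for any monotone, continuous preference relation: only the `prefers` comparison needs to change.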
2.1
Upper Contour Set: {x ∈ X : x ⪰ y}.
Lower Contour Set: {x ∈ X : y ⪰ x}.
Indifference Curve: {x ∈ X : x ∼ y}.
Thus the indifference curve is the intersection of the Upper Contour Set (UCS) and
the Lower Contour Set (LCS). See graph in notes [G-2.1].
Equivalent Definitions.
Strictly Convex Preferences ⇒ Upper Contour Set is Strictly Convex.
Convex Preferences ⇒ Upper Contour Set is Convex.
Continuous Preference Relation ⇒ UCS and LCS are Closed (contain their boundaries).
See graph in notes for convex preferences which are not strictly convex [G-2.2].
See graph of lexicographic preferences. With these preferences, the UCS and LCS are
not closed and thus preferences do not satisfy continuity. [G-2.3].
Utility functions. Key properties.
Concave if u(αx + (1 − α)y) ≥ αu(x) + (1 − α)u(y).
2.2
Price Vector:
p = (p_1, p_2, . . . , p_L)^T ∈ R^L_{++}.
Proposition: (MWG 50-52, 3.d.1-2). If ⪰ is a rational, continuous, and locally non-satiated preference relation, then the consumer's maximization problem has a solution
and exhibits:
1) Homogeneous of degree 0:
x(αp, αw) = x(p, w) ∀ p ∈ R^L_{++}, w > 0, α > 0.
2) Walras' Law (binding budget constraint):
p · x = w ∀ x ∈ x(p, w).
3) If ⪰ is convex, then x(p, w) is a convex set.
3.1
The consumer's problem has Lagrangian:
L = u(x) + λ(w − p · x).
First-order conditions:
∇u(x*) ≤ λp,
x* · [∇u(x*) − λp] = 0.
At an interior solution,
[∂u(x*)/∂x_1] / p_1 = [∂u(x*)/∂x_2] / p_2 = λ.
This says that the marginal utility per dollar spent on each good must
be equal, and this common value is λ, the shadow price of the constraint, or in this
problem, the Marginal Utility of Wealth. At the corner solution in G-3.3,
[∂u(x*)/∂x_1] / p_1 > [∂u(x*)/∂x_2] / p_2.
But the consumer cannot increase his consumption of x_1 any more. He would actually
prefer to consume negative amounts of x_2.
Theorem of the Maximum. (MWG p. 963) Consider the following maximization
problem:
max_{x ∈ R^N} f(x, θ),
s.t. x ∈ C(θ).
Where θ is just a parameter. What happens to the optimal solution x* when we change
θ? The theorem (Berge) states: Suppose f(·, θ) is continuous and C(θ) is non-empty
and compact ∀ θ. Further suppose f(·, θ) and C(θ) vary continuously with θ. Then
the arg max correspondence is upper hemicontinuous in θ and the value function is
continuous in θ.
A correspondence H(θ) is upper hemi-continuous if x^k → x, θ^k → θ, and x^k ∈ H(θ^k)
∀ k IMPLIES:
x ∈ H(θ).
We can also say that H has a closed graph and images of compact sets are bounded. So
see graph G-3.4 for an upper hemicontinuous correspondence which is NOT continuous.
x_1 is a solution at each stage (each change in θ), but in the limit, there is an additional
maximizer, x_2. This means that H is upper hemi-continuous (additional maximizers
are ok as long as you don't lose any) but not continuous (the set of maximizers has
changed). Note that v(x*) is continuous at all stages since v(x_1) = v(x_2). So the arg
max is upper hemi-continuous and the value function is continuous.
In the consumer's maximization problem, if we consider the price vector, p, as our
parameter, the theorem of the maximum says that the indirect utility function is
continuous in p and the walrasian demand correspondence is upper hemi-continuous
in p (and it will be continuous as long as preferences are strictly convex).
4.1
Euler's Formula. If f is homogeneous of degree r, then:
Σ_{n=1}^N [∂f(x_1, . . . , x_N)/∂x_n] x_n = r f(x_1, . . . , x_N).
Or in matrix notation:
∇f(x) · x = r f(x).
Proof: Since f is homogeneous of degree r:
f(tx_1, tx_2, . . . , tx_N) = t^r f(x_1, x_2, . . . , x_N).
Differentiate both sides with respect to t:
Σ_{n=1}^N [∂f(tx_1, . . . , tx_N)/∂x_n] x_n = r t^{r−1} f(x_1, x_2, . . . , x_N).
Evaluate at t = 1:
Σ_{n=1}^N [∂f(x_1, . . . , x_N)/∂x_n] x_n = r f(x_1, x_2, . . . , x_N).
QED.
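Euler's formula is easy to sanity-check numerically. The function below is an arbitrary assumed example of degree r = 2, not one from the notes; the gradient is taken by central finite differences.

```python
# Illustrative check of Euler's formula: if f is homogeneous of degree r,
# then sum_n (df/dx_n) * x_n = r * f(x).
def f(x1, x2):
    return x1 ** 2 + 3 * x1 * x2        # homogeneous of degree r = 2

def grad(g, x, h=1e-6):
    """Central finite-difference gradient of g at the point x."""
    out = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((g(*xp) - g(*xm)) / (2 * h))
    return out

x = (1.5, 2.0)
lhs = sum(gi * xi for gi, xi in zip(grad(f, x), x))
print(round(lhs, 4), round(2 * f(*x), 4))   # 22.5 22.5
```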
4.2
Matrix Notation
Assuming the Walrasian demand is a function (i.e., if the utility function is strictly
quasi-concave), we have the demand vector:
x(p, w) = (x_1(p, w), x_2(p, w), . . . , x_L(p, w))^T.
Wealth Effects:
D_w x(p, w) = (∂x_1(p, w)/∂w, ∂x_2(p, w)/∂w, . . . , ∂x_L(p, w)/∂w)^T.
Price Effects:
D_p x(p, w) is the L×L matrix whose (l, k) entry is ∂x_l(p, w)/∂p_k.
Proposition: Since x(p, w) is homogeneous of degree 0 in (p, w),
Σ_{k=1}^L [∂x_l(p, w)/∂p_k] p_k + [∂x_l(p, w)/∂w] w = 0 for l = 1 . . . L.
Or in matrix notation:
(D_p x(p, w)) p + (D_w x(p, w)) w = 0.
This is showing how the demand for one good changes as all prices and wealth change
proportionately. The proof follows directly from Euler's formula above, noting that x(p, w)
is h.o.d. 0 in (p, w).
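As an illustrative check of this proposition, the sketch below assumes Cobb-Douglas demands x_l = a_l w/p_l (a convenient functional form, not derived here) and verifies the row equation by finite differences.

```python
# Illustrative check that assumed Cobb-Douglas demands x_l = a_l*w/p_l satisfy
# sum_k (dx_l/dp_k)*p_k + (dx_l/dw)*w = 0, i.e. x(p, w) is h.o.d. 0 in (p, w).
a = (0.25, 0.75)
def demand(l, p, w):
    return a[l] * w / p[l]

p, w, h = (2.0, 5.0), 100.0, 1e-6
totals = []
for l in range(2):
    total = 0.0
    for k in range(2):
        pp, pm = list(p), list(p)
        pp[k] += h; pm[k] -= h
        total += (demand(l, pp, w) - demand(l, pm, w)) / (2 * h) * p[k]
    total += (demand(l, p, w + h) - demand(l, p, w - h)) / (2 * h) * w
    totals.append(total)
print([abs(t) < 1e-5 for t in totals])   # [True, True]
```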
Note that the price elasticity of demand for good l with respect to price k is defined
as:
ε_lk = ∂(log x_l)/∂(log p_k) = (∂x_l/∂p_k)(p_k/x_l).
And the elasticity of demand for good l with respect to wealth w is:
ε_lw = ∂(log x_l)/∂(log w) = (∂x_l/∂w)(w/x_l).
Proposition 2.E.2 (Cournot Aggregation). x(p, w) has the property for all (p, w):
Σ_{l=1}^L p_l [∂x_l(p, w)/∂p_k] + x_k(p, w) = 0 for k = 1 . . . L.
Or in matrix notation:
p · D_p x(p, w) + x(p, w)^T = 0^T.
The proof follows by differentiating Walras' law wrt prices. This is showing how the
demand for all goods changes as the price of one good changes. If you increase the
price of one good by one percent, what happens to the demand for all other goods?
Since Walras' law still holds, aggregate demand must fall.
Define the budget share of the consumer's expenditure on good l as:
b_l(p, w) = p_l x_l(p, w) / w.
In elasticities, Cournot aggregation becomes:
Σ_{l=1}^L b_l ε_lk + b_k = 0.
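Cournot aggregation can be verified the same way; the Cobb-Douglas demands below are again an assumed example, with the price derivatives taken by finite differences.

```python
# Illustrative check of Cournot aggregation, sum_l p_l*(dx_l/dp_k) + x_k = 0 for each k,
# for assumed Cobb-Douglas demands x_l = a_l*w/p_l.
a, p, w, h = (0.25, 0.75), (2.0, 5.0), 100.0, 1e-6
def demand(l, p):
    return a[l] * w / p[l]

cournot = []
for k in range(2):
    s = demand(k, p)                     # x_k(p, w)
    for l in range(2):
        pp, pm = list(p), list(p)
        pp[k] += h; pm[k] -= h
        s += p[l] * (demand(l, pp) - demand(l, pm)) / (2 * h)
    cournot.append(s)
print([abs(s) < 1e-5 for s in cournot])   # [True, True]
```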
Proposition 2.E.3 (Engel Aggregation). x(p, w) has the property for all (p, w):
Σ_{l=1}^L p_l [∂x_l(p, w)/∂w] = 1.
Or in matrix notation:
p · D_w x(p, w) = 1.
The proof follows from differentiating Walras' law wrt w. In elasticities:
Σ_{l=1}^L b_l ε_lw = 1.
Which says: the weighted sum of the wealth elasticities is equal to one. If you know
the wealth effects on L − 1 of the goods, the Lth wealth effect is automatically implied.
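Engel aggregation and its elasticity form can be confirmed in closed form for assumed Cobb-Douglas demands, whose budget shares are b_l = a_l and whose wealth elasticities all equal 1 (an illustrative special case, not a general proof).

```python
# Illustrative check of Engel aggregation, sum_l p_l*(dx_l/dw) = 1, and the elasticity
# form sum_l b_l*eps_lw = 1, for assumed Cobb-Douglas demands x_l = a_l*w/p_l.
a, p, w = (0.25, 0.75), (2.0, 5.0), 100.0
engel = sum(p[l] * (a[l] / p[l]) for l in range(2))      # dx_l/dw = a_l/p_l
shares = [p[l] * (a[l] * w / p[l]) / w for l in range(2)]  # budget shares b_l
elasticity_form = sum(b * 1.0 for b in shares)           # wealth elasticities are 1
print(engel, elasticity_form)   # 1.0 1.0
```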
4.3
Quasi-convexity of the indirect utility function v(p, w). Take (p^1, w^1) and (p^2, w^2), and
form the convex combination p^0 = αp^1 + (1 − α)p^2, w^0 = αw^1 + (1 − α)w^2. Suppose x
is affordable at (p^0, w^0), so that:
[αp^1 + (1 − α)p^2] · x ≤ αw^1 + (1 − α)w^2.
αp^1 · x + (1 − α)p^2 · x ≤ αw^1 + (1 − α)w^2.
This step is a bit tricky because we initially took the convex combination of a pair
of variables (p and w) but we can separate them like we do here because the indirect
utility function is homogeneous of degree 0 in (p, w). Thus
αp^1 · x ≤ αw^1 AND/OR (1 − α)p^2 · x ≤ (1 − α)w^2,
p^1 · x ≤ w^1 AND/OR p^2 · x ≤ w^2.
At least one of these must hold; if they were both strictly greater, we would violate
the original inequality. If p^1 · x ≤ w^1, x is in the budget set for (p^1, w^1). If p^2 · x ≤ w^2,
then x is in the budget set for (p^2, w^2). Note that v(p, w) is the value function of the
consumer's maximization problem, which defines the maximum attainable utility. Thus
if x is affordable at either (p^1, w^1) or (p^2, w^2), it must be that:
u(x) ≤ v(p^1, w^1) AND/OR u(x) ≤ v(p^2, w^2).
Which implies:
u(x) ≤ max{v(p^1, w^1), v(p^2, w^2)}.
But x was just some consumption bundle affordable at (p^0, w^0). If we were to choose
the optimal bundle at (p^0, w^0), then we have:
v(p^0, w^0) ≤ max{v(p^1, w^1), v(p^2, w^2)}.
Which is precisely the definition of quasi-convexity.
4.4
Leontief Preferences (perfect complements): u(x_1, x_2) = min{αx_1, βx_2}, α, β > 0.
See Graph G-4.2. The left shoe/right shoe is the obvious example. Indifference curves
are corners.
Homothetic Preferences: all indifference curves are related by proportionate expansion.
Thus:
x ∼ y ⟺ αx ∼ αy for every α > 0.
G-4.2 is also homothetic. See G-4.3 for another example. If you can take an indifference
curve and multiply it by a constant and end up on another indifference curve (for all
bundles), then preferences are homothetic. A continuous preference relation, ⪰, on
X ⊆ R^L_+ being homothetic implies that ⪰ admits a utility function that is homogeneous of
degree 1. Consider Cobb-Douglas:
u(x_1, x_2) = x_1^α x_2^{1−α}.
This utility function is homogeneous of degree 1 (and also homothetic). Note we can
take a monotonic transformation of this utility function and lose homogeneity (say,
add 1), even though the underlying preferences remain homothetic.
Quasi-Linear Preferences. We say that preferences are Q-linear with respect to a good;
WLOG, let this be good 1. Denote the consumption set as X = R × R^{L−1}_+, so:
x_1 ∈ R, x_l ∈ R_+ for l = 2, . . . , L.
Preferences are Q-linear if:
1) All indifference curves are parallel displacements of each other. So,
x ∼ y ⇒ x + (α, 0, 0, . . . , 0) ∼ y + (α, 0, 0, . . . , 0) ∀ α.
2) Good 1 is desirable:
x + (α, 0, 0, . . . , 0) ≻ x, ∀ α > 0.
So the indifference curves must be parallel shifts of each other (see G-6.1). Characterization of Q-linear preferences: a rational and continuous preference relation, ⪰, on
X = (−∞, ∞) × R^{L−1}_+ is Q-linear if ⪰ admits a utility function of the form:
u(x_1, x_2, . . . , x_L) = x_1 + φ(x_2, x_3, . . . , x_L).
So u(·) is linear in x_1.
e(p, u) = min_{x∈X} {p · x},
subject to:
u(x) ≥ u.
Here e(p, u) is the expenditure function, which is the minimum amount of money required to attain a given level of utility. We can also rephrase the problem as:
h(p, u) = arg min_{x∈X} {p · x},
subject to:
u(x) ≥ u.
Here h(p, u) is the hicksian demand correspondence, which says how much of each good
you purchase. It is like the walrasian demand function, x(p, w); the only difference is that x
depends on wealth, and h depends on utility. See graph G-5.1 for a picture of these two
problems: the Utility Maximization Problem (UMP) and the Expenditure Minimization
Problem (EMP).
Proposition 3.E.3 (MWG pg 61). Suppose u(·) is a continuous utility function representing a locally non-satiated preference relation ⪰ on the consumption set X = R^L_+,
and u is any attainable utility level. Then the hicksian demand correspondence, h(p, u),
exhibits the following properties:
1) Homogeneous of Degree 0 in prices:
h(αp, u) = h(p, u) ∀ p ∈ R^L_+, attainable u, α > 0.
2) No Excess Utility:
u(x) = u ∀ x ∈ h(p, u).
So this just says the constraint binds (from local non-satiation).
3) If ⪰ is convex, then h(p, u) is a convex set.
Concavity of e(·, u) in p: let p^0 = αp^1 + (1 − α)p^2, and let x^0 be optimal at (p^0, u).
Then:
e(p^0, u) = p^0 · x^0
= αp^1 · x^0 + (1 − α)p^2 · x^0
≥ αe(p^1, u) + (1 − α)e(p^2, u).
Which is the definition of concavity. Note the last line follows from the fact that if x^0
is only optimal at p^0, then any other set of prices (p^1 or p^2) along with x^0 should lead to
at least as much expenditure as the optimal bundles, say x^1 and x^2, at prices
p^1 and p^2.
5.1
Identities:
1) e(p, v(p, w)) = w.
2) v(p, e(p, u)) = u.
3) h(p, u) = x(p, e(p, u)).
4) x(p, w) = h(p, v(p, w)).
So identity 1 follows from the idea that if you start with wealth, w, and prices, p, fixed,
you can find the value function that maximizes your utility, v(p, w), or the indirect
utility function. Plugging this into your expenditure function at prices p, should get
you right back where you started with your initial wealth since maximizing utility and
minimizing expenditure yield the same solution.
Identity 2 says that for a given utility level u, we find the minimum expenditure required
to reach that level. Then the value of that expenditure v(p, e(p, u)) gives us back our
original level of utility, u.
Note that the hicksian demand functions are often referred to as compensated demands.
If we are holding wealth fixed and changing prices, consider the partial of the walrasian
demand. If we are holding utility fixed and changing prices, consider the partial of the
hicksian demands (note that wealth would have to implicitly change for utility to
remain the same as prices change - hence the compensated demand).
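The four identities can be verified numerically. The closed-form Cobb-Douglas objects below are assumed for illustration (they are not derived in the notes):

```python
# Assumed Cobb-Douglas functional forms used to verify the four duality identities.
A = 0.25   # exponent on good 1 in u(x) = x1^A * x2^(1-A)
def v(p1, p2, w):               # indirect utility
    return (A / p1) ** A * ((1 - A) / p2) ** (1 - A) * w
def e(p1, p2, u):               # expenditure function
    return u * (p1 / A) ** A * (p2 / (1 - A)) ** (1 - A)
def x(p1, p2, w):               # walrasian demand
    return (A * w / p1, (1 - A) * w / p2)
def hicks(p1, p2, u):           # hicksian demand
    m = e(p1, p2, u)
    return (A * m / p1, (1 - A) * m / p2)

p1, p2, w = 2.0, 5.0, 100.0
u = v(p1, p2, w)
checks = [
    abs(e(p1, p2, v(p1, p2, w)) - w) < 1e-9,                 # identity 1
    abs(v(p1, p2, e(p1, p2, u)) - u) < 1e-9,                 # identity 2
    all(abs(hl - xl) < 1e-9 for hl, xl in
        zip(hicks(p1, p2, u), x(p1, p2, e(p1, p2, u)))),     # identity 3
    all(abs(xl - hl) < 1e-9 for xl, hl in
        zip(x(p1, p2, w), hicks(p1, p2, v(p1, p2, w)))),     # identity 4
]
print(checks)   # [True, True, True, True]
```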
5.2
Envelope Theorem
∂V(a)/∂a_j = ∂L(x(a), λ(a), a)/∂a_j, evaluated at x(a), λ(a).
In other words, how much does the value function change when you change a parameter? Answer: only look at the direct effect of the parameter, holding the optimal choices fixed.
See graphs G-5.2 and G-5.3 in notes which give some intuition. Because we normally
think that around a maximizer the function is fairly flat, if you miss the solution
by a bit, you don't pay too much for your mistake because the cost of the miss is
second-order.
Proposition 3.G.1 (MWG 68-69). Suppose u(·) satisfies the usual properties. Then:
h_l(p, u) = ∂e(p, u)/∂p_l for l = 1 . . . L.
Or,
h(p, u) = ∇_p e(p, u).
Proof: In the EMP, the expenditure function is the value function of the minimization:
e(p, u) = min_{x∈X} {p · x} s.t. u(x) ≥ u.
The lagrangian of the EMP is:
L = p · x + λ(u(x) − u).
By the envelope theorem,
∂e(p, u)/∂p_l = ∂L/∂p_l |_{x=h(p,u), λ=λ(p,u)} = x_l |_{x=h(p,u)} = h_l(p, u).
QED.
Roy's Identity:
x_l(p, w) = − [∂v(p, w)/∂p_l] / [∂v(p, w)/∂w], for l = 1 . . . L.
Or in matrix notation:
x(p, w) = − [1/∇_w v(p, w)] ∇_p v(p, w).
Compare this with the previous result regarding the hicksian demand:
h(p, u) = ∇_p e(p, u).
So the difference is that the walrasian demands are a function of wealth, so if you change
prices, there are WEALTH effects, and hence the scaling. In the hicksian demand, we
implicitly allow wealth to vary as we hold utility constant, so the result does not need
to be scaled.
Proof. Consider the lagrangian of the utility maximization problem:
L = u(x) + λ(w − p · x),
where,
v(p, w) = max L.
Thus, by the envelope theorem,
∂v(p, w)/∂p_l = ∂L(x, λ, p, w)/∂p_l |_{x=x(p,w), λ=λ(p,w)} = −λ x_l |_{x=x(p,w), λ=λ(p,w)}.
But
λ = ∂v(p, w)/∂w.
So,
∂v(p, w)/∂p_l = − [∂v(p, w)/∂w] x_l(p, w).
Or,
x_l(p, w) = − [∂v(p, w)/∂p_l] / [∂v(p, w)/∂w], for l = 1 . . . L.
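Roy's identity can be checked by finite differences against an assumed Cobb-Douglas indirect utility (an illustrative example, not a derivation from the notes).

```python
# Finite-difference check of Roy's identity x_l = -(dv/dp_l)/(dv/dw)
# for the assumed Cobb-Douglas indirect utility v(p, w) = (A/p1)^A ((1-A)/p2)^(1-A) w.
A, h = 0.25, 1e-6
def v(p1, p2, w):
    return (A / p1) ** A * ((1 - A) / p2) ** (1 - A) * w

p1, p2, w = 2.0, 5.0, 100.0
dv_dp1 = (v(p1 + h, p2, w) - v(p1 - h, p2, w)) / (2 * h)
dv_dw = (v(p1, p2, w + h) - v(p1, p2, w - h)) / (2 * h)
roy_x1 = -dv_dp1 / dv_dw
print(round(roy_x1, 4))   # 12.5, matching x1 = A*w/p1
```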
6.1
Prop 3.E.4. Consider a utility function with the usual properties. If h(p, u) is single
valued, then h(p, u) must satisfy the compensated law of demand. For all p′, p′′ pairs,
(p′′ − p′) · [h(p′′, u) − h(p′, u)] ≤ 0.
Proof: For any p >> 0, the consumption bundle h(p, u) is optimal in the EMP, so
it achieves a lower expenditure than any other bundle that offers utility u. Thus
we have two inequalities:
p′′ · h(p′′, u) ≤ p′′ · h(p′, u),
and,
p′ · h(p′′, u) ≥ p′ · h(p′, u).
Note this makes sense because we are subtracting something large from something small
to get something REALLY small, and on the right we have something small subtracted
from something large, so if the first inequality was ≤, then after these operations, we
must still use the ≤. Rearranging:
p′′ · h(p′′, u) − p′′ · h(p′, u) − p′ · h(p′′, u) + p′ · h(p′, u) ≤ 0.
(p′′ − p′) · [h(p′′, u) − h(p′, u)] ≤ 0.
As required. QED.
See graph [G-6.2] for a simple graph of this law of demand. Note when we change one
price only, the law reduces to ∂h_l/∂p_l ≤ 0: the hicksian demand for a good is non-increasing in its own price.
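A quick numerical illustration: assuming Cobb-Douglas hicksian demands (a convenient functional form, not from the notes), the inequality holds for every sampled pair of price vectors.

```python
# Illustrative check of the compensated law of demand,
# (p'' - p') . [h(p'', u) - h(p', u)] <= 0, for assumed Cobb-Douglas hicksian demands.
import random
random.seed(0)
A, u = 0.25, 5.0
def hicks(p1, p2):
    m = u * (p1 / A) ** A * (p2 / (1 - A)) ** (1 - A)   # expenditure e(p, u)
    return (A * m / p1, (1 - A) * m / p2)

violations = 0
for _ in range(1000):
    pa = (random.uniform(0.5, 10), random.uniform(0.5, 10))
    pb = (random.uniform(0.5, 10), random.uniform(0.5, 10))
    ha, hb = hicks(*pa), hicks(*pb)
    dot = sum((pb[i] - pa[i]) * (hb[i] - ha[i]) for i in range(2))
    if dot > 1e-9:
        violations += 1
print(violations)   # 0
```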
6.2
Net Substitutes if:
∂h_l/∂p_k ≥ 0, ∀ p >> 0, ∀ u.
Net Complements if:
∂h_l/∂p_k ≤ 0, ∀ p >> 0, ∀ u.
Gross Substitutes if:
∂x_l/∂p_k ≥ 0, ∀ p >> 0, w > 0.
Gross Complements if:
∂x_l/∂p_k ≤ 0, ∀ p >> 0, w > 0.
So if goods are gross substitutes but net complements, it just means that the wealth
effect is greater than the substitution effect.
6.3
Tâtonnement
If all goods are gross substitutes for all bidders, then the Walrasian tâtonnement
yields a competitive equilibrium. The argument is as follows: Consider an initial price
vector, p(0) = (0, 0, . . . , 0). At each time t, a Walrasian Auctioneer asks each
consumer i, (i = 1, . . . , N), to report her demand x_i(p(t), w) and computes aggregate
demand as:
x(p(t)) = Σ_{i=1}^N x_i(p(t), w).
The auctioneer then raises the price of each good l at a rate proportional to its excess
demand, x_l(p(t)) − S_l, for l = 1, . . . , L,
where S_l is the supply of good l. Observe that excess demand is always non-negative
because:
1) At p(0), excess demand must be positive.
2) The price of good l stops increasing when excess demand is zero.
3) Since all goods are gross substitutes, even if one good converges to its optimal
price, as the other prices increase, demand (and therefore price) of good l can only
rise since the goods are gross substitutes.
This example is supposed to show a real-world example of what we have been doing
and there is an example in the lecture notes about an electricity auction. There is
much criticism of the Walrasian Auctioneer because it goes against some of the main
foundations of economics. For instance, if agents knew that their consumption decisions
affected the prices directly, they could no longer be considered to be operating in
a perfectly competitive (price taking) environment. See lecture notes for more or
hopefully more will be covered in the next lecture.
Proposition (Slutsky Equation):
∂h_l(p, u)/∂p_k = ∂x_l(p, w)/∂p_k + [∂x_l(p, w)/∂w] x_k(p, w), for l, k = 1, . . . , L.
Or in matrix notation:
D_p h(p, u) = D_p x(p, w) + D_w x(p, w) x(p, w)^T.
Proof:
Start with the identity:
h_l(p, u) = x_l(p, e(p, u)).
Differentiate with respect to p_k:
∂h_l(p, u)/∂p_k = ∂x_l(p, w)/∂p_k + [∂x_l(p, w)/∂e(p, u)] [∂e(p, u)/∂p_k].
Note that w = e(p, u) and Prop 3.G.1 says ∂e(p, u)/∂p_k = h_k(p, u) = x_k(p, e(p, u)) =
x_k(p, w), so we have:
∂h_l(p, u)/∂p_k = ∂x_l(p, w)/∂p_k + [∂x_l(p, w)/∂w] x_k(p, w).
Rearranging gives the decomposition:
∂x_l(p, w)/∂p_k = ∂h_l(p, u)/∂p_k [Substitution] − [∂x_l(p, w)/∂w] x_k(p, w) [Income].
So we have the change in the (Walrasian) demand for good l from a change in the price
of good k decomposed into a substitution effect and an income effect. Note the income
effect is weighted by the quantity of good k that the agent actually consumes. So if
the agent consumes more, the price change affects him more.
Corollary: If x(p, w) is a Walrasian Demand derived from the usual u(·), then the Slutsky Substitution Matrix defined below must be negative semi-definite and symmetric.
D_p h(p, u) is the L×L matrix whose (l, k) entry is:
∂x_l(p, w)/∂p_k + [∂x_l(p, w)/∂w] x_k(p, w).
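For the assumed Cobb-Douglas demands below (an illustrative functional form), the Slutsky matrix can be computed by finite differences and checked for symmetry, S·p = 0 (which follows from homogeneity of degree 0), and a non-positive diagonal.

```python
# Illustrative computation of the 2x2 Slutsky matrix S = Dp x + (Dw x) x^T for assumed
# Cobb-Douglas demands, checking symmetry, S p = 0, and non-positive own-price terms.
A, p, w, h = 0.25, (2.0, 5.0), 100.0, 1e-5
def demand(p1, p2, w):
    return (A * w / p1, (1 - A) * w / p2)

bundle = demand(p[0], p[1], w)
S = [[0.0, 0.0], [0.0, 0.0]]
for l in range(2):
    dxl_dw = (demand(p[0], p[1], w + h)[l] - demand(p[0], p[1], w - h)[l]) / (2 * h)
    for k in range(2):
        pp, pm = list(p), list(p)
        pp[k] += h; pm[k] -= h
        dxl_dpk = (demand(pp[0], pp[1], w)[l] - demand(pm[0], pm[1], w)[l]) / (2 * h)
        S[l][k] = dxl_dpk + dxl_dw * bundle[k]

sym_gap = abs(S[0][1] - S[1][0])
Sp = [S[l][0] * p[0] + S[l][1] * p[1] for l in range(2)]
print(sym_gap < 1e-6, all(abs(val) < 1e-4 for val in Sp), S[0][0] < 0)  # True True True
```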
Proposition. For any good l, there exists a good k such that good l and good k are
net substitutes.
Proof: This follows from 3.G.2, which says D_p h(p, u) p = 0, or the product of the
Slutsky Substitution Matrix and the price vector is 0, and proposition 3.E.4, which
says ∂h_l(p, u)/∂p_l ≤ 0. So think of one row of the Slutsky Substitution Matrix. If the
element corresponding to the partial of the hicksian with respect to its own price is
non-positive, and the whole thing, when multiplied by p, is equal to zero, then there
must exist some good k such that ∂h_l(p, u)/∂p_k ≥ 0, or l and k are NET substitutes.
7.1
Integrability
When can a demand function be rationalized? That is, what conditions must
x(p, w) satisfy in order to guarantee that x(p, w) is derived from utility-maximizing
behavior? Or more precisely, what conditions must x(p, w) satisfy in order to guarantee
that there exists an increasing quasiconcave utility function u(·) such that x(p, w) is
the Walrasian demand function obtained from u(·)?
To answer this question, we first need to define the concept of Path Independence.
This is a condition that the line integral along any path from a point A to a point B
gives the same value. Think of climbing a mountain from a point A at the bottom to a
point B at the top and noting that the path you take will not affect your reaching the
summit. Let f(x) : R^n → R^n, and let t(z) : [0, 1] → R^n trace a curve C from A to B.
Then the line integral is:
∫_C f(x) · dx = ∫_0^1 f(t(z)) · t′(z) dz.
Path independence says this value depends only on the endpoints A and B.
Another way of saying this is that the line integral along any closed path (A → B → A)
equals zero. See G-7.1.
From this, we have the following result: Theorem (Fundamental Theorem of Calculus
of Line Integrals). Let C be any piecewise smooth curve from a point A to a point B.
Then the line integral of ∇φ is path independent and:
∫_C ∇φ · dp = φ(p_B) − φ(p_A).
Also, we get another Theorem: Given any real-valued function φ(·), define f = ∇φ.
Then:
∂f_j/∂p_i (p) = ∂f_i/∂p_j (p) ∀ p.
Proof: The second derivative is independent of the order of differentiation:
∂²φ/∂p_i∂p_j (p) = ∂²φ/∂p_j∂p_i (p) ∀ p.
Conversely, we have another result. Theorem: Given any function f(·) = (f_1, . . . , f_L)
such that
∂f_j/∂p_i (p) = ∂f_i/∂p_j (p) ∀ p,
there exists a real-valued function φ(·) such that f = ∇φ.
So if h(p, u) has a symmetric substitution matrix, then there exists a function e(p, u)
such that h(p, u) = ∇_p e(p, u). This is a sufficient condition.
Now we come to the Main Proposition on Integrability. Proposition: A continuously-differentiable function x : R^{L+1}_{++} → R^L_+, which satisfies Walras' Law and such that
D_p h(p, u) is symmetric and negative semi-definite, IS the demand function generated
by some increasing, quasi-concave utility function.
So the conditions on x(p, w) such that there exists a utility function with x(p, w) =
arg max u(x) s.t. p · x ≤ w are:
(1) Continuously Differentiable.
(2) Satisfies Walras' Law.
(3) D_p h(p, u) is Symmetric and Negative Semi-Definite.
The symmetry of the slutsky substitution matrix gives us the existence of the expenditure function, and the negative semi-definiteness gives us the concavity of the expenditure
function.
Sketch Proof of this main proposition: First recover e(p, u) from x(p, w) (observed).
For L = 2 and p_2 = 1,
de(p_1)/dp_1 = x_1(p_1, e(p_1)),
with initial condition w_0 = e(p_1^0). So for more than two goods, we just have the L partials
to solve. Again, the solution will exist (an expenditure function can be found) so long
as the slutsky substitution matrix is symmetric. The second step is to recover the
preference relation, ⪰, from e(p, u). Given an expenditure function e(p, u), define a
set:
V_u = {x : p · x ≥ e(p, u) ∀ p >> 0}.
Note that elements of V_u are at least as good as the bundles yielding utility u, so if we
do this for all the possible consumption bundles, we have defined our preference
relation, ⪰. And we're golden.
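The first recovery step is just an ordinary differential equation and can be sketched with a crude Euler scheme; the Cobb-Douglas demand below is an assumed example with a known closed-form expenditure function to compare against.

```python
# Sketch of the integrability step for L = 2, p2 = 1: recover e(p1) from observed demand
# by integrating de/dp1 = x1(p1, e(p1)) with an explicit Euler scheme (illustrative;
# the Cobb-Douglas demand x1 = A*wealth/p1 is an assumed functional form).
A = 0.25
def x1(p1, wealth):
    return A * wealth / p1

p1_0, w0, p1_end, n = 1.0, 100.0, 2.0, 200000
e_val, dp = w0, (p1_end - p1_0) / n
p1 = p1_0
for _ in range(n):
    e_val += x1(p1, e_val) * dp      # de/dp1 = x1(p1, e(p1))
    p1 += dp

closed_form = w0 * (p1_end / p1_0) ** A   # e(p1) = w0*(p1/p1_0)^A solves the ODE here
print(round(e_val, 2), round(closed_form, 2))   # 118.92 118.92
```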
8.1
Welfare Evaluation
What is the effect on the consumer of a change in prices (say from p^0 to p^1)? The idea
of the answer involves the distance from u^0 = v(p^0, w) to u^1 = v(p^1, w) as measured in
monetary units.
Consider the following money metric indirect utility function:
e(p̄, v(p, w)), for a fixed p̄ >> 0.
This gives us the wealth required to reach the utility level v(p, w) at prices p̄. So all we
have done is apply a monotonic transformation to the indirect utility function which
should maintain the same preferences (note that e(p, u) is strictly increasing in u), and
we have something that is denominated in say, dollars, which represents utility. See
G-8.1. To measure changes in wealth, we need to evaluate at either the old or new
prices. This involves shifting back the new budget constraint to the old indierence
curve (or vice versa) and looking at the vertical distance between the two parallel lines
(assuming p2 = 1).
Consider the two interesting cases of welfare changes:
Equivalent Variation (EV) - Old Prices:
EV(p^0, p^1, w) = e(p^0, u^1) − e(p^0, u^0) = e(p^0, u^1) − w.
Compensating Variation (CV) - New Prices:
CV(p^0, p^1, w) = e(p^1, u^1) − e(p^1, u^0) = w − e(p^1, u^0).
See G-8.2 for the EV and G-8.3 for the CV. Note we always assume the price of good
1 changes (falls in the graphs), and that the price of good 2 is constant and equal to 1.
So the EV can be thought of as the dollar amount that the consumer would be indifferent about accepting in lieu of the price change. Hence:
EV = e(p^0, u^1) − e(p^0, u^0).
EV = e(p^0, u^1) − w.
w + EV = e(p^0, u^1).
Apply v(p^0, ·),
v(p^0, w + EV) = v(p^0, e(p^0, u^1)) = u^1.
Similarly, the CV can be thought of as the net revenue of a planner who must compensate the consumer for a price change after it occurs, bringing the consumer back
to his original utility level:
CV = w − e(p^1, u^0).
w − CV = e(p^1, u^0).
Apply v(p^1, ·),
v(p^1, w − CV) = v(p^1, e(p^1, u^0)).
v(p^1, w − CV) = u^0.
Now suppose only the price of good 1 changes, so p^0 = (p_1^0, p_{−1}) and p^1 = (p_1^1, p_{−1}).
And keep in mind that w = e(p^0, u^0) = e(p^1, u^1). And now rewrite EV as:
EV(p^0, p^1, w) = e(p^0, u^1) − e(p^0, u^0)
= e(p^0, u^1) − e(p^1, u^1)
= ∫_{p_1^1}^{p_1^0} [∂e(p, u^1)/∂p_1] dp_1
= ∫_{p_1^1}^{p_1^0} h_1(p_1, p_{−1}, u^1) dp_1.
Equivalently,
CV(p^0, p^1, w) = e(p^1, u^1) − e(p^1, u^0)
= e(p^0, u^0) − e(p^1, u^0)
= ∫_{p_1^1}^{p_1^0} [∂e(p, u^0)/∂p_1] dp_1
= ∫_{p_1^1}^{p_1^0} h_1(p_1, p_{−1}, u^0) dp_1.
See graph G-8.4 for the picture of the EV as the area under the hicksian demand for
good 1 (at utility level u^1) between the two prices. The graph for the CV would be the
same but it's the area under the hicksian demand for good 1 at utility level u^0.
In general, by path independence,
EV(p^0, p^1, w) = ∫_C h(p, u^1) · dp,
CV(p^0, p^1, w) = ∫_C h(p, u^0) · dp,
where C is a curve from p^1 to p^0. Note we used the very specific case of a change in
the price of good 1 only above, while here we generalize for ANY two price vectors.
Next, define a third measure of consumer welfare, Consumer Surplus (CS):
CS = ∫_{p_1^1}^{p_1^0} x_1(p_1, p_{−1}, w) dp_1.
So CS is the area under the walrasian demand for good 1 at wealth w. See G-8.5 and
G-8.6 for pictures of (X, P) space where the hicksian and walrasian demand curves
are plotted together. For normal goods (G-8.5) the hicksian demand curves are steeper
than the walrasian demand curve (via Slutsky) while for inferior goods (G-8.6), the
hicksian demands are more shallow than the walrasian demand. Hence, as seen in the
areas in the graph:
Normal Good: CV < CS < EV.
Inferior Good: EV < CS < CV.
Finally, if there are NO INCOME EFFECTS, then CV = CS = EV, as in the case of
quasi-linear preferences.
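These three measures, and the normal-good ordering, can be computed directly for an assumed Cobb-Douglas consumer facing a fall in p_1 (with p_2 = 1); the functional forms below are illustrative, not from the notes.

```python
# Illustrative EV, CV, CS for a fall in p1 under assumed Cobb-Douglas demand
# x1 = A*w/p1 (a normal good), checking CV < CS < EV.
import math
A, w = 0.25, 100.0
p1_old, p1_new, p2 = 2.0, 1.0, 1.0   # price of good 1 falls

def v(p1):
    return (A / p1) ** A * ((1 - A) / p2) ** (1 - A) * w
def e(p1, u):
    return u * (p1 / A) ** A * (p2 / (1 - A)) ** (1 - A)

u0, u1 = v(p1_old), v(p1_new)
EV = e(p1_old, u1) - w                   # evaluated at old prices
CV = w - e(p1_new, u0)                   # evaluated at new prices
CS = A * w * math.log(p1_old / p1_new)   # integral of x1 = A*w/p1 between the prices
print(round(CV, 2), round(CS, 2), round(EV, 2))   # 15.91 17.33 18.92
```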
9.1
Example 3.I.1. This example demonstrates the deadweight loss (DWL) associated
with a commodity tax versus having a lump sum tax that raises the same amount of
revenue.
Consider a commodity tax on good 1 such that:
p^0 = (p_1^0, p_{−1}).
p^1 = (p_1^0 + t, p_{−1}).
Revenues from this tax equal T = t x_1(p^1). The consumer is made worse off provided
that the equivalent variation (EV) is less than −T, the amount of wealth the consumer
loses under the lump sum tax. Recall:
EV = e(p^0, u^1) − e(p^0, u^0).
Thus,
−T − EV = e(p^0, u^0) − e(p^0, u^1) − T
= e(p^1, u^1) − e(p^0, u^1) − T
= ∫_{p_1^0}^{p_1^0+t} h_1(p_1, p_{−1}, u^1) dp_1 − t h_1(p_1^0 + t, p_{−1}, u^1)
≥ 0.
So −EV ≥ T.
So what the government would have to pay the consumer, the EV, is smaller (more negative) than −T, so there must be a DWL associated with the commodity
tax. This can also be seen using the CV:
CV = e(p^1, u^1) − e(p^1, u^0).
Thus,
−T − CV = e(p^1, u^0) − e(p^1, u^1) − T
= e(p^1, u^0) − e(p^0, u^0) − T
= ∫_{p_1^0}^{p_1^0+t} h_1(p_1, p_{−1}, u^0) dp_1 − t h_1(p_1^0 + t, p_{−1}, u^0)
≥ 0,
where the last step also uses T ≤ t h_1(p_1^0 + t, p_{−1}, u^0) for a normal good. So −CV ≥ T.
Again, a DWL. Thus, see G-9.1 for graphs of the deadweight losses. Note in general,
it's the area to the left of the hicksian demands between the two prices, less the tax
revenue, which is the rectangle.
Another example is a monopoly. See G-9.2. Here, we assume Q-linear preferences so
x(p, w) = h(p, u). The DWL of the monopoly price versus the competitive price is
shaded in the graph.
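The commodity-tax DWL can be illustrated numerically for an assumed Cobb-Douglas consumer; the check below confirms −EV ≥ T, with the gap being the deadweight loss.

```python
# Illustrative check that a commodity tax t on good 1 costs the consumer more than a
# lump-sum tax raising the same revenue: -EV >= T (assumed Cobb-Douglas consumer).
A, w, p2, t = 0.25, 100.0, 1.0, 0.5
p1_old = 2.0
p1_new = p1_old + t

def v(p1):
    return (A / p1) ** A * ((1 - A) / p2) ** (1 - A) * w
def e(p1, u):
    return u * (p1 / A) ** A * (p2 / (1 - A)) ** (1 - A)

u1 = v(p1_new)
T = t * A * w / p1_new           # revenue = t * x1(p1 + t, w)
EV = e(p1_old, u1) - w           # negative: the consumer is worse off
DWL = -EV - T
print(round(T, 2), round(-EV, 2), DWL > 0)   # 5.0 5.43 True
```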
In general, if you are comparing two possible policies which will result in two possible
price vectors, p^1 or p^2, then use the EV to compare. Consider:
EV(p^0, p^1, w) = e(p^0, u^1) − e(p^0, u^0).
EV(p^0, p^2, w) = e(p^0, u^2) − e(p^0, u^0).
So,
EV(p^0, p^1, w) − EV(p^0, p^2, w) = e(p^0, u^1) − e(p^0, u^2).
So this allows for direct comparison such that p^1 is better than p^2 ⟺ EV(p^0, p^1, w) >
EV(p^0, p^2, w). With the CV this is impossible because,
CV(p^0, p^1, w) − CV(p^0, p^2, w) = e(p^2, u^0) − e(p^1, u^0),
and with two different price vectors, we can't say anything more about which is preferable.
Thus, EV is a transitive measure of welfare while CV may be intransitive. This also
means that EV is a valid money-metric indirect utility function while CV is not.
9.2
Revealed Preference
So far we have used a preference-based approach to demand instead of a choice-based
one. To use actual choices, we develop the Weak Axiom of Revealed Preference
(WARP).
Definition: (2F1) x(p, w) satisfies the WARP if, for any two price-wealth pairs (p^1, w^1)
and (p^2, w^2):
p^1 · x(p^2, w^2) ≤ w^1, x(p^1, w^1) ≠ x(p^2, w^2) ⇒ p^2 · x(p^1, w^1) > w^2.
See graphs G-9.3 thru G-9.5 for a graphical interpretation. Basically, we have to have
choice consistency. If x(p^2, w^2) is affordable at prices p^1 and wealth w^1, but we still
choose x(p^1, w^1), then it must be the case that at prices p^2 and wealth w^2, the bundle
x(p^1, w^1) is not affordable, since we would have chosen it over x(p^2, w^2) because it was
preferred at prices p^1 and wealth w^1.
The WARP is equivalent to the compensated law of demand. Proposition 2.F.1 says
that if x(p, w) is h.o.d. 0 in (p, w) and satisfies Walras' law, then for any compensated
price change (one with w^2 = p^2 · x(p^1, w^1)), WARP holds if and only if:
(p^2 − p^1) · [x(p^2, w^2) − x(p^1, w^1)] ≤ 0.
This inequality is strict if the bundles are different. Note when we say compensated,
we mean that wealth is not held constant here; it does not have anything to do with
hicksian demands.
Moreover, the compensated law of demand implies that the substitution matrix is negative semi-definite. Thus, if x(p, w) satisfies h.o.d. 0, Walras' law, and WARP, then the
Slutsky matrix is negative semi-definite. So what are we missing? SYMMETRY.
Definition 3.J.1 introduces the Strong Axiom of Revealed Preference (SARP), which
adds in transitivity of revealed preferences.
Finally, Proposition 3.J.1 says that if the function x(p, w) satisfies the SARP, then
there is a rational preference relation, ⪰, that rationalizes x(p, w). In other words, for
all (p, w), we have x(p, w) ≻ y for every y ≠ x(p, w) with y ∈ B_{p,w}. So while WARP
lacked symmetry of the slutsky matrix, SARP gives us everything we need, including
symmetry.
10
10.1
Suppose each consumer i has a walrasian demand x_i(p, w_i),
where p is a common price vector and wealth is particular to each consumer. Then
aggregate demand might be defined as:
x(p, w) = Σ_{i=1}^n x_i(p, w_i).
But generally, not only aggregate wealth, but the distribution of wealth matters in
aggregating demand.
Consider two individuals with wealth w_1 and w_2. They each have a certain income
effect for each of the L goods, and aggregate wealth is w = w_1 + w_2. Now consider two
other individuals with wealth:
w_1′ = w_1 + δ,
w_2′ = w_2 − δ.
Note that w_1′ + w_2′ = w, the same aggregate, but these two individuals might have
different wealth effects for the different goods. Thus aggregate demand must take this
into account.
From the above example, it is clear that there is no such thing as an aggregate slutsky
equation or an aggregate slutsky substitution matrix. In general, for individuals with
strictly convex preferences, the only restrictions on aggregate demand are that it must
be h.o.d. 0, continuous, and satisfy a version of Walras' law.
So what restrictions do we need on aggregate demand such that it is completely characterized by aggregate wealth? Pretty specific ones:
Proposition 4B1. Aggregate demand can be expressed as a function of aggregate wealth
if and only if all consumers have preferences admitting an indirect utility function of
the following form:
vi (p, wi ) = ai (p) + b(p)wi .
Note that the first term can be different for each individual, but the coefficient on the
individual's wealth must be the same across individuals. This is called the Gorman
Form Indirect Utility Function. Examples of preferences which admit a function
such as this are:
1) Preferences of all consumers are identical and homothetic.
2) Preferences of all consumers are quasilinear with respect to the same good (NO
wealth eects).
The reasoning behind 4B1 is Roy's identity. Recall that walrasian demand is (minus)
the partial of the indirect utility with respect to the price divided by the partial with
respect to wealth. The denominator must be the same for things to aggregate nicely.
Finally, if every consumer has homothetic (but different) preferences, then aggregate
demand satisfies the WARP. We don't get symmetry and negative semi-definiteness of
the substitution matrix though (it doesn't even exist!)
Definition: x_i(p, w_i) satisfies the UNcompensated law of demand if:
(p2 − p1)·[x_i(p2, w_i) − x_i(p1, w_i)] ≤ 0.
10.2
Production Set Notation. A production set Y is the set of all feasible production plans.
This notation allows for multiple outputs and there is no need to have distinct input
and output sets.
Define F(·) as the transformation function which satisfies:
Y = {y ∈ ℝ^L : F(y) ≤ 0},
and the boundary of Y is described by F(y) = 0. See G-10.1. Note that y1 and y2 could
be inputs and/or outputs. It just depends on where your point y is. At the point a,
y1 is an output (it's positive) but y2 is zero. This is not in the production set because
you would produce something out of nothing (usually not possible!). The point b is in the
production set but here, both y1 and y2 are inputs and we have NO output ... usually
not a very good production plan! In two dimensions, we will usually find reasonable
production plans in the NW and SE quadrants along the frontier.
Define the Marginal Rate of Transformation of good l for good k at y ∈ Y as:
MRT_lk(y) = [∂F(y)/∂y_l] / [∂F(y)/∂y_k].
Along the frontier, this is just the slope of F(y). Note that depending on which y you
evaluate this at, you could get either the marginal product of an input or the marginal
rate of substitution between two inputs.
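The MRT definition above can be checked numerically; a sketch, assuming an illustrative smooth transformation function F(y) = y1 + y2³ (any smooth F works the same way):

```python
# Sketch: numerical MRT for an assumed transformation function F(y) = y1 + y2**3;
# MRT_lk = (dF/dy_l) / (dF/dy_k), computed here by central differences.
def F(y1, y2):
    return y1 + y2**3

def mrt(y1, y2, h=1e-6):
    dF1 = (F(y1 + h, y2) - F(y1 - h, y2)) / (2 * h)
    dF2 = (F(y1, y2 + h) - F(y1, y2 - h)) / (2 * h)
    return dF1 / dF2

print(round(mrt(0.0, 2.0), 4))  # 0.0833, the analytic value 1/(3*2**2)
```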
The other notation frequently used in the theory of the firm is Production Function
notation. Here, we have a production function f(z) defined as the maximum quantity
of output, q, that can be produced using a given input vector, z. This restricts
attention to a single output and forces us to have distinct input and output sets. See
G-10.2.
Define the Marginal Rate of Technical Substitution (MRTS) as:
MRTS_lk(z) = [∂f(z)/∂z_l] / [∂f(z)/∂z_k].
No free lunch: if y ∈ Y and y ≥ 0, then y = 0.
Constant returns to scale: if y ∈ Y and α ≥ 0, then αy ∈ Y. We sometimes call Y a
cone in this case.
11
11.1
Convexity: if y, y′ ∈ Y and α ∈ [0, 1], then αy + (1 − α)y′ ∈ Y.
11.2
π(p) = max_y p·y,
such that,
F(y) ≤ 0.
It can also be written:
y(p) = arg max_y p·y,
such that,
F(y) ≤ 0.
Where π(p) is the profit function of the firm and y(p) is the supply function (with
inputs as negative quantities).
The one concern with this type of problem is that, unlike the consumer's maximization problem, where income was bounded, we have not explicitly assumed that the firm
has finite resources. In fact, with non-decreasing returns to scale and some y such that
p·y > 0, this problem is unbounded and the firm should expand output forever.
Lagrangian:
L = p·y − λF(y).
FOC:
p_l = λ ∂F(y*)/∂y_l.
Or in matrix notation:
p = λ∇F(y*).
In production function notation, the problem is:
max_z p·f(z) − w·z.
FOC:
p ∂f(z*)/∂z_l ≤ w_l,
with complementary slackness:
(p ∂f(z*)/∂z_l − w_l)·z_l* = 0.
Note that the CS condition comes in when the firm would rather consume less than 0
units of the input but cannot.
Proposition 5C1. Suppose π(·) is the profit function and y(·) is the supply correspondence. If Y is closed and satisfies free disposal, then:
(0) π(·) is continuous in p (if finite) and y(·) is upper hemi-continuous in p.
(1) π(·) is hod(1).
(2) π(·) is convex.
(3) If Y is convex, then
Y = {y ∈ ℝ^L : p·y ≤ π(p) ∀ p ≫ 0}.
(4) y(·) is hod(0) (if you double both input and output prices, the optimal
production plan remains unchanged).
(5) If Y is convex, then y(p) is a convex set for all p. Also, if Y is strictly convex,
then y(·) is a continuous function of p (single valued).
(6) Hotelling's Lemma: If y(p) is single valued, then π(·) is differentiable at p and:
∇π(p) = y(p).
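Hotelling's Lemma can be verified numerically; a sketch, assuming an illustrative single-output technology f(z) = √z with output price p and input price w, whose profit function works out to π(p, w) = p²/(4w):

```python
# Sketch of Hotelling's lemma for an assumed technology f(z) = sqrt(z):
# pi(p, w) = max_z p*sqrt(z) - w*z = p**2/(4*w), with output q* = p/(2w)
# and input z* = (p/(2w))**2. Then d(pi)/dp = q* and d(pi)/dw = -z*.
def profit(p, w):
    return p**2 / (4 * w)

p, w, h = 3.0, 1.5, 1e-6
q_star = p / (2 * w)
z_star = q_star**2

dpi_dp = (profit(p + h, w) - profit(p - h, w)) / (2 * h)
dpi_dw = (profit(p, w + h) - profit(p, w - h)) / (2 * h)
print(abs(dpi_dp - q_star) < 1e-6)  # True: gradient in p recovers output supply
print(abs(dpi_dw + z_star) < 1e-6)  # True: gradient in w recovers minus input demand
```

In netput notation this is exactly ∇π = y(p): the output appears with a plus sign and the input with a minus sign.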
(7) If y(·) is differentiable at p, then the substitution matrix,
Dy(p) = ∇²_p π(p) = [ ∂y_l(p)/∂p_k ], l, k = 1 … L,
is symmetric and positive semi-definite, with Dy(p)·p = 0. Positive semi-definiteness
gives the law of supply:
(p′ − p)·(y(p′) − y(p)) ≥ 0.
Or,
∂y_l(p)/∂p_l ≥ 0.
Recall that y is just a feasible bundle of outputs AND inputs. So for outputs, this
has the usual interpretation. But for inputs it is also valid, because inputs are
measured in negative numbers, so increasing the price of an input still results in
the firm reducing its demand for that input.
Another result is integrability: the functions y(·) are supply functions generated
by some convex production set, Y, if they are hod(0) and their substitution matrix
is symmetric and positive semi-definite. In other words, y needs to be the gradient
of a profit function.
11.3
C(w, q) = min_z w·z,
such that,
f(z) ≥ q.
z(w, q) = arg min_z w·z,
such that,
f(z) ≥ q.
Where C(w, q) is the cost function and z(w, q) is the conditional factor demand.
Proposition 5C2. Suppose that the production function f(·) is continuous and
strictly increasing. Then:
C(w, q) is:
(1) hod(1) in w,
(2) strictly increasing in q,
(3) continuous in (w, q),
(4) concave in w.
z(w, q) is:
(1) hod(0) in w,
(2) without excess production,
(3) upper hemi-continuous in (w, q),
(4) a convex set if f is q-concave, and single valued if f is strictly q-concave.
Further properties.
(1) Shephard's Lemma: z(w, q) = ∇_w C(w, q).
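Shephard's Lemma can be checked numerically; a sketch, assuming an illustrative Cobb-Douglas technology f(z) = √(z1·z2), whose cost function is C(w, q) = 2q√(w1·w2) with conditional factor demand z1(w, q) = q√(w2/w1):

```python
# Sketch of Shephard's lemma for an assumed Cobb-Douglas technology
# f(z) = sqrt(z1*z2): the cost function is C(w, q) = 2*q*sqrt(w1*w2) and
# the conditional factor demand is z1(w, q) = q*sqrt(w2/w1).
import math

def cost(w1, w2, q):
    return 2 * q * math.sqrt(w1 * w2)

w1, w2, q, h = 4.0, 1.0, 3.0, 1e-6
z1 = q * math.sqrt(w2 / w1)
dC_dw1 = (cost(w1 + h, w2, q) - cost(w1 - h, w2, q)) / (2 * h)
print(abs(dC_dw1 - z1) < 1e-6)  # True: dC/dw1 equals z1(w, q)
```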
(2) If z(w, q) is differentiable, then the substitution matrix,
D_w z(w, q) = ∇²_w C(w, q) = [ ∂z_l(w, q)/∂w_k ], l, k = 1 … L,
is symmetric and negative semi-definite.
(3) As a consequence of (2), we have:
∂z_l(w, q)/∂w_l ≤ 0 ∀ l.
Or in the less babyish form:
(z′ − z)·(w′ − w) ≤ 0.
12
12.1
See G-12.1. Regarding the graph of the isoquant (f(z) = q) tangent to the isocost
(c(w0, q) = w0·z), we have two regions. It is clear from the diagram that:
{z : f(z) ≥ q} ⊂ {z : w0·z ≥ c(w0, q)}.
So the upper contour set of the isoquant is a subset of the upper contour set of the
isocost. By definition, any vector of inputs that yields output of at least q will cost at
least c(w0, q). In other words, anything that costs less than c(w0, q) must also produce
less than q.
We can repeat this process by varying the input vector prices (w) and finding the
tangential point. (G-12.2) We find that this traces out the isoquant even if we didn't
know the isoquant to begin with. We fix q and find the minimum cost at prices w of
producing that q. Thus, we can derive the isoquant (technology) just from the cost
function.
Formally, the upper contour set of the isoquant is:
∩_{w ≫ 0} {z ∈ ℝ^L_+ : w·z ≥ c(w, q)}.
Or,
f(z) = max{q ≥ 0 : w·z ≥ c(w, q) ∀ w ≫ 0}.
I.e., we vary the input price vector and then find the maximum quantity attainable at
minimum cost.
Note that in the previous analysis, we assumed a convex isoquant. See G-12.3 for
a picture of a non-convex isoquant. However, even in this case, it is still possible
to recreate duality. The envelope formed by the isocosts is not the original isoquant;
however, it is the highest convex and monotonic curve that is weakly below the original
isoquant. Also, and most importantly, the areas that we miss (see G-12.4) are not
optimal anyway, since there are lower cost ways to produce the same level of output.
Statement of Duality: Start with a production function, f(z), which is continuous,
weakly increasing, and quasi-concave. Let c(w, q) be the cost function implied by f(z).
Then:
c(w, q) = min_{z ∈ ℝ^L_+} {w·z : f(z) ≥ q}.
Now start with c(w, q) and construct f(z) using duality. Thus,
f(z) = max{q ≥ 0 : w·z ≥ c(w, q) ∀ w ≫ 0}.
Similarly:
(1) Given an expenditure function, e(p, u), we can recover the upper contour set,
{x ∈ ℝ^L_+ : p·x ≥ e(p, u) ∀ p ≫ 0}.
Or equivalently,
u(x) = max{u > 0 : p·x ≥ e(p, u) ∀ p ≫ 0}.
(2) Given a profit function, π(p), we can determine the production set, Y, by
calculating:
Y = {y ∈ ℝ^L : p·y ≤ π(p) ∀ p ≫ 0}.
We saw this result earlier. Here y is a vector of inputs and outputs (a production
plan). So we have the set of all production plans which provide at most π(p) for
any given price vector.
Finally see G-12.5 for a diagram connecting the UMP to the EMP.
In the consumer's utility maximization problem, we max u(x) such that p·x ≤ w.
The maximized function is v(p, w), the indirect utility function, and the argument
which maximizes it is the Walrasian demand, x(p, w) (uncompensated demand).
13
13.1
See graphs G-13.1, G-13.2 and G-13.3 for plots of production functions and cost curves
for non-sunk and sunk costs. Notice that for non-sunk costs, the supply curve is equal
to the MC curve above the AC curve and zero elsewhere. Sunk costs should
NOT enter the decision process, so the supply curve is equal to the marginal cost even
below the AC curve.
See G-13.4 for a graph showing the short run and long run total and average cost
curves. Note that in the short run, some inputs may be fixed (say z2 = z̄), so we have
a constrained decision problem in the short run; the short run cost curve must then lie
above or touch the long run cost curve. The same is true for average costs. Thus the
envelope formed by the short run cost curves is the long run cost curve. In the long
run, all inputs may vary.
See G-13.5 for a simple graph of constant returns to scale. Same idea as the other
graphs.
Aggregate Supply
In the theory of the firm, individual supplies depend only on prices (not wealth), so
things work much better. In particular, if y_j(p) represents the supply of firm j, the
aggregate supply is:
y(p) = Σ_{j=1}^J y_j(p).
Each individual supply is hod(0) and its substitution matrix is symmetric and positive
semi-definite. These properties also hold for aggregate supply. Consequently, aggregate supply can be rationalized as arising from a single profit-maximizing firm whose
production set is:
Y = Y1 + Y2 + ⋯ + YJ.
Proposition 5E1. The aggregate profit attained by maximizing profit separately is the
same as that which would be obtained if the production units were to coordinate their
actions, when firms are price takers. Thus, we have the law of aggregate supply:
(p1 − p2)·(y(p1) − y(p2)) ≥ 0.
13.2
The monopolist chooses q to maximize q·p(q) − c(q). FOC:
p(q) + q·p′(q) − c′(q) = 0,
where p(q) + q·p′(q) is Marginal Revenue and c′(q) is Marginal Cost. Rearranging:
p(q)·(1 + q·p′(q)/p(q)) = c′(q).
p(q)·(1 + (q/p)(dp/dq)) = c′(q).
p(q)·(1 + 1/ε(q)) = c′(q).
(p(q) − c′(q))/p(q) = −1/ε(q).
Where ε(q) = (p/q)(dq/dp) < 0 is the elasticity of demand. So the RHS of the last
equation is the monopolist's markup, or (P − MC)/P. So the markup is inversely
proportional to the elasticity of demand. If demand is very elastic (|ε| high), the markup
is low, so the monopoly price is close to the competitive price.
See G-13.6 for a plot of the monopolist's situation. Notice the monopolist will only set
q on the elastic portion of the demand curve (above where MR = 0). Geometrically,
the point B, the point on the demand curve corresponding to the optimal price and
quantity, is the midpoint of the competitive point (A) and the choke point (D), but
this is only if demand is linear and costs are linear.
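The markup rule can be checked on a concrete example; a sketch, assuming an illustrative linear inverse demand p(q) = a − bq and constant marginal cost c:

```python
# Sketch: monopoly with assumed linear demand p(q) = a - b*q and constant
# marginal cost c. MR = MC gives q* = (a - c)/(2b); verify that the Lerner
# index (p - c)/p equals -1/eps, where eps = (dq/dp)*(p/q) = -p/(b*q) < 0.
a, b, c = 10.0, 1.0, 2.0
q = (a - c) / (2 * b)          # 4.0
p = a - b * q                  # 6.0
eps = -p / (b * q)             # elasticity of demand at the optimum
lerner = (p - c) / p
print(abs(lerner + 1 / eps) < 1e-12)  # True
```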
Now consider how the monopoly price responds to the marginal cost. Write profit as:
π(q, c) = q·p(q) − c·q − fixed costs.
∂π(q, c)/∂q = p(q) + q·p′(q) − c.
∂²π(q, c)/∂q² = p′(q) + q·p″(q) + p′(q) = 2p′(q) + q·p″(q).
∂²π(q, c)/∂q∂c = −1.
By the implicit function theorem, the optimal q(c) satisfies:
dq/dc = −[∂²π(q(c), c)/∂q∂c] / [∂²π(q(c), c)/∂q²].
Substitute in from above:
dq/dc = −(−1)/(2p′(q) + q·p″(q)) = 1/(2p′(q) + q·p″(q)).
Therefore:
dp(q)/dc = (dp(q)/dq)(dq/dc) = p′(q)/(2p′(q) + q·p″(q)).
dp(q)/dc = 1/(2 + q·p″(q)/p′(q)).
And finally note that when demand is linear, p″(q) = 0, so:
dp(q)/dc = 1/(2 + q·0/p′(q)) = 1/2.
Which is intuitive from the graph: under linear demand the price rises by half of any
increase in marginal cost (a $1 cost increase raises the price by $0.50).
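The 1/2 pass-through under linear demand can be checked numerically; a sketch, with an assumed demand p(q) = a − q:

```python
# Sketch: cost pass-through under assumed linear demand p(q) = a - q.
# The monopoly price is p*(c) = (a + c)/2, so dp/dc = 1/2: half of any
# marginal-cost increase is passed through to the price.
a = 10.0

def monopoly_price(c):
    q = (a - c) / 2.0        # from MR = MC with p(q) = a - q
    return a - q

dc = 1e-6
pass_through = (monopoly_price(2.0 + dc) - monopoly_price(2.0)) / dc
print(round(pass_through, 6))  # 0.5
```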
See G-13.7 for a graph of the government regulation solution to this problem. Note
that in order to eliminate the DWL altogether, the government must regulate the
monopolist to set the price equal to the competitive price. However, if there are fixed
costs, this will mean the monopolist will be making losses. Thus, the regulator sets:
q_reg = max{q : q·p(q) − c(q) ≥ 0}.
Assuming there is a fixed cost, there will still be a DWL, but it will be much smaller
(see graph). It's the best we can do.
Preference Assumptions
Completeness: x ⪰ y and/or y ⪰ x.
Transitivity: x ⪰ y, y ⪰ z ⟹ x ⪰ z.
Continuity: {x_i} → x, {y_i} → y, x_i ⪰ y_i ⟹ x ⪰ y.
Strongly Monotone: y ≥ x and y ≠ x ⟹ y ≻ x.
Monotone: y ≫ x ⟹ y ≻ x.
13.4
Convexity: y ⪰ x, z ⪰ x ⟹ αy + (1 − α)z ⪰ x, for α ∈ [0, 1].
Properties of Functions
(*) D_p h(p, u) = D²_p e(p, u): negative semi-definite, symmetric, and D_p h(p, u)·p = 0.
e(p, u) (Expenditure Function)
(1) Hod(1) in p.
(2) Strictly increasing in u and weakly increasing in p_l.
(3) Concave in p.
(4) Continuous in p and u.
z(w, q) (Conditional Factor Demands)
(1) Hod(0) in w.
(2) No excess production.
(3) Upper hemi-continuous in (w, q).
(4) If f(z) is quasi-concave, z(w, q) is a convex set, and if f(z) is strictly quasi-concave, z(w, q) is single valued.
C(w, q) (Cost Function)
(1) Hod(1) in w.
(2) Strictly increasing in q; weakly increasing in w.
(3) Continuous in (w, q).
(4) Concave in w.
(5) C(w, 0) = 0.
13.5
(1) Non-empty, (2) Closed, (3) No free lunch, (4) Inaction, (5) Free disposal, (6)
Irreversibility.
(7) Nonincreasing RTS: y ∈ Y and α ∈ [0, 1], then αy ∈ Y.
(8) Nondecreasing RTS: y ∈ Y and α > 1, then αy ∈ Y.
(9) Constant RTS: y ∈ Y and α ≥ 0, then αy ∈ Y.
13.6
Duality
Key Relationships:
(1) e(p, v(p, w)) = w.
(2) v(p, e(p, u)) = u.
(3) h(p, u) = x(p, e(p, u)).
(4) x(p, w) = h(p, v(p, w)).
Shephard's Lemma: h(p, u) = ∇_p e(p, u).
Roy's Identity:
x_l(p, w) = −[∂v(p, w)/∂p_l] / [∂v(p, w)/∂w], for l = 1 … L.
Slutsky's Equation:
∂h_l(p, u)/∂p_k = ∂x_l(p, w)/∂p_k + [∂x_l(p, w)/∂w]·x_k(p, w), for l, k = 1 … L.
Hotelling's Lemma: ∇_p π(p) = y(p).
Shephard's Lemma: ∇_w C(w, q) = z(w, q).
13.7
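The Slutsky equation can be verified numerically; a sketch, assuming illustrative Cobb-Douglas preferences u(x) = x1^a · x2^(1−a), whose Walrasian demand and expenditure function have the closed forms used below:

```python
# Sketch: numerical check of the Slutsky equation for assumed Cobb-Douglas
# preferences, where x1(p, w) = a*w/p1 and
# e(p, u) = u * (p1/a)**a * (p2/(1-a))**(1-a).
a = 0.4

def x1(p1, p2, w):
    return a * w / p1

def e(p1, p2, u):
    return u * (p1 / a)**a * (p2 / (1 - a))**(1 - a)

def h1(p1, p2, u):           # Hicksian demand via duality: h = x(p, e(p, u))
    return x1(p1, p2, e(p1, p2, u))

p1, p2, w, h = 2.0, 3.0, 10.0, 1e-5
u = (x1(p1, p2, w))**a * ((1 - a) * w / p2)**(1 - a)  # v(p, w)

dh1_dp1 = (h1(p1 + h, p2, u) - h1(p1 - h, p2, u)) / (2 * h)
dx1_dp1 = (x1(p1 + h, p2, w) - x1(p1 - h, p2, w)) / (2 * h)
dx1_dw = (x1(p1, p2, w + h) - x1(p1, p2, w - h)) / (2 * h)

# Slutsky: dh1/dp1 = dx1/dp1 + (dx1/dw) * x1
print(abs(dh1_dp1 - (dx1_dp1 + dx1_dw * x1(p1, p2, w))) < 1e-4)  # True
```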
Utility Maximization.
v(p, w) = max u(x) s.t. p·x ≤ w.
Expenditure Minimization.
e(p, u) = min p·x s.t. u(x) ≥ u.
Profit Maximization.
π(p) = max p·f(z) − w·z (z ≥ 0).
Cost Minimization.
C(w, q) = min w·z s.t. f(z) ≥ q.
13.8
Concave: u(x + (1
)y)
Quasiconcave: u(x + (1
Quasiconvex: u(x + (1
native formulation:
u(x) + (1
)y)
)u(y).
@xi
51
xi = rF (x).
Since x(p, w) is hod(0), Euler's formula gives:
Σ_{k=1}^L p_k ∂x_l(p, w)/∂p_k + w ∂x_l(p, w)/∂w = 0, for l = 1 … L.
Cournot aggregation:
Σ_{l=1}^L p_l ∂x_l(p, w)/∂p_k + x_k(p, w) = 0, for k = 1 … L.
Engel aggregation:
Σ_{l=1}^L p_l ∂x_l(p, w)/∂w = 1.
Compensated law of demand:
(p″ − p′)·[h(p″, u) − h(p′, u)] ≤ 0.
EV(p0, p1, w) = e(p0, u1) − e(p0, u0) = ∫_{p1¹}^{p1⁰} h1(p1, p_{−1}, u1) dp1.
CV(p0, p1, w) = e(p1, u1) − e(p1, u0) = ∫_{p1¹}^{p1⁰} h1(p1, p_{−1}, u0) dp1.
Consumer Surplus:
CS(p0, p1, w) = ∫_{p1¹}^{p1⁰} x1(p1, p_{−1}, w) dp1.
The Weak Axiom of Revealed Preference (WARP) does not provide symmetry of D_p h(p, u).
WARP: p1·x(p2, w2) ≤ w1, x(p1, w1) ≠ x(p2, w2) ⟹ p2·x(p1, w1) > w2.
Equivalently, for a compensated price change (w2 = p2·x(p1, w1)):
(p2 − p1)·[x(p2, w2) − x(p1, w1)] ≤ 0.
The Strong Axiom of Revealed Preference (SARP) adds transitivity and yields symmetry.
Uncompensated law of demand:
(p″ − p′)·[x_i(p″, w_i) − x_i(p′, w_i)] ≤ 0.
Law of Supply:
(p″ − p′)·(y″ − y′) ≥ 0.
(w″ − w′)·(z″ − z′) ≤ 0.
13.9
(p″ − p′)·(y(p″) − y(p′)) ≥ 0.
A C¹ function, x(p, w), which satisfies Walras' law and whose Slutsky matrix is symmetric and negative
semi-definite, IS the demand function generated by some increasing quasiconcave
utility function.
If u(x) is homothetic, e(p, u) = φ(u)ψ(p) ⟹ h(p, u) = φ(u)∇ψ(p).
Notes from PS 6:
∂ln C(w, q)/∂ln w = [∂C(w, q)/∂w]·[w/C(w, q)].
13.10
Two Problems
Suppose x* solves the UMP at (p, w), so that p·x* ≤ w; that is, x* is affordable at w: FEASIBLE.
Now take any y with p·y ≤ w. Then y is affordable at (p, w). But since x* is optimal at (p, w):
u(y) ≤ u(x*).
Again by CRS:
(−z, (1/λ)f(λz)) ∈ Y.
But (1/λ)f(λz) ≤ f(z), because f(z) is the maximum production at z. So,
f(λz) ≤ λf(z).
Since f(λz) ≥ λf(z) as well,
f(λz) = λf(z),
and f is hod(1).
(⟸)
Start with an initial bundle (−z, q) ∈ Y. By definition:
f(z) ≥ q.
Thus,
λq ≤ λf(z) = f(λz).
This implies:
(−λz, f(λz)) ∈ Y.
Since λq ≤ f(λz), by free disposal:
(−λz, λq) ∈ Y.
So Y is CRS.
14
14.1
With third-degree price discrimination, the monopolist maximizes
p1(q1)·q1 + p2(q2)·q2 − c(q1 + q2) − fixed costs.
FOCs:
(p1(q1) − c)/p1(q1) = −1/ε1(q1).
(p2(q2) − c)/p2(q2) = −1/ε2(q2).
So the more inelastic is the demand (|ε_i| smaller), the larger the markup. See G-14.1 for
what this looks like. We basically have two separate markets to work in and set price
and quantity according to the classic monopoly setup in each market. The requirements
for this setup are:
(1) Ability to identify the markets; no resale.
(2) Need a self-selecting tariff such that the pricing scheme leads to voluntary
self-selection into the different groups.
One might consider selling drugs in Canada and the US as a good example.
The Idea Behind Nonlinear Pricing (Two-Part Tariff)
Assume all consumers are identical and they pay a fixed amount E for the privilege to
buy at price p. So the total cost to a consumer buying q units is:
E + p·q.
This is the Costco situation. See G-14.2. The optimal monopolist strategy is to set p = c and charge a
fixed fee E = CS, the consumer surplus. In this case, all demand above cost is served,
so there is no DWL, so in a way this is better than the simple monopoly outcome.
Now assume there are two types of consumers (Disneyland Dilemma, QJE 1971). See
G-14.3. If price equals marginal cost, then E is the same area as above. However, this
is not optimal. The optimal scheme is to set p = c + Δ > c. The new entry fee at the
higher price is shaded yellow and the overall gain to the monopolist is shaded green.
So the additional profit is Δ·q2 and the loss comes from the resulting smaller entry
fee.
For a small enough Δ, the monopolist is strictly better off charging a price above c.
The distortion created by the two-part tariff makes the monopolist better off. Note there
is a DWL associated with this type of pricing scheme.
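The claim that p > c is optimal with two types can be checked on a concrete example; a sketch, with assumed linear individual demands q_i(p) = θ_i − p (θ2 > θ1) and the entry fee set to the low type's surplus so both types participate:

```python
# Sketch of the two-part tariff with two consumer types: assumed linear
# demand q_i(p) = theta_i - p, theta2 > theta1, marginal cost c. The entry
# fee is the LOW type's surplus, E = (theta1 - p)**2 / 2, so both types
# enter; profit is 2E + (p - c)*(q1 + q2).
theta1, theta2, c = 4.0, 6.0, 1.0

def profit(p):
    E = (theta1 - p)**2 / 2              # low type's consumer surplus
    q_total = (theta1 - p) + (theta2 - p)
    return 2 * E + (p - c) * q_total

print(profit(c + 0.1) > profit(c))  # True: pricing above marginal cost helps
```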
Anonymous Nonlinear Pricing (2nd Degree Price Discrimination)
Assume 2 commodities: x, the monopolized good, and y, everything else, with py = 1.
There are 2 types of consumers with utility:
u1(x1, y1) = u1(x1) + y1,
u2(x2, y2) = u2(x2) + y2.
A menu is offered to each consumer. Choose either:
(r1, x1), or (r2, x2).
Where ri is the payment from the consumer, including the entry fee; it is really the
revenue to the monopolist. And xi is the total quantity of the good received.
We need an assumption that is displayed in G-14.4, the Single Crossing Property. Assume:
u2′(x) > u1′(x) ∀ x ≥ 0, and u1(0) = u2(0) = 0.
This implies:
u2(x) > u1(x) ∀ x > 0.
The slope of consumer 2's utility function is greater than consumer 1's, and since they
both start at 0, consumer 2 must be extracting more utility than consumer 1 at all
levels of consumption.
Monopolist's problem:
max_{r1, r2, x1, x2} {r1 + r2 − c(x1 + x2)}.
Subject to:
(1) u1(x1) − r1 ≥ 0.
(2) u2(x2) − r2 ≥ 0.
(3) u1(x1) − r1 ≥ u1(x2) − r2.
(4) u2(x2) − r2 ≥ u2(x1) − r1.
Note that by (4) and single crossing,
u2(x2) − r2 ≥ u2(x1) − r1 > u1(x1) − r1 ≥ 0,
which means the LHS of (4), which is also the LHS of (2), does NOT bind:
(2): u2(x2) − r2 > 0.
At the optimum, (4) binds (otherwise the monopolist could raise r2), so
r2 − r1 = u2(x2) − u2(x1) > u1(x2) − u1(x1),
where the inequality follows from single crossing (with x2 > x1). So,
r2 − r1 > u1(x2) − u1(x1).
Rearranging:
u1(x1) − r1 > u1(x2) − r2.
And this means that (3) does NOT bind. So (1) must bind.
And, from the FOCs with (1) and (4) binding,
u2′(x2) = c.
So the high-value consumer will buy the efficient quantity while the low-value consumer
will buy less at a higher price per unit.
This whole analysis assumes the monopolist sells to both types of consumers. One
should check whether he could do better by selling only to type 2. Under this setting:
r2 = u2(x2), where u2′(x2) = c.
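The screening solution can be computed in closed form for a concrete specification; a sketch, assuming illustrative utilities u_i(x) = θ_i√x (θ2 > θ1, so single crossing holds), where binding (1) and (4) give r1 = u1(x1) and r2 = u2(x2) − u2(x1) + u1(x1), and the FOCs give u2′(x2) = c and (2θ1 − θ2)/(2√x1) = c:

```python
# Sketch of the screening solution with assumed utilities u_i(x) = theta_i*sqrt(x).
# FOCs: u2'(x2) = c (efficient at the top) and (2*theta1 - theta2)/(2*sqrt(x1)) = c
# (the low type's quantity is distorted downward).
import math

theta1, theta2, c = 1.0, 1.5, 1.0

x2 = (theta2 / (2 * c))**2                 # u2'(x2) = c
x1 = ((2 * theta1 - theta2) / (2 * c))**2  # distorted low-type quantity
r1 = theta1 * math.sqrt(x1)
r2 = theta2 * math.sqrt(x2) - theta2 * math.sqrt(x1) + r1

print(x1 < x2)        # True: the low type consumes less
print(round(r2, 4))   # 1.0
```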
Nonlinear/Nonuniform Pricing (1st Degree Price Discrimination)
The monopolist's problem:
max_{r1, r2, x1, x2} {r1 + r2 − c(x1 + x2)}.
There is NO need for self-selection here; we can identify perfectly what price to charge
to whom. Constraints (rationality):
u1(x1) ≥ r1.
u2(x2) ≥ r2.
Solution: both constraints bind, r_i = u_i(x_i), and the quantities are efficient, u_i′(x_i) = c.
15
15.1
Equivalently, a NE is a mutual best response. That is, for every player i, s*_i is a solution
to:
s*_i ∈ arg max_{s_i ∈ S_i} u_i(s*_1, …, s*_{i−1}, s_i, s*_{i+1}, …, s*_n).
Definition: A Strict Nash Equilibrium (SNE) of a game G in pure strategies consists
of a strategy profile in which every player would be made strictly worse off by unilaterally deviating.
So, (s*_1, …, s*_n) such that:
u_i(s*_1, …, s*_{i−1}, s*_i, s*_{i+1}, …, s*_n) > u_i(s*_1, …, s*_{i−1}, s_i, s*_{i+1}, …, s*_n) ∀ s_i ∈ S_i, s_i ≠ s*_i.
Cournot Model of Oligopoly. Consider a model with n firms, each firm with constant
marginal cost c_i. The aggregate inverse demand function is P(Q). Each firm simultaneously and independently selects a strategy consisting of a quantity q_i ∈ [0, a], where
P(a) = 0.
Payoff Functions:
π1(q1, q2) = q1·P(q1 + q2) − c1·q1.
π2(q1, q2) = q2·P(q1 + q2) − c2·q2.
Strategies:
S1 = S2 = [0, a].
Assume c1 = c2 = c and linear demand, P(Q) = a − Q. Solution: (q*_1, q*_2) is a NE iff
q*_1 solves:
max_{q1} q1[a − q1 − q*_2 − c] = max_{q1} aq1 − q1² − q1q*_2 − cq1,
and q*_2 solves:
max_{q2} q2[a − q*_1 − q2 − c].
FOCs:
a − 2q1 − q2 − c = 0. (1)
a − q1 − 2q2 − c = 0. (2)
Subtracting (2) from (1): −q1 + q2 = 0 ⟹ q1 = q2. Substituting into (1):
a − 3q1 − c = 0 ⟹ 3q1 = a − c.
q*_1 = (a − c)/3.
By symmetry:
q*_2 = (a − c)/3.
16
16.1
From the FOC, q1 = (a − q2 − c)/2. And this is firm 1's (and by symmetry, firm 2's) best response function:
R1(q2) = (a − q2 − c)/2.
R2(q1) = (a − q1 − c)/2.
These functions describe firm i's best response to whatever firm j has chosen. See
G-16.2. Where R1 and R2 cross is the NE (mutual best response). Note in this graph,
we have set c = 0 and a = 1.
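Iterating these best response functions converges to the equilibrium; a sketch (the starting point is arbitrary):

```python
# Sketch: iterating the Cournot best responses R_i(q_j) = (a - q_j - c)/2
# converges to the NE q1* = q2* = (a - c)/3 (here with the notes' a = 1, c = 0).
a, c = 1.0, 0.0
q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = (a - q2 - c) / 2, (a - q1 - c) / 2

print(abs(q1 - (a - c) / 3) < 1e-9)  # True
print(abs(q2 - (a - c) / 3) < 1e-9)  # True
```

This is the same "zooming in" logic used later in the IESDS argument for the Cournot game.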
Bertrand Model of Oligopoly (1883) - Competition in Prices
Consider n firms, each with constant marginal cost c_i. Aggregate demand is Q(p).
Firms select prices p_i ∈ [0, a], where Q(a) = 0.
Payoff functions when n = 2:
π1(p1, p2) = Q(p1)[p1 − c1] if p1 < p2; (1/2)Q(p1)[p1 − c1] if p1 = p2; 0 if p1 > p2.
And,
π2(p1, p2) = Q(p2)[p2 − c2] if p2 < p1; (1/2)Q(p2)[p2 − c2] if p2 = p1; 0 if p2 > p1.
With c1 = c2 = c, the unique NE is p1 = p2 = c. Suppose instead both firms set some common price p > c. Each
earns:
m1 = (1/2)D(p)[p − c].
But if p > c, each firm can profitably deviate by setting p_i = p − ε and capturing
the entire market, earning:
m2 = D(p − ε)[p − ε − c].
For small enough ε, m2 > m1. Thus this undercutting continues until p = c and no
more profitable deviations exist.
Note that this NE, p1 = p2 = c with zero profits for each firm, is NOT a STRICT NE.
Notice that if either firm raises its price above p*, it continues to earn zero profits.
Thus the NE is not strict. We only look at deviations of one player and see if they can
be as well off at another strategy (even if it is NOT Nash!). Of course, one firm setting
a price above marginal cost is not a best response, but it shows that the zero-profit
NE is not strict, in that firm 1 could earn zero profits with another strategy (which is
out of equilibrium).
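The undercutting logic can be checked on a concrete example; a sketch, with an assumed demand Q(p) = 10 − p and common marginal cost c = 2:

```python
# Sketch: why undercutting unravels any common price above marginal cost in
# the Bertrand game (assumed demand Q(p) = 10 - p, common cost c = 2).
c, tick = 2.0, 0.01

def profit(p_own, p_other):
    q = max(10 - p_own, 0.0)
    if p_own < p_other:
        return (p_own - c) * q        # undercutter serves the whole market
    if p_own == p_other:
        return (p_own - c) * q / 2    # tie: split demand
    return 0.0

p = 5.0
print(profit(p - tick, p) > profit(p, p))   # True: undercutting pays above c
print(profit(c - tick, c) <= profit(c, c))  # True: no profitable cut at p = c
```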
Pollution Game
Consumers choose between 3 cars: A, B, and C. The cars are identical except for their
price and pollution emissions:

Model   Price      Emissions (e)
A       $15,000    100
B       $16,000    10
C       $17,000    0

Consumers have utility:
u = v − p − E.
Where v is the reservation value of the car, p is the price paid, and E = Σ_{i=1}^n e_i is the
monetary equivalent of the aggregate pollution caused.
While we would like to think the socially optimal car would be chosen (all with car
B if n ≥ 35), in actuality the NE strategy is to choose car A. Note:
π_i(A, s_{−i}) − π_i(B, s_{−i}) = 1000 − 90 = 910.
So the consumer is made strictly better off by switching from car B to car A: the free rider
problem. The solution is to make car A illegal to sell or to tax the difference between the
social and private cost.
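The free-rider calculation above is a one-liner to verify; a sketch (each unit of pollution is normalized to cost every consumer $1, as in the utility specification):

```python
# Sketch of the free-rider calculation: switching from car B to car A saves
# $1000 in price but adds 90 emission units, of which the buyer personally
# bears only $90 of the aggregate harm.
price = {"A": 15_000, "B": 16_000, "C": 17_000}
emissions = {"A": 100, "B": 10, "C": 0}

def payoff_gain_switching(from_car, to_car):
    # private gain = price saved minus the extra pollution the buyer bears
    return (price[from_car] - price[to_car]) - (emissions[to_car] - emissions[from_car])

print(payoff_gain_switching("B", "A"))  # 910
```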
17
17.1
Definition: A strategy s_i (strictly) dominates s_i′ if, for all possible strategy combinations of player i's opponents, s_i yields a (strictly) higher payoff than s_i′ to player
i.
We can use this idea to find NE by Iterated Elimination of Strictly Dominated
Strategies (IESDS). Note this only works for strictly dominated strategies, such that
the payoffs are strictly larger in the dominant strategy. More on this soon.
Proposition 1. If IESDS yields a unique strategy n-tuple, this strategy n-tuple is the
unique strict NE.
Proposition 2. Every NE survives IESDS.
See G-17.1 for IESDS on a simple game. Note the resulting NE is unique and strict:
the deviator would be strictly worse off by unilaterally deviating.
Footnotes to these two propositions:
(1) Not every game can be solved using IESDS (see Battle of the Sexes).
(2) Sometimes proposition 2 may be helpful even if IESDS does not yield a unique
NE. See G-17.2, which reduces to Battle of the Sexes after 2 rounds of IESDS.
(3) We need strictly dominated in the statement of the propositions. Iterated elimination of
weakly dominated strategies will not work. Consider the Bertrand game where
P = MC was a NE and profits were zero. If c = 10 and p_i = 10, then p_i is weakly
dominated by p_i′ = 20: if p_j′ = 25, this yields positive profit for firm i, while if
p_j′ = 15, this again yields 0 profit. So p_i is weakly (but NOT strictly) dominated
by p_i′.
The main point to take from this is that IESDS is order-independent. No matter how
we eliminate strategies, we end up with the same result. This would not apply when
using weakly dominated strategies.
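The elimination procedure is easy to mechanize for finite games; a sketch, run here on an assumed prisoner's-dilemma-style bimatrix for illustration:

```python
# Sketch: iterated elimination of strictly dominated strategies on a bimatrix
# game. row_pay[r][c] / col_pay[r][c] are the row / column player's payoffs.
def iesds(row_pay, col_pay):
    rows = list(range(len(row_pay)))
    cols = list(range(len(row_pay[0])))
    changed = True
    while changed:
        changed = False
        for s in rows[:]:
            # s is strictly dominated if some row t beats it against every column
            if any(all(row_pay[t][c] > row_pay[s][c] for c in cols)
                   for t in rows if t != s):
                rows.remove(s); changed = True
        for s in cols[:]:
            if any(all(col_pay[r][t] > col_pay[r][s] for r in rows)
                   for t in cols if t != s):
                cols.remove(s); changed = True
    return rows, cols

# Assumed example: strategy 1 (Defect) strictly dominates for both players.
row = [[3, 0], [5, 1]]
col = [[3, 5], [0, 1]]
print(iesds(row, col))  # ([1], [1])
```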
Now back to the response graph for the Cournot game shown in G-17.3. Recall c = 0
and a = 1. Notice the following:
(I) q1 > 1/2 is strictly dominated by q1 = 1/2, because q1 = 1/2 is the monopoly
quantity. Thus we shade the region (I) to show these strategies are dominated.
(II) The same is true for firm 2 (by symmetry), so region (II) is shaded.
(III) Since q2 ≤ 1/2 after the first round, R1(q2) ≥ 1/4, so q1 < 1/4
is strictly dominated by q1 = 1/4. So shade region (III).
(IV) Repeat (III) for firm 2.
Notice the resulting picture is identical to the original, only scaled down. So we
can repeat this process and, in the end, we zoom in to the NE at R1(q2) =
R2(q1) ⟹ q1 = q2 = 1/3.
18
18.1
Consider the game of matching pennies in G-18.1. Note there are NO pure strategy
NE in this game.
Definition: Let player i have K pure strategies available,
S_i = {s_i1, …, s_iK}.
Then a mixed strategy for player i is a probability distribution over those K strategies:
p_i = (p_i1, …, p_iK).
Note Σ_{k=1}^K p_ik = 1 and 0 ≤ p_ik ≤ 1.
So back in our game, suppose player I randomizes between H and T, playing H with
probability q. This means player I must be indifferent between H and T (i.e., the expected
payoffs must be the same). Suppose also that player II randomizes between H and T
and plays H with probability r. Thus, the expected payoff to player I from playing H
is:
r(1) + (1 − r)(−1) = 2r − 1.
From playing T:
r(−1) + (1 − r)(1) = 1 − 2r.
Indifference requires 2r − 1 = 1 − 2r ⟹ r = 1/2. A similar argument for player II gives q = 1/2, so:
(p11, p12) = (p21, p22) = (1/2, 1/2).
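The indifference condition can also be solved numerically; a sketch, finding the r that equates player I's expected payoffs by bisection:

```python
# Sketch: solving the matching-pennies indifference condition. Player I's
# expected payoffs from H and T (when player II plays H with prob r) are
# 2r - 1 and 1 - 2r; equating them pins down r = 1/2.
def payoff_H(r):
    return r * 1 + (1 - r) * (-1)   # 2r - 1

def payoff_T(r):
    return r * (-1) + (1 - r) * 1   # 1 - 2r

# bisection on payoff_H(r) - payoff_T(r) = 4r - 2, which is increasing in r
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if payoff_H(mid) - payoff_T(mid) < 0:
        lo = mid
    else:
        hi = mid
print(abs((lo + hi) / 2 - 0.5) < 1e-12)  # True
```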
See G-18.2 for the diagram showing the response functions for the two players. Note
that if player II is playing each strategy with probability 1/2, then player I is indifferent
between playing each strategy with any probability. He could play all heads, or all tails,
but will still end up with an expected payoff of zero. The mixed strategy NE is at the
intersection (unique).
Theorem: Nash Existence Theorem (1950). Every finite game has at least ONE NE
(possibly in mixed strategies). A finite game is a game with a finite number of
players and a finite number of strategies for each player.
Fact: If, in a mixed strategy NE, player i places positive probability on each of two
strategies, then player i must be indifferent between these two strategies (i.e., they must
yield the same expected payoff). Otherwise, the player should only play the strategy
with the higher expected payoff.
Note also that whenever we are dealing with a mixed strategy NE, the NE cannot be
strict, since by definition there must exist strategies with equal expected payoffs.
In the Battle of the Sexes game in G-18.3, the man goes to the boxing match with
probability q and the woman goes with probability r. Thus the man's indifference
between boxing and ballet implies:
r(2) + (1 − r)(0) = r(0) + (1 − r)(1) ⟹ r = 1/3.
And the woman's indifference implies:
q(1) + (1 − q)(0) = q(0) + (1 − q)(2) ⟹ q = 2/3.
So there are three NE in this game (2 pure and one mixed):
(1, 0), (1, 0);
(0, 1), (0, 1);
(2/3, 1/3), (1/3, 2/3).
See G-18.4 for the best response plot. Notice there are three intersections!
Finally, consider the graph in G-18.5, which shows the payoffs to player I and player II
in the Battle of the Sexes. Note that the mixed strategy NE is Pareto dominated by
both of the pure strategy NE! This is NOT a general result, but it shows that sometimes
the breakdown of an ability to bargain over pure strategies leads to a mixed strategy
which yields a lower expected payoff for all involved.
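The Pareto-dominance claim is a direct expected-payoff computation; a sketch, using the payoff matrix from the notes ((2,1) at (Boxing, Boxing), (1,2) at (Ballet, Ballet), (0,0) otherwise):

```python
# Sketch: expected payoffs in Battle of the Sexes at the mixed NE
# (man plays Boxing with q = 2/3, woman with r = 1/3).
q, r = 2 / 3, 1 / 3

man = q * r * 2 + (1 - q) * (1 - r) * 1
woman = q * r * 1 + (1 - q) * (1 - r) * 2

print(round(man, 4), round(woman, 4))  # 0.6667 0.6667, below both pure NE payoffs
```

Both players get 2/3, less than the 1 that the "loser" gets at either pure NE.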
19
19.1
The space of mixed strategy profiles is Δ^{K_1} × Δ^{K_2} × ⋯ × Δ^{K_N}, a compact,
convex subset of ℝ^n, where
n = Σ_{i=1}^N K_i,
and K_i is the number of pure strategies of player i.
Now consider the best response correspondence, F. If σ_i is a mixed strategy for player
i, then (with three players):
F: (σ1, σ2, σ3) ↦ (BR1(σ2, σ3), BR2(σ1, σ3), BR3(σ1, σ2)).
So a fixed point will exist, i.e., (σ*_1, σ*_2, σ*_3) with
σ*_1 ∈ BR1(σ*_2, σ*_3),
σ*_2 ∈ BR2(σ*_1, σ*_3),
σ*_3 ∈ BR3(σ*_1, σ*_2),
provided the conditions of the Kakutani fixed point theorem hold.
Is F convex valued? Yes; mixed strategies allow you to convexify. Given two
strategies that yield the same expected payoff, a convex combination of those
strategies will also yield the same expected payoff.
Does F have a closed graph? Suppose (x^n, y^n) → (x, y) with y^n ∈ BR_i(x^n) ∀ n,
but y ∉ BR_i(x), so the limit point is not in the best response correspondence.
Then there exist ε > 0 and y′ ≠ y such that:
u_i(y′, x) > u_i(y, x) + ε.
But, taking limits, this contradicts:
u_i(y^n, x^n) ≥ u_i(y′, x^n), ∀ n,
which follows from the fact that y^n is a best response to x^n for all n. Thus the
graph of F contains its limit points, which means that F has a closed graph.
Thus, Kakutani's theorem applies and there exists a vector of mixed strategies with the property
that all players are playing their best response to all other players' strategies. In
other words, the fixed point which is guaranteed by the Kakutani theorem is a Nash
Equilibrium.
20
20.1
One of the requirements for the fixed point theorems to hold was that we had a finite game
with finitely many strategies. Is this always the case? Usually, but there are at least
two important exceptions:
(1) Cournot game with n players. Assume, instead of linear demand and constant
MC, general demand and cost functions. Also assume:
max_q {q·P(Q̄ + q) − C(q)},
has a unique solution for all Q̄ > 0, where Q̄ is the quantity produced by the
firm's opponents. This means the best response relation is a function and not
a correspondence. In this case, we can invoke the Brouwer fixed point theorem
with:
X = {pure strategy combinations of all players}.
If we included mixed strategies, X would be infinite-dimensional, but in this case
each player just chooses a quantity.
(2) Bertrand game with unequal marginal costs. Recall that under equal marginal
costs, we happen to have one unique NE. With N = 2 and c1 < c2, there is
ALMOST a NE:
P2 = c2, P1 = c2 − ε.
However, firm 1 could profitably deviate by setting P1′ = c2 − ε/2. If we discretize
the strategy space (say in cents), we get two NE:
P2 = c2, P1 = c2 − 0.01,
P2 = c2 + 0.01, P1 = c2.
See G-20.1 for a picture showing why we usually see odd numbers of NE. This is clearly
not ALWAYS the case but it is likely.
20.2
Consumers live uniformly along the interval [0, 1]. Two firms are located at x = 0 and x = 1.
Each produces the same good at the same cost, c. Consumers have a transportation
cost of t per unit travelled to reach a firm. See G-20.2. Each consumer buys 0 or 1
unit, with u(0) = 0 and u(1) = v > 0. Firm 1 charges p1 and firm 2 charges p2. A
consumer located at x will receive:
v − p1 − tx from buying at firm 1,
and,
v − p2 − t(1 − x) from buying at firm 2.
Equation of the marginal man (indifferent between going to firm 1 and 2):
v − p1 − tx̄ = v − p2 − t(1 − x̄).
p2 − p1 = tx̄ − t + tx̄.
x̄ = (p2 − p1 + t)/(2t) = 1/2 + (p2 − p1)/(2t).
Profits are:
π1 = (p1 − c)·x̄ = (p1 − c)[1/2 + (p2 − p1)/(2t)].
π2 = (p2 − c)·(1 − x̄) = (p2 − c)[1/2 − (p2 − p1)/(2t)].
FOC for firm 1:
[1/2 + (p2 − p1)/(2t)] + (p1 − c)·(−1/(2t)) = 0.
1 + (p2 − p1)/t = (p1 − c)/t.
t + p2 − p1 = p1 − c.
p1 = (t + p2 + c)/2.
By symmetry:
p2 = (t + p1 + c)/2.
Solving:
p1 = p2 = t + c.
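Iterating the two pricing best responses confirms the equilibrium; a sketch (parameter values are illustrative):

```python
# Sketch: iterating the Hotelling best responses p_i = (t + p_j + c)/2
# converges to the equilibrium p1 = p2 = t + c.
t, c = 1.0, 0.5
p1 = p2 = 0.0
for _ in range(100):
    p1, p2 = (t + p2 + c) / 2, (t + p1 + c) / 2

print(abs(p1 - (t + c)) < 1e-9 and abs(p2 - (t + c)) < 1e-9)  # True
```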
This is very similar to the median voter theorem. The distance the firms are from each
other is a type of horizontal product differentiation.
Hotelling's 1929 Error. Technically, the problem with Hotelling's original model was
the assumption of linear transport costs, which could result in discontinuous jumps in
the demand schedule as shown in the notes. Invoking quadratic cost curves (umbrellas)
eliminates this possibility. However, the fundamental error is not just in the
discontinuities. Having the linear costs meant that in equilibrium, two firms would
choose to locate right next to each other on the line and split the market evenly. He
referred to this as the Principle of Minimal Differentiation. Invoking quadratic costs
yields the exact opposite conclusion: two firms choose to locate at exactly the opposite
ends of the line, and thus the Principle of Maximal Differentiation. There are two forces
at work here. The first is the position of the marginal man: two firms located along the
line at distinct points would both have an incentive to move towards their neighbor, thus
shifting over the marginal man and gaining sales volume. The second force is that moving
closer to your neighbor also makes the products more substitutable (less differentiated)
and thus drives down prices and profits. In an extreme case, with Bertrand price
competition, it is easy to see that two firms would want to be as far away from each
other as possible, because if they had to compete, prices would be driven down to
marginal cost and both firms would make zero profit. Hotelling ignored the price
competition idea in his original analysis and thus didn't see how a simple adjustment to
the cost schedule could yield precisely the opposite results. The change to the cost
schedule yields this result because moving away from your neighbor now raises profits
through the product differentiation effect. Before (with linear costs), this effect was
too small, hence Hotelling's original conclusion. With quadratic costs, it is now
beneficial for the firms to move away from each other, losing the sales volume from the
shifting marginal man but gaining positive profits from the increased product
differentiation.
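The pricing equilibrium above can be checked numerically by iterating the two best-response functions; this is a minimal sketch, and the values of t and c are illustrative assumptions, not from the notes.

```python
# Best-response iteration for the Hotelling pricing game with firms at the
# endpoints of the line: p_i = (t + p_j + c) / 2, fixed point p* = t + c.
def hotelling_prices(t, c, iterations=100):
    p1 = p2 = c  # arbitrary starting point
    for _ in range(iterations):
        p1 = (t + p2 + c) / 2  # firm 1's best response to p2
        p2 = (t + p1 + c) / 2  # firm 2's best response to p1
    return p1, p2

p1, p2 = hotelling_prices(t=2.0, c=1.0)
assert abs(p1 - 3.0) < 1e-9 and abs(p2 - 3.0) < 1e-9  # p* = t + c
```

The iteration is a contraction (each pass quarters the distance to the fixed point), so it converges to the unique equilibrium p1 = p2 = t + c derived above.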
20.3
See G-20.3 for an extensive form game of Battle of the Sexes. Note we can solve it by
backwards induction and find that with sequential moves, there is only one NE.
Definition: Subgame Perfect Equilibrium. In an n-player dynamic game of complete
information, an n-tuple of strategies is said to form a Subgame Perfect Equilibrium
(SPE) if the strategies constitute Nash Equilibria in every subgame.
Definition: Information Set. A set of decision nodes among which the player who moves
cannot distinguish: she does not know which node she is at. See G-20.4. Draw a circle
around the nodes to signify they are in the same information set.
Finally, it is important to note that the NE of dynamic games need not correspond
in any way to the NE of static games. See G-20.5, sequential matching pennies. In
the static game, there was one NE in mixed strategies where both players randomized
and the expected payoff to both players was 0. In the sequential game, the expected
payoff to the player who moves second is +1 while the payoff to the first mover
is −1. The second player would NEVER randomize. Thus the first player can play
any mixed strategy, any (p, 1 − p), and he will obtain the same payoff. Thus there are
infinitely many NE in the dynamic game.
21
Consider the game in G-21.1 (Selten's Chain Store Paradox, 1978). Note there are
TWO NE of this game. The obvious one is: (Acquiesce if Enter, Entrant Enters).
Clearly, under this strategy, no player has a profitable deviation. The other NE is:
(Fight if Enter, Stay Out). In this case, it is still clear that no player has a profitable
unilateral deviation, but it relies on the fact that the incumbent will actually play Fight
if in fact he is faced with an entrant.
The second NE is NOT subgame perfect. It is not credible because, if actually faced
with an entrant, the incumbent would not fight; he would rather acquiesce.
Note that the set of SPE is a subset of NE (G-21.2).
Now suppose there is a sequence of N entrants that the incumbent faces, one after
another. It might make intuitive sense to fight off the first few, develop a reputation,
and then not have to face future entrants. However, since there is a final round,
consider the game against the Nth entrant. It is exactly as in G-21.1, so the only SPE
is (acquiesce, enter). Against the (N − 1)th entrant, again, the incumbent has nothing
to gain from fighting, so he will again acquiesce. Repeating this argument backwards,
we see the only SPE is for all entrants to enter and the incumbent to acquiesce every
time. Fighting is never a credible strategy.
In a finite sequential game, the invocation of SPE has very strong implications for the
set of equilibria.
In infinite games, or in games with an uncertain ending, this may be avoided, though
there is still the problem that even "infinite" games have an ending when the universe
comes to an end.
22
22.1
Consider the 2-firm game shown in G-22.1 where firm 1 chooses its quantity first and
then firm 2, having observed firm 1's quantity, chooses its quantity. The payoffs are
thus:

(q1 [P(q1 + q2) − c], q2 [P(q1 + q2) − c]).

Firm 1 is the Stackelberg leader and firm 2 is the follower. Suppose demand is P(Q) =
a − Q. Firm 2 solves:

Max_{q2} π2(q2 | q1) ⇒ q2(q1) = R2(q1) = (a − q1 − c)/2.

And this is the Cournot reaction function we had before. Now, firm 1 solves:

Max_{q1} π1(q1) = q1 (a − q1 − q2(q1) − c) ⇒ q1 = (a − c)/2.

Plugging q1 into q2(q1) yields q2 = (a − c)/4. Note this means the equilibrium quantity
and price are lower than in the Cournot game for the Stackelberg follower. The dynamics
really cost the second player.
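The leader/follower quantities above can be checked against the Cournot benchmark; a minimal sketch, where the demand intercept a and cost c are illustrative assumptions.

```python
# Stackelberg outcome with inverse demand P(Q) = a - Q and marginal cost c.
def stackelberg(a, c):
    q1 = (a - c) / 2           # leader's quantity
    q2 = (a - q1 - c) / 2      # follower's reaction: equals (a - c)/4
    price = a - (q1 + q2)
    return q1, q2, price

def cournot(a, c):
    q = (a - c) / 3            # symmetric Cournot quantity
    return q, a - 2 * q

a, c = 10.0, 1.0
q1, q2, p_s = stackelberg(a, c)
qc, p_c = cournot(a, c)
assert q2 < qc and p_s < p_c              # follower's output and the price fall
assert (p_s - c) * q2 < (p_c - c) * qc    # the follower's profit falls too
```

This confirms the claim in the text: moving second costs the follower both quantity and profit relative to simultaneous Cournot play.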
22.2
Bargaining
Consider a seller S and buyer B bargaining over a price P ∈ [0, 1], with common
discount factor δ. S offers in odd periods and B offers in even periods, and the game
ends at period T if no earlier offer is accepted.

With T = 3: in period 3, S offers P3 = 1 and B accepts. In period 2, B must offer S at
least the discounted value of P3, so P2 = δP3 = δ. At period 1, S offers B just the
discounted value of what B would get in period 2:

P1 = 1 − δ(1 − P2) = 1 − δ + δ^2.

Repeating this logic: with 5 periods, P1 = Σ_{i=0}^{4} (−δ)^i; with 6 periods, P1 =
Σ_{i=0}^{5} (−δ)^i. As the horizon grows,

P1 → Σ_{i=0}^{∞} (−δ)^i = 1/(1 + δ).

So in the infinite-horizon game the candidate equilibrium is: in odd periods, S offers
P = 1/(1 + δ) and B accepts any P ≤ 1/(1 + δ); in even periods, B offers P = δ/(1 + δ)
and S accepts any P ≥ δ/(1 + δ).
Though this is not a rigorous way to determine the equilibrium, and the proof that it
is unique is tedious (see Ausubel Notes), we can show that it is a SPE. First note
that the games starting in periods 1 and 3 are identical, as are the games starting in
periods 2 and 4.
So consider a game starting in an odd period where S offers P = 1/(1 + δ):

Does B have a profitable deviation? If B accepts, payoff = 1 − 1/(1 + δ) = δ/(1 + δ). If
B rejects, B offers P = δ/(1 + δ) next period, for a payoff of δ(1 − δ/(1 + δ)) =
δ/(1 + δ). So B cannot do better than accepting.

Does S have a profitable deviation? If S offers P = 1/(1 + δ), payoff = 1/(1 + δ). If
S offers P > 1/(1 + δ), B will reject and offer a Nash bid, yielding a payoff to S of

δ · δ/(1 + δ) = δ^2/(1 + δ) < 1/(1 + δ).

So S would be strictly worse off.
Thus B and S do not have profitable deviations in odd periods.
So consider a game starting in an even period where B offers P = δ/(1 + δ):

Does S have a profitable deviation? If S accepts: payoff = δ/(1 + δ). If S rejects:
S offers P = 1/(1 + δ) next period, for a payoff of δ · 1/(1 + δ) = δ/(1 + δ). So S
cannot do better than accepting.

Does B have a profitable deviation? If B offers P = δ/(1 + δ), payoff = 1 − δ/(1 + δ)
= 1/(1 + δ). If B offers P < δ/(1 + δ), S will reject and offer a Nash bid, yielding a
payoff to B of

δ(1 − 1/(1 + δ)) = δ^2/(1 + δ) < 1/(1 + δ).

So B would be strictly worse off.
Thus B and S do not have profitable deviations in even periods.
Thus we have shown that the strategies above constitute a NE in every subgame and
therefore constitute a SPE.
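The backward-induction recursion behind the 1/(1 + δ) share can be sketched in code; the δ values are illustrative assumptions.

```python
# Backward induction on the T-period alternating-offer game. With k periods
# left, the proposer keeps 1 minus delta times the responder's share as next
# period's proposer: S_1 = 1, S_k = 1 - delta * S_{k-1} -> 1/(1 + delta).
def first_proposer_share(delta, periods):
    share = 1.0  # the last proposer takes everything
    for _ in range(periods - 1):
        share = 1.0 - delta * share
    return share

delta = 0.9
# three periods reproduces P1 = 1 - delta + delta^2 from the text
assert abs(first_proposer_share(delta, 3) - (1 - delta + delta**2)) < 1e-12
# a long horizon converges to the infinite-horizon share 1/(1 + delta)
assert abs(first_proposer_share(delta, 500) - 1 / (1 + delta)) < 1e-9
```

The alternating sum 1 − δ + δ^2 − ... emerges directly from the recursion, so the finite and infinite-horizon derivations agree.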
23
23.1
First note that an infinite horizon game is not repeated at all; it simply has a possibly
very long horizon, but it could also end tomorrow. It may never end; you might
think of it as an auction for ONE item which never reaches an agreement.
An infinitely repeated game, by contrast, involves a different item each time, and the
same game is repeated.
So far we have considered non-cooperative games in which players act only for
themselves and the game is well laid out. We now turn to cooperative games, where the
structure is not as formal and we don't know exactly what form of cooperation will
occur, but we can make some reasonable assumptions about what the solution will
look like.
Nash Bargaining Solution (NBS)
Consider the payoff graph in G-23.1. Here we represent player 1's and player 2's payoffs
and draw in a Feasible Set of payoff combinations. The solution to the problem
will be within this area. We also designate a Disagreement Point, which results if no
agreement is decided upon. From that we can limit the feasible set to those payoffs
which are greater than or equal to the disagreement point, as shown in the graph.
See G-23.2 for a graph of the alternating offer game we discussed previously. Here, the
disagreement point is at d = (0, 0) but there is a certain symmetry in the game.
Axioms of a Reasonable Solution:
(1) The solution should not depend on linear transformations of players' utility
functions.
(2) The solution should be individually rational and pareto-optimal.
(3) There should be Independence of Irrelevant Alternatives.
(4) The solution should be symmetric if the game itself is symmetric.
So note that axioms (2) and (4) give us the solution E1 in graph G-23.2. In G-23.1, a
point such as E2 in the solution set is a possible Nash Bargaining Solution.
A note on axiom (3). Suppose we set up the problem and solve for the NBS. If we then
remove part of the feasible set (a part that includes neither the solution nor the
disagreement point), we should get the same NBS in the new problem.
Theorem: Suppose a feasible set is convex, closed and bounded above, then there
exists a unique solution satisfying the four axioms and it is given by:
Max_{x ≥ d} (x1 − d1)(x2 − d2).
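On a simple linear frontier x1 + x2 = 1 this maximization can be checked by grid search; the frontier and the disagreement points used below are illustrative assumptions.

```python
# Maximize the Nash product (x1 - d1)(x2 - d2) over the frontier x1 + x2 = 1.
# Closed form on this frontier: x1* = (1 + d1 - d2) / 2.
def nbs_linear(d1, d2, grid=100001):
    best_prod, best_x1 = -1.0, None
    for i in range(grid):
        x1 = i / (grid - 1)
        x2 = 1.0 - x1
        if x1 >= d1 and x2 >= d2:          # individual rationality
            prod = (x1 - d1) * (x2 - d2)   # the Nash product
            if prod > best_prod:
                best_prod, best_x1 = prod, x1
    return best_x1

assert abs(nbs_linear(0.0, 0.0) - 0.5) < 1e-3  # symmetric game: equal split
assert abs(nbs_linear(0.2, 0.0) - 0.6) < 1e-3  # better threat point, bigger share
```

The second check illustrates how the disagreement point shifts the solution: a better outside option for player 1 raises her NBS share.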
23.2
Repeated Games
Definition: Let G be a static game. Then the T-period repeated game, denoted
G(T, δ), consists of the game G repeated T times. At each period t, the moves of all
players in all previous periods are known to all players. The payoff to player i is then:

u_i = (1 − δ) Σ_{t=1}^{T} δ^{t−1} u_{it}.

Where (1 − δ) is just a normalization term which allows us to compare the payoff in
the infinite game to the finite or static game.
Trigger Strategy Equilibria
First define a main equilibrium path to be an action suggested for every player i,
(i = 1, . . . , n), and for every period:

~s = (s_{11}, . . . , s_{n1}), (s_{12}, . . . , s_{n2}), (s_{13}, . . . , s_{n3}), . . .

where the t-th tuple lists the actions for period t. Second, define (s*_1, . . . , s*_n)
to be the NE of the static game G. Then a Trigger Strategy for player i in the repeated
game G(T, δ) is given by:

σ_{it} = s_{it}, if every player has played according to ~s in all previous periods (or t = 1);
σ_{it} = s*_i, if a deviation has occurred in any previous period by any player.

For δ high enough, we will show that these trigger strategies constitute a SPE.
24
24.1
For a trigger strategy equilibrium to be subgame perfect, we must show that both on
the main equilibrium path and on the punishment path, there is no incentive to deviate
at any point in time.
Clearly, if the punishment strategy is the static NE, then we're fine on that part. To
show the other condition, we need to do some work. Note the trigger occurs even
if YOU are the one to deviate.
Consider the example of the infinitely repeated Cournot game with 2 players. Consider
the trigger strategies for each player:

q_{it} = (a − c)/4, if q_{1s} = q_{2s} = (a − c)/4 ∀ s = 1, . . . , t − 1 (or t = 1);
q_{it} = (a − c)/3, else.

So we play the joint monopoly solution, splitting q^m = (a − c)/2 along the main
equilibrium path, and then play the static Nash of q = (a − c)/3 as a punishment.

Along the equilibrium path, each firm earns (a − c)/4 · (a − c)/2 = (1/8)(a − c)^2 per
period, so:

π^E = Σ_{t=0}^{∞} δ^t (1/8)(a − c)^2.

The best one-shot deviation against an opponent playing (a − c)/4 solves

Max_{qi} qi (a − qi − (a − c)/4 − c) ⇒ qi = 3(a − c)/8,

yielding a deviation profit of (9/64)(a − c)^2 in that period, followed by the Cournot
punishment profit of (1/9)(a − c)^2 in every period thereafter:

π^D = (9/64)(a − c)^2 + Σ_{t=1}^{∞} δ^t (1/9)(a − c)^2.
When will the equilibrium path be attained? (Note that the game looks the same in
every period with the same strategies, so considering a deviation at period 2 is exactly
the same as considering a deviation at period 1.) We need π^E ≥ π^D. Or:
(1/(1 − δ)) (1/8)(a − c)^2 ≥ (9/64)(a − c)^2 + (δ/(1 − δ)) (1/9)(a − c)^2.

(1/8) (1/(1 − δ)) ≥ 9/64 + (1/9) (δ/(1 − δ)).

δ ≥ 9/17.
So the players must not discount the future too much. Note δ = 0 would imply the
future was worthless, so you would always deviate immediately. δ = 1 means you
would definitely stay along the main path, but in general, δ ∈ (0, 1).
Players must be sufficiently patient.
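The 9/17 cutoff can be verified by comparing the two discounted profit streams directly; normalizing (a − c)^2 = 1 only scales both sides, so it does not affect the comparison.

```python
# Collusion vs. one-shot deviation in the infinitely repeated Cournot game,
# with per-period profits normalized by (a - c)^2: collusive 1/8, best
# deviation 9/64, Cournot punishment 1/9.
def collusion_sustainable(delta):
    collude = (1 / 8) / (1 - delta)
    deviate = 9 / 64 + (delta / (1 - delta)) * (1 / 9)
    return collude >= deviate

assert not collusion_sustainable(0.52)   # just below 9/17 ~ 0.529: deviation pays
assert collusion_sustainable(0.53)       # just above the cutoff: collusion holds
assert collusion_sustainable(0.9)
```

The boolean flips between 0.52 and 0.53, bracketing the analytical cutoff δ = 9/17 ≈ 0.529.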
Definition: An n-tuple of payoffs (x1, . . . , xn) to the n players is called feasible if it
arises from the play of pure strategies or if it is a convex combination of payoffs of pure
strategies. In a 2-player 2x2 game (like the prisoner's dilemma), plot the payoff
combinations in payoff space, connect them, and fill in. This is the feasible set.
Theorem: Folk Theorem. Let (e1, . . . , en) be the payoffs from a NE of the game
G and let (x1, . . . , xn) be any feasible payoffs from G. If xi > ei ∀ i, then ∃ a SPE
of G(∞, δ) that attains (x1, . . . , xn) as the AVERAGE payoffs, provided that δ is close
enough to 1.
See G-24.1 and G-24.2 for Cournot and the prisoner's dilemma. We draw the feasible set
of payoffs, include the (e1, e2) point, and construct the Folk Theorem Region as shown.
Notice we do not say for sure what the payoff is going to be under Folk; we simply
know that we can construct a set of trigger strategies, with a large enough δ, to
sustain any payoff outcome in the Folk Theorem Region.
25
25.1
Consider the payoff space for the PD game in G-25.1. Note the folk region contains
the point (−1, −1). We can obtain that (average) payoff in G(∞, δ) if:

Σ_{t=0}^{∞} δ^t (−1) ≥ 0 + Σ_{t=1}^{∞} δ^t (−4).

−1/(1 − δ) ≥ −4δ/(1 − δ).

δ ≥ 1/4.
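The same comparison of discounted streams, as a sketch in code (the payoffs −1, 0, and −4 are those from the PD above):

```python
# Sustaining (-1, -1) in the repeated PD: cooperate forever at -1 per period
# vs. deviate once for 0 and then suffer -4 per period. Cutoff: delta = 1/4.
def pd_sustainable(delta):
    cooperate = -1 / (1 - delta)
    deviate = 0 + (delta / (1 - delta)) * (-4)
    return cooperate >= deviate

assert not pd_sustainable(0.2)
assert pd_sustainable(0.25)   # exactly the cutoff delta = 1/4
assert pd_sustainable(0.8)
```

At δ = 1/4 the two streams are exactly equal, which is why the inequality is weak at the cutoff.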
(C, RS), (RS, RS), (C, RS), (RS, RS), (C, RS), . . .

We alternate between the strategies that give us payoffs on either end of the
convex combination.
Step 3: Write out strategies:

σ_{1t} = RS if t is odd and no prior deviations; C if t is even or there was a prior deviation.
σ_{2t} = RS if no prior deviations; C otherwise.
Step 4: Find the sustainable δ. We would have to check 4 conditions: Player
(I, II) × Period (Odd, Even).
Note we could also use a mixed strategy but the randomization mechanism must be
public. If it is private, the player may have an incentive to cheat and be completely
undetected.
25.2
Can we develop more sophisticated strategies to yield a lower required discount factor?
Yes, consider the repeated Cournot game and the flow diagram in G-25.2.
We start with collusion, and each firm produces one half the monopoly quantity,
(a − c)/4. If there is a deviation, we don't revert to Cournot-Nash, (a − c)/3, but
instead to q_punish = (a − c)/2. This is WAY too much output and results in negative
profits for both firms. We continue like this until the original deviator also sets
q = (a − c)/2 for one period (which is like a signal to the other firm that it wants to
collude again), and in the next period, we have collusion.
Hence the punishment is much HARSHER than in the trigger strategy; however, the
length of the punishment is very SHORT compared to the trigger.
Hence the critical discount factor in this setup is lower than in the trigger setup. Why?
Because in the trigger we revert to Cournot-Nash forever, so some of the punishment
is discounted into the future. With this new setup, we punish IMMEDIATELY, so
even firms with a relatively low discount factor (who do not value the future as much)
would be willing to collude in every period.
26
26.1
We will focus on the study of auctions for this part of the material. First some definitions.
Definition: First Price Auction. Every player i simultaneously submits a bid of bi .
Player i wins the item if he has the highest bid and then pays a price of bi .
Definition: Second Price Auction. Every player i simultaneously submits a bid of bi .
Player i wins the item if he has the highest bid and then pays a price of bj where bj is
the second highest bid.
Definition: English Auction. Ascending dynamic bids, where the winner pays the amount
of the highest bid. (Christie's and eBay.)
Definition: Dutch Auction. Descending price auction. Start high and lower the bid.
The first bidder to claim the item wins and pays that price.
Example: Auction with Discrete Bids. Suppose there is one item up for auction in a
sealed-bid, first-price auction. There are only two allowable bids: 0 and 1/3. There
are two risk-neutral bidders, each with private valuation

t_i ∼ U[0, 1].

Of course each bidder knows his own valuation but not the other's. He only knows the
distribution of the other's valuations. The highest bidder wins and pays his bid. In
the case of a tie, a coin is flipped to determine the winner.
A strategy in this case is a mapping from your type to an action; here, we map from
valuations to bids. Players bid against distributions of bids, since opponents' true
valuations are unknown.
Solution. Every Bayesian Nash Equilibrium has the following form:

S1(t1) = 0 if t1 ∈ [0, t̄1]; 1/3 if t1 ∈ (t̄1, 1].
S2(t2) = 0 if t2 ∈ [0, t̄2]; 1/3 if t2 ∈ (t̄2, 1].
So a strategy for player i is contingent on what his valuation is realized to be. See
G-26.1. For each player, we define a break-point, t̄i, such that if his realized valuation
is above this break-point, he bids 1/3; else he bids 0. Note that t̄1 need not equal t̄2,
but of course in this symmetric game, they will be equal. So far, all we can say is that
t̄i > 1/3. Why? Because if it were below, and his realized valuation was between the
break-point and 1/3, he might be bidding 1/3 for an item which he valued less than
1/3.
Player 1's expected utility from bidding 1/3 is

E[u1(1/3, S2(t2); t1)] = Pr(P2 bids 0)(t1 − 1/3) + Pr(P2 bids 1/3)(0.5)(t1 − 1/3)
= (0.5(1 − t̄2) + t̄2)(t1 − 1/3).

And from bidding 0 (he wins only on the coin flip when P2 also bids 0):

E[u1(0, S2(t2); t1)] = Pr(P2 bids 0)(0.5)(t1 − 0) = 0.5 t̄2 t1.

Observe that for t1 > t̄1, player 1's expected utility from playing 1/3 is higher than
that from bidding 0. The opposite is also true. By continuity, it must be the case that:

E[u1(1/3, S2(t2); t̄1)] = E[u1(0, S2(t2); t̄1)].

Plugging in t1 = t̄1 and setting equal, we have:

(0.5(1 − t̄2) + t̄2)(t̄1 − 1/3) = 0.5 t̄2 t̄1.

t̄1 = (1/3) t̄2 + 1/3.

This argument could be repeated for player 2, but since the game is symmetric:

t̄2 = (1/3) t̄1 + 1/3.

Two equations, two unknowns yields:

t̄1 = t̄2 = 1/2.
This defines the bidding strategies. Note that both players will also know this
information going into the game. The only source of incomplete information is the
other player's actual realization of his valuation.
It might seem more intuitive that t̄i = 1/3. However, it is not just each player's
own valuation that matters, but also the incomplete information regarding his opponent's
valuation. Note a player does not bid 1/3 when his valuation is between 1/3 and
1/2. Why? Because his gains from winning the auction are too small compared with
bidding 0 and possibly still having a tie and winning the item on a coin flip. The
same argument would apply if the discrete bid choices were (0, 2/3). In this case, the
players would always bid ZERO! Why? Because from our previous argument, their
break-point would clearly be at least 2/3. Suppose it is exactly 2/3. Then if a player
realized a valuation of 3/4, he would win (3/4 − 2/3) if he won the auction. This is
too small compared with the expected value of bidding 0. In fact, any discrete bidding
requirement larger than (0, 1/2) would result in no bidding. Why is this Nash? If this
information is known to both players, then clearly one could unilaterally deviate and
win the auction with certainty. But would the bidder really be better off? Suppose
discrete bids were (0, 2/3) and the player realized a valuation of 1. He would win (with
certainty) 1/3 from bidding 2/3 (since the other player is bidding 0). However, by
bidding 0, he gets (via the coin flip):

0.5 (1 − 0) = 0.5.

This is clearly higher than bidding 2/3, winning, and getting a payoff of 1/3. So all
this hinges importantly on the fact that these players are risk-neutral. If the players
were a bit more risk-averse, one might prefer the sure 1/3 payoff to the 50/50 chance
of getting 1 or 0.
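The break-point algebra above can be double-checked with the exact expected utilities, fixing the opponent's break-point at its equilibrium value of 1/2:

```python
# Expected utilities in the two-bid (0 or 1/3) first-price auction when the
# opponent bids 0 for t2 <= tbar and 1/3 otherwise.
def eu_bid_third(t1, tbar):
    # win outright vs. a 0 bid (prob tbar), tie vs. a 1/3 bid (prob 1 - tbar)
    return (tbar + 0.5 * (1 - tbar)) * (t1 - 1 / 3)

def eu_bid_zero(t1, tbar):
    # win only on the coin flip when the opponent also bids 0
    return 0.5 * tbar * t1

tbar = 0.5
assert abs(eu_bid_third(0.5, tbar) - eu_bid_zero(0.5, tbar)) < 1e-12  # indifferent at 1/2
assert eu_bid_third(0.6, tbar) > eu_bid_zero(0.6, tbar)   # above the break-point: bid 1/3
assert eu_bid_third(0.4, tbar) < eu_bid_zero(0.4, tbar)   # below it: bid 0
```

This reproduces the indifference condition at t̄ = 1/2 and the cutoff behavior on either side of it.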
27
27.1
Definition: Let T1, T2 be the sets of possible types for players 1 and 2. Define
(S1, S2) to be a Bayesian Nash Equilibrium (BNE) if ∀ t1 ∈ T1,

S1(t1) = arg max_{a1 ∈ A1} Σ_{t2 ∈ T2} u1(a1, S2(t2); t1) p1(t2 | t1),

where the sum is E[u1], and symmetrically for S2(t2), ∀ t2 ∈ T2.
Where Si is a function from types to actions for player i, or usually valuations to bids
in an auction setting. Note we have assumed only two players and finitely many types,
but both of these assumptions can be relaxed.
Note that we have the conditional probability in the definition of a BNE because
types are usually correlated. (I.e., if I think the piece of art up for auction is a piece of
crap, odds are, so do you.)
Solution to the Sealed-Bid First-Price Auction
Consider a two-player game where vi ∼ U[0, 1] and bi ∈ [0, 1]. Bidders simultaneously
and independently choose bi once they have realized their own vi. The other's valuation
is unknown. The highest bidder wins the item and pays her bid.
Assume the bidding function is increasing in vi and is the same for each player.
Bidder i wins if:

bi > B(vj) ⇒ B^{−1}(bi) > vj ⇒ Pr(B^{−1}(bi) > vj) = B^{−1}(bi),

since we assumed the valuations were uniform on [0, 1].
Define the expected payoff to bidder i when her valuation is vi as:

π_i(vi, bi) = (vi − bi) B^{−1}(bi).

And also define the expected payoff to bidder i from bidding optimally:

π_i(vi) = Max_{bi} π_i(vi, bi).

Note the arg max of this last expression is B(vi), the optimal bid. Thus, by the
envelope theorem,

dπ_i(vi)/dvi = ∂π_i(vi, bi)/∂vi |_{bi = B(vi)} = B^{−1}(bi) |_{bi = B(vi)} = vi.

Now write the difference between π_i at two valuations as the integral of its derivative:

π_i(vi) − π_i(0) = ∫_0^{vi} (dπ_i/dv) dv = ∫_0^{vi} v dv = (1/2) vi^2.   (2)

Since π_i(0) = 0 and π_i(vi) = (vi − B(vi)) vi, this gives

(vi − B(vi)) vi = (1/2) vi^2.

B(vi) = (1/2) vi.

And this is our BNE: a bidding strategy from the player's valuations to her bids. See
G-27.1 and G-27.2. Note that she always shades her bid to one half her actual valuation
in a first-price auction with two players. With N players, the same argument yields
B(vi) = ((N − 1)/N) vi.
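A grid search confirms that b = v/2 is a best response when the opponent bids half her valuation; this is a sketch, and the grid size is an arbitrary assumption.

```python
# Two-bidder first-price auction: if the opponent bids v_j / 2 with
# v_j ~ U[0, 1], then a bid b wins with probability min(2b, 1), so the
# expected payoff from bidding b with valuation v is (v - b) * min(2b, 1).
def best_response(v, grid=10001):
    best_b, best_pi = 0.0, -1.0
    for i in range(grid):
        b = i / (grid - 1)
        pi = (v - b) * min(2 * b, 1.0)
        if pi > best_pi:
            best_b, best_pi = b, pi
    return best_b

for v in (0.2, 0.5, 0.9):
    assert abs(best_response(v) - v / 2) < 1e-3  # optimal shading to v / 2
```

Because the best response to half-valuation bidding is itself half-valuation bidding, B(v) = v/2 is a fixed point, i.e., the BNE derived above.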
28
28.1
The winner pays the second-highest bid. We can determine the NE strategy by considering
the following table of payoffs to bidder i, with valuation vi, when the highest opposing
bid is b. Bidder i can Shade (bid b'_i < vi), bid Sincerely (bi = vi), or Inflate (bid
b''_i > vi):

Case                    Shades (b'_i < vi)   Sincere (bi = vi)   Inflates (b''_i > vi)
1. b ≤ b'_i             vi − b               vi − b              vi − b
2. b'_i < b < vi        0                    vi − b > 0          vi − b > 0
3. vi < b ≤ b''_i       0                    0                   vi − b < 0
4. b > b''_i            0                    0                   0

In every case, bidding sincerely does at least as well as shading or inflating, so
bidding one's true valuation weakly dominates.
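The table can be verified by sweeping over opposing bids; the specific valuation and the shaded/inflated bids below are illustrative assumptions.

```python
# Second-price auction payoff against a highest opposing bid b; ties are
# broken by a fair coin flip, so we use the expected payoff at a tie.
def payoff(own_bid, other_bid, v):
    if own_bid > other_bid:
        return v - other_bid          # win and pay the second-highest bid
    if own_bid < other_bid:
        return 0.0                    # lose
    return 0.5 * (v - other_bid)      # tie: win with probability 1/2

v, shade, inflate = 0.6, 0.4, 0.8
for k in range(11):
    b = k / 10                        # opposing bids 0.0, 0.1, ..., 1.0
    sincere = payoff(v, b, v)
    assert sincere >= payoff(shade, b, v)     # sincere weakly beats shading
    assert sincere >= payoff(inflate, b, v)   # sincere weakly beats inflating
```

The sweep covers all four rows of the table, including the cases where shading loses a profitable win and where inflating wins at a loss.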
28.2
Dynamic Auctions
English versus Second-Price Sealed Bid. We have shown that in a second-price sealed-bid
auction, bidders bid their true valuations. In the end, the bidder with the highest
valuation wins but ends up paying only the valuation of the second-highest bidder. In
an ascending-clock English auction, the clock stops when there is only one bidder still
in the auction. Again, this will occur as soon as the going bid rises above the second-highest
bidder's valuation. Hence the payoff is the same. With private independent valuations,
the revenue generated from an English and a Second-Price Sealed-Bid auction is the same
(Revenue Equivalence).
Dutch versus First-Price Sealed Bid. Recall that in a Dutch auction, the price starts
high and falls, and the first bidder to ring in wins the item and pays the price he
bid. In this setting, it is impossible for bidders to gain any information about their
opponents' valuations. When someone bids, you get a bit of information, but it's too
late: the auction is over. Strategically, this is equivalent to the first-price auction,
and it can be shown that the Nash strategy is to bid:

b_i = ((n − 1)/n) v_i.

The English and Second-Price Sealed-Bid auctions, though they yield the same revenue,
are not really identical, since in an English auction you are gaining information about
the others' valuations as the auction goes on. In theory, you could continually revise
your valuation based on the number of bidders still active in the auction. The presence
of independent private valuations is really what is driving this revenue equivalence.
The auctions are really very different otherwise.
28.3
Usually, as in a treasury auction, bidders submit entire demand schedules for the items
instead of a round-by-round bidding process, to save time.
Consider the Ausubel Auction for 5 identical items, where the 4 bidders' marginal values
for having one, two, and three of the items are as follows:

      Player I   Player II   Player III   Player IV
V1    123        125         75           85
V2    113        125         5            65
V3    103        49          3            7
We start the price low and raise it and the 4 players continually revise the number of
items they would like. Stop when supply equals demand.
So at a price of 50, I wants 3, II wants 2, III wants 1, and IV wants 2.
At a price of 75, I wants 3, II wants 2, III wants 0, and IV wants 1.
At this point demand is 6 and supply is 5. Now we ask: should the price continue to
rise above 85, so only players I and II receive the items? Consider player I's payoff
from letting the price rise and attaining 3 items:

π_1 = (123 − 85) + (113 − 85) + (103 − 85) = 84.

And from dropping his demand from 3 to 2 when the price hits 75:

π_1 = (123 − 75) + (113 − 75) = 86.

So player I should drop out at a price of 75, thereby giving player IV an item but
maximizing his own payoff. Revenue is 75 · 4 = 300.
This is a problem, since the items should all have gone to players I and II, but instead
we have an inefficient allocation.
Consider Ausubel's alteration: we award a good to a player as soon as they have
clinched it, at whatever price the bidding is currently at.
At a price of 64, player I's opponents demand 5 units, so it is still possible that player
I won't get any units. At a price of 65, player IV drops to 1 unit of demand, so the
total demand of player I's opponents is 4 units. Hence player I is guaranteed one unit
at a price of 65: he has clinched a unit at P = 65. The same happens at 75 and 85, and
in this case it is in player I's interest to win all three units because of the non-uniform
pricing. Player I's payoff becomes:

π_1 = (123 − 65) + (113 − 75) + (103 − 85) = 114.

And total revenue is 65 + 75 + 75 + 85 = 300. Revenue is the same, but the allocation
is now more efficient. (Note player II clinched a unit at 75.)
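The two payoff comparisons can be replicated directly from the table of marginal values:

```python
# Player I's marginal values from the table above.
v1 = [123, 113, 103]

# Uniform-price auction: let the price rise to 85 and take 3 units, or reduce
# demand to 2 units when the price hits 75.
take_three = sum(v - 85 for v in v1)         # 38 + 28 + 18
take_two = sum(v - 75 for v in v1[:2])       # 48 + 38
assert take_three == 84 and take_two == 86
assert take_two > take_three                  # demand reduction pays

# Clinching (Ausubel) auction: player I's units are won at 65, 75, and 85.
clinch = sum(v - p for v, p in zip(v1, [65, 75, 85]))
assert clinch == 114                          # sincere bidding now pays more
assert 65 + 75 + 75 + 85 == 300 == 75 * 4     # total revenue is unchanged
```

The arithmetic shows why clinching restores sincere bidding: winning all three units now yields 114, well above the 86 available from demand reduction under uniform pricing.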
29
29.1
See G-29.1. We have two bidders with valuations Vb, Vs ∼ U[0, 1], where both the buyer
and seller have incomplete information about the other's valuation. It can be shown
(see problem set) that trade occurs if:

Vb ≥ Vs + 1/4.

This is inefficient, because we should have trade as long as the buyer's valuation is
higher than the seller's. The lack of information leads to this inefficiency.
29.2
The worker's and firm's payoffs are:

U_w = w − c(θ, e),
U_f = y(θ, e) − w.

Where f is the winning firm and y is the marginal product of a worker of type θ and
with education e.
Note that education could be completely worthless for increasing productivity, but the
investment is still a signal of the worker's type.
See G-29.2 for a non-cynical graph (education matters) and a cynical graph (education
is worthless) of productivity as a function of type and education.
The requirements for a Sequential Equilibrium or a Perfect Bayesian Equilibrium
are the following:
(1) Beliefs: R maintains a probability distribution over types, so R updates beliefs
after each move.
(2) Updating by Bayes' Rule.
(3) Sequential Rationality: Each player must be optimizing according to his beliefs
and the information he has. Thus R's choice of action must maximize his expected
utility, and S's choice of message must maximize his utility given his knowledge
of t as well as his anticipation of R's response to m.
There are 3 types of equilibria: Pooling, Separating and Hybrid.
Pooling Equilibria
Both types of workers choose a common education level, ep. The firm's beliefs after
observing ep are the same as its prior beliefs:

θ = H with probability q; L with probability 1 − q.

The wage is then w(ep) = q y(H, ep) + (1 − q) y(L, ep).
Separating Equilibria
The H type worker chooses e = es and the L type worker chooses e = eL. The firm's
beliefs after observing es are:

θ = H with probability 1; L with probability 0.

And after observing eL:

θ = H with probability 0; L with probability 1.

The wage schedule is then:

w(e) = y(H, e) if e ≥ es; y(L, e) if e < es.
We need some self-selection constraints (or incentive compatibility constraints) to make
sure that each type of worker chooses only his own level of education. Thus,

Low: w(eL) − c(L, eL) ≥ w(es) − c(L, es).
High: w(es) − c(H, es) ≥ w(eL) − c(H, eL).

We also know that under complete information, a low and a high type would maximize
such that:

e*_L = arg max_e {y(L, e) − c(L, e)},

and,

e*_H = arg max_e {y(H, e) − c(H, e)}.

We claim that eL = e*_L and es > e*_H. It is clear that the low type should definitely
not get more education than exactly what is necessary to be identified as a low type
(possibly 0), but he also shouldn't get any less, because e*_L is optimal. Under the
cynical graph, the unconstrained choice for a high type would be an education level of 0,
but this would mean that a low type could easily jump up and look like a high type. This
is also true if es = e*_H: the low type could incur a relatively small cost and look like
a high type. Thus es must be large enough to make the low type just indifferent between
getting education eL and education es.
Thus, there is a systematic bias where high-type workers are getting TOO much
education when it is not increasing productivity, but rather just creating a signal to
the firms.
Solution to the Hybrid Equilibrium
The firm can now update its probability distribution of the worker's type based on the
education signal, using Bayes' rule; after observing eh, the beliefs are:

θ = H with probability q'; L with probability 1 − q'.

Thus, the firm offers:

w(e) = q' y(H, eh) + (1 − q') y(L, eh) if e ≥ eh; y(L, eL) if e < eh.

Self-selection constraints:

Low: w(eL) − c(L, eL) = w(eh) − c(L, eh).
High: w(eh) − c(H, eh) ≥ w(eL) − c(H, eL).

Now the low type is indifferent between the different education signals.
Done.
Price Discrimination
3rd degree: non-uniform/linear pricing. Need the ability to identify markets, no resale,
and a mechanism for voluntary self-selection. Set prices as if you are operating in two
separate markets.
2nd degree: uniform/non-linear pricing. Two-part tariff. A menu is offered such that
utility satisfies individual rationality and incentive compatibility constraints. The
high-value consumer buys the efficient quantity (u'(x) = c), while the low-value consumer
pays a higher per-unit price for less quantity.
1st degree: non-uniform/non-linear pricing. Menu offered (two-part tariff). We can
identify which consumer is high-valued, so no need for the IC constraint; just individual
rationality.
29.4
Game Theory
Cournot: qi = (a − c)/3. Monopoly: Q^m = (a − c)/2.
Nash Bargaining Solution. Axioms of a reasonable solution: (1) the solution should not
depend on linear transformations of players' utility functions; (2) individually rational
and Pareto-optimal; (3) Independence of Irrelevant Alternatives; (4) symmetric if G
is symmetric.

NBS: Max_{x ≥ d} (x1 − d1)(x2 − d2).

Feasible payoff: a payoff that arises from play of pure strategies or convex combinations
of those strategies.
Folk Theorem. Let (e1, e2) be the NE payoffs and let (x1, x2) be any feasible payoffs
with xi > ei. Then ∃ a SPE of G(∞, δ) that attains (x1, x2) as the average payoff,
provided that δ is sufficiently close to 1.
Bayesian Nash Equilibrium:

S1(t1) = arg max_{a1 ∈ A1} Σ_{t2 ∈ T2} u1(a1, S2(t2); t1) p1(t2 | t1).

First-price auction:

π_i(vi, bi) = (vi − bi) B^{−1}(bi).

π_i(vi) − π_i(0) = ∫_0^{vi} (dπ_i/dv) dv = ∫_0^{vi} v dv = (1/2) vi^2.

B(vi) = (1/2) vi.

And this is our BNE. If there were N players, the BNE would be:

B(vi) = ((N − 1)/N) vi.
Pooling Equilibrium: both types choose ep, and the wage is
w(ep) = q y(H, ep) + (1 − q) y(L, ep).
Separating Equilibrium: The H type worker chooses e = es and the L type worker
chooses e = eL.

w(e) = y(H, e) if e ≥ es; y(L, e) if e < es.

Hybrid or Partially Pooling Equilibria: The H type worker chooses e = eh always, and
the L type worker chooses e = eh with probability λ and e = eL with probability 1 − λ.
The firm's beliefs after observing eh must be updated using Bayes' rule.

w(e) = q' y(H, eh) + (1 − q') y(L, eh) if e ≥ eh; y(L, eL) if e < eh.